Regression Analysis and Counterfactual Explanations
InterpretML is an interpretable machine learning library developed by Microsoft, with the aim of making machine learning models more understandable and open to human interpretation.
This has particular value when communicating findings to business stakeholders, who in many cases are non-technical and seek to understand the business implications of the findings yielded by a machine learning model.
The aim of this article is to illustrate how interpretable machine learning and counterfactual analysis can allow for a better understanding of underlying trends in a dataset, and the ways in which InterpretML can communicate such findings in an intuitive way.
The dataset used for this example is the NYC Taxi & Limousine Commission yellow taxi trip records dataset. This dataset was sourced using Azure Open Datasets, which in turn sourced it from the nyc.gov website; it is governed under the nyc.gov Terms of Use. The dataset is made available by NYC Open Data, which makes its data available under the CC0: Public Domain license, as cited under the organisation's Kaggle account.
Note that Python 3.8.0 was used for conducting the below analysis.
The aforementioned dataset contains data points on numerous aspects of yellow taxi trips across NYC, including total amount charged, trip distance, tip amount, and tolls amount.
The dataset provides numerous other variables such as pick-up and drop-off times and locations, passenger counts, and payment types.
However, for the purposes of identifying the main influences on the total amount charged (the outcome variable for the below analysis), trip distance, tip amount, and tolls amount are chosen as the independent (feature) variables.
For this analysis, one month of data (May 6, 2018 to June 6, 2018) was used for modelling purposes.
import numpy as np
from azureml.opendatasets import NycTlcYellow
from datetime import datetime
from dateutil import parser

end_date = parser.parse('2018-06-06')
start_date = parser.parse('2018-05-06')
nyc_tlc = NycTlcYellow(start_date=start_date, end_date=end_date)
nyc_tlc_df = nyc_tlc.to_pandas_dataframe()
nyc_tlc_df
Over 9 million rows of data were returned for analysis; here is a snippet of the data:
>>> nyc_tlc_df
vendorID tpepPickupDateTime tpepDropoffDateTime passengerCount tripDistance puLocationId ... extra mtaTax improvementSurcharge tipAmount tollsAmount totalAmount
0 2 2018-05-27 17:50:34 2018-05-27 17:56:41 3 0.82 161 ... 0.0 0.5 0.3 0.00 0.0 6.80
1 2 2018-05-23 08:20:41 2018-05-23 08:37:06 1 1.69 142 ... 0.0 0.5 0.3 3.08 0.0 15.38
3 2 2018-05-23 09:02:54 2018-05-23 09:17:59 2 6.64 140 ... 0.0 0.5 0.3 0.00 0.0 20.30
5 2 2018-05-23 13:28:48 2018-05-23 13:35:15 1 0.61 170 ... 0.0 0.5 0.3 1.00 0.0 7.80
7 2 2018-05-23 07:05:50 2018-05-23 07:07:40 2 0.48 48 ... 0.0 0.5 0.3 0.00 0.0 4.30
... ... ... ... ... ... ... ... ... ... ... ... ... ...
339945 2 2018-06-04 14:03:37 2018-06-04 14:17:11 1 1.95 262 ... 0.0 0.5 0.3 2.00 0.0 13.30
339946 2 2018-06-04 17:15:23 2018-06-04 17:16:38 1 0.55 262 ... 1.0 0.5 0.3 0.00 0.0 5.30
339947 2 2018-06-04 16:59:23 2018-06-04 18:24:02 6 16.95 88 ... 1.0 0.5 0.3 0.00 0.0 62.30
339948 2 2018-06-04 10:34:44 2018-06-04 10:40:46 1 1.16 229 ... 0.0 0.5 0.3 0.00 0.0 6.80
339949 1 2018-06-04 12:35:57 2018-06-04 12:58:32 1 2.80 231 ... 0.0 0.5 0.3 0.00 0.0 17.30

[9066744 rows x 21 columns]
It is worth noting that the dataset in its raw format contains some spurious data, which needs to be handled during the preprocessing stage. For instance, descriptive statistics for the variables of interest reveal that variables such as tipAmount contain negative values, when clearly paying a "negative tip" is not possible.
>>> nyc_tlc_df['totalAmount'].describe()
count    9.066744e+06
mean     1.676839e+01
std      1.502198e+01
min     -4.003000e+02
25%      8.750000e+00
50%      1.209000e+01
75%      1.830000e+01
max      8.019600e+03
Name: totalAmount, dtype: float64
>>> nyc_tlc_df['tipAmount'].describe()
count    9.066744e+06
mean     1.912497e+00
std      2.658866e+00
min     -1.010000e+02
25%      0.000000e+00
50%      1.410000e+00
75%      2.460000e+00
max      4.000000e+02
Name: tipAmount, dtype: float64
>>> nyc_tlc_df['tollsAmount'].describe()
count    9.066744e+06
mean     3.693462e-01
std      1.883414e+00
min     -1.800000e+01
25%      0.000000e+00
50%      0.000000e+00
75%      0.000000e+00
max      1.650000e+03
Name: tollsAmount, dtype: float64
>>> nyc_tlc_df['tripDistance'].describe()
count    9.066744e+06
mean     3.022766e+00
std      3.905009e+00
min      0.000000e+00
25%      1.000000e+00
50%      1.650000e+00
75%      3.100000e+00
max      9.108000e+02
Name: tripDistance, dtype: float64
To deal with this issue, negative values were replaced with 0 across the variables of interest:
# Replace negative values with 0 in each variable of interest
y = nyc_tlc_df['totalAmount']
y[y < 0] = 0

tripDistance = nyc_tlc_df['tripDistance']
tripDistance[tripDistance < 0] = 0

tipAmount = nyc_tlc_df['tipAmount']
tipAmount[tipAmount < 0] = 0

tollsAmount = nyc_tlc_df['tollsAmount']
tollsAmount[tollsAmount < 0] = 0
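Note that boolean-mask assignment on columns extracted from the original DataFrame can trigger pandas' SettingWithCopyWarning. A minimal alternative sketch, using Series.clip to cap each variable at a lower bound of 0:

# Alternative: clip(lower=0) returns a new Series with negatives set to 0,
# avoiding chained-assignment warnings on the original DataFrame
y = nyc_tlc_df['totalAmount'].clip(lower=0)
tripDistance = nyc_tlc_df['tripDistance'].clip(lower=0)
tipAmount = nyc_tlc_df['tipAmount'].clip(lower=0)
tollsAmount = nyc_tlc_df['tollsAmount'].clip(lower=0)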
Now, checking the minimum value for each of these variables yields a minimum of 0, which is what we want.
>>> np.min(tollsAmount)
0.0
>>> np.min(tipAmount)
0.0
>>> np.min(tripDistance)
0.0
>>> np.min(y)
0.0
Note that the raw dataset is extremely large: over 1.5 billion rows as of 2018, or more than 50 GB, with data collected since 2009. In this regard, the 9 million rows of data being analysed in this instance are still only the tip of the iceberg.
For instance, traffic patterns over the winter months may look quite different to those of May and June, and there is no guarantee that findings yielded during this particular month will necessarily translate to other time periods.
With that being said, the goal in this instance is to use InterpretML to get a better understanding of the data, which should be achievable with the segment of data analysed here.
To conduct the analysis, the relevant libraries are imported and a train-test split is performed on the dataset:
from interpret.glassbox import LinearRegression
from interpret import show
from sklearn.model_selection import train_test_split

seed = 1

# Stack the cleaned feature series into a single feature matrix
X = np.column_stack((tripDistance, tipAmount, tollsAmount))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=seed)
X_train
y_train
Now, a regression analysis is run on the training data:
lr = LinearRegression(random_state=seed)
lr.fit(X_train, y_train)

# Global explanation: overall feature importance across the model
lr_global = lr.explain_global()
show(lr_global)

# Local explanations: feature contributions for five individual test predictions
lr_local = lr.explain_local(X_test[:5], y_test[:5])
show(lr_local)
You will notice from the above code that both global and local explanations are generated by the model.
According to the Microsoft whitepaper "InterpretML: A toolkit for understanding machine learning models":
- Global explanations allow a user to get a better understanding of a model's holistic behaviour
- Local explanations allow for a better understanding of individual predictions
In the above model, global explanations illustrate the overall relationship between the features and the outcome variable, while local explanations illustrate the relationships across five separate observations within the test set.
Looking at the above graph for global explanations, we can see that feature_0001 (tripDistance) is ranked as the most important feature, followed by tipAmount and tollsAmount.
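If you prefer to read these rankings programmatically rather than from the interactive dashboard, the explanation object also exposes its underlying data. A minimal sketch, assuming the 'names' and 'scores' keys returned by interpret's data() method (the exact keys can vary between versions):

# Inspect the global explanation outside the dashboard;
# data() returns the feature names alongside their importance scores
global_data = lr_global.data()
for name, score in zip(global_data['names'], global_data['scores']):
    print(f'{name}: {score:.4f}')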
Let's take a look at the local explanations across the five observations in the test set. The y_test[:5] variable contains the following values:
>>> y_test[:5]
451746 9.95
161571 15.30
72007 20.16
115597 21.36
37697 22.77
Name: totalAmount, dtype: float64
A similar graph to the previous one is produced for each observation, but with the purpose of explaining the prediction of each individual value indicated above.
For instance, a value of 12.2 is predicted for the actual y_test value of 9.95, with the most importance attributed to the tripDistance feature and tipAmount as a secondary feature of importance.
However, when predicting a value of 13.3 for the actual y_test value of 15.3, we can see that only tripDistance is ranked as having importance; the other two features are not included.
Here is the graph for the predicted value of 22.8 against an actual value of 19.1.
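For reference, the predicted values referred to above can also be obtained directly, since InterpretML's glassbox models implement the standard scikit-learn predict interface (a minimal sketch):

# Compare the model's predictions against the actual values
# for the five test observations explained above
predictions = lr.predict(X_test[:5])
for predicted, actual in zip(predictions, y_test[:5]):
    print(f'predicted: {predicted:.1f}, actual: {actual:.2f}')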
The purpose of counterfactual explanations is to assess how a change in certain feature values can be expected to influence the outcome variable.
For the purposes of this example, the totalAmount variable is converted to a categorical one: any totalAmount value above $10 is considered a substantial fare and assigned a value of 1, while any totalAmount value below $10 is considered a low fare and given a value of 0. A minimal sketch of this thresholding step follows.
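Here, the thresholding is applied in place so that the DataFrame passed to dice_ml below already carries the binary outcome (an assumption about where this step sits in the full preprocessing):

# Binarise the outcome: 1 for fares above $10, 0 otherwise
nyc_tlc_df['totalAmount'] = (nyc_tlc_df['totalAmount'] > 10).astype(int)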
Here is the question we wish to answer:
What changes in trip distance and tip amount would result in a value of 1 changing to a 0, and vice versa?
To conduct the analysis, the dice_ml library is used, with the continuous features and outcome variable defined:
import dice_ml
from dice_ml.utils import helpers  # helper functions

# Pass only the features and outcome to DiCE, so that the remaining
# columns are not treated as additional features
d = dice_ml.Data(dataframe=nyc_tlc_df[['tripDistance', 'tipAmount', 'totalAmount']],
                 continuous_features=['tripDistance', 'tipAmount'],
                 outcome_name='totalAmount')
The data is split into training and test sets, with the numerical and categorical variables defined and preprocessing pipelines for the numeric and categorical data created; the model is then trained using a Random Forest Classifier. The full code for conducting this is available at the DiCE repository, and a condensed sketch of these steps is shown below.
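This sketch loosely follows the pattern in the DiCE documentation; the estimator settings and the method='random' explainer are assumptions rather than the article's exact code, and with only two numeric features the categorical pipeline is omitted:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Split the binarised data into training and test sets
features = nyc_tlc_df[['tripDistance', 'tipAmount']]
target = nyc_tlc_df['totalAmount']
x_train, x_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=seed)

# Train the classifier on the two numeric features
model = RandomForestClassifier(random_state=seed)
model.fit(x_train, y_train)

# Wrap the trained model for DiCE and generate counterfactuals
# for a handful of query instances from the test set
m = dice_ml.Model(model=model, backend='sklearn')
exp = dice_ml.Dice(d, m, method='random')
dice_exp_random = exp.generate_counterfactuals(x_test[:2], total_CFs=4, desired_class='opposite')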
Here are the counterfactual results as provided by dice-ml:
>>> # generate counterfactuals
>>> dice_exp_random.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 1)
   tripDistance  tipAmount  totalAmount
0           1.8       1.85            1

Diverse Counterfactual set (new outcome: 0.0)
  tripDistance tipAmount totalAmount
0          0.4       1.0         0.0
1          0.3       1.0         0.0

Query instance (original outcome : 1)
   tripDistance  tipAmount  totalAmount
0           2.3        2.0            1

Diverse Counterfactual set (new outcome: 0.0)
  tripDistance tipAmount totalAmount
0          1.0         -         0.0
1          0.6         -         0.0
2          0.1         -         0.0
3          0.4         -         0.0
These counterfactual results provide a couple of interesting insights.
Firstly, we can see that when the tipAmount variable is greater than 1, the totalAmount variable also shows a value of 1, indicating the total amount charged was greater than $10.
In instances where the outcome variable changes to 0 (a fare of less than $10), we can see that the tipAmount variable is no greater than 1.0 and the trip distance is shorter than 1.0 in all instances.
This may indicate that tips are more likely to be paid on trips with longer distances. It may also indicate that tips are a substantial contributor to the total amount received by the taxi driver for a particular trip: there may be instances where a trip is short in distance, but the payment of a tip yields a higher total amount than trips where the distance is longer but no tip is paid.
In this article, we have explored:
- How to analyse and preprocess large datasets
- How to use InterpretML to conduct regression analysis
- The difference between global and local explanations in an InterpretML model
- The use of DiCE-ML for generating counterfactual explanations, and the insights that can be yielded from this technique
If you wish, you can also try running the above models across different time periods for the dataset and see what you come up with. I hope you enjoyed this article and would appreciate any questions or feedback!
Disclaimer: This article is written on an "as is" basis and without warranty. It was written with the intention of providing an overview of data science concepts, and should not be interpreted as professional advice. The findings and interpretations in this article are those of the author and are not endorsed by or affiliated with any third party mentioned in this article. The author has no relationship with any third parties mentioned in this article.