A very simple script that decides whether a single model or an ensemble is better for making predictions. Given an input source, it creates both, evaluates them, and chooses the one with the best f-measure if the objective field is categorical, or the best r-squared for regression problems.
Given an input source:
Create a dataset from the source.
Split it into training and test parts (80%/20%).
Create a model using the training dataset.
Create an ensemble using the training dataset.
Evaluate both the model and the ensemble using the test dataset.
Compare their evaluations and choose the best, as in the sketch below.
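The script itself runs on BigML, but the same workflow can be sketched with the BigML Python bindings. This is an illustrative sketch rather than the script's actual code: the input file, the split seed, and the tie-breaking rule are all assumptions.

```python
from bigml.api import BigML

api = BigML()  # credentials taken from BIGML_USERNAME / BIGML_API_KEY

source = api.create_source("data/churn.csv")  # illustrative input file
api.ok(source)
dataset = api.create_dataset(source)
api.ok(dataset)

# 80/20 split: the same sample rate and seed, with out_of_bag flipped,
# yield complementary training and test subsets.
split = {"sample_rate": 0.8, "seed": "model-vs-ensemble"}
train = api.create_dataset(dataset, dict(split, out_of_bag=False))
test = api.create_dataset(dataset, dict(split, out_of_bag=True))
api.ok(train)
api.ok(test)

model = api.create_model(train)
ensemble = api.create_ensemble(train)
api.ok(model)
api.ok(ensemble)

model_eval = api.create_evaluation(model, test)
ensemble_eval = api.create_evaluation(ensemble, test)
api.ok(model_eval)
api.ok(ensemble_eval)

def score(evaluation):
    """Average f-measure for classifications, r-squared for regressions."""
    result = evaluation["object"]["result"]["model"]
    return result.get("average_f_measure", result.get("r_squared"))

# Ties go to the single model here; the real script may break them differently.
best = model if score(model_eval) >= score(ensemble_eval) else ensemble
print("Best resource:", best["resource"])
```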
A second script uses the BigML anomaly detection functions to assess covariate shift between the dataset used to train a model and a production dataset.
In brief, the method computes an average anomaly score of the production dataset relative to the training dataset and uses it as a measure of covariate shift. An anomaly detector is trained on the same dataset used to train the model; that detector then produces a batch anomaly score for the production dataset, and the average value of that batch score serves as the shift indicator.
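As a sketch, those steps can be reproduced with the BigML Python bindings. The dataset IDs below are placeholders, and reading the mean from the field summary of the batch score's output dataset is one possible way to compute the average:

```python
from bigml.api import BigML

api = BigML()  # credentials taken from BIGML_USERNAME / BIGML_API_KEY

# Placeholder IDs for the model's training dataset and the production data.
train_id = "dataset/aaaaaaaaaaaaaaaaaaaaaaaa"
prod_id = "dataset/bbbbbbbbbbbbbbbbbbbbbbbb"

# Train an anomaly detector on the same dataset used to train the model.
anomaly = api.create_anomaly(train_id)
api.ok(anomaly)

# Score the production dataset against it, materializing the scores as a
# new dataset so its summary statistics can be read back.
batch = api.create_batch_anomaly_score(
    anomaly, prod_id, {"output_dataset": True, "all_fields": False})
api.ok(batch)

scores = api.get_dataset(batch["object"]["output_dataset_resource"])
api.ok(scores)

# "score" is the default name of the output column; its summary mean is
# the covariate-shift indicator.
fields = scores["object"]["fields"]
score_field = next(f for f in fields.values() if f["name"] == "score")
print("average anomaly score:", score_field["summary"]["mean"])
```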
In practice, one might repeat this for several pairs of random subsets of the training and production datasets, and then assess covariate shift from the mean and variance of the resulting average scores across iterations.
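A minimal sketch of that iteration, again with the Python bindings; the 50% sample rate, the five repetitions, and the seed naming are arbitrary choices, not part of the method:

```python
import statistics
from bigml.api import BigML

api = BigML()  # credentials taken from BIGML_USERNAME / BIGML_API_KEY

# Placeholder IDs, as in the previous sketch.
train_id = "dataset/aaaaaaaaaaaaaaaaaaaaaaaa"
prod_id = "dataset/bbbbbbbbbbbbbbbbbbbbbbbb"

def average_score(seed):
    """One iteration: sample both datasets, train a detector on the
    training sample, and return the mean batch anomaly score on the
    production sample."""
    sample = {"sample_rate": 0.5, "seed": seed}
    train_part = api.create_dataset(train_id, sample)
    prod_part = api.create_dataset(prod_id, sample)
    api.ok(train_part)
    api.ok(prod_part)

    anomaly = api.create_anomaly(train_part)
    api.ok(anomaly)

    batch = api.create_batch_anomaly_score(
        anomaly, prod_part, {"output_dataset": True, "all_fields": False})
    api.ok(batch)

    scores = api.get_dataset(batch["object"]["output_dataset_resource"])
    api.ok(scores)
    fields = scores["object"]["fields"]
    return next(f for f in fields.values()
                if f["name"] == "score")["summary"]["mean"]

averages = [average_score("trial-%d" % i) for i in range(5)]
print("mean of averages:", statistics.mean(averages))
print("variance of averages:", statistics.variance(averages))
```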
Check this readme for more information.