pyepo.metric.metrics
Metrics for scikit-learn models
Functions
- A function to calculate normalized true regret
- A function to create a scikit-learn scorer
- A function to create an Auto-sklearn scorer
- A function to calculate MSE for testing
- A function to create an MSE scorer for testing
Module Contents
- pyepo.metric.metrics.SPOError(pred_cost, true_cost, model_type, args)
A function to calculate normalized true regret
- Parameters:
pred_cost (numpy.ndarray) – predicted costs
true_cost (numpy.ndarray) – true costs
model_type (ABCMeta) – optModel class type
args (dict) – optModel args
- Returns:
normalized regret loss
- Return type:
float
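The normalized regret that SPOError reports can be illustrated without PyEPO itself. The sketch below uses a hypothetical two-item selection problem as a stand-in for the optModel: for each sample, the decision induced by the predicted costs is evaluated under the true costs, the true optimal objective is subtracted, and the summed regret is normalized by the summed absolute optimal objectives.

```python
import numpy as np

# Toy selection problem standing in for an optModel: pick exactly one of
# two items to minimize cost. (Hypothetical; PyEPO solves a user-supplied
# optimization model here.)
def solve(cost):
    w = np.zeros_like(cost)
    w[np.argmin(cost)] = 1.0
    return w

pred_cost = np.array([[1.0, 2.0], [3.0, 1.0]])  # predicted cost vectors
true_cost = np.array([[2.0, 1.0], [3.0, 1.0]])  # true cost vectors

regret_sum, opt_sum = 0.0, 0.0
for c_pred, c_true in zip(pred_cost, true_cost):
    w_pred = solve(c_pred)           # decision induced by predicted costs
    z_pred = c_true @ w_pred         # its objective under the true costs
    z_true = c_true @ solve(c_true)  # true optimal objective value
    regret_sum += z_pred - z_true
    opt_sum += abs(z_true)

normalized_regret = regret_sum / (opt_sum + 1e-7)  # small eps avoids /0
print(normalized_regret)  # ~0.5
```

Here the first sample's predicted costs pick the wrong item (regret 1.0), the second sample's pick is optimal (regret 0.0), giving a normalized regret of about 0.5.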
- pyepo.metric.metrics.makeSkScorer(optmodel)
A function to create a scikit-learn scorer
- Parameters:
optmodel (optModel) – optimization model
- Returns:
a callable that returns a scalar score; lower is better
- Return type:
scorer
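The "lower is better" convention is worth unpacking: scikit-learn maximizes scores, so a loss-style metric such as regret must be negated inside the scorer (this mirrors `sklearn.metrics.make_scorer(loss, greater_is_better=False)`). A minimal self-contained sketch, with hypothetical helpers `make_loss_scorer` and `ConstantModel`:

```python
# Sketch of the "lower is better" scorer convention: the loss is negated
# so that a smaller loss yields a larger (better) score.
def make_loss_scorer(loss_fn):
    def scorer(estimator, X, y):
        y_pred = estimator.predict(X)
        return -loss_fn(y, y_pred)  # negate: smaller loss -> larger score
    return scorer

class ConstantModel:
    """Hypothetical estimator that always predicts 2.0."""
    def predict(self, X):
        return [2.0] * len(X)

def mse(y, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y, y_pred)) / len(y)

scorer = make_loss_scorer(mse)
score = scorer(ConstantModel(), [[0], [0]], [1.0, 3.0])
print(score)  # -1.0
```

A scorer built this way plugs directly into utilities like `cross_val_score` or `GridSearchCV`, which always select the model with the highest score.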
- pyepo.metric.metrics.makeAutoSkScorer(optmodel)
A function to create an Auto-sklearn scorer
- Parameters:
optmodel (optModel) – optimization model
- Returns:
a callable that returns a scalar score; lower is better
- Return type:
scorer
- pyepo.metric.metrics.testMSE(pred_cost, true_cost, model_type, args)
A function to calculate MSE for testing
- Parameters:
pred_cost (numpy.ndarray) – predicted costs
true_cost (numpy.ndarray) – true costs
model_type (ABCMeta) – optModel class type
args (dict) – optModel args
- Returns:
mean squared error (MSE)
- Return type:
float
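Assuming testMSE measures the mean squared error between the predicted and true cost vectors directly (with model_type and args present for interface compatibility with the regret metric), the computation can be sketched as:

```python
import numpy as np

# Hypothetical cost arrays: two samples, two cost coefficients each.
pred_cost = np.array([[1.0, 2.0], [3.0, 5.0]])
true_cost = np.array([[1.0, 4.0], [3.0, 1.0]])

# Element-wise squared differences, averaged over all entries.
mse = np.mean((pred_cost - true_cost) ** 2)
print(mse)  # 5.0
```

Unlike the regret metric, this requires no optimization solve, so it is cheap to evaluate but only reflects prediction accuracy, not decision quality.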