pyepo.func
PyTorch autograd functions for end-to-end training
Classes
- SPOPlus: An autograd module for SPO+ loss, a surrogate of the SPO (regret) loss.
- perturbationGradient: An autograd module for PG loss, a surrogate of the objective value.
- blackboxOpt: An autograd module for the differentiable black-box optimizer, which yields an optimal solution and derives a gradient.
- negativeIdentity: An autograd module for the differentiable optimizer that uses the negative identity as the backward-pass gradient.
- perturbedOpt: An autograd module for the perturbed optimizer based on Fenchel-Young loss with perturbation techniques.
- perturbedFenchelYoung: An autograd module for the Fenchel-Young loss using perturbation techniques.
- implicitMLE: An autograd module for the Implicit Maximum Likelihood Estimator, which yields optimal solutions via Perturb-and-MAP.
- adaptiveImplicitMLE: An autograd module for the Adaptive Implicit Maximum Likelihood Estimator, which chooses its hyperparameter adaptively.
- NCE: An autograd module for noise contrastive estimation as a surrogate loss.
- contrastiveMAP: An autograd module for Maximum A Posteriori contrastive estimation.
- listwiseLTR: An autograd module for listwise learning to rank over a pool of feasible solutions.
- pairwiseLTR: An autograd module for pairwise learning to rank over a pool of feasible solutions.
- pointwiseLTR: An autograd module for pointwise learning to rank over a pool of feasible solutions.
Package Contents
- class pyepo.func.SPOPlus(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for SPO+ Loss, as a surrogate loss function of SPO (regret) Loss, which measures the decision error of the optimization problem.
For SPO/SPO+ Loss, the objective function is linear and constraints are known and fixed, but the cost vector needs to be predicted from contextual data.
The SPO+ Loss is convex and admits subgradients. Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://doi.org/10.1287/mnsc.2020.3922>
- spop
- forward(pred_cost, true_cost, true_sol, true_obj)
Forward pass
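A minimal end-to-end sketch of a training step with SPOPlus. The constructor and forward signatures are taken from this page; the shortest-path model, synthetic data generation, and linear predictor follow the PyEPO quickstart and are assumptions here (the Gurobi-based model can be swapped for any `pyepo.model` optimization model, and the predictor for any PyTorch regressor).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
import pyepo

# assumed setup from the PyEPO quickstart: 5x5 shortest-path model and synthetic data
grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)                        # requires Gurobi
feats, costs = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, noise_width=0.5)
dataset = pyepo.data.dataset.optDataset(optmodel, feats, costs)           # solves for true sols/objs
loader = DataLoader(dataset, batch_size=32, shuffle=True)

predictor = nn.Linear(5, 40)                      # 5 features -> 40 arc costs on a 5x5 grid
spop = pyepo.func.SPOPlus(optmodel, processes=2)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)

for x, c, w, z in loader:                         # features, costs, solutions, objective values
    cp = predictor(x)                             # predicted cost vectors
    loss = spop(cp, c, w, z)                      # SPO+ surrogate loss ('mean' reduction)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The later sketches on this page reuse `optmodel`, `predictor`, and a batch `(x, c, w, z)` from this loop.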
- class pyepo.func.perturbationGradient(optmodel, sigma=0.1, two_sides=False, processes=1, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for PG Loss, as a surrogate loss function of objective value, which measures the decision quality of the optimization problem.
For PG Loss, the objective function is linear, and constraints are known and fixed, but the cost vector needs to be predicted from contextual data.
According to Danskin's Theorem, the PG Loss is derived from zeroth-order approximations (one-sided or two-sided finite differences) and has an informative gradient. Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://arxiv.org/abs/2402.03256>
- sigma = 0.1
- two_sides = False
- forward(pred_cost, true_cost)
Forward pass
- _finiteDifference(pred_cost, true_cost)
Zeroth order approximations for surrogate objective value
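A usage sketch for perturbationGradient, reusing the assumed `optmodel`, `predictor`, and batch `(x, c, w, z)` from the SPOPlus sketch above; only the predicted and true cost vectors are needed.

```python
pg = pyepo.func.perturbationGradient(optmodel, sigma=0.1, two_sides=True, processes=2)

cp = predictor(x)        # predicted costs
loss = pg(cp, c)         # PG surrogate loss from a two-sided finite difference
loss.backward()
```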
- class pyepo.func.blackboxOpt(optmodel, lambd=10, processes=1, solve_ratio=1, dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the differentiable black-box optimizer, which yields an optimal solution and derives a gradient.
For the differentiable black-box optimizer, the objective function is linear and constraints are known and fixed, but the cost vector needs to be predicted from contextual data.
The black-box module approximates the gradient of the optimizer by interpolating the loss function. Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://arxiv.org/abs/1912.02175>
- lambd = 10
- dbb
- forward(pred_cost)
Forward pass
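A usage sketch for blackboxOpt, reusing the assumed names from the SPOPlus sketch. The module returns solutions rather than a loss, so a downstream loss must be chosen; the objective value under the true cost (for a minimization problem) is one such choice and is an assumption here.

```python
dbb = pyepo.func.blackboxOpt(optmodel, lambd=10, processes=2)

cp = predictor(x)                    # predicted costs
wp = dbb(cp)                         # solutions solved from the predicted costs
loss = (wp * c).sum(dim=1).mean()    # assumed downstream loss: objective value under true costs
loss.backward()                      # gradient approximated by interpolation
```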
- class pyepo.func.negativeIdentity(optmodel, processes=1, solve_ratio=1, dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the differentiable optimizer, which yields an optimal solution and uses the negative identity as the gradient on the backward pass.
For negative identity backpropagation, the objective function is linear and constraints are known and fixed, but the cost vector needs to be predicted from contextual data.
If the interpolation hyperparameter λ aligns with an appropriate step size, the identity update is equivalent to DBB. However, the identity update requires neither an additional call to the solver during the backward pass nor tuning of the additional hyperparameter λ.
Reference: <https://arxiv.org/abs/2205.15213>
- nid
- forward(pred_cost)
Forward pass
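A sketch for negativeIdentity, analogous to the blackboxOpt sketch and reusing the same assumed names and downstream loss.

```python
nid = pyepo.func.negativeIdentity(optmodel, processes=2)

cp = predictor(x)
wp = nid(cp)                         # optimal solutions for the predicted costs
loss = (wp * c).sum(dim=1).mean()    # same assumed downstream loss as the DBB sketch
loss.backward()                      # backward pass uses the negative identity, no extra solve
```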
- class pyepo.func.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the perturbed optimizer, based on the Fenchel-Young loss with perturbation techniques. The use of this loss improves the algorithm via the specific expression of the gradients of the loss.
For the perturbed optimizer, the cost vector needs to be predicted from contextual data and is perturbed with Gaussian noise.
Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://papers.nips.cc/paper/2020/hash/6bb56208f672af0dd65451f869fedfd9-Abstract.html>
- n_samples = 10
- sigma = 1.0
- rnd
- ptb
- forward(pred_cost)
Forward pass
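A sketch for perturbedOpt, reusing the assumed names from the SPOPlus sketch. The module returns the expected solution under Gaussian perturbations; the squared error to the true solution used below is an assumed downstream loss, not the only option.

```python
ptb = pyepo.func.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=2)

cp = predictor(x)
we = ptb(cp)                               # expected solution under Gaussian perturbations
loss = ((we - w) ** 2).sum(dim=1).mean()   # assumed loss: squared error to the true solution
loss.backward()
```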
- class pyepo.func.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the Fenchel-Young loss using perturbation techniques. The use of this loss improves the algorithm via the specific expression of the gradients of the loss.
For the perturbed optimizer, the cost vector needs to be predicted from contextual data and is perturbed with Gaussian noise.
The Fenchel-Young loss allows directly optimizing a loss between the features and solutions with less computation. Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://papers.nips.cc/paper/2020/hash/6bb56208f672af0dd65451f869fedfd9-Abstract.html>
- n_samples = 10
- sigma = 1.0
- rnd
- pfy
- forward(pred_cost, true_sol)
Forward pass
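A sketch for perturbedFenchelYoung, reusing the assumed names from the SPOPlus sketch; the loss is computed directly from predicted costs and true solutions.

```python
pfy = pyepo.func.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0, processes=2)

cp = predictor(x)
loss = pfy(cp, w)        # Fenchel-Young loss against the true solutions
loss.backward()
```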
- class pyepo.func.implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10, distribution=sumGammaDistribution(kappa=5), two_sides=False, processes=1, solve_ratio=1, dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the Implicit Maximum Likelihood Estimator, which yields an optimal solution in a constrained exponential family distribution via Perturb-and-MAP.
I-MLE treats the optimizer as a black-box combinatorial solver, in which constraints are known and fixed, but the cost vector needs to be predicted from contextual data.
I-MLE smoothly approximates the gradient of the optimizer. Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://proceedings.neurips.cc/paper_files/paper/2021/hash/7a430339c10c642c4b2251756fd1b484-Abstract.html>
- n_samples = 10
- sigma = 1.0
- lambd = 10
- distribution
- two_sides = False
- imle
- forward(pred_cost)
Forward pass
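A sketch for implicitMLE with the default perturbation distribution, reusing the assumed names and downstream loss from the earlier sketches.

```python
imle = pyepo.func.implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10, processes=2)

cp = predictor(x)
wp = imle(cp)                        # Perturb-and-MAP solutions for the predicted costs
loss = (wp * c).sum(dim=1).mean()    # assumed downstream loss, as in the DBB sketch
loss.backward()
```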
- class pyepo.func.adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0, distribution=sumGammaDistribution(kappa=5), two_sides=False, processes=1, solve_ratio=1, dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the Adaptive Implicit Maximum Likelihood Estimator, which adaptively chooses the hyperparameter λ and yields an optimal solution in a constrained exponential family distribution via Perturb-and-MAP.
AI-MLE treats the optimizer as a black-box combinatorial solver, in which constraints are known and fixed, but the cost vector needs to be predicted from contextual data.
AI-MLE smoothly approximates the gradient of the optimizer. Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://ojs.aaai.org/index.php/AAAI/article/view/26103>
- n_samples = 10
- sigma = 1.0
- distribution
- two_sides = False
- alpha = 0
- grad_norm_avg = 1
- step = 0.001
- aimle
- forward(pred_cost)
Forward pass
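A sketch for adaptiveImplicitMLE, identical in usage to implicitMLE except that λ is chosen adaptively; assumed names and downstream loss as before.

```python
aimle = pyepo.func.adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0, processes=2)

cp = predictor(x)
wp = aimle(cp)                       # Perturb-and-MAP solutions, λ chosen adaptively
loss = (wp * c).sum(dim=1).mean()    # assumed downstream loss, as in the DBB sketch
loss.backward()
```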
- class pyepo.func.NCE(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for noise contrastive estimation as surrogate loss functions, based on viewing suboptimal solutions as negative examples.
For NCE, the cost vector needs to be predicted from contextual data, and the loss maximizes the separation between the probability of the optimal solution and the probabilities of the suboptimal (negative) solutions.
Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://www.ijcai.org/proceedings/2021/390>
- solpool
- forward(pred_cost, true_sol)
Forward pass
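A sketch for NCE, reusing the assumed names from the SPOPlus sketch; the loss contrasts the true solution with previously seen (pooled) solutions.

```python
nce = pyepo.func.NCE(optmodel, processes=2)

cp = predictor(x)
loss = nce(cp, w)        # contrast the true solution against pooled negative solutions
loss.backward()
```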
- class pyepo.func.contrastiveMAP(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for Maximum A Posteriori contrastive estimation as a surrogate loss function, which is an efficient self-contrastive algorithm.
For MAP, the cost vector needs to be predicted from contextual data, and the loss maximizes the separation of the probability of the optimal solution.
Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://www.ijcai.org/proceedings/2021/390>
- solpool
- forward(pred_cost, true_sol)
Forward pass
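A sketch for contrastiveMAP, with the same call pattern as NCE and the same assumed names.

```python
cmap = pyepo.func.contrastiveMAP(optmodel, processes=2)

cp = predictor(x)
loss = cmap(cp, w)       # MAP contrastive loss against the true solutions
loss.backward()
```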
- class pyepo.func.listwiseLTR(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for listwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly.
For the listwise LTR, the cost vector needs to be predicted from the contextual data, and the loss measures the scores of the whole ranked list of solutions.
Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://proceedings.mlr.press/v162/mandi22a.html>
- solpool
- forward(pred_cost, true_cost)
Forward pass
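A sketch for listwiseLTR, reusing the assumed names from the SPOPlus sketch; predicted and true costs are compared over the pool of feasible solutions.

```python
ltr = pyepo.func.listwiseLTR(optmodel, processes=2)

cp = predictor(x)
loss = ltr(cp, c)        # listwise ranking loss over the solution pool
loss.backward()
```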
- class pyepo.func.pairwiseLTR(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for pairwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly.
For the pairwise LTR, the cost vector needs to be predicted from the contextual data and the loss learns the relative ordering of pairs of items.
Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://proceedings.mlr.press/v162/mandi22a.html>
- relu
- solpool
- forward(pred_cost, true_cost)
Forward pass
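A sketch for pairwiseLTR; usage mirrors listwiseLTR with the same assumed names.

```python
ltr = pyepo.func.pairwiseLTR(optmodel, processes=2)

cp = predictor(x)
loss = ltr(cp, c)        # pairwise ranking loss over pairs from the solution pool
loss.backward()
```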
- class pyepo.func.pointwiseLTR(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for pointwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly.
For the pointwise LTR, the cost vector needs to be predicted from contextual data, and the loss calculates the ranking scores of the items.
Thus, it allows us to design an algorithm based on stochastic gradient descent.
Reference: <https://proceedings.mlr.press/v162/mandi22a.html>
- solpool
- forward(pred_cost, true_cost)
Forward pass
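A sketch for pointwiseLTR; usage again mirrors the other LTR losses with the same assumed names.

```python
ltr = pyepo.func.pointwiseLTR(optmodel, processes=2)

cp = predictor(x)
loss = ltr(cp, c)        # pointwise ranking loss on the pooled solutions' scores
loss.backward()
```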