pyepo.func

PyTorch autograd functions for end-to-end training

Package Contents

Classes

SPOPlus

An autograd module for the SPO+ loss, a surrogate of the SPO loss that measures the decision error of the optimization problem.

blackboxOpt

An autograd module for the differentiable black-box optimizer, which yields an optimal solution and derives a gradient by interpolation.

negativeIdentity

An autograd module for the differentiable optimizer, which yields an optimal solution and uses the negative identity as the backward-pass gradient.

perturbedOpt

An autograd module for the differentiable perturbed optimizer, in which randomly perturbed costs are sampled to optimize.

perturbedFenchelYoung

An autograd module for the Fenchel-Young loss using perturbation techniques, whose gradients have an explicit expression.

implicitMLE

An autograd module for the Implicit Maximum Likelihood Estimator, which yields an optimal solution of a constrained exponential family distribution via Perturb-and-MAP.

adaptiveImplicitMLE

An autograd module for the Adaptive Implicit Maximum Likelihood Estimator, which adaptively chooses the hyperparameter λ.

NCE

An autograd module for noise contrastive estimation as a surrogate loss function, based on viewing suboptimal solutions as negative examples.

contrastiveMAP

An autograd module for Maximum A Posteriori contrastive estimation, an efficient self-contrastive algorithm.

listwiseLTR

An autograd module for listwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly.

pairwiseLTR

An autograd module for pairwise learning to rank, which learns the relative ordering of pairs of solutions in the pool.

pointwiseLTR

An autograd module for pointwise learning to rank, which computes a ranking score for each solution in the pool.

class pyepo.func.SPOPlus(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the SPO+ loss, a surrogate of the SPO loss, which measures the decision error of the optimization problem.

For the SPO/SPO+ loss, the objective function is linear and the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.

The SPO+ loss is convex and admits a subgradient, which allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://doi.org/10.1287/mnsc.2020.3922>

forward(pred_cost, true_cost, true_sol, true_obj)

Forward pass
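
The following is a minimal end-to-end training sketch. The shortest-path model, data generator, and optDataset come from pyepo.model.grb and pyepo.data rather than this page (and the Gurobi-based model assumes a Gurobi installation); only the SPOPlus constructor and forward signature above are taken from this section.

```python
import torch
from torch.utils.data import DataLoader
import pyepo

# a 5x5 shortest-path instance; pyepo.model.grb requires Gurobi
grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)

# synthetic features and costs; optDataset pre-solves true solutions/objectives
x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, seed=135)
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

predictor = torch.nn.Linear(5, 40)  # 5 features -> 40 edge costs in a 5x5 grid
spop = pyepo.func.SPOPlus(optmodel, processes=2)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)

for xb, cb, wb, zb in loader:       # features, costs, true solutions, objectives
    cp = predictor(xb)              # predicted cost vectors
    loss = spop(cp, cb, wb, zb)     # SPO+ loss (documented forward signature)
    optimizer.zero_grad()
    loss.backward()                 # subgradient of the convex SPO+ loss
    optimizer.step()
```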

class pyepo.func.blackboxOpt(optmodel, lambd=10, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the differentiable black-box optimizer, which yields an optimal solution and derives a gradient.

For the differentiable black-box optimizer, the objective function is linear and the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.

The black-box optimizer approximates the gradient of the solver by interpolating the loss function, which allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://arxiv.org/abs/1912.02175>

forward(pred_cost)

Forward pass
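
A usage sketch: unlike SPOPlus, blackboxOpt returns solutions rather than a loss, so training needs a downstream differentiable loss. The shortest-path model and the objective-value loss below are illustrative assumptions, not part of this page.

```python
import torch
import pyepo

optmodel = pyepo.model.grb.shortestPathModel((5, 5))  # assumes Gurobi
dbb = pyepo.func.blackboxOpt(optmodel, lambd=10, processes=2)

predictor = torch.nn.Linear(5, 40)                    # 40 edges in a 5x5 grid
xb = torch.randn(8, 5)                                # a batch of features
cb = torch.rand(8, 40)                                # placeholder true costs
sol = dbb(predictor(xb))                              # solve with predicted costs
loss = (sol * cb).sum(dim=1).mean()                   # objective under true costs
loss.backward()                                       # gradient via interpolation
```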

class pyepo.func.negativeIdentity(optmodel, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the differentiable optimizer, which yields an optimal solution and uses the negative identity as the gradient on the backward pass.

For negative identity backpropagation, the objective function is linear and the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.

If the interpolation hyperparameter λ aligns with an appropriate step size, the identity update is equivalent to DBB. However, the identity update requires neither an additional call to the solver during the backward pass nor tuning of the additional hyperparameter λ.

Reference: <https://arxiv.org/abs/2205.15213>

forward(pred_cost)

Forward pass
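
The sketch below mirrors the blackboxOpt example under the same assumptions; the differences are the module itself (no λ to tune) and its backward rule.

```python
import torch
import pyepo

optmodel = pyepo.model.grb.shortestPathModel((5, 5))  # assumes Gurobi
nid = pyepo.func.negativeIdentity(optmodel)           # no lambd hyperparameter

predictor = torch.nn.Linear(5, 40)
xb, cb = torch.randn(8, 5), torch.rand(8, 40)         # features, placeholder costs
sol = nid(predictor(xb))                              # forward: solve as usual
loss = (sol * cb).sum(dim=1).mean()                   # illustrative downstream loss
loss.backward()       # backward: negative identity, no extra solver call
```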

class pyepo.func.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the differentiable perturbed optimizer, in which randomly perturbed costs are sampled to optimize.

For the perturbed optimizer, the cost vector needs to be predicted from contextual data and is perturbed with Gaussian noise.

The perturbed optimizer is differentiable in its inputs, with a non-zero Jacobian. Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://papers.nips.cc/paper/2020/hash/6bb56208f672af0dd65451f869fedfd9-Abstract.html>

forward(pred_cost)

Forward pass
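
A hedged sketch: perturbedOpt returns the (approximately) expected solution under perturbation, which can be compared to the true solution with an ordinary regression loss. The squared-error choice below is illustrative, and the model/data helpers are the same assumptions as in the SPOPlus example.

```python
import torch
from torch.utils.data import DataLoader
import pyepo

grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)    # assumes Gurobi
x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, seed=135)
loader = DataLoader(pyepo.data.dataset.optDataset(optmodel, x, c), batch_size=32)

ptb = pyepo.func.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=2)
predictor = torch.nn.Linear(5, 40)
for xb, cb, wb, zb in loader:
    sol = ptb(predictor(xb))          # expected solution under Gaussian noise
    loss = ((sol - wb) ** 2).mean()   # one possible choice: match true solutions
    loss.backward()
    break
```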

class pyepo.func.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, reduction='mean', dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the Fenchel-Young loss using perturbation techniques. The use of this loss improves training, as its gradients have an explicit expression.

For the perturbed optimizer, the cost vector needs to be predicted from contextual data and is perturbed with Gaussian noise.

The Fenchel-Young loss allows us to directly optimize a loss between features and solutions with less computation. Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://papers.nips.cc/paper/2020/hash/6bb56208f672af0dd65451f869fedfd9-Abstract.html>

forward(pred_cost, true_sol)

Forward pass
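
In contrast to perturbedOpt, the forward pass here returns the loss itself from predicted costs and true solutions. The sketch uses the same assumed setup as the examples above.

```python
import torch
from torch.utils.data import DataLoader
import pyepo

grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)    # assumes Gurobi
x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, seed=135)
loader = DataLoader(pyepo.data.dataset.optDataset(optmodel, x, c), batch_size=32)

pfy = pyepo.func.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0)
predictor = torch.nn.Linear(5, 40)
for xb, cb, wb, zb in loader:
    loss = pfy(predictor(xb), wb)     # loss directly from costs and true solutions
    loss.backward()                   # explicit Fenchel-Young gradient
    break
```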

class pyepo.func.implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10, distribution=sumGammaDistribution(kappa=5), two_sides=False, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the Implicit Maximum Likelihood Estimator (I-MLE), which yields an optimal solution of a constrained exponential family distribution via Perturb-and-MAP.

I-MLE works with black-box combinatorial solvers, in which the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.

I-MLE smoothly approximates the gradient of the optimizer, which allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://proceedings.neurips.cc/paper_files/paper/2021/hash/7a430339c10c642c4b2251756fd1b484-Abstract.html>

forward(pred_cost)

Forward pass
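
A sketch under the same assumed shortest-path setup: I-MLE returns a solution sample, so a downstream differentiable loss is needed (the objective-value loss below is one illustrative choice). The default sum-of-Gamma perturbation from the signature above is used.

```python
import torch
import pyepo

optmodel = pyepo.model.grb.shortestPathModel((5, 5))  # assumes Gurobi
imle = pyepo.func.implicitMLE(optmodel, n_samples=10, sigma=1.0,
                              lambd=10)               # default sum-of-Gamma noise

predictor = torch.nn.Linear(5, 40)
xb, cb = torch.randn(8, 5), torch.rand(8, 40)         # features, placeholder costs
sol = imle(predictor(xb))             # Perturb-and-MAP solution sample
loss = (sol * cb).sum(dim=1).mean()   # illustrative downstream loss
loss.backward()
```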

class pyepo.func.adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0, distribution=sumGammaDistribution(kappa=5), two_sides=False, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the Adaptive Implicit Maximum Likelihood Estimator (AI-MLE), which adaptively chooses the hyperparameter λ and yields an optimal solution of a constrained exponential family distribution via Perturb-and-MAP.

AI-MLE works with black-box combinatorial solvers, in which the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.

AI-MLE smoothly approximates the gradient of the optimizer, which allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://ojs.aaai.org/index.php/AAAI/article/view/26103>

forward(pred_cost)

Forward pass
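
Same pattern as the implicitMLE sketch; note there is no lambd argument, since AI-MLE chooses the interpolation strength adaptively.

```python
import torch
import pyepo

optmodel = pyepo.model.grb.shortestPathModel((5, 5))  # assumes Gurobi
aimle = pyepo.func.adaptiveImplicitMLE(optmodel, n_samples=10,
                                       sigma=1.0)     # lambd chosen adaptively

predictor = torch.nn.Linear(5, 40)
xb, cb = torch.randn(8, 5), torch.rand(8, 40)
sol = aimle(predictor(xb))            # Perturb-and-MAP solution sample
loss = (sol * cb).sum(dim=1).mean()   # illustrative downstream loss
loss.backward()
```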

class pyepo.func.NCE(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for noise contrastive estimation (NCE) as a surrogate loss function, based on viewing suboptimal solutions as negative examples.

For NCE, the cost vector needs to be predicted from contextual data, and the loss maximizes the separation between the optimal solution and the negative examples.

Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://www.ijcai.org/proceedings/2021/390>

forward(pred_cost, true_sol)

Forward pass
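
A sketch assuming, as in PyEPO's own examples, that the training dataset is passed via the documented dataset parameter to seed the pool of solutions used as negative examples; the model/data helpers are the same assumptions as above.

```python
import torch
from torch.utils.data import DataLoader
import pyepo

grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)    # assumes Gurobi
x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, seed=135)
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)

# the training dataset seeds the pool of solutions used as negatives
nce = pyepo.func.NCE(optmodel, processes=2, dataset=dataset)
predictor = torch.nn.Linear(5, 40)
for xb, cb, wb, zb in DataLoader(dataset, batch_size=32):
    loss = nce(predictor(xb), wb)     # contrast true solution against the pool
    loss.backward()
    break
```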

class pyepo.func.contrastiveMAP(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for Maximum A Posteriori (MAP) contrastive estimation as a surrogate loss function, which is an efficient self-contrastive algorithm.

For MAP contrastive estimation, the cost vector needs to be predicted from contextual data, and the loss maximizes the separation between the optimal solution and the negative examples.

Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://www.ijcai.org/proceedings/2021/390>

forward(pred_cost, true_sol)

Forward pass
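
Usage mirrors the NCE sketch under the same assumptions; only the module changes.

```python
import torch
from torch.utils.data import DataLoader
import pyepo

grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)    # assumes Gurobi
x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, seed=135)
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)

cmap = pyepo.func.contrastiveMAP(optmodel, processes=2, dataset=dataset)
predictor = torch.nn.Linear(5, 40)
for xb, cb, wb, zb in DataLoader(dataset, batch_size=32):
    loss = cmap(predictor(xb), wb)    # same interface as NCE
    loss.backward()
    break
```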

class pyepo.func.listwiseLTR(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for listwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly.

For the listwise LTR, the cost vector needs to be predicted from the contextual data, and the loss scores the whole ranked list of feasible solutions.

Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://proceedings.mlr.press/v162/mandi22a.html>

forward(pred_cost, true_cost)

Forward pass
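
A sketch under the same assumed setup as above; note the forward pass takes predicted and true costs (not solutions), and the dataset parameter is assumed to provide the pool of feasible solutions being ranked.

```python
import torch
from torch.utils.data import DataLoader
import pyepo

grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)    # assumes Gurobi
x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, seed=135)
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)

# the dataset provides the pool of feasible solutions to rank
ltr = pyepo.func.listwiseLTR(optmodel, processes=2, dataset=dataset)
predictor = torch.nn.Linear(5, 40)
for xb, cb, wb, zb in DataLoader(dataset, batch_size=32):
    loss = ltr(predictor(xb), cb)     # forward takes predicted and true costs
    loss.backward()
    break
```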

class pyepo.func.pairwiseLTR(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for pairwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly.

For the pairwise LTR, the cost vector needs to be predicted from the contextual data, and the loss learns the relative ordering of pairs of solutions.

Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://proceedings.mlr.press/v162/mandi22a.html>

forward(pred_cost, true_cost)

Forward pass
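
Same assumed pattern as the listwiseLTR sketch, with the pairwise module swapped in.

```python
import torch
from torch.utils.data import DataLoader
import pyepo

grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)    # assumes Gurobi
x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, seed=135)
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)

pltr = pyepo.func.pairwiseLTR(optmodel, processes=2, dataset=dataset)
predictor = torch.nn.Linear(5, 40)
for xb, cb, wb, zb in DataLoader(dataset, batch_size=32):
    loss = pltr(predictor(xb), cb)    # ranks pairs of pooled solutions
    loss.backward()
    break
```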

class pyepo.func.pointwiseLTR(optmodel, processes=1, solve_ratio=1, reduction='mean', dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for pointwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly.

For the pointwise LTR, the cost vector needs to be predicted from contextual data, and the loss computes a ranking score for each solution.

Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://proceedings.mlr.press/v162/mandi22a.html>

forward(pred_cost, true_cost)

Forward pass
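
Same assumed pattern, with the pointwise module scoring each pooled solution individually.

```python
import torch
from torch.utils.data import DataLoader
import pyepo

grid = (5, 5)
optmodel = pyepo.model.grb.shortestPathModel(grid)    # assumes Gurobi
x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, seed=135)
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)

ptltr = pyepo.func.pointwiseLTR(optmodel, processes=2, dataset=dataset)
predictor = torch.nn.Linear(5, 40)
for xb, cb, wb, zb in DataLoader(dataset, batch_size=32):
    loss = ptltr(predictor(xb), cb)   # scores each pooled solution
    loss.backward()
    break
```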