pyepo.func.perturbed
Perturbed optimization function
Classes

- perturbedOpt – An autograd module for the differentiable perturbed optimizer.
- perturbedOptFunc – An autograd function for the perturbed optimizer.
- perturbedFenchelYoung – An autograd module for the Fenchel-Young loss using perturbation techniques.
- perturbedFenchelYoungFunc – An autograd function for the Fenchel-Young loss using perturbation techniques.
- implicitMLE – An autograd module for the Implicit Maximum Likelihood Estimator, which yields optimal solutions via Perturb-and-MAP.
- implicitMLEFunc – An autograd function for the Implicit Maximum Likelihood Estimator.
- adaptiveImplicitMLE – An autograd module for the Adaptive Implicit Maximum Likelihood Estimator, which adaptively chooses the hyperparameter λ.
- adaptiveImplicitMLEFunc – An autograd function for the Adaptive Implicit Maximum Likelihood Estimator.
Functions

- _solve_or_cache
- _solve_in_pass – A function to solve the optimization problem in the forward pass.
- _cache_in_pass – A function to use the solution pool in the forward/backward pass.
- _solveWithObj4Par – A global function to solve the optimization problem on parallel processors.
Module Contents
- class pyepo.func.perturbed.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the differentiable perturbed optimizer. Perturbing the predicted costs with random noise yields an optimizer whose expected solution is differentiable in the costs, with non-trivial gradients.
For the perturbed optimizer, the cost vector needs to be predicted from contextual data and is perturbed with Gaussian noise.
Thus, it allows us to design an algorithm based on stochastic gradient descent. A minimal usage sketch is given below.
Reference: <https://papers.nips.cc/paper/2020/hash/6bb56208f672af0dd65451f869fedfd9-Abstract.html>
- n_samples = 10
- sigma = 1.0
- rnd
- ptb
- forward(pred_cost)
Forward pass
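A minimal usage sketch, assuming a Gurobi-based shortest-path model from pyepo.model.grb and an illustrative linear predictor; predictor, feats, and true_cost are hypothetical names for this example, not part of the module:

    import torch
    from torch import nn
    import pyepo

    # hypothetical setup: 5x5 grid shortest path (40 edges); requires Gurobi,
    # but any pyepo optimization model works here
    optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))

    # differentiable perturbed optimizer, signature as documented above
    ptb = pyepo.func.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=1)

    predictor = nn.Linear(5, 40)  # illustrative cost predictor
    optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)

    feats = torch.randn(32, 5)      # a batch of contextual features
    true_cost = torch.rand(32, 40)  # the corresponding true cost vectors

    pred_cost = predictor(feats)  # predicted costs
    w_bar = ptb(pred_cost)        # expected solutions under Gaussian perturbation
    # decision-focused loss: objective value of the expected decisions
    # under the true costs (shortest path is a minimization)
    loss = (w_bar * true_cost).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()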
- class pyepo.func.perturbed.perturbedOptFunc(*args, **kwargs)
Bases:
torch.autograd.Function
An autograd function for the perturbed optimizer
- static forward(ctx, pred_cost, module)
Forward pass for the perturbed optimizer
- Parameters:
pred_cost (torch.tensor) – a batch of predicted values of the cost
module (optModule) – perturbedOpt module
- Returns:
solution expectations with perturbation
- Return type:
torch.tensor
- static backward(ctx, grad_output)
Backward pass for the perturbed optimizer
- class pyepo.func.perturbed.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, reduction='mean', dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the Fenchel-Young loss using perturbation techniques. Using this loss improves the algorithm because the gradients of the loss have a specific, simple expression.
For the perturbed optimizer, the cost vector needs to be predicted from contextual data and is perturbed with Gaussian noise.
The Fenchel-Young loss makes it possible to directly optimize a loss between the features and solutions with less computation. Thus, it allows us to design an algorithm based on stochastic gradient descent. A minimal usage sketch is given below.
Reference: <https://papers.nips.cc/paper/2020/hash/6bb56208f672af0dd65451f869fedfd9-Abstract.html>
- n_samples = 10
- sigma = 1.0
- rnd
- pfy
- forward(pred_cost, true_sol)
Forward pass
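A minimal training sketch with the Fenchel-Young loss, reusing the hypothetical optmodel, predictor, feats, and true_cost from the sketch under perturbedOpt. The true optimal solutions would normally be precomputed (e.g. with pyepo.data.dataset.optDataset); here they are obtained directly from the solver:

    import numpy as np

    # true optimal solutions under the true costs
    sols = []
    for c in true_cost.numpy():
        optmodel.setObj(c)         # set objective to the true cost vector
        sol, _ = optmodel.solve()  # solve for the optimal solution
        sols.append(sol)
    true_sol = torch.as_tensor(np.array(sols), dtype=torch.float32)

    # Fenchel-Young loss module, signature as documented above
    pfy = pyepo.func.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0,
                                           reduction="mean")

    pred_cost = predictor(feats)
    loss = pfy(pred_cost, true_sol)  # scalar loss with reduction="mean"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()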
- class pyepo.func.perturbed.perturbedFenchelYoungFunc(*args, **kwargs)
Bases:
torch.autograd.Function
An autograd function for the Fenchel-Young loss using perturbation techniques.
- static forward(ctx, pred_cost, true_sol, module)
Forward pass for perturbed Fenchel-Young loss
- Parameters:
pred_cost (torch.tensor) – a batch of predicted values of the cost
true_sol (torch.tensor) – a batch of true optimal solutions
module (optModule) – perturbedFenchelYoung module
- Returns:
Fenchel-Young loss with perturbation
- Return type:
torch.tensor
- static backward(ctx, grad_output)
Backward pass for perturbed Fenchel-Young loss
- class pyepo.func.perturbed.implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10, distribution=sumGammaDistribution(kappa=5), two_sides=False, processes=1, solve_ratio=1, dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the Implicit Maximum Likelihood Estimator (I-MLE), which yields an optimal solution of a constrained exponential family distribution via Perturb-and-MAP.
I-MLE works with black-box combinatorial solvers, in which the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.
I-MLE smoothly approximates the gradient of the optimizer. Thus, it allows us to design an algorithm based on stochastic gradient descent. A minimal usage sketch is given below.
Reference: <https://proceedings.neurips.cc/paper_files/paper/2021/hash/7a430339c10c642c4b2251756fd1b484-Abstract.html>
- n_samples = 10
- sigma = 1.0
- lambd = 10
- distribution
- two_sides = False
- imle
- forward(pred_cost)
Forward pass
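A minimal sketch for I-MLE under the same hypothetical setup as the earlier sketches. The forward pass returns predicted solutions rather than expectations, and these can feed a downstream decision loss:

    # I-MLE module, signature as documented above (default sum-of-gamma noise)
    imle = pyepo.func.implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10)

    pred_cost = predictor(feats)
    w_pred = imle(pred_cost)  # predicted solutions via Perturb-and-MAP
    # decision-focused loss: cost of the predicted decisions under the true costs
    loss = (w_pred * true_cost).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()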
- class pyepo.func.perturbed.implicitMLEFunc(*args, **kwargs)
Bases:
torch.autograd.Function
An autograd function for the Implicit Maximum Likelihood Estimator
- static forward(ctx, pred_cost, module)
Forward pass for IMLE
- Parameters:
pred_cost (torch.tensor) – a batch of predicted values of the cost
module (optModule) – implicitMLE module
- Returns:
predicted solutions
- Return type:
torch.tensor
- static backward(ctx, grad_output)
Backward pass for IMLE
- class pyepo.func.perturbed.adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0, distribution=sumGammaDistribution(kappa=5), two_sides=False, processes=1, solve_ratio=1, dataset=None)
Bases:
pyepo.func.abcmodule.optModule
An autograd module for the Adaptive Implicit Maximum Likelihood Estimator (AI-MLE), which adaptively chooses the hyperparameter λ and yields an optimal solution of a constrained exponential family distribution via Perturb-and-MAP.
AI-MLE works with black-box combinatorial solvers, in which the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.
AI-MLE smoothly approximates the gradient of the optimizer. Thus, it allows us to design an algorithm based on stochastic gradient descent. A usage sketch is given below.
Reference: <https://ojs.aaai.org/index.php/AAAI/article/view/26103>
- n_samples = 10
- sigma = 1.0
- distribution
- two_sides = False
- alpha = 0
- grad_norm_avg = 1
- step = 0.001
- aimle
- forward(pred_cost)
Forward pass
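AI-MLE is used the same way as I-MLE; it drops the fixed lambd and instead adapts the interpolation step size during training (same hypothetical setup as above):

    # AI-MLE module, signature as documented above
    aimle = pyepo.func.adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0)

    pred_cost = predictor(feats)
    w_pred = aimle(pred_cost)  # predicted solutions with adaptive lambda
    loss = (w_pred * true_cost).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()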
- class pyepo.func.perturbed.adaptiveImplicitMLEFunc(*args, **kwargs)
Bases:
implicitMLEFunc
An autograd function for the Adaptive Implicit Maximum Likelihood Estimator
- static backward(ctx, grad_output)
Backward pass for AI-MLE
- pyepo.func.perturbed._solve_or_cache(ptb_c, module)
- pyepo.func.perturbed._solve_in_pass(ptb_c, optmodel, processes, pool)
A function to solve the optimization problem in the forward pass
- pyepo.func.perturbed._cache_in_pass(ptb_c, optmodel, solpool)
A function to use the solution pool in the forward/backward pass
- pyepo.func.perturbed._solveWithObj4Par(perturbed_costs, args, model_type)
A global function to solve the optimization problem on parallel processors
- Parameters:
perturbed_costs (np.ndarray) – costs of the objective function with perturbation
args (dict) – optModel args
model_type (ABCMeta) – optModel class type
- Returns:
optimal solution
- Return type:
list