pyepo.func.perturbed

Perturbed optimization function

Classes

perturbedOpt

An autograd module for the differentiable perturbed optimizer, in which random perturbed costs are sampled to optimize.

perturbedOptFunc

An autograd function for perturbed optimizer

perturbedFenchelYoung

An autograd module for Fenchel-Young loss using perturbation techniques.

perturbedFenchelYoungFunc

An autograd function for Fenchel-Young loss using perturbation techniques.

implicitMLE

An autograd module for the Implicit Maximum Likelihood Estimator, which yields an optimal solution in a constrained exponential family distribution via Perturb-and-MAP.

implicitMLEFunc

An autograd function for Implicit Maximum Likelihood Estimator

adaptiveImplicitMLE

An autograd module for the Adaptive Implicit Maximum Likelihood Estimator, which adaptively chooses the hyperparameter λ and yields an optimal solution in a constrained exponential family distribution via Perturb-and-MAP.

adaptiveImplicitMLEFunc

An autograd function for Adaptive Implicit Maximum Likelihood Estimator

Functions

_solve_or_cache(ptb_c, module)

Solve or use cached solutions for perturbed costs (3D: n_samples × batch × vars).

_solve_in_pass(ptb_c, optmodel, processes, pool[, ...])

Solve optimization for perturbed 3D costs and update solution pool.

_cache_in_pass(ptb_c, optmodel, solpool)

Use solution pool for perturbed 3D costs (n_samples × batch × vars).

Module Contents

class pyepo.func.perturbed.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the differentiable perturbed optimizer, in which random perturbed costs are sampled to optimize.

For the perturbed optimizer, the cost vector needs to be predicted from contextual data and is perturbed with Gaussian noise.

The perturbed optimizer is differentiable in its inputs with a non-zero Jacobian. Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://papers.nips.cc/paper/2020/hash/6bb56208f672af0dd65451f869fedfd9-Abstract.html>

n_samples = 10
sigma = 1.0
rnd
forward(pred_cost)

Forward pass
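
As a usage illustration (not part of the module's API), here is a minimal training sketch. It assumes the Gurobi-backed shortestPathModel from pyepo.model.grb, a toy linear predictor, and the num_cost attribute of pyepo optimization models; all of these are stand-ins for whatever model and predictor you actually use.

    import torch
    from pyepo.model.grb import shortestPathModel  # assumes gurobipy is installed
    from pyepo.func import perturbedOpt

    optmodel = shortestPathModel(grid=(3, 3))   # toy minimization problem
    ptb = perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=1)

    d = optmodel.num_cost                       # number of cost coefficients
    predictor = torch.nn.Linear(5, d)           # toy cost predictor

    feats = torch.randn(8, 5)                   # a batch of contextual features
    true_cost = torch.rand(8, d)                # true costs, known at training time

    sol = ptb(predictor(feats))                 # expected perturbed solutions, (8, d)
    loss = (sol * true_cost).sum(dim=1).mean()  # expected true cost of the decisions
    loss.backward()                             # gradients reach the predictor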

class pyepo.func.perturbed.perturbedOptFunc(*args, **kwargs)

Bases: torch.autograd.Function

An autograd function for perturbed optimizer

static forward(ctx, pred_cost, module)

Forward pass for the perturbed optimizer

Parameters:
  • pred_cost (torch.tensor) – a batch of predicted values of the cost

  • module (optModule) – perturbedOpt module

Returns:

solution expectations with perturbation

Return type:

torch.tensor

static backward(ctx, grad_output)

Backward pass for the perturbed optimizer
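
Conceptually, the forward pass is a Monte Carlo estimate of the expected solution under Gaussian perturbation (Berthet et al., 2020). The sketch below restates that estimate; solve_batch is a hypothetical stand-in for the module's pooled solver, not a pyepo function.

    import torch

    def perturbed_forward_sketch(pred_cost, solve_batch, n_samples=10, sigma=1.0):
        """Monte Carlo estimate of E_Z[y*(c + sigma * Z)] with Z ~ N(0, I)."""
        noise = torch.randn((n_samples,) + tuple(pred_cost.shape))
        ptb_c = pred_cost.unsqueeze(0) + sigma * noise       # (n_samples, batch, vars)
        sols = torch.stack([solve_batch(c) for c in ptb_c])  # one solve per sample
        return sols.mean(dim=0)                              # expected solution per instance

On the backward pass the cached noise is reused: in the paper's estimator, the Jacobian is approximated by correlating the sampled solutions with the noise directions, so no differentiation through the solver itself is needed.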

class pyepo.func.perturbed.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, reduction='mean', dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for Fenchel-Young loss using perturbation techniques. Training with this loss is efficient because its gradients have a simple, explicit expression.

For the perturbed optimizer, the cost vector needs to be predicted from contextual data and is perturbed with Gaussian noise.

The Fenchel-Young loss allows the predicted costs to be fit directly to the true solutions with little extra computation. Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://papers.nips.cc/paper/2020/hash/6bb56208f672af0dd65451f869fedfd9-Abstract.html>

n_samples = 10
sigma = 1.0
rnd
forward(pred_cost, true_sol)

Forward pass
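
A minimal training-step sketch, reusing optmodel and predictor from the perturbedOpt example above; the dataloader is assumed to yield batches in the (features, costs, solutions, objective values) order used by pyepo's optDataset.

    pfy = perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0, reduction='mean')
    optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)
    for feats, costs, sols, objs in dataloader:
        loss = pfy(predictor(feats), sols)  # predicted costs vs. true solutions
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()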

class pyepo.func.perturbed.perturbedFenchelYoungFunc(*args, **kwargs)

Bases: torch.autograd.Function

An autograd function for Fenchel-Young loss using perturbation techniques.

static forward(ctx, pred_cost, true_sol, module)

Forward pass for perturbed Fenchel-Young loss

Parameters:
  • pred_cost (torch.tensor) – a batch of predicted values of the cost

  • true_sol (torch.tensor) – a batch of true optimal solutions

  • module (optModule) – perturbedFenchelYoung module

Returns:

Fenchel-Young loss with perturbation

Return type:

torch.tensor

static backward(ctx, grad_output)

Backward pass for perturbed Fenchel-Young loss
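
For intuition: per Berthet et al. (2020), the gradient of the Fenchel-Young loss with respect to the predicted costs reduces to a signed difference between the true solution and the expected perturbed solution, so no second solve or differentiation through the solver is needed. A sketch of that identity, not pyepo's exact implementation:

    def fy_backward_sketch(grad_output, expected_sol, true_sol, minimize=True):
        # For a minimization model, d(loss)/d(pred_cost) = true_sol - expected_sol;
        # the sign flips for maximization. grad_output is a scalar under
        # reduction='mean'.
        diff = true_sol - expected_sol if minimize else expected_sol - true_sol
        return grad_output * diff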

class pyepo.func.perturbed.implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10, distribution=None, two_sides=False, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the Implicit Maximum Likelihood Estimator, which yields an optimal solution in a constrained exponential family distribution via Perturb-and-MAP.

I-MLE treats the optimization as a black-box combinatorial solver, in which the constraints are known and fixed but the cost vector needs to be predicted from contextual data.

I-MLE smoothly approximates the gradient of the optimizer. Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://proceedings.neurips.cc/paper_files/paper/2021/hash/7a430339c10c642c4b2251756fd1b484-Abstract.html>

n_samples = 10
sigma = 1.0
lambd = 10
distribution = None
two_sides = False
forward(pred_cost)

Forward pass
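
A minimal usage sketch, reusing optmodel, predictor, feats, and true_cost from the perturbedOpt example above; the arguments mirror the class signature listed here.

    from pyepo.func import implicitMLE

    imle = implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10, two_sides=False)
    sol = imle(predictor(feats))                # discrete solutions, one per instance
    loss = (sol * true_cost).sum(dim=1).mean()  # downstream loss on the decisions
    loss.backward()                             # gradient via perturb-and-MAP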

class pyepo.func.perturbed.implicitMLEFunc(*args, **kwargs)

Bases: torch.autograd.Function

An autograd function for Implicit Maximum Likelihood Estimator

static forward(ctx, pred_cost, module)

Forward pass for I-MLE

Parameters:
  • pred_cost (torch.tensor) – a batch of predicted values of the cost

  • module (optModule) – implicitMLE module

Returns:

predicted solutions

Return type:

torch.tensor

static backward(ctx, grad_output)

Backward pass for I-MLE
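
Conceptually, the backward pass follows the finite-difference estimator of Niepert et al. (2021): nudge the costs along the incoming gradient, re-solve, and difference the two solutions, with λ controlling the interpolation step. The sketch below states only the idea; solve_batch is hypothetical and the sign convention depends on the model sense.

    def imle_backward_sketch(grad_output, pred_cost, sol, solve_batch, lambd=10.0):
        sol_nudged = solve_batch(pred_cost + lambd * grad_output)  # perturbed MAP solve
        return (sol - sol_nudged) / lambd                          # finite-difference gradient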

class pyepo.func.perturbed.adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0, distribution=None, two_sides=False, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the Adaptive Implicit Maximum Likelihood Estimator, which adaptively chooses the hyperparameter λ and yields an optimal solution in a constrained exponential family distribution via Perturb-and-MAP.

AI-MLE treats the optimization as a black-box combinatorial solver, in which the constraints are known and fixed but the cost vector needs to be predicted from contextual data.

AI-MLE smoothly approximates the gradient of the optimizer. Thus, it allows us to design an algorithm based on stochastic gradient descent.

Reference: <https://ojs.aaai.org/index.php/AAAI/article/view/26103>

n_samples = 10
sigma = 1.0
distribution = None
two_sides = False
alpha = 0
grad_norm_avg = 1
step = 0.001
forward(pred_cost)

Forward pass
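
Usage mirrors implicitMLE except that there is no lambd argument: the attributes above (alpha, grad_norm_avg, step) drive an adaptive choice of the interpolation step from a running estimate of the incoming gradient norm. A sketch, reusing names from the perturbedOpt example:

    from pyepo.func import adaptiveImplicitMLE

    aimle = adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0, two_sides=False)
    sol = aimle(predictor(feats))
    loss = (sol * true_cost).sum(dim=1).mean()
    loss.backward()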

class pyepo.func.perturbed.adaptiveImplicitMLEFunc(*args, **kwargs)

Bases: implicitMLEFunc

An autograd function for Adaptive Implicit Maximum Likelihood Estimator

static backward(ctx, grad_output)

Backward pass for AI-MLE

pyepo.func.perturbed._solve_or_cache(ptb_c, module)

Solve or use cached solutions for perturbed costs (3D: n_samples × batch × vars). Delegates to the shared 2D functions in utils after flattening.
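
The flattening can be pictured as follows (a sketch; solve_2d stands in for the shared utils helpers, which additionally manage the process pool and the solution cache):

    import torch

    def solve_3d_via_2d(ptb_c, solve_2d):
        n_samples, batch, nvars = ptb_c.shape
        flat = ptb_c.reshape(n_samples * batch, nvars)  # 3D -> 2D for the shared solver
        sols = solve_2d(flat)                           # one solution per perturbed cost
        return sols.reshape(n_samples, batch, -1)       # restore the 3D layout

Note that _solve_in_pass below documents its output as (batch, n_samples, vars), so the sample and batch dimensions are transposed at some point in the pipeline.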

pyepo.func.perturbed._solve_in_pass(ptb_c, optmodel, processes, pool, solpool=None, solset=None)

Solve optimization for perturbed 3D costs and update solution pool.

Parameters:
  • ptb_c (torch.tensor) – perturbed costs, shape (n_samples, batch, vars)

  • optmodel (optModel) – optimization model

  • processes (int) – number of processors

  • pool – process pool

  • solpool (torch.tensor) – solution pool

  • solset (set) – hash set for deduplication

Returns:

(solutions shape (batch, n_samples, vars), updated solpool)

Return type:

tuple

pyepo.func.perturbed._cache_in_pass(ptb_c, optmodel, solpool)

Use solution pool for perturbed 3D costs (n_samples × batch × vars). Unlike the 2D version in utils, this handles the extra sample dimension.
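
The lookup amounts to scoring every pooled solution against every perturbed cost vector and keeping the best one; a sketch under the layout documented above (pyepo's actual implementation may differ, e.g. in how maximization models are handled):

    import torch

    def cache_lookup_3d(ptb_c, solpool, minimize=True):
        n_samples, batch, nvars = ptb_c.shape
        obj = ptb_c.reshape(-1, nvars) @ solpool.T  # objective value of each pooled solution
        idx = obj.argmin(dim=1) if minimize else obj.argmax(dim=1)
        return solpool[idx].reshape(n_samples, batch, nvars)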