pyepo.func.blackbox

Differentiable black-box optimization functions

Classes

blackboxOpt

An autograd module for the differentiable black-box optimizer, which yields an optimal solution and derives a gradient.

blackboxOptFunc

An autograd function for the differentiable black-box optimizer

negativeIdentity

An autograd module for the differentiable optimizer, which yields an optimal solution and uses negative identity as the gradient on the backward pass.

negativeIdentityFunc

An autograd function for the negative identity optimizer

Module Contents

class pyepo.func.blackbox.blackboxOpt(optmodel, lambd=10, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the differentiable black-box optimizer, which yields an optimal solution and derives a gradient.

For the differentiable black-box approach, the objective function is linear and the constraints are known and fixed, but the cost vector must be predicted from contextual data.

The black-box approach approximates the gradient of the optimizer by interpolating the loss function, which allows the prediction model to be trained with stochastic gradient descent.

Reference: <https://arxiv.org/abs/1912.02175>
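
Concretely, let w*(c) denote the optimal solution under a cost vector c. Following the reference above, the backward pass replaces the piecewise-constant (and hence uninformative) true gradient with a finite-difference interpolation; the notation here is a sketch, not the module's own:

    \nabla_{\hat{c}} L \approx -\frac{1}{\lambda}\left[ w^\star(\hat{c} + \lambda \nabla_w L) - w^\star(\hat{c}) \right]

where λ is the interpolation hyperparameter lambd below and ∇_w L is the gradient of the training loss with respect to the solution.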

lambd = 10
forward(pred_cost)

Forward pass
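
A minimal usage sketch, assuming a Gurobi-backed shortest-path model from pyepo.model.grb; the grid size, predictor, batch, and decision loss below are illustrative assumptions, not part of this module:

    import torch
    import pyepo

    # optimization model with known, fixed constraints
    # (illustrative: a 5x5 shortest-path model solved with Gurobi)
    optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))

    # differentiable black-box layer; lambd is the interpolation hyperparameter
    dbb = pyepo.func.blackboxOpt(optmodel, lambd=10, processes=1)

    num_feat, num_edge = 5, 40                       # 40 edges on a 5x5 grid
    predmodel = torch.nn.Linear(num_feat, num_edge)  # hypothetical cost predictor
    optim = torch.optim.Adam(predmodel.parameters(), lr=1e-2)

    x = torch.randn(8, num_feat)   # a batch of contextual features
    c = torch.rand(8, num_edge)    # the corresponding true costs

    cp = predmodel(x)              # predicted cost vectors
    wp = dbb(cp)                   # optimal solutions under the predicted costs
    loss = (wp * c).sum()          # decision loss: true cost of chosen solutions
    optim.zero_grad()
    loss.backward()                # gradient flows through the DBB interpolation
    optim.step()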

class pyepo.func.blackbox.blackboxOptFunc(*args, **kwargs)

Bases: torch.autograd.Function

An autograd function for the differentiable black-box optimizer

static forward(ctx, pred_cost, module)

Forward pass for DBB

Parameters:
  • pred_cost (torch.tensor) – a batch of predicted values of the cost

  • module (optModule) – blackboxOpt module

Returns:

predicted solutions

Return type:

torch.tensor

static backward(ctx, grad_output)

Backward pass for DBB
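
The backward pass does not differentiate through the solver; it re-solves once under perturbed costs. A minimal sketch of the idea, simplified from the actual implementation (the function name and arguments are illustrative):

    import torch

    def dbb_backward_sketch(pred_cost, sol, grad_output, lambd, solve):
        """Interpolated gradient of the black-box solver.

        pred_cost   -- predicted costs saved on the forward pass, shape (batch, d)
        sol         -- solutions w*(pred_cost) saved on the forward pass
        grad_output -- dL/dw from the downstream loss
        lambd       -- interpolation hyperparameter
        solve       -- callable mapping one cost vector to an optimal solution
        """
        # perturb the costs in the direction of the incoming gradient
        cq = pred_cost + lambd * grad_output
        # the one extra solver call that DBB needs on the backward pass
        sol_q = torch.stack([solve(cq_i) for cq_i in cq])
        # finite-difference interpolation: -(w*(c') - w*(c)) / lambda
        return -(sol_q - sol) / lambd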

class pyepo.func.blackbox.negativeIdentity(optmodel, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the differentiable optimizer, which yields an optimal solution and uses negative identity as a gradient on the backward pass.

For negative identity backpropagation, the objective function is linear and the constraints are known and fixed, but the cost vector must be predicted from contextual data.

If the interpolation hyperparameter λ aligns with an appropriate step size, the identity update is equivalent to DBB. However, the identity update requires neither an additional solver call on the backward pass nor the tuning of the extra hyperparameter λ.

Reference: <https://arxiv.org/abs/2205.15213>

forward(pred_cost)

Forward pass
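
Usage mirrors blackboxOpt without the λ hyperparameter; a minimal sketch, reusing the illustrative shortest-path model from the blackboxOpt example above:

    import pyepo

    # same illustrative optimization model as in the blackboxOpt sketch
    optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))

    # negative-identity layer: no lambd to tune
    nid = pyepo.func.negativeIdentity(optmodel, processes=1)

    # drop-in replacement for dbb in the training loop above, e.g. wp = nid(cp)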

class pyepo.func.blackbox.negativeIdentityFunc(*args, **kwargs)

Bases: torch.autograd.Function

An autograd function for the negative identity optimizer

static forward(ctx, pred_cost, module)

Forward pass for NID

Parameters:
  • pred_cost (torch.tensor) – a batch of predicted values of the cost

  • module (optModule) – negativeIdentity module

Returns:

predicted solutions

Return type:

torch.tensor

static backward(ctx, grad_output)

Backward pass for NID
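
The backward pass simply negates the incoming gradient. A minimal sketch of the rule as a torch.autograd.Function; the class and argument names are illustrative, not the module's actual code:

    import torch

    class negIdentitySketch(torch.autograd.Function):
        """Illustrative rule: solve on forward, negate the gradient on backward."""

        @staticmethod
        def forward(ctx, pred_cost, solve):
            # solve each instance under its predicted costs; no autograd taping
            with torch.no_grad():
                sol = torch.stack([solve(c) for c in pred_cost])
            return sol

        @staticmethod
        def backward(ctx, grad_output):
            # negative identity: no second solver call, no hyperparameter
            return -grad_output, None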