pyepo.func.blackbox

Differentiable black-box optimization functions

Module Contents

Classes

blackboxOpt

An autograd module for the differentiable black-box optimizer, which yields an optimal solution and derives a gradient.

blackboxOptFunc

An autograd function for the differentiable black-box optimizer

negativeIdentity

An autograd module for the differentiable optimizer, which yields an optimal solution and uses the negative identity as the gradient on the backward pass.

negativeIdentityFunc

An autograd function for the differentiable optimizer with negative identity backpropagation

class pyepo.func.blackbox.blackboxOpt(optmodel, lambd=10, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the differentiable black-box optimizer, which yields an optimal solution and derives a gradient.

For the differentiable black-box (DBB) approach, the objective function is linear and the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.

The black-box approach approximates the gradient of the optimizer by interpolating the loss function, which makes it possible to train the predictor with stochastic gradient descent.

Reference: <https://arxiv.org/abs/1912.02175>
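
The interpolation from the referenced paper can be summarized as follows; this is a sketch for a minimization solver (up to sign conventions), where w(ĉ) denotes the optimal solution under the predicted cost ĉ:

    % DBB gradient approximation (sketch, following the reference above):
    % on the backward pass, perturb the cost with the incoming gradient,
    % re-solve, and take a finite difference.
    \hat{c}' = \hat{c} + \lambda \frac{\partial L}{\partial w}, \qquad
    \frac{\partial L}{\partial \hat{c}} \approx
      -\frac{1}{\lambda}\left[ w(\hat{c}') - w(\hat{c}) \right]

Intuitively, a larger λ gives a coarser but more informative interpolation, while a smaller λ is more faithful to the original mapping but more likely to yield a zero gradient.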

forward(pred_cost)

Forward pass
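
A minimal usage sketch, not taken from the PyEPO docs: it assumes a Gurobi-backed shortest-path model on a 5×5 grid (40 arcs), and the tensors x (features), c (true costs), and the surrogate loss are illustrative choices only.

    import torch
    from torch import nn
    import pyepo
    from pyepo.func.blackbox import blackboxOpt

    # optimization model: 5x5 shortest-path grid with 40 arcs (requires Gurobi)
    optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))

    # wrap the solver as a differentiable layer
    dbb = blackboxOpt(optmodel, lambd=20, processes=1)

    pred = nn.Linear(5, 40)            # toy predictor: 5 features -> 40 costs
    opt = torch.optim.Adam(pred.parameters(), lr=1e-2)

    x = torch.randn(32, 5)             # contextual features (illustrative)
    c = torch.rand(32, 40)             # true costs (illustrative)

    cp = pred(x)                       # predicted cost vectors
    wp = dbb(cp)                       # solutions under the predicted costs
    loss = (wp * c).sum(dim=1).mean()  # objective value under the true costs
    opt.zero_grad()
    loss.backward()                    # gradient flows through the solver
    opt.step()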

class pyepo.func.blackbox.blackboxOptFunc(*args, **kwargs)

Bases: torch.autograd.Function

An autograd function for the differentiable black-box optimizer

static forward(ctx, pred_cost, module)

Forward pass for DBB

Parameters:
  • pred_cost (torch.Tensor) – a batch of predicted cost vectors

  • module (optModule) – blackboxOpt module

Returns:

predicted solutions

Return type:

torch.Tensor

static backward(ctx, grad_output)

Backward pass for DBB
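
For intuition, a minimal sketch of the DBB logic as a standalone torch.autograd.Function. This illustrates the technique from the reference above; it is not PyEPO's internal implementation, which additionally handles batching, multiprocessing, and partial solving (solve_ratio).

    import torch

    class DBBSketch(torch.autograd.Function):
        """Illustrative DBB function; `solver` maps a batch of cost
        vectors to optimal solutions (assumed to be minimizers)."""

        @staticmethod
        def forward(ctx, pred_cost, solver, lambd):
            sol = solver(pred_cost)          # solve at the predicted cost
            ctx.save_for_backward(pred_cost, sol)
            ctx.solver, ctx.lambd = solver, lambd
            return sol

        @staticmethod
        def backward(ctx, grad_output):
            pred_cost, sol = ctx.saved_tensors
            # perturb the cost with the incoming gradient and re-solve
            perturbed = pred_cost + ctx.lambd * grad_output
            sol_perturbed = ctx.solver(perturbed)
            # finite-difference approximation of dL/dcost
            grad_cost = -(sol_perturbed - sol) / ctx.lambd
            return grad_cost, None, None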

class pyepo.func.blackbox.negativeIdentity(optmodel, processes=1, solve_ratio=1, dataset=None)

Bases: pyepo.func.abcmodule.optModule

An autograd module for the differentiable optimizer, which yields an optimal solution and uses the negative identity as the gradient on the backward pass.

For negative identity backpropagation, the objective function is linear and the constraints are known and fixed, but the cost vector needs to be predicted from contextual data.

If the interpolation hyperparameter λ aligns with an appropriate step size, the identity update is equivalent to DBB. However, the identity update requires neither an additional call to the solver during the backward pass nor tuning of the extra hyperparameter λ.

Reference: <https://arxiv.org/abs/2205.15213>
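
In other words (a sketch following the reference above), the Jacobian of the solution with respect to the cost is replaced by a negative identity matrix, so the backward pass reduces to a single negation:

    % NID gradient approximation (sketch):
    \frac{\partial w(\hat{c})}{\partial \hat{c}} \approx -I
    \quad\Longrightarrow\quad
    \frac{\partial L}{\partial \hat{c}} = -\frac{\partial L}{\partial w}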

forward(pred_cost)

Forward pass
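
Usage mirrors blackboxOpt without the λ hyperparameter; a minimal sketch reusing the illustrative optmodel and cp from the DBB example above:

    from pyepo.func.blackbox import negativeIdentity

    # swap the DBB layer for the negative-identity layer
    nid = negativeIdentity(optmodel, processes=1)
    wp = nid(cp)  # forward pass solves as usual; backward negates the gradient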

class pyepo.func.blackbox.negativeIdentityFunc(*args, **kwargs)

Bases: torch.autograd.Function

An autograd function for the differentiable optimizer with negative identity backpropagation

static forward(ctx, pred_cost, module)

Forward pass for NID

Parameters:
  • pred_cost (torch.Tensor) – a batch of predicted cost vectors

  • module (optModule) – negativeIdentity module

Returns:

predicted solutions

Return type:

torch.Tensor

static backward(ctx, grad_output)

Backward pass for NID
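
As with DBB, a minimal illustrative sketch of the NID logic as a standalone torch.autograd.Function, not PyEPO's internal implementation (which additionally handles batching, multiprocessing, and partial solving):

    import torch

    class NIDSketch(torch.autograd.Function):
        """Illustrative NID function; `solver` maps a batch of cost
        vectors to optimal solutions (assumed to be minimizers)."""

        @staticmethod
        def forward(ctx, pred_cost, solver):
            # solve at the predicted cost; nothing needs to be saved
            return solver(pred_cost)

        @staticmethod
        def backward(ctx, grad_output):
            # negative identity: no extra solver call, no lambda to tune
            return -grad_output, None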