pyepo.func.perturbed
====================

.. py:module:: pyepo.func.perturbed

.. autoapi-nested-parse::

   Perturbed optimization function


Classes
-------

.. autoapisummary::

   pyepo.func.perturbed.perturbedOpt
   pyepo.func.perturbed.perturbedOptFunc
   pyepo.func.perturbed.perturbedFenchelYoung
   pyepo.func.perturbed.perturbedFenchelYoungFunc
   pyepo.func.perturbed.implicitMLE
   pyepo.func.perturbed.implicitMLEFunc
   pyepo.func.perturbed.adaptiveImplicitMLE
   pyepo.func.perturbed.adaptiveImplicitMLEFunc


Functions
---------

.. autoapisummary::

   pyepo.func.perturbed._solve_or_cache
   pyepo.func.perturbed._solve_in_pass
   pyepo.func.perturbed._cache_in_pass
   pyepo.func.perturbed._solveWithObj4Par


Module Contents
---------------

.. py:class:: perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, dataset=None)

   Bases: :py:obj:`pyepo.func.abcmodule.optModule`

   An autograd module for the differentiable perturbed optimizer, in which
   randomly perturbed costs are sampled and solved.

   For the perturbed optimizer, the cost vector is predicted from contextual
   data and perturbed with Gaussian noise. The expected solution under
   perturbation is differentiable in its inputs, which allows us to design an
   algorithm based on stochastic gradient descent.

   Reference: Berthet et al., "Learning with Differentiable Perturbed
   Optimizers" (NeurIPS 2020), <https://arxiv.org/abs/2002.08676>

   .. py:attribute:: n_samples
      :value: 10

   .. py:attribute:: sigma
      :value: 1.0

   .. py:attribute:: rnd

   .. py:attribute:: ptb

   .. py:method:: forward(pred_cost)

      Forward pass


.. py:class:: perturbedOptFunc(*args, **kwargs)

   Bases: :py:obj:`torch.autograd.Function`

   An autograd function for the perturbed optimizer

   .. py:method:: forward(ctx, pred_cost, module)
      :staticmethod:

      Forward pass for the perturbed optimizer

      :param pred_cost: a batch of predicted cost values
      :type pred_cost: torch.tensor
      :param module: perturbedOpt module
      :type module: optModule

      :returns: solution expectations with perturbation
      :rtype: torch.tensor

   .. py:method:: backward(ctx, grad_output)
      :staticmethod:

      Backward pass for the perturbed optimizer


.. py:class:: perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0, processes=1, seed=135, solve_ratio=1, reduction='mean', dataset=None)

   Bases: :py:obj:`pyepo.func.abcmodule.optModule`

   An autograd module for the Fenchel-Young loss using perturbation
   techniques. Using this loss improves the algorithm because its gradients
   admit a specific closed-form expression.

   For the perturbed optimizer, the cost vector is predicted from contextual
   data and perturbed with Gaussian noise. The Fenchel-Young loss directly
   optimizes a loss between features and solutions with less computation,
   which allows us to design an algorithm based on stochastic gradient
   descent.

   Reference: Berthet et al., "Learning with Differentiable Perturbed
   Optimizers" (NeurIPS 2020), <https://arxiv.org/abs/2002.08676>

   .. py:attribute:: n_samples
      :value: 10

   .. py:attribute:: sigma
      :value: 1.0

   .. py:attribute:: rnd

   .. py:attribute:: pfy

   .. py:method:: forward(pred_cost, true_sol)

      Forward pass


.. py:class:: perturbedFenchelYoungFunc(*args, **kwargs)

   Bases: :py:obj:`torch.autograd.Function`

   An autograd function for the Fenchel-Young loss using perturbation
   techniques.

   .. py:method:: forward(ctx, pred_cost, true_sol, module)
      :staticmethod:

      Forward pass for the perturbed Fenchel-Young loss

      :param pred_cost: a batch of predicted cost values
      :type pred_cost: torch.tensor
      :param true_sol: a batch of true optimal solutions
      :type true_sol: torch.tensor
      :param module: perturbedFenchelYoung module
      :type module: optModule

      :returns: solution expectations with perturbation
      :rtype: torch.tensor

   .. py:method:: backward(ctx, grad_output)
      :staticmethod:

      Backward pass for the perturbed Fenchel-Young loss
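
The following is a minimal usage sketch for ``perturbedOpt`` and
``perturbedFenchelYoung``. It assumes a Gurobi-backed shortest-path model from
``pyepo.model.grb`` and PyEPO's synthetic shortest-path data; the grid size,
sample counts, and training hyperparameters are illustrative choices, not
recommended settings.

.. code-block:: python

   import torch
   from torch import nn

   import pyepo

   # assumed setup: Gurobi-backed 5x5 shortest-path model with synthetic data
   grid = (5, 5)
   optmodel = pyepo.model.grb.shortestPathModel(grid)
   x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, noise_width=0.5, seed=135)
   dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
   loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
   reg = nn.Linear(5, c.shape[1])  # linear model predicting edge costs from features

   # differentiable perturbed optimizer and perturbed Fenchel-Young loss
   ptb = pyepo.func.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=2)
   pfy = pyepo.func.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0, processes=2)

   optimizer = torch.optim.Adam(reg.parameters(), lr=1e-2)
   for x_batch, c_batch, w_batch, z_batch in loader:  # feats, costs, sols, objs
       cp = reg(x_batch)                    # predicted costs
       we = ptb(cp)                         # expected solutions under Gaussian perturbation
       loss = ((we - w_batch) ** 2).mean()  # e.g. squared error against true solutions
       # alternatively, the Fenchel-Young loss against true solutions:
       # loss = pfy(cp, w_batch)
       optimizer.zero_grad()
       loss.backward()
       optimizer.step()

With ``perturbedOpt`` the loss on the expected solutions is chosen by the
user; ``perturbedFenchelYoung`` instead returns the loss itself (reduced with
``reduction='mean'`` by default), so no separate comparison step is needed.
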
.. py:class:: implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10, distribution=sumGammaDistribution(kappa=5), two_sides=False, processes=1, solve_ratio=1, dataset=None)

   Bases: :py:obj:`pyepo.func.abcmodule.optModule`

   An autograd module for the Implicit Maximum Likelihood Estimator (I-MLE),
   which yields an optimal solution from a constrained exponential-family
   distribution via perturb-and-MAP.

   I-MLE works with black-box combinatorial solvers, in which the constraints
   are known and fixed but the cost vector must be predicted from contextual
   data. I-MLE approximates the gradient of the optimizer smoothly, which
   allows us to design an algorithm based on stochastic gradient descent.

   Reference: Niepert et al., "Implicit MLE: Backpropagating Through Discrete
   Exponential Family Distributions" (NeurIPS 2021),
   <https://arxiv.org/abs/2106.01798>

   .. py:attribute:: n_samples
      :value: 10

   .. py:attribute:: sigma
      :value: 1.0

   .. py:attribute:: lambd
      :value: 10

   .. py:attribute:: distribution

   .. py:attribute:: two_sides
      :value: False

   .. py:attribute:: imle

   .. py:method:: forward(pred_cost)

      Forward pass


.. py:class:: implicitMLEFunc(*args, **kwargs)

   Bases: :py:obj:`torch.autograd.Function`

   An autograd function for the Implicit Maximum Likelihood Estimator

   .. py:method:: forward(ctx, pred_cost, module)
      :staticmethod:

      Forward pass for I-MLE

      :param pred_cost: a batch of predicted cost values
      :type pred_cost: torch.tensor
      :param module: implicitMLE module
      :type module: optModule

      :returns: predicted solutions
      :rtype: torch.tensor

   .. py:method:: backward(ctx, grad_output)
      :staticmethod:

      Backward pass for I-MLE


.. py:class:: adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0, distribution=sumGammaDistribution(kappa=5), two_sides=False, processes=1, solve_ratio=1, dataset=None)

   Bases: :py:obj:`pyepo.func.abcmodule.optModule`

   An autograd module for the Adaptive Implicit Maximum Likelihood Estimator
   (AI-MLE), which adaptively chooses the hyperparameter λ and yields an
   optimal solution from a constrained exponential-family distribution via
   perturb-and-MAP.

   AI-MLE works with black-box combinatorial solvers, in which the
   constraints are known and fixed but the cost vector must be predicted from
   contextual data. AI-MLE approximates the gradient of the optimizer
   smoothly, which allows us to design an algorithm based on stochastic
   gradient descent.

   Reference: Minervini et al., "Adaptive Perturbation-Based Gradient
   Estimation for Discrete Latent Variable Models" (AAAI 2023),
   <https://arxiv.org/abs/2209.04862>

   .. py:attribute:: n_samples
      :value: 10

   .. py:attribute:: sigma
      :value: 1.0

   .. py:attribute:: distribution

   .. py:attribute:: two_sides
      :value: False

   .. py:attribute:: alpha
      :value: 0

   .. py:attribute:: grad_norm_avg
      :value: 1

   .. py:attribute:: step
      :value: 0.001

   .. py:attribute:: aimle

   .. py:method:: forward(pred_cost)

      Forward pass


.. py:class:: adaptiveImplicitMLEFunc(*args, **kwargs)

   Bases: :py:obj:`implicitMLEFunc`

   An autograd function for the Adaptive Implicit Maximum Likelihood Estimator

   .. py:method:: backward(ctx, grad_output)
      :staticmethod:

      Backward pass for AI-MLE


.. py:function:: _solve_or_cache(ptb_c, module)

.. py:function:: _solve_in_pass(ptb_c, optmodel, processes, pool)

   A function to solve the optimization problem in the forward pass

.. py:function:: _cache_in_pass(ptb_c, optmodel, solpool)

   A function to use the solution pool in the forward/backward pass

.. py:function:: _solveWithObj4Par(perturbed_costs, args, model_type)

   A global function to solve the optimization problem in parallel processes

   :param perturbed_costs: costs of the objective function with perturbation
   :type perturbed_costs: np.ndarray
   :param args: optModel args
   :type args: dict
   :param model_type: optModel class type
   :type model_type: ABCMeta

   :returns: optimal solution
   :rtype: list
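
Analogously, a minimal sketch for ``implicitMLE`` and ``adaptiveImplicitMLE``
on the same assumed Gurobi-backed shortest-path setup as above. The ``lambd``
value, sample counts, and other hyperparameters are illustrative, and the
default ``sumGammaDistribution`` perturbation noise is used.

.. code-block:: python

   import torch
   from torch import nn

   import pyepo

   # assumed setup: Gurobi-backed 5x5 shortest-path model with synthetic data
   grid = (5, 5)
   optmodel = pyepo.model.grb.shortestPathModel(grid)
   x, c = pyepo.data.shortestpath.genData(100, 5, grid, deg=4, noise_width=0.5, seed=135)
   loader = torch.utils.data.DataLoader(
       pyepo.data.dataset.optDataset(optmodel, x, c), batch_size=32, shuffle=True)
   reg = nn.Linear(5, c.shape[1])  # linear cost predictor

   # I-MLE with a fixed interpolation strength lambd, and its adaptive variant
   imle = pyepo.func.implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10, processes=2)
   aimle = pyepo.func.adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0, processes=2)

   optimizer = torch.optim.Adam(reg.parameters(), lr=1e-2)
   for x_batch, c_batch, w_batch, z_batch in loader:
       cp = reg(x_batch)                    # predicted costs
       we = imle(cp)                        # perturb-and-MAP solutions; aimle(cp) for AI-MLE
       loss = ((we - w_batch) ** 2).mean()  # e.g. squared error against true solutions
       optimizer.zero_grad()
       loss.backward()
       optimizer.step()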