optimization

class ema_workbench.em_framework.optimization.ArchiveLogger(directory, decision_varnames, outcome_varnames, base_filename='archives.tar.gz')

Helper class to write the archive to disk at each iteration

Parameters:
  • directory (str)

  • decision_varnames (list of str)

  • outcome_varnames (list of str)

  • base_filename (str, optional)

classmethod load_archives(filename)

load the archives stored with the ArchiveLogger

Parameters:

filename (str) – relative path to file

Return type:

dict with nfe as key and DataFrame as value
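As a minimal sketch (the model, directory, and epsilon values below are illustrative assumptions), an ArchiveLogger is typically passed to an evaluator's optimize call via the convergence keyword, after which the stored snapshots can be read back with load_archives:

    from ema_workbench import MultiprocessingEvaluator
    from ema_workbench.em_framework.optimization import ArchiveLogger

    # `model` is assumed to be an AbstractModel instance with levers and outcomes defined
    convergence_metrics = [
        ArchiveLogger(
            "./archives",  # directory must already exist
            [lever.name for lever in model.levers],
            [outcome.name for outcome in model.outcomes],
            base_filename="my_run.tar.gz",
        ),
    ]

    with MultiprocessingEvaluator(model) as evaluator:
        results, convergence = evaluator.optimize(
            nfe=10000,
            searchover="levers",
            epsilons=[0.05] * len(model.outcomes),  # illustrative epsilon values
            convergence=convergence_metrics,
        )

    # dict mapping nfe to a DataFrame with the archive at that point in the run
    archives = ArchiveLogger.load_archives("./archives/my_run.tar.gz")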

class ema_workbench.em_framework.optimization.Convergence(metrics, max_nfe, convergence_freq=1000, logging_freq=5, log_progress=False)

helper class for tracking convergence of optimization

class ema_workbench.em_framework.optimization.EpsilonIndicatorMetric(reference_set, problem, **kwargs)

EpsilonIndicator metric

Parameters:
  • reference_set (DataFrame)

  • problem (PlatypusProblem instance)

This is a thin wrapper around EpsilonIndicator as provided by platypus to make it easier to use in conjunction with the workbench.

class ema_workbench.em_framework.optimization.EpsilonProgress

epsilon progress convergence metric class
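EpsilonProgress is used in the same way, through the convergence keyword of an evaluator's optimize call; the resulting convergence DataFrame can then be inspected or plotted against the number of function evaluations. A sketch, assuming model is defined elsewhere and that the convergence DataFrame exposes nfe and epsilon_progress columns as in the workbench tutorials:

    import matplotlib.pyplot as plt

    from ema_workbench import SequentialEvaluator
    from ema_workbench.em_framework.optimization import EpsilonProgress

    with SequentialEvaluator(model) as evaluator:
        results, convergence = evaluator.optimize(
            nfe=5000,
            searchover="levers",
            epsilons=[0.05] * len(model.outcomes),
            convergence=[EpsilonProgress()],
        )

    # epsilon progress should level off as the search stabilizes
    fig, ax = plt.subplots()
    ax.plot(convergence.nfe, convergence.epsilon_progress)
    ax.set_xlabel("nfe")
    ax.set_ylabel("epsilon progress")
    plt.show()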

class ema_workbench.em_framework.optimization.GenerationalDistanceMetric(reference_set, problem, **kwargs)

GenerationalDistance metric

Parameters:
  • reference_set (DataFrame)

  • problem (PlatypusProblem instance)

  • d (int, default=1) – the power in the generational distance function

This is a thin wrapper around GenerationalDistance as provided by platypus to make it easier to use in conjunction with the workbench.

see https://link.springer.com/content/pdf/10.1007/978-3-319-15892-1_8.pdf for more information

class ema_workbench.em_framework.optimization.HypervolumeMetric(reference_set, problem, **kwargs)

Hypervolume metric

Parameters:
  • reference_set (DataFrame)

  • problem (PlatypusProblem instance)

This is a thin wrapper around Hypervolume as provided by platypus to make it easier to use in conjunction with the workbench.

class ema_workbench.em_framework.optimization.InvertedGenerationalDistanceMetric(reference_set, problem, **kwargs)

InvertedGenerationalDistance metric

Parameters:
  • reference_set (DataFrame)

  • problem (PlatypusProblem instance)

  • d (int, default=1) – the power in the inverted generational distance function

This is a thin wrapper around InvertedGenerationalDistance as provided by platypus to make it easier to use in conjunction with the workbench.

see https://link.springer.com/content/pdf/10.1007/978-3-319-15892-1_8.pdf for more information

class ema_workbench.em_framework.optimization.OperatorProbabilities(name, index)

OperatorProbability convergence tracker for use with auto-adaptive operator selection.

Parameters:
  • name (str)

  • index (int)

State-of-the-art MOEAs like Borg (and the GenerationalBorg provided by the workbench) use auto-adaptive operator selection. The algorithm has multiple evolutionary operators and tracks, over the run, how well each of them is doing in producing fitter offspring. The probability of the algorithm applying a given operator is proportional to how well that operator has performed in recent generations. This class can be used to track these probabilities over the run of the algorithm, as shown in the sketch below.
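A sketch of tracking operator probabilities during a run with an auto-adaptive algorithm; the (name, index) pairs below assume the operator ordering used by GenerationalBorg and are illustrative, as are the model and epsilon values:

    from ema_workbench import MultiprocessingEvaluator
    from ema_workbench.em_framework.optimization import GenerationalBorg, OperatorProbabilities

    # one tracker per variation operator; index refers to the position of the
    # operator in the algorithm's operator list (assumed ordering)
    convergence_metrics = [
        OperatorProbabilities("SBX", 0),
        OperatorProbabilities("PCX", 1),
        OperatorProbabilities("DE", 2),
        OperatorProbabilities("UNDX", 3),
        OperatorProbabilities("SPX", 4),
        OperatorProbabilities("UM", 5),
    ]

    with MultiprocessingEvaluator(model) as evaluator:
        results, convergence = evaluator.optimize(
            algorithm=GenerationalBorg,
            nfe=10000,
            searchover="levers",
            epsilons=[0.05] * len(model.outcomes),
            convergence=convergence_metrics,
        )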

class ema_workbench.em_framework.optimization.Problem(searchover, parameters, outcome_names, constraints, reference=None)

small extension to the Platypus Problem object; includes information on the names of the decision variables, the names of the outcomes, and the type of search

class ema_workbench.em_framework.optimization.RobustProblem(parameters, outcome_names, scenarios, robustness_functions, constraints)

small extension to the Problem object for robust optimization; adds the scenarios and the robustness functions

class ema_workbench.em_framework.optimization.SpacingMetric(problem)

Spacing metric

Parameters:

problem (PlatypusProblem instance)

This is a thin wrapper around Spacing as provided by platypus to make it easier to use in conjunction with the workbench.
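The metric wrappers above (HypervolumeMetric, GenerationalDistanceMetric, EpsilonIndicatorMetric, InvertedGenerationalDistanceMetric, SpacingMetric) are typically evaluated over the archive snapshots written by an ArchiveLogger. A sketch, assuming model is defined elsewhere, the archive file matches an earlier run, and reference_set is a DataFrame holding a good approximation of the Pareto front (for example, the merged results of several seeds):

    import pandas as pd

    from ema_workbench.em_framework.optimization import (
        ArchiveLogger,
        EpsilonIndicatorMetric,
        GenerationalDistanceMetric,
        HypervolumeMetric,
        InvertedGenerationalDistanceMetric,
        SpacingMetric,
        to_problem,
    )

    problem = to_problem(model, searchover="levers")

    hv = HypervolumeMetric(reference_set, problem)
    gd = GenerationalDistanceMetric(reference_set, problem, d=1)
    ei = EpsilonIndicatorMetric(reference_set, problem)
    ig = InvertedGenerationalDistanceMetric(reference_set, problem, d=1)
    sm = SpacingMetric(problem)

    archives = ArchiveLogger.load_archives("./archives/my_run.tar.gz")

    # score every archive snapshot against the reference set
    scores = []
    for nfe, archive in archives.items():
        scores.append(
            {
                "nfe": int(nfe),
                "hypervolume": hv.calculate(archive),
                "generational_distance": gd.calculate(archive),
                "epsilon_indicator": ei.calculate(archive),
                "inverted_gd": ig.calculate(archive),
                "spacing": sm.calculate(archive),
            }
        )
    scores = pd.DataFrame(scores).sort_values("nfe")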

ema_workbench.em_framework.optimization.epsilon_nondominated(results, epsilons, problem)

Merge the list of results into a single set of non-dominated results using the provided epsilon values

Parameters:
  • results (list of DataFrames)

  • epsilons (list of float) – epsilon values, one per objective

  • problem (PlatypusProblem instance)

Return type:

DataFrame

Notes

this is a platypus-based alternative to pareto.py (https://github.com/matthewjwoodruff/pareto.py)
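A sketch of merging the results of several optimization runs (for example, different random seeds) into a single reference set; model and the list results are assumed to exist, and the epsilon values are illustrative:

    from ema_workbench.em_framework.optimization import epsilon_nondominated, to_problem

    # `results` is assumed to be a list of DataFrames, one per random seed, each with
    # one column per decision variable and per outcome
    problem = to_problem(model, searchover="levers")
    epsilons = [0.05] * len(model.outcomes)

    reference_set = epsilon_nondominated(results, epsilons, problem)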

ema_workbench.em_framework.optimization.rebuild_platypus_population(archive, problem)

rebuild a population of platypus Solution instances

Parameters:
  • archive (DataFrame)

  • problem (PlatypusProblem instance)

Return type:

list of platypus Solutions
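A brief sketch; archive is assumed to be a DataFrame with one column per decision variable and per outcome, for instance a snapshot loaded via ArchiveLogger.load_archives:

    from ema_workbench.em_framework.optimization import rebuild_platypus_population, to_problem

    problem = to_problem(model, searchover="levers")  # `model` defined elsewhere
    solutions = rebuild_platypus_population(archive, problem)

    # each element is a platypus Solution with decision variables and objectives set
    for solution in solutions[:3]:
        print(solution.variables, solution.objectives)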

ema_workbench.em_framework.optimization.to_problem(model, searchover, reference=None, constraints=None)

helper function to create a Problem object

Parameters:
  • model (AbstractModel instance)

  • searchover (str)

  • reference (Policy or Scenario instance, optional) – overwrite the default scenario when searching over the levers, or the default policy when searching over the uncertainties

  • constraints (list, optional)

Return type:

Problem instance
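A brief sketch, assuming model is an AbstractModel instance with uncertainties, levers, and outcomes defined, and some_policy is a Policy instance:

    from ema_workbench.em_framework.optimization import to_problem

    # search over the levers, evaluated against the model's default scenario
    problem = to_problem(model, searchover="levers")

    # search over the uncertainties instead, evaluated against a specific policy
    problem = to_problem(model, searchover="uncertainties", reference=some_policy)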

ema_workbench.em_framework.optimization.to_robust_problem(model, scenarios, robustness_functions, constraints=None)

helper function to create a RobustProblem object

Parameters:
  • model (AbstractModel instance)

  • scenarios (collection)

  • robustness_functions (iterable of ScalarOutcomes)

  • constraints (list, optional)

Return type:

RobustProblem instance
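A sketch following the workbench's robust optimization pattern; the import path of sample_uncertainties, the outcome name 'max_P', and the number of scenarios are assumptions for illustration:

    import numpy as np

    from ema_workbench import ScalarOutcome
    from ema_workbench.em_framework.samplers import sample_uncertainties
    from ema_workbench.em_framework.optimization import to_robust_problem

    # a fixed set of scenarios over which robustness is evaluated
    scenarios = sample_uncertainties(model, 50)  # `model` defined elsewhere

    # robustness functions are ScalarOutcomes that aggregate a model outcome over
    # the scenarios; 'max_P' is a hypothetical outcome name used for illustration
    robustness_functions = [
        ScalarOutcome(
            "mean max_P",
            kind=ScalarOutcome.MINIMIZE,
            variable_name="max_P",
            function=np.mean,
        ),
    ]

    problem = to_robust_problem(model, scenarios, robustness_functions)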