evaluators

collection of evaluators for performing experiments, optimization, and robust optimization

class ema_workbench.em_framework.evaluators.MultiprocessingEvaluator(msis, n_processes=None, maxtasksperchild=None, **kwargs)

evaluator for experiments using a multiprocessing pool

Parameters:
  • msis (collection of models) – the model instances on which to perform the experiments
  • n_processes (int, optional) – the number of worker processes to use; if not given, the number of available cores is used
  • maxtasksperchild (int, optional) – the number of tasks a worker process completes before it is replaced by a fresh one, as in multiprocessing.Pool
evaluate_experiments(scenarios, policies, callback)

used by ema_workbench

finalize()

finalize the evaluator

initialize()

initialize the evaluator
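
Evaluators are designed to be used as context managers: entering the with block calls initialize() and leaving it calls finalize(). A minimal usage sketch, assuming a Model instance named model has been defined elsewhere:

    from ema_workbench import MultiprocessingEvaluator

    # `model` is assumed to be an ema_workbench Model instance defined elsewhere
    with MultiprocessingEvaluator(model, n_processes=4) as evaluator:
        # sample and run 1000 scenarios across the worker pool
        experiments, outcomes = evaluator.perform_experiments(scenarios=1000)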

class ema_workbench.em_framework.evaluators.IpyparallelEvaluator(msis, client, **kwargs)

evaluator for experiments using an ipyparallel pool

evaluate_experiments(scenarios, policies, callback)

used by ema_workbench

finalize()

finalize the evaluator

initialize()

initialize the evaluator
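
A minimal sketch of the ipyparallel backend, assuming an ipyparallel cluster is already running (for example, started with ipcluster start) and a Model instance named model is defined elsewhere:

    import ipyparallel

    from ema_workbench.em_framework.evaluators import IpyparallelEvaluator

    # connect to the running ipyparallel cluster
    client = ipyparallel.Client()

    # `model` is assumed to be an ema_workbench Model instance defined elsewhere
    with IpyparallelEvaluator(model, client) as evaluator:
        experiments, outcomes = evaluator.perform_experiments(scenarios=100)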

ema_workbench.em_framework.evaluators.optimize(models, algorithm=None, nfe=10000, searchover='levers', evaluator=None, reference=None, convergence=None, constraints=None, convergence_freq=1000, logging_freq=5, **kwargs)

optimize the model

Parameters:
  • models (1 or more Model instances) –
  • algorithm (a valid Platypus optimization algorithm) –
  • nfe (int) – the number of function evaluations
  • searchover ({'uncertainties', 'levers'}) – whether to search over the uncertainties or over the levers
  • reference (Scenario or Policy instance, optional) – fixed reference to use for the dimension not being searched over
  • convergence (function or collection of functions, optional) –
  • constraints (list, optional) –
  • convergence_freq (int) – number of function evaluations between convergence checks
  • logging_freq (int) – number of generations between logging of progress
  • kwargs – any additional arguments will be passed on to the algorithm
Returns:

the solutions found by the optimization, with the decision variables and associated outcome values

Return type:

pandas DataFrame

Raises:
  • EMAError if searchover is not one of ‘uncertainties’ or ‘levers’
  • NotImplementedError if len(models) > 1
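
In practice, optimize is usually called through an evaluator. A sketch under the assumption that the algorithm is epsilon-based, so that an epsilons keyword (not part of the signature above) is passed on through **kwargs; model is assumed to be a Model with levers and outcomes defined elsewhere:

    from ema_workbench import MultiprocessingEvaluator
    from ema_workbench.em_framework.optimization import EpsilonProgress

    # track epsilon progress at every convergence check
    convergence_metrics = [EpsilonProgress()]

    with MultiprocessingEvaluator(model) as evaluator:
        # search over the levers; `epsilons` is an assumed keyword passed
        # through **kwargs to an epsilon-based Platypus algorithm
        results, convergence = evaluator.optimize(
            nfe=10000,
            searchover='levers',
            convergence=convergence_metrics,
            convergence_freq=1000,
            epsilons=[0.1] * len(model.outcomes),
        )

When convergence metrics are passed, a DataFrame with the convergence information is returned alongside the results.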
ema_workbench.em_framework.evaluators.perform_experiments(models, scenarios=0, policies=0, evaluator=None, reporting_interval=None, reporting_frequency=10, uncertainty_union=False, lever_union=False, outcome_union=False, uncertainty_sampling='lhs', levers_sampling='lhs', callback=None, return_callback=False)

sample uncertainties and levers, and perform the resulting experiments on each of the models

Parameters:
  • models (one or more AbstractModel instances) –
  • scenarios (int or collection of Scenario instances, optional) – the number of scenarios to sample, or the scenarios to use
  • policies (int or collection of Policy instances, optional) – the number of policies to sample, or the policies to use
  • evaluator (Evaluator instance, optional) –
  • reporting_interval (int, optional) –
  • reporting_frequency (int, optional) –
  • uncertainty_union (boolean, optional) – use the union (True) or the intersection (False) of the uncertainties across models
  • lever_union (boolean, optional) – use the union (True) or the intersection (False) of the levers across models
  • outcome_union (boolean, optional) – use the union (True) or the intersection (False) of the outcomes across models
  • uncertainty_sampling ({LHS, MC, FF, PFF, SOBOL, MORRIS, FAST}, optional) –
  • levers_sampling ({LHS, MC, FF, PFF, SOBOL, MORRIS, FAST}, optional) –
  • callback (Callback instance, optional) –
  • return_callback (boolean, optional) –
Returns:

the experiments as a DataFrame, and a dict with the name of an outcome as key and the associated scores as a numpy array. Experiments and outcomes are aligned on index.

Return type:

tuple
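
A minimal sketch of the module-level function; when no evaluator is given, the experiments run sequentially. model is assumed to be an AbstractModel with uncertainties and levers defined elsewhere:

    from ema_workbench import perform_experiments

    # 1000 LHS-sampled scenarios crossed with 10 LHS-sampled policies
    experiments, outcomes = perform_experiments(model, scenarios=1000, policies=10)

    # experiments is a DataFrame; outcomes maps each outcome name to a numpy
    # array, aligned on index with the experiments
    print(experiments.shape)
    print(list(outcomes.keys()))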

class ema_workbench.em_framework.evaluators.SequentialEvaluator(models, **kwargs)

evaluator for running experiments sequentially in the main process

evaluate_experiments(scenarios, policies, callback)

used by ema_workbench

finalize()

finalize the evaluator

initialize()

initialize the evaluator
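
SequentialEvaluator runs all experiments in the main process, which makes it the simplest backend for debugging a model before switching to a parallel evaluator. A sketch, again assuming a Model instance named model defined elsewhere:

    from ema_workbench import SequentialEvaluator

    # `model` is assumed to be an ema_workbench Model instance defined elsewhere
    with SequentialEvaluator(model) as evaluator:
        experiments, outcomes = evaluator.perform_experiments(scenarios=10)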