evaluators
Collection of evaluators for performing experiments, optimization, and robust optimization.
- class ema_workbench.em_framework.evaluators.Samplers(value, names=None, *values, module=None, qualname=None, type=None, start=1, boundary=None)
Enum for different kinds of samplers
- class ema_workbench.em_framework.evaluators.SequentialEvaluator(msis)
- evaluate_experiments(scenarios, policies, callback, combine='factorial')
Used internally by ema_workbench; normally invoked through perform_experiments rather than called directly.
- finalize()
finalize the evaluator
- initialize()
initialize the evaluator
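Evaluators are typically used as context managers, which calls initialize() on entry and finalize() on exit. A minimal sketch, assuming ema_workbench is installed; the model function, parameter names, and outcome name below are illustrative, not part of this API:

```python
def toy_model(x1=1.0, x2=1.0):
    """Illustrative model: maps two inputs to a dict of outcomes."""
    return {"y": x1**2 + x2}


def run_sequential(n_scenarios=100):
    """Sketch: run n_scenarios LHS experiments with SequentialEvaluator.

    Defined but not executed here; requires ema_workbench.
    """
    from ema_workbench import Model, RealParameter, ScalarOutcome, SequentialEvaluator

    model = Model("toy", function=toy_model)
    model.uncertainties = [
        RealParameter("x1", 0.1, 10),
        RealParameter("x2", 0.1, 10),
    ]
    model.outcomes = [ScalarOutcome("y")]

    # the context manager calls initialize() on entry and finalize() on exit
    with SequentialEvaluator(model) as evaluator:
        experiments, outcomes = evaluator.perform_experiments(scenarios=n_scenarios)
    return experiments, outcomes
```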
- ema_workbench.em_framework.evaluators.optimize(models, algorithm=<class 'platypus.algorithms.EpsNSGAII'>, nfe=10000, searchover='levers', evaluator=None, reference=None, convergence=None, constraints=None, convergence_freq=1000, logging_freq=5, variator=None, **kwargs)
Optimize the model.
- Parameters:
models (1 or more Model instances)
algorithm (a valid Platypus optimization algorithm)
nfe (int)
searchover ({'uncertainties', 'levers'})
evaluator (evaluator instance)
reference (Policy or Scenario instance, optional) – overwrite the default scenario in case of searching over levers, or the default policy in case of searching over uncertainties
convergence (function or collection of functions, optional)
constraints (list, optional)
convergence_freq (int) – nfe between convergence check
logging_freq (int) – number of generations between logging of progress
variator (platypus GAOperator instance, optional) – if None, it falls back on the defaults in platypus-opts which is SBX with PM
kwargs (any additional arguments will be passed on to algorithm)
- Return type:
pandas DataFrame
- Raises:
EMAError – if searchover is not one of 'uncertainties' or 'levers'
NotImplementedError – if len(models) > 1
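Optimization is usually invoked through an evaluator rather than by calling this function directly. A hedged sketch, assuming ema_workbench is installed; the lever names, outcome, and epsilon value are illustrative:

```python
def lever_model(l1=0.0, l2=0.0):
    """Illustrative model: a single outcome to be minimized over the levers."""
    return {"cost": (l1 - 0.5) ** 2 + (l2 - 0.5) ** 2}


def run_optimization(nfe=1000):
    """Sketch: search over levers for minimal cost (requires ema_workbench).

    Defined but not executed here. epsilons is required by the default
    EpsNSGAII algorithm and is passed on via **kwargs.
    """
    from ema_workbench import Model, RealParameter, ScalarOutcome, SequentialEvaluator

    model = Model("toy", function=lever_model)
    model.levers = [RealParameter("l1", 0, 1), RealParameter("l2", 0, 1)]
    model.outcomes = [ScalarOutcome("cost", kind=ScalarOutcome.MINIMIZE)]

    with SequentialEvaluator(model) as evaluator:
        # returns a pandas DataFrame with lever values and outcomes
        results = evaluator.optimize(nfe=nfe, searchover="levers", epsilons=[0.05])
    return results
```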
- ema_workbench.em_framework.evaluators.perform_experiments(models, scenarios=0, policies=0, evaluator=None, reporting_interval=None, reporting_frequency=10, uncertainty_union=False, lever_union=False, outcome_union=False, uncertainty_sampling=Samplers.LHS, lever_sampling=Samplers.LHS, callback=None, return_callback=False, combine='factorial', log_progress=False, **kwargs)
Sample uncertainties and levers, and perform the resulting experiments on each of the models.
- Parameters:
models (one or more AbstractModel instances)
scenarios (int or collection of Scenario instances, optional)
policies (int or collection of Policy instances, optional)
evaluator (evaluator instance, optional)
reporting_interval (int, optional)
reporting_frequency (int, optional)
uncertainty_union (boolean, optional)
lever_union (boolean, optional)
outcome_union (boolean, optional)
uncertainty_sampling ({LHS, MC, FF, PFF, SOBOL, MORRIS, FAST}, optional)
lever_sampling ({LHS, MC, FF, PFF, SOBOL, MORRIS, FAST}, optional)
callback (Callback instance, optional)
return_callback (boolean, optional)
log_progress (bool, optional)
combine ({'factorial', 'zipover'}, optional) – how to combine uncertainties and levers. With 'factorial', both are sampled separately using their respective samplers, and the resulting designs are combined in a full factorial manner. With 'zipover', both are sampled separately and then combined by cycling over the shorter of the two sets of designs until the longer set is exhausted.
kwargs (any additional keyword arguments are passed on to evaluate_experiments of the evaluator)
- Returns:
the experiments as a DataFrame, and a dict with the name of an outcome as key and the associated values as a numpy array. Experiments and outcomes are aligned on index.
- Return type:
tuple
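The difference between the 'factorial' and 'zipover' combination modes can be illustrated with plain Python; the scenario and policy labels below are placeholders, not ema_workbench objects:

```python
from itertools import cycle, product

# placeholder design labels standing in for sampled scenarios and policies
scenarios = ["s1", "s2", "s3"]
policies = ["p1", "p2"]

# 'factorial': every scenario is paired with every policy
factorial = list(product(scenarios, policies))

# 'zipover': cycle over the shorter collection until the longer is exhausted
if len(scenarios) >= len(policies):
    zipover = list(zip(scenarios, cycle(policies)))
else:
    zipover = list(zip(cycle(scenarios), policies))

print(len(factorial))  # 6 experiments (3 scenarios x 2 policies)
print(zipover)         # [('s1', 'p1'), ('s2', 'p2'), ('s3', 'p1')]
```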