evaluators
collection of evaluators for performing experiments, optimization, and robust optimization
- class ema_workbench.em_framework.evaluators.IpyparallelEvaluator(msis, client, **kwargs)
evaluator for using an ipyparallel pool
- evaluate_experiments(scenarios, policies, callback, combine='factorial')
used by ema_workbench
- finalize()
finalize the evaluator
- initialize()
initialize the evaluator
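A minimal sketch of wiring this evaluator to an ipyparallel cluster. The helper function below is illustrative, not part of ema_workbench, and it assumes a cluster has been started separately (e.g. with `ipcluster start -n 4`); it is wrapped in a function so the sketch can be defined without a running cluster.

```python
def run_on_cluster(model, n_scenarios=100):
    """Illustrative helper (not part of ema_workbench): run experiments
    on an ipyparallel cluster started elsewhere."""
    import ipyparallel
    from ema_workbench.em_framework.evaluators import IpyparallelEvaluator

    client = ipyparallel.Client()  # connect to the running cluster
    with IpyparallelEvaluator(model, client) as evaluator:
        # returns (experiments DataFrame, outcomes dict)
        return evaluator.perform_experiments(scenarios=n_scenarios)
```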
- class ema_workbench.em_framework.evaluators.MultiprocessingEvaluator(msis, n_processes=None, maxtasksperchild=None, **kwargs)
evaluator for experiments using a multiprocessing pool
- Parameters
msis (collection of models) –
n_processes (int (optional)) – the number of processes to use. A negative number means the number of logical cores minus its absolute value; for example, on a 12-thread processor, -2 results in using 10 processes.
maxtasksperchild (int (optional)) – the maximum number of tasks a worker process completes before being replaced
note that the maximum number of available processes is multiprocessing.cpu_count(); on Windows, this can never be higher than 61
- evaluate_experiments(scenarios, policies, callback, combine='factorial')
used by ema_workbench
- finalize()
finalize the evaluator
- initialize()
initialize the evaluator
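A sketch of running a toy Python model through a MultiprocessingEvaluator; the model, parameter, and outcome names are illustrative. The run is wrapped in a helper function because, on Windows, multiprocessing requires the call to happen under an `if __name__ == "__main__":` guard.

```python
def toy_model(x1=0.0, x2=0.0):
    """Toy model (illustrative): returns a dict keyed by outcome name."""
    return {"y": x1 ** 2 + x2}

def run_parallel(n_scenarios=100):
    """Illustrative helper; on Windows, call this from under an
    `if __name__ == "__main__":` guard."""
    from ema_workbench import (Model, RealParameter, ScalarOutcome,
                               MultiprocessingEvaluator)

    model = Model("toy", function=toy_model)
    model.uncertainties = [RealParameter("x1", 0, 1),
                           RealParameter("x2", 0, 1)]
    model.outcomes = [ScalarOutcome("y")]

    # n_processes=-2 leaves two logical cores free (see the note above)
    with MultiprocessingEvaluator(model, n_processes=-2) as evaluator:
        experiments, outcomes = evaluator.perform_experiments(
            scenarios=n_scenarios)
    return experiments, outcomes
```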
- class ema_workbench.em_framework.evaluators.Samplers(value)
Enum for different kinds of samplers
- class ema_workbench.em_framework.evaluators.SequentialEvaluator(msis)
evaluator for running experiments sequentially in the main process
- evaluate_experiments(scenarios, policies, callback, combine='factorial')
used by ema_workbench
- finalize()
finalize the evaluator
- initialize()
initialize the evaluator
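A minimal sketch using the SequentialEvaluator, which runs everything in the main process and is therefore convenient for debugging. The toy model and its names are illustrative; the library usage is guarded so the sketch degrades gracefully when ema_workbench is not installed.

```python
def toy_model(x=0.0):
    """Toy model (illustrative)."""
    return {"y": 2 * x}

try:
    from ema_workbench import Model, RealParameter, ScalarOutcome
    from ema_workbench.em_framework.evaluators import SequentialEvaluator

    model = Model("toy", function=toy_model)
    model.uncertainties = [RealParameter("x", 0, 10)]
    model.outcomes = [ScalarOutcome("y")]

    with SequentialEvaluator(model) as evaluator:
        experiments, outcomes = evaluator.perform_experiments(scenarios=10)
    print(len(experiments))  # one row per sampled scenario
except ImportError:
    # ema_workbench not installed; the toy model above still works standalone
    pass
```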
- ema_workbench.em_framework.evaluators.optimize(models, algorithm=<class 'platypus.algorithms.EpsNSGAII'>, nfe=10000, searchover='levers', evaluator=None, reference=None, convergence=None, constraints=None, convergence_freq=1000, logging_freq=5, **kwargs)
optimize the model
- Parameters
models (1 or more Model instances) –
algorithm (a valid Platypus optimization algorithm) –
nfe (int) –
searchover ({'uncertainties', 'levers'}) –
evaluator (evaluator instance) –
reference (Policy or Scenario instance, optional) – override the default scenario when searching over levers, or the default policy when searching over uncertainties
convergence (function or collection of functions, optional) –
constraints (list, optional) –
convergence_freq (int) – number of function evaluations (nfe) between convergence checks
logging_freq (int) – number of generations between logging of progress
kwargs (any additional arguments will be passed on to algorithm) –
- Return type
pandas DataFrame
- Raises
EMAError if searchover is not one of 'uncertainties' or 'levers' –
NotImplementedError if len(models) > 1 –
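A sketch of a many-objective search over levers with optimize(). The model, lever and outcome names, and the epsilon values are illustrative; epsilons is consumed by the default EpsNSGAII algorithm via **kwargs. The search is wrapped in an uncalled helper so the sketch can be defined without platypus installed.

```python
def toy_model(l1=0.0, l2=0.0):
    """Toy model (illustrative) with two objectives to minimize."""
    return {"y1": l1 ** 2, "y2": (l2 - 1.0) ** 2}

def run_search(nfe=1000):
    """Illustrative helper, not part of the library."""
    from ema_workbench import Model, RealParameter, ScalarOutcome
    from ema_workbench.em_framework.evaluators import (SequentialEvaluator,
                                                       optimize)

    model = Model("toy", function=toy_model)
    model.levers = [RealParameter("l1", 0, 2), RealParameter("l2", 0, 2)]
    model.outcomes = [
        ScalarOutcome("y1", kind=ScalarOutcome.MINIMIZE),
        ScalarOutcome("y2", kind=ScalarOutcome.MINIMIZE),
    ]

    with SequentialEvaluator(model) as evaluator:
        # returns a pandas DataFrame of lever values and outcome values
        return optimize(model, nfe=nfe, searchover="levers",
                        evaluator=evaluator, epsilons=[0.05, 0.05])
```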
- ema_workbench.em_framework.evaluators.perform_experiments(models, scenarios=0, policies=0, evaluator=None, reporting_interval=None, reporting_frequency=10, uncertainty_union=False, lever_union=False, outcome_union=False, uncertainty_sampling=Samplers.LHS, lever_sampling=Samplers.LHS, callback=None, return_callback=False, combine='factorial', log_progress=False)
sample uncertainties and levers, and perform the resulting experiments on each of the models
- Parameters
models (one or more AbstractModel instances) –
scenarios (int or collection of Scenario instances, optional) –
policies (int or collection of Policy instances, optional) –
evaluator (Evaluator instance, optional) –
reporting_interval (int, optional) –
reporting_frequency (int, optional) –
uncertainty_union (boolean, optional) –
lever_union (boolean, optional) –
outcome_union (boolean, optional) –
uncertainty_sampling ({LHS, MC, FF, PFF, SOBOL, MORRIS, FAST}, optional) –
lever_sampling ({LHS, MC, FF, PFF, SOBOL, MORRIS, FAST}, optional) –
callback (Callback instance, optional) –
return_callback (boolean, optional) –
log_progress (bool, optional) –
combine ({'factorial', 'zipover'}, optional) – how to combine uncertainties and levers. In case of 'factorial', both are sampled separately using their respective samplers, and the resulting designs are then combined in a full factorial manner. In case of 'zipover', both are sampled separately and then combined by cycling over the shorter of the two sets of designs until the longer set is exhausted.
- Returns
the experiments as a dataframe, and a dict with the name of an outcome as key, and the associated values as numpy array. Experiments and outcomes are aligned on index.
- Return type
tuple
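A sketch of sampling scenarios and policies with perform_experiments() and combining them factorially; the model, uncertainty, lever, and outcome names are illustrative. With no evaluator passed, the library falls back to sequential execution.

```python
def toy_model(x1=0.0, p1=0.0):
    """Toy model (illustrative) with one uncertainty and one lever."""
    return {"y": x1 + p1}

try:
    from ema_workbench import Model, RealParameter, ScalarOutcome
    from ema_workbench.em_framework.evaluators import (perform_experiments,
                                                       Samplers)

    model = Model("toy", function=toy_model)
    model.uncertainties = [RealParameter("x1", 0, 1)]
    model.levers = [RealParameter("p1", 0, 1)]
    model.outcomes = [ScalarOutcome("y")]

    experiments, outcomes = perform_experiments(
        model, scenarios=50, policies=4,
        uncertainty_sampling=Samplers.LHS, combine="factorial")
    # factorial combination: 50 scenarios x 4 policies = 200 experiments,
    # aligned on index with outcomes["y"]
    print(len(experiments))
except ImportError:
    # ema_workbench not installed; the toy model above still works standalone
    pass
```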