.. _paper_results_reproduction:

Reproducing Results from Paper
==============================

This page contains detailed instructions to reproduce the results from `Brauner et al., Inferring the effectiveness of government interventions against COVID-19 (2020)`_.

We assume that you have already cloned the repository and installed its dependencies.

Main Results
------------

The main results can be produced as follows.

.. code-block:: python

    from epimodel.preprocessing.data_preprocessor import preprocess_data
    from epimodel.pymc3_models.models import ComplexDifferentEffectsModel
    from epimodel.pymc3_models.epi_params import EpidemiologicalParameters, bootstrapped_negbinom_values

    import numpy
    import pymc3 as pm
    import pickle

    data = preprocess_data('merged_data/data_final_nov.csv')
    data.mask_reopenings()

    ep = EpidemiologicalParameters()  # object containing epidemiological parameters

    with ComplexDifferentEffectsModel(data) as model:
        # build the model using the latest epidemiological parameters
        model.build_model(**ep.get_model_build_dict())

    with model.model:
        model.trace = pm.sample(2000, tune=1000, cores=4, chains=4, max_treedepth=14, target_accept=0.94)

    # save the NPI reductions (multiplicative factors on R) and the full model trace
    numpy.savetxt('CMReduction_trace.txt', model.trace['CMReduction'])
    pickle.dump(model.trace, open('full_results.pkl', 'wb'))

This will save the NPI reductions, expressed as multiplicative factors on R, to ``CMReduction_trace.txt``, as well as a Python pickle containing the full model trace.

Sensitivity Analysis
--------------------

The directory ``scripts/sensitivity_analysis/`` contains a number of sensitivity analyses that were run to produce the paper. However, this amounts to a very large number of runs, so we recommend using ``scripts/sensitivity_dispatcher.py`` to run all of the necessary experiments. For example:

.. code-block:: bash

    # activate the virtualenv
    poetry shell

    # run the sensitivity analysis
    python scripts/sensitivity_dispatcher.py --max_processes 24 \
        --categories epiparam_prior npi_leaveout region_holdout cases_threshold \
                     deaths_threshold oxcgrt R_prior NPI_prior any_npi_active \
                     delay_schools npi_timing structural \
        --model_type complex

``sensitivity_dispatcher.py`` takes a list of categories describing the tests to run. The categories are read from ``sensitivity_analysis/sensitivity_analysis.yaml``, a human-readable file which can easily be modified to change exactly which tests are run. By default, the results of the sensitivity analysis are saved in ``sensitivity_complex/[CATEGORY]/[TEST].txt``.

The dispatcher launches ``max_processes`` runs in parallel and waits for them to complete before dispatching new experiments. Each run uses 4 cores by default. We ran these experiments on a 96-core server, so setting ``max_processes`` to 24 utilised all of the server's cores.

Plotting Code
-------------

While the core codebase contains some utility functions for plotting, the majority of the plotting code used to produce the final graphs lives in the ``notebooks`` directory. Notebooks such as ``main_results_plotter.ipynb`` and ``holdouts_plotter.ipynb`` generate the plots exactly as they appear in the paper.

These notebooks require minimal adaptation to run locally: change the ``results_base_dir`` path to point to the ``sensitivity_complex`` folder generated by a full run of the sensitivity analysis above, and make sure that the main results pickle file is loaded correctly.
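If you want to inspect the saved outputs outside of the notebooks, the following minimal sketch (not part of the repository) shows one way to load them. It assumes the file names ``CMReduction_trace.txt`` and ``full_results.pkl`` written by the Main Results example above; adjust the paths to wherever you saved your results.

.. code-block:: python

    import pickle

    import numpy as np

    # Posterior samples of the NPI effects, as saved by numpy.savetxt above.
    # Rows should be posterior samples and columns individual NPIs.
    reductions = np.loadtxt('CMReduction_trace.txt')

    # The trace stores multiplicative factors on R, so the percentage
    # reduction in R is 100 * (1 - factor).
    percentage_reduction = 100 * (1 - reductions)
    print('Median percentage reduction per NPI:')
    print(np.round(np.median(percentage_reduction, axis=0), 1))

    # The full PyMC3 trace, useful for custom diagnostics or plots
    # (unpickling it requires pymc3 to be importable in the environment).
    with open('full_results.pkl', 'rb') as f:
        trace = pickle.load(f)
    print(trace['CMReduction'].shape)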