Description
Peer review from @NavonilNM .
This continues from #44, #62, #130 and #149.
Scenario and sensitivity analysis
- In relation to, “When creating a DES, you won’t just run one model - you run scenarios, which are different parameter set-ups. These are used to test how outcomes change under varying conditions”, I think it would be good to also refer to changes in process flow and simulation logic (different configurations of the system, which often go beyond parameter set-up alone).
- The following sentence is spot on if we think of scenarios only as different parameter set-ups: “the idea is to build the model with functions and classes so you only need to change the parameters and re-run”. However, scenarios can also include entirely new configurations of the system (new routing and scheduling strategies, for example), and these will also need changes to the code. We can keep this as it is and simply say that the focus of this section is limited to parameter set-up.
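To illustrate the parameter-only view of scenarios, here is a minimal sketch; the class and field names are hypothetical, and the book's actual structure may differ:

```python
from dataclasses import dataclass

# Hypothetical parameter container; the book's actual class and fields may differ.
@dataclass
class Scenario:
    n_doctors: int = 2      # resource capacity
    iat_mean: float = 4.0   # mean patient inter-arrival time

# Parameter-only scenarios: same model code, different set-ups.
base = Scenario()
busy = Scenario(iat_mean=3.0)
extra_staff = Scenario(n_doctors=3)
```

A new routing or scheduling strategy, by contrast, could not be expressed as a field change like this; it would require altering the model logic itself.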
- For run_scenarios (from HSMA), I had to import itertools in Jupyter.
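For reference, itertools.product is typically what scenario-grid code of this kind relies on; a minimal sketch of generating all parameter combinations (the grid and names here are illustrative, not the book's exact code):

```python
import itertools

# Illustrative parameter grid; the actual run_scenarios parameters may differ.
param_grid = {
    "n_doctors": [2, 3],
    "iat_mean": [4.0, 4.5, 5.0],
}

# All combinations of the parameter values (2 x 3 = 6 scenarios).
scenarios = [
    dict(zip(param_grid.keys(), values))
    for values in itertools.product(*param_grid.values())
]
```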
- For writing the results to the CSV, I had to create the directory first (with the same name, “scenarios_resources”), and then it worked.
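One way to avoid this step is to create the directory in the code before writing; a sketch (the directory name is from the section, the rest is illustrative):

```python
from pathlib import Path

# Create the output directory if it does not already exist, so the
# subsequent CSV write cannot fail on a missing folder.
output_dir = Path("scenarios_resources")
output_dir.mkdir(parents=True, exist_ok=True)

# e.g. results.to_csv(output_dir / "results.csv")  # with a pandas DataFrame
```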
- Sensitivity analysis is often done for every scenario. So, conceptually, I was expecting something along the lines below:
However, the output gives the following (I would have expected scenario to be 0 for all; another set for scenario=1 can also be developed, but I think we might want to start it again with IAT 4). Of course, not all scenarios need to have sensitivity analysis, because we may be focussing only on a subset (i.e., some scenarios are not at all plausible).
- Advice on Saving results --> really good way to end the section!
Tables and Figures
- I like the references to the Heather et al. (2025) paper!! It shows the reproducibility work that has gone into preparing the RAP book, and the desire to influence practice (having previously identified a shortcoming, i.e., only one study included code ..XXX)
- Not sure the interface is functioning as expected: clicking on the arrow does not show/hide code (as on previous pages). Also, it is not clear what “It uses” refers to. I guess this is the function summarise_scenarios, which uses summary_stats. It would be good to add a line on summarise_scenarios before including the code.
- For running summarise_scenarios, which calls summary_stats, some imports and directory creation were needed (see screenshots below).
- Same comment as in (5): conceptually, for sensitivity analysis, I am thinking of scenario=0 (the first scenario being tested out of, say, 5). The results are from three iterations of scenario=0 with IAT=4, then three iterations of scenario=0 with IAT=4.5, and so on.
I am conceptualising this as the table below – is it correct (check the last row)?
If so, then it is more intuitive for me to have the table as follows (same as the output from your code above, but with zeros):
- The plot generation code took a long time to execute (2m+ in the screenshot below); I gave up after 5 minutes. There was also a deprecation warning. The following may be needed.
Following the message, I did the following in Jupyter
%pip install -U "plotly[kaleido]"
Restarted kernel
%pip show kaleido (version 1.2.0 was installed)
The code then worked. Execution time was less than 10 seconds! New image files were created in .JPG format. All OK.
- Some imports were needed for plot_metrics and pale_colour:
import matplotlib.colors as mcolors
import plotly.express as px
import plotly.graph_objects as go
- Scenario analysis - It would be good to explain one of the figures in relation to the data being used; it is all there, but good to bring it together at the end (explanation for one figure only, e.g., what the shaded areas are, what the number of doctors is, and which section to refer to).
- Sensitivity analysis - It would be good to explain one of the figures in relation to the data being used; it is all there, but good to bring it together at the end (explanation for one figure only, e.g., what the shaded areas are and what the blue line is).
Full run
- I had previously installed Git for Windows, and I think I selected Git Bash to be installed (did not use it until now). To get the new terminal to open in Git Bash, I followed the three steps below (click on the down arrow, select Git Bash, use Git Bash).
- I had a bit of difficulty following the instructions below.
- What is meant by “alongside”? Does it mean the .sh file is under the root (Emergency-DES) and the notebooks folder is under it?
- The rmarkdown folder does not exist. Do we need to create a folder called rmarkdown under the main root?
- It may help to include some output of the commands in the section “Running the bash script”. For example:
- I installed jupyter nbconvert (probably did not require it, but I was getting an error, which was then solved – see (5)).
- For Windows, I think we may need to append /Scripts (as that is where Jupyter lives within the environment). In my case, “des-example”.
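The Windows/Unix difference could also be handled in code rather than by hand; a hedged sketch of resolving the environment's executable directory (the environment name is from my set-up, the helper itself is hypothetical):

```python
import os
from pathlib import Path

def venv_bin_dir(env_root: str) -> Path:
    """Return the directory holding jupyter etc. inside a virtual environment.

    Windows virtual environments use 'Scripts'; Unix-like systems use 'bin'.
    """
    sub = "Scripts" if os.name == "nt" else "bin"
    return Path(env_root) / sub

# e.g. venv_bin_dir("des-example") -> des-example/Scripts on Windows,
#      des-example/bin elsewhere
```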
- I checked if it was working by deleting all output from the notebooks. After I ran the bash script, all the output was recreated (including the graphs). However, there was one big surprise: the Clear All Output option was not available anymore (circled below).
Somehow, I think, it has to do with the metadata settings, which I could not fully understand the need for. I did not understand what was meant by:
- These are to avoid changes in files that have been re-run, when all the results and code are still the same and just the notebook metadata has changed.
- Perhaps it is linked to the statement “This allows the script to loop over rmarkdown/*.Rmd without extra path adjustments”. But I did not have an rmarkdown subdirectory. nbconvert seems to have something to do with this whole affair, I think?
- It may be good to explain that the notebooks will be overwritten by nbconvert. We may like to add a sentence saying there is an option to write new output files rather than overwriting (a different switch?).
- Finally, did we want the output to be printed in HTML etc.? Perhaps the destination filename can be different?
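For reference, nbconvert does have switches for both of the points above; the notebook filenames here are illustrative:

```shell
# Overwrite the notebook in place (the behaviour I assume the script uses):
jupyter nbconvert --to notebook --execute --inplace analysis.ipynb

# Keep the original and write the executed copy to a new file instead:
jupyter nbconvert --to notebook --execute --output analysis_executed.ipynb analysis.ipynb

# Export to HTML rather than writing back to the .ipynb:
jupyter nbconvert --to html --execute analysis.ipynb
```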
- Regarding including .py in the bash script: I guess the name of the file (with main) is filename.py. As it had the main function and there is only one .py, did we need a bash script for this? Perhaps another example could be offered, with multiple .py files?
- Concluding thoughts: I think this is an important part of the book as it supports reproducibility. Thus, it may be good to have a Windows example and a little more explanation.