# Run MRIqc on the cluster

Written by CPP lab people

To contribute see [here](https://cpp-lln-lab.github.io/CPP_HPC/contributing/)

!!! Warning

    Disk-space problem: the `work` folder is not set in the script yet. Marco is working on it.

## General tips

- The more resources you request, the faster the job can run, but the longer
  you may wait in the queue.

- To try things out, set `--time=00:05:00` and `--partition=debug` so the job
  starts right away and you can check that it at least launches without
  problems (e.g. the singularity image runs, the data are BIDS compatible, the
  data folders are loaded properly). See the section
  [Submit a MRIqc job via sbatch command](#submit-a-mriqc-job-via-sbatch-command-without-a-script-mainly-for-debug-purposes)
  below.

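Command-line options passed to `sbatch` override the matching `#SBATCH` directives inside the script, so you can test a script with the debug settings without editing it. A minimal sketch (the script name and subject label are just examples; the `echo` stands in for the real submission so the snippet runs anywhere):

```shell
# On the cluster you would run (this overrides any #SBATCH --time /
# --partition directives set inside the script):
#   sbatch --time=00:05:00 --partition=debug cpp_mriqc.slurm sub-01
# Here we only print the command instead of submitting it:
cmd="sbatch --time=00:05:00 --partition=debug cpp_mriqc.slurm sub-01"
echo "$cmd"
```
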
## Prepare to run MRIqc on the cluster

- have your data on the cluster
- install datalad for your user (see [here](https://github.com/cpp-lln-lab/CPP_HPC/install_datalad))
- get the MRIqc singularity image as follows:

Here the example uses MRIqc version `23.1.0`, but check for newer versions; the list of available MRIqc versions is [here](https://hub.docker.com/r/nipreps/mriqc/tags/).

```bash
datalad install https://github.com/ReproNim/containers.git

cd containers

datalad get images/bids/bids-mriqc--23.1.0.sing
```
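A quick sanity check that the image was actually fetched: before `datalad get`, the annexed file is only a broken symlink, so testing for a non-empty file tells you whether the image is really there. A sketch, assuming you run it from inside the `containers` folder:

```shell
# hypothetical check: -s is true only for an existing, non-empty file
img=images/bids/bids-mriqc--23.1.0.sing
if [ -s "$img" ]; then
    echo "image ready: $img"
else
    echo "image missing, run: datalad get $img"
fi
```
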
| 36 | + |
| 37 | +In case you have installe the repo a while a ago and you want to use a new version of fmriprep., update the `containers` repo via: |
| 38 | + |
| 39 | +```bash |
| 40 | +# go to the repo folder |
| 41 | +cd path/to/containers |
| 42 | + |
| 43 | +datald update --merge |
| 44 | +`````` |
| 45 | + |

Depending on the cluster, `datalad unlock` may or may not be needed. It is not needed on `lemaitre3`.

```bash
datalad unlock containers/images/bids/bids-mriqc--23.1.0.sing
```

## Submit a MRIqc job via a `slurm` script

- pros:
    - easy to run for multiple subjects
- cons:
    - the `slurm` script can be hard to edit from within the cluster in case
      of an error or a change of mind about the MRIqc options. You can edit it
      via `vim`, or edit it locally and then upload the new version.

### Participants level

Content of the `cpp_mriqc.slurm` file (download and edit it from [here](cpp_mriqc.slurm)):

!!! Warning

    1. Read the MRIqc documentation to know what you are doing and how the arguments of the run call affect the results.
    2. All the paths and the email are set for Marco's user, for demonstration.
    3. Edit the script from top to bottom with the info you need to make it run for your user. Do not overlook the first "commented" chunk: it is not a purely commented section, since the `#SBATCH` lines are directives read by `slurm` (check the email and job report paths, the data paths, the `username`, etc.).

```bash
{% include "cpp_mriqc.slurm" %}
```

On the cluster prompt, submit the jobs as:

```bash
# Submission command for Lemaitre3

# USAGE on the cluster:
sbatch cpp_mriqc.slurm <subjID>

# example: 1 subject
sbatch cpp_mriqc.slurm sub-01

# submit all the subjects (1 per job) all at once:
# read the subject list and submit one job per subject
# !!! run this from within the `raw` folder
ls -d sub* | xargs -n1 -I{} sbatch path/to/cpp_mriqc.slurm {}
```

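Before submitting dozens of jobs, it can help to preview the exact commands the `xargs` loop will generate by replacing `sbatch` with `echo` (a dry run; the folder names below are made up for the demo):

```shell
# create two fake subject folders just for the dry run
mkdir -p demo_raw/sub-01 demo_raw/sub-02
cd demo_raw

# same loop as above, with echo instead of sbatch
ls -d sub* | xargs -n1 -I{} echo sbatch path/to/cpp_mriqc.slurm {}
# prints:
#   sbatch path/to/cpp_mriqc.slurm sub-01
#   sbatch path/to/cpp_mriqc.slurm sub-02
```
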
### Group level

Content of the `cpp_mriqc_group.slurm` file (download and edit it from [here](cpp_mriqc_group.slurm)):

!!! Warning

    1. Read the MRIqc documentation to know what you are doing and how the arguments of the run call affect the results.
    2. All the paths and the email are set for Marco's user, for demonstration.
    3. Edit the script from top to bottom with the info you need to make it run for your user. Do not overlook the first "commented" chunk: it is not a purely commented section, since the `#SBATCH` lines are directives read by `slurm` (check the email and job report paths, the data paths, the `username`, etc.).

```bash
{% include "cpp_mriqc_group.slurm" %}
```

On the cluster prompt, submit the job as:

```bash
# Submission command for Lemaitre3

# USAGE on the cluster:
# no need to provide any input
sbatch cpp_mriqc_group.slurm
```

## TIPS

### Check your job

See [here](https://github.com/cpp-lln-lab/CPP_HPC/cluster_code_snippets/#check-your-running-jobs)