Merged
28 changes: 28 additions & 0 deletions .all-contributorsrc
@@ -48,6 +48,34 @@
"talk",
"mentoring"
]
},
{
"login": "lorenzobranca",
"name": "Lorenzo Branca",
"avatar_url": "https://avatars.githubusercontent.com/u/57775402?v=4",
"profile": "https://github.com/lorenzobranca",
"contributions": [
"data",
"ideas",
"research",
"test",
"talk"
]
},
{
"login": "Immi000",
"name": "Immanuel Sulzer",
"avatar_url": "https://avatars.githubusercontent.com/u/100942429?v=4",
"profile": "https://github.com/Immi000",
"contributions": [
"code",
"content",
"data",
"design",
"doc",
"infra",
"userTesting"
]
}
]
}
5 changes: 3 additions & 2 deletions docs/source/guides/running-benchmarks/modalities.md
@@ -37,7 +37,7 @@ Interpolation studies remove every _n_-th timestep, forcing the surrogate to rec
align: center
alt: Interpolation modality example
---
Interpolation MAE over time for several interval widths. Wider gaps create bigger spikes but also highlight which surrogates remain stable.
Interpolation MAE over time for several interval widths. As the gap between retained time steps widens, error spikes often appear between them.
```

## Extrapolation
@@ -61,11 +61,12 @@ Sparse training reduces the number of observations before fitting, emulating lim
align: center
alt: Sparse modality example
---
Down-sampling trajectories shows how MAE changes with fewer observations; FCNN tends to degrade earlier than the latent models.
Reducing the number of samples (sets of trajectories) shows how MAE changes with fewer observations, revealing how efficiently the model extracts information from the training data and whether more data could improve the surrogate's performance.
```

## Batch scaling

Batch scaling sweeps a range of batch sizes and records how accuracy and timing respond. This helps identify throughput sweet spots that do not impact convergence too heavily; a sketch of such a sweep follows below. Combine the results with the `timing` evaluation to compare throughput across surrogates.
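
For orientation, here is a minimal sketch of how such a sweep might be declared in the benchmark config. The key names below are hypothetical and purely illustrative; the real schema and defaults are defined in the configuration reference linked below.

```yaml
# Hypothetical study block -- key names are illustrative only
batch_scaling:
  enabled: true
  sizes: [64, 128, 256, 512, 1024]  # batch sizes to sweep
timing:
  enabled: true  # record per-batch timings alongside accuracy
```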

See the :doc:`configuration reference </reference/configuration>` for the exact YAML schema and defaults.

2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -12,7 +12,7 @@ CODES Benchmark
* **Extend the Stack** — :doc:`guides/extending-benchmark` shows how to add datasets or surrogates without rewriting orchestration glue.
* **API Reference** — :doc:`api-reference` explains how the generated package docs are organized and links to each module.

Looking for a bird’s-eye view first? Start with the **User Guide**. Already configuring experiments or integrating your own model? Skip ahead to the **API Reference**. Either way, the sidebar mirrors the sections below so you are one click away from the next step.
Looking for a bird’s-eye view first? Start with the **User Guide**. Already configuring experiments or integrating your own model? Skip ahead to the **API Reference**. The sidebar mirrors the sections below so you are one click away from the next step.

.. toctree::
:maxdepth: 2
Expand Down