# Benchmarks

The following benchmarks show the performance of NestedSamplers.jl. As with any statistical inference package, the likelihood function will often dominate the runtime. This is important to keep in mind when comparing packages across different languages; in general, a custom Julia likelihood function may be faster than the same code written in Python/numpy. As an example, compare the relative timings of these two simple Gaussian likelihoods

```julia
using BenchmarkTools
using PyCall

# Julia version
gauss_loglike(X) = sum(x -> exp(-0.5 * x^2) / sqrt(2π), X)

# Python version
py"""
import numpy as np

def gauss_loglike(X):
    return np.sum(np.exp(-0.5 * X ** 2) / np.sqrt(2 * np.pi))
"""
gauss_loglike_py = py"gauss_loglike"
xs = randn(100)
```

```julia
@btime gauss_loglike($xs)
```

```
  611.971 ns (0 allocations: 0 bytes)
26.813747896467206
```

```julia
@btime gauss_loglike_py($xs)
```

```
  13.129 μs (6 allocations: 240 bytes)
26.81374789646721
```

In certain cases, you can use language interop tools (like [PyCall.jl](https://github.com/JuliaPy/PyCall.jl)) to use Julia likelihoods with Python libraries.

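For reference, the NumPy likelihood above can also be timed from the Python side directly. The sketch below is illustrative only (the `timeit` harness is hypothetical and not part of the `bench` scripts); it shows the same computation and why, for small arrays, the fixed per-call NumPy dispatch overhead keeps the timing in microseconds rather than nanoseconds:

```python
import timeit

import numpy as np


def gauss_loglike(X):
    # Same computation as the py""" block above: a sum of
    # standard-normal density values over the input array.
    return np.sum(np.exp(-0.5 * X ** 2) / np.sqrt(2 * np.pi))


rng = np.random.default_rng(0)
xs = rng.standard_normal(100)

# Best-of-5 per-call time; for a 100-element array this is dominated
# by NumPy's fixed call overhead, not the arithmetic itself.
t = min(timeit.repeat(lambda: gauss_loglike(xs), number=1000, repeat=5)) / 1000
print(f"{t * 1e6:.2f} us per call, value = {gauss_loglike(xs):.6f}")
```

Exact numbers will vary by machine, but the gap between this and the Julia `@btime` result above is the per-call overhead the opening paragraph refers to.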
## Setup and system information

The benchmark code can be found in the [`bench`](https://github.com/TuringLang/NestedSamplers.jl/blob/main/bench/) folder. The system information at the time these benchmarks were run is

```julia
julia> versioninfo()
Julia Version 1.7.1
Commit ac5cc99908* (2021-12-22 19:35 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin20.5.0)
  CPU: Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, skylake)
Environment:
  JULIA_NUM_THREADS = 1
```

## Highly-correlated multivariate Gaussian

This benchmark uses [`Models.CorrelatedGaussian`](@ref) and simply measures the time it takes to fully sample down to `dlogz=0.01`. It is the same benchmark detailed in the [JAXNS paper](https://ui.adsabs.harvard.edu/abs/2020arXiv201215286A/abstract).

### Timing

```@example sample-benchmark
using NestedSamplers # hide
using CSV, DataFrames, Plots # hide
benchdir = joinpath(dirname(pathof(NestedSamplers)), "..", "bench") # hide
results = DataFrame(CSV.File(joinpath(benchdir, "sampling_results.csv"))) # hide
plot(results.D, results.t, label="NestedSamplers.jl", marker=:o, yscale=:log10, # hide
     ylabel="runtime (s)", xlabel="prior dimension", leg=:topleft) # hide
```

### Accuracy

The following shows the Bayesian evidence estimate compared to the true value

```@example sample-benchmark
plot(results.D, results.dlnZ, yerr=results.lnZstd, label="NestedSamplers.jl", # hide
     marker=:o, ylabel="ΔlnZ", xlabel="prior dimension", leg=:topleft) # hide
hline!([0.0], c=:black, ls=:dash, alpha=0.7, label="") # hide
```