<p align="center">
  <img src="https://github.com/user-attachments/assets/41edb678-16c4-4a3a-a579-62459106cf48" height="40">
  <img src="https://github.com/user-attachments/assets/1ae42b4a-163e-43ed-b691-c253d4f4c814" height="40">
  <img src="https://github.com/user-attachments/assets/d6977ca8-7b52-4714-83d7-610743c7d52c" height="40">
  <img src="https://mcml.ai/images/footer/lmu_white.webp" height="40">
  <img src="https://mcml.ai/images/footer/tum_white.webp" height="40">
</p>

## 🚀 What is Promptolution?

**Promptolution** is a unified, modular framework for prompt optimization, built for researchers and advanced practitioners who want full control over their experimental setup. Unlike end-to-end application frameworks with high levels of abstraction, Promptolution focuses exclusively on the optimization stage, providing a clean, transparent, and extensible API.

<img width="808" height="356" alt="promptolution_framework" src="https://github.com/user-attachments/assets/e3d05493-30e3-4464-b0d6-1d3e3085f575" />

Key features include:

* Support for single-prompt optimization as well as large-scale, reproducible benchmark experiments.
* Implementations of many current prompt optimizers out of the box.
* A unified LLM backend supporting API-based models, local LLMs, and vLLM clusters.
* Built-in response caching to save costs and parallelized inference for speed.
* Detailed logging and token usage tracking for granular post-hoc analysis.
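The response-caching idea above can be sketched in a few lines. This is a conceptual illustration with a stubbed model call, not Promptolution's actual implementation:

```python
# Conceptual sketch of response caching: identical prompts are served
# from a cache instead of triggering a second (paid) model call.
# `call_model` is a stand-in for any LLM backend.

calls = {"count": 0}

def call_model(prompt: str) -> str:
    """Pretend LLM call; in reality this would hit an API and cost money."""
    calls["count"] += 1
    return f"response to: {prompt}"

_cache: dict = {}

def cached_call(prompt: str) -> str:
    # Only pay for a prompt the first time we see it.
    if prompt not in _cache:
        _cache[prompt] = call_model(prompt)
    return _cache[prompt]

cached_call("Classify: great movie!")
cached_call("Classify: great movie!")  # second call is served from the cache
```

Deduplicating identical prompt evaluations this way is what keeps repeated evaluations during an optimization run cheap.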

Have a look at our [Release Notes](https://finitearth.github.io/promptolution/release-notes/) for the latest updates to Promptolution.

## 📦 Installation

```
cd promptolution
poetry install
```

## 🔧 Quickstart

Start with the **Getting Started tutorial**:
Full docs:
[https://finitearth.github.io/promptolution/](https://finitearth.github.io/promptolution/)

## 🧠 Featured Optimizers

| **Name** | **Paper** | **Init prompts** | **Exploration** | **Costs** | **Parallelizable** | **Few-shot** |
|---|---|---|---|---|---|---|
| `EvoPromptGA` | [Guo et al., 2023](https://arxiv.org/abs/2309.08532) | required | 👍 | 💲💲 | ✅ | ❌ |
| `OPRO` | [Yang et al., 2023](https://arxiv.org/abs/2309.03409) | optional | 👎 | 💲💲 | ❌ | ❌ |
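To convey the idea behind evolutionary optimizers such as `EvoPromptGA`, here is a toy sketch of the generic score–select–mutate loop. The fitness function and mutation operator are stubs (in the real algorithm an LLM performs both), so this is illustrative only:

```python
# Toy evolutionary prompt-optimization loop: score a population of
# prompts, keep the best, and generate mutated variants.
import random

random.seed(0)

def score(prompt: str) -> float:
    """Stub fitness; a real Task would measure accuracy on held-out data."""
    return len(prompt.split())

def mutate(prompt: str) -> str:
    """Stub mutation; EvoPrompt uses an LLM to paraphrase and recombine."""
    extras = ["Think step by step.", "Answer concisely.", "Use the given labels."]
    return prompt + " " + random.choice(extras)

population = ["Classify the sentiment.", "Label the text."]
for _ in range(3):
    population += [mutate(p) for p in population]                 # explore
    population = sorted(population, key=score, reverse=True)[:2]  # exploit

best = population[0]  # highest-scoring prompt found so far
```

The exploration/cost trade-offs in the table above come down to how aggressively each algorithm runs this kind of loop.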

## 🏗 Components

* **`Task`** – Manages the dataset, evaluation metrics, and subsampling.
* **`Predictor`** – Defines how to extract the answer from the model's response.
* **`LLM`** – A unified interface handling inference, token counting, and concurrency.
* **`Optimizer`** – The core component implementing the algorithms that refine prompts.
* **`ExperimentConfig`** – A configuration abstraction to streamline and parametrize large-scale scientific experiments.

## 🤝 Contributing

Open an issue → create a branch → PR → CI → review → merge.
Branch naming: `feature/...`, `fix/...`, `chore/...`, `refactor/...`.

Please use pre-commit, which helps keep the code quality high:

```
pre-commit install
pre-commit run --all-files
```
We encourage every contributor to also write tests that automatically check whether the implementation works as expected:

```
poetry run python -m coverage run -m pytest
poetry run python -m coverage report
```

Developed by **Timo Heiß**, **Moritz Schlager**, and **Tom Zehle** (LMU Munich, MCML, ELLIS, TUM, Uni Freiburg).