Warning
This package is currently under active development. The API may change in future releases. Please refer to the documentation for the latest updates.
This package provides a unified interface for training decision-focused learning algorithms that combine machine learning with combinatorial optimization. It implements several state-of-the-art algorithms for learning to predict parameters of optimization problems.
- Unified Interface: Consistent API across all algorithms via `train_policy!`
- Policy-Centric Design: `DFLPolicy` encapsulates statistical models and optimizers
- Flexible Metrics: Track custom metrics during training
- Benchmark Integration: Seamless integration with DecisionFocusedLearningBenchmarks.jl
```julia
using DecisionFocusedLearningAlgorithms
using DecisionFocusedLearningBenchmarks

# Create a policy
benchmark = ArgmaxBenchmark()
model = generate_statistical_model(benchmark)
maximizer = generate_maximizer(benchmark)
policy = DFLPolicy(model, maximizer)

# Train with FYL algorithm
algorithm = PerturbedFenchelYoungLossImitation()
result = train_policy(algorithm, benchmark; epochs=50)
```

See the documentation for more details.
The following algorithms are currently implemented:

- Perturbed Fenchel-Young Loss Imitation: Differentiable imitation learning with perturbed optimization
- AnticipativeImitation: Imitation of anticipative solutions for dynamic problems
- DAgger: Dataset aggregation (DAgger) imitation learning for dynamic problems
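
All algorithms plug into the same training entry point shown in the quickstart above, so switching between them only changes the algorithm constructor. The sketch below illustrates this pattern; the `AnticipativeImitation()` and `DAgger()` constructor calls are assumptions inferred from the algorithm names listed above, so check the API reference for the exact names and arguments.

```julia
using DecisionFocusedLearningAlgorithms
using DecisionFocusedLearningBenchmarks

# Shared setup, as in the quickstart above
benchmark = ArgmaxBenchmark()
model = generate_statistical_model(benchmark)
maximizer = generate_maximizer(benchmark)
policy = DFLPolicy(model, maximizer)

# Swap in any algorithm from the list above; the commented-out constructors
# are assumed names for the dynamic-problem algorithms (see the API reference).
algorithms = (
    PerturbedFenchelYoungLossImitation(),
    # AnticipativeImitation(),  # assumed constructor name, dynamic problems
    # DAgger(),                 # assumed constructor name, dynamic problems
)

for algorithm in algorithms
    result = train_policy(algorithm, benchmark; epochs=50)
end
```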