
Commit 650e11a

doc: comment on nlp_scaling_max_gradient

1 parent 3d98242

File tree: 3 files changed, +14 -6 lines

src/controller/nonlinmpc.jl

Lines changed: 4 additions & 1 deletion

@@ -162,11 +162,14 @@ NonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedK
 algebra instead of a `for` loop. This feature can accelerate the optimization, especially
 for the constraint handling, and is not available in any other package, to my knowledge.
 
-The optimization relies on [`JuMP.jl`](https://github.com/jump-dev/JuMP.jl) automatic
+The optimization relies on [`JuMP`](https://github.com/jump-dev/JuMP.jl) automatic
 differentiation (AD) to compute the objective and constraint derivatives. Optimizers
 generally benefit from exact derivatives like AD. However, the [`NonLinModel`](@ref) `f`
 and `h` functions must be compatible with this feature. See [Automatic differentiation](https://jump.dev/JuMP.jl/stable/manual/nlp/#Automatic-differentiation)
 for common mistakes when writing these functions.
+
+Note that if `Cwt≠Inf`, the attribute `nlp_scaling_max_gradient` of `Ipopt` is set to
+`10/Cwt` (if not already set), to scale the small values of ``ϵ``.
 """
 function NonLinMPC(
     model::SimModel;
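The `nlp_scaling_max_gradient` option mentioned in the new docstring lines is a standard Ipopt option that can also be set by hand through JuMP. A minimal sketch of what the described behavior amounts to, assuming `JuMP` and `Ipopt` are installed; the model and the `Cwt` value below are illustrative and not taken from the package:

```julia
using JuMP, Ipopt

Cwt = 1e5                        # illustrative slack-variable weight (hypothetical value)
model = Model(Ipopt.Optimizer)   # empty JuMP model backed by Ipopt

# As the docstring describes: when Cwt ≠ Inf, cap the gradient used by
# Ipopt's scaling heuristic at 10/Cwt, so the large Cwt weight on the
# small slack ϵ does not distort the scaling of the other variables.
set_attribute(model, "nlp_scaling_max_gradient", 10 / Cwt)
```

The package reportedly applies this only "if not already set", so a user-supplied value for the option would take precedence.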

src/estimator/mhe/construct.jl

Lines changed: 7 additions & 3 deletions

@@ -250,11 +250,15 @@ MovingHorizonEstimator estimator with a sample time Ts = 10.0 s, Ipopt optimizer
 state and sensor noise).
 
 For [`LinModel`](@ref), the optimization is treated as a quadratic program with a
-time-varying Hessian, which is generally cheaper than nonlinear programming. For
-[`NonLinModel`](@ref), the optimization relies on automatic differentiation (AD).
+time-varying Hessian, which is generally cheaper than nonlinear programming.
+
+For [`NonLinModel`](@ref), the optimization relies on automatic differentiation (AD).
 Optimizers generally benefit from exact derivatives like AD. However, the `f` and `h`
 functions must be compatible with this feature. See [Automatic differentiation](https://jump.dev/JuMP.jl/stable/manual/nlp/#Automatic-differentiation)
-for common mistakes when writing these functions.
+for common mistakes when writing these functions.
+
+Note that if `Cwt≠Inf`, the attribute `nlp_scaling_max_gradient` of `Ipopt` is set to
+`10/Cwt` (if not already set), to scale the small values of ``ϵ``.
 """
 function MovingHorizonEstimator(
     model::SM;

src/estimator/mhe/execute.jl

Lines changed: 3 additions & 2 deletions

@@ -192,9 +192,10 @@ also inits `estim.optim` objective function, expressed as the quadratic general
 ```math
 J = \min_{\mathbf{Z̃}} \frac{1}{2}\mathbf{Z̃' H̃ Z̃} + \mathbf{q̃' Z̃} + p
 ```
-in which ``\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]``. The
+in which ``\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]``. Note that
+``p`` is useless at optimization but required to evaluate the objective minima ``J``. The
 Hessian ``\mathbf{H̃}`` matrix of the quadratic general form is not constant here because
-of the time-varying ``\mathbf{P̄}`` covariance . The computations are:
+of the time-varying ``\mathbf{P̄}`` covariance. The computed variables are:
 ```math
 \begin{aligned}
 \mathbf{F} &= \mathbf{G U} + \mathbf{J D} + \mathbf{Y^m} \\
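The docstring's point about the constant ``p`` can be checked directly on the quadratic general form ``J = \frac{1}{2}\mathbf{Z̃' H̃ Z̃} + \mathbf{q̃' Z̃} + p``: the constant shifts the objective value but never moves the minimizer. A small sketch with illustrative matrices (not values from the package):

```julia
using LinearAlgebra

# Illustrative 2-variable quadratic J(Z̃) = 1/2*Z̃'*H̃*Z̃ + q̃'*Z̃ + p
H̃ = [4.0 1.0; 1.0 3.0]   # positive-definite Hessian (time-varying in the MHE)
q̃ = [1.0, 2.0]
p = 5.0                   # constant term: irrelevant to the minimizer

J(Z̃) = 0.5 * dot(Z̃, H̃ * Z̃) + dot(q̃, Z̃) + p

# The unconstrained minimizer solves the stationarity condition H̃*Z̃ + q̃ = 0,
# independently of p; p is only needed to read off the objective value J(Z̃opt).
Z̃opt = -(H̃ \ q̃)
```

This is why ``p`` must still be stored even though the solver could ignore it: without it, the reported objective minimum would be offset.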
