.. _particle_swarm_optimization:

Particle Swarm Optimization Path Planning
------------------------------------------

This is a 2D path planning implementation using Particle Swarm Optimization (PSO).

PSO is a metaheuristic optimization algorithm inspired by the social behavior of bird flocking and fish schooling. In path planning, each particle represents a candidate solution, and the swarm explores the search space for a collision-free path from start to goal.

.. image:: https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/PathPlanning/ParticleSwarmOptimization/animation.gif

Algorithm Overview
++++++++++++++++++

The PSO algorithm maintains a swarm of particles that move through the search space according to simple mathematical rules:

1. **Initialization**: Particles are randomly distributed near the start position
2. **Evaluation**: Each particle's fitness is calculated from its distance to the goal plus obstacle penalties
3. **Update**: Particles adjust their velocities based on:

   - Personal best position (cognitive component)
   - Global best position (social component)
   - Current velocity (inertia component)

4. **Movement**: Particles move to new positions and check for collisions
5. **Convergence**: The process repeats until the maximum number of iterations is reached or a particle reaches the goal
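
The steps above can be sketched as a minimal PSO loop. This is an illustration, not the repository's implementation: the quadratic fitness target, the search bounds, and the parameter values are assumptions chosen for the demo, and the collision check of step 4 is omitted for brevity.

.. code-block:: python

   import numpy as np

   def pso_sketch(fitness, dim=2, n_particles=30, max_iter=100,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
       """Minimal PSO loop: returns the best position found."""
       rng = np.random.default_rng(seed)
       x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # 1. initialization
       v = np.zeros((n_particles, dim))
       pbest = x.copy()                                 # personal bests
       pbest_f = np.array([fitness(p) for p in x])      # 2. evaluation
       g = pbest[pbest_f.argmin()].copy()               # global best
       for _ in range(max_iter):
           r1 = rng.random((n_particles, dim))
           r2 = rng.random((n_particles, dim))
           # 3. velocity update: inertia + cognitive + social terms
           v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
           x = x + v                                    # 4. movement
           f = np.array([fitness(p) for p in x])
           improved = f < pbest_f                       # update personal bests
           pbest[improved], pbest_f[improved] = x[improved], f[improved]
           g = pbest[pbest_f.argmin()].copy()           # update global best
       return g

   # Example: minimize the distance to an assumed "goal" point (3, 4)
   goal = np.array([3.0, 4.0])
   best = pso_sketch(lambda p: np.linalg.norm(p - goal))

With this simple fitness, the swarm converges close to the goal point.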

Mathematical Foundation
+++++++++++++++++++++++

The core PSO velocity update equation is:

.. math::

   v_{i}(t+1) = w \cdot v_{i}(t) + c_1 \cdot r_1 \cdot (p_{i} - x_{i}(t)) + c_2 \cdot r_2 \cdot (g - x_{i}(t))

Where:

- :math:`v_{i}(t)` = velocity of particle i at time t
- :math:`x_{i}(t)` = position of particle i at time t
- :math:`w` = inertia weight (controls exploration vs. exploitation)
- :math:`c_1` = cognitive coefficient (attraction to personal best)
- :math:`c_2` = social coefficient (attraction to global best)
- :math:`r_1, r_2` = random numbers drawn uniformly from [0, 1]
- :math:`p_{i}` = personal best position of particle i
- :math:`g` = global best position

Position update:

.. math::

   x_{i}(t+1) = x_{i}(t) + v_{i}(t+1)

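A single worked update step for one particle in one dimension makes the two equations concrete. All numbers here are illustrative; the random draws are fixed so the result is reproducible.

.. code-block:: python

   # One velocity and position update for a single 1-D particle.
   w, c1, c2 = 0.7, 1.5, 1.5
   x, v = 2.0, 0.5          # current position and velocity
   p, g = 3.0, 4.0          # personal best and global best
   r1, r2 = 0.4, 0.9        # draws from U[0, 1], fixed here

   v_next = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
   x_next = x + v_next      # v_next ≈ 3.65, x_next ≈ 5.65
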
Fitness Function
++++++++++++++++

The fitness function combines the distance to the goal with obstacle penalties:

.. math::

   f(x) = ||x - x_{goal}|| + \sum_{j} P_{obs}(x, O_j)

Where:

- :math:`||x - x_{goal}||` = Euclidean distance to the goal
- :math:`P_{obs}(x, O_j)` = penalty for obstacle j
- :math:`O_j` = obstacle j, defined by its position and radius

The obstacle penalty function is defined as:

.. math::

   P_{obs}(x, O_j) = \begin{cases}
   1000 & \text{if } ||x - O_j|| < r_j \text{ (inside obstacle)} \\
   \frac{50}{||x - O_j|| - r_j + 0.1} & \text{if } r_j \leq ||x - O_j|| < r_j + R_{influence} \text{ (near obstacle)} \\
   0 & \text{if } ||x - O_j|| \geq r_j + R_{influence} \text{ (safe distance)}
   \end{cases}

Where:

- :math:`r_j` = radius of obstacle j
- :math:`R_{influence}` = influence radius (typically 5 units)

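Under the definitions above, the fitness evaluation can be sketched as follows. The function names and the tuple-based obstacle representation are illustrative assumptions, not the module's API; the default influence radius follows the "typically 5 units" note.

.. code-block:: python

   import math

   def obstacle_penalty(x, center, r, r_influence=5.0):
       """Penalty from one circular obstacle, matching the piecewise definition."""
       d = math.dist(x, center)
       if d < r:                      # inside the obstacle
           return 1000.0
       if d < r + r_influence:        # near the obstacle
           return 50.0 / (d - r + 0.1)
       return 0.0                     # at a safe distance

   def fitness(x, goal, obstacles):
       """Distance to goal plus summed obstacle penalties."""
       return math.dist(x, goal) + sum(
           obstacle_penalty(x, c, r) for c, r in obstacles)

   obstacles = [((5.0, 5.0), 1.0)]                  # (center, radius) pairs
   goal = (10.0, 10.0)
   f_free = fitness((0.0, 0.0), goal, obstacles)    # outside the influence radius
   f_inside = fitness((5.0, 5.0), goal, obstacles)  # inside the obstacle: >= 1000
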
Collision Detection
+++++++++++++++++++

Line-circle intersection is used to detect collisions between particle path segments and circular obstacles:

.. math::

   ||P_0 + t \cdot \vec{d} - C|| = r

Where:

- :math:`P_0` = start point of the path segment
- :math:`\vec{d}` = direction vector of the segment
- :math:`C` = obstacle center
- :math:`r` = obstacle radius
- :math:`t \in [0,1]` = parameter along the segment

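Substituting :math:`x = P_0 + t \cdot \vec{d}` into the circle equation gives a quadratic in :math:`t`; the segment collides if a real root falls in [0, 1], or if the segment lies entirely inside the circle. A sketch of that test (the function name and point representation are illustrative):

.. code-block:: python

   import math

   def segment_hits_circle(p0, p1, center, r):
       """True if the segment p0 -> p1 intersects the circle (center, r)."""
       dx, dy = p1[0] - p0[0], p1[1] - p0[1]          # direction vector d
       fx, fy = p0[0] - center[0], p0[1] - center[1]  # P0 - C
       a = dx * dx + dy * dy
       b = 2.0 * (fx * dx + fy * dy)
       c = fx * fx + fy * fy - r * r
       if a == 0.0:                 # degenerate zero-length segment
           return c <= 0.0
       disc = b * b - 4.0 * a * c
       if disc < 0.0:               # line misses the circle entirely
           return False
       sq = math.sqrt(disc)
       t1 = (-b - sq) / (2.0 * a)
       t2 = (-b + sq) / (2.0 * a)
       # Hit if a crossing lies on the segment, or the segment sits
       # entirely between the two crossings (inside the circle).
       return 0.0 <= t1 <= 1.0 or 0.0 <= t2 <= 1.0 or (t1 < 0.0 and t2 > 1.0)
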
Algorithm Parameters
++++++++++++++++++++

Key parameters affecting performance:

- **Number of particles** (n_particles): more particles improve exploration but increase runtime
- **Maximum iterations** (max_iter): more iterations improve convergence but increase runtime
- **Inertia weight** (w): high values favor exploration, low values favor exploitation
- **Cognitive coefficient** (c1): attraction to the personal best
- **Social coefficient** (c2): attraction to the global best

Typical values:

- n_particles: 20-50
- max_iter: 100-300
- w: 0.9 → 0.4 (linearly decreasing)
- c1, c2: 1.5-2.0
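
The 0.9 → 0.4 schedule above is the standard linearly decreasing inertia weight. A sketch of that schedule (the helper name is hypothetical):

.. code-block:: python

   def inertia_weight(t, max_iter, w_start=0.9, w_end=0.4):
       """Linearly decrease w from w_start to w_end over the run,
       shifting the swarm from exploration toward exploitation."""
       return w_start - (w_start - w_end) * t / max_iter
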

Advantages
++++++++++

- **Global optimization**: can escape local minima, unlike gradient-based methods
- **No derivatives needed**: works with non-differentiable fitness landscapes
- **Parallel exploration**: multiple particles search simultaneously
- **Simple implementation**: few parameters and straightforward logic
- **Flexible**: easily adapted to different environments and constraints

Disadvantages
+++++++++++++

- **Stochastic**: results may vary between runs
- **Parameter sensitive**: performance depends heavily on parameter tuning
- **No optimality guarantee**: a metaheuristic without a convergence proof
- **Computational cost**: requires many fitness evaluations
- **Prone to stagnation**: the entire swarm can converge prematurely to a local minimum if exploration is insufficient

Code Link
+++++++++

.. autofunction:: PathPlanning.ParticleSwarmOptimization.particle_swarm_optimization.main

Usage Example
+++++++++++++

.. code-block:: python

   from PathPlanning.ParticleSwarmOptimization.particle_swarm_optimization import main

   # Run PSO path planning with visualization
   main()

References
++++++++++

- `Particle swarm optimization - Wikipedia <https://en.wikipedia.org/wiki/Particle_swarm_optimization>`__
- Kennedy, J.; Eberhart, R. (1995). "Particle Swarm Optimization". Proceedings of the IEEE International Conference on Neural Networks. Vol. IV, pp. 1942–1948.
- Shi, Y.; Eberhart, R. (1998). "A Modified Particle Swarm Optimizer". Proceedings of the IEEE International Conference on Evolutionary Computation.
- `A Gentle Introduction to Particle Swarm Optimization <https://machinelearningmastery.com/a-gentle-introduction-to-particle-swarm-optimization/>`__
- Clerc, M.; Kennedy, J. (2002). "The particle swarm - explosion, stability, and convergence in a multidimensional complex space". IEEE Transactions on Evolutionary Computation. 6 (1): 58–73.