
Trajectory optimization

From Wikipedia, the free encyclopedia

Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical, or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Carathéodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC).

Although the idea of trajectory optimization has been around for hundreds of years (calculus of variations, brachystochrone problem), it only became practical for real-world problems with the advent of the computer. Many of the original applications of trajectory optimization were in the aerospace industry, computing rocket and missile launch trajectories. More recently, trajectory optimization has also been used in a wide variety of industrial process and robotics applications.[1]

History


Trajectory optimization first showed up in 1697, with the introduction of the brachystochrone problem: find the shape of a wire such that a bead sliding along it will move between two points in the minimum time.[2] This problem is notable in that it optimizes over a curve (the shape of the wire) rather than a single number. The most famous of the solutions was computed using the calculus of variations.

In the 1950s, the digital computer started to make trajectory optimization practical for solving real-world problems. The first optimal control approaches grew out of the calculus of variations, based on the research of Gilbert Ames Bliss and Bryson[3] in America, and Pontryagin[4] in Russia. Pontryagin's maximum principle is of particular note. These early researchers created the foundation of what we now call indirect methods for trajectory optimization.

Much of the early work in trajectory optimization was focused on computing rocket thrust profiles, both in a vacuum and in the atmosphere. This early research discovered many basic principles that are still used today. Another successful application was the climb-to-altitude trajectories of early jet aircraft. Because of the high drag in the transonic region and the low thrust of early jet aircraft, trajectory optimization was key to maximizing climb-to-altitude performance. Optimal-control-based trajectories were responsible for some of the world records. In these situations, the pilot followed a Mach-versus-altitude schedule based on optimal control solutions.

One of the important early problems in trajectory optimization was that of the singular arc, where Pontryagin's maximum principle fails to yield a complete solution. An example of a problem with singular control is the optimization of the thrust of a missile flying at a constant altitude and launched at low speed. Here the solution begins with bang-bang control at maximum possible thrust until the singular arc is reached; the singular control then provides a lower, varying thrust until burnout, at which point bang-bang control drives the thrust to its minimum value of zero. This solution is the foundation of the boost-sustain rocket motor profile widely used today to maximize missile performance.

Applications


There is a wide variety of applications for trajectory optimization, primarily in robotics (industrial automation, manipulation, walking, and path-planning) and in aerospace. It can also be used for modeling and estimation.

Robotic manipulators


Depending on the configuration, open-chain robotic manipulators require a degree of trajectory optimization. For instance, a robotic arm with 7 joints and 7 links (7-DOF) is a redundant system in which a single Cartesian position of the end-effector can correspond to an infinite number of joint-angle configurations; this redundancy can be used to optimize a trajectory, for example to avoid obstacles in the workspace or to minimize the torque in the joints.[5]

Quadrotor helicopters


Trajectory optimization is often used to compute trajectories for quadrotor helicopters. These applications typically use highly specialized algorithms.[6][7] One interesting application, shown by the U.Penn GRASP Lab, is computing a trajectory that allows a quadrotor to fly through a hoop as it is thrown. Another, by the ETH Zurich Flying Machine Arena, involves two quadrotors tossing a pole back and forth between them, with it balanced like an inverted pendulum. The problem of computing minimum-energy trajectories for a quadcopter has also been studied recently.[8]

Manufacturing


Trajectory optimization is used in manufacturing, particularly for controlling chemical processes[9] or computing the desired path for robotic manipulators.[10]

Walking robots


There are a variety of different applications for trajectory optimization within the field of walking robotics. For example, one paper used trajectory optimization of bipedal gaits on a simple model to show that walking is energetically favorable for moving at a low speed and running is energetically favorable for moving at a high speed.[11] As in many other applications, trajectory optimization can be used to compute a nominal trajectory, around which a stabilizing controller is built.[12] Trajectory optimization can be applied to detailed motion planning for complex humanoid robots, such as Atlas.[13] Finally, trajectory optimization can be used for path-planning of robots with complicated dynamics constraints, using reduced-complexity models.[14]

Aerospace


For tactical missiles, the flight profiles are determined by the thrust and lift histories. These histories can be controlled by a number of techniques, including an angle-of-attack command history or an altitude/downrange schedule that the missile must follow. Each combination of missile design factors, desired missile performance, and system constraints results in a new set of optimal control parameters.[15]

Terminology

Decision variables
The set of unknowns to be found using optimization.
Trajectory optimization problem
A special type of optimization problem where the decision variables are functions, rather than real numbers.
Parameter optimization
Any optimization problem where the decision variables are real numbers.
Nonlinear program
A class of constrained parameter optimization where either the objective function or constraints are nonlinear.
Indirect method
An indirect method for solving a trajectory optimization problem proceeds in three steps: 1) Analytically construct the necessary and sufficient conditions for optimality, 2) Discretize these conditions, constructing a constrained parameter optimization problem, 3) Solve that optimization problem.[16]
Direct method
A direct method for solving a trajectory optimization problem consists of two steps: 1) Discretize the trajectory optimization problem directly, converting it into a constrained parameter optimization problem, 2) Solve that optimization problem.[16]
Transcription
The process by which a trajectory optimization problem is converted into a parameter optimization problem. This is sometimes referred to as discretization. Transcription methods generally fall into two categories: shooting methods and collocation methods.
Shooting method
A transcription method that is based on simulation, typically using explicit Runge-Kutta schemes.
Collocation method (Simultaneous Method)
A transcription method that is based on function approximation, typically using implicit Runge-Kutta schemes.
Pseudospectral method (Global Collocation)
A transcription method that represents the entire trajectory as a single high-order orthogonal polynomial.
Mesh (Grid)
After transcription, the formerly continuous trajectory is now represented by a discrete set of points, known as mesh points or grid points.
Mesh refinement
The process by which the discretization mesh is improved by solving a sequence of trajectory optimization problems. Mesh refinement is either performed by sub-dividing a trajectory segment or by increasing the order of the polynomial representing that segment.[17]
Multi-phase trajectory optimization problem
Trajectory optimization over a system with hybrid dynamics can be achieved by posing it as a multi-phase trajectory optimization problem. This is done by composing a sequence of standard trajectory optimization problems that are connected using constraints.[18]

Trajectory optimization techniques


The techniques for solving a trajectory optimization problem can be divided into two categories: indirect and direct. An indirect method works by analytically constructing the necessary and sufficient conditions for optimality, which are then solved numerically. A direct method attempts a direct numerical solution by constructing a sequence of continually improving approximations to the optimal solution.[16]

The optimal control problem is an infinite-dimensional optimization problem, since the decision variables are functions, rather than real numbers. All solution techniques perform transcription, a process by which the trajectory optimization problem (optimizing over functions) is converted into a constrained parameter optimization problem (optimizing over real numbers). Generally, this constrained parameter optimization problem is a non-linear program, although in special cases it can be reduced to a quadratic program or linear program.

Single shooting


Single shooting is the simplest type of trajectory optimization technique. The basic idea is similar to how you would aim a cannon: pick a set of parameters for the trajectory, simulate the entire thing, and then check to see if you hit the target. The entire trajectory is represented as a single segment, with a single constraint, known as a defect constraint, requiring that the final state of the simulation matches the desired final state of the system. Single shooting is effective for problems that are either simple or have an extremely good initialization. Both the indirect and direct formulation tend to have difficulties otherwise.[16][19][20]
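The approach can be sketched concretely on a toy problem: drive a double integrator from rest at the origin to rest at position 1 while minimizing control effort. The problem, discretization, and all names below are illustrative (not from any particular reference); the sketch uses NumPy and SciPy's general-purpose SLSQP solver in place of a specialized NLP solver.

```python
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 20          # horizon and number of piecewise-constant controls
dt = T / N

def simulate(u):
    """Forward-Euler simulation of the double integrator x' = v, v' = u."""
    x, v = 0.0, 0.0
    for uk in u:
        x += dt * v
        v += dt * uk
    return np.array([x, v])

def effort(u):
    """Objective: discrete approximation of the integral of u^2."""
    return dt * np.sum(u**2)

def defect(u):
    """Single defect constraint: simulated final state must hit (1, 0)."""
    return simulate(u) - np.array([1.0, 0.0])

sol = minimize(effort, np.zeros(N),
               constraints={"type": "eq", "fun": defect})
```

Only the controls are decision variables; the state appears implicitly through the simulation, which is what makes the resulting program small and dense.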

Multiple shooting


Multiple shooting is a simple extension of single shooting that renders it far more effective. Rather than representing the entire trajectory as a single simulation (segment), the algorithm breaks the trajectory into many shorter segments, and a defect constraint is added between each pair. The result is a large sparse non-linear program, which tends to be easier to solve than the small dense program produced by single shooting.[19][20]
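A multiple-shooting version of the same toy double-integrator problem (again an illustrative sketch, with assumed names) splits the horizon into segments and adds the interior segment-start states to the decision vector, with defect constraints gluing the segments together:

```python
import numpy as np
from scipy.optimize import minimize

T, M, K = 1.0, 4, 5                      # M segments, K Euler steps each
dt = T / (M * K)
x0, xf = np.array([0.0, 0.0]), np.array([1.0, 0.0])

def unpack(z):
    u = z[:M * K].reshape(M, K)          # controls, one row per segment
    s = z[M * K:].reshape(M - 1, 2)      # interior segment-start states
    return u, s

def segment(x, u_seg):
    """Simulate one segment of the double integrator x' = v, v' = u."""
    x = np.array(x, dtype=float)
    for uk in u_seg:
        x = x + dt * np.array([x[1], uk])
    return x

def defects(z):
    """Each segment's end must match the next segment's start (and xf)."""
    u, s = unpack(z)
    starts = np.vstack([x0, s])
    ends = np.array([segment(starts[i], u[i]) for i in range(M)])
    gaps = [ends[i] - starts[i + 1] for i in range(M - 1)]
    gaps.append(ends[-1] - xf)
    return np.concatenate(gaps)

def effort(z):
    u, _ = unpack(z)
    return dt * np.sum(u**2)

z0 = np.zeros(M * K + 2 * (M - 1))
sol = minimize(effort, z0, constraints={"type": "eq", "fun": defects})
```

The constraint Jacobian is sparse: each defect involves only one segment's controls and two segment-start states, which is the structure large NLP solvers exploit.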

Direct collocation


Direct collocation methods work by approximating the state and control trajectories using polynomial splines. These methods are sometimes referred to as direct transcription. Trapezoidal collocation is a commonly used low-order direct collocation method. The dynamics, path objective, and control are all represented using linear splines, and the dynamics are satisfied using trapezoidal quadrature. Hermite-Simpson collocation is a common medium-order direct collocation method. The state is represented by a cubic Hermite spline, and the dynamics are satisfied using Simpson quadrature.[16][20]
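Trapezoidal collocation on the same toy double-integrator problem (an illustrative sketch, with assumed names) makes both the states and the controls at the knot points decision variables, and enforces the dynamics through the trapezoid rule:

```python
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 10                              # horizon and number of intervals
dt = T / N

def unpack(z):
    x = z[:2 * (N + 1)].reshape(N + 1, 2)   # states (position, velocity)
    u = z[2 * (N + 1):]                     # controls at the knot points
    return x, u

def f(xk, uk):
    """Double-integrator dynamics x' = (v, u)."""
    return np.array([xk[1], uk])

def defects(z):
    """Trapezoid rule x[k+1] - x[k] = (dt/2)(f[k] + f[k+1]),
    plus boundary conditions x[0] = (0, 0) and x[N] = (1, 0)."""
    x, u = unpack(z)
    gaps = [x[k + 1] - x[k]
            - 0.5 * dt * (f(x[k], u[k]) + f(x[k + 1], u[k + 1]))
            for k in range(N)]
    gaps.append(x[0] - np.array([0.0, 0.0]))
    gaps.append(x[N] - np.array([1.0, 0.0]))
    return np.concatenate(gaps)

def effort(z):
    _, u = unpack(z)
    return dt * np.sum(u**2)

z0 = np.zeros(2 * (N + 1) + (N + 1))
sol = minimize(effort, z0, constraints={"type": "eq", "fun": defects})
```

Because the state appears explicitly in the decision vector, path constraints (e.g. position bounds) can be imposed directly on the decision variables.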

Orthogonal collocation


Orthogonal collocation is technically a subset of direct collocation, but the implementation details are so different that it can reasonably be considered its own set of methods. Orthogonal collocation differs from direct collocation in that it typically uses high-order splines, and each segment of the trajectory might be represented by a spline of a different order. The name comes from the use of orthogonal polynomials in the state and control splines.[20][21]

Pseudospectral discretization


In pseudospectral discretization the entire trajectory is represented by a collection of basis functions in the time domain (independent variable). The basis functions need not be polynomials. Pseudospectral discretization is also known as spectral collocation.[22][23][24] When used to solve a trajectory optimization problem whose solution is smooth, a pseudospectral method will achieve spectral (exponential) convergence.[25] If the trajectory is not smooth, the convergence is still very fast, faster than Runge-Kutta methods.[26][27]
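The building block of such methods is a spectral differentiation matrix, which differentiates the interpolating polynomial through the function values at the collocation nodes. The sketch below follows the standard Chebyshev construction given by Trefethen (the helper name `cheb` is illustrative) and demonstrates the spectral accuracy that pseudospectral discretizations inherit:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))     # "negative sum trick" for the diagonal
    return D, x

D, x = cheb(16)
# Differentiating a smooth function is accurate to near machine precision
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```

In a pseudospectral transcription, the same matrix converts the dynamics constraint x' = f(x, u) into algebraic constraints D X = F at the nodes.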

Temporal Finite Elements


In 1990 Dewey H. Hodges and Robert R. Bless[28] proposed a weak Hamiltonian finite element method for optimal control problems. The idea was to derive a weak variational form of the first-order necessary conditions for optimality, discretize the time domain into finite intervals, and use a simple zeroth-order polynomial representation of the states, controls, and adjoints over each interval.

Differential dynamic programming


Differential dynamic programming is somewhat different from the other techniques described here. In particular, it does not cleanly separate transcription from optimization. Instead, it performs a sequence of iterative forward and backward passes along the trajectory. Each forward pass satisfies the system dynamics, and each backward pass satisfies the optimality conditions for control. Eventually, this iteration converges to a trajectory that is both feasible and optimal.[29]
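For linear dynamics and quadratic cost, the DDP backward/forward sweep reduces to the classic discrete-time LQR recursion and converges in a single iteration. The sketch below (illustrative numbers; a full DDP implementation also builds local quadratic expansions of nonlinear dynamics and cost around the current trajectory) shows that structure on a double integrator:

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])    # discrete double integrator
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                  # state cost weights
R = np.array([[0.01]])                   # control cost weight
N = 50                                   # horizon length

# Backward pass: propagate the value-function Hessian V and store gains.
V = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ V @ B, B.T @ V @ A)
    gains.append(K)
    V = Q + A.T @ V @ (A - B @ K)
gains.reverse()                          # gains[0] is the earliest time step

# Forward pass: roll the dynamics out under the feedback policy u = -K x.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x + B @ (-K @ x)
```

With nonlinear dynamics, the forward pass re-simulates the true system and the sweeps repeat until the trajectory stops changing, which is the iteration described above.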

Diffusion-based trajectory optimization


In contrast to the classical methods above, generative machine-learning methods may be used to generate a desirable trajectory. In particular, diffusion models learn to iteratively reverse a destructive forward process, in which noise is added to data until the data becomes indistinguishable from noise, by estimating the noise to remove at every time step. Given easy-to-sample random noise as input, the reverse diffusion process thus recovers a plausible, noise-free data point. Recent methods[30][31] have parameterized trajectories as matrices of state-action pairs at consecutive time steps and trained a diffusion model to generate such a matrix. To address the controllability of the generated samples, the Diffuser method[30] proposes two techniques to steer the generated sample, thereby reducing the optimization problem to a sampling problem. First, guided diffusion[32][33] can be used to incorporate a cost (or reward) function into the generation process: the gradient of the cost function modifies the mean of the estimated noise at every time step. Second, for motion planning problems in which the start and end states of the trajectory are known and the trajectory must comply with constraints to find a viable path, an inpainting approach can be used. Similar to the first technique, a prior modifies the distribution of trajectories; in this case it assigns high probability to trajectories satisfying the constraints (e.g. arriving at a given state at a given time step) and zero probability to all other trajectories. As a result, sampling from this distribution produces trajectories that satisfy the constraints.
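The two steering techniques can be illustrated with a deliberately toy sketch: the trajectory is a matrix of states, the learned denoiser is replaced by a simple shrinkage stand-in, guidance follows the gradient of a smoothness cost, and inpainting clamps the known start and goal states after each reverse step. Everything here is hypothetical, not the actual Diffuser model:

```python
import numpy as np

rng = np.random.default_rng(0)
T, steps = 16, 50                       # trajectory length, reverse steps
start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def denoise(tau):
    """Stand-in for a learned denoiser: shrink toward the data mean (0)."""
    return 0.9 * tau

def cost_grad(tau):
    """Gradient of a smoothness cost penalizing jumps between neighbors."""
    g = np.zeros_like(tau)
    d = tau[1:] - tau[:-1]
    g[1:] += 2 * d
    g[:-1] -= 2 * d
    return g

tau = rng.normal(size=(T, 2))           # start from pure noise
for _ in range(steps):
    tau = denoise(tau) - 0.05 * cost_grad(tau)   # guided reverse step
    tau[0], tau[-1] = start, goal                # inpaint known endpoints
```

Because the clamped endpoints survive every reverse step, any sample drawn this way satisfies the boundary constraints by construction.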

Comparison of techniques


There are many techniques to choose from when solving a trajectory optimization problem. There is no single best method; some methods perform better on specific problems. This section provides a rough understanding of the trade-offs between methods.

Indirect vs. direct methods


When solving a trajectory optimization problem with an indirect method, you must explicitly construct the adjoint equations and their gradients. This is often difficult to do, but it gives an excellent accuracy metric for the solution. Direct methods are much easier to set up and solve, but do not have a built-in accuracy metric.[16] As a result, direct methods are more widely used, especially in non-critical applications. Indirect methods still have a place in specialized applications, particularly aerospace, where accuracy is critical.

One place where indirect methods have particular difficulty is on problems with path inequality constraints. These problems tend to have solutions for which the constraint is partially active. When constructing the adjoint equations for an indirect method, the user must explicitly write down when the constraint is active in the solution, which is difficult to know a priori. One solution is to use a direct method to compute an initial guess, which is then used to construct a multi-phase problem where the constraint is prescribed. The resulting problem can then be solved accurately using an indirect method.[16]

Shooting vs. collocation


Single shooting methods are best used for problems where the control is very simple (or there is an extremely good initial guess), for example a satellite mission-planning problem in which the only control is the magnitude and direction of an initial impulse from the engines.[19]

Multiple shooting tends to be good for problems with relatively simple control, but complicated dynamics. Although path constraints can be used, they make the resulting nonlinear program relatively difficult to solve.

Direct collocation methods are good for problems where the accuracy of the control and the state are similar. These methods tend to be less accurate than others (due to their low-order), but are particularly robust for problems with difficult path constraints.

Orthogonal collocation methods are best for obtaining high-accuracy solutions to problems where the accuracy of the control trajectory is important. Some implementations have trouble with path constraints. These methods are particularly good when the solution is smooth.

References

  1. ^ Qi Gong; Wei Kang; Bedrossian, N. S.; Fahroo, F.; Pooya Sekhavat; Bollino, K. (December 2007). "Pseudospectral Optimal Control for Military and Industrial Applications". 2007 46th IEEE Conference on Decision and Control. pp. 4128–4142. doi:10.1109/CDC.2007.4435052. ISBN 978-1-4244-1497-0. S2CID 2935682.
  2. ^ 300 Years of Optimal Control: From The Brachystochrone to the Maximum Principle, Hector J. Sussmann and Jan C. Willems. IEEE Control Systems Magazine, 1997.
  3. ^ Bryson, Ho, Applied Optimal Control, Blaisdell Publishing Company, 1969, p 246.
  4. ^ L.S. Pontryagin, The Mathematical Theory of Optimal Processes, New York, Interscience, 1962
  5. ^ Malik, Aryslan; Henderson, Troy; Prazenica, Richard (January 2021). "Trajectory Generation for a Multibody Robotic System using the Product of Exponentials Formulation". AIAA Scitech 2021 Forum: 2016. doi:10.2514/6.2021-2016. ISBN 978-1-62410-609-5. S2CID 234251587.
  6. ^ Daniel Mellinger and Vijay Kumar, "Minimum snap trajectory generation and control for quadrotors" International Conference on Robotics and Automation, IEEE 2011.
  7. ^ Markus Hehn and Raffaello D'Andrea, "Real-Time Trajectory Generation for Quadrocopters" IEEE Transactions on Robotics, 2015.
  8. ^ Fabio Morbidi, Roel Cano, David Lara, "Minimum-Energy Path Generation for a Quadrotor UAV" in Proc. IEEE International Conference on Robotics and Automation, pp. 1492-1498, 2016.
  9. ^ John W. Eaton and James B. Rawlings. "Model-Predictive Control of Chemical Processes" Chemical Engineering Science, Vol 47, No 4. 1992.
  10. ^ T. Chettibi, H. Lehtihet, M. Haddad, S. Hanchi, "Minimum cost trajectory planning for industrial robots" European Journal of Mechanics, 2004.
  11. ^ Manoj Srinivasan and Andy Ruina. "Computer optimization of a minimal biped model discovers walking and running" Nature, 2006.
  12. ^ E.R. Westervelt, J.W. Grizzle, and D.E. Koditschek. "Hybrid Zero Dynamics of Planar Biped Walkers" IEEE Transactions on Automatic Control, 2003.
  13. ^ Michael Posa, Scott Kuindersma, and Russ Tedrake. "Optimization and stabilization of trajectories for constrained dynamical systems." International Conference on Robotics and Automation, IEEE 2016.
  14. ^ Hongkai Dai, Andres Valenzuela, and Russ Tedrake. "Whole-body motion planning with Centroidal Dynamics and Full Kinematics" International Conference on Humanoid Robots, IEEE 2014.
  15. ^ Phillips, C.A, "Energy Management for a Multiple Pulse Missile", AIAA Paper 88-0334, Jan., 1988
  16. ^ a b c d e f g John T. Betts "Practical Methods for Optimal Control and Estimation Using Nonlinear Programming" SIAM Advances in Design and Control, 2010.
  17. ^ Christopher L. Darby, William W. Hager, and Anil V. Rao. "An hp-adaptive pseudospectral method for solving optimal control problems." Optimal Control Applications and Methods, 2010.
  18. ^ Patterson, Michael A.; Rao, Anil V. (2014-10-01). "GPOPS-II: A MATLAB Software for Solving Multiple-Phase Optimal Control Problems Using hp-Adaptive Gaussian Quadrature Collocation Methods and Sparse Nonlinear Programming". ACM Trans. Math. Softw. 41 (1): 1:1–1:37. doi:10.1145/2558904. ISSN 0098-3500.
  19. ^ a b c Survey of Numerical Methods for Trajectory Optimization; John T. Betts Journal of Guidance, Control, and Dynamics 1998; 0731-5090 vol.21 no.2 (193-207)
  20. ^ a b c d Anil V. Rao "A survey of numerical methods for optimal control" Advances in Astronautical Sciences, 2009.
  21. ^ Camila C. Francolin, David A. Benson, William W. Hager, Anil V. Rao. "Costate Estimation in Optimal Control Using Integral Gaussian Quadrature Orthogonal Collocation Methods" Optimal Control Applications and Methods, 2014.
  22. ^ R., Malik, Mujeeb (1984). A spectral collocation method for the Navier-Stokes equations. National Aeronautics and Space Administration, Langley Research Center. OCLC 11642811.{{cite book}}: CS1 maint: multiple names: authors list (link)
  23. ^ "Spectral Methods and Pseudospectral Methods", Spectral Methods and Their Applications, WORLD SCIENTIFIC, pp. 100–187, May 1998, doi:10.1142/9789812816641_0004, ISBN 978-981-02-3333-4, retrieved 2021-04-23
  24. ^ Gong, Qi. Spectral and Pseudospectral Optimal Control Over Arbitrary Grids. OCLC 1185648645.
  25. ^ Lloyd N. Trefethen. "Approximation Theory and Approximation Practice", SIAM 2013
  26. ^ Kang, Wei (November 2010). "Rate of convergence for the Legendre pseudospectral optimal control of feedback linearizable systems". Journal of Control Theory and Applications. 8 (4): 391–405. doi:10.1007/s11768-010-9104-0. ISSN 1672-6340. S2CID 122945121.
  27. ^ Trefethen, Lloyd N. (Lloyd Nicholas) (January 2019). Approximation theory and approximation practice. ISBN 978-1-61197-594-9. OCLC 1119061092.
  28. ^ D. H. Hodges and R. R. Bless, "A Weak Hamiltonian Finite Element Method for Optimal Control Problems", Journal of Guidance, Control, and Dynamics, 1990. https://arc.aiaa.org/doi/10.2514/3.20616
  29. ^ David H. Jacobson, David Q. Mayne. "Differential Dynamic Programming" Elsevier, 1970.
  30. ^ a b Janner, Michael; Du, Yilun; Tenenbaum, Joshua B.; Levine, Sergey (2022-12-20), Planning with Diffusion for Flexible Behavior Synthesis, doi:10.48550/arXiv.2205.09991, retrieved 2024-11-21
  31. ^ Zhou, Guangyao; Swaminathan, Sivaramakrishnan; Raju, Rajkumar Vasudeva; Guntupalli, J. Swaroop; Lehrach, Wolfgang; Ortiz, Joseph; Dedieu, Antoine; Lázaro-Gredilla, Miguel; Murphy, Kevin (2024-10-07), Diffusion Model Predictive Control, doi:10.48550/arXiv.2410.05364, retrieved 2024-11-21
  32. ^ Sohl-Dickstein, Jascha; Weiss, Eric; Maheswaranathan, Niru; Ganguli, Surya (2015-06-01). "Deep Unsupervised Learning using Nonequilibrium Thermodynamics". Proceedings of the 32nd International Conference on Machine Learning. PMLR: 2256–2265.
  33. ^ Dhariwal, Prafulla; Nichol, Alexander (2021). "Diffusion Models Beat GANs on Image Synthesis". Advances in Neural Information Processing Systems. 34. Curran Associates, Inc.: 8780–8794.