Optimal Control Theory: An Introduction
About this ebook
Optimal control theory is the science of maximizing the returns from and minimizing the costs of the operation of physical, social, and economic processes. Geared toward upper-level undergraduates, this text introduces three aspects of optimal control theory: dynamic programming, Pontryagin's minimum principle, and numerical techniques for trajectory optimization.
Chapters 1 and 2 focus on describing systems and evaluating their performances. Chapter 3 deals with dynamic programming. The calculus of variations and Pontryagin's minimum principle are the subjects of chapters 4 and 5, and chapter 6 examines iterative numerical techniques for finding optimal controls and trajectories. Numerous problems, intended to introduce additional topics as well as to illustrate basic concepts, appear throughout the text.
Book preview
Optimal Control Theory - Donald E. Kirk
I  Describing the System and Evaluating Its Performance

1  Introduction
Classical control system design is generally a trial-and-error process in which various methods of analysis are used iteratively to determine the design parameters of an acceptable
system. Acceptable performance is generally defined in terms of time and frequency domain criteria such as rise time, settling time, peak overshoot, gain and phase margin, and bandwidth. Radically different performance criteria must be satisfied, however, by the complex, multiple-input, multiple-output systems required to meet the demands of modern technology. For example, the design of a spacecraft attitude control system that minimizes fuel expenditure is not amenable to solution by classical methods. A new and direct approach to the synthesis of these complex systems, called optimal control theory, has been made feasible by the development of the digital computer.
The objective of optimal control theory is to determine the control signals that will cause a process to satisfy the physical constraints and at the same time minimize (or maximize) some performance criterion. Later, we shall give a more explicit mathematical statement of the optimal control problem,
but first let us consider the matter of problem formulation.
1.1 PROBLEM FORMULATION
The axiom "A problem well put is a problem half solved" may be a slight exaggeration, but its intent is nonetheless appropriate. In this section, we shall review the important aspects of problem formulation, and introduce the notation and nomenclature to be used in the following chapters.
The formulation of an optimal control problem requires:
1. A mathematical description (or model) of the process to be controlled.
2. A statement of the physical constraints.
3. Specification of a performance criterion.
The Mathematical Model
A nontrivial part of any control problem is modeling the process. The objective is to obtain the simplest mathematical description that adequately predicts the response of the physical system to all anticipated inputs. Our discussion will be restricted to systems described by ordinary differential equations (in state variable form).† Thus, if

x1(t), x2(t), . . . , xn(t)

are the state variables (or simply the states) of the process at time t, and

u1(t), u2(t), . . . , um(t)

are control inputs to the process at time t, then the system may be described by n first-order differential equations

ẋ1(t) = a1(x1(t), x2(t), . . . , xn(t), u1(t), u2(t), . . . , um(t), t)
ẋ2(t) = a2(x1(t), x2(t), . . . , xn(t), u1(t), u2(t), . . . , um(t), t)
⋮
ẋn(t) = an(x1(t), x2(t), . . . , xn(t), u1(t), u2(t), . . . , um(t), t).     (1.1-1)‡
We shall define

x(t) = [x1(t) x2(t) . . . xn(t)]^T

as the state vector of the system, and

u(t) = [u1(t) u2(t) . . . um(t)]^T

as the control vector. The state equations can then be written

ẋ(t) = a(x(t), u(t), t),     (1.1-2)

where the definition of a is apparent by comparison with (1.1-1).
Figure 1-1 A simplified control problem
Example 1.1-1. The car shown parked in Fig. 1-1 is to be driven in a straight line away from point O. The distance of the car from O at time t is denoted by d(t). To simplify the model, let us approximate the car by a unit point mass that can be accelerated by using the throttle or decelerated by using the brake. The differential equation is

d̈(t) = α(t) + β(t),

where the control α is throttle acceleration and −β is braking deceleration. Selecting position and velocity as state variables, that is,

x1(t) = d(t),     x2(t) = ḋ(t),

and letting

u1(t) = α(t),     u2(t) = β(t),

we find that the state equations become

ẋ1(t) = x2(t)
ẋ2(t) = u1(t) + u2(t),

or, using matrix notation,

ẋ(t) = [0 1; 0 0] x(t) + [0 0; 1 1] u(t).

This is the mathematical model of the process in state form.
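To see how such a state-variable model is used computationally, here is a minimal simulation sketch of the point-mass car; the control history, time span, and numerical values are illustrative assumptions and are not the optimal control.

```python
# Minimal simulation sketch of the point-mass car model (Example 1.1-1):
#   x1 = position d(t), x2 = velocity; u1 = throttle acceleration (>= 0),
#   u2 = braking deceleration (<= 0).  All numbers below are assumptions.
from scipy.integrate import solve_ivp

def car_dynamics(t, x, u1, u2):
    """State equations: x1' = x2, x2' = u1(t) + u2(t)."""
    return [x[1], u1(t) + u2(t)]

u1 = lambda t: 2.0 if t < 5.0 else 0.0     # assumed throttle schedule
u2 = lambda t: 0.0 if t < 5.0 else -2.0    # assumed braking schedule

sol = solve_ivp(car_dynamics, (0.0, 10.0), [0.0, 0.0], args=(u1, u2), max_step=0.01)
print("final position:", sol.y[0, -1], "final velocity:", sol.y[1, -1])
```

With these assumed values the car accelerates for five seconds, brakes to rest, and the printed numbers are the values of x1 and x2 at the final time.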
Before we move on to the matter of physical constraints, let us consider two definitions that will be useful later. Let the system be described by the state equation (1.1-2).†
DEFINITION 1-1
A history of control input values during the interval [t0, tf] is denoted by u and is called a control history, or simply a control.
DEFINITION 1-2
A history of state values in the interval [t0, tf] is called a state trajectory and is denoted by x.
The terms "history," "curve," "function," and "trajectory" will be used interchangeably. It is most important to keep in mind the difference between a function and the value of a function. Figure 1-2 shows a single-valued function of time which is denoted by x. The value of the function at time t1 is denoted by x(t1).
Figure 1-2 A function, x, and its value at time t1, x(t1)
Physical Constraints
After we have selected a mathematical model, the next step is to define the physical constraints on the state and control values. To illustrate some typical constraints, let us return to the automobile whose model was determined in Example 1.1-1.
Example 1.1-2. Consider the problem of driving the car in Fig. 1-1 between the points O and e. Assume that the car starts from rest and stops upon reaching point e.
First let us define the state constraints. If t0 is the time of leaving O, and tf is the time of arrival at e, then, clearly,

x1(t0) = 0,     x1(tf) = e.

In addition, since the automobile starts from rest and stops at e,

x2(t0) = 0,     x2(tf) = 0.

In matrix notation these boundary conditions are

x(t0) = [0  0]^T,     x(tf) = [e  0]^T.     (1.1-6)
If we assume that the car does not back up, then the additional constraints

0 ≤ x1(t) ≤ e     and     0 ≤ x2(t)     (1.1-7)

are also imposed.
What are the constraints on the control inputs (acceleration)? We know that the acceleration is bounded by some upper limit which depends on the capability of the engine, and that the maximum deceleration is limited by the braking system parameters. If the maximum acceleration is M1 > 0, and the maximum deceleration is M2 > 0, then the controls must satisfy

0 ≤ u1(t) ≤ M1
−M2 ≤ u2(t) ≤ 0.     (1.1-8)
In addition, if the car starts with G gallons of gas and there are no service stations on the way, another constraint is

∫_{t0}^{tf} [k1 u1(t) + k2 x2(t)] dt ≤ G,     (1.1-9)

which assumes that the rate of gas consumption is proportional to both acceleration and speed with constants of proportionality k1 and k2.
Now that we have an idea of typical constraints that may be encountered, let us make these concepts more precise.
DEFINITION 1-3
A control history which satisfies the control constraints during the entire time interval [t0, tf] is called an admissible control.
We shall denote the set of admissible controls by U; the notation u ∈ U means that the control history u is admissible.
To illustrate the concept of admissibility, the acceleration histories shown in Fig. 1-3 are admissible if they satisfy the consumed-fuel constraint of Eq. (1.1-9). In this example, the set of admissible controls U is defined by the inequalities in (1.1-8) and (1.1-9).
Figure 1-3 Some acceleration histories
DEFINITION 1-4
A state trajectory which satisfies the state variable constraints during the entire time interval [t0, tf] is called an admissible trajectory.
The set of admissible state trajectories will be denoted by X; the notation x ∈ X means that the trajectory x is admissible.
In Example 1.1-2 the set of admissible state trajectories X is specified by the conditions given in Eqs. (1.1-6), (1.1-7), and (1.1-9). In general, the final state of a system will be required to lie in a specified region S of the (n + 1)-dimensional state-time space. We shall call S the target set. If the final state and the final time are fixed, then S is a point. In the automobile problem of Example 1.1-2 the target set was the line shown in Fig. 1-4(a). If the automobile had been required to arrive within three feet of e with zero terminal velocity, the target set would have been as shown in Fig. 1-4(b).
Admissibility is an important concept, because it reduces the range of values that can be assumed by the states and controls. Rather than consider all control histories and their trajectories to see which are best (according to some criterion), we investigate only those trajectories and controls that are admissible.
Figure 1-4 Target sets for the automobile problem: (a) x1(tf) = e, x2(tf) = 0; (b) |x1(tf) − e| ≤ 3, x2(tf) = 0
The Performance Measure
In order to evaluate the performance of a system quantitatively, the designer selects a performance measure. An optimal control is defined as one that minimizes (or maximizes) the performance measure. In certain cases the problem statement may clearly indicate what to select for a performance measure, whereas in other problems the selection is a subjective matter. For example, the statement, "Transfer the system from point A to point B as quickly as possible," clearly indicates that elapsed time is the performance measure to be minimized. On the other hand, the statement, "Maintain the position and velocity of the system near zero with a small expenditure of control energy," does not instantly suggest a unique performance measure. In such problems the designer may be required to try several performance measures before selecting one which yields what he considers to be optimal performance. We shall discuss the selection of a performance measure in more detail in Chapter 2.
Example 1.1-3. Let us return to the automobile problem begun in Example 1.1-1. The state equations and physical constraints have been defined; now we turn to the selection of a performance measure. Suppose the objective is to make the car reach point e as quickly as possible; then the performance measure J is given by

J = tf − t0.
In all that follows it will be assumed that the performance of a system is evaluated by a measure of the form

J = h(x(tf), tf) + ∫_{t0}^{tf} g(x(t), u(t), t) dt,

where t0 and tf are the initial and final time; h and g are scalar functions. tf may be specified or "free," depending on the problem statement.
Starting from the initial state x(t0) = x0 and applying a control signal u(t), for t ∈ [t0, tf], causes a system to follow some state trajectory; the performance measure assigns a unique real number to each trajectory of the system.
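As a concrete illustration of how such a measure assigns a number to a trajectory, the sketch below evaluates a measure of this form on a sampled trajectory; the particular h, g, trajectory, and control are assumptions chosen only for illustration.

```python
# Hedged sketch: evaluating a performance measure of the form
#   J = h(x(tf), tf) + integral over [t0, tf] of g(x(t), u(t), t) dt
# on a sampled trajectory.  The trajectory and the choices of h and g are assumed.
import numpy as np

def performance_measure(t, x, u, h, g):
    """t: (N,) sample times, x: (N, n) states, u: (N, m) controls."""
    g_vals = np.array([g(xi, ui, ti) for ti, xi, ui in zip(t, x, u)])
    running_cost = np.sum(0.5 * (g_vals[1:] + g_vals[:-1]) * np.diff(t))  # trapezoidal rule
    return h(x[-1], t[-1]) + running_cost

h = lambda xf, tf: float(xf @ xf)                 # assumed terminal cost
g = lambda x, u, t: float(x @ x + u @ u)          # assumed running cost

t = np.linspace(0.0, 10.0, 1001)
x = np.column_stack([np.exp(-t), -np.exp(-t)])    # an assumed state trajectory
u = np.zeros((t.size, 1))                         # an assumed control history
print("J =", performance_measure(t, x, u, h, g))
```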
With the background material we have accumulated it is now possible to present an explicit statement of the optimal control problem.
The Optimal Control Problem
The theory developed in the subsequent chapters is aimed at solving the following problem.
Find an admissible control u* which causes the system

ẋ(t) = a(x(t), u(t), t)

to follow an admissible trajectory x* that minimizes the performance measure

J = h(x(tf), tf) + ∫_{t0}^{tf} g(x(t), u(t), t) dt.
u* is called an optimal control and x* an optimal trajectory.
Several comments are in order here. First, we may not know in advance that an optimal control exists; that is, it may be impossible to find a control which (a) is admissible and (b) causes the system to follow an admissible trajectory. Since existence theorems are in rather short supply, we shall, in most cases, attempt to find an optimal control rather than try to prove that one exists.
Second, even if an optimal control exists, it may not be unique. Nonunique optimal controls may complicate computational procedures, but they do allow the possibility of choosing among several controller configurations. This is certainly helpful to the designer, because he can then consider other factors, such as cost, size, reliability, etc., which may not have been included in the performance measure.
Third, when we say that u* causes the performance measure to be minimized, we mean that

J* = h(x*(tf), tf) + ∫_{t0}^{tf} g(x*(t), u*(t), t) dt ≤ h(x(tf), tf) + ∫_{t0}^{tf} g(x(t), u(t), t) dt

for all admissible controls u and the trajectories x they produce. The above inequality states that an optimal control and its trajectory cause the performance measure to have a value smaller than (or perhaps equal to) the performance measure for any other admissible control and trajectory. Thus, we are seeking the absolute or global minimum of J, not merely local minima. Of course, one way to find the global minimum is to determine all of the local minima and then simply pick out one (or more) that yields the smallest value for the performance measure.
It may be helpful to visualize the optimization as shown in Fig. 1-5. u(1), u(2), u(3), and u(4) are "points" at which J has local, or relative, minima; u(1) is the "point" where J has its global, or absolute, minimum.
Finally, observe that if the objective is to maximize some measure of system performance, the theory we shall develop still applies because this is the same as minimizing the negative of this performance measure. Henceforth, we shall speak, with no lack of generality, of minimizing the performance measure.
Figure 1-5 A representation of the optimization problem
Example 1.1-4. To illustrate a complete problem formulation, let us now summarize the results of Example 1.1-1, using the notation and definitions which have been developed.
The state equations are

ẋ1(t) = x2(t)
ẋ2(t) = u1(t) + u2(t).

The set of admissible states X is partially specified by the boundary conditions

x(t0) = [0  0]^T,     x(tf) = [e  0]^T,

and the inequalities

0 ≤ x1(t) ≤ e,     0 ≤ x2(t).

The set of admissible controls U is partially defined by the constraints

0 ≤ u1(t) ≤ M1,     −M2 ≤ u2(t) ≤ 0.

The inequality constraint

∫_{t0}^{tf} [k1 u1(t) + k2 x2(t)] dt ≤ G

completes the description of the admissible states and controls.
The solution to this problem (which is left as an exercise for the reader in a later chapter) is shown in Fig. 1-6. We have also assumed that the car has enough fuel available to reach point e using the control shown.
Figure 1-6 The optimal control and trajectory for the automobile problem
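A hedged numerical sketch of the kind of bang-bang behavior pictured in Fig. 1-6 (full throttle up to a switching time, then full braking) is given below. Every numerical value (M1, M2, e, k1, k2, G) and the switching-time formula are illustrative assumptions layered on the example, not results quoted from the text.

```python
# Hedged sketch of a full-throttle-then-full-brake control for the car problem.
import numpy as np

M1, M2, e = 2.0, 4.0, 100.0            # assumed limits and target distance
k1, k2, G = 0.05, 0.01, 10.0           # assumed fuel-rate constants and fuel supply

# Rest-to-rest transfer: switch when the remaining distance can just be covered
# while braking at M2 (a standard minimum-time calculation, stated without proof).
t_switch = np.sqrt(2.0 * e * M2 / (M1 * (M1 + M2)))
t_final = t_switch * (1.0 + M1 / M2)

dt = 1e-3
t = np.arange(0.0, t_final, dt)
u1 = np.where(t < t_switch, M1, 0.0)
u2 = np.where(t < t_switch, 0.0, -M2)

x2 = np.cumsum((u1 + u2) * dt)                  # velocity (Euler integration)
x1 = np.cumsum(x2 * dt)                         # position
fuel = np.sum((k1 * u1 + k2 * x2) * dt)         # left-hand side of (1.1-9)
print(f"x1(tf) ~ {x1[-1]:.1f}, x2(tf) ~ {x2[-1]:.3f}, fuel used ~ {fuel:.2f} (G = {G})")
```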
Example 1.1-5. Let us now consider what would happen if the preceding problem had been improperly formulated. Suppose that the control constraints had not been recognized. If we let

u(t) = e [dδ(t − t0)/dt],     (1.1-15)

where δ(t − t0) is a unit impulse function that occurs at time t0,† then

x2(t) = e δ(t − t0)

and

x1(t) = e 1(t − t0)

[1(t − t0) represents a unit step function at t = t0]. Figure 1-7 shows the state trajectory which results from applying the "optimal" control in (1.1-15). Unfortunately, although the desired transfer from point O to point e is accomplished in infinitesimal time, the control required, apart from being rather unsafe, is physically impossible! Thus, we see the importance of correctly formulating problems before attempting their solution.

Figure 1-7 The optimal trajectory resulting from unconstrained controls
Form of the Optimal Control
DEFINITION 1-5
If a functional relationship of the form

u*(t) = f(x(t), t)     (1.1-18)‡

can be found for the optimal control at time t, then the function f is called the optimal control law, or the optimal policy.†
Notice that Eq. (1.1-18) implies that f is a rule which determines the optimal control at time t for any (admissible) state value at time t. For example, if

u*(t) = Fx(t),

where F is an m × n matrix of real constants, then we would say that the optimal control law is linear, time-invariant feedback of the states.
DEFINITION 1-6
If the optimal control is determined as a function of time for a specified initial state value, that is,

u*(t) = e(x(t0), t),

then the optimal control is said to be in open-loop form.
Thus the optimal open-loop control is optimal only for a particular initial state value, whereas, if the optimal control law is known, the optimal control history starting from any state value can be generated.
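The distinction can be made concrete with a small sketch: a feedback law u = Fx(t) can be applied from any initial state, whereas an open-loop history would have to be recomputed for each new x(t0). The plant and the gain F below are assumed for illustration and are not claimed to be optimal.

```python
# Hedged sketch contrasting a control law with an open-loop history: the same
# law u = F x(t) is applied from two different initial states.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # assumed plant (double integrator)
B = np.array([[0.0], [1.0]])
F = np.array([[-1.0, -1.8]])              # assumed stabilizing feedback gain

def simulate(x0, control, t_final=10.0, dt=1e-3):
    """Forward-Euler simulation of x' = Ax + Bu with u = control(x, t)."""
    x, t = np.array(x0, dtype=float), 0.0
    while t < t_final:
        u = control(x, t)
        x = x + dt * (A @ x + B @ u)
        t += dt
    return x

feedback_law = lambda x, t: F @ x         # a control law: usable from any state

print("from [1, 0]:", np.round(simulate([1.0, 0.0], feedback_law), 4))
print("from [5, 2]:", np.round(simulate([5.0, 2.0], feedback_law), 4))
```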
Conceptually, it is helpful to imagine the difference between an optimal control law and an open-loop optimal control as shown in Fig. 1-8; notice, however, that the mere presence of connections from the states to a controller does not, in general, guarantee an optimal control law.†

Figure 1-8 (a) Open-loop optimal control. (b) Optimal control law
Although engineers normally prefer closed-loop solutions to optimal control problems, there are cases when an open-loop control may be feasible. For example, in the radar tracking of a satellite, once the orbit is set very little can happen to cause an undesired change in the trajectory parameters. In this situation a pre-programmed control for the radar antenna might well be used.
A typical example of feedback control is in the classic servomechanism problem where the actual and desired outputs are compared and any deviation produces a control signal that attempts to reduce the discrepancy to zero.
1.2 STATE VARIABLE REPRESENTATION OF SYSTEMS
The starting point for optimal control investigations is a mathematical model in state variable form. In this section we shall summarize the results and notation to be used in the subsequent discussion. There are several excellent texts available for the reader who needs additional background material.†
Why Use State Variables?
Having the mathematical model in state variable form is convenient because
1. The differential equations are ideally suited for digital or analog solution.
2. The state form provides a unified framework for the study of nonlinear and linear systems.
3. The state variable form is invaluable in theoretical investigations.
4. The concept of state has strong physical motivation.
Definition of State of a System
When referring to the state of a system, we shall have the following definition in mind.
DEFINITION 1-7
The state of a system is a set of quantities x1(t), x2(t), . . . , xn(t)
which if known at t = t0 are determined for t ≥ t0 by specifying the inputs to the system for t ≥ t0.
System Classification
Systems are described by the terms linear, nonlinear, time-invariant,† and time-varying. We shall classify systems according to the form of their state equations.‡ For example, if a system is nonlinear and time-varying, the state equations are written

ẋ(t) = a(x(t), u(t), t).

Nonlinear, time-invariant systems are represented by state equations of the form

ẋ(t) = a(x(t), u(t)).

If a system is linear and time-varying its state equations are

ẋ(t) = A(t)x(t) + B(t)u(t),     (1.2-3)

where A(t) and B(t) are n × n and n × m matrices with time-varying elements. State equations for linear, time-invariant systems have the form

ẋ(t) = Ax(t) + Bu(t),     (1.2-4)

where A and B are constant matrices.
Output Equations
The physical quantities that can be measured are called the outputs and are denoted by y1(t), y2(t), . . . , yq(t). If the outputs are nonlinear, time-varying functions of the states and controls, we write the output equations

y(t) = c(x(t), u(t), t).

If the output is related to the states and controls by a linear, time-invariant relationship, then

y(t) = Cx(t) + Du(t),

where C and D are q × n and q × m constant matrices. A nonlinear, time-varying system and a linear, time-invariant system are shown in Fig. 1-9. r(t), which has not been included in the state equations and represents any inputs that are not controlled, is called the reference or command input.

Figure 1-9 (a) Nonlinear system representation. (b) Linear system representation
In our discussion of optimal control theory we shall make the simplifying assumption that the states are all available for measurement; that is, y(t) = x(t).
Solution of the State Equations—Linear Systems
For linear systems the state equations (1.2-3) have the solution

x(t) = φ(t, t0)x(t0) + ∫_{t0}^{t} φ(t, τ)B(τ)u(τ) dτ,

where φ(t, t0) is the state transition matrix† of the system. If the system is time-invariant as well as linear, t0 can be set equal to 0 and the solution of the state equations is given by any of the three equivalent forms

x(t) = L⁻¹{[sI − A]⁻¹x(0) + [sI − A]⁻¹BU(s)}     (1.2-8a)
x(t) = L⁻¹{Φ(s)x(0) + H(s)U(s)}     (1.2-8b)
x(t) = φ(t)x(0) + ∫_0^t φ(t − τ)Bu(τ) dτ,     (1.2-8c)

where U(s) and Φ(s) are the Laplace transforms of u(t) and φ(t), and Φ(s) is the n × n matrix

Φ(s) = [sI − A]⁻¹.

Equation (1.2-8a) results when the state equations (1.2-4) are Laplace transformed and solved for X(s). Equation (1.2-8b) can be obtained by drawing a block diagram (or signal flow graph) of the system and applying Mason's gain formula.‡ Notice that H(s) is the transfer function matrix. The solution in (1.2-8c) can be found by classical methods. The equivalence of these three solutions establishes the correspondences

φ(t) = L⁻¹{Φ(s)} = L⁻¹{[sI − A]⁻¹} = e^{At},     H(s) = Φ(s)B.
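A brief numerical sketch of the time-invariant solution, using a matrix exponential for φ(t) = e^{At} and a simple quadrature for the convolution term in (1.2-8c), is shown below; the matrices, initial state, input, and final time are assumed values.

```python
# Hedged sketch of x(t) = phi(t) x(0) + integral_0^t phi(t - tau) B u(tau) dtau,
# with phi(t) = e^{A t}.  A, B, x(0), u, and T are assumed for illustration.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])

def phi(t):
    """State transition matrix of the constant-A system."""
    return expm(A * t)

T = 5.0
taus = np.linspace(0.0, T, 501)
dtau = taus[1] - taus[0]
u = lambda tau: np.array([1.0])                 # assumed input: unit step

zero_input = phi(T) @ x0
zero_state = sum(phi(T - tau) @ B @ u(tau) for tau in taus) * dtau   # simple quadrature
print("x(T) ~", np.round(zero_input + zero_state, 4))
```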
Properties of the State Transition Matrix
It can be verified that the state transition matrix has the properties shown in Table 1-1 for all t, t0, t1, and t2.

Table 1-1
PROPERTIES OF THE LINEAR SYSTEM STATE TRANSITION MATRIX

Time-varying systems:                    Time-invariant systems:
φ(t, t) = I                              φ(0) = I
φ⁻¹(t, t0) = φ(t0, t)                    φ⁻¹(t) = φ(−t)
φ(t2, t0) = φ(t2, t1) φ(t1, t0)          φ(t1 + t2) = φ(t1) φ(t2)
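These properties are easy to check numerically for a constant A matrix; the short sketch below does so for a randomly chosen A (an assumption made only for the check).

```python
# Numerical check of the transition-matrix properties using phi(t) = e^{A t}.
import numpy as np
from scipy.linalg import expm

A = np.random.default_rng(0).standard_normal((3, 3))   # assumed constant A
phi = lambda t: expm(A * t)

t1, t2 = 0.7, 1.3
print(np.allclose(phi(0.0), np.eye(3)))                 # phi(0) = I
print(np.allclose(phi(t1 + t2), phi(t2) @ phi(t1)))     # composition property
print(np.allclose(np.linalg.inv(phi(t1)), phi(-t1)))    # inverse property
```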
Determination of the State Transition Matrix
For systems having a constant A matrix, the state transition matrix, φ(t), can be determined by any of the following methods:

1. Inverting the matrix [sI − A] and finding the inverse Laplace transform of each element.
2. Using Mason's gain formula to find Φ(s) from a block diagram or signal flow graph of the system [the ijth element of the matrix Φ(s) is given by the transmission Xi(s)/xj(0)] and evaluating the inverse Laplace transform of Φ(s).
3. Evaluating the matrix expansion

φ(t) = e^{At} = I + At + A²t²/2! + A³t³/3! + · · · .†

For high-order systems, evaluating this expansion numerically (with the aid of a digital computer) is the most feasible of these methods.
For systems having a time-varying A matrix the state transition matrix can be found by numerical integration of the matrix differential equation

d/dt [φ(t, t0)] = A(t)φ(t, t0),

with the initial condition φ(t0, t0) = I.
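A minimal sketch of this numerical integration, for an assumed A(t), is given below.

```python
# Minimal sketch: phi(t, t0) for a time-varying A(t), obtained by integrating
# d/dt phi = A(t) phi with phi(t0, t0) = I.  The particular A(t) is assumed.
import numpy as np
from scipy.integrate import solve_ivp

n = 2
A = lambda t: np.array([[0.0, 1.0], [-1.0 - 0.5 * np.sin(t), -0.2]])

def rhs(t, phi_flat):
    """Right-hand side of the matrix ODE, with phi stored as a flat vector."""
    return (A(t) @ phi_flat.reshape(n, n)).ravel()

t0, tf = 0.0, 5.0
sol = solve_ivp(rhs, (t0, tf), np.eye(n).ravel(), rtol=1e-8, atol=1e-10)
print("phi(tf, t0) =\n", np.round(sol.y[:, -1].reshape(n, n), 4))
```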
Controllability and Observability†
Consider the system

ẋ(t) = Ax(t) + Bu(t)     (1.2-13)

for t ≥ t0 with initial state x(t0) = x0.
DEFINITION 1-8
If there is a finite time t1 ≥ t0 and a control u(t), t ∈ [t0, t1], which transfers the state x0 to the origin at time t1, the state x0 is said to be controllable at time t0. If all values of x0 are controllable for all t0, the system is completely controllable, or simply controllable.
Controllability is very important, because we shall consider problems in which the goal is to transfer a system from an arbitrary initial state to the origin while minimizing some performance measure; thus, controllability of the system is a necessary condition for the existence of a solution.
Kalman‡ has shown that a linear, time-invariant system is controllable if and only if the n × mn matrix

E = [B  AB  A²B  · · ·  A^(n−1)B]

has rank n. If there is only one control input (m = 1), a necessary and sufficient condition for controllability is that the n × n matrix E be nonsingular.
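A short sketch of this rank test, applied to an assumed double-integrator example, is given below.

```python
# Rank test for controllability: E = [B  AB  A^2 B  ...  A^(n-1) B] must have rank n.
# The system below (a double integrator) is an assumed example.
import numpy as np

def controllability_matrix(A, B):
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
E = controllability_matrix(A, B)
print("rank(E) =", np.linalg.matrix_rank(E), "with n =", A.shape[0])   # rank 2 -> controllable
```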
The concept of observability is defined by considering the system (1.2-13) with the control u(t) = 0 for t ≥ t0.§
DEFINITION 1-9
If by observing the output y(t) during the finite time interval [t0, t1] the state x(t0) = x0 can be determined, the state x0 is said to be observable at time t0. If all states x0 are observable for every t0, the system is called completely observable, or simply observable.
Analogous to the test for controllability, it can be shown that the linear, time-invariant system

ẋ(t) = Ax(t)
y(t) = Cx(t)

is observable if and only if the n × qn matrix

G = [C^T  A^T C^T  · · ·  (A^T)^(n−1) C^T]

has rank n. If there is only one output (q = 1), G is an n × n matrix and a necessary and sufficient condition for observability is that G be nonsingular. Since we have made the simplifying assumption that all of the states can be physically measured (y(t) = x(t)), the question of observability will not arise in our subsequent discussion.
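The dual observability test can be coded the same way; the example system below (only position measured) is an assumption for illustration.

```python
# Dual rank test for observability: G = [C^T  A^T C^T  ...  (A^T)^(n-1) C^T]
# must have rank n.  The system below is an assumed example.
import numpy as np

def observability_matrix(A, C):
    blocks = [C.T]
    for _ in range(A.shape[0] - 1):
        blocks.append(A.T @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])                 # only position is measured
G = observability_matrix(A, C)
print("rank(G) =", np.linalg.matrix_rank(G), "with n =", A.shape[0])   # rank 2 -> observable
```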
1.3 CONCLUDING REMARKS
In control system design, the ultimate objective is to obtain a controller that will cause a system to perform in a desirable manner. Usually, other factors, such as weight, volume, cost, and reliability also influence the controller design, and compromises between performance requirements and implementation considerations must be made. Classical design procedures are best suited for linear, single-input, single-output systems with zero initial conditions. Using simulation, mathematical analysis, or graphical methods, the designer evaluates the effects of inserting various physical devices into the system. By trial and error either an acceptable controller design is obtained, or the designer concludes that the performance requirements cannot be satisfied.
Many complex aerospace problems that are not amenable to classical techniques have been solved by using optimal control theory. However, we are forced to admit that optimal control theory does not, at the present time, constitute a generally applicable procedure for the design of simple controllers. The optimal control law, if it can be obtained, usually requires a digital computer for implementation (an important exception is the linear regulator problem discussed in Section 5.2), and all of the states must be available for feedback to the controller. These limitations may preclude implementation of the optimal control law; however, the theory of optimal control is still useful, because
1. Knowing the optimal control law may provide insight helpful in designing a suboptimal, but easily implemented controller.
2. The optimal control law provides a standard for evaluating proposed suboptimal designs. In other words, by knowing the optimal control law we have a quantitative measure of performance degradation caused by using a suboptimal controller.
REFERENCES
A-1 Athans, M., "The Status of Optimal Control Theory and Applications for Deterministic Systems," IEEE Trans. Automatic Control (1966), 580–596.
D-1 Derusso, P. M., R. J. Roy, and C. M. Close, State Variables for Engineers. New York: John Wiley & Sons, Inc., 1965.
K-1 Kliger, I., "On Closed-Loop Optimal Control," IEEE Trans. Automatic Control (1965), 207.
K-2 Kalman, R. E., "On the General Theory of Control Systems," Proc. First IFAC Congress (1960), 481–493.
K-3 Kalman, R. E., Y. C. Ho, and K. S. Narendra, "Controllability of Linear Dynamical Systems," in Contributions to Differential Equations, Vol. 1. New York: John Wiley & Sons, Inc., 1962.
O-1 Ogata, K., State Space Analysis of Control Systems. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1967.
S-1 Schwarz, R. J., and B. Friedland, Linear Systems. New York: McGraw-Hill, Inc., 1965.
S-2 Schultz, D. G., and J. L. Melsa, State Functions and Linear Control Systems. New York: McGraw-Hill, Inc., 1967.
T-1 Timothy, L. K., and B. E. Bona, State Space Analysis: An Introduction. New York: McGraw-Hill, Inc., 1968.
W-1 Ward, J. R., and R. D. Strum, State Variable Analysis (A Programmed Text). Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1970.
Z-1 Zadeh, L. A., and C. A. Desoer, Linear System Theory: The State Space Approach. New York: McGraw-Hill, Inc., 1963.
PROBLEMS
1-1. The tanks A and B shown in Fig. 1-P1 each have a capacity of 50 gal. Both tanks are filled at t = 0, tank A with 60 lb of salt dissolved in water, and tank B with water. Fresh water enters tank A at the rate of 8 gal/min, the mixture of salt and water (assumed uniform) leaves A and enters B at the rate of 8 gal/min, and the flow is incompressible. Let q(t) and p(t) be the number of pounds of salt contained in tanks A and B, respectively.
Figure 1-P1
(a) Write a set of state equations for the system.
(b) Draw a block diagram (or signal flow graph) for the system.
(c) Find the state transition matrix φ(t).
(d) Determine q(t) and p(t) for t ≥ 0.
1-2. (a) Using the capacitor voltage vc(t) and the inductor current iL(t) as states, write state equations for the RLC series circuit shown in Fig. 1-P2.
Figure 1-P2
(b) Find the state transition matrix φ(t) if R = 3 Ω, L .
(c) If vc(0) = 0, iL(0) = 0, and e(t) is as shown, determine vc(t) and iL(t) for t ≥ 0.
1-3. (a) Write a set of state equations for the mechanical system shown in Fig. 1-P3. The applied force is f(t), the block has mass M, the spring constant is K, and the coefficient of viscous friction is B. The displacement of the block,