LECTURE NOTE ON MEC 314 Updated
INTRODUCTION
Measurement can be said to be the process or act of finding the size, quantity or degree of
something. It gives us a repeatable and dependable way of quantifying the world in which we live.
It is essentially the act of comparing some unknown value with a value which is assumed to be
known (otherwise known as a standard). Accurate measurement of quantities is very important in
engineering. Instrumentation is the use of a set of instruments for a particular purpose
(measurement or control). In engineering, instrumentation is used for making measurements in
two main ways:
1) Obtaining data for some events or items: this could, for instance, be the marking out of an item
for machining and involve measurement of length and angles.
2) Measuring to ensure that a process is kept under control: many industrial processes are
continuous. The purpose of measurement in this case is to ensure the proper control of the process.
Measurement systems (instruments): all measurement systems consist essentially of three main
parts:
1) The sensing element: This element, frequently called the transducer, is the first in the chain. It is in some way
in contact with what is being measured and produces a signal which is related to the quantity being
measured. Sensing elements take information about the thing being measured and change it into
some form which enables the rest of the measurement system to give a value to it. For example, if
a spring balance is used to measure weight, the sensing action can be considered to be the change
in length due to the weight. Thus, the spring balance takes information about the weight and
changes it into a change in the length of a spring.
2) The signal converter: The output of the sensing element then passes through a second element
before reaching the display. This second element can take many forms. In general, it can be
considered that the signal from the sensing element is converted into a form which is suitable for
the display or control element. An example of this might be an amplifier which takes a small signal
from the sensing element and makes it big enough to activate a display.
3) The display: The third part of the measuring system could be the display or a control system.
The display element is where the output from the measuring system is displayed. This may, for
example, be a pointer moving across a scale. The display element takes the signal from the signal
converter and presents it in a form which enables an observer to recognize it. The control system
is where the output is used to control a process.
Classification of measuring instruments
Instruments can be classified into various types based on different criteria such as their function,
the physical quantity they measure, or their method of operation. Here are some common
classifications of instruments:
Based on Function:
1. Measuring Instruments: These instruments are used to determine the value of a physical
quantity. Examples include voltmeters, thermometers, pressure gauges, and flow meters.
2. Recording Instruments: Instruments that record the values of physical quantities over time.
Examples include chart recorders and data loggers.
3. Controlling Instruments: These are used to control the value of the measured variable within a
desired range. Thermostats and PID controllers are common examples.
4. Indicating Instruments: These provide a visual indication of a physical quantity, usually via a
dial or digital display. Examples include dial thermometers and digital readouts.
Based on Physical Quantity Measured:
Level instruments are one example: float switches, level gauges, and hydrostatic pressure sensors.
Based on Method of Operation:
1. Analog Instruments: These provide output in a continuous form, usually as a pointer movement
on a scale. Examples include analogue ammeters and speedometers.
2. Digital Instruments: These provide numerical output, often on an LCD or LED display.
Examples include digital clocks and digital pressure sensors.
3. Mechanical Instruments: These operate based on mechanical principles, such as gears, springs,
and levers. Examples include pressure gauges with Bourdon tubes and bimetallic thermometers.
4. Electronic Instruments: These use electronic circuits and components to measure and indicate
the physical quantity. Examples include digital multimeters and electronic temperature controllers.
Based on Application:
1. Laboratory Instruments: Precise instruments used in a lab setting for research and analysis, such
as microscopes and spectrophotometers.
2. Process Instruments: Used in industrial settings to monitor and control processes, such as
transmitters and industrial sensors.
3. Field Instruments: Portable instruments used for measurements in the field, like handheld GPS
devices and portable gas detectors.
4. Medical Instruments: Used in healthcare for diagnosis and monitoring, such as ECG machines
and blood pressure cuffs.
These classifications are not mutually exclusive, and many instruments can fall into multiple
categories based on their features and usage.
Selecting the right instrument for a specific application involves considering a variety of factors
to ensure accurate, reliable, and cost-effective measurements. Here are some of the key factors
affecting instrument selection:
1. Measurement Objectives: Understand what physical quantity needs to be measured, the range
of measurement, accuracy, and resolution required. This will dictate the type of instrument needed.
2. Operating Environment: Consider the conditions under which the instrument will operate, such
as temperature, humidity, vibration, electromagnetic interference, and the presence of corrosive
substances, as these can affect the instrument's performance and durability.
3. Installation Requirements: Evaluate the space available for installing the instrument and any
special installation requirements like mounting, accessibility, and connection to other systems.
4. Data Output and Communication: Determine the type of data output needed (analogue, digital,
graphical) and the required communication protocols (4-20 mA, HART, Modbus, etc.) for
integration with other systems.
5. Power Supply: Check the power requirements of the instrument and ensure compatibility with
the available power sources.
6. Calibration and Maintenance: Consider the ease of calibration and the frequency of maintenance
required. Instruments that are difficult to calibrate or require frequent maintenance may incur
higher long-term costs.
7. Cost: Evaluate the initial purchase cost as well as the total cost of ownership, which includes
installation, operation, maintenance, and calibration costs over the instrument's lifespan.
8. Regulatory Compliance: Ensure that the instrument meets any relevant standards and regulatory
requirements for the industry and region in which it will be used.
9. Safety: If the instrument will be used in potentially hazardous environments, it must be
designed to prevent ignition of flammable gases or dust (intrinsically safe, explosion-proof, etc.).
10. Reliability and Durability: The instrument's reliability and expected lifespan under operating
conditions should be considered to minimize downtime and replacement costs.
11. User-Friendliness: The ease of use, including the interface, display readability, and simplicity
of operation, can be important, especially for instruments that require frequent interaction.
By carefully considering these factors, you can select an instrument that not only meets the
technical requirements but also offers ease of use, reliability, and cost-effectiveness.
Performance characteristics
The performance characteristics of an instrument are essential when choosing the most suitable
instrument for a specific measuring task. They fall into two sub-areas:
• static characteristics
• dynamic characteristics
Static characteristics: static characteristics are generally considered for instruments used in
measuring unvarying process conditions. These characteristics are usually obtained by one form
or another of a process called calibration, which proceeds broadly as follows:
• Examine the construction of the instrument and identify and list the possible inputs.
• Decide, as best as one can, which of the inputs will be significant in the application for
which the instrument is to be calibrated.
• Procure apparatus that will allow all significant inputs to vary over the ranges considered
necessary.
• By holding some inputs constant, varying others, and recording the output(s), develop the
desired static input-output relations.
The following are some of the terms commonly used to describe the performance of measuring
systems:
Accuracy: The accuracy of an instrument is the extent to which the reading it gives might be
wrong or the ability of a device or a system to respond to the true value of a measured variable
under reference conditions. It is usually quoted as a percentage of the full-scale deflection (f.s.d)
of the instrument. Thus, for example, an ammeter might have an f.s.d. of 5 A and accuracy quoted
as ±5%. This means that any reading taken on this ammeter may be in error by up to ±0.25 A
(5% of 5 A). Accuracy can either be static for slow-changing quantities or dynamic for quantities
that change quickly.
Precision: This is the degree of exactness for which an instrument is designed to perform. It
consists of two characteristics, conformity and the number of significant figures to which the
measurement may be made. The greater the number of significant figures, the greater the precision
of the instrument.
Repeatability: The repeatability of an instrument is its ability to display the same reading for
repeated applications of the same value of the quantity being measured.
Reliability: The reliability of an instrument is the probability that it will operate to an agreed level
of performance under the conditions specified for its use.
Reproducibility: The reproducibility or stability of an instrument is its ability to display the same
reading when it is used to measure a constant quantity over a period of time or when that quantity
is measured on several occasions.
Sensitivity: This can be defined as the ratio of the change in output to the change in input which
causes it, at steady-state conditions. Thus the sensitivity of an instrument is given by:
$$\text{sensitivity} = \frac{\text{change in output}}{\text{change in input}}$$
Thus, for example, a voltmeter might have a sensitivity of 1 scale division per 0.05 V. This means
that if the voltage being measured changes by 0.05V, then the reading of the instrument will change
by one scale division.
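The voltmeter example can be worked numerically; a minimal sketch using only the figures quoted above:

```python
# Static sensitivity = change in output / change in input (at steady state).

def sensitivity(delta_output, delta_input):
    return delta_output / delta_input

# The voltmeter example: 1 scale division of output per 0.05 V of input.
k = sensitivity(delta_output=1.0, delta_input=0.05)
print(f"sensitivity = {k} divisions per volt")        # 20.0

# So a 0.05 V change in the measured voltage moves the pointer by:
print(f"deflection for a 0.05 V change: {k * 0.05} division")
```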
Resolution: The resolution or discrimination of an instrument is the smallest change in the quantity
being measured that will produce an observable change in the reading of the instrument.
Range: The range of an instrument is the limits between which readings can be made.
Dead space: The dead space of an instrument is the range of values of the quantity being measured
for which it gives no reading.
Threshold: If the quantity being measured is increased from zero a certain minimum level might
have to be reached before the instrument responds and gives a detectable reading. This is called
the threshold.
Hysteresis: Instruments can give different readings for the same value of measured quantity
depending on whether that value has been reached by a continuously increasing change or
continuously decreasing change. This effect is called hysteresis and it occurs as a result of such
things as bearing friction and slack motion in gears in the instrument.
Dynamic characteristics: Instruments rarely respond instantaneously to changes in the measured
variable; rather, they exhibit a characteristic slowness or sluggishness due to things such as mass,
thermal capacitance, fluid capacitance or electric capacitance. In addition to this, pure delay in time is often
encountered when the instrument waits for some reactions to take place. The dynamic behaviour of
an instrument is determined by subjecting its primary element to known and
predetermined variations in the measured quantity. The three most common variations are:
• step change in which the primary element is subjected to an instantaneous and finite
change in the measured variable
• linear change in which the primary element is following a measured variable changing
linearly with time
• sinusoidal change in which the primary element follows a measured variable whose
magnitude changes sinusoidally with constant amplitude.
Speed of response: It is the rapidity with which an instrument responds to changes in the
measured quantity.
Fidelity: It is the degree to which an instrument indicates the changes in the measured variable
without dynamic error
Lag: It is the retardation or delay in the response of an instrument to changes in the measured quantity.
Dynamic error: It is the difference between the true value of a quantity changing with time and
the value indicated by the instrument if no static error is assumed.
Sources of error: apart from the inability of the instrument to provide a true measurement, errors
can arise from sources such as environmental effects, loading of the measured system by the
instrument, observational (human) errors, and random disturbances.
Generalised mathematical model of an instrument: the relationship between any input and the
output can, by the application of suitable simplifying assumptions, be written as:
$$a_n \frac{d^n x_o}{dt^n} + a_{n-1} \frac{d^{n-1} x_o}{dt^{n-1}} + \cdots + a_1 \frac{dx_o}{dt} + a_0 x_o = b_m \frac{d^m x_i}{dt^m} + b_{m-1} \frac{d^{m-1} x_i}{dt^{m-1}} + \cdots + b_1 \frac{dx_i}{dt} + b_0 x_i \qquad \text{Eq 1.3}$$
Where:
xo = output quantity
xi = input quantity
t = time
When all the a’s and the b’s other than a₀ and b₀ of eq 1.3 are assumed to be zero, the differential
equation degenerates into the simple algebraic equation:
$$a_0 x_o = b_0 x_i \qquad \text{Eq 1.4}$$
Any instrument that closely obeys Eq 1.4 over its intended range of operating conditions is
defined as a zero-order instrument.
The static sensitivity (or steady-state gain) of a zero-order instrument may be defined as follows:
$$x_o = \frac{b_0}{a_0} x_i = K x_i \qquad \text{Eq 1.5}$$
where $K = \dfrac{b_0}{a_0}$ = static sensitivity
Since the equation $x_o = K x_i$ is an algebraic equation, it is clear that, no matter how $x_i$ might vary
with time, the instrument output (reading) follows it perfectly with no distortion or time lag of
any sort. Thus, a zero-order instrument represents ideal or perfect dynamic performance. A
practical example of a zero-order instrument is the displacement-measuring potentiometer.
Dynamic response of first order instruments: If in eq 1.3 above, all a’s and b’s other than a1, ao
and bo are taken to be zero, we get:
$$a_1 \frac{dx_o}{dt} + a_0 x_o = b_0 x_i \qquad \text{Eq 1.6}$$
Any instrument that follows eq 1.6 is known as a first-order instrument. By dividing eq 1.6 by a₀,
the equation can be written as:
$$\frac{a_1}{a_0} \cdot \frac{dx_o}{dt} + x_o = \frac{b_0}{a_0} x_i \qquad \text{Eq 1.7}$$
where:
$$\tau = \frac{a_1}{a_0} = \text{time constant}$$
$$K = \frac{b_0}{a_0} = \text{static sensitivity}$$
The time constant τ always has the dimensions of time, while the static sensitivity K has the
dimensions of output/input. The operational transfer function of any first-order instrument is:
$$\frac{x_o}{x_i}(D) = \frac{K}{\tau D + 1} \qquad \text{Eq 1.8}$$
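To make the first-order behaviour concrete, the sketch below integrates eq 1.7 numerically for a step input using Euler's method. The values of τ, K, the step size and the input are illustrative assumptions, not figures from these notes:

```python
# Step response of a first-order instrument: tau * dxo/dt + xo = K * xi.
import math

tau = 2.0     # time constant (s), assumed for illustration
K = 1.0       # static sensitivity, assumed
xi = 10.0     # step input applied at t = 0, assumed
dt = 0.001    # Euler integration step (s)

xo, t, samples = 0.0, 0.0, []
while t <= 5 * tau:
    dxo_dt = (K * xi - xo) / tau      # eq 1.7 rearranged for dxo/dt
    xo += dxo_dt * dt
    t += dt
    samples.append((t, xo))

# Analytically xo(t) = K*xi*(1 - exp(-t/tau)); after one time constant
# the output has reached about 63.2 % of its final value.
t_tau = min(samples, key=lambda s: abs(s[0] - tau))
print(f"xo at t = tau: {t_tau[1]:.2f} (analytic {K * xi * (1 - math.exp(-1)):.2f})")
```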
Dynamic response of second-order system: A second-order instrument is one that follows the
equation
$$\left(\frac{D^2}{\omega_n^2} + \frac{2\xi D}{\omega_n} + 1\right) x_o = K x_i \qquad \text{Eq 1.10}$$
where:
$$\omega_n = \sqrt{\frac{a_0}{a_2}} = \text{undamped natural frequency (rad/time)}$$
$$\xi = \frac{a_1}{2\sqrt{a_0 a_2}} = \text{damping ratio (dimensionless)}$$
$$K = \frac{b_0}{a_0} = \text{static sensitivity}$$
The operational transfer function of a second-order instrument is:
$$\frac{x_o}{x_i}(D) = \frac{K}{D^2/\omega_n^2 + 2\xi D/\omega_n + 1} \qquad \text{Eq 1.11}$$
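A similar numerical sketch for the second-order instrument of eq 1.10, showing how the damping ratio shapes the step response; ω_n, K and the damping ratios are assumed illustrative values:

```python
# Step response of a second-order instrument (eq 1.10):
#   (D^2/wn^2 + 2*zeta*D/wn + 1) * xo = K * xi

def step_response(zeta, wn=10.0, K=1.0, xi=1.0, dt=1e-4, t_end=2.0):
    """Return xo at t_end for a step input applied at t = 0 (Euler method)."""
    xo, v, t = 0.0, 0.0, 0.0              # output, its derivative, time
    while t < t_end:
        # eq 1.10 rearranged for the highest derivative:
        accel = wn**2 * (K * xi - xo) - 2 * zeta * wn * v
        v += accel * dt
        xo += v * dt
        t += dt
    return xo

for zeta in (0.2, 0.7, 1.0, 2.0):   # under-, moderately, critically, over-damped
    print(f"damping ratio {zeta}: xo(2 s) = {step_response(zeta):.4f}")
```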
Measurement of displacement
Displacement-measuring devices can be classified into mechanical and electrical types, each with
various examples:
Mechanical Types:
1. Vernier Caliper: A precision instrument that uses a sliding vernier scale to directly measure the
distance between two opposite sides of an object.
2. Micrometer Screw Gauge: Utilizes a calibrated screw to measure small distances with high
accuracy, typically used for objects with a dimension in the order of millimetres.
3. Dial Indicator: Has a plunger that is linked to a gear-and-lever system which amplifies small
displacements to a dial display.
4. Linear Variable Differential Transformer (LVDT) (Mechanical Aspect): While LVDTs are
primarily electrical devices, they have a mechanical component where displacement of the core
changes the induced voltage in the secondary windings.
Electrical Types:
1. Linear Variable Differential Transformer (LVDT): Displacement of a movable core changes the
voltages induced in secondary windings (described in detail below).
2. Potentiometric Displacement Sensor: Uses a resistive element with a sliding contact (wiper) to
measure displacement as a change in resistance.
3. Optical Encoder: Converts the mechanical displacement of a shaft or linear scale into digital or
analogue signals using light-interrupting patterns.
4. Laser Displacement Sensor: Uses laser triangulation to measure the distance between the sensor
and the target with high precision.
Each type of displacement-measuring device has its own application niche, with mechanical types
being more traditional and often used in manual measurements, while electrical types are more
suited to automated and real-time measurement systems.
A dial indicator is a precision measurement tool used to measure small linear distances. It is
commonly used in engineering to check the variation in tolerance during the inspection process of
a machined part. It translates small linear distances into rotational movement which is then
displayed on a circular dial.
Construction:
1. Plunger: A spring-loaded part that moves in and out as it comes into contact with the object
being measured.
2. Rack and Pinion: The plunger is connected to a rack gear which meshes with a pinion gear. As
the plunger moves, it causes the pinion to rotate.
3. Dial: The pinion gear is connected to the pointer (or needle) on the dial face. The rotation of the
pinion causes the pointer to move around the dial.
4. Bezel: This is the outer frame that holds the crystal (clear cover) of the dial indicator and can
often be rotated to zero the dial.
5. Graduated Scale: The face of the dial has a graduated scale which is typically marked in either
a thousandth of an inch or a hundredth of a millimetre.
6. Body: The main housing that contains the internal mechanism and provides mounting points for
attaching the indicator to a stand or fixture.
Dial indicator
Principle of Operation:
The principle of operation of a dial indicator is based on the conversion of linear motion into
rotational motion:
1. When the plunger is pressed against the object to be measured, it moves inward against the force
of its internal spring.
2. The rack gear's linear motion is converted into rotational motion by the pinion gear.
The amount of movement is proportional to the rotation of the pointer, and the scale on the dial
face allows for the reading of precise measurements. The user can take readings directly from the
position of the pointer on the scale.
Dial indicators have a limited range, which is the maximum distance the plunger can move, and
this range is typically small, allowing for high-precision measurements. They are often used in
conjunction with other tools, such as a stand or magnetic base, to provide stability and precise
positioning during measurement.
The float
Principle of Operation:
1. Buoyancy: The float is an object with a density lower than that of the liquid in which it is placed.
Due to buoyancy, the float remains on the surface of the liquid.
2. Displacement: As the liquid level changes, the float rises or falls with the surface of the liquid.
The vertical movement of the float is a direct measure of the liquid level.
3. Scale Reading: The position of the float, transferred to a pointer or an electronic sensor, can
then be read against a scale that is calibrated in units of liquid level (such as inches, centimetres,
or meters).
The float-based displacement-measuring device is a simple and effective method to measure the
level of liquid in a tank or container. It is commonly used in a variety of industries, including water
treatment, chemical processing, and fuel storage.
The Linear Variable Differential Transformer (LVDT) is a type of electrical transducer that
converts linear displacement into an electrical signal. It is widely used for measuring linear
position and is known for its accuracy and reliability over a wide range of temperatures and
environmental conditions.
Construction
An LVDT typically consists of three solenoidal coils placed end-to-end around a tube. The centre
coil is the primary coil, and the two outer coils are the secondary coils. A movable soft iron core,
which is attached to the object whose displacement is to be measured, slides within the tube.
Principle of Operation:
1. Excitation of Primary Coil: An alternating current (AC) is supplied to the primary coil, which
creates a magnetic field that induces an alternating voltage in the two secondary coils.
2. Position of the Core: When the core is in the central position (null position), the induced voltages
in the two secondary coils are equal and opposite, thus cancelling each other out. Therefore, the
output voltage (the differential voltage between the two secondary coils) is zero.
3. Displacement from Null Position: When the core is displaced from the null position, the induced
voltage in one of the secondary coils becomes greater than the other. This results in a net output
voltage from the LVDT that is proportional to the direction and magnitude of the displacement.
4. Phase Detection: The phase of the output voltage relative to the input voltage indicates the
direction of the displacement. If the core moves in one direction from the null position, the phase
of the output voltage will lead the input voltage. If the core moves in the opposite direction, the
phase will lag.
5. Signal Conditioning: The output signal is usually passed through signal conditioning equipment
to convert the AC output to a DC voltage or current that is more easily interpreted by display or
control systems.
The LVDT is characterized by its high sensitivity, infinite resolution, and friction-free operation
since there is no physical contact between the core and the coils. It can measure displacements
ranging from a few micrometres to several centimetres. The LVDT is robust, reliable, and widely
used in applications such as industrial automation, aerospace, and automotive testing.
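A highly simplified model of the LVDT behaviour described above: within the linear range the output amplitude is proportional to core displacement, and the phase (0° or 180° relative to the excitation) gives the direction. The sensitivity figure below is a hypothetical value, not a specification from these notes:

```python
# Idealized LVDT: output amplitude proportional to core displacement,
# phase relative to the excitation indicating the direction of travel.

SENSITIVITY_V_PER_MM = 0.2   # assumed illustrative value (200 mV per mm)

def lvdt_output(displacement_mm):
    """Return (amplitude in volts, phase in degrees) for a core position."""
    amplitude = SENSITIVITY_V_PER_MM * abs(displacement_mm)
    phase = 0.0 if displacement_mm >= 0 else 180.0   # direction from phase
    return amplitude, phase

for x in (-2.0, 0.0, 1.5):
    v, ph = lvdt_output(x)
    print(f"core at {x:+.1f} mm -> {v:.2f} V at {ph:.0f} deg")   # 0 V at null
```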
The potentiometer
Construction:
A typical potentiometer consists of a resistive element, a sliding contact (wiper), and a housing
that protects the components. The resistive element is usually a long, thin strip of resistive material,
such as carbon or a metal film, formed into a track. The wiper is mechanically connected to the
object whose displacement is to be measured, and it slides along the resistive track as the object
moves.
Linear potentiometer
Principle of Operation:
1. Electrical Connection: The ends of the resistive element are connected to a voltage source,
creating a voltage drop across the length of the element. The wiper is connected to the output
terminal.
2. Displacement: As the object moves, it causes the wiper to slide along the resistive element. The
position of the wiper divides the resistive track into two segments with different resistances.
3. Resistance Variation: The resistance between the wiper and each end of the potentiometer
changes with the wiper's position. When the wiper moves closer to one end, the resistance between
the wiper and that end decreases while the resistance between the wiper and the other end
increases, and vice versa.
4. Voltage Output: The voltage at the wiper (output voltage) is a function of the wiper's position
along the resistive track. This voltage is proportional to the position of the wiper and, therefore, to
the displacement of the connected object.
5. Signal Interpretation: The output voltage can be measured and interpreted to determine the
displacement. If the potentiometer is linear, the relationship between the wiper position and the
output voltage will also be linear.
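Since an ideal linear potentiometer is simply a voltage divider, the relationship in steps 4 and 5 can be sketched in a few lines; the supply voltage and track length below are assumed illustrative values:

```python
# Ideal (unloaded) linear potentiometer: Vout = Vs * x / L,
# where x is the wiper position along a track of length L.

VS = 5.0     # supply voltage across the track (V), assumed
L = 100.0    # track length (mm), assumed

def pot_output(x_mm):
    if not 0.0 <= x_mm <= L:
        raise ValueError("wiper position outside the track")
    return VS * x_mm / L

for x in (0.0, 25.0, 50.0, 100.0):
    print(f"wiper at {x:5.1f} mm -> Vout = {pot_output(x):.2f} V")
```

In practice the input resistance of the meter or display connected to the wiper loads the divider and makes the real relationship slightly non-linear; a high-impedance measuring circuit keeps this effect small.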
Potentiometers are versatile and can be used in a wide range of applications, from simple position
sensing in consumer electronics (like volume controls) to more precise displacement
measurements in industrial automation and control systems. They are available in various shapes
and sizes, including rotary (angular displacement) and linear (straight-line displacement)
configurations.
Measurement of force
Force is a vector quantity that describes the interaction between physical objects. It refers to a
push or pull on an object with mass that causes it to change its velocity (which includes both speed
and direction of motion). Force induces an object's acceleration (change of velocity over time) in
the direction of the force. For example, pushing on a stationary box applies a force to it that can
cause the box to move. According to Newton's second law of motion, the force acting on an object
is equal to the object's mass multiplied by its acceleration. The common units of force are pounds
(lb) and Newtons (N). Forces are essential to explain interactions in nature ranging from everyday
motions to the orbits of planets and galaxies.
The measurement of force can be accomplished through various methods, each based on a
different physical principle. Four common methods are described below:
(i) Gravity-Balance Method:
The gravity balance method involves counterbalancing the force to be measured with a known
gravitational force. This is typically done using a beam balance or a platform scale. A mass is
placed on one side of the balance, and the force to be measured is applied to the other side. The
mass is adjusted until the system is in equilibrium. Since the gravitational force acting on the mass
is known (mass times the acceleration due to gravity), the force being measured can be determined
by the mass that balances it.
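The arithmetic of the gravity-balance method is simply F = mg at equilibrium; a minimal sketch with an illustrative balancing mass:

```python
# Gravity balance: at equilibrium the unknown force equals the weight
# of the balancing mass, F = m * g.

G = 9.81   # acceleration due to gravity (m/s^2)

def balanced_force(mass_kg, g=G):
    return mass_kg * g   # newtons

# e.g. if a 2.5 kg mass balances the system (illustrative value):
print(f"measured force = {balanced_force(2.5):.2f} N")   # 24.53 N
```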
(ii) Fluid-Pressure Method:
The fluid-pressure method measures force by converting it into a pressure and then measuring that
pressure. One common device that uses this principle is a hydraulic load cell. When a force is
applied to a piston, it generates pressure in a confined fluid (usually oil). The pressure is
proportional to the applied force and can be measured with a pressure gauge or a pressure sensor.
This method is particularly useful for measuring large forces and is commonly used in industrial
weighing systems.
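The conversion in a hydraulic load cell is F = P × A over the piston; a brief sketch with an assumed piston size and gauge reading:

```python
# Hydraulic load cell: applied force F produces pressure P = F / A in the
# confined fluid, so the gauge reading gives F = P * A.
import math

piston_diameter = 0.05                       # m, assumed
area = math.pi * (piston_diameter / 2) ** 2  # piston area (m^2)

gauge_pressure = 2.0e6                       # Pa, assumed gauge reading
force = gauge_pressure * area
print(f"indicated force: {force / 1000:.2f} kN")   # about 3.93 kN
```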
(iii) Elastic-Deflection Method:
This method is based on Hooke's Law, which states that the deformation of an elastic material is
proportional to the force applied to it, within the elastic limit of the material. In practice, a known
force is applied to an elastic element, such as a spring, beam, or diaphragm, and the resulting
deflection is measured. The amount of deflection can be correlated to the magnitude of the force.
Devices such as strain gauges are often used to measure the deflection accurately. This method is
widely used in force transducers and load cells.
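The elastic-deflection principle reduces to F = kx within the elastic limit; a short sketch with an assumed stiffness:

```python
# Elastic-deflection method (Hooke's law): F = k * x within the elastic limit,
# so a measured deflection maps directly to the applied force.

K_SPRING = 5.0e4   # stiffness of the elastic element (N/m), assumed

def force_from_deflection(deflection_m):
    return K_SPRING * deflection_m

print(f"a 2 mm deflection indicates {force_from_deflection(0.002):.0f} N")  # 100 N
```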
(iv) Piezoelectric Method:
Piezoelectric materials generate an electric charge when mechanically stressed. In the piezoelectric
method, a force is applied to a piezoelectric crystal or ceramic, causing it to deform slightly. This
deformation leads to a charge displacement within the material, which can be measured as a
voltage across the piezoelectric element. The voltage is proportional to the force applied.
Piezoelectric sensors are highly sensitive and are used for dynamic force measurements, such as
in pressure transducers and accelerometers. They are particularly useful for measuring rapidly
changing forces or vibrations.
Measurement of torque
Torque is a measure of the twisting force that causes an object to rotate around an axis. It is a
vector quantity, meaning it has both magnitude and direction. Torque depends on two factors: the
magnitude of the force applied and the distance from the axis of rotation to the point where the
force is applied. Mathematically, torque is calculated as the product of the force and the distance
from the axis of rotation, multiplied by the sine of the angle between the force vector and the vector
from the axis of rotation to the point of application. Torque is commonly measured in units of
Newton meters (Nm) or foot-pounds (ft-lb) and is an important concept in various fields such as
physics, engineering, and mechanics.
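The definition above, T = F·r·sin(θ), can be checked numerically; the force, radius and angles below are illustrative:

```python
# Torque from force, lever arm and the angle between them: T = F * r * sin(theta).
import math

def torque(force_n, radius_m, angle_deg):
    return force_n * radius_m * math.sin(math.radians(angle_deg))

# 50 N applied at 0.3 m from the axis: torque is greatest when the force
# is perpendicular to the lever arm (theta = 90 degrees).
for angle in (90.0, 60.0, 30.0):
    print(f"theta = {angle:4.0f} deg -> T = {torque(50.0, 0.3, angle):.2f} N*m")
```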
Various methods are used to measure torque, which is the rotational force applied to an object.
Here are some of the common methods:
Mechanical torque wrenches, such as beam-type or click-type wrenches, directly measure the
torque applied to a bolt or fastener. The beam-type uses a calibrated metal beam and a pointer to
indicate torque, while the click-type wrenches have an internal spring mechanism that 'clicks' once
the preset torque is reached.
Strain gauge torque sensors measure torque by detecting the strain (deformation) in a material
caused by the applied torque. Strain gauges are attached to a shaft or a structural member that
twists slightly when torque is applied. The change in resistance of the strain gauges due to
deformation is measured and converted into an electrical signal proportional to the torque.
Rotary torque transducers use various technologies, including strain gauges, to measure torque on
rotating shafts. They often provide dynamic measurement and can offer high accuracy. Some
rotary transducers use telemetry or slip rings to transfer the electrical signal from the rotating shaft
to a stationary data acquisition system.
Rotary torque transducer
Reaction torque sensors measure the torque without requiring the sensor to rotate. They are fixed
and measure the reaction force that is generated by the torque. These are used in applications where
the part being tested can rotate or in calibration systems.
Optical torque sensors use the principle of polarized light. A shaft is fitted with a polarizing filter,
and as torque is applied, the shaft twists and alters the polarization of the light passing through it.
The change in light polarization is measured and related to the amount of torque.
Magnetoelastic torque sensors: These sensors measure torque by detecting changes in the magnetic properties of a shaft. As torque
is applied, the magnetic permeability of the shaft material changes. This change can be detected
using magnetic field sensors, and the signal is processed to determine the torque.
Piezoelectric Torque Sensors:
Piezoelectric sensors use materials that generate an electric charge when mechanically stressed.
These sensors are particularly useful for measuring dynamic and impact torques because they have
a high-frequency response.
Hydraulic torque sensors measure torque by measuring the pressure of a fluid within a hydraulic
system. The torque applied to a shaft is transferred to a hydraulic fluid, and the resulting pressure
is proportional to the torque.
Each of these methods has its advantages and is suited to specific types of applications, depending
on factors such as the required range, accuracy, dynamic response, and environmental conditions.
In physics, strain is defined as the amount of deformation experienced by a body in the direction
of the force applied, divided by the initial dimensions of the body. The formula for strain (ε) is
given by:
$$\epsilon = \frac{\delta l}{L}$$
where:
● ε is the strain due to the stress applied,
● δl is the change in length, and
● L is the original length of the material.
Strain is a dimensionless quantity as it just defines the relative change in shape. It can be of two
types depending on stress application: tensile strain and compressive strain. Tensile strain is
produced when a body increases in length as applied forces try to stretch it, while compressive
strain is produced when a body decreases in length when equal and opposite forces try to compress
it.
There are a few common methods for measuring strain:
1. Strain gauge: This is a thin metallic foil arranged in a grid pattern that is attached to the surface
of the object. As strain is applied, the foil deforms, causing its electrical resistance to change. This
resistance change is measured and calibrated to determine the strain (a numerical sketch of this
relation follows the list below).
2. Extensometer: This device uses a spring-loaded transducer that stretches or compresses as the
object is strained. The transducer converts the mechanical displacement into an electrical signal
that indicates the strain measurement.
3. Optical extensometer: This non-contact device uses high-resolution cameras and tracking
software to optically measure changes in spacing between target points on the object. The
displacement change is used to calculate strain.
4. Crack gauges: Visual indicators such as brittle coatings or special grids are applied to the
surface. Cracks will initiate and propagate along the coating under strain, allowing the strain
distribution to be mapped.
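Referring back to item 1, the working relation of a strain gauge is that the fractional resistance change is proportional to strain, ΔR/R = GF·ε, where GF is the gauge factor (around 2 for common metal-foil gauges). A short sketch with typical but assumed values:

```python
# Strain gauge relation: delta_R / R = GF * epsilon.

GF = 2.0    # gauge factor, typical of metal-foil gauges
R0 = 350.0  # unstrained gauge resistance (ohms), a common nominal value

def strain_from_resistance(delta_r_ohms, r0=R0, gf=GF):
    return delta_r_ohms / (r0 * gf)

# e.g. a 0.35 ohm rise measured on a 350 ohm gauge:
eps = strain_from_resistance(0.35)
print(f"strain = {eps:.6f} ({eps * 1e6:.0f} microstrain)")   # 500 microstrain
```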
The choice of strain measurement technique depends on factors like cost, accuracy, test
environment restrictions, surface preparation needs and whether a contact or non-contact method
is required. The measured strain data can be used to determine mechanical properties, validate
analytical models, guide design changes and test quality control.
Angular velocity refers to how fast an object rotates or revolves relative to a point or axis. It
describes the number of radians an object travels in a particular unit of time.
Angular velocity is measured in radians per second (rad/s) or radians per minute (rad/min). The
radians specify the angle of arc length the rotating body covers.
Angular velocity ω is related to linear velocity v and radius r by ω = v/r.
Angular velocity plays an important role in analyzing rotating mechanical systems like motors,
turbines, discs, gears, pendulums and gyroscopes. Understanding concepts around angular velocity
helps engineers design, operate and control such rotational machinery effectively.
The motion, stability and performance of these systems depend directly on the accurate
measurement and control of angular velocity as a function of time. Common instruments for
measuring angular velocity include:
1. Tachometer - This instrument measures the speed of rotation directly in revolutions per minute
(rpm). A small generator is attached to the rotating axis, which produces a voltage proportional to
rpm.
2. Rotary encoder - An optical or magnetic coded disc provides output pulses indicating
incremental angular motion. Counting the pulses over a time interval determines the angular
velocity (see the sketch after this list).
3. Stroboscope - A strobe light directed at a rotating object seems to freeze its motion at certain
flash rates. Matching the flash rate to rpm allows direct optical observation and measurement of
angular velocity.
4. Gyroscope - Measures angular velocity by detecting the Coriolis force exerted on an oscillating
mass within the gyroscope when it rotates. The output signal represents the rate of turn.
5. Accelerometer - An accelerometer mounted at a fixed radius from the rotation axis experiences
centripetal acceleration. This acceleration value and radius give the angular rate.
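As promised under item 2, angular velocity follows directly from an incremental encoder's pulse count; the pulses-per-revolution, count and interval below are illustrative:

```python
# Angular velocity from encoder pulses: N pulses counted over dt seconds
# on a disc giving PPR pulses per revolution:
#   omega = 2*pi * (N / PPR) / dt   (rad/s)
import math

def angular_velocity(pulses, ppr, interval_s):
    return 2 * math.pi * (pulses / ppr) / interval_s

omega = angular_velocity(pulses=1024, ppr=2048, interval_s=0.1)
print(f"omega = {omega:.2f} rad/s ({omega * 60 / (2 * math.pi):.1f} rpm)")  # 300 rpm
```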
The selection depends on motion type (uniform, oscillatory etc.), accuracy required, environment
and interface. These instruments find wide application in research, industry and navigation for
measuring angular velocity.
Temperature: the temperature of a substance is a measure of the hotness or the coldness of that
substance. It is the thermal state of the body which determines whether it will give heat to, or
receive heat from other bodies. The terms temperature and heat are closely related. Temperature
may be defined as “degree of heat” but heat is usually taken to mean “quantity of heat”.
Temperature and heat are related quantitatively since heat flows, of its own accord, from a body
at a higher temperature to a body at a lower temperature. It is therefore important to remember that
in temperature measurement, two bodies in intimate contact are at the same temperature only if
there is no heat flow between them.
Temperature scales: Temperature scales are based upon some recognized fixed points. At least two
fixed points are required which are constant in temperature and can be easily reproduced such as:
The lower fixed point, or ice point, is the temperature of ice, prepared from distilled water, when
melting under a pressure of 760 mmHg. The upper fixed point is the temperature of steam from
pure distilled water boiling under a pressure of 760 mmHg. The boiling point of water varies
greatly with applied pressure. Thus, it is important to note the pressure at which the water is
boiling.
The temperature difference between the ice point and the steam point is known as the
“fundamental interval”. In order to graduate a thermometer between these fixed points, the
temperature between these points is divided into several equal parts.
Fahrenheit and centigrade (Celsius) temperature scales: the Fahrenheit scale, abbreviated °F, was
introduced in 1724 by the German physicist Fahrenheit. On this scale, the melting point of ice is
designated as 32°F and the boiling point of water as 212°F. The centigrade scale was introduced
in 1742 by the Swedish astronomer Celsius. On this scale, the melting point of ice is 0°C and the
boiling point 100°C. Between these two fixed points, the Fahrenheit scale is divided into 180 equal
divisions and the centigrade scale into 100 equal divisions. Since both scales are linear,
temperatures can easily be converted from one to the other using the following equation:
$$\frac{^{\circ}C}{100} = \frac{^{\circ}F - 32}{180}$$
Kelvin and Rankine temperature scales: the Kelvin scale, abbreviated °K, was introduced in about
1848 by Lord Kelvin. On the Kelvin scale the ice point is 273.15°K and the steam point is
373.15°K. The Kelvin scale, like the centigrade scale, is also divided into 100 equal divisions. The
centigrade (°C) can be converted into Kelvin (°K) using the equation:
$$^{\circ}K = \,^{\circ}C + 273.15$$
The Rankine scale: On the Rankine scale, which is abbreviated as oR, the ice point is 491.7oR and
the steam point is 671.7oR. The Rankine scale, like the Fahrenheit scale is divided into 180 equal
divisions. Temperatures in Fahrenheit (oF) can be converted into Rankine (oR) by using the
equation:
$$^{\circ}R = \,^{\circ}F + 459.69$$
Both Rankine and Kelvin temperature scales are called absolute scales because they use
absolute zero as one of their reference points.
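The scale relations quoted above translate directly into conversion routines; a minimal sketch using exactly the equations given in these notes:

```python
# Temperature conversions from the relations in the text:
#   C/100 = (F - 32)/180,  K = C + 273.15,  R = F + 459.69.

def c_to_f(c): return c * 180.0 / 100.0 + 32.0
def f_to_c(f): return (f - 32.0) * 100.0 / 180.0
def c_to_k(c): return c + 273.15
def f_to_r(f): return f + 459.69   # constant as quoted in these notes

# Check against the fixed points:
print(c_to_f(0.0), c_to_f(100.0))   # 32.0 212.0 (ice and steam points, deg F)
print(c_to_k(0.0), c_to_k(100.0))   # 273.15 373.15 (ice and steam points, K)
print(f_to_r(32.0))                 # 491.69, the ice point on the Rankine scale
```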
The Reaumur scale: the Reaumur scale, abbreviated °R′, was introduced in about 1731. It assigns
0°R′ as the ice point and 80°R′ as the steam point. It is often used in the alcohol industries.
Methods of temperature measurement
The temperature measuring instruments are classified according to the nature of change
produced in the testing body by the change of temperature. They may be classified as follows:
• Expansion thermometers
• Filled system thermometers
• Electrical temperature instruments
• Pyrometers.
1.3.1 Expansion thermometers: these thermometers are classified according to the substance which
expands. They may be described under three headings as follows:
a) bimetallic (solid expansion) thermometers
b) liquid-in-glass thermometers
c) liquid-in-metal thermometers
Bimetallic thermometers: the deflection of a bimetallic strip is given by
$$S \propto \frac{l^2 \, \Delta T}{t}$$
where: S = distance moved by the tip of the strip, l = length of the strip, ΔT = temperature change,
and t = thickness of the metal.
The movement of a bimetallic strip is utilized to deflect a pointer over a graduated scale.
The longer the length of the strip, the larger the deflection of the tip of the strip; since the distance
is proportional to the square of the length. A longer strip can be contained in a relatively small space if
the strip is wound in a spiral, a helix or a multi-helix form. If the bimetallic element is wound in a
spiral form, the spiral coil is tightened with an increase in temperature. As it coils, the centre post
rotates clockwise, and thus a pointer attached to the post also moves on a calibrated temperature
scale. (Fig 1.2). This type of temperature indicator is often used in homes and offices to indicate
ambient air temperature.
The bimetal can also be used in the form of a helix to indicate temperature. This type of
industrial bimetallic thermometer is shown in Fig 1.3. It consists of a tightly wound bimetallic strip
located inside the stem of the thermometer with one end fastened permanently to the outer casing.
A strip is attached to a centre post that extends from the stem to the centre of the indicating dial.
A pointer is attached to the centre post. When the temperature surrounding the stem changes, the
bimetal expands and the helical coil winds and unwinds which rotates the center post. This causes
the pointer to move on the dial to indicate the measured temperature. Bimetallic thermometers are
available for temperatures ranging from -75 to 540oC.
Liquid-In-Glass Thermometers: the liquid in glass thermometer is one of the simplest temperature
measuring devices, widely used in both laboratory and industry. Its operation is based on the fact
that liquids expand as the temperature rises. In this type of thermometer, the expansion causes the
liquid to rise in the tube, indicating the temperature. The simplest form of liquid in glass
thermometer is shown in Fig 1.4. It consists of a small-bore glass tube with a thin-walled glass
bulb at its lower end containing almost all the liquid (usually mercury). As heat is transferred into
the mercury, the mercury expands, pushing the column of mercury higher in the capillary above
which indicates the temperature.
The liquid-in-glass thermometer is commonly used for a temperature range of -120 to 320°C. But
when mercury is used as liquid, it freezes at -39oC. Thus, for measuring very low temperatures,
alcohol is used as the liquid. This type of thermometer cannot be used to measure temperatures
higher than 600°C, because such temperatures permanently change the volume of the bulb, thus
destroying the accuracy of the instrument.
Liquid-In-Metal Thermometer: the distinct disadvantages of liquid-in-glass thermometers are
overcome in liquid-in-metal thermometers. A liquid-in-metal thermometer is shown in Fig 1.5, in
which mercury is the liquid and the metal is steel. This mercury-in-steel thermometer works on the
same principle as the liquid-in-glass thermometer. The glass bulb is replaced by a steel bulb and
the glass capillary by one of stainless steel. As the mercury in the system is not visible, a Bourdon
tube is used to measure the change in its volume. The Bourdon tube, the capillary tube, and the
bulb are completely filled with mercury, usually at high pressure.
When the temperature to be measured rises, the mercury in the bulb expands more than the bulb
so that some mercury is driven through the capillary tube to the bourdon tube. As the temperature
continues to rise, an increasing amount of mercury will be driven into the bourdon tube, causing it
to bend. One end of the bourdon tube is fixed, while the motion of the other end is communicated
to the pointer which moves on a calibrated temperature scale.
Generally, mercury is used as a liquid. But it has its limitations, particularly at the lower
end of the temperature scale. For this and other reasons, other liquids are also used in place of
mercury; these include xylene, alcohol, ether etc.
Gas Thermometers: the operation of the gas thermometer is based on the ideal gas law which states
that the volume of a gas increases with temperature if the pressure is maintained constant, and the
pressure increases with temperature if the volume is maintained constant. Therefore, if a certain
volume of inert gas is contained in a bulb, capillary and Bourdon tube, with most of the gas in the
bulb, then the pressure indicated by the Bourdon tube may be calibrated in terms of the temperature
of the bulb.
Nitrogen is the favourite fill for the gas-filled thermometer because it is almost inert and
inexpensive. It does react somewhat with the steel at temperatures exceeding 427oC, and it does
act less like a perfect gas at extremely low temperatures. Under these conditions, helium should
be used.
An advantage of the gas thermometer is that the gas in the bulb has a lower thermal capacity
than a similar quantity of liquid so that the response to temperature changes will be more rapid
than for a liquid-filled system with a bulb the same size and shape. It has a temperature range of -
200 to 1500oC depending on the gas used.
Filled-system thermometers: these consist of a Bourdon tube, a capillary tube and a thermometer
bulb, all interconnected as shown in Fig 1.6. The entire system is sealed after filling with the
appropriate liquid (known as the filling liquid) under pressure at the normal ambient temperature.
The common liquids used are mercury, ethyl alcohol, xylene and toluene. When in use, the
thermometer bulb is inserted into the substance whose temperature is to be measured. This causes
the filling liquid inside the bulb to heat or cool until its
temperature matches that of the substance. This change in temperature causes the filling liquid to
expand or contract and thus the bourdon tube moves. With the increase in temperature, the liquid
expands and this forces the bourdon tube to uncoil. With the decrease in temperature, the liquid
contracts and it forces the bourdon tube to coil more tightly. The movement of the bourdon tube
may be used to drive a pointer for indicating temperature.
Liquid-Filled Thermometers: these thermometers are filled with liquids (other than mercury) and
operate on the principle of liquid expansion with an increase in temperature. The filling is usually
an inert hydrocarbon such as xylene (C8H10) which has a coefficient of expansion six times that of
mercury and makes smaller bulbs possible. Other liquids (even water) are sometimes used. The
criterion is that the pressure inside the system must be greater than the vapour pressure of the liquid
to prevent bubbles of vapour from forming. Also, the liquid should not be allowed to solidify even
in storage; otherwise, the calibration may be affected.
Vapour Pressure Thermometers: in this system, the bulb is partially filled with liquid, while the
capillary tube and the bourdon are filled with vapour. In this system, some of the liquid vaporizes
during operation.
A vapour pressure thermometer is shown in Fig 31. The liquid in the system boils and vaporizes
during operation which creates a gas or vapour inside the capillary and bourdon tube. The liquid
inside the bulb continues to boil until the pressure in the system equals the vapour pressure of the
boiling liquid. At this point, the liquid stops boiling unless its temperature increases. When the
temperature of the substance surrounding the bulb drops, the liquid and the vapour inside the bulb
also cool, causing some of the vapour to condense. As the vapour condenses, the pressure inside
the system decreases. Due to this change in pressure, the bourdon tube uncoils as the pressure
increases and coils more tightly as it decreases. This movement of the Bourdon tube may be connected
to a pointer, or to a pen on a strip chart recorder, or to a transmitter to indicate temperature.
Form measurement
Form measurement typically refers to the process of assessing the shape, contour, and geometric
characteristics of an object or surface. This can include parameters such as roundness, flatness,
straightness, cylindricity, and other dimensional aspects that define the form of an object. Form
measurement is crucial in various industries, including manufacturing, engineering, and quality
control, where precise adherence to specifications is essential for the functionality, performance,
and interoperability of components. Techniques for form measurement range from traditional
methods using precision instruments like callipers and micrometres to advanced technologies such
as coordinate measuring machines (CMMs), optical profilometers, and 3D scanning devices. The
choice of method depends on the specific requirements of the measurement task, including
accuracy, speed, and complexity of the object being measured.
Measurement of straightness
Straightness typically refers to the linear deviation of a surface or axis from the ideal straight-line
geometry. There are a few common methods employed for straightness measurement:
1. Mechanical indicators - A dial indicator or linear variable differential transformer (LVDT) can
be mounted on a precision straight edge or slide to map deviations. The instrument is traversed
along the test line while maintaining contact or constant probe stand-off.
2. Optical beams - Collimated laser sources and detectors sense a relative angular change from
straightness error as they travel along the line. Interference effects are used to detect sub-micron
deviations.
3. Autocollimators - They project a collimated beam that reflects off a mirror mounted on the
moving target straight line. Angular deviations from non-straightness introduce lateral shifts at the
detector.
4. Interferometric techniques - Laser interferometers can discern straightness errors from changes
in optical path length between a reference and measurement arm.
5. Gravity-referenced methods - Precision spirit levels and electronic tilt sensors referenced to
gravity can map orientation changes along the line.
The specific selection depends on accuracy grade, environment and error separation needs
according to applicable straightness standards and tolerances.
Measurement of flatness
Flatness typically refers to the deviation of a surface from the ideal plane geometry. There are
several common methods to measure surface flatness deviations:
1. Mechanical dial indicators - Precision indicators are swept across a surface plate in a grid pattern
while maintaining constant contact force. The indicator deviations map the flatness profile.
2. Optical flats - Monochromatic light reflects between the test surface and an optical flat.
Interference fringes displayed indicate contours of deviation from flatness. Phase shifting analysis
can provide high-resolution maps.
3. Autocollimators - They project a collimated beam that reflects off the test surface onto an
internal detector. Angular deviations from flatness introduce lateral shifts in the reflected beam.
The specific method selection depends on the surface area, measurement speed, environmental
influences and the required flatness tolerance grade per applicable standards. Maintaining
cleanliness and calibration traceability are also key considerations during flatness evaluation.
Thread measurement
Threads are of prime importance; they are used as fasteners. A thread is a helical groove, used to
transmit force and motion. As in the plain shaft and hole assembly, the object of dimensional
control is to ensure a certain consistency of fit. The performance of screw threads during their
assembly with a nut depends upon several parameters such as the condition of the machine tool
used for screw cutting, the work material and the tool.
Screw threads are used to transmit power and motion, and are also used to fasten two
components with the help of nuts, bolts and studs. There is a large variety of screw threads, varying
in their form, included angle, head angle, helix angle, etc. Screw threads are
mainly classified into 1) external threads and 2) internal threads.
Fig. 36: a) External Thread b) internal thread
Height of thread: It is the distance measured radially between the major and minor diameters.
Addendum: Radial distance between the major and pitch cylinders for the external thread.
The radial distance between the minor and pitch cylinder for the internal thread.
Dedendum: It is the radial distance between the pitch and minor cylinders for the external thread.
Also, the radial distance between the major and pitch cylinders for internal thread.
Fig. 39: floating carriage diameter measuring machine
Fig. 45: pitch measuring machine
GEAR MEASUREMENT
Introduction
Gear is a mechanical drive which transmits power through a toothed wheel. In this gear drive,
the driving wheel is in direct contact with the driven wheel. The accuracy of gearing is a very
important factor when gears are manufactured. The transmission efficiency is almost 99% in gears.
So, it is very important to test and measure the gears precisely. For proper inspection of gear, it is
very important to concentrate on the raw materials, which are used to manufacture the gears, also
very important to check the machining of the blanks, heat treatment and the finishing of teeth. The
gear blanks should be tested for dimensional accuracy and tooth thickness for the forms of gears.
The most commonly used forms of gear teeth are:
1. Involute
2. Cycloidal
The involute gears are also called straight tooth or spur gears. The cycloidal gears are used in
heavy and impact loads. The involute rack has straight teeth. The involute pressure angle is
either 20° or 14.5°.
Types of gears
1. Spur gear: cylindrical gear whose tooth traces are straight lines. These are used for transmitting
power between parallel shafts.
2. Spiral gear: The tooth of the gear traces curved lines.
3. Helical gears: These gears are used to transmit power between parallel shafts as well as
nonparallel and nonintersecting shafts. A helical gear is a cylindrical gear whose tooth traces are
helices.
4. Bevel gears: The tooth traces are straight-line generators of cones. The teeth are cut on the
conical surface. It is used to connect the shafts at right angles.
5. Worm and Worm wheel: It is used to connect the shafts whose axes are non-parallel and non-
intersecting.
6. Rack and Pinion: Rack gears are straight spur gears with infinite radius.
Gear thickness measurement, often referred to as tooth thickness measurement, is a critical aspect
of gear inspection. It ensures that the gears will mesh correctly without too much play (which can
cause slippage) or too little play (which can cause binding or excessive wear). Tooth thickness is
typically defined as the width of a gear tooth measured along the pitch circle.
1. Vernier Caliper: A simple method to measure gear tooth thickness is by using a vernier calliper.
The caliper is placed over the gear tooth and the measurement is taken across the tooth. This
method is less precise and more suitable for quick checks or for gears where high precision is not
critical.
2. Micrometer: A more accurate method involves using a specialized micrometre with a ball or
disc-shaped anvil that is designed to sit in the tooth space. The micrometre is closed until the anvil
and spindle contact the gear tooth surfaces, and the reading is taken to determine the tooth
thickness.
3. Gear Tooth Vernier: This is a specialized tool that combines a vernier scale with a pair of
adjustable jaws that can measure tooth thickness directly. It is more precise than a standard vernier
calliper and is specifically designed for measuring gears.
4. Optical Comparators: An optical comparator can project a magnified image of the gear tooth
profile onto a screen. The image can then be measured using a reticle or measuring software to
determine tooth thickness.
5. Gear Measuring Wires: This method involves using a pair of precision wires or pins that are
placed in the tooth spaces opposite each other. The measurement over the wires is taken with a
micrometre, and the tooth thickness is then calculated using a formula that accounts for the wire
diameter and the gear's pitch and pressure angle.
6. Tooth Thickness Gauges: These gauges are designed to measure the thickness at a particular
section of the tooth, often at the pitch circle. The gauge has a feeler or a probe that fits into the
tooth space, and a reading is taken directly that corresponds to the tooth thickness.
The appropriate method for measuring gear tooth thickness will depend on the size of the gear, the
precision required, and the gear's module or diametral pitch. It is important to follow proper
measurement techniques and to calibrate measuring instruments regularly to ensure accurate and
reliable results.
Waviness: surface irregularities which are of greater spacing than roughness.
Roundness measurement
Roundness measurement involves assessing the degree to which the shape of an object
approximates a perfect circle. Roundness is a critical dimension in components that require high
precision, such as bearings, shafts, and precision rollers. Deviations from roundness can lead to
imbalanced rotating parts, increased wear, and reduced performance.
1. Roundness Testers or Form Testers: These are precision instruments that measure the
deviation of the surface from an ideal circle. A typical roundness tester has a spindle that rotates
the workpiece while a sensor measures the distance to the surface from a fixed point. The sensor
tracks the surface as the part rotates, and the data is analyzed to determine the roundness profile.
2. Optical Methods: Some optical systems use lasers or other light sources to measure roundness.
These non-contact methods can rapidly scan the surface without physically touching the part,
reducing the risk of deformation or damage to delicate surfaces.
3. Gauges: For simpler applications, go/no-go gauges or ring gauges may be used to check
roundness. These gauges have a fixed size and can quickly determine if a part is within a certain
roundness tolerance.
The roundness profile obtained by these methods is analyzed to determine various roundness
parameters, such as:
- Radial Run-out: The difference between the maximum and minimum radius measured from the
centre of the part to the surface.
- Total Run-out: The combination of radial run-out and axial run-out, which is the variation along
the axis of rotation.
The measurement results are compared to the specified roundness tolerances to determine if the
part meets the required standards. It is important to ensure that the instruments are properly
calibrated and that the part is correctly mounted and aligned during measurement to obtain accurate
results.
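As a minimal sketch of how a measured roundness profile might be reduced to a single figure, the code below fits a least-squares circle to sampled radius data and reports the peak-to-valley deviation; the function name and the simulated three-lobed part are hypothetical.

```python
import numpy as np

def roundness_lsc(theta, r):
    """Peak-to-valley roundness about a least-squares circle centre.

    theta: sample angles in radians; r: measured radii from the spindle axis.
    A small part/spindle eccentricity appears as a once-per-revolution
    sinusoid in r; the least-squares fit removes it, and what remains is
    the out-of-roundness of the part itself."""
    A = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
    (xc, yc, R), *_ = np.linalg.lstsq(A, r, rcond=None)
    residual = r - (xc * np.cos(theta) + yc * np.sin(theta) + R)
    return residual.max() - residual.min()

# Simulated 360-point trace of a slightly off-centre, three-lobed part:
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r = 25.0 + 0.004 * np.cos(3 * theta) + 0.010 * np.cos(theta)
print(round(roundness_lsc(theta, r), 4))   # ~0.008 (twice the lobe amplitude)
```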
The basic concept of laser
Lasers are devices that emit coherent, monochromatic and highly directional light beams, usually
in the ultraviolet, visible or infrared range. The name LASER itself is an acronym for “Light
Amplification by Stimulated Emission of Radiation”.
The fundamental principle behind laser operation is stimulated emission, proposed by Albert
Einstein in 1917. It refers to the process by which an incoming photon, interacting with an excited
atom or molecule, causes it to release another photon of the same frequency, phase and direction.
This forms the basis of optical amplification, where a collection of atoms/molecules are excited
by an external source (optical pumping) to higher energy states. As these excited states decay and
emit photons, they stimulate more photons of identical properties to be emitted. The emitted light
bounces back and forth within the amplifying medium contained between two mirrors, one fully
reflective and the other partially transmissive.
This repeated amplification leads to an intense beam of near-monochromatic, coherent laser light
emitted from the partially transmissive end. The properties of the emitted laser light depend on the
amplifying medium, the optical cavity design and the pumping mechanism used. An extensive range of
lasers is available today, covering the electromagnetic spectrum from the ultraviolet to the far infrared.
The terms "DC" and "AC" in the context of laser interferometers do not typically refer to the types of
lasers themselves but rather to the type of signal processing used in the interferometry system. In laser
interferometry, a laser beam is split into two paths: one reflects off a reference surface and the other off
the test surface. The beams are then recombined to create an interference pattern that can be analyzed
to measure very small distances or changes in distance. Here's how DC and AC signal processing in laser
interferometry can be understood:
1. DC Laser Interferometers: DC interferometer systems measure the intensity of the recombined
beams directly, without modulating the laser light. The fringe intensity is converted to a slowly
varying (DC) electrical signal, which makes such systems simple but comparatively sensitive to drift,
stray light and intensity fluctuations of the source.
2. AC Laser Interferometers: AC interferometer systems, on the other hand, use alternating current
(AC) methods that involve modulating the laser light at a known frequency. The modulation can make
the system more sensitive and less prone to noise and signal distortion. It allows for the measurement
of dynamic changes and can provide higher-resolution measurements. AC systems often use heterodyne
or homodyne detection techniques for signal processing.
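For a sense of the scale involved, the sketch below applies the basic fringe-counting relation of a two-beam (Michelson-type) interferometer; the fringe count used is hypothetical, while 632.8 nm is the standard helium-neon wavelength.

```python
# In a Michelson-type interferometer the optical path changes by twice the
# mirror movement, so each interference fringe corresponds to half a
# wavelength of displacement.
wavelength_nm = 632.8          # helium-neon laser line (vacuum wavelength)
fringe_count = 1500            # fringes counted during the move (hypothetical)

displacement_um = fringe_count * (wavelength_nm / 2.0) / 1000.0
print(displacement_um)         # 474.6 -> about 0.47 mm of travel
```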
Types of lasers
Lasers are generally categorized by the gain medium used to amplify the light. Here are some
common types:
1. Gas Lasers: These use gas as the gain medium. Examples include the helium-neon (HeNe) laser,
carbon dioxide (CO2) laser, and argon-ion lasers.
2. Solid-State Lasers: These have a solid gain medium, typically a doped insulating crystal. Examples
are the neodymium-doped yttrium aluminium garnet (Nd:YAG) laser and the titanium-doped sapphire
(Ti:sapphire) laser.
3. Dye Lasers: These use an organic dye as the gain medium and can produce a wide range of
wavelengths depending on the dye used.
4. Semiconductor Lasers: Also known as diode lasers, these use a semiconductor as the gain
medium and are compact and efficient. They are commonly found in consumer electronics.
5. Fiber Lasers: These use a doped optical fibre as the gain medium and are known for their high
output power and excellent beam quality.
6. Excimer Lasers: These are a type of gas laser that uses a mixture of noble gases and halogens to
produce ultraviolet light.
Each type of laser has its own set of characteristics that make it suitable for specific applications,
including use in various interferometry systems for precision measurements.
Applications of lasers
Industrial Material Processing - Lasers are used extensively for cutting, welding, drilling, marking
and other manufacturing processes for everything from automotive parts to smartphones. The
focused laser beam provides non-contact precision processing.
Healthcare - Lasers have revolutionized applications like eye surgery for vision correction, tumour
removal, dental procedures and various cosmetic treatments by enabling localized tissue removal,
sealing blood vessels and tissue regeneration.
Metrology and Imaging - Laser-based sensors like LIDAR, interferometers, gyroscopes, barcode
scanners and confocal microscopes exploit laser properties to enable precise measurement and
imaging across disciplines like self-driving vehicles, semiconductor fabrication, biomedical
research and quality control.
Entertainment - Lasers are ubiquitous in entertainment at concerts, festivals, theme parks and other
venues, where rapid scanning of intense laser beams with galvanometer (galvo) mirrors makes bright,
vibrant visual effects possible.
Industrial calibration
Calibration is the process of configuring an instrument to provide a result for a sample within an
acceptable range. Essentially, it is the act of ensuring that a measuring device produces accurate results,
which are consistent with the standard or known reference. This is typically done by comparing the
device in question with a calibration standard of known accuracy.
Calibration is important for several reasons:
1. Quality: Calibration ensures that products meet their specifications. For example, in manufacturing, if
a machine tool is not calibrated, the parts it produces may not be within the desired tolerances, leading
to poor fit, function, or performance. Regular calibration of measuring instruments helps in maintaining
the quality of the production output, ensuring that components meet the required standards and
specifications.
2. Productivity: Properly calibrated equipment works as intended, which means fewer errors and
defects, leading to higher efficiency. When measurement tools are not calibrated, the risk of producing
non-conforming products increases, which can result in rework or scrap, both of which are costly and
time-consuming. By ensuring that instruments are calibrated, production processes can run smoothly,
with minimal interruptions, thus optimizing productivity.
Regular calibration is therefore a key aspect of quality assurance programs in various industries, and it
helps in maintaining compliance with relevant standards and regulations. It is also a fundamental
practice in risk management, as it reduces the probability of errors that could lead to product failures,
safety incidents, or costly downtime.
Calibrating pressure instruments involves comparing the output of the instrument to a known reference
standard across a range of pressure values. This is typically done using a calibration setup that includes a
pressure source, a pressure reference standard, and appropriate connectors and adapters. The
instrument under calibration is connected to the setup, and the output readings are compared to the
reference standard. Any deviations or errors in the instrument's measurements can be identified, and
adjustments can be made to align the readings with the reference standard. The calibration process
ensures that pressure instruments provide accurate and reliable measurements, which is crucial in
industries such as manufacturing, oil and gas, and HVAC systems.
Calibrating temperature instruments involves verifying the accuracy and reliability of temperature
measurements. This is done by comparing the instrument's output readings to a known reference
standard, such as a calibrated thermometer or a temperature bath. The instrument under calibration is
placed in a controlled temperature environment, and its readings are compared to the reference
standard at various temperature points. Any deviations or errors can be identified, and necessary
adjustments can be made to correct the readings. Temperature calibration is essential in industries such
as food processing, pharmaceuticals, and scientific research, where precise temperature control is
critical for product quality and safety.
Calibrating flow instruments ensures accurate measurement of fluid flow rates. Flow calibration involves
comparing the instrument's output readings to a calibrated flow standard, such as a flowmeter or a
volumetric container. The instrument under calibration is connected to the calibration setup, and the
flow rates are varied across a range of values. The instrument's readings are compared to the reference
standard, and any discrepancies are noted. Adjustments can be made to bring the instrument's readings
in line with the reference standard. Flow calibration is important in industries such as water and
wastewater management, oil and gas, and chemical processing, where precise flow measurement is
crucial for process control, efficiency, and compliance with regulations.
In all these calibration processes, it is essential to follow standardized procedures and use traceable
calibration standards to ensure accurate and reliable measurements. Calibration helps maintain the
quality, accuracy, and safety of instruments, ensuring that they perform optimally within their specified
ranges.
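As a minimal sketch of the compare-to-reference step common to all three cases, the snippet below tabulates the error of hypothetical instrument readings against reference-standard values and flags points outside an assumed acceptance limit of 0.5 % of span.

```python
# Hypothetical five-point check of a 0-10 bar pressure instrument against
# a reference standard (the values and the tolerance are illustrative).
reference = [0.0, 2.5, 5.0, 7.5, 10.0]       # standard values (bar)
readings  = [0.02, 2.54, 5.03, 7.58, 10.04]  # instrument output (bar)
span = max(reference) - min(reference)
limit_pct = 0.5                              # acceptance limit, % of span

for ref, rdg in zip(reference, readings):
    err_pct = 100.0 * (rdg - ref) / span     # error as % of span
    status = "PASS" if abs(err_pct) <= limit_pct else "FAIL"
    print(f"ref {ref:5.2f} bar  reading {rdg:5.2f} bar  "
          f"error {err_pct:+.2f}% of span  {status}")
```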
CONTROL SYSTEMS
Control systems play a crucial role in various industries and applications, offering several
benefits and enabling efficient and reliable operations. Their importance and areas of application
are discussed later in this section. First, control systems can be classified into various branches
based on their applications and the specific variables they aim to control. Here are some common
branches of control system applications:
1. Speed Control: Speed control systems are designed to regulate the speed of a motor, engine, or
any other rotating machinery. These systems ensure that the speed remains constant or follows a
specific profile, based on the requirements of the application. Examples include motor speed
control in industrial drives, automotive cruise control, and conveyor belt speed control.
2. Position Control: Position control systems focus on maintaining or controlling the position of
an object or system. They ensure that the object reaches and maintains a desired position
accurately. Examples include robotic arm control, CNC machine tool positioning, and satellite
dish positioning.
3. Process Control: Process control systems are used to regulate and control various parameters
in industrial processes. These systems aim to maintain specific variables such as temperature,
pressure, flow rate, or level within desired ranges. Examples include temperature control in
chemical reactors, pressure control in HVAC systems, and flow control in water treatment plants.
4. Path Control: Path control systems are used to control the movement or trajectory of objects
along a specific path. These systems ensure that the object follows a predefined path accurately.
Examples include automated guided vehicles (AGVs), robotic assembly line control, and CNC
machine tool path control.
5. Stability Control: Stability control systems focus on maintaining stability and preventing
instability or oscillations in a system. They are commonly used in various applications to ensure
safe and stable operation. Examples include aircraft autopilot systems, vehicle stability control
(ESP), and power system stability control.
6. Level Control: Level control systems are used to regulate and control the level of liquids or
solids in tanks or containers. These systems ensure that the level remains within the desired
limits. Examples include water level control in reservoirs, fuel level control in tanks, and liquid
level control in chemical processes.
These are just a few examples of the branches of control system applications. Control systems
can be further categorized based on specific industries, domains, or variables they control. The
choice of the control system branch depends on the specific requirements and objectives of the
application at hand.
Importance of control systems:
1. Automation and Efficiency: Control systems automate processes and tasks, reducing the need
for manual intervention. They optimize operations, improve efficiency, and minimize human
errors, leading to increased productivity and cost savings.
2. Consistency and Quality: Control systems ensure consistent and precise control over variables
such as temperature, pressure, speed, and flow rate. This consistency helps maintain product
quality, reduces variations, and ensures compliance with standards and specifications
3. Safety and Reliability: Control systems provide safety measures by monitoring and controlling
critical parameters. They can detect abnormalities, trigger alarms, and initiate corrective actions
to prevent accidents, equipment damage, and system failures. This enhances operational safety
and reliability.
4. Process Optimization: Control systems continuously monitor and analyze data to optimize
processes and improve performance. They can adjust setpoints, regulate parameters, and
implement feedback control strategies to achieve optimal operating conditions and maximize
efficiency.
Areas of application of control systems:
1. Industrial Automation: Control systems are extensively used in manufacturing and industrial
processes. They control machinery, robots, and production lines to ensure precise and consistent
operations. Control systems are applied in sectors such as automotive, chemical, pharmaceutical,
food and beverage, and electronics manufacturing.
2. Energy Management: Control systems play a vital role in energy generation, distribution, and
consumption. They regulate power generation, grid stability, and energy efficiency in industries,
buildings, and renewable energy systems.
3. Process Control: Control systems are used to regulate and optimize various processes,
including chemical reactions, refining, water treatment, HVAC (Heating, Ventilation, and Air
Conditioning), and wastewater treatment. They ensure proper control of variables like
temperature, pressure, level, and flow rate.
4. Building Automation: Control systems are utilized in building management systems to control
lighting, heating, ventilation, air conditioning, and security systems. They optimize energy
usage, maintain comfort levels, and enhance occupant safety and security.
5. Environmental Monitoring: Control systems are used in environmental monitoring and control
applications, such as air quality management, pollution control, and waste management. They
help regulate emissions, monitor pollutant levels, and ensure compliance with environmental
regulations.
6. Biomedical and Healthcare: Control systems are applied in medical devices, patient
monitoring systems, and healthcare facilities. They regulate parameters like drug dosage, patient
vital signs, and environmental conditions to ensure accurate diagnostics, treatment, and patient
safety.
Control systems have a wide range of applications, contributing to improved efficiency, safety,
and reliability across various industries and sectors. They enable precise control, automation, and
optimization of processes, leading to enhanced productivity, quality, and operational
performance.
There are two main types of control systems: open-loop control systems and closed-loop control
systems.
1. Open-Loop Control Systems: In an open-loop control system, the control action is not
influenced by the system's output. It operates based on a predetermined set of instructions
or inputs. The system's output is not measured or compared to the desired output or
reference value. Instead, the control system applies a fixed control signal or input to the
system. The system's response is solely determined by its internal dynamics and the
applied input. Open-loop control systems do not have feedback, which means they cannot
make corrections based on the system's actual output. Examples of open-loop control
systems include automatic washing machines, toasters, and traffic signal timers.
2. Closed-Loop Control Systems: In a closed-loop control system, the control action is
influenced by the system's output. The output is measured by a sensor and fed back for
comparison with the desired setpoint; the resulting error is used to adjust the control
signal so that the actual output is driven toward the desired value. Examples of closed-loop
control systems include thermostatic temperature control, motor speed control with feedback,
and autopilot systems.
Fig. 59: Block diagram of a closed-loop control system
Continuous and sequential systems are sub-divisions of open-loop control systems. Here's a
description of each with examples:
1. Continuous Systems: Continuous systems apply the control action smoothly and without
interruption over time, rather than in discrete steps. Examples of continuous open-loop
systems include:
- Speed Control: A cruise control system in a car. The control action continuously adjusts the
throttle to maintain a constant speed without discrete switching.
2. Sequential Systems: Sequential systems operate in a step-by-step manner. These systems have a
specific sequence or order in which the control actions are applied, and the actions are triggered
by specific events, conditions or elapsed times. Examples of sequential systems include:
- Traffic Light Control: A traffic light system at an intersection. The control actions are
triggered sequentially based on a predefined timing sequence or the detection of vehicles at
different lanes.
- Washing Machine Control: A washing machine that goes through different stages or steps in
a predefined sequence, such as filling water, agitating, rinsing, and spinning.
In both continuous and sequential systems, the control action is determined based on the input or
set of instructions without considering the system's output. These open-loop control systems rely
on the accuracy of the initial instructions and do not have feedback to make adjustments based
on the actual output.
Continuous and On-Off systems are sub-divisions of closed-loop control systems. Here's a
description of each with examples:
1. Continuous Systems: Continuous systems adjust the control signal smoothly and continuously
based on the feedback from the system's output. Examples of continuous systems in closed-loop
control include:
- Temperature Control: A thermostat controlling the temperature of a room. The control system
continuously adjusts the heating or cooling output based on the difference between the actual
temperature and the desired temperature.
- Speed Control: A closed-loop speed control system in a motor drive. The control system
continuously adjusts the motor's input voltage or current to maintain a constant speed based on
the feedback from speed sensors.
2. On-Off Systems: On-Off systems, also known as bang-bang control, involve switching the
control signal between two discrete states - fully ON or fully OFF. The control action is applied
in a binary manner, turning the control signal ON when the system's output falls below a certain
threshold and turning it OFF when the output exceeds another threshold. Examples of On-Off
systems in closed-loop control include:
- Thermostat Control: An air conditioning system with On-Off control. The control system
turns the cooling output ON when the temperature rises above a set threshold and turns it OFF
when the temperature falls below another threshold.
- Level Control: A pump control system for maintaining a certain liquid level in a tank. The
control system turns the pump ON when the liquid level falls below a set threshold and turns it
OFF when the level rises above another threshold.
In both continuous and On-Off systems, the control action in closed-loop control is influenced by
the feedback from the system's output. The continuous systems continuously adjust the control
signal, while the On-Off systems switch between discrete states based on threshold conditions.
These closed-loop control systems aim to maintain the desired output by continuously or
intermittently adjusting the control signal based on the feedback.
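A minimal sketch of such an On-Off (bang-bang) controller with hysteresis is shown below; the two temperature thresholds and the function name are hypothetical.

```python
def cooling_on_off(temp_c, currently_on, on_above=26.0, off_below=24.0):
    """Bang-bang (On-Off) cooling control with hysteresis.

    Cooling switches ON above on_above and OFF below off_below; between
    the two thresholds the previous state is kept, so the output does not
    chatter around a single setpoint."""
    if temp_c > on_above:
        return True
    if temp_c < off_below:
        return False
    return currently_on

state = False
for t in [23.0, 25.0, 26.5, 25.0, 23.5]:   # sample temperature readings
    state = cooling_on_off(t, state)
    print(t, "->", "ON" if state else "OFF")
```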
In a basic closed-loop control system, several terms are associated with its components and
operation. Here's an explanation of the key terms:
1. Process: The process refers to the system or plant being controlled. It could be a physical
system, such as a motor, temperature control system, or chemical reactor. The process generates
an output or response based on the control input.
2. Setpoint: The setpoint, also known as the reference value or desired value, is the target value
that the controlled variable should achieve or maintain. It represents the desired operating
condition of the process.
3. Sensor: The sensor, also called a transducer, measures the actual value of the controlled
variable or the output of the process. It provides feedback to the control system by converting the
physical quantity into an electrical signal.
4. Controller: The controller is the central component of the closed-loop control system. It
receives the feedback signal from the sensor and compares it to the setpoint. Based on this
comparison, the controller generates a control signal or output to adjust the process and bring the
actual value closer to the setpoint.
5. Actuator: The actuator receives the control signal from the controller and translates it into
physical action or manipulation of the process. It could be a motor, valve, heater, or any device
that can modify the process variable.
6. Closed-Loop: The closed-loop refers to the feedback loop in the control system. It involves
continuously comparing the actual value from the sensor to the setpoint and adjusting the control
signal accordingly. This feedback loop allows the control system to continuously correct and
regulate the process.
7. Controlled Variable: The controlled variable is the physical quantity or parameter of the
process that is being controlled. It could be temperature, pressure, flow rate, position, or any
other measurable parameter.
By understanding these terms, one can grasp the fundamental concepts and components of a
basic closed-loop control system.
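To tie these terms together, the sketch below steps a hypothetical proportional-only controller around a closed loop with a simple first-order process model; the gain, time step and values are illustrative, and the residual offset it prints is the steady-state error characteristic of proportional-only control.

```python
# One closed loop in code: setpoint -> error -> controller -> actuator ->
# process -> sensor -> feedback (all numbers are illustrative).
Kp = 2.0            # controller gain
dt = 0.1            # simulation time step (s)
setpoint = 50.0     # desired value of the controlled variable
pv = 20.0           # process variable, as reported by the sensor

for _ in range(300):
    error = setpoint - pv          # comparison of setpoint and feedback
    u = Kp * error                 # controller output, sent to the actuator
    pv += dt * (u - pv)            # first-order process response to u

print(round(pv, 2))  # 33.33: settles at Kp/(1+Kp) of the setpoint,
                     # the steady-state offset of P-only control
```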
Advantages of closed-loop control systems:
1. Improved Accuracy: Closed-loop control systems continuously compare the actual output to
the desired setpoint and make adjustments accordingly. This feedback mechanism allows for
precise control, resulting in improved accuracy and reduced errors.
2. Enhanced Stability: Closed-loop systems are inherently more stable compared to open-loop
systems. The feedback loop helps maintain stability by continuously monitoring and adjusting
the control signal to counteract disturbances and variations in the process.
3. Robustness: Closed-loop control systems are often more robust and can handle variations in
the process or external disturbances. The feedback mechanism enables the system to adapt and
make corrections based on real-time information, ensuring stable operation even in changing
conditions.
4. Improved Response Time: Closed-loop systems can respond quickly to changes in the process
or setpoint. The feedback loop enables the system to detect deviations and adjust the control
signal promptly, resulting in faster response times.
Disadvantages of closed-loop control systems:
1. Complexity: Closed-loop control systems are generally more complex compared to open-loop
systems. They require additional components such as sensors, controllers, and feedback
mechanisms, which can increase the system's complexity and cost.
2. Design and Tuning Challenges: Designing and tuning closed-loop control systems can be
more challenging. Selecting the appropriate control algorithm and tuning the controller
parameters to achieve optimal performance requires expertise and careful analysis.
3. Potential Stability Issues: While closed-loop systems are generally more stable, they can still
face stability issues if not designed or tuned properly. Improper controller design or aggressive
tuning can lead to oscillations, instability, or even system failure.
4. Dependency on Sensors: Closed-loop control systems heavily rely on accurate and reliable
sensors to provide feedback. If the sensors fail or provide incorrect measurements, it can
adversely affect the control system's performance.
5. Higher Cost: Closed-loop systems often involve additional components and complexity, which
can result in higher costs compared to open-loop systems. The cost of sensors, controllers, and
associated hardware can add to the overall system cost.
It's worth noting that while closed-loop systems offer several advantages, they may not be
necessary or suitable for every application. The decision to implement a closed-loop control
system should consider the specific requirements, complexity, and cost-effectiveness of the
application.
Block diagram: A control system may consist of several components. A block diagram of a system
is a pictorial representation of the functions performed by each component and of the flow of
signals between them. The basic elements of a block diagram are blocks (containing the transfer
functions of the elements), summing points, take-off points and arrows (branches).
Block: In a block diagram all system variables are linked to each other through functional blocks.
The functional block, or simply block, is a symbol for the mathematical operation on the input
signal to the block that produces the output.
Summing point: Although blocks are used to identify many types of mathematical operations, the
operations of addition and subtraction are represented by a circle, called a summing point. A
summing point may have one or several inputs, each with its appropriate plus or minus sign, but
it has only one output, which is equal to the algebraic sum of the inputs.
Take-off point: A take-off point is used to allow a signal to be used by more than one block or
summing point.
In the block diagram of a closed-loop system, the transfer function is given inside the block; the
input in this case is E(s) and the output is C(s). The elements are:
Functional block – each element of the practical system, represented by a block with its transfer function
Branch – a line showing the connection between the blocks
Arrow – associated with each branch to indicate the direction of signal flow
Summing point – compares the different signals
Take-off point – the point from which a signal is taken for feedback
Advantages of block diagram representation:
- It is very simple to construct a block diagram, even for a complicated system
- The function of individual elements can be visualized
- Individual and overall performance can be studied
- The overall transfer function can be calculated easily
Disadvantages of block diagram representation:
- It gives no information about the physical construction of the system
- The source of energy is not shown
R(s) – Laplace transform of the reference input r(t)
C(s) – Laplace transform of the controlled output c(t)
E(s) – Laplace transform of the error signal e(t)
B(s) – Laplace transform of the feedback signal b(t)
G(s) – forward-path transfer function
H(s) – feedback-path transfer function
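From these definitions, the signals in the canonical negative-feedback loop are related by
E(s) = R(s) - B(s), B(s) = H(s)C(s) and C(s) = G(s)E(s). Eliminating E(s) and B(s) gives the
closed-loop transfer function C(s)/R(s) = G(s) / [1 + G(s)H(s)], which is the result that the
reduction rules below reproduce for more complicated diagrams.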
Block diagram reduction technique: Because of their simplicity and versatility, block diagrams
are often used by control engineers to describe all types of systems. A block diagram can be used
simply to represent the composition and interconnection of a system. Also, it can be used,
together with transfer functions, to represent the cause-and-effect relationships throughout the
system. The transfer function is defined as the ratio of the Laplace transform of the output
signal to the Laplace transform of the input signal of a device, assuming zero initial conditions.
(Figures omitted: the block diagram reduction rules, including moving the pickup point ahead of
a block, and the corresponding transfer functions.)
Example
Consider the block diagram shown in the following figure. Let us simplify (reduce) this block
diagram using the block diagram reduction rules.
Step 1 − Use Rule 1 for blocks G1 and G2. Use Rule 2 for blocks G3 and G4. The modified block
diagram is shown in the following figure.
Step 2 − Use Rule 3 for blocks G1G2 and H1. Use Rule 4 for shifting the take-off point after block
G5. The modified block diagram is shown in the following figure
Step 3 − Use Rule 1 for blocks (G3+G4) and G5. The modified block diagram is shown in the
following figure
Step 4 − Use Rule 3 for blocks (G3+G4)G5 and H3. The modified block diagram is shown in the
following figure.
Step 5 − Use Rule 1 for blocks connected in series. The modified block diagram is shown in the
following figure.
Step 6 − Use Rule 3 for blocks connected in a feedback loop. The modified block diagram is
shown in the following figure. This is the simplified block diagram.
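Since the figures for this example are omitted, the following symbolic sketch shows the three basic reduction rules used in the steps above, applied to a simple single-loop diagram; the block symbols are generic placeholders rather than the specific blocks of the example.

```python
import sympy as sp

G1, G2, H = sp.symbols('G1 G2 H')

series = lambda a, b: a * b            # Rule 1: blocks in cascade
parallel = lambda a, b: a + b          # Rule 2: blocks in parallel
feedback = lambda g, h: g / (1 + g*h)  # Rule 3: negative-feedback loop

# e.g. G1 and G2 in series, enclosed in a loop with feedback element H:
T = feedback(series(G1, G2), H)
print(sp.simplify(T))                  # G1*G2/(G1*G2*H + 1)
```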
Frequency response
The concept of frequency response refers to the behaviour of a system in response to different
frequencies of input signals. It provides information about how a system attenuates or amplifies
different frequencies and how it introduces phase shifts to the input signals.
The amplitude response shows how the system's output magnitude changes with respect to
different input frequencies. It indicates whether the system amplifies or attenuates specific
frequencies. The amplitude response is usually represented in decibels (dB) on a logarithmic
scale.
The phase response, on the other hand, represents the phase shift introduced by the system to the
input signal at different frequencies. It shows how the output signal is shifted in time compared
to the input signal. The phase response is typically represented in degrees or radians.
The frequency response of a system is an essential characteristic that helps in understanding its
behaviour and performance. It allows engineers to analyze and design systems for specific
66
frequency ranges, such as audio systems, communication systems, and control systems. By
examining the frequency response, engineers can determine the system's stability, gain margin,
phase margin, and overall performance in different frequency regions.
In summary, the frequency response of a system describes how the system responds to different
frequencies of input signals, providing information about its gain and phase characteristics. It is a
crucial tool for analyzing and designing systems in various fields of engineering.
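As a small numeric illustration, the snippet below evaluates a hypothetical first-order transfer function G(s) = 1/(tau*s + 1) at a few input frequencies to obtain its gain and phase:

```python
import numpy as np

tau = 0.5                              # time constant (s), hypothetical
w = np.array([0.1, 2.0, 20.0])         # input frequencies (rad/s)
G = 1.0 / (1j * w * tau + 1.0)         # evaluate G(jw)

gain_db = 20 * np.log10(np.abs(G))     # amplitude response in decibels
phase_deg = np.degrees(np.angle(G))    # phase response in degrees

for wi, g, p in zip(w, gain_db, phase_deg):
    print(f"w = {wi:5.1f} rad/s: gain = {g:6.2f} dB, phase = {p:7.2f} deg")
# At w = 1/tau = 2 rad/s the gain is -3 dB and the phase lag is 45 degrees.
```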
1. Nyquist Diagrams: A Nyquist diagram plots the complex values of the transfer function G(jω)
in the complex plane as the frequency ω is varied: the real part on the horizontal axis and the
imaginary part on the vertical axis, with frequency appearing as a parameter along the curve. The
plot therefore conveys both the magnitude and the phase of the transfer function at each frequency.
The Nyquist diagram provides valuable insights into the stability of a system. It allows engineers
to determine whether a closed-loop system is stable by examining the encirclement of the critical
point (-1, 0) in the complex plane. For a system that is stable in open loop, if the Nyquist plot
does not encircle the critical point, the closed-loop system is stable; if it encircles the
critical point, the system is unstable.
2. Bode Plots: Bode plots are graphical representations of the frequency response of a system in
the amplitude and phase domains. They provide a clear and intuitive visualization of a system's
gain and phase characteristics.
A Bode plot consists of two separate plots: the amplitude plot and the phase plot. The amplitude
plot shows the gain (magnitude) of the system's transfer function in decibels (dB) as a function
of frequency. The phase plot shows the phase shift introduced by the system to the input signal in
degrees or radians as a function of frequency.
In a Bode plot, the frequency is represented logarithmically on the horizontal axis, while the gain
and phase are plotted on separate vertical axes. The Bode plot allows engineers to easily analyze
the gain margin, phase margin, bandwidth, and resonance frequency of a system.
Bode plots are widely used in control system analysis and design. They provide valuable
information about the system's stability, frequency response, and overall performance. Engineers
can use Bode plots to optimize system parameters and ensure desired system behaviour.
In summary, Nyquist diagrams and Bode plots are graphical representations of a system's
frequency response. The Nyquist diagram shows the complex values of the transfer function in
the complex plane, providing insights into system stability. Bode plots display the gain and phase
characteristics of the system, aiding in the analysis and design of control systems.
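A sketch of how these frequency-response plots might be generated numerically is given below, using SciPy on a hypothetical second-order transfer function:

```python
import numpy as np
from scipy import signal

# Hypothetical plant: G(s) = 16 / (s^2 + 4s + 16), i.e. wn = 4 rad/s, zeta = 0.5
sys = signal.TransferFunction([16.0], [1.0, 4.0, 16.0])

w = np.logspace(-1, 2, 200)                 # log-spaced frequencies (rad/s)
w, mag_db, phase_deg = signal.bode(sys, w)  # Bode data: gain (dB), phase (deg)
_, H = signal.freqresp(sys, w)              # complex G(jw) for a Nyquist plot

print(round(mag_db[0], 2), round(phase_deg[0], 1))  # ~0 dB, small lag at low w
# Plotting mag_db and phase_deg against w (log axis) gives the Bode plot;
# plotting H.real against H.imag traces the Nyquist curve, whose position
# relative to the point (-1, 0) indicates closed-loop stability.
```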
The general principle of determining the time and frequency response of a system involves using
specific input signals, such as step and ramp signals, and analyzing the system's output.
1. Step Response: The step response of a system is obtained by applying a step input signal to the
system and observing the output. A step input signal is a sudden change from one value to
another, typically from zero to a non-zero value. When a step input is applied, the system's
output initially exhibits a transient response, followed by a steady-state response.
By analyzing the step response, we can determine various characteristics of the system, such as
rise time, settling time, overshoot, and steady-state error. These parameters provide insights into
the system's dynamic behaviour and performance.
2. Ramp Response: The ramp response of a system is obtained by applying a ramp input signal to
the system and observing the output. A ramp input signal is a linearly increasing or decreasing
signal with time. When a ramp input is applied, the system's output also exhibits a transient
response and a steady-state response.
Analyzing the ramp response allows us to determine the system's gain and phase characteristics
in the frequency domain. The slope of the ramp response curve provides information about the
system's gain, while the phase shift between the input and output signals indicates the system's
phase response.
By performing a Fourier analysis on the ramp response, we can obtain the system's frequency
response, which describes how the system attenuates or amplifies different frequencies and
introduces phase shifts.
In summary, the general principle of determining the time and frequency response of a system
involves applying specific input signals, such as step and ramp signals, and analyzing the
system's output. The step response helps in understanding the system's transient and steady-state
behaviour, while the ramp response provides insights into the system's gain and phase
characteristics, leading to the determination of the system's frequency response.
The response of first and second-order systems can be determined using different methods
depending on the input signal. Here are the general approaches for finding the response of first
and second-order systems:
1. First-Order Systems:
- Step Response: For a first-order system, the step response can be obtained by using the
transfer function of the system and applying the Laplace transform. The Laplace transform of the
step function is 1/s, where 's' is the complex frequency variable. By substituting this into the
transfer function, you can solve for the output response in the Laplace domain. Then, by taking
the inverse Laplace transform, you can obtain the time-domain response of the system.
- Ramp Response: Similar to the step response, the ramp response of a first-order system can
be obtained by applying the Laplace transform to the transfer function and substituting the
Laplace transform of the ramp function (1/s^2) into it. By taking the inverse Laplace transform,
you can obtain the time-domain response.
2. Second-Order Systems:
- Step Response: The step response of a second-order system can be determined by using the
transfer function and applying the Laplace transform. The transfer function typically takes the
form of a second-order polynomial in the Laplace domain. By substituting the Laplace transform
of the step function (1/s) into the transfer function, you can solve for the output response in the
Laplace domain. Then, by taking the inverse Laplace transform, you can obtain the time-domain
response of the system.
- Ramp Response: Similar to the step response, the ramp response of a second-order system can
be obtained by applying the Laplace transform to the transfer function and substituting the
Laplace transform of the ramp function (1/s^2) into it. By taking the inverse Laplace transform,
you can obtain the time-domain response.
It's important to note that the specific form of the transfer function for first and second-order
systems will vary depending on the system's characteristics, such as damping ratio and natural
frequency. The exact calculations for the response will depend on the specific transfer function
of the system.
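The Laplace procedure described above can be carried out symbolically; the sketch below uses SymPy to recover the step responses of a generic first-order system and of a second-order system whose numeric parameters are purely illustrative.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
tau = sp.symbols('tau', positive=True)

# First-order G(s) = 1/(tau*s + 1); the unit step input has transform 1/s
G1 = 1 / (tau * s + 1)
y1 = sp.inverse_laplace_transform(G1 / s, s, t)
print(sp.simplify(y1))                 # 1 - exp(-t/tau)

# Second-order G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2) with zeta = 0.5, wn = 4
G2 = 16 / (s**2 + 4 * s + 16)
y2 = sp.inverse_laplace_transform(G2 / s, s, t)
print(sp.simplify(y2))                 # decaying oscillation settling at 1
```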
The response of a second-order system can be characterized by several key parameters that
provide insights into its behaviour. Here are the commonly used parameters:
1. Rise Time: The rise time is the time taken for the system's response to rise from a specified
initial value to a specified final value. It is often measured as the time it takes for the response to
go from 10% to 90% of its final value.
2. Peak Time: The peak time is the time taken for the system's response to reach its maximum
peak value. It represents the time it takes for the response to reach the highest point before
settling down.
3. Overshoot: The overshoot is the maximum percentage or absolute value by which the response
exceeds its final steady-state value. It indicates the extent of the system's transient response
beyond the desired steady-state value. Overshoot is usually expressed as a percentage of the final
value.
4. Settling Time: The settling time is the time taken for the response to reach and stay within a
certain percentage (usually 5%) of the final steady-state value. It represents the time required for
the system's response to stabilize within an acceptable range.
5. Damping Ratio: The damping ratio (denoted by ζ) is a dimensionless parameter that relates to
the amount of damping in the system. It determines the shape of the response curve and affects
the system's stability. A higher damping ratio leads to a faster settling time but may result in a
slower response.
6. Natural Frequency: The natural frequency (denoted by ωn) is a measure of the system's
inherent oscillation rate. It represents the frequency at which the system tends to oscillate in the
absence of any external disturbances. The natural frequency is inversely proportional to the
settling time, with higher natural frequencies leading to faster responses.
These parameters provide valuable information about the transient and steady-state behaviour of
a second-order system. By analyzing these parameters, engineers can assess system performance,
stability, and the degree of overshoot or undershoot in the response.
It's worth noting that these parameters are interconnected, and changes in one parameter can
affect others. The specific values of these parameters depend on the characteristics of the system,
such as the damping ratio and natural frequency.
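For the standard underdamped second-order model these parameters have well-known closed-form expressions in terms of the damping ratio and natural frequency; the sketch below collects them, using the 0-100 % rise-time definition and a 5 % settling band (the example values are illustrative).

```python
import math

def step_response_metrics(zeta, wn, band=0.05):
    """Closed-form step-response parameters for an underdamped (0 < zeta < 1)
    second-order system with natural frequency wn (rad/s). Uses the 0-100 %
    rise-time definition (the 10-90 % version has no simple closed form)."""
    wd = wn * math.sqrt(1 - zeta**2)               # damped natural frequency
    rise_time = (math.pi - math.acos(zeta)) / wd   # first crossing of final value
    peak_time = math.pi / wd                       # time of maximum overshoot
    overshoot_pct = 100 * math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))
    settling_time = -math.log(band) / (zeta * wn)  # ~3/(zeta*wn) for a 5% band
    return rise_time, peak_time, overshoot_pct, settling_time

tr, tp, Mp, ts = step_response_metrics(zeta=0.5, wn=4.0)
print(f"tr={tr:.3f}s  tp={tp:.3f}s  Mp={Mp:.1f}%  ts={ts:.3f}s")
# tr=0.605s  tp=0.907s  Mp=16.3%  ts=1.498s
```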