Mee 313 (Lectures 1-4, 6-7) My Note
Measurement:-
Measurement is used to tell us the length, the weight or the temperature of a body. A measurement of any of these physical quantities is the result of an opinion formed by one or more observers about the relative size or intensity of the quantity concerned.
Definition:
The word measurement is used to tell us the length, the weight, the temperature, the colour or a
change in one of these physical entities of a material. Measurement provides us with means for
describing the various physical and chemical parameters of materials in quantitative terms. For
example 10 cm length of an object implies that the object is 10 times as large as 1 cm; the unit
employed in expressing length.
There are two requirements which must be satisfied to get a good result from a measurement:
1. The standard used for comparison must be accurately known and internationally accepted.
2. The apparatus and experimental procedure adopted for the comparison must be provable (capable of being verified).
Instrumentation
Definition:
The human senses cannot provide exact quantitative information about events occurring in our environment. The stringent requirements of precise and accurate measurements in
the technological fields have, therefore, led to the development of mechanical aids called
instruments.
Or
Definition: the technology of using instruments to measure and control physical and chemical
properties of materials is called instrumentation.
If measuring and controlling instruments are combined so that measurements provide impulses for remote automatic action, the result is called a control system.
Uses:
-> to study the functioning of different components and determine the cause of malfunctioning of the system, and to formulate certain empirical relations.
-> to test a product or materials for quality control.
-> to discover defective components.
-> to develop new theories.
-> to monitor data in the interest of health and safety.
Ex:- weather forecasting, i.e., predicting the weather in advance.
Methods of measurement:-
1. Direct and indirect measurement.
2. Primary and secondary & tertiary measurement.
3. Contact and non-contact type of measurement.
Direct measurement:
The value of the physical parameter is determined by comparing it directly with a standard. Physical quantities like mass, length and time are measured by direct measurement.
Indirect measurement:
The value of the physical parameter is more generally determined by indirect comparison with secondary standards through calibration. The measured quantity is converted into an analogous signal which is subsequently processed and fed to the end device, which presents the result of the measurement.
Objectives of instrumentation:-
1. The major objective of instrumentation is to measure and control the field parameters to
increase safety and efficiency of the process.
2. To achieve good quality.
3. To achieve automation and automatic control of processes, thereby reducing human involvement.
4. To maintain the operation of the plant within its design expectations and to achieve a good quality product.
The principal functions of an instrument are the acquisition of information by sensing and perception, the processing of that information, and its final presentation to a human observer. For the purpose of analysis and synthesis, instruments are considered as systems or assemblies of interconnected components organized to perform a specified function. The different components are called elements.
3) MANIPULATION ELEMENT:
It modifies the direct signal by amplification, filtering, etc., so that a desired output is produced.
Output = Input × constant
4) DATA PRESENTATION ELEMENT:
An element that provides a record or indication of the output from the data processing element. In a measuring system using electrical instrumentation, an exciter and an amplifier are also incorporated into the circuit.
The display unit may be required to serve the following functions:
Transmitting
Signaling
Registering
Indicating
Recording
The generalized measurement system is classified into 3 stages:
a) Input Stage
b) Intermediate Stage
i. Signal Amplifications
ii. Signal Filtration
iii. Signal Modification
iv. Data Transmission
c) Output Stage
a) Input Stage:
The input stage (detector-transducer) is acted upon by the input signal (the variable to be measured) such as length, pressure, temperature, angle, etc., and transforms this signal into some other physical form. When the dimensional units of the input and output signals are the same, this functional element/stage is referred to as a transformer.
b) Intermediate Stage:
i. Signal amplification: to increase the power or amplitude of the signal without affecting its waveform. The output from the detector-transducer element is generally too small to operate an indicator or a recorder, and its amplification is necessary. Depending upon the type of transducer signal, the amplification device may be of mechanical, hydraulic/pneumatic, optical or electrical type.
ii. Signal filtration: to extract the desired information from extraneous data. Signal filtration removes the unwanted noise signals that tend to obscure the transducer signal. Depending upon the nature of the signal and the situation, one may use mechanical, pneumatic or electrical filters.
iii. Signal modification: to provide a digital signal from an analog signal or vice versa, or to change the form of the output from voltage to frequency or from voltage to current.
iv. Data transmission: to telemeter the data for remote reading and recording.
c) Output Stage:
This constitutes the data display, record or control. The data presentation stage collects the output
from the signal-conditioning element and presents the same to be read or seen and noted by the
experimenter for analysis. This element may be of:-
visual display type such as the height of liquid in a manometer or the position of pointer on
a scale
numerical readout on an electrical instrument
Graphic record on some kind of paper chart or a magnetic tape. Example: Dial indicator
CLASSIFICATION OF INSTRUMENTS:-
1) Automatic and Manual instruments:
2) Self-generating and power operated
3) Self-contained and remote indicating instruments
4) Deflection and null type
5) Analog and digital types
6) Contact and non-contact type
Based upon the service rendered, instruments may also be classified as indicating instruments, recording instruments and controlling instruments.
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT:-
i) Desired input:
A quantity that the instrument is specifically intended to measure. The desired input 𝑖𝐷 produces an
output component according to an input-output relation symbolized by 𝐺𝐷; here 𝐺𝐷 represents the
mathematical operation necessary to obtain the output from the input.
Example:
Consider a differential manometer which consists of a U-tube filled with mercury, with its ends connected to the two points between which the pressure differential is to be measured. The pressure differential p1 - p2 is worked out from the hydrostatic (equilibrium) equation:
(p1 - p2) = g h (ρm - ρf)
where ρm and ρf are the mass densities of mercury and the fluid respectively, and h is the scale reading. If the fluid flowing in the pipeline is a gas, then ρf << ρm and accordingly the above identity can be rewritten as
(p1 - p2) = g h ρm
Here the differential pressure p1 - p2 is the desired input, the scale reading h is the output, and ρm is the parameter which relates the output and the input.
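A minimal numerical sketch of this relation, assuming SI units and purely illustrative values for the scale reading:

```python
# Differential manometer on a gas line: (p1 - p2) = g * h * rho_m
# Illustrative values only; rho_m is the density of mercury.
g = 9.81            # m/s^2
rho_m = 13_600      # kg/m^3, mercury
h = 0.050           # m, observed scale reading (50 mm, assumed)

dp = g * h * rho_m  # Pa
print(f"Indicated pressure differential: {dp:.0f} Pa ({dp/1000:.2f} kPa)")
```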
If the manometer is mounted on a wheeled platform which is subjected to acceleration, the scale indicates a reading even though the pressures p1 and p2 at the two ends are equal; the acceleration thus constitutes an interfering input. Similarly, if the manometer has an angular tilt, i.e., is not properly aligned with the direction of the gravitational force, an output will result even when there is no pressure difference; here the angular tilt acts as the interfering input. The scale factor establishes the input-output relation, and this gets modified due to modifying inputs such as a change in ambient temperature, which alters the density of mercury and the length of the scale.
Performance characteristics of a measuring instrument:-
1. Static characteristics
2. Dynamic characteristics
The performance characteristics of an instrument system are determined by how accurately the system measures the desired input and how completely it rejects the undesirable inputs.
Error = measured value (Vm) - true value (Vt);  Correction = Vt - Vm.
1. Static characteristics:
a) Range and span, b) Accuracy, error and correction, c) Calibration, d) Repeatability, e) Reproducibility, f) Precision, g) Sensitivity, h) Threshold, i) Resolution, j) Drift, k) Hysteresis, l) Dead zone.
a) Range and span:
Ex:-
Range: -10 °C to 80 °C, Span = 90 °C
Range: 5 bar to 100 bar, Span = 100 - 5 = 95 bar
Range: 0 V to 75 V, Span = 75 V
b) Accuracy, error and correction:
Static error is defined as Es = Vm - Vt.
Static correction is defined as Cs = Vt - Vm.
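A small illustrative sketch of span, static error and static correction; all numerical values are assumed for illustration:

```python
# Span, static error and static correction for a hypothetical thermometer.
range_low, range_high = -10.0, 80.0      # °C
span = range_high - range_low            # 90 °C

v_true = 50.0       # °C, true value (assumed)
v_measured = 50.8   # °C, instrument reading (assumed)

static_error = v_measured - v_true       # Es = Vm - Vt
static_correction = v_true - v_measured  # Cs = Vt - Vm

print(f"Span = {span} °C")
print(f"Static error Es = {static_error:+.1f} °C")
print(f"Static correction Cs = {static_correction:+.1f} °C")
```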
c) Calibration:
The magnitude of the error and consequently the correction to be applied is determined by making a
periodic comparison of the instrument with standards which are known to be constant. The entire
procedure laid down for making, adjusting or checking a scale so that readings of an instrument or
measurement system conform to an accepted standard is called calibration. The graphical
representation of the calibration record is called calibration curve and this curve relates standard
values of input or measurand to actual values of output throughout the operating range of the
instrument. A comparison of the instrument reading may be made with
(i) a primary standard,
(ii) a secondary standard of accuracy greater than the instrument to be calibrated,
(iii) a known input source.
The following points and observations need consideration while calibrating an instrument:-
(a) Calibration of the instrument is carried out with the instrument in the same position (upright, horizontal
etc.) and subjected to the same temperature and other environmental conditions under which it is
operated while in service.
(b) The instrument is calibrated with values of the measurand impressed both in the increasing and
in the decreasing order. The results are then expressed graphically; typically the output is plotted as
the ordinate and the input or measurand as the abscissa.
(c) Output readings for a series of impressed values going up the scale may not agree with the
output readings for the same input values when going down.
(d) Lines or curves plotted in the graphs may not close to form a loop.
d) Repeatability:
Repeatability describes the closeness of the output readings when the same input is applied
repeatedly over a short period of time with the same measurement conditions, same instrument
and observer, same location and same conditions of use maintained throughout.
e) Reproducibility: Reproducibility describes the closeness of output readings for the same
input when there are changes in the method of measurement, observer, measuring instrument,
location, conditions of use and time of measurement.
f) Precision:
The ability of an instrument to reproduce a certain group of readings with a given accuracy is known as precision; i.e., if a number of measurements are made of the same true value, the degree of closeness of these measurements is called precision. It refers to the ability of an instrument to give the same reading again and again for a constant input signal.
g) Sensitivity:
Sensitivity of an instrument is the ratio of the magnitude of the response (output signal) to the magnitude of the quantity being measured (input signal), i.e.
Static sensitivity = change of output signal / change of input signal
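A small sketch of computing static sensitivity from a pair of calibration points; the transducer and its readings are assumptions for illustration:

```python
# Static sensitivity = change in output / change in input.
# Example: a pressure transducer whose output voltage is read at two inputs.
input_change = 20.0      # kPa, e.g. input raised from 100 kPa to 120 kPa (assumed)
output_change = 0.50     # V, corresponding change in output (assumed)

static_sensitivity = output_change / input_change
print(f"Static sensitivity = {static_sensitivity} V/kPa")
```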
h) Threshold:
Threshold defines the minimum value of input which is necessary to cause detectable change from
zero output. When the input to an instrument is gradually increased from zero, the input must reach a certain minimum value before the change in the output can be detected. This minimum value of the input is referred to as the threshold.
i) Resolution:
Resolution is the smallest increment in the input for which there is a detectable change in the output; i.e., when the input to the instrument is slowly increased from some non-zero value, the output remains the same until the increment exceeds a definite value, called the resolution.
j) Drift:
The slow variation of the output signal of a measuring instrument with time is known as drift. The variation of the output signal is not due to any change in the input quantity, but to changes in the working conditions of the components inside the measuring instrument.
2. Dynamic characteristics:
a) Speed of response and measuring lag, b) Fidelity and dynamic error, c) Over shoot, d) Dead time
and dead zone, e) Frequency response.
c) Over shoot:
Because of the inertia of a moving part, i.e., the pointer of the instrument, the pointer does not immediately come to rest in its final deflected position; it goes beyond the steady state, i.e., it overshoots. Overshoot is defined as the maximum amount by which the pointer moves beyond the steady-state position.
d) dead time and dead zone:
Dead time is defined as the time required for an instrument to begin responding to a change in the
measured quantity. It represents the time before the instrument begins to respond after the measured
quantity has been altered. Dead zone defines the largest change in the measured variable to which the
instrument does not respond. Dead zone is the result of friction and backlash in the instrument.
e) Frequency response:
(The dynamic performance of both measuring and control systems is determined by applying some known and predetermined input signal to the primary sensing element and then studying the output.) Frequency response is the maximum frequency of the measured variable that an instrument is capable of following without error. The usual requirement is that the frequency of the measured variable should not exceed 60% of the natural frequency of the measuring instrument.
The most common standard inputs used for dynamic analysis are:
i. Step functions
ii. Linear (or) ramp functions
iii. Sinusoidal (or) sine wave functions
i. Step function:
A step function is a sudden change from one steady value to another. The step input may be represented mathematically as x(t) = 0 for t < 0 and x(t) = A for t ≥ 0, where A is the size of the step.
iii. Sinusoidal (sine wave) function:
Here the input has a cyclic variation; the input varies sinusoidally with constant amplitude. Mathematically it may be represented as x(t) = A sin(ωt), where A is the amplitude and ω the circular frequency.
Zero, first and second order systems:-
First Order Systems:
The behaviour of a first order system is represented by a first order differential equation of the form
τ (dy/dt) + y = K x(t)
where y is the output, x(t) is the input, τ is the time constant and K is the static sensitivity of the system.
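A brief sketch, assuming the standard first-order form given above, of the response of such a system to a step input; it shows the familiar exponential rise governed by the time constant:

```python
import math

# First-order system: tau * dy/dt + y = K * x(t), with a step input of amplitude A.
# Analytical step response: y(t) = K * A * (1 - exp(-t / tau)).
tau = 2.0   # s, time constant (assumed)
K = 1.0     # static sensitivity (assumed)
A = 10.0    # step amplitude (assumed)

for t in [0.0, tau, 2 * tau, 3 * tau, 5 * tau]:
    y = K * A * (1.0 - math.exp(-t / tau))
    print(f"t = {t:4.1f} s  ->  y = {y:6.3f}  ({100 * y / (K * A):5.1f}% of final value)")
```

At t = τ the output has reached about 63.2% of its final value, which is the usual way the time constant is read off a recorded response.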
Sources of error:
1. Calibration of Instrument
2. Instrument reproducibility
3. Measuring arrangement
4. Work piece
5. Environmental condition
6. Observer's skill
1. Calibration of Instrument:
For any instrument, calibration is necessary before starting the process of measurement. When the instrument is used frequently or for a long time, its calibration may get disturbed. An instrument which has gone out of calibration cannot give the actual value of the measurand; therefore the output produced by such an instrument is in error. The error due to improper calibration of the instrument is known as a systematic instrumental error, and it occurs regularly. This error can be eliminated by properly calibrating the instrument at frequent intervals.
2. Instrument reproducibility:
Though an instrument may be calibrated perfectly under one set of conditions, the output produced by that instrument contains error if the instrument is used under a set of conditions which are not identical to the conditions existing during calibration; i.e., the instrument should be used under the same set of conditions at which it was calibrated. This type of error may occur systematically or accidentally.
3. Measuring arrangement:
The process of measurement itself acts as a source of error if the arrangement of different
components of a measuring instrument is not proper.
Example: While measuring length, Abbe's alignment (comparator) principle should be followed. According to this, the true value of a length is obtained when the axis of the measuring instrument and the axis of the scale are collinear; any misalignment of these will give an erroneous value. Hence this type of error can be eliminated by proper arrangement of the measuring instrument.
4. Work piece:
The physical nature of the object (work piece), i.e., the roughness, softness and hardness of the object, acts as a source of error. Many optomechanical and mechanical types of instruments contact the object under certain fixed pressure conditions. Since the response of soft and hard objects under these fixed conditions is different, the output of the measurement will be in error.
5. Environmental condition:
Changes in the environmental conditions are also a major source of error. Environmental conditions such as temperature, humidity, pressure, and magnetic or electrostatic fields surrounding the instrument may affect the instrument characteristics. Due to this, the result produced by the measurement may contain error. These errors are undesirable and can be reduced in the ways listed under environmental errors below.
6. Observer's skill:
It is a well-known fact that the output of a measurement of a physical quantity differs from operator to operator, and sometimes even for the same operator the result may vary with his mental and physical state. One example of an error produced by the operator is the parallax error in reading a meter scale. To minimize parallax errors, modern electrical instruments have a digital display of the output.
1. Gross errors:
This class of errors mainly covers human mistakes in reading instruments and in recording and calculating measurement results. The responsibility for the mistake normally lies with the experimenter.
Ex: The temperature is 31.5 °C, but it may be recorded as 21.5 °C. Such errors can be avoided by adopting two means: (i) taking great care while reading and recording the data, and (ii) taking two, three or even more readings of the quantity, preferably by different persons.
2. Systematic errors:
These types of errors are divided into three categories.
a. Instrumental errors
b. Environmental errors
c. Observational errors
a. Instrumental errors:
These errors occur due to three main reasons.
a. Due to inherent shortcomings of the instrument
b. Due to misuse of instruments
c. Due to loading effects of instruments.
b. Environmental errors:
These errors are caused by changes in the environmental conditions in the area surrounding the instrument that may affect the instrument characteristics, such as the effects of changes in temperature, humidity, barometric pressure, or magnetic or electrostatic fields. These undesirable errors can be reduced in the following ways:
(i) Arrangement must be made to keep the conditions approximately constant.
(ii) Hermetically sealing certain components in the instrument, which eliminates the effects of humidity, dust, etc.
(iii) Magnetic or electrostatic shields must be provided.
c. Observational errors:
These errors are produced by the experimenter. The most frequent error is the parallax error
introduced in reading a meter scale.
These errors are caused by the habits of individual observers. To minimize parallax errors modern
electrical instruments have digital display of output.
LECTURE – 2
Measurement of pressure
Pressure definition:-
The action of a force against some opposing force.
OR
A force in the nature of thrust distributed over a surface.
OR
The force acting against a surface within a closed container.
Units:-
Some of the commonly used pressure units are:
1 bar = 10⁵ N/m² = 1.0197 kgf/cm² = 750.06 mm of Hg.  1 micron = 1 μ = 10⁻³ mm of Hg.
1 torr = 1 mm of Hg.  1 μbar = 1 dyne/cm².  1 Pa = 1 N/m².
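A small sketch of converting a reading between these pressure units; the conversion factors follow directly from the definitions above:

```python
# Convert a pressure reading between common units.
PA_PER_BAR = 1e5           # 1 bar = 10^5 N/m^2 (Pa)
PA_PER_TORR = 133.322      # 1 torr = 1 mm Hg ≈ 133.322 Pa
PA_PER_KGF_CM2 = 98_066.5  # 1 kgf/cm^2 in Pa

p_bar = 1.0
p_pa = p_bar * PA_PER_BAR
print(f"{p_bar} bar = {p_pa:.0f} Pa")
print(f"{p_bar} bar = {p_pa / PA_PER_TORR:.2f} mm Hg (torr)")
print(f"{p_bar} bar = {p_pa / PA_PER_KGF_CM2:.4f} kgf/cm^2")
```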
Terminology:-
Following terms are generally associated with pressure and its measurement.
Mercury has a low vapour pressure (≈1.6 × 10−6bar at 20 °C) and thus for all intents and purposes
it can be neglected in comparison to 𝑝𝑎𝑡 which is about 1.0 bar at mean sea level. Then
𝑝𝑎𝑡 = 𝜌𝑔ℎ ........................... (ii)
Atmospheric pressure varies with altitude, because the air nearer the earth's surface is compressed
by air above. At sea level, value of atmospheric pressure is close to 1.01325 bar or 760 mm of Hg
column (= 10.33 m of water column).
Instruments and gauges used to measure fluid pressure generally measure the difference between the unknown pressure p and the existing atmospheric pressure Patm. When the unknown pressure is more than the atmospheric pressure, the pressure recorded by the instrument is called gauge pressure (Pg). A pressure reading below the atmospheric pressure is known as vacuum pressure or negative pressure. The actual absolute pressure is the sum of the gauge pressure indication and the atmospheric pressure:
Pabs = Pg + Pat
Pabs = Pat - Pvac (for vacuum or negative readings)
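A tiny sketch of these two relations; the gauge and vacuum readings are assumed values:

```python
# Absolute pressure from gauge and vacuum readings (all values assumed).
p_atm = 1.01325  # bar, local atmospheric pressure

p_gauge = 2.50   # bar, reading above atmospheric
p_abs_from_gauge = p_atm + p_gauge    # P_abs = P_at + P_g

p_vac = 0.30     # bar, reading below atmospheric
p_abs_from_vacuum = p_atm - p_vac     # P_abs = P_at - P_vac

print(f"Absolute pressure (gauge case):  {p_abs_from_gauge:.3f} bar")
print(f"Absolute pressure (vacuum case): {p_abs_from_vacuum:.3f} bar")
```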
STATIC PRESSURE (𝑷𝒔) AND TOTAL PRESSURE (𝑷𝒕):-
Static pressure is defined as the force per unit area acting on a wall by a fluid at rest or flowing
parallel to the wall in a pipe line.
Static pressure of a moving fluid is measured with an instrument which is at rest relative to the
fluid. The instrument should theoretically move with same speed as that of the fluid particle itself.
As it is not possible to move a pressure transducer along with a flowing fluid, static pressure is measured by inserting a tube into the pipeline at right angles to the flow path. Care is taken to
ensure that the tube does not protrude into the pipe line and cause errors due to impact and eddy
formation. When the tube protrudes into the stream, there would be local speeding up of the flow
due to its deflection around the tube; hence an erroneous reading of the static pressure would be
observed.
DEAD WEIGHT TESTER:-
A typical dead-weight gauge tester is schematically shown in Fig. It consists of an accurately machined, bored and finished piston which is inserted into a close-fitting cylinder, both of known cross-sectional areas. A platform is attached to the top of the piston and serves to hold standard weights of known accuracy. The chamber and the cylinder are filled with clean oil, the oil being supplied from an oil reservoir provided with a check valve at its bottom. Oil is withdrawn from the reservoir when the pump plunger executes an outward stroke, and is forced into the space below the piston during the inward stroke of the plunger. For calibrating a gauge, an appropriate amount of weight is placed on the platform and the fluid pressure is applied until enough upward force is developed to lift the piston-weight combination. When this occurs, the piston-weight combination floats freely within the cylinder.
Under the equilibrium condition the pressure force is balanced against the gravity force on the mass m of the calibrated masses plus the piston and platform, and a small frictional force. If A is the equivalent area of the piston-cylinder combination, then (neglecting friction):
p = m g / A
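A short sketch of this force balance during calibration; the piston size and masses are assumed for illustration:

```python
import math

# Dead-weight tester: pressure balanced by weights on a piston of known area.
g = 9.81                     # m/s^2
piston_diameter = 0.010      # m (assumed)
area = math.pi / 4 * piston_diameter ** 2  # m^2

mass = 5.0                   # kg: calibrated masses + piston + platform (assumed)
pressure = mass * g / area   # Pa, neglecting friction

print(f"Effective area = {area * 1e6:.2f} mm^2")
print(f"Balanced pressure = {pressure / 1e5:.3f} bar")
```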
Manometers:-
Manometers measure pressure by balancing a column of liquid against the pressure to be measured; the height of the column so balanced is noted and then converted to the desired units. Manometers may be vertical, inclined, open, differential or compound. The choice of type depends on its sensitivity, ease of operation and the magnitude of the pressure being measured. Manometers can be used to measure gauge, differential, atmospheric and absolute pressures.
i. Piezometer
ii. U- tube manometer
iii. Single column manometer
i. Piezometer:
It is a vertical transparent glass tube, the upper end of which is open to the atmosphere and the lower end of which is in communication with the gauge point; the point in the fluid container at which the pressure is to be measured. The rise of fluid in the tube above the gauge point is a measure of the pressure at that point.
Fluid pressure at gauge point A = atmospheric pressure pa at the free surface + pressure due to a
liquid column of height ℎ1
𝑝1 = 𝑝𝑎 + 𝑤ℎ1
Where, w is the specific weight of the liquid.
Similarly for the gauge point B, 𝑝2 = 𝑝𝑎 + 𝑤ℎ2
ii. U-tube manometer:
Let h1 = height of the light liquid above the datum line, and h2 = height of the heavy (manometric) liquid above the datum line. For the right limb, the gauge pressure at point 2 is p2 = atmospheric pressure (i.e., zero gauge pressure) at the free surface + pressure due to the head h2 of manometric liquid of specific weight w2, i.e., p2 = 0 + w2 h2. For the left limb, the gauge pressure at point 1 is p1 = gauge pressure px + pressure due to the height h1 of the liquid of specific weight w1, i.e., p1 = px + w1 h1. Points 1 and 2 are in the same horizontal plane, so p1 = p2 and therefore px + w1 h1 = w2 h2. The gauge pressure in the container is:
px = w2 h2 - w1 h1
1. Vertical single column manometer:
To start with, let both limbs of the manometer be exposed to atmospheric pressure. Then the liquid level in the wider limb (also called the reservoir, well or basin) and in the narrow limb will correspond to position 0-0.
ℎ1 = height of center of pipe above 0-0
ℎ2 = rise of heavy liquid in right limb
For the left limb, the gauge pressure at point 1 is: p1 = px + w1 h1 + w1 δh. For the right limb, the gauge pressure at point 2 is: p2 = 0 + w2 h2 + w2 δh. Points 1 and 2 are in the same horizontal plane, so p1 = p2, and therefore the gauge pressure px in the container is:
px = (w2 h2 - w1 h1) + δh (w2 - w1)
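A short numerical sketch of this single-column relation; all readings are assumed, and w = ρg is the specific weight:

```python
# Gauge pressure from a vertical single-column manometer reading.
# p_x = (w2*h2 - w1*h1) + delta_h*(w2 - w1), with specific weights w = rho*g.
g = 9.81
w1 = 1000 * g      # N/m^3, light liquid (water, assumed)
w2 = 13_600 * g    # N/m^3, heavy liquid (mercury, assumed)

h1 = 0.20          # m, height of centre of pipe above 0-0 (assumed)
h2 = 0.15          # m, rise of heavy liquid in the narrow limb (assumed)
delta_h = 0.002    # m, fall of level in the wide limb (assumed)

p_x = (w2 * h2 - w1 * h1) + delta_h * (w2 - w1)
print(f"Gauge pressure p_x = {p_x:.0f} Pa ({p_x / 1000:.2f} kPa)")
```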
Advantages of manometers:
- Relatively inexpensive and easy to fabricate
- good accuracy and sensitivity
- requires little maintenance; are not affected by vibrations
- particularly suitable to low pressures and low differential pressures
- sensitivity can be altered easily by effecting a change in the quantity of manometric liquid in the
manometer
Limitations
- generally large and bulky, fragile and easily broken
- not suitable for recording
- measured medium has to be compatible with the manometric fluid used
- readings are affected by changes in gravity, temperature and altitude
- surface tension of manometric fluid creates a capillary effect and possible hysteresis
- meniscus height has to be determined by accurate means to ensure improved accuracy.
Bellow gauges:
The bellows is a longitudinally expansible and collapsible member consisting of several
convolutions or folds. The general acceptable methods of fabrication are:
(i) turning from a solid stock of metal, (ii) soldering or welding stamped annular rings, (iii) rolling
a tubing, and (iv) hydraulically forming a drawn tube. Material selection is generally based on
considerations like strength or the pressure range, hysteresis and fatigue.
In the differential pressure arrangement (Fig.), two bellows are connected to the ends of an equal-arm lever. If equal pressures are applied to the two bellows, they extend by the same amount; the connecting lever would then move, but no motion would be transmitted to the movement sector. Under a differential pressure, the deflections of the bellows are unequal, and the differential displacement of the connecting lever is indicated by the movement of the pointer on a scale.
Advantages:
1. Simple in construction.
2. Good for low to moderate pressures.
3. Available for gauge, differential and absolute pressure measurements.
4. Moderate cost.
Limitations:
1. Zero shift problems.
2. Needs spring for accurate characterization.
3. Requires compensation for ambient temperature changes.
BOURDON GAUGE:-
The pressure responsive element of a bourdon gauge consists essentially of a metal tube (called the bourdon tube or spring), oval in cross-section and bent to form a circular segment of approximately 200 to 300 degrees. The tube is fixed but open at one end, and it is through this fixed end that the pressure to be measured is applied. The other end is closed but free to move under the deforming action of the pressure difference across the tube walls. When a pressure (greater than atmospheric) is applied to the inside of the tube, its cross-section tends to become circular. This makes the tube straighten itself out with a consequent increase in its radius of curvature, i.e., the free end uncoils and moves outward.
Tip travel:
The motion of the free end, commonly called tip travel, is a function of the tube length, wall thickness, cross-sectional geometry and modulus of elasticity of the tube material. For a bourdon tube, the deflection Δa of the tip depends on the total angle a subtended by the tube before pressurization, the applied pressure difference P, the tube geometry and the modulus of elasticity E of the tube material.
Errors and their rectification: in general 3 types of error are found in bourdon gauges:
(i) Zero error or constant error, which remains constant over the entire pressure range.
(ii) Multiplication error, wherein the gauge may tend to give progressively higher or lower readings.
(iii) Angularity error: quite often it is seen that a one-to-one correspondence does not occur over the scale.
The C-type bourdon tube has a small tip travel, and this necessitates amplification by a lever, quadrant, pinion and pointer arrangement. Increased sensitivity can be obtained by using a very long length of tubing in the form of a helix or a flat spiral, as indicated in Fig.
Materials:
1. For pressures of 100 to 700 kN/m², tubes are made of phosphor bronze.
2. For high pressures, P = 7000 to 63000 kN/m², tubes are made of alloy steel or K-monel.
Advantages:
1. Low cost and simple in construction.
2. Capability to measure gauge, absolute and differential pressures.
3. Availability in several ranges.
Limitations:
1. Slow response.
2. Susceptibility to sharp vibration.
3. Generally requires a geared movement for amplification.
DIAPHRAGM GAUGES:-
In its elementary form, a diaphragm is a thin plate of circular shape clamped firmly around its edges. The diaphragm deflects in accordance with the pressure differential across its sides, the deflection being towards the low-pressure side. The pressure to be measured is applied to the diaphragm, causing it to deflect, the deflection being proportional to the applied pressure. The movement of the diaphragm depends on its thickness and diameter. The pressure-deflection relation for a flat diaphragm with clamped edges is given by
Diaphragm types: The diaphragms can be in the form of flat, corrugated or dished plates; the
choice depending on the strength and amount of deflection desired. Most common types of
diaphragms are shown in Fig.
Diaphragm material, pressure ranges and applications: Metallic diaphragms are generally
fabricated from a full hard, cold-rolled nickel, chromium or iron alloy which can have an elastic
limit up to 560 MN/m². Typical pressure ranges are 0-50 mm water gauge, 0-2800 kN/m² pressure and 0-50 mm water gauge vacuum.
Typical applications are low pressure absolute pressure gauges, draft gauges, liquid level gauges
and many types of recorders and controllers operating in the low range of direct or differential
pressures.
Non-metallic slack diaphragms are made from a variety of materials such as gold beater's skin, animal membranes, impregnated silk cloth and synthetic materials like Teflon, neoprene, polythene, etc.
Advantages:
1. Relatively small size and moderate cost.
2. Capability to withstand high over pressures and maintain good linearity over a wide range.
3. Availability of gauge for absolute and differential pressure measurement.
4. Minimum of hysteresis and no permanent zero shift.
Limitations:-
1. Needs protection from shocks and vibrations.
2. Cannot be used to measure high pressure.
3. Difficult to repair.
1. Thermocouple gauge:
The schematic diagram of a thermocouple-type conductivity gauge is shown in figure (1). The pressure to be measured is admitted to a chamber. A constant current is passed through a thin metal strip in the chamber. Due to this current, the metal strip gets heated and acts as the hot surface. The temperature of this hot surface is sensed by a thermocouple which is attached to the metal strip. The glass tube acts as the cold surface, whose temperature is nearly equal to room temperature. The thermal conductivity of the gas surrounding the strip changes with the applied pressure; this changes the heat lost from the strip and hence its temperature, which is sensed by the thermocouple. The thermocouple produces an output corresponding to this temperature, which is indicated by a milliammeter. The indicated current becomes a measure of the applied pressure when calibrated.
2. Pirani gauge:
The construction of a pirani gauge is shown in fig.
It consists of two identical tubes, a platinum/tungsten wire and a compensating element. A constant current is passed through the platinum wire, which is mounted along the axis of the glass tube. The wire gets heated due to this current and its resistance is measured using a resistance bridge. The gas whose pressure is to be measured is admitted to the glass tube. The thermal conductivity of the gas changes with the applied pressure; this changes the rate of heat loss and hence the temperature of the wire, which in turn causes a change in the resistance of the wire. This change in resistance is measured using the resistance bridge. The other tube present in the gauge is evacuated to a very low pressure and acts as a compensating element to minimize variations caused by ambient temperature changes.
IONIZATION GAUGE:-
The hot filament ionization gauge consists of a heated filament (cathode) to furnish electrons, a grid, and an anode plate. These elements are housed in an envelope which communicates with the vacuum system under test. The grid is maintained at a positive potential of 100-350 V, while the anode plate is maintained at a negative potential of about 3-50 V with respect to the cathode. The plate is thus a positive-ion collector and the grid is an electron collector.
The rate of ion production is proportional to the number of electrons available to ionize the gas and
the amount of gas present. Thus the ratio of the positive-ion current, i.e., the anode (plate) current I1, to the electron current, i.e., the grid current I2, is a measure of the gas pressure p. The following approximate relation holds:
I1 / I2 = S p,  i.e.,  p = (1/S) (I1 / I2)
Where the proportionality constant S is called the sensitivity of the gauge. Sensitivity is a function
of the tube geometry, nature of the gas, and the operating voltages. Its value is determined by
calibration of the particular gauge.
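A tiny sketch of using the relation above once the sensitivity S is known from calibration; all numbers are assumed:

```python
# Ionization gauge: I1/I2 = S * p  ->  p = (1/S) * (I1/I2)
S = 10.0             # 1/torr, gauge sensitivity from calibration (assumed)
i_ion = 2.0e-9       # A, plate (positive-ion) current I1 (assumed)
i_electron = 1.0e-3  # A, grid (electron) current I2 (assumed)

p = (i_ion / i_electron) / S   # torr
print(f"Indicated pressure ≈ {p:.2e} torr")
```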
Advantages:-
1. Wide pressure range 10−3 torr to 10−11 torr.
2. Constant sensitivity.
3. Possibility of process control and remote indication.
4. Fast response to pressure changes.
Limitations:-
1. High cost and complex electrical circuit.
2. Calibration varies with the gas being measured.
3. Filament burns out if exposed to air while hot.
4. Decomposition of some gases by the hot filament.
5. Contamination of the gas due to heat.
MCLEOD GAUGE:-
The unit comprises the system of glass tubes in which a known volume of gas at unknown pressure
is trapped and then isothermally compressed by a rising mercury column. Its operation involves
the following steps:
(i) The plunger is withdrawn and the mercury level is lowered to the cut-off position, thereby admitting gas at the unknown pressure p0 into the system. Let V0 be the volume of the gas admitted into the measuring capillary, the bulb and the tube down to the cut-off point.
(ii) The plunger is pushed in and the mercury level goes up. The plunger motion is continued until the mercury level in the reference capillary reaches the zero mark. The height h is then a measure of the compressed gas volume sealed into the measuring capillary; this height also represents the rise in gas pressure in terms of height of the mercury column.
If a denotes the cross-sectional area of the measuring capillary, then the final volume Vf = a h and the final manometric pressure pf = p0 + h. The unknown pressure is then calculated using Boyle's law as follows:
p0 V0 = pf Vf = (p0 + h) a h,  so that  p0 = a h² / (V0 - a h) ≈ a h² / V0  (since a h << V0).
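A brief numerical sketch of this Boyle's-law compression, working entirely in millimetres so the pressure comes out in mm of mercury; all readings are assumed:

```python
# McLeod gauge: p0 * V0 = (p0 + h) * a * h  ->  p0 = a*h**2 / (V0 - a*h)
V0 = 150_000.0    # mm^3, gas volume trapped at cut-off (assumed)
a = 1.0           # mm^2, cross-sectional area of measuring capillary (assumed)
h = 20.0          # mm, reading of the compressed column (assumed)

p0 = a * h**2 / (V0 - a * h)
print(f"Unknown pressure p0 ≈ {p0:.2e} mm Hg (torr)")
```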
This is one type of manometer. This is used for calibration of pirani and thermocouple gauges.
LECTURE – 3
(i) Hook-type level indicator:
Construction:
The hook-type level indicator consists of a wire of corrosion-resisting alloy (such as stainless steel), about 1/4 in. (6.3 mm) in diameter, bent into a U-shape with one arm longer than the other, as shown in Fig. The shorter arm is pointed with a 60° taper, while the longer one is attached to a slider carrying a vernier scale which moves over the main scale and indicates the level.
Working:
In hook-type level indicator, the hook is pushed below the surface of liquid whose level is to be
measured and gradually raised until the point is just about to break through the surface. It is then
clamped, and the level is read on the scale. This principle is further utilized in the measuring point
manometer in which the measuring point consists of a steel point fixed with the point upwards
underneath the water surface.
(ii) Sight Glass:
A sight glass (also called a gauge glass) is another method of liquid level measurement. It is used
for the continuous indication of liquid level within tank or vessel.
(iii) Float-type:
A float-operated level indicator is used to measure liquid level in a tank in which a float rests on the surface of the liquid and follows the changing level of the liquid. The movement of the float is transmitted to a pointer through a suitable mechanism which indicates the level on a calibrated scale. Various types of floats are used, such as hollow metal spheres, cylindrical-shaped floats and disc-shaped floats.
Figure shows the simplest form of float operated mechanism for the continuous liquid level
measurement. In this case, the movement of the float is transmitted to the pointer by stainless steel
or phosphor-bronze flexible cable wound around a pulley, and the pointer indicates liquid level in
the tank. The float is made of corrosion-resisting material (such as stainless steel) and rests on the liquid surface between two grids to avoid error due to turbulence. With this type of instrument, liquid levels from ½ ft (152 mm) to 60 ft (18.3 m) can be easily measured.
(i) Capacitance level indicator:
The principle of operation of the capacitance level indicator is based upon the familiar capacitance equation of a parallel plate capacitor:
C = K A / D
where K = dielectric constant of the material between the plates, A = area of the plates and D = distance between the plates. It is seen from the above relation that if A and D are constant, the capacitance of a capacitor is directly proportional to the dielectric constant, and this principle is utilized in the capacitance level indicator.
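One simple way to picture how the reading varies with level is a sketch under the assumption that the probe behaves as two parallel-plate capacitors in parallel, the wetted part and the dry part; the geometry and dielectric values below are assumptions:

```python
# Capacitance-type level indication, idealised as two capacitors in parallel:
# the wetted portion (liquid as dielectric) and the dry portion (air).
EPS0 = 8.854e-12   # F/m, permittivity of free space

A_total = 0.05     # m^2, total effective plate area of the probe/tank-wall pair (assumed)
D = 0.01           # m, plate separation (assumed)
K_LIQUID = 25.0    # relative dielectric constant of the liquid (assumed)
K_AIR = 1.0

def capacitance(level_fraction: float) -> float:
    """Capacitance in farads when a given fraction of the probe is submerged."""
    a_wet = A_total * level_fraction
    a_dry = A_total - a_wet
    return EPS0 * (K_LIQUID * a_wet + K_AIR * a_dry) / D

for frac in (0.0, 0.25, 0.5, 1.0):
    print(f"level = {frac:4.0%}  ->  C = {capacitance(frac) * 1e12:7.1f} pF")
```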
Figure shows a capacitance-type liquid level indicator. It consists of an insulated capacitance probe (which is a metal electrode) firmly fixed near, and parallel to, the metal wall of the tank. If the liquid in the tank is non-conductive, the capacitance probe and the tank wall form the plates of a parallel plate capacitor and the liquid in between them acts as the dielectric. If the liquid is conductive, the capacitance probe and the liquid form the plates of the capacitor and the insulation of the probe acts as the dielectric. A capacitance-measuring device is connected between the probe and the tank wall, and is calibrated in terms of the level of liquid in the tank.
(ii) Ultrasonic method:
The ultrasonic liquid level gauge works on the principle of reflection of a sound wave from the surface of the liquid. The schematic arrangement of liquid level measurement by an ultrasonic liquid level gauge is illustrated above.
The transmitter T sends an ultrasonic wave towards the free surface of the liquid. The wave is reflected from the surface, and the reflected wave is received by the receiver R. The time taken by the transmitted wave to travel to the surface of the liquid and back to the receiver gives the level of the liquid. As the level of the liquid changes, the time taken by the wave to reach the surface and return to the receiver also changes. Thus changes in the level of the liquid are determined accurately.
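A minimal sketch of converting a measured round-trip (transit) time into a distance to the liquid surface; it assumes the transducer is mounted at the top of the tank and that the speed of sound in the gas space is known:

```python
# Ultrasonic level gauging: distance to surface = v * t_round_trip / 2.
v_sound = 343.0        # m/s, speed of sound in air above the liquid (assumed)
t_round_trip = 0.0060  # s, measured transit time transmitter -> surface -> receiver (assumed)

sensor_height = 2.50   # m, height of the transducer above the tank bottom (assumed)
distance_to_surface = v_sound * t_round_trip / 2.0
liquid_level = sensor_height - distance_to_surface

print(f"Distance to surface = {distance_to_surface:.3f} m")
print(f"Liquid level        = {liquid_level:.3f} m")
```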
Advantages:-
1. Operating principle is very simple.
2. It can be used for various types of liquids and solid substances.
Disadvantages:-
1. Very expensive.
2. Very experienced and skilled operator is required for measurement.
Magnetic level indicator:
This type is used for measuring toxic and corrosive liquids, i.e., to measure the level of liquids which contain corrosive and toxic materials. It contains a float, in which a magnet is arranged, placed in the chamber whose liquid level is to be determined. The float moves up and down with the increase and decrease in the level of the liquid. A magnetic shielding device and an indicator containing small wafers arranged in series are attached to the sealed chamber. These wafers are coated with luminous paint and can rotate through 180°. As the level changes, the float (along with the magnet) moves up and down. Due to this movement of the magnet, the wafers rotate, presenting the luminous surface as the float rises and the black surface as the float moves in the opposite direction.
Bubbler (purge) level measuring system:
In this technique of level measurement, the air pressure in the pneumatic pipeline is adjusted and maintained slightly greater than the hydrostatic pressure at the lower end of the bubbler tube. The bubbler tube is dipped in the tank such that its lower end is at the zero (reference) level, and the other end is attached to a pressure regulator and a pressure gauge. The supply of air through the bubbler tube is adjusted so that the air pressure is slightly higher than the pressure exerted by the liquid column in the vessel or tank. This is accomplished by adjusting the air pressure regulator until a slow discharge of air takes place, i.e., bubbles are seen leaving the lower end of the bubbler tube. (In some cases a small airflow meter is arranged to control any excessive airflow.) When there is a small flow of air and the fluid has uniform density, the pressure indicated by the pressure gauge is directly proportional to the height of the liquid level in the tank, provided the gauge is calibrated properly in units of liquid level.
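A short sketch of the hydrostatic relation behind the bubbler reading, p ≈ ρ g h; the density and gauge reading are assumed values:

```python
# Bubbler (purge) system: the back-pressure equals the hydrostatic head rho*g*h,
# so the level follows from the gauge reading.
g = 9.81
rho = 1000.0          # kg/m^3, liquid density (water, assumed)
p_gauge = 14_700.0    # Pa, pressure read on the bubbler gauge (assumed)

level = p_gauge / (rho * g)   # m of liquid above the bubbler tube tip
print(f"Liquid level above tube tip ≈ {level:.2f} m")
```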
FLOW MEASUREMENT:-
Introduction: Measurement of fluid velocity, flow rate and flow quantity, with varying degrees of accuracy, is a fundamental necessity in almost all flow situations of engineering: studying ocean or air currents, monitoring gas input into a vacuum chamber, or measuring blood movement in a vein. The scientist or engineer is faced with choosing a method to measure flow. For experimental procedures, it may be necessary to measure the rates of flow either into or out of engines, pumps, compressors and turbines. In industrial organizations, flow measurement is needed to provide the basis for controlling processes and operations, that is, for determining the proportions of materials entering or leaving a continuous manufacturing process. Flow measurements are also made for the purpose of cost accounting in the distribution of water and gas to domestic consumers, and in gasoline pumping stations.
1. Quantity meters:-
In this class of instruments, the total quantity of fluid passed in a given time is measured. Quantity meters are used for the calibration of flow meters.
2. Flow meters:-
In this class, the actual flow rate is measured. Flow-rate measurement devices frequently require accurate pressure and temperature measurements in order to calculate the output of the instrument; the overall accuracy of the instrument then depends upon the accuracy of the pressure and temperature measurements.
The classification is:
1. Quantity meters.
a. Weight or volume tanks.
b. Positive displacement or semi-positive displacement meters.
2. Flow meters.
a. Obstruction meters.
i. Orifice
ii. Nozzle
iii. Venturi
iv. Variable-area meters.
b. Velocity probes.
i. Static pressure probes.
ii. Total pressure probes.
c. Special methods.
i. Turbine type meters.
ii. Magnetic flow meters.
iii. Sonic flow meter.
iv. Hot wire anemometer.
v. Mass flow meters.
vi. Vortex shedding phenomenon.
d. Flow visualization methods.
i. Shadowgraphy.
ii. Schlieren photography.
iii. Interferometry.
ROTAMETER:-
The rotameter is the most extensively used form of variable-area flow meter. It consists of a vertical tapered tube with a float which is free to move up or down within the tube, as shown in Fig. The tube is made tapered so that there is a linear relationship between the flow rate and the position of the float within the tube. The free area between the float and the inside wall of the tube forms an annular orifice. The tube is mounted vertically with the small end at the bottom. The fluid to be measured enters the tube from the bottom, passes upward around the float and exits at the top. When there is no flow through the rotameter, the float rests at the bottom of the metering tube, where the maximum diameter of the float is approximately the same as the bore of the tube. When fluid enters the metering tube the float moves up, and the flow area of the annular orifice increases. The pressure differential across the annular orifice is proportional to the square of the flow rate and inversely proportional to the square of the orifice flow area. The float is pushed upward until the lifting force produced by the pressure differential across its upper and lower surfaces is equal to the weight of the float. If the flow rate rises, the pressure differential and hence the lifting force increase temporarily; the float then rises, widening the annular orifice, until the force caused by the pressure differential is again equal to the weight of the float. Thus the pressure differential remains constant, and the area of the annular orifice (i.e., the free area between the float and the inside wall of the tube) to which the float moves changes in proportion to the flow rate. Any decrease in flow rate causes the float to drop to a lower position. Every float position corresponds to one particular flow rate for a liquid of a given density and viscosity. A calibration scale printed on the tube or near it provides a direct indication of flow rate. The tube of the rotameter may be of glass or metal.
Advantages:-
1. Simplicity of operation.
2. Ease of reading and installation.
3. Relatively low cost.
4. Handles wide variety of corrosive fluids.
5. Easily equipped with data transmission, indicating and recording devices.
Disadvantages:-
1. Glass tube subject to breakage.
2. Limited to small pipe sizes and capacities.
3. Less accurate compared to venturi and orifice meters.
4. Must be mounted vertically.
5. Subject to oscillations.
TURBINE FLOW METER:-
A freely rotating turbine rotor placed in the flow stream rotates at a speed proportional to the volumetric flow rate Q; the rotation is picked up electrically and counted to indicate the flow.
Advantages:-
1. Good accuracy and repeatability.
2. Easy to install and maintain.
3. Low pressure drop.
4. Electrical output is available.
5. Good transient response.
Disadvantages:-
1. High cost.
2. The bearing of the rotor may be subject to corrosion.
3. Wear and tear problems.
Applications:-
1. Used to determine the fluid flow in pipes and tubes.
2. Flow of water in rivers.
3. Used to determine wind velocity in weather situations or conditions.
HOT WIRE ANEMOMETER:-
Principle:- When a fluid flows over an electrically heated surface, heat transfer takes place from
the surface (wire) to the fluid. Hence, the temperature of the heated wire decreases, which causes
variations in the resistance. The change that occurred in the resistance of the wire is related to the
flow rate.
The sensor is a 5 micron diameter platinum-tungsten wire welded between the two prongs of the probe and heated electrically as part of a Wheatstone bridge circuit. When the probe is introduced into the flowing fluid, it tends to be cooled by the instantaneous velocity, and consequently there is a tendency for the electrical resistance to change. The rate of cooling depends upon the dimensions and physical properties of the wire, the difference in temperature between the wire and the fluid, the physical properties of the fluid, and the stream velocity under measurement.
Depending on the associated electronic equipment, the hot wire may be operated in two modes:
1. Constant current mode: -
Here the voltage across the bridge circuit is kept constant. Initially the circuit is adjusted so that the galvanometer reads zero when the heated wire lies in stationary air. When air flows past the hot wire, the wire cools, its resistance changes and the galvanometer deflects. The galvanometer deflections are amplified and measured in terms of air, liquid or gas velocity.
2. Constant temperature mode: -
Here the resistance of the wire and its temperature is maintained constant in the event of the
tendency of the hot wire to cool by the flowing fluid. The external bridge voltage is applied to the
wire to maintain a constant temperature. The reading on the voltmeter is recorded and correlated
with air velocity.
MAGNETIC FLOWMETER: -
The magnetic flowmeter depends upon Faraday's law of electromagnetic induction: whenever a conductor moves through a magnetic field of given field strength, a voltage is induced in the conductor which is proportional to the relative velocity between the conductor and the magnetic field. In the case of magnetic flowmeters, the electrically conductive flowing liquid works as the conductor. The induced voltage e is given by
e = B L V × 10⁻⁸
where e = induced voltage in volts, B = magnetic flux density in gauss, L = length of the conductor in cm, and V = velocity of the conductor in cm/sec.
The equation of continuity to convert a velocity measurement to volumetric flow rate is given by
Q = AV
Where, Q = volumetric flow rate, A = cross sectional area of flowmeter, V = fluid velocity.
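A small sketch combining the two relations above to go from the sensed voltage to a volumetric flow rate; illustrative values are used, and the CGS form e = BLV × 10⁻⁸ is assumed, so lengths are in cm:

```python
import math

# Magnetic flowmeter: e [V] = B [gauss] * L [cm] * V [cm/s] * 1e-8, then Q = A * V.
B = 50.0      # gauss, flux density between the coils (assumed)
L = 10.0      # cm, electrode spacing = pipe diameter (assumed)
e = 1.0e-4    # V, voltage picked up by the electrodes (assumed)

V_cm_s = e / (B * L * 1e-8)   # cm/s
V = V_cm_s / 100.0            # m/s

D = L / 100.0                 # m, pipe diameter
A = math.pi / 4 * D**2        # m^2
Q = A * V                     # m^3/s

print(f"Flow velocity   V = {V:.2f} m/s")
print(f"Volumetric flow Q = {Q * 1000:.2f} L/s")
```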
Fig illustrates the basic operating principle of a magnetic flowmeter in which the flowing liquid
acts as the conductor. The length L which is the distance between the electrodes is equal to the
pipe diameter. As the liquid passes through the pipe section, it also passes through the magnetic
field set up by the magnet coils, thus inducing the voltage in the liquid which is detected by the
pair of electrodes mounted in the pipe wall. The amplitude of the induced voltage is proportional to
the velocity of the flowing liquid. The magnetic coils may be energized either by AC or DC
voltage but the recent development is the pulsed DC-type in which the magnetic coils are
periodically energized.
Advantages:-
1. It can handle greasy materials.
2. It can handle corrosive fluids.
3. Accuracy is good.
4. It has very low pressure drop.
Disadvantages:-
1. Cost is more.
2. Heavy and larger in sizes.
3. Must be made explosion-proof when installed in hazardous electrical areas.
4. It must be full at all times.
Applications:-
1. Corrosive acids.
2. Cement slurries.
3. Paper pulp.
4. Detergents.
5. Beer, etc.
ULTRASONIC FLOW METER:-
The velocity of propagation of ultrasonic sound waves in a fluid is changed when the velocity of the flow of the fluid changes. The arrangement for flow-rate measurement using ultrasonic transducers contains two piezo-electric crystals placed in the fluid whose flow rate is to be measured. Of these two crystals, one acts as a transmitting transducer and the other acts as a receiving transducer. The transmitter and receiver are separated by some distance, say L. Generally the transmitting transducer is placed upstream, and it transmits ultrasonic pulses which are received by the receiving transducer placed downstream. Let Δt denote the time taken by an ultrasonic pulse to travel from the transmitter to the receiver. If the direction of propagation of the signal is the same as the direction of flow, then the transit time is given by:
Δt1 = L / (Vs + V)
Where L = distance between the transmitter and receiver, Vs = velocity of sound in the fluid, V =
velocity of flow in the pipe.
If the direction of the signal is opposite to the direction of the flow, then the transit time is given by:
Δt2 = L / (Vs - V)
The difference between the two transit times is
Δt = Δt2 - Δt1 = 2 L V / (Vs² - V²)
Compared to the velocity of sound, the velocity of the flowing fluid is very small, so
Δt ≈ 2 L V / Vs²
Therefore the change in transit time is directly proportional to the velocity of the fluid flow:
∴ Δt ∝ V
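A short sketch of recovering the flow velocity from the measured time difference using the approximation above; the spacing, sound speed and time difference are assumed values:

```python
# Transit-time ultrasonic flowmeter: dt ≈ 2*L*V / Vs**2  ->  V ≈ dt * Vs**2 / (2*L)
L = 0.20      # m, spacing between transmitter and receiver (assumed)
Vs = 1480.0   # m/s, speed of sound in the liquid (water, assumed)
dt = 4.0e-7   # s, measured difference between the two transit times (assumed)

V = dt * Vs**2 / (2.0 * L)
print(f"Flow velocity ≈ {V:.2f} m/s")
```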
MEASUREMENT OF SPEED:-
Speed is a rate variable defined as the time-rate of motion. Common forms and units of speed measurement include linear speed expressed in metres per second (m/s), and the angular speed of a rotating machine, usually expressed in radians per second (rad/s) or revolutions per minute (rpm).
Measurement of rotational speed has acquired prominence compared to the measurement of linear
speed. Angular measurements are made with a device called tachometer. The dictionary definitions
of a tachometer are:
* "an instrument used to measure angular velocity, as of a shaft, either by registering the number of rotations during the period of contact, or by indicating directly the number of rotations per minute"
* “an instrument which either continuously indicates the value of rotary speed, or continuously
displays a reading of average speed over rapidly operated short intervals of time”
Tachometers may be broadly classified into two categories:
Mechanical tachometers and
Electrical tachometers.
Mechanical tachometers:
These tachometers employ only mechanical parts and mechanical movements for the measurement
of speed.
1. Revolution counter:
The revolution counter, sometimes called a speed counter, consists of a worm gear which is also the
shaft attachment and is driven by the speed source. The worm drives the spur gear which in turn
actuates the pointer on a calibrated dial. The pointer indicates the number of revolutions turned by
the input shaft in a certain length of time. The unit requires a separate timer to measure the time
interval. The revolution counter, thus, gives an average rotational speed rather than an instantaneous
rotational speed. Such speed counters are limited to low speed engines which permit reading the
counter at definite time intervals. A properly designed and manufactured revolution counter would
give a satisfactory speed measure up to 2000-3000 rpm.
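A tiny sketch of the average-speed arithmetic such a counter and separate timer give; the readings are assumed:

```python
# Average rotational speed from a revolution counter and a separate timer.
revolutions = 250        # counted revolutions (assumed)
elapsed_time_s = 10.0    # s, measured on the separate timer (assumed)

speed_rpm = revolutions / elapsed_time_s * 60.0
print(f"Average speed = {speed_rpm:.0f} rpm")
```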
2. Tachoscope:
The difficulty of starting a counter and a watch at exactly the same time led to the development of
tachoscope, which consists of a revolution counter incorporating a built-in timing device. The two
components are integrally mounted, and start simultaneously when the contact point is pressed
against the rotating shaft. The instrument runs until the contact point is disengaged from the shaft.
The rotational speed is computed from the readings of the counter and timer. Tachoscopes have
been used to measure speeds up to 5000 rpm.
3. Hand speed indicator:
The indicator has an integral stop watch and counter with automatic disconnect. The spindle operates when brought in contact with the shaft, but the counter does not function until the start-and-wind button is pressed to start the watch and engage the automatic clutch. Depressing the starting button also serves to wind the stop watch. After a fixed time interval (usually 3 or 6 seconds),
the revolution counter automatically gets disengaged. The instrument indicates the average speed
over the short interval, and the dial is designed to indicate the rotational speed directly in rpm.
These speed measuring units have an accuracy of about 1% of the full scale and have been used for
speeds within the range 20,000 to 30,000 rpm.
Centrifugal force (fly-ball) tachometer:
The device operates on the principle that the centrifugal force on a rotating mass increases with (the square of) the speed of rotation. Two flyballs (small weights) are arranged about a central spindle. The centrifugal force developed by these rotating balls works to compress the spring as a function of rotational speed. A grooved collar
or sleeve attached to the free end of the spring then slides on the spindle and its position can be
calibrated in terms of the shaft speed. Through a series of linkages, the motion of the sleeve is
usually amplified and communicated to the pointer of the instrument to indicate speed. Certain
attachments can be mounted onto the spindle to use these tachometers for the measurement of linear
speed.
Vibrating reed tachometer:
Tachometers of the vibrating reed type utilize the fact that the speed of a machine and the vibration of its frame are interrelated. The instrument consists of a set of vertical reeds, each having its own natural frequency of vibration. The reeds are lined up in order of their natural frequency and are fastened to a base plate at one end, with the other end free to vibrate. When the tachometer base plate is placed in mechanical contact with the frame of a rotating machine, the reed tuned to resonance with the machine vibration responds with the largest amplitude. The indicated reed vibration frequency can be calibrated to indicate the speed of the rotating machine.
Electrical tachometers:
An electrical tachometer depends for its indication upon an electrical signal generated in proportion to the rotational speed of the shaft. Depending on the type of transducer, electrical tachometers have been constructed in a variety of different designs.
1. Eddy current (drag cup) tachometer:
In an eddy current or drag type tachometer, the test shaft rotates a permanent magnet and this induces eddy currents in a drag cup or disc held close to the magnet. The eddy currents produce a torque which rotates the cup against the torque of a spiral spring. The disc turns in the direction of the rotating magnetic field until the torque developed equals that of the spring. A pointer attached to the cup indicates the rotational speed on a calibrated scale. Automobile speedometers operate
on this principle and measure the angular speed of the wheels. The rotational measurement is
subsequently converted into linear measurement by assuming some average diameter of the wheel,
and the scale is directly calibrated in linear speed units.
Eddy current tachometers are used for measuring rotational speeds up to 12,000 rpm with an
accuracy of ±3%.
2. Commutated capacitor tachometer:
The operation of this tachometer is based on alternately charging and discharging a capacitor, these operations being controlled by the speed of the machine under test. The instrument essentially consists of:
(i) Tachometer head containing a reversing switch, operated by a spindle which reverses twice with
each revolution.
(ii) Indicating unit containing a voltage source, a capacitor, milliammeter and a calibrating circuit.
When the switch is closed in one direction, the capacitor gets charged from the d.c. supply and the
current starts flowing through the ammeter. When the spindle operates the reversing switch to close
it in the opposite direction, the capacitor discharges through the ammeter with the current flow direction
remaining the same. The instrument is so designed that the indicator responds to the average
current. Thus, the indications are proportional to the rate of reversal of the contacts, which in turn is
proportional to the speed of the shaft. The meter scale is graduated to read in rpm rather than in
milliamperes. The tachometer is used within the range 200 to 10,000 rpm.
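The proportionality between meter current and speed can be checked with a small calculation. A Python sketch, assuming each reversal passes a charge q = C·E through the ammeter (an assumption, since the exact circuit is shown only in the figure):

# Capacitor charge/discharge tachometer: the meter reads the average current, which is
# proportional to the reversal rate of the contacts and hence to the shaft speed.
def average_current_mA(rpm, capacitance_uF, supply_V, reversals_per_rev=2):
    reversals_per_s = reversals_per_rev * rpm / 60.0
    charge_per_reversal = capacitance_uF * 1e-6 * supply_V   # q = C * E, assumed
    return charge_per_reversal * reversals_per_s * 1e3       # average current in mA

# Example: 3000 rpm with a 1 uF capacitor and a 10 V supply -> 1.0 mA,
# so the milliammeter scale maps linearly onto rpm.
print(average_current_mA(3000, 1.0, 10.0))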
3. Tachogenerators: These tachometers employ small permanent-magnet type d.c. or a.c. generators which
translate the rotational speed into a d.c. or a.c. voltage signal. The operating principle of such
tachometers is illustrated in Fig. Relative perpendicular motion between a magnetic field and a
conductor results in voltage generation in the conductor.
(i) D.C. tachometer generator: This is an accurately made d.c. generator with a permanent magnet
of horse-shoe type. With rotation of the shaft, a pulsating d.c. voltage proportional to the shaft
speed is produced and measured with the help of a moving coil voltmeter having a uniform scale and
calibrated directly in terms of speed. The tachometer is sensitive to the direction of rotation and
thus can be used to indicate this direction by the use of an indicator with its zero point at mid-scale.
For greater accuracy, air gap of the magnetic paths must be maintained as uniform as possible.
Further, the instrument requires some form of commutation which presents the problem of brush
maintenance.
(ii) A.C. tachometer generator: The unit embodies a stator surrounding a rotating permanent
magnet. The stator consists of a multiple pole piece (generally four), and the permanent magnet is
installed on the shaft whose speed is being measured. When the magnet rotates, an a.c. voltage is
induced in the stator coil. The output voltage is rectified and measured with a permanent magnet
moving coil instrument. The instrument can also be used to measure a difference in speed of two
sources by differentially connecting the stator coils.
Tacho generators have been successfully employed for continuous measurement of speeds up to
500 rpm with an accuracy of ±1%.
(i) Pulse pick-up (toothed rotor) tachometer: A toothed rotor on the shaft produces one pulse per tooth at
the pick-up, and the pulses are counted over a known time interval. If the rotor has 60 teeth, and if the
counter counts the pulses in one second, then the counter will directly display the speed in revolutions per minute.
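The same pulse-counting arithmetic applies to any tooth count and gate time; a brief Python sketch (names illustrative):

# Pulse pick-up tachometer: speed from pulses counted over a gate time.
def rpm_from_pulses(pulse_count, teeth, gate_time_s=1.0):
    """rev/min = (pulses per second / teeth) * 60."""
    return pulse_count / gate_time_s / teeth * 60.0

# With 60 teeth and a 1 s gate the count itself equals the speed in rpm.
print(rpm_from_pulses(1500, teeth=60, gate_time_s=1.0))   # -> 1500.0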
(ii) Capacitive type pick-up tachometer:
The device consists of a vane attached to one end of the rotating machine shaft. When the shaft
rotates between the fixed capacitive plates, there occurs a change in the capacitance. The capacitor
forms a part of an oscillator tank circuit, so that the number of frequency changes per unit time is a measure
of the shaft speed. The pulses thus produced are amplified and squared, and may then be fed to a
frequency-measuring unit or to a digital counter so as to provide a digital indication of the shaft
speed.
(iii) Photo-electric tachometer: These pick-ups utilize a rotating shaft to intercept a beam of light
falling on a photo-electric or photoconductive cell. The shaft has intermittent reflecting (white)
and non-reflecting (black) surfaces. When the beam of light hits a reflecting surface on the rotating
shaft, light pulses are obtained and the reflected light is focused onto the photo-electric cell. The
frequency of light pulses is proportional to the shaft speed, and so will be the frequency of electrical
output pulses from the photo-electric cell.
(iv) Stroboscope:
The stroboscope utilizes the phenomenon of vision when an object is viewed intermittently. The
human sense of vision is so slow to react to light stimuli that it is unable to separate two different
light impulses reaching the eye within a very short period of time (less than 0.1 second). A
succession of impulses following one another at brief intervals is observed by the eye as a
continuous unbroken sequence. A mechanical disk-type stroboscope consists essentially of a
whirling disk, attached to a motor whose speed can be varied and measured, through which the
rotating shaft is viewed intermittently. The disk speed is adjusted until a reference mark on the
shaft appears to be stationary. For this condition, the shaft speed equals that of the rotating disk,
or some whole multiple of this speed.
Measurement of vibration:
Vibration refers to the repeated cyclic oscillations of a system; the oscillatory motions may be
simple harmonic (sinusoidal) or complex (non-sinusoidal). The oscillations arise when the machine is
accelerated alternately in opposite directions.
An excessive vibration level in a machine is an indication of trouble and can cause the following:
* Catastrophic failure as a result of stress caused by induced resonance and fatigue
* Excessive wear because of failure to compensate for vibration to which a product is subjected or
which is created by the product
* Faulty production
* Incorrect operation of precision equipment and machinery because of failure to compensate for
vibration and shock encountered in use
* Human discomfort leading to adverse effects such as motion sickness, breathing and speech
disturbance, loss of touch sensitivity, etc.
Characteristics and units of vibrations: Vibration is generally characterized by
(i) The frequency in Hz, or
(ii) The amplitude of the measured parameter which may be displacement, velocity or acceleration.
Further, the units of vibration depend on the vibration parameter as follows:
(a) Displacement, measured in m, (b) velocity, measured in m/s, and (c) acceleration, measured in
m/s².
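For a purely sinusoidal vibration the three parameters are linked through the angular frequency; the standard relations, stated here for reference, are

x(t) = X\sin(\omega t), \qquad \omega = 2\pi f, \qquad v_{\max} = \omega X, \qquad a_{\max} = \omega^{2} X

For example, a displacement amplitude of 0.1 mm at 50 Hz corresponds to a peak acceleration of about (2\pi \cdot 50)^{2} \times 10^{-4} \approx 9.9 m/s².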
Measurement of acceleration:
There are two types of accelerometers generally used for measurement of acceleration:
(i) Piezo-electric type, and (ii) seismic type.
(i) Piezo-electric accelerometer: The unit is perhaps the simplest and most commonly used
transducer employed for measuring acceleration. The sensor consists of a piezo-electric crystal
sandwiched between two electrodes and has a mass placed on it. The unit is fastened to the base whose
acceleration characteristics are to be obtained. The can threaded onto the base acts as a spring and
squeezes the mass against the crystal. The mass exerts a force on the crystal and a certain output voltage
is generated. If the base is now accelerated downward, the inertial reaction force of the mass acts
upward against the top of the can. This relieves the stress on the crystal. From Newton's second law,
force = mass × acceleration
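Given a sensitivity quoted in mV/g (as in the list below), the measured output voltage maps directly back to acceleration; a minimal Python sketch (the sensitivity and reading are illustrative values):

G = 9.807  # m/s^2 per g

# Piezo-electric accelerometer: acceleration inferred from the output voltage
# and the quoted sensitivity (mV per g).
def acceleration_from_output(output_mV, sensitivity_mV_per_g):
    return (output_mV / sensitivity_mV_per_g) * G

# Example: a 50 mV/g unit reading 125 mV -> 2.5 g, i.e. about 24.5 m/s^2
print(round(acceleration_from_output(125.0, 50.0), 1))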
Advantages
* Rugged and inexpensive device
* High output impedance
* High frequency response, from about 10 Hz to 50 kHz
* High sensitivity, typically 10 to 100 mV/g where g = 9.807 m/s²
* Capability to measure acceleration from a fraction of g to thousands of g
Limitations
* Somewhat sensitive to changes in temperature
* Subject to hysteresis errors.
LECTURE - 6
“Before going into the details of measuring humidity it is important to know some terms
related to humidity measurement.”
Terminology:
1. Humidity: The amount of water vapour contained in air or gas is called humidity.
2. Dry Air: When there is no water vapour contained in the atmosphere, it is called dry air.
3. Moist Air: When there is water vapour contained in the atmosphere, the air is called moist
air.
4. Saturated Air: Saturated air is moist air in which the partial pressure of water vapour equals
the saturation pressure of steam corresponding to the temperature of the air.
5. Humidity Ratio or Specific Humidity or Absolute Humidity or Moisture Content: It is
defined as the ratio of the mass of water vapour to the mass of dry air in a given volume of the
mixture and is denoted by w:
Humidity ratio (w) = (mass of water vapour) / (mass of dry air), for the given volume of mixture.
6. Relative Humidity: It is defined as the ratio of the mass of water vapour in a certain volume of
moist air at a given temperature to the mass of water vapour in the same volume of saturated air at
the same temperature and is denoted by RH or ∅:
Relative humidity (∅) = (mass of water vapour in a given volume of moist air) / (mass of water vapour in the same volume of saturated air), at the same temperature.
Here a comparison is made between the humidity of air and the humidity of saturated air at the
same temperature and pressure. It is to be noted that if relative humidity is 100 % it is saturated air,
i.e., the air contains all the moisture it can hold.
It should also be noted that the degree of saturation (percentage of relative humidity) of air keeps
on changing with temperature.
7. Dew Point Temperature: By continuous cooling at constant pressure, if the temperature of air
is reduced, the water vapour in the air will start to condense at a particular temperature. The
temperature at which the water vapour starts condensing is called the dew point temperature.
8. Dry Bulb Temperature: When a thermometer bulb is directly exposed to an air–water vapour
mixture, the temperature indicated by the thermometer is the dry-bulb temperature. This dry-bulb
temperature is not affected by the moisture present in the air, i.e. the temperature of the air is measured in
the normal way by the thermometer.
9. Wet Bulb Temperature: When a thermometer bulb is covered by a wet wick, and the bulb
covered by the wet wick is exposed to the air–water vapour mixture, the temperature indicated by the
thermometer is the wet bulb temperature. When air passes over the wet wick on the bulb of
the thermometer, the moisture present in the wick starts evaporating and this creates a cooling
effect at the bulb. The bulb then indicates the equilibrium temperature reached when the cooling
effected by the evaporation of water balances the heating by convection.
10. Wet Bulb Depression: Wet - bulb depression = (Dry bulb temperature) - (Wet bulb
temperature). Always dry bulb temperature is higher than the wet bulb temperature.
i.e., (Dry bulb temperature > Wet bulb temperature)
11. Percentage Humidity: It is defined as the ratio of the weight of water vapour in a unit weight of air
to the weight of water vapour in the same weight of air if the air were completely saturated at the same
temperature.
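In terms of partial pressures these definitions take the familiar psychrometric form (standard results, stated here for reference, with p the total pressure of the moist air, p_v the partial pressure of the water vapour and p_{vs} the saturation pressure at the same temperature):

w = \frac{m_v}{m_a} = 0.622\,\frac{p_v}{p - p_v}, \qquad \phi = \frac{p_v}{p_{vs}}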
1. Sling psychrometer:-
Principle: This instrument measures both the dry bulb and wet bulb temperatures at the same
time. From these two temperatures the humidity content of the air can be determined.
Applications:
- It is used for checking humidity level in air conditioned rooms and installations.
- This is used for setting & checking hair hygrometers.
- It is used over the measurement range 0 to 100 % RH and for wet bulb temperatures
between 0 °C and 180 °C.
Limitations:
- Continuous recording of humidity is not possible. The evaporation process at the wet bulb will
add moisture to the air, which will disturb the measured medium. Automation is not possible with
these instruments.
- If the wick is covered with dirt, the wick will become stiff and its water absorbing capacity will
reduce.
2. Absorption hygrometer:
Principle: Humidity changes the physical, chemical and electrical properties of several materials.
This property is used in transducers that are designed and calibrated to read relative humidity
directly. There are two types of absorption hygrometers namely; (a). Mechanical humidity sensing
absorption hygrometer. (b). Electrical humidity sensing absorption hygrometer.
(a) Mechanical humidity sensing absorption hygrometer (hair hygrometer):
Principle:
Hygroscopic materials such as human hair, animal membranes, wood, paper etc, undergo changes
in linear dimensions when they absorb moisture from the surrounding air. This change in linear
dimension is used for the measurement of humidity present in air. A hair hygrometer has been
shown in fig.
Description:
The main parts of the mechanical hair hygrometer type are:
(a) Human hair as the humidity sensor. The hairs are arranged in a parallel beam and are
separated from one another to expose them to the surrounding air. A number of hairs are placed in
parallel to increase the mechanical strength, as shown in fig.
(b) The hair arrangement is subjected to light tension by the use of a tension spring to ensure
proper functioning.
(c) The hair arrangement is connected to an arm-and-link arrangement; the link in turn is
attached to a pointer, pivoted at one end. The pointer sweeps over a humidity-calibrated scale.
Operation: When the humidity of air is to be measured, the hair arrangement is exposed to the air
medium and this absorbs the humidity from the surrounding air and expands or contracts in the
linear direction. This expansion or contraction of the arrangement moves the arm & link and thus
the pointer on the calibrated scale, indicating the humidity present in the atmosphere. These
hygrometers are called membrane hygrometers when the sensing element is a membrane.
Applications:
- Temperature range of these hygrometers is 0 to 75°C
- RH (Relative humidity) range is 30 to 95 %.
Limitations:
- Response time is slow
- Calibration tends to change if it is used continuously
(b) Electrical humidity sensing absorption hygrometer:
Principle:
Humidity changes the resistance of some materials. This change in resistance is taken as a measure
of humidity.
Description:
The main parts of this arrangement are:
(a) The two metal electrodes, which are coated and separated by a humidity sensing hygroscopic
salt (lithium chloride) as shown in fig.
(b) The leads of the electrodes are connected to a null balance Wheatstone bridge
Operation:
The electrodes coated with lithium chloride are exposed to atmosphere whose humidity is to be
measured. Humidity variation causes the resistance of the lithium chloride to change, i.e., the chemical
absorbs or loses moisture and its resistance changes accordingly. The higher the relative humidity (RH)
of the atmosphere, the more moisture is absorbed by the lithium chloride and the lower is its
resistance; with less humidity the resistance is higher. The change in resistance
is measured using the Wheatstone bridge and becomes a measure of the humidity (RH) present in the
atmosphere.
Applications:
-These are used under constant temperature conditions.
-The accuracy of this instrument is ± 25%
-The response is very fast, of the order of few seconds.
Limitations:
This instrument should not be exposed to 100 % humidity, as this makes the chemical absorb all the
moisture and damages the instrument. Temperature corrections must be made if the instrument is not used at
constant temperature conditions.
Dew point apparatus:
Principle: At constant pressure, if the temperature of air is reduced, the water vapour in the air will
start to condense at a particular temperature. This temperature is called the dew point temperature.
Description:
The main parts of the arrangement are:
(a) A shiny mirror surface fixed with a thermocouple as shown in fig.
(b) A nozzle to provide a jet of air on the mirror.
(c) A light source focused constantly on the mirror.
(d) A photo cell to detect the amount of light reflected from the mirror.
Operation:
-The mirror is constantly cooled by a cooling medium which is maintained at a constant
temperature.
-A thermocouple is attached to this mirror whose leads are connected to a milli voltmeter.
-Constantly a light is made to fall at an angle on the mirror and the amount of reflected light is
sensed by a photo cell as shown in fig.
Now an air jet is made to fall on the mirror and the water- vapour (moisture) contained in the air
starts condensing on the mirror and they appear as small drops (dews) on the mirror. This moisture
formed on the mirror reduces the amount of light reflected which is detected by the photocell.
When, for the first time, a change in the amount of reflected light is detected, it is an
indication of dew formation; at this instant the temperature indicated by the thermocouple
attached to the mirror is the dew-point temperature.
This instrument is used to know the time at which the dew appears for the first time and to know
the dew point temperature.
Applications:
-Cargoes can be protected from condensation damage by this instrument by maintaining the dew
point of air in holds lower than the cargo temperature.
-Used in industries for determining dew point.
Limitations:
-Effective light measurement is not possible.
-Limitations in cooling fluids exist.
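Once the dew point and the dry-bulb temperature are both known, the relative humidity can be estimated from the ratio of saturation vapour pressures. A Python sketch using the Magnus approximation (the approximation and its constants are not from the lecture text):

import math

# Saturation vapour pressure (hPa) by the Magnus approximation.
def saturation_vp_hPa(t_celsius):
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# RH follows from the ratio of the saturation pressure at the dew point
# to that at the dry-bulb temperature.
def relative_humidity_percent(dry_bulb_C, dew_point_C):
    return 100.0 * saturation_vp_hPa(dew_point_C) / saturation_vp_hPa(dry_bulb_C)

# Example: dry bulb 30 deg C with a measured dew point of 20 deg C -> roughly 55 % RH
print(round(relative_humidity_percent(30.0, 20.0), 1))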
Quartz crystal hygrometer: The frequency shift of the crystal, due to the change of its weight in the
presence of moisture in the sample gas, is measured electronically and the difference in frequency is
determined. This frequency difference is converted into a signal, which is then converted into digital
form and displayed. Thus, the quartz crystal hygrometer can measure the humidity or moisture content
of gases ranging from about 1 to 30 ppm (by volume).
MEASUREMENT OF FORCE, TORQUE AND POWER:-
An engineer is concerned not only with the generation of power by a prime-mover but is also
required to measure the useful output. That helps the engineer to know how well the prime-mover is
doing its job in relation to the energy supplied to it. The terms related to engine output are:
(i) Force: Force represents the mechanical quantity which changes or tends to change the relative
motion or shape of the body on which it acts. Force is a vector quantity, specified completely by its
magnitude, point of application, line of action and direction.
The relationship between motion and force is provided by the laws of dynamics. Newton's second
law of motion states that force is proportional to the rate of change of momentum; that is, F ∝ d(mv)/dt, which for a constant mass reduces to F = m × a.
(ii) Work: Work represents the product of force and the displacement measured in the direction of
force. Work done = force × displacement; W= F s
The unit of work is joule (J) which is defined as the work done by a constant force of one newton
acting on a body and moving it through a distance of one metre in its direction.
1J=1Nm
(iii) Torque: It represents the amount of twisting effort, and numerically it equals the product of
force and the moment arm or the perpendicular distance from the point of rotation (fulcrum) to the
point of application of force. Consider a wheel rotated by the force F applied at radius r. Torque or
twisting moment is then given by T = F × r
Thus measurement of torque is intimately related to force measurement.
(iv) Power: Power is the rate of doing work and is obtained by dividing the work done by time.
The unit of power is watt (W), kilowatt (kW) or megawatt (MW). Watt represents a work
equivalent of one joule done per second.
In one rotation the angular displacement is θ = 2π rad; if the wheel makes N revolutions per minute, the angular displacement
per second is ω = 2πN/60 rad/s, so that the power transmitted is P = Tω = 2πNT/60.
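Putting the torque and speed together gives the shaft power directly; a minimal Python sketch (numbers are illustrative):

import math

# Power transmitted by a rotating shaft: P = T * omega = 2*pi*N*T/60.
def shaft_power_W(torque_Nm, rpm):
    omega = 2.0 * math.pi * rpm / 60.0   # angular speed in rad/s
    return torque_Nm * omega

# Example: 150 N.m at 1500 rpm -> about 23.6 kW
print(round(shaft_power_W(150.0, 1500) / 1000.0, 1))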
Force Measurement:
A measure of the unknown force may be accomplished by the methods incorporating following
principles:-
(i) Balancing the force against a known gravitational force on a standard mass (scales and balances)
(ii) Translating the force into a fluid pressure and then measuring the resulting pressure (hydraulic and
pneumatic load cells)
(iii) Applying the force to some elastic member and then measuring the resulting deflection (proving ring)
(iv) Applying the force to a known mass and then measuring the resulting acceleration
(v) Balancing the force against a magnetic force developed by the interaction of a magnet and a current-
carrying coil.
The proving (stress) ring is a ring of known physical dimensions and mechanical properties. When
an external compressive or tensile load is applied to the lugs or external bosses the ring changes in
its diameter; the change being proportional to the applied force. The amount of ring deflection is
measured by means of a micrometer screw and a vibrating reed which are attached to the internal
bosses. During use, the micrometer tip is advanced and its contact with the reed is indicated by
considerable damping of the reed vibration. The difference in the micrometer reading taken before
and after the application of load is the measure of the amount of the elongation or compression of
the ring. The proving ring deflection can also be picked by LVDT resulting in a proportional
voltage change. The device gives precise results when properly calibrated and corrected for
temperature variations. Instead of deflection, strain in an elastic member may be measured by a
strain gauge, and then correlated to the applied force.
Mechanical load cells: The term load cell is used to describe a variety of force transducers which
may utilize the deflection or strain of elastic member or the increase in pressure of enclosed fluids.
The resulting fluid pressure is transmitted to some form of pressure sensing device such as a
manometer or a bourdon tube pressure gauge. The gauge reading is identified and calibrated in
units of force.
In a hydraulic load cell the force variable is impressed upon a diaphragm which deflects and
thereby transmits the force to a liquid. The liquid medium, contained in a confined space, has a
preload pressure of the order of 2 bars. Application of force increases the liquid pressure; it equals
the force magnitude divided by the effective area of the diaphragm. The pressure is transmitted to
and read on an accurate pressure gauge calibrated directly in force units. The system has a good
dynamic response; the diaphragm deflection being less than 0.05 mm under full load. This is
because the diaphragm has a low modulus and substantially all the force is transmitted to the
liquid. These cells have been used to measure loads up to about 2.5 MN with an accuracy of the order of
0.1 percent of full scale; resolution is about 0.02 percent.
A pneumatic load cell operates on the force-balance principle and employs a nozzle-flapper
transducer similar to the conventional relay system. A variable downward force is balanced by an
upward force of air pressure against the effective area of a diaphragm. Application of force causes
the flapper to come closer to the nozzle and the diaphragm to deflect downwards. The nozzle
opening is nearly shut off and this results in an increased back pressure in the system. The
increased pressure acts on the diaphragm and produces an effective upward force which tends to
return the diaphragm to its preload position.
For any constant applied force, the system attains equilibrium at a specific nozzle opening and a
corresponding pressure is indicated by the height of mercury column in a manometer. Since the
maximum pressure in the system is limited to the air-supply pressure, the range of the unit can be
extended only by using a larger diameter diaphragm. The commercially available load cells
operating on this principle can measure loads up to 250 kN with an accuracy of 0.5 percent of full
scale. The air consumption is of the order of 0.17 m³/hr of free air.
A simple load cell consists of a steel cylinder which has four identical strain gauges mounted upon
it; the gauges 𝑅1 and 𝑅4 are along the direction of applied load and the gauges 𝑅2 and 𝑅3 are
attached circumferentially at right angles to gauges 𝑅1 and 𝑅4. These four gauges are connected
electrically to the four limbs of a Wheatstone bridge circuit. When there is no load on the cell, all
the four gauges have the same resistance. Evidently, then, the terminals B and D are at the same
potential; the bridge is balanced and the output voltage is zero.
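A rough feel for the signal level can be had from the standard bridge relations. The Python sketch below assumes the two axial gauges sit in opposite arms of the bridge and the two circumferential gauges in the remaining arms (an assumption, since the wiring is shown only in the figure); all numbers are illustrative.

# Strain-gauge load cell output (sketch). The axial and circumferential (Poisson)
# gauges change in opposite senses, so their contributions add in the bridge:
# e_o = (V/4) * GF * 2 * strain * (1 + nu)
def bridge_output_mV(force_N, area_m2, E_Pa, gauge_factor, poisson, excitation_V):
    axial_strain = force_N / (area_m2 * E_Pa)        # magnitude of the axial strain
    output_V = (excitation_V / 4.0) * gauge_factor * 2.0 * axial_strain * (1.0 + poisson)
    return output_V * 1e3

# Example: 100 kN on a 1e-3 m^2 steel column (E = 200 GPa), GF = 2, nu = 0.3,
# 10 V excitation -> about 6.5 mV
print(round(bridge_output_mV(100e3, 1e-3, 200e9, 2.0, 0.3, 10.0), 2))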
Such strain-gauge load cells find application in draw-bar and tool-force dynamometers, crane load monitoring, road-
vehicle weighing devices, etc.
Torque measurement:
The main purpose of torque measurement is to determine the mechanical power required or
developed by a machine. Torque measurement also helps in obtaining load information necessary
for stress or strain analysis. In some cases other variables are determined by measuring torque. For
example, in the case of a rotating-cylinder viscometer, measurement of the torque developed at the fixed
end of the stationary cylinder helps in determining the viscosity of the fluid between the movable
and stationary cylinders.
Optical torsion meter: The meter uses an optical method to detect the angular twist of a rotating
shaft.
The unit comprises two castings A and B which are fitted to the shaft at a known distance apart.
These castings are attached to each other by a tension strip C which transmits torsion but has little
resistance to bending. When the shaft is transmitting a torque, there occurs a relative movement
between the castings which results in partial inclination between the two mirrors attached to the
castings. The mirrors are made to reflect a light beam onto a graduated scale; angular deflection of
the light ray is then proportional to the twist of, and hence the torque in, the shaft. For constant
torque measurements from a steam turbine, the two mirrors are arranged back to back and there
occurs a reflection from each mirror during every half revolution. A second system of mirrors
giving four reflections per revolution is desirable when used with a reciprocating engine whose
torque varies during a revolution
Electrical torsion meter: A system using two magnetic or photoelectric transducers, as shown in
Fig, involves two sets of measurements.
(i) a count of the impulse from either slotted wheel. This count gives the frequency or shaft speed.
(ii) a measure of the time between pulses from the two wheels. This signal is proportional to the
twist of, and hence torque T in the shaft. These two signals, T and 𝜔, can be combined to estimate
the power being transmitted by the shaft.
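A sketch of how the two signals combine, in Python. The torsional stiffness k (= GJ/L of the shaft section between the wheels) used here is an assumed calibration constant, and the numbers are illustrative:

import math

# Electrical torsion meter: the pulse rate gives the shaft speed, and the time offset
# between the pulses of the two slotted wheels gives the relative twist.
def shaft_power_from_pulses(rpm, delay_s, torsional_stiffness_Nm_per_rad):
    omega = 2.0 * math.pi * rpm / 60.0          # shaft speed in rad/s
    twist = omega * delay_s                     # twist between the two wheels, rad
    torque = torsional_stiffness_Nm_per_rad * twist
    return torque * omega                       # transmitted power, W

# Example: 1500 rpm, a 50 microsecond pulse delay and k = 1e4 N.m/rad -> about 12.3 kW
print(round(shaft_power_from_pulses(1500, 50e-6, 1.0e4) / 1000.0, 1))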
Strain- gauge torsion meter: A general configuration of a strain gauge bridge circuit widely
employed for torque measurement from a rotating shaft is shown in Fig.
Four bonded-wire strain gauges are mounted on a 45° helix with respect to the axis of rotation and are
placed in pairs diametrically opposite. If the gauges are accurately placed and have matched
characteristics, the system is temperature compensated and insensitive to bending, thrust or
pull effects. Any change in the gauge circuit then results only from torsional deflection. When the
shaft is under torsion, gauges 1 and 4 will elongate as a result of the tensile component of the pure
shear stress on one diagonal axis, while gauges 2 and 3 will contract owing to the compressive
component on the other diagonal axis. These tensile and compressive principal strains can be
measured, and the shaft torque can be calculated
A main problem of the system is carrying connections from the strain gauges (mounted on the
rotating shaft) to a bridge circuit which is stationary. For slow shaft rotations, the connecting wires
are simply wrapped around the shaft. For continuous and fast shaft rotations, leads from the four
junctions of the gauges are led along the shaft to the slip rings. Contact with the slip rings is made
with the brushes through which connections can be made to the measuring instrument.
Commercial strain-gauge torque sensors are available with built-in slip rings and speed sensors. A
family of such devices covers the range 6 Nm to 1000 kNm with full-scale output of about 40 mV.
The dynamometer is a device used to measure the torque being exerted along a rotating shaft so as
to determine the shaft power input or output of power-generating, transmitting, and absorbing
machinery. Dynamometers are generally classified into:
(i) Absorption dynamometers in which the energy is converted into heat by friction while being
measured. The heat is dissipated to the surroundings where it generally serves no useful purpose.
Absorption dynamometers are used when the test machine is a power generator such as an
engine, a turbine or an electric motor. The types commonly used include Prony brakes, hydraulic
or fluid-friction brakes, fan brakes and eddy current dynamometers.
(ii) Transmission dynamometers in which the energy being transmitted either to or from
dynamometer is not absorbed or dissipated. After measurement, the energy is conveyed to the
surroundings in a useful mechanical or electrical form. A small amount of power, however, can
be lost by friction at the joints of the dynamometer. The type includes torsion and belt
dynamometers, and strain gauge dynamometers.
(iii) Driving dynamometer which may be coupled to either power-absorbing or power generating
devices since it may operate either a motor or a generator. These instruments measure power and
also supply energy to operate the tested devices. They are essentially useful in determining
performance characteristics of such machines as pumps and compressors. Electric cradled
dynamometer is a typical example of the driving dynamometer.
Mechanical brakes: The Prony and the rope brakes are the two types of mechanical brakes chiefly
employed for power measurement. The Prony brake has two common arrangements: the block type
and the band type. Whereas the block type is employed for high-speed shafts with a small
pulley, the band type measures the power of low-speed shafts having a relatively large pulley.
The block type prony brake consists of two blocks of wood each of which embraces rather less
than one half of the pulley rim. One block carries a lever arm to the end of which a pull can be
applied by means of a dead weight or spring balance. A second arm projects from the block in the
opposite direction and carries a counter-weight to balance the brake when unloaded. When
operating, friction between the blocks and the pulley tends to rotate the blocks in the direction of
the rotation of the shaft. This tendency is prevented by adding weights at the extremity of the lever
arm so that it remains horizontal in a position of equilibrium.
Let W be the weight in newtons, l be the effective length of the lever arm in metres, and N be the
revolutions of the crankshaft per minute. Then:
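In terms of these symbols the brake torque and the power absorbed follow directly (the standard Prony brake relations, stated here for reference):

T = W\,l \ \ \text{N·m}, \qquad \text{Brake power} = \frac{2\pi N T}{60} = \frac{2\pi N W l}{60} \ \ \text{W}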
Hydraulic dynamometer:-
A hydraulic dynamometer uses fluid friction rather than dry friction for dissipating the input
energy. The unit consists essentially of two elements, namely a rotating disk and a stationary casing.
The rotating disk is keyed to the driving shaft of the prime-mover and it revolves inside the
stationary casing. The casing is mounted on antifriction (trunnion) bearings and has a brake arm
and a balance system attached to it. Such bearings allow the casing to rotate freely except for the
restraint imposed by the brake arm. Further, the casing is in two-halves; one of which is placed on
either side of the rotating disk. Semi-elliptical recesses in the casing match with the corresponding
grooves inside the rotating disk to form chambers through which a stream of water flow is
maintained. When the brake is operating, the water follows a helical path in the chambers. Vortices and
eddy currents are set up in the water and these tend to turn the dynamometer casing in the
direction of rotation of the engine shaft. This tendency is resisted by the brake arm and balance system,
which measures the torque.
LECTURE-7
The term control implies to regulate, direct or command. A control system may thus be defined as
"an assemblage of devices and components connected or related so as to command, direct or
regulate itself or another system".
Examples:
1. An electrical switch which serves to control the flow of electricity in a circuit. The input signal
(command) is the flipping of the switch on or off, and the corresponding output (controlled)
signal is the flow or non-flow of electric current.
2. A thermal system where it is desired to maintain the temperature of hot water at a prescribed
value. Before the operator can carry out his task satisfactorily, the following requirements must be
met:
(a) The operator must be told what temperature is required for the water. This temperature, called
the set point or desired value, constitutes the input to the system.
(b) The operator must be provided with some means of observing the temperature (sensing
element). For that, a thermometer is installed in the hot-water pipe; it measures the temperature,
which is compared with the desired value. The difference between the desired value and the actual
measured value is the error or actuating signal:
e=r-c
where r refers to the set-point or reference input and c denotes the controlled variable.
(c) The operator must be provided with some means of influencing the temperature (control
element) and must be instructed what to do to move the temperature in the desired direction (control
function).
3. A driving system of an automobile (accelerator, carburetor and engine-vehicle) where the
command signal is the force on the accelerator pedal and the automobile speed is the controlled
variable. The desired change in engine speed can be obtained by controlling the pressure on the
accelerator pedal.
4. An automobile steering system where the driver is required to keep the automobile in the
appropriate lane of the roadway. The eyes measure the output (heading of the automobile), the
brain and hands react to any error existing between the input (appropriate lane) and the output
signals, and act to reduce the error to zero.
5. A biological control system where a person moves his finger to point towards an object. The
command signal is the position of the object and the output is the actual pointed direction.
Other well-known examples of control systems are: electric frying pans, water pressure
regulators, toilet-tank water level, electric irons, refrigerators and household furnaces with
thermostatic control.
Classification of control systems:
There are two basic types of control systems: 1. open-loop and 2. closed-loop systems.
1. Open-loop systems (unmonitored control systems). The main features of an open-loop system
are:
(a) there is no comparison between the actual (controlled) and the desired values of a variable.
(b) for each reference input, there corresponds a fixed operating condition (output) and this output
has no effect on the control action, i. e., the control action is independent of output.
(c) for the given set-input, there may be a big variation of the controlled variable depending upon
the ambient conditions. Since there is no comparison between actual output and the desired value,
rapid changes can occur in the output if there occurs any change in the external load.
2. Closed-loop systems (monitored control systems). The main features of a closed loop system
are:
(a) There is comparison between the actual (controlled) and the desired values of the variable. To
accomplish it, the output signal is fed back and the loop is completed.
(b) The error signal (deviation between the reference inputs and the feedback signals) then
actuates the control element to minimize the error and bring the system output to the desired value.
(c) The system operation is continually correcting any error that may exist. As long as the output
does not coincide with the desired goal, there is likely to be some kind of actuating signal.
The performance of such a system is evaluated with reference to the following desirable
characteristics:
* minimum deviation following a disturbance
*minimum time interval before return to set point
* minimum off-set due to change in operating conditions.
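A minimal closed-loop sketch in Python ties these ideas together: the output is measured, the error e = r - c is formed, and an on/off control element acts to reduce it. All values are illustrative, not from the text:

# On/off (two-position) closed-loop temperature control, simulated crudely.
def simulate(set_point=60.0, ambient=20.0, steps=200, dt=1.0):
    temp = ambient
    for _ in range(steps):
        error = set_point - temp            # e = r - c (actuating signal)
        heater_on = error > 0               # control element: on/off action
        heat_in = 2.0 if heater_on else 0.0 # heat added per step when on
        loss = 0.02 * (temp - ambient)      # heat lost to the surroundings
        temp += (heat_in - loss) * dt       # the controlled variable keeps being corrected
    return temp

# The temperature climbs from 20 deg C and then cycles in a narrow band about the
# 60 deg C set point, illustrating continual correction of the error.
print(round(simulate(), 1))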
(ii) The level control system depicted in Fig. 15.10 is an automatic control system where inflow of
water to the tank is dependent on the water level in the tank. The automatic controller maintains
the liquid level by comparing the actual level with a desired level and correcting any error by
adjusting the opening of the control valve.
(iii) A pressure control system where the pressure inside the furnace is automatically controlled by
effecting a change in the position of the damper (Fig 15.12).