Industrial Instrumentation Notes: Unit 1
COURSE OUTCOMES
On completion of the course, the students will be able to
CO1: Explain the analog electronic and pneumatic signal transmission techniques and devices used in process industries. (K2-describe, A1)
CO2: Describe the operating principle of sensors used to measure position, displacement, velocity and acceleration. (K2, A1)
CO3: Describe the operating principles and outline the application aspects of pressure measurement systems. (K2, A1)
CO4: Explain the operating principle of force and torque measurement systems. (K2-describe, A1)
PSO 1: Acquire hands-on training on electronic system design, process instrumentation and control systems.
PSO 2: Solve real-life industrial and research problems by applying domain knowledge and skills.
PSO 3: Identify community-specific problems and provide acceptable technical solutions to them using a multidisciplinary approach.
SYLLABUS DETAILS
Unit I
Analog electronic transmitters & pneumatic systems: CO1: 14 hrs
Introduction to electronic transmitters. Sensor linearization techniques, redundant measurement
systems.
Flapper-nozzle assembly. Pneumatic relays, air filter regulator, pneumatic force balance systems,
introduction to compressed air supply systems.
Unit II
Measurement of position, displacement, velocity, acceleration: CO2: 14 hrs
Limit switch, Proximity Sensors - Inductive, Photoelectric, Capacitive and Magnetic. Shaft encoders,
Tachogenerators, tachometers, stroboscopes. Accelerometers. Introduction to vibration measurement.
Unit III
Measurement of pressure and vacuum: CO3: 16 hrs
Concept of absolute, gauge and differential pressure. Pressure units and measurement principles. Elastic
pressure sensors: bourdon tube, bellows, diaphragm and capsule. Manometers. Pressure gauge. Pressure
switch. Electronic pressure transmitters: capacitive, piezo-resistive and resonator type. Calibration of
pressure measuring devices. Installation of pressure measuring devices in different services.
Measurement accessories - chemical seal and snubbers.
Vacuum measurement: McLeod gauge, thermal conductivity and ionization gauges.
Unit IV
Force and torque measurement systems: CO4: 12 hrs
Strain gauge, strain gauge signal processing. Load cells: column, shear and bending beam type,
magnetostrictive load cell. Introduction to industrial weighing systems and belt conveyor weighing
systems. Weigh feeders. Principle of torque measurement in rotating shafts.
LESSON PLAN
Industrial Instrumentation Course code: IEE/PC/B/T/223
MONDAY (SATURDAY DUE TO NBA): Introduction to electronic transmitters. Redundant measurement systems.
WEDNESDAY: Sensor linearization techniques.
MONDAY: Flapper-nozzle assembly. Pneumatic relays.
WEDNESDAY: Air filter regulator, pneumatic force balance systems.
MONDAY: Introduction to compressed air supply systems.
WEDNESDAY: Limit switch. Proximity sensors - inductive.
MONDAY: Proximity sensors - photoelectric.
WEDNESDAY: Proximity sensors - capacitive and magnetic.
MONDAY: Shaft encoders, tachogenerators, tachometers.
WEDNESDAY: Stroboscopes. Accelerometers.
MONDAY: Introduction to vibration measurement.
WEDNESDAY: Concept of absolute, gauge and differential pressure. Pressure units and measurement principles.
MONDAY: Elastic pressure sensors: bourdon tube, bellows, diaphragm and capsule.
WEDNESDAY: Manometers. Pressure gauge. Pressure switch. Electronic pressure transmitters.
MONDAY: Capacitive, piezo-resistive and resonator type transmitters.
WEDNESDAY: Measurement accessories - chemical seal and snubbers.
MONDAY: Calibration of pressure measuring devices. Installation of pressure measuring devices in different services.
WEDNESDAY: Vacuum measurement: McLeod gauge, thermal conductivity gauge.
MONDAY: Ionization gauge.
WEDNESDAY: ASSIGNMENT/CAT.
MONDAY: Strain gauge, strain gauge signal processing.
WEDNESDAY: Load cells: column, shear and bending beam type. Magnetostrictive load cell.
MONDAY: Introduction to industrial weighing systems and belt conveyor weighing systems.
WEDNESDAY: Weigh feeders. Principle of torque measurement in rotating shafts.
MONDAY: ASSIGNMENT/CAT 2.
Sensors and transmitters have been playing an increasingly important role in the field of
instruments and meters, and in industrial automation. These two instruments are commonly used
on equipment that requires temperature, pressure, flow, and object-space measurements, and the
results must be transmitted for further automation control.
Those unfamiliar with such instruments can easily get confused, since both sensors and
transmitters are used for medium measurement. So, to get started, let us first discuss the
differences between sensors and transmitters.
A sensor consists of a sensitive element and conversion element. The sensing element can
sense the measured variables (temperature, pressure, liquid level, and flow), and the conversion
element can convert the sensed variables into non-standard electrical signals or other forms of
output signals. The output signal of a sensor is non-standard.
Different from sensors, a transmitter cannot sense the measured variables by itself.
Rather, it converts the non-standard electrical signal output by the sensor into a standard, measurable
electric signal, usually a 4- to 20-mA current signal or a 1- to 5-V DC voltage signal. Meanwhile,
a transmitter also amplifies the signal for the subsequent receiving instrument.
At present, many transmitters are integrated with sensors to create one instrument. This
integrated instrument is referred to as a transmitter, not a sensor; for example, the Rosemount
transmitter 3051 series and ABB transmitter TTH 200 series, etc.
Industry: Economic activity concerned with the processing of raw materials and manufacture of
goods in factories. The engineering sector is made up of a wide range of industries (including
fabricated metal products, industrial machinery and equipment, electronics and other electrical
equipment, transportation equipment, and instruments and related products).
Manufacture of food products
Manufacture of beverages
Manufacture of tobacco products
Manufacture of textiles
Manufacture of wearing apparel
Manufacture of leather and related products
Manufacture of wood and of products of wood and cork, except furniture;
manufacture of articles of straw and plaiting materials
Manufacture of paper and paper products
Printing and reproduction of recorded media
Manufacture of coke and refined petroleum products
Manufacture of chemicals and chemical products
Manufacture of basic pharmaceutical products and pharmaceutical preparations
Manufacture of rubber and plastic products.
Manufacture of other non-metallic mineral products
Manufacture of basic metals
Manufacture of fabricated metal products,
except machinery and equipment
Manufacture of computer, electronic and optical products
Manufacture of electrical equipment
Manufacture of machinery and equipment n.e.c.
Manufacture of motor vehicles, trailers and semi-trailers
Manufacture of other transport equipment
Manufacture of furniture
Other manufacturing
Repair and installation of machinery and equipment
Prior to the invention of electronic circuitry, process control systems used pneumatic control
signals. In these systems, controllers were powered by distinct pressures of compressed air.
Eventually, air compression of 3-15 psi became the industry standard for a few reasons. First, it
was very expensive to engineer a system that would detect pressure signals under 3 psi. Second,
signals below 3 psi were unrecognizable. Lastly, using 3 psi to indicate a value of 0% measurement
made it easier to identify when system faults occurred, in other words, when the signal dropped to
zero. As electronic systems made their debut in the 1950s, current became the preferred, more
precise, and more efficient process control signal.
Another reason why a 0 mA signal is not efficient is the inability to clearly differentiate between
a measurement of zero and a system failure in which the signal drops to zero. The term live zero
describes a loop signal whose zero value is a number higher than zero (i.e., 4 mA); the term dead
zero denotes a loop signal whose zero value is indeed zero (i.e., 0 mA).
A further advantage of a live zero over a dead zero is that the 4-20 mA signal supports two-wire
transmission, which supplies the power needed for loop-powered devices like transmitters and
displays to operate. Having reviewed the reasons why a 0 mA signal is not practical, it is easy to
see why the process control industry has preferred the 4-20 mA signal range.
4-20 mA transmitter
This is the device used to transmit data from a sensor over the two-wire current loop. There can be
only one Transmitter output in any current loop. It acts like a variable resistor with respect to its
input signal and is the key to the 4-20mA signal transmission system. The transmitter converts the
real world signal, such as flow, speed, position, level, temperature, humidity, pressure, etc., into
the control signal necessary to regulate the flow of current in the current loop.
The level of loop current is adjusted by the transmitter to be proportional to the actual sensor input
signal. An important distinction is that the transmitted signal is not the current in the loop, but
rather the sensor signal it represents. The transmitter typically uses a 4 mA output to represent the
calibrated zero input (0%) and a 20 mA output to represent the calibrated full-scale input (100%),
as shown. Integrated current-loop transmitter ICs such as the XTR116 discussed below make the
design of 2-wire field transmitters easier.
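To make the 0-100% to 4-20 mA mapping concrete, here is a minimal Python sketch of the scaling a transmitter performs and the inverse mapping used by the receiver. The function names and the strict range check are illustrative choices, not taken from any particular device.

# Minimal sketch of the 4-20 mA scaling a transmitter performs.
# Names and ranges are illustrative, not tied to any particular device.

def percent_to_current_ma(percent):
    """Map a calibrated input (0-100 %) to a 4-20 mA loop current."""
    if not 0.0 <= percent <= 100.0:
        raise ValueError("input outside calibrated range")
    return 4.0 + (percent / 100.0) * 16.0   # 16 mA span on top of the 4 mA live zero

def current_ma_to_percent(current_ma):
    """Inverse mapping used by the receiving instrument."""
    return (current_ma - 4.0) / 16.0 * 100.0

if __name__ == "__main__":
    for p in (0, 25, 50, 100):
        print(p, "% ->", percent_to_current_ma(p), "mA")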
An example of a circuit utilizing the XTR116 is shown in the figure. Here, a resistive sensor is
placed in a full resistive bridge. The output of the bridge is connected to an instrumentation
amplifier, INA, which provides gain and level shifting of the sensor output. The INA output
connects to the input of the XTR116, which then precisely controls the output current through the
Q1 BJT to regulate the current between 4 mA and 20 mA. The XTR116 also integrates a +5 V
linear regulator, VREG, and a 4.096 V precision reference, VREF. The VREG output is used to
power the INA and the op amp circuitry internal to the XTR116. The VREF output provides a
precise low-drift excitation voltage for the resistive bridge. The most common issue that people
encounter when designing 2-wire field transmitter systems results from violating the compliance
voltage of the system. The XTR116 has a minimum power-supply voltage, V_COMPLIANCE,
of +7.5 V between V+ and IO for proper operation. If the resistive load and/or resistive losses
due to cable length cause the supply voltage to decrease below +7.5 V, the system will lose its
ability to regulate the output current.
In the figure, an 18 V drop occurs in the current loop due to the 20 mA output current and the 900
ohm of series resistance in the loop. With a 24 V supply, this would only leave +6 V across
the XTR116, which does not meet the minimum supply voltage requirement of +7.5 V. As a
result, the output current will not reach 20 mA and will typically become non-linear as the input
circuitry loses power.
Voltage compliance issues are directly related to Ohm's Law. The product of the output current
and the total resistance in the loop cannot exceed the supply voltage minus the compliance voltage
required by the transmitter. If V_COMPLIANCE and V_LOOP are known, the maximum loop
resistance, R_MAX, can be calculated as shown in Equation 1: R_MAX = (V_LOOP - V_COMPLIANCE) / 20 mA.
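As a worked check of the compliance-voltage discussion, the short Python sketch below applies the Ohm's-law relation above to the example values from the text (24 V supply, 7.5 V minimum compliance, 900 ohm of loop resistance). The variable names are illustrative.

# Sketch of the compliance-voltage check described above. The worst case is at 20 mA.
I_MAX = 0.020          # A, full-scale loop current
V_LOOP = 24.0          # V, loop supply in the example
V_COMPLIANCE = 7.5     # V, minimum voltage the XTR116 needs across itself

def max_loop_resistance(v_loop, v_compliance, i_max=I_MAX):
    """Maximum total series resistance (load + wiring) the loop can tolerate."""
    return (v_loop - v_compliance) / i_max

r_max = max_loop_resistance(V_LOOP, V_COMPLIANCE)
print(f"Maximum loop resistance: {r_max:.0f} ohm")        # 825 ohm

# The example in the text uses 900 ohm of series resistance:
r_loop = 900.0
v_left = V_LOOP - I_MAX * r_loop
print(f"Voltage left for the transmitter at 20 mA: {v_left:.1f} V")  # 6.0 V, below 7.5 V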
While voltage compliance issues can occur in the field due to long wiring distances, poor quality
wires and multiple receivers, they also commonly occur in testing when the wrong value resistor
is placed in the circuit for a load. If the output current of the transmitter stops increasing during
testing, measure the voltage drop across the load. If the load voltage drop is higher than expected,
the load resistance is likely the cause of the output current issue.
ADVANTAGES OF 2-WIRE TRANSMITTERS
The main advantage of a two-wire loop is that it minimizes the number of wires needed to run
both power and signal. The use of a current loop to send the signal also has the advantages of
reduced sensitivity to electrical noise and to loading effects.
The electrical noise is reduced because the two wires are run as a twisted pair, ensuring that each
of the two wires receives the same vector of energy from noise sources, such as electromagnetic
fields due to a changing current in a nearby conductor or electric motor.
Since the receiving electronics connected to the transmitter is designed to ignore common-mode
signals, the resulting common-mode electrical noise is ignored.
The sensitivity to loading effects is reduced because the current in the twisted pair is not affected
by the added resistance of long cable runs.
A long cable or other series resistance will cause a greater voltage drop but does not affect the
current level as long as enough voltage compliance is available in the circuit to supply the signal
current.
The circuit compliance to handle a given voltage drop from additional loop devices depends on
the transmitter output circuit and on the power supply voltage.
The typical power supply for industrial transmitters is +24 VDC. If 6 volts, for example, are
needed to power the transmitter and its output circuit, then 18 volts of compliance remain to
allow for wire resistance, load resistance, voltage drops across intrinsic safety (IS) barriers and
remote displays, etc.
Where the current loop signal is connected to the main receiving equipment (PLC/DCS) or data
acquisition system, a precision load resistor of 250 ohms is normally connected.
This converts the 4 to 20 mA current signal into a 1 to 5 volt signal, since it is standard practice
to configure the analog-to-digital converter of the receiving equipment (PLC/DCS) as a
voltage-sensing input.
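A small illustrative calculation of that receiver-side conversion, assuming the usual 250 ohm precision load resistor:

# Illustrative conversion at the receiving end: a 250 ohm precision resistor
# turns the 4-20 mA loop current into a 1-5 V signal for the voltage-sensing input.
R_LOAD = 250.0  # ohm

def loop_current_to_voltage(current_ma, r_load=R_LOAD):
    return current_ma / 1000.0 * r_load

for i in (4.0, 12.0, 20.0):
    print(f"{i:.0f} mA -> {loop_current_to_voltage(i):.1f} V")  # 1.0, 3.0, 5.0 V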
Disadvantages of two-wire transmitters
Low impedance capability is the major limitation of the two-wire transmitter.
The type and number of devices that can be driven by the system, as well as the transmission
distance, are limited.
Three-wire transmitter
The 3-wire transmitter transmits the data signal and the power with respect to a common ground.
Three-wire transmitters are energized by the supply voltage at the transmitter, and the transmitter
sources the loop current. The receiver common is connected to the transmitter common. With this
transmitter, the current loop can also drive a measuring instrument that has high input impedance.
The three-wire arrangement is not widely used, but it can deliver more power to the module
electronics.
Sensor linearization techniques, redundant measurement systems.
The sensor is an important device in instrumentation, measurement, and control applications. It
can be used to measure various physical, chemical, and physiological parameters. It plays a very
important role in numerous industrial, home, healthcare, defence, environmental, and agricultural
applications. Various types of sensors are used, such as (i) capacitive, (ii) resistive, (iii) inductive,
(iv) impedance, (v) amperometric, (vi) electrochemical, (vii) chemical/biological field effect
transistors (ChemFET/BioFET), and (viii) surface acoustic wave (SAW) sensors. Many sensors
show a non-linear response with the variation of the measurand.
However, there may be some sensors, including some electrochemical sensors, which are linear
but only over a limited range of measurement. A sensor can be linearized to some extent by
processing the sensing materials as well as by suitably designing the geometrical configuration of
the structure, but this is tedious, time-consuming, and difficult to achieve in many cases.
The response of a moisture sensor based on a very thin hydrophilic sensing film between two
parallel-plate electrodes is quite linear. In the study of Silva et al. (2015), several multi-layered
structures of spintronic materials were engineered to fabricate magnetoresistive sensors with a
linear response. Many factors such as materials, geometries, and layout strategies are studied to
improve the linear response as well as the detection limit of the sensors.
With the availability of advanced fast active devices at low cost, it may be easy to linearize the
response by the signal conditioning circuits with relatively small delay.
Generally, the response of a sensor can be a voltage, current, frequency, or time signal. In most of
the cases, the output signal varies nonlinearly with the variation of input measurement parameters.
Also, in many cases, the environmental factors such as temperature, humidity, or pressure affect
the sensor characteristics nonlinearly. Sometimes, these environmental factors modify the
input-output relation of the sensor. These factors are more critical for chemical sensors.
The figure above shows the nonlinear impedance response of a ceramic humidity sensor, together
with the desired linear response (Zlin). The response shown has approximately 29% nonlinearity.
Nonlinearity can also be caused by an inappropriate selection of the electronic circuit.
Most of the humidity sensors fabricated using ceramic, polymer or porous silicon materials have
a nonlinear response. A typical nonlinear response curve of a sensor can be represented by an
nth-order polynomial function, the order of which depends on the nonlinearity value. A typical
third-order response (y) can be represented by y = a0 + a1x + a2x^2 + a3x^3, where x is the
measurand and a0 to a3 are constants.
By linearization, the nonlinear response curve can be converted into a straight-line fit, which
simplifies the calibration process. Calibration may then be performed in a shorter time and at
lower cost. This is far more convenient than referring to a nonlinear calibration curve or computing
from nonlinear calibration equations such as Equations (1) and (2).
The third group of methods involves the linear conversion of temperature into the frequency or
time period of the output signal. The linearization feature of the circuit is realized by the correct
choice of the thermistor parameters and the frequency-selective passive components. The
thermistor response is linearized by identifying the linear regions and varying the thermistor
characteristic parameter (β) using a timer circuit. A one-bit sigma-delta modulator circuit modified
with an NTC thermistor is used to compensate the thermistor non-linearity. A timing resistor is
appropriately chosen to obtain an approximately linear relation between the time period of the
output pulse train and the ambient temperature. For high-precision temperature measurement,
errors due to lead resistance, thermoelectric effect, and amplifier offsets are also studied.
Linearization of thermocouple characteristics
Because of its high linearity, the diode temperature sensor is fabricated in chip form (LM35 by
National Semiconductor, AD509 by Analog Devices, etc.) by several IC manufacturers. Moreover,
the nonlinearity of the sensor characteristic is usually compensated by replacing the diode with a
bipolar transistor whose base-collector junction is shorted. When the base-emitter voltage is used
as the output signal, the exponential characteristic of the base-emitter junction is compensated by
the exponential characteristic of the collector current versus the base-emitter voltage. This results
in nicely linear behaviour. But to maintain the linear response, the current flowing through the
device should be constant and small (less than 100 μA). With the increase in temperature, the
output voltage drops, so the current varies, which in turn causes some nonlinearity. However, the
main drawback of this device is the limited temperature range.
Several works have reported the development of analog signal conditioning circuits for
compensating the nonlinearity of metallic alloy resistive sensors, which are popular as resistance
temperature detectors (RTDs). The resistance versus temperature characteristic for most metallic
materials can be represented by a high-order polynomial function, the order of which depends on
the material, the accuracy, and the temperature range to be measured. For a small temperature
range from -20°C to 150°C, the platinum RTD is linear within ±0.3%. The effects of nonlinearity,
self-heating error, and lead resistance on the RTD temperature measurement also need to be
considered.
The chemoresistive sensors are another important class of resistive sensors, which are used for air
pollutant detection (Korotcenkov and Cho, 2011). The detection limit of chemical species by
sensors such as metal oxide, field effect, and thermoelectric gas sensors is affected by the nonlinear
response of the sensor. The electrical response (Rc) of a metal oxide gas sensor for a reducing gas
with the variation of concentration (Cg) can be represented by an empirical power-law relation of
the form Rc = K·Cg^(-β), where K is the characteristic coefficient of the gas sensing film and β is
the slope of the response curve. For an oxidizing gas, the resistance value increases with increase
in gas concentration.
Additionally, such sensors suffer from cross-sensitivity due to the presence of non-target gases
and humidity in the same environment. Estimation of detection limit through linearized calibration
models for MOX gas sensor to detect carbon monoxide in the presence of humidity is reported.
In the commercial Figaro gas sensor, the problem of humidity is eliminated by applying cyclic
high and low voltage pulses to the heater. At the high voltage pulse, the humidity effect is
eliminated, and at the low pulse, the sensor is heated at the optimum temperature to obtain a
selective response to the target gas. A logarithmic relation between the output and input best fits
the response of MOX sensors, but for a small range the response is quasi-linear. So, a logarithmic
signal conditioning circuit can be used to linearize the response curve. This work is mainly about
the determination of the detection limit using univariate and multivariate linearized models.
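The sketch below illustrates the log-log (power-law) linearization idea for a MOX sensor. The power-law form and the values of K and beta are assumptions chosen only for demonstration; a real sensor must be characterized with measured calibration data.

# Illustrative log-log linearization of a MOX gas-sensor response.
# The power law R = K * C**(-beta) and the numbers are assumed for demonstration.
import math

K, BETA = 50e3, 0.6          # assumed film coefficient and slope

def sensor_resistance(conc_ppm):
    return K * conc_ppm ** (-BETA)

# In log-log coordinates the response becomes a straight line:
#   log(R) = log(K) - beta * log(C)
for c in (10, 50, 100, 500):
    r = sensor_resistance(c)
    print(f"C = {c:4d} ppm  R = {r:9.1f} ohm  log10(R) = {math.log10(r):.3f}")

def concentration_from_resistance(r_ohm):
    """Invert the linearized (log-log) model to estimate concentration."""
    return (K / r_ohm) ** (1.0 / BETA)

print(concentration_from_resistance(sensor_resistance(100)))   # ~100 ppm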
Piecewise linearization is one of the simplest and most basic techniques of linearization, where the
nonlinear response curve is divided into small linear segments. Each linear segment is then
implemented by an analog signal conditioning circuit. When the voltage signal (Vs) corresponding
to a particular %RH is less than 3%, the output is obtained from segment 1; otherwise, the output
is obtained from segment 2. A piecewise linearization circuit having two slopes with VB as the
breakpoint, implemented using a p-n junction diode, is shown in Figure 4B. The first slope is
formed by the resistances Rs and R1, and the second slope is formed by Rs and the parallel
combination of R1 and R2. For better accuracy, the nonlinear response can be divided into a larger
number of linear pieces, but there should be a trade-off between accuracy and circuit complexity,
as both factors increase with the number of segments.
The nonlinearity of a voltage-controlled resistor used in an adaptive filter for dynamic
compensation of a load cell has been piecewise linearized; important features of that circuit are
fast speed and low power dissipation, but the circuit is relatively complex. Very recently, a simple
piecewise linear circuit has been proposed, consisting of a 2-bit flash ADC, a 4x1 multiplexer
(MUX), and four analog circuits. The ADC and the MUX are used to select one of the linear pieces
implemented by op-amp analog circuits. With the help of the combined interfacing and
linearization circuits, the nonlinearity of the capacitive humidity sensor is reduced to less than 1%.
Hardware implementation of the circuit is simple and can be realized in chip form. The proposed
scheme is shown in Figure 5. The nonlinear signal is divided into four approximately linear pieces.
This technique can be used to linearize any type of nonlinear response curve, such as parabolic,
sigmoidal, hyperbolic, etc.
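A software analogue of the piecewise idea is sketched below: the calibration curve is stored as a small set of breakpoints and each segment is replaced by a straight line between its end points. The breakpoint values are illustrative only.

# Minimal software analogue of piecewise linearization: segment selection followed
# by a straight-line mapping within the selected segment. Values are illustrative.
BREAKPOINTS = [          # (raw sensor output, true measurand) calibration pairs
    (0.00, 0.0),
    (0.80, 25.0),
    (1.40, 50.0),
    (1.80, 75.0),
    (2.00, 100.0),
]

def linearize(v):
    """Map a raw sensor output v onto the measurand via segment selection."""
    if v <= BREAKPOINTS[0][0]:
        return BREAKPOINTS[0][1]
    for (v0, y0), (v1, y1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
        if v <= v1:                      # select the segment containing v
            return y0 + (y1 - y0) * (v - v0) / (v1 - v0)
    return BREAKPOINTS[-1][1]

print(linearize(1.1))   # interpolated within the 0.80-1.40 segment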
Due to the advancement of IC chip technology, digital methods are nowadays most commonly
used when high performance and high accuracy are demanded. Another advantage of this method
is programmability, which helps to process signals from different sensors. In the case of smart
sensors, the analog output signal is most often converted into binary data, and the digital data are
then manipulated to obtain a linear relation. There are two common approaches: (i) deriving a
linear equation, and (ii) a look-up table.
If the relation between the sensing parameter and the digital data is nonlinear, an equation can be
developed to obtain the linearized value of the parameter. For example, the voltage output of a gas
sensor can be related to the gas concentration in ppm by such an equation.
In the interpolation method, the value of an intermediate point between two known given points is
determined using a straight-line approximation. This method offers fast execution speed and a low
memory requirement, but accuracy depends on the number of segments. A piecewise nonlinear
ADC scheme using PWM has been proposed. To improve the accuracy of temperature
measurement (±0.08°C), the ninth-order polynomial fitted curve of a thermocouple has been
implemented using a microcontroller-based signal conditioning unit; the circuit is complex to
implement and the accuracy depends on a fast and advanced ADC. A few works describing
auto-calibrated smart temperature sensors with nonlinearity compensation have been reported. In
such smart sensors, the nonlinearity is compensated by piecewise linearization or a parallel
compensating oscillator circuit, but the range is limited. Applications of embedded
microcontrollers for interfacing and signal conditioning of the sensor's output have also been
discussed.
Recently, conventional dual-slope analog-to-digital converters with the necessary signal
conditioning circuits have been employed to linearize the response of thermistors, Hall effect
sensors, and single or double resistive element Wheatstone bridges. In such schemes, the sensors
are integral parts of the dual-slope ADC, which directly converts the sensing parameters into
digital form with linearized output. Note that if the nonlinear sensor is digitized before its
linearization, the ADC will require higher bit resolution than that required for a linearized version
of the sensor. On the other hand, a look-up table will fit the linearization requirements only when
the memory size required to store the table is moderate, but the size of the look-up table
depends on the level of the sensor nonlinearity.
A digital operation can require high computing resources, so that some of the proposed solutions
can be more expensive than the sensor itself. The execution time of a linearization scheme may
also be an important parameter for certain applications. This is an important parameter when the
sensor is part of a feedback/manual control system, where control action depends on the measured
value. Even for monitoring purpose, the response time is important. So, the response of the sensor
including a necessary signal conditioning circuit should be fast in many applications. It is true that
the response time of many sensors is much larger than the linearization time. For example, most
of the gas sensors, which work on the adsorption and desorption principle, have long response and
recovery times, and such sensors also have high nonlinearity. The response time may be several
tens of seconds to a minute. To reduce the overall response time of the sensor and the signal
conditioning circuit, efforts should be made to design the linearization circuit, which provides low
response time. The linearization time can be minimized by judicious selection of electronics
devices and reducing hardware components as far as possible.
The software algorithms implemented by the digital system efficiently perform the linearization
job with greater efficiency, utility, and flexibility than other methods discussed above. Various
software algorithms like spline or polynomial curve fitting techniques, and intelligent soft-
computing techniques such as artificial neural networks (ANNs), fuzzy logic, neuro-fuzzy logic,
support vector machine, etc., are extensively employed for the purpose of sensor linearization
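As a hedged example of the curve-fitting approach listed above, the following sketch fits a third-order inverse model to synthetic calibration data using NumPy; all data values are made up for illustration only.

# Software linearization by polynomial curve fitting (illustrative calibration data).
import numpy as np

# (raw sensor reading, reference measurand) pairs from a calibration run
raw = np.array([0.10, 0.55, 1.20, 2.10, 3.40, 5.00])
ref = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])

# Fit an inverse model measurand = f(raw) with a third-order polynomial
coeffs = np.polyfit(raw, ref, deg=3)
linearize = np.poly1d(coeffs)

print(linearize(1.8))                          # estimated measurand for a new raw reading
print(np.max(np.abs(linearize(raw) - ref)))    # residual fit error at the calibration points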
Redundant Measurement Systems
Reliability in process control computing can be defined as the correct operation of a system up to
a time t = T, given that it was operating correctly at the starting time t = 0. However, correct
operation can have many meanings, depending on the requirements previously established for the
system. A common attitude today is that single or multiple failures can be accepted as long as the
system does not go down and the desired operation is not interrupted or disturbed.
Reliability is therefore a goal to be expected of a system and is set by the users. To obtain a certain
measure of reliability, the term fault tolerant computing can be used. It may be defined as the
ability to execute specified algorithms correctly regardless of hardware errors and program errors.
Since different computers in different applications have widely different requirements for
reliability, availability, recovery time, data protection, and maintainability, an opportunity
exists for the use of many different fault-tolerant techniques. The understanding of fault
tolerance can be helped by first understanding faults. A fault can be defined as the deviation of
one or more logic variables in the computer hardware from their design-specified values.
A logic value for a digital computer is either a zero or a one. A fault is the appearance of an
incorrect value such as a logic gate stuck on zero or stuck on one. The fault causes an error if it, in
turn, produces an incorrect operation of the previously correctly functioning logic elements.
Therefore, the term fault is restricted to the actual hardware that fails. Faults can be classified in
several ways. Their most important characteristic is a function of their duration.
They can be either permanent (solid or hard) or transient (intermittent or soft). Permanent faults
are caused by solid failures of components. They are easier to diagnose but usually require the
use of more drastic correction techniques than do transient faults.
Transient faults cause 80 to 90% of faults in most systems. Transient, or intermittent, faults can be
defined as random failures that prevent the proper operation of a unit for only a short period of
time, not long enough to be tested and diagnosed as a permanent failure. Often, transient faults
become permanent with further deterioration of the equipment. Then, permanent fault-tolerant
techniques must be used for system recovery.
The goal of system reliability or of fault-tolerant computing therefore is to either prevent or
be able to recover from faults and continue correct system operation. This also includes
immunity to software faults induced into the system. To achieve a high reliability, it is essential
that component reliability be as high as possible.
As the complexity of computer systems increases, almost any level of guaranteed reliability of
individual elements becomes insufficient to provide a satisfactory probability of successful task
completion.
Therefore, successful fault-tolerant computers must use a judicious selection of protective
redundancy to help meet the reliability requirements. The three redundancy techniques are as
follows:
1. Hardware redundancy
2. Software redundancy
3. Time redundancy
These three techniques cover all methods of fault tolerance.
Hardware redundancy can be defined as any circuitry in the system that is not necessary for
normal computer operation should no faults occur. Software redundancy, similarly, is additional
program instructions present solely to handle faults. Any retrial of instructions is known as time
redundancy.
Hardware Redundancy
Hardware redundancy can be described as the set of all hardware components that need to be
introduced into the system to provide fault tolerance with respect to operational faults.
These components would be superfluous should no faults occur, and their removal would not
diminish the computing power of the system in the absence of faults.
In achieving hardware fault tolerance, it is clear that one should use the most reliable
components available. However, increasing component reliability has only a small impact on
increasing system reliability. Therefore, it is more important to be able to recover from failures
than to prevent them.
Redundant techniques allow recovery and are thus very important in achieving fault-tolerant
systems. The techniques used in achieving hardware redundancy can be divided into two
categories:
static (or masking) redundancy and dynamic redundancy.
Static techniques are effective in handling both transient and permanent failures. Masking is a
virtually instantaneous and automatic technique. It can be defined as any computer error correction
method that is transparent to the user and often to the software. Redundant components serve to
mask the effect of hardware failures of other components. Many different techniques of static
redundancy can be applied. The simplest, or lowest, level of complexity is a massive replication
of the individual components of the system.
For example, four diodes connected as two parallel pairs that are themselves connected in series
will not fail if any one diode fails open or short. Logic gates in similar quadded arrangements
can also guard against single faults, and even some multiple faults, for largely replicated
systems.
More sophisticated systems use replication at higher levels of complexity to mask failures. Instead
of using a mere massive replication of components configured in fault-tolerant arrangements,
identical nonredundant computer sections or modules can be replicated and their outputs voted
upon. Examples are triple modular redundancy (TMR) and the more general N-modular
redundancy (NMR), where N can stand for any odd number of modules.
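A minimal sketch of the TMR voting idea on digital outputs (the bitwise majority of three equal-width words) is shown below; the injected fault is only an illustration.

# Triple modular redundancy (TMR) voting on digital outputs: three identical
# modules compute the same word and the bitwise majority value is used.

def tmr_vote(a, b, c):
    """Bitwise majority of three equal-width integer outputs."""
    return (a & b) | (b & c) | (a & c)

# Example: module b suffers a single-bit fault; the vote masks it.
good = 0b1011_0110
print(bin(tmr_vote(good, good ^ 0b0000_0100, good)))  # -> 0b10110110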
With the use of some codes, data that has been garbled (i.e., bits changed due to hardware errors)
can sometimes be recovered instantaneously with the use of redundant hardware.
Dynamic recovery methods are, however, better able to handle many of these faults. Higher levels
of fault tolerance can be achieved more easily through dynamic redundancy and implemented
through the dual actions of fault detection and recovery. This often requires software help in
conjunction with hardware redundancy.
Many of these methods are extensions of static techniques. Massive redundancy in components
can often be better utilized when controlled dynamically. Redundant modules, or spares, can have
a better fault tolerance when they are left unpowered until needed, since they will not degrade
while awaiting use. This technique, standby redundancy, often uses dynamic voting techniques to
achieve a high degree of fault tolerance.
This union of the two methods is referred to as hybrid redundancy. Additional hardware is needed
for the detection and switching out of faulty modules and the switching in of good spares within
the system by this technique. Error detecting and error correcting codes can be used to dynamically
achieve fault tolerance in a computing system. Coding refers to the addition of extra bits to and
the rearranging of the bits of a binary word that contains information. The strategy of coding is to
add a minimum number of check bits, the additional bits, to the message in such a way that a given
degree of error detection or correction is achieved. Error detection and correction is accomplished
by comparing the new word, hopefully unchanged after transmission, storage, or processing, with
a set of allowable configurations of bits. Discrepancies discovered in this manner signal the
existence of a fault, which can sometimes be corrected if enough of the original information remains
intact. Encoding and decoding words with the use of redundant hardware can be very effective in
detecting errors. Through hardware or software algorithms, incorrect data can also often be
reconstructed. Otherwise, the detected errors can be handled by module replacement and software
recovery actions. The actions taken depend on the extent of the fault and of the recovery
mechanisms available to the computing system.
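The simplest illustration of the coding idea, a single even-parity check bit, is sketched below. One parity bit only detects a single-bit error; correcting codes (for example Hamming codes) add more check bits, as the paragraph above describes.

# Single even-parity check bit: detects (but cannot locate or correct) one flipped bit.

def add_parity(word):
    parity = bin(word).count("1") % 2
    return (word << 1) | parity              # append the check bit

def check_parity(coded):
    return bin(coded).count("1") % 2 == 0    # total number of ones must be even

coded = add_parity(0b1011_0010)
print(check_parity(coded))                   # True: no error
print(check_parity(coded ^ 0b100))           # False: a flipped bit is detected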
Software Redundancy refers to all additional software installed in a system that would not be
needed for a fault-free computer. Software redundancy plays a major role in most fault tolerant
computers. Even computers that recover from failures mainly by hardware means use software to
control their recovery and decision-making processes. The level of software used depends on the
recovery system design. The recovery design depends on the type of error or malfunction that is
expected. Different schemes have been found to be more appropriate for the handling of different
errors. Some can be accomplished most efficiently solely by hardware means. Others need only
software, but most use a mixture of the two.
For a functional system, i.e., one without hardware design faults, errors can be classified into two
varieties: (1) software design errors and (2) hardware malfunctions.
The first category can be corrected mainly by means of software. Software methods, though, are
often also used to correct hardware faults, especially transient ones. The reduction and correction
of software design errors can be accomplished through the techniques outlined below.
Computers may be designed to detect several software errors. Examples include the use of illegal
instructions (i.e., instructions that do not exist), the use of privileged instructions when the system
has not been authorized to process them, and address violations. This latter refers to reading or
writing into locations beyond usable memory. These limits can often be set physically on the
hardware.
Computers capable of detecting these errors allow the programmer to handle the errors by causing
interrupts. The interrupts route the program to specific locations in memory. The programmer,
knowing these locations, can then add his own code to branch to his specific subroutines, which
can handle each error in a specified manner.
Software recovery from software errors can be accomplished via several methods. As mentioned
before, parallel programming, in which alternative methods are used to determine a correct
solution, can be used when an incorrect solution can be identified.
Some less sophisticated systems print out diagnostics so that the user can correct the program off-
line from the machine. This should only be a last resort for a fault-tolerant machine. Nevertheless,
a computer should always keep a log of all errors incurred, memory size permitting.
Preventive measures used with software methods refer mainly to the use of redundant storage.
Hardware failures often result in a garbling or a loss of data or instructions that are read from
memory. If hardware techniques such as coding cannot recover the correct bit pattern, those words
will become permanently lost. Therefore, it is important to at least duplicate all necessary program
and data storage so that it can be retrieved if one copy is destroyed.
In addition, special measures should be taken so that critical programs such as error recovery
programs are placed in non-volatile storage, i.e., read-only memory. Critical data as well should
be placed in non-destructive readout memories.
An example of such a memory is a plated-wire memory. The second task of the software in
fault tolerance is to detect and diagnose errors. Software error-detection techniques for software
errors can often be used to detect transient hardware faults. This is important, since a relatively
large number of malfunctions are intermittent in nature rather than solid failures. Time-redundant
processes, i.e., repeated trials, can then be used for their recovery.
Software detection techniques do not localize the sources of the errors. Therefore, diagnostic test
programs are frequently implemented to locate the module or modules responsible. These
programs often test the extent of the faults at the time of failure, or perform periodic tests to
determine malfunctions before they manifest themselves as errors during program execution.
Almost every computer system uses some form of diagnostic routines to locate faults.
In a fault tolerant system, the system itself initiates these tests and interprets their results, as
opposed to the outside insertion of test programs by operators in other systems.
Fault-Tolerant Computer System Design
The design of a fault-tolerant industrial computer system should be different from that of a similar
system for a spaceborne computer system. Maintenance is available in an industrial environment
to replace any modules that may have failed. In addition, the system may be much larger, and a
hierarchy of many computers of different sizes may be necessary to handle the various operations.
Therefore, a fault tolerant communication network may be required as well.
Valid future designs must incorporate provisions for these advances and allow for larger
replacement modules for quicker and simpler fault location and maintenance. The ways in which
faults manifest themselves have not changed. They may be summarized as the following:
1. Intra-module data errors
2. Inter-module data transfer errors
3. Address errors
4. Control signal errors
5. Power failure
6. Timing failure
7. Reconfiguration faults
The two main designs considered here are that of a duplex system with two identical computers
operating in parallel and that of a triplex system (see Figure 1.10a). The triplex system has three
computers operating synchronously. In addition to the error detecting and correcting capabilities
already built into the computers, fault-tolerant features will be present in software for both
systems. The duplex system will feature a comparison of data for fault detection, with rollback and
recovery to handle transient errors. The triplex system will incorporate a software voting scheme
with memory reload to recover from transient failures; this removes the overhead of rollback.
Each duplicated system of computers will communicate internally via a parallel data bus that will
allow high-speed communication, plus a parallel control bus that will initiate interrupts to handle
any faults within the system. All computer elements will communicate with higher-level systems
via a full-duplex synchronous serial bit bus, a bus that will permit simultaneous message transfer
in both directions, through the protocol microprocessor. With these components, a fully reliable
system should be realized.
FIELD INSTRUMENT REDUNDANCY AND VOTING
The above concepts apply not only to process computers but also to basic process control systems
(BPCSs) and safety instrumented systems (SISs), where they also improve performance,
availability, and reliability. In the case of field instruments and final control elements, they mainly
guarantee continuity of operation and increase uptime, whereas, in SIS systems, they minimize
nuisance or spurious interventions and alarms. The techniques used in BPCS and SIS systems are
similar and were initially developed for the inherently more demanding SIS applications. For SIS
systems, the need for international regulations has been recognized (ANSI/ISA 84.01-1996,
IEC 61508-1998/2000, and IEC 61511, in draft version), while, for non-safety-related control
loops, this is left to good engineering practice. Therefore, the discussion of redundancy and voting
techniques, as applied to the field instruments of BPCS systems, will be based on the SIS standards
as guidelines. The BPCS goal is to improve control loop availability such that the trigger point for
the intervention of the associated SIS system is unlikely ever to be reached. Thereby, redundancy
in BPCS also improves safety. This is because increased availability reduces the number of
shutdowns, which tend to shorten the life of the plant due to the resulting thermal and other
stresses. One of the main objectives of measurement and control specialists is to improve the
availability and accuracy of measurements. To achieve that goal and to minimize systematic
uncertainty while increasing reliability, correct specification, instrument selection, and installation
are essential.
Assuming that the transmitters have been properly specified, selected, and installed, one can
further improve total performance by measuring the same variable with more than one sensor.
Depending on the importance of the measurement, redundancy can involve two or more
detectors measuring the same process variable. When three or more sensors are used, one can
select the majority view by voting. With this approach, one would select m measurements out of
the total n number of signals so that m > n/2. In industrial practice, n is normally 3 so that m is 2.
The redundant and voting techniques have been standardized in various SIS-related documents,
including ANSI/ISA 84.01, IEC 61508, and IEC 61511. The SIS systems usually evaluate on-off
signals or threshold limits of analog signals whereas, in process control, redundancy and voting are
obtained by the evaluation of multiple analog signals. The main difference between BPCS and SIS
systems is that SIS is a dormant system, but continuously self-checking, and it is called upon to
operate only in an emergency.
In addition, the SIS is fail safe; i.e., if it fails, it brings the plant to a safe status. SIS malfunctioning
is inferred from diagnostic programs and not from plant conditions, because the plant cannot be
shut down or brought to unsafe conditions just to test the SIS system. All international regulations
follow this approach. In contrast to SIS systems, the BPCS control loops are always active and, if
they malfunction, they actuate alarms, which the operator immediately notices. The consequence
is that the SIS-based definitions developed in IEC 61508, to some extent, can also be used as
guidelines for control loops that require high uptime and whose unavailability would, within a
short time, drive the plant to conditions requiring plant shutdown. IEC 61508 Part 6 gives the
definition of the various architectures most commonly used in the safety instrumented systems.
They apply for use with one, two, or three elements and their various combinations. The elements
that are used in a single or multiple configuration can be either transmitters or final control
elements, but they are mainly for transmitters, and only very rarely for control valves, because of
the substantial difference in costs. The control system, such as a DCS system, is usually configured
with multiple controllers and redundant other system components (e.g., system bus, I/O bus, HMI).
IEC 61508 considers and gives definitions to the configurations described below.
*** IEC 61508 is an international standard published by the International Electrotechnical
Commission consisting of methods on how to apply, design, deploy and maintain automatic
protection systems called safety-related systems. It is titled Functional Safety of
Electrical/Electronic/Programmable Electronic Safety-related Systems (E/E/PE, or E/E/PES).
1oo1 Single-Transmitter Configuration (Figure 1.10b)
A single transmitter is used, as in many control loops. These loops consist of an analog transmitter
and an analog controller (pneumatic or electronic). This configuration is the most prone to overall
malfunctioning: errors and failures can go undetected unless diagnostic coverage, discussed below,
is provided.
Diagnostic Coverage
The diagnostic coverage in the BPCS is much less than in the SIS, for the reasons outlined
previously, and is provided mainly in and by the DCS, which has the capability of comparing the
signals received from the transmitters and determining whether they are within the imposed limits
so as to consider them to be concurrent. If an inconsistency is detected, the DCS is capable of
signaling the abnormal situation and maintaining control, at least in some instances, without
operator intervention.
1oo1D
The diagnostic coverage can be partly integral to the transmitter and/or external in the control
system (rate-of-change alarms and overrange alarms detecting the individual fault).
In a broader sense, in addition, the material balance (data reconciliation) performed in the DCS
can contribute to detecting a failure in the flow transmitters or their unreliable reading.
1oo2D
The signal from each transmitter is checked to verify that it is within the validity limits (i.e.,
4-20 mA). If a transmitter is outside the validity range, its signal is discarded, the controller
receives the value from the other transmitter, and an alarm is issued to warn the operator about the
malfunctioning. If both transmitters are within validity limits, the difference between their signals
is calculated. If the difference is within a preset value (in the range of a few percent), the average
value is assumed to be good and used for the control function (Figure 1.10e). The acceptable
discrepancy between the two transmitters depends on the measurement conditions; for instance,
the acceptable discrepancy in the level measurement of a steam drum is larger than in the case of
a pressure measurement. As an indication, for two level transmitters installed at different ends of
a steam drum, 5% discrepancy is acceptable, whereas for pressure measurement 2% should not be
exceeded. Normally, in the process industry, it is not necessary to select a very small discrepancy
(such as twice the declared accuracy) between the transmitted values, because the difference could
be the result of many causes other than a transmitter failure or the need for recalibration (the main
reason could be the installation). Sometimes a common percentage discrepancy value is chosen
and used for all measurements, because experience has shown that it is unlikely that a transmitter
fails to a value close to the correct one. When the discrepancy is beyond the preset value, but both
signals are within validity limits, it is not possible to determine which one is invalid. In this case,
an alarm is produced, and the controller is automatically forced to manual, with its output frozen
at the last valid value. The operator then has the responsibility to discard one of the two
transmitters, use the other as the input to the controller, and switch to auto again.
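A simplified software sketch of that 1oo2D selection logic is given below. The validity range, the discrepancy threshold, and the return convention are illustrative assumptions; a real DCS function block would be configured per measurement (for example, a wider tolerance for drum level than for pressure).

# Simplified 1oo2D selection logic (illustrative limits and threshold).
VALID_RANGE = (4.0, 20.0)      # mA
MAX_DISCREPANCY = 0.05         # 5 % of span, assumed

def one_oo_two_d(x_ma, y_ma, last_good):
    """Return (process value, alarm flag, force-manual flag)."""
    valid = [v for v in (x_ma, y_ma) if VALID_RANGE[0] <= v <= VALID_RANGE[1]]
    if len(valid) == 1:                         # one signal out of range: use the other
        return valid[0], True, False
    if len(valid) == 0:                         # both invalid: freeze output
        return last_good, True, True
    if abs(x_ma - y_ma) / 16.0 <= MAX_DISCREPANCY:
        return (x_ma + y_ma) / 2.0, False, False   # agreement: use the average
    return last_good, True, True                # discrepancy: alarm, force manual

print(one_oo_two_d(12.0, 12.3, last_good=12.1))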
2oo3
The signal from each transmitter is checked to verify whether it is within the validity limits (i.e.,
4 to 20 mA). If a transmitter is outside the validity range, its signal is discarded as invalid, and the
remaining two are used as if they were in 1oo2D configuration. If no invalid signal is detected,
then the discrepancy between the values is calculated. Supposing the three signals are X, Y, and
Z, the differences X − Y, Y − Z, and Z − X are calculated. If each of them is within the preset
limits, the median value is taken as good and used as process variable by the controller. If one
difference exceeds the preset limit, an alarm is issued to the operator, and the median value is used
as process variable for the controller. If two differences exceed the preset limit, the value of the
transmitter involved in both the excessive differences is discarded, an alarm is issued to the
operator, and the average value of the remaining two is used as process value. If all three
differences exceed the preset limit, this means that at least two transmitters are not reliable. In this
case, the controller is automatically forced to manual, with output equal to last valid value (Figure
1.10f). The operator has the responsibility to select one of the three transmitters as the good one,
use it as input to the controller, and switch to auto again. There are some possible variations in the
algorithms used for the selection of the valid signals and the discarding of the unreliable ones, and
they depend on the available control blocks of the involved DCS.
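The 2oo3 median-selection algorithm described above can be sketched as follows. The preset limit and the simplified fallback when one signal is invalid are assumptions for illustration; as the text notes, the exact algorithm depends on the control blocks available in the DCS.

# Simplified 2oo3 median selection with discrepancy checking (illustrative values).
VALID_RANGE = (4.0, 20.0)   # mA
LIMIT = 0.4                 # mA, assumed preset discrepancy limit

def two_oo_three(x, y, z, last_good):
    """Return (process value, alarm flag, force-manual flag)."""
    sigs = {"X": x, "Y": y, "Z": z}
    valid = {k: v for k, v in sigs.items() if VALID_RANGE[0] <= v <= VALID_RANGE[1]}
    if len(valid) < 2:
        return last_good, True, True          # not enough healthy signals: freeze, force manual
    if len(valid) == 2:                       # one invalid: treat the rest like 1oo2D (simplified)
        a, b = valid.values()
        return (a + b) / 2.0, True, False
    diffs = {("X", "Y"): abs(x - y), ("Y", "Z"): abs(y - z), ("Z", "X"): abs(z - x)}
    exceeded = [pair for pair, d in diffs.items() if d > LIMIT]
    median = sorted(valid.values())[1]
    if len(exceeded) == 0:
        return median, False, False           # all agree: use the median
    if len(exceeded) == 1:
        return median, True, False            # alarm, but still use the median
    if len(exceeded) == 2:
        bad = (set(exceeded[0]) & set(exceeded[1])).pop()   # transmitter in both bad pairs
        rest = [v for k, v in valid.items() if k != bad]
        return sum(rest) / 2.0, True, False   # discard it, average the other two
    return last_good, True, True              # all three disagree: freeze, force manual

print(two_oo_three(12.0, 12.1, 14.0, last_good=12.0))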
Compressed air is used for a diverse range of commercial and industrial applications. As it is widely
employed throughout industry, it is sometimes considered to be the fourth utility at many facilities.
In many facilities, compressed air systems are the least energy efficient of all equipment. There is
a tremendous potential to implement compressed air energy efficiency practices. It has been
common practice in the past to make decisions about compressed air equipment and the end uses
based on a first cost notion. Ongoing energy, productivity and maintenance costs need to be
considered for optimal systems. In other words, best practice calls for decisions to be based on the
life cycle cost of the compressed air system and components. Improving and maintaining peak
compressed air system optimization requires addressing both the supply and demand sides of the
system and understanding how the two interact. Properly managing a compressed air system can
not only save electricity, but also decrease downtime, increase productivity, reduce maintenance,
and improve product quality. Optimal performance can be ensured by properly specifying and
sizing equipment, operating the system at the lowest possible pressure, shutting down unnecessary
equipment, and managing compressor controls and air storage. In addition, the repair of chronic
air leaks will further reduce costs. For a typical compressed air end use, like an air motor or
diaphragm pump, it takes about 10 units of electrical energy input to the compressor to produce
about one unit of actual mechanical output to the work. For this reason, other methods of power
output, such as direct drive electrical motors, should be considered first before using compressed
air powered equipment. If compressed air is used for an application, the amount of air used should
be the minimum quantity and pressure necessary, and should only be used for the shortest possible
duration. Compressed air use should also be constantly monitored and re-evaluated.
Compressors are work-absorbing devices that are used for increasing the pressure of the fluid
(Air, Oil, Refrigerant) at the expense of work done on fluid. The compressors used for
compressing air are called air compressors. Compressors are invariably used for all applications
requiring high-pressure air. Some of the popular applications of compressors are, for driving
pneumatic tools and air-operated equipment, spray painting, compressed air engines,
supercharging in internal combustion engines, material handling (for transfer of material), surface
cleaning, refrigeration and air conditioning, chemical industry, etc. Compressors are supplied with
low-pressure air (or any fluid) at inlet which comes out as high-pressure air (or any fluid) at the
outlet. Work required for increasing pressure of air is available from the prime mover driving the
compressor. Generally, electric motor, internal combustion engine or steam engine, turbine, etc.
are used as prime movers.
There are two basic types of air compressors: positive displacement and dynamic.
Compressor Type
There are two ways to increase the pressure of a gas. One is to reduce the volume of the gas. The
other is to increase the velocity of the gas. Positive displacement compressors reduce the gas
volume. There are several different types of positive displacement compressors. They include:
Reciprocating
Rotary or helical screw, or rotary lobe
Sliding vane
Liquid piston
Diaphragm
Of these, reciprocating compressors and rotary screw or helical screw compressors are most often
used in gas plant and refinery compressed air systems. Centrifugal compressors and axial
compressors increase pressure primarily by increasing the gas velocity. centrifugal compressors
are more often used in compressed air systems.
Rotary Screw Compressors
Rotary screw compressors have gained popularity and market share (compared to reciprocating
compressors) since the 1980s. These units are most commonly used in sizes ranging from about 5
to 900 HP. The most common type of rotary compressor is the helical twin-screw compressor.
Two mated rotors mesh together, trapping air, and reducing the volume of the air along the rotors.
Depending on the air purity requirements, rotary screw compressors are available as lubricated or
dry (oil free) types.
Reciprocating Compressor :
A reciprocating compressor is a positive-displacement machine that uses a piston to compress a
gas and deliver it at high pressure. Various compressors are found in almost every industrial
facility.
Reciprocating compressors have been the most widely used for industrial plant air systems. The
two major types are single acting and double acting, both of which are available as one or two-
stage compressors. A single-acting cylinder performs compression on one side of the piston
during one direction of the power stroke. Two-stage compressors reach the final output pressure
in two separate compression cycles, or stages, in series.
The double-acting compressor is configured to provide a compression stroke as the piston moves
in either direction. This is accomplished by mounting a crosshead on the crank arm which is then
connected to a double-acting piston by a piston rod. Distance pieces connect the cylinder to the
crankcase. They are sealed to prevent the mixing of crankshaft lubricant with the air, but vented
so as to prevent pressure build-up.
Typical applications of reciprocating compressors include:
* Air for compressed tool and instrument air systems
* Hydrogen, oxygen, etc. for chemical processing
* Light hydrocarbon fractions in refining
* Various gases for storage or transmission
* Other applications
SINGLE-CYLINDER RECIPROCATING COMPRESSOR
Piston compressors are available as single- or double-acting, oil-lubricated or oil-free, with different
numbers of cylinders in different configurations. With the exception of really small compressors
with vertical cylinders, the V configuration is the most common for small compressors. On large,
double-acting compressors, the L type, with a vertical low-pressure cylinder and a horizontal
high-pressure cylinder, offers immense benefits and is therefore the most common design.
The construction and working of a piston-type reciprocating compressor is very similar to that of
an internal combustion engine.
Parts of Reciprocating Compressors
A piston-type compressor consists of a cylinder, cylinder head, piston with piston rings,
spring-loaded inlet and outlet valves, connecting rod, crank, crankshaft and bearings.
The figure shows the various parts of a three-stage (V-type) reciprocating air compressor with the
receiver (air tank). The pressure switch is connected to the electric motor; when the desired pressure
in the air tank is reached, it stops the motor and hence the compressor. The safety valve opens when
the pressure in the air tank exceeds the set safe pressure.
Advantages of multi-staging:
1. Good volumetric efficiency, as compression is done in more than one stage and hence the
compression ratio per stage is controlled.
2. Lower discharge temperature, which eases the selection of materials of construction for the
cylinder and its components and results in a smaller size for subsequent stages.
3. Reduced work of compression: with intercooling, compression is closer to isothermal (which
requires the minimum work of compression). This results in a saving of power and smaller sizes
of subsequent stages (see the worked relation after this list).
4. Limits the pressure differential per stage, which reduces excess strains in the frame.
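As a brief illustration of advantage 3 above, the following is a standard textbook relation, not taken from these notes; the symbols P_1 (suction pressure), P_2 (delivery pressure), P_i (intermediate pressure), V_1 (induced volume) and m (polytropic index) are introduced here only for the illustration. For a two-stage compressor with perfect intercooling back to the suction temperature, the total work of compression is

W = \frac{m}{m-1} P_1 V_1 \left[ \left(\frac{P_i}{P_1}\right)^{\frac{m-1}{m}} + \left(\frac{P_2}{P_i}\right)^{\frac{m-1}{m}} - 2 \right]

Setting dW/dP_i = 0 gives the optimum intermediate pressure P_i = \sqrt{P_1 P_2}, i.e. an equal pressure ratio in each stage; more generally, for N stages the optimum per-stage pressure ratio is (P_2/P_1)^{1/N}. This equal sharing of the overall pressure ratio is what keeps both the discharge temperature and the per-stage work low.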
The drain valve drains the condensate produced at the condenser and the receiver.
Cylinders and intercoolers are either air-cooled (with fins) or water-cooled (with water
jackets in the cylinder). The air-cooled compressor is used for low-pressure applications
and water-cooled compressors are used for high pressure applications.
Range: Used for pressures of up to 4-30 bar and low delivery volumes (< 10,000 m3/h). For
pressures exceeding 30 bar, multi-stage compressors are required; multi-stage compressors are
available for pressures up to 250-350 bar.
Advantages of reciprocating compressors:
1. Piston-type compressors are available in a wide range of capacities and pressures.
2. Very high air pressures (up to about 250 bar) and high air volume flow rates are possible with multi-staging.
3. Better mechanical balancing is possible in a multistage compressor by proper cylinder arrangement.
4. High overall efficiency compared with other compressor types.
Disadvantages of reciprocating compressors:
* Some lubricant will enter the air; therefore, some air/lubricant separation is necessary.
* Limited application for high pressure-ratio demands.
* Multi-staging is difficult and costly.
* Limited discharge pressure (up to 200 psig (1,378 kPag) for high-pressure models).
* Not flexible in capacity control.
Rotary Vane Compressors vs. Screw Compressors
Rotary vane compressors appeared much earlier than their screw counterparts. They are
simpler in design and have almost double the life expectancy of screw compressors. Nonetheless,
screw compressors dominate the compressed air market today.
LIMIT SWITCHES
Countless limit switches are found in manufacturing. They are used as control devices and as safety
devices for machinery and personnel. In all cases, the limit switch sends a digital signal to
the control system. Based on the hardware and software tied to these switches, the system is able
to take appropriate action.
Why is the limit switch important?
Limit switches are an inexpensive way to create a link between the physical and electrical
domains. They were developed decades ago, and the mass adoption of their use has
significantly lowered their cost for the end user. They thus play an important role in
manufacturing due to their simplicity and low cost.
Use Cases of Limit Switches
Product Detection & Count - As a product pushes against a limit switch, a signal is sent to the
control system. Through simple PLC ladder logic, the user can count the number of times a
product goes by the limit switch and display the counter for the operator (see the sketch after this list).
Personnel Safety - A limit switch can be used to detect the opening of a safety guard that stops and
de-energizes the machine. If the guard is opened during operation, the machine stops. If the guard
is opened while the machine is stopped, the limit switch prevents the machine from starting. In
both cases, the limit switch is used to safeguard the operator from potential harm.
Machine Safety - A limit switch can be used to protect machinery from unintentional damage. This
includes components that are part of changeovers (end-of-arm tools), components that may wear
out over time (motor clutch), and components that may damage others if they fail (gears, shafts,
etc.).
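To make the product-counting use case concrete, here is a minimal Python sketch (not from the original notes) of the one-shot, rising-edge counting that the PLC ladder logic performs; read_limit_switch and the scan loop are hypothetical stand-ins for the real PLC input and scan cycle.

    import time

    def count_products(read_limit_switch, scan_time_s=0.01):
        # Count rising edges on a limit-switch input, PLC one-shot style.
        count = 0
        previous_state = False
        while True:
            current_state = read_limit_switch()  # True while the switch is pressed
            # Count only the press transition, not every scan while the switch is held.
            if current_state and not previous_state:
                count += 1
                print(f"Product count: {count}")
            previous_state = current_state
            time.sleep(scan_time_s)  # emulate the PLC scan interval

A real PLC achieves the same effect with a one-shot (rising-edge) contact driving a counter instruction.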
The switch provides a set of contacts that can be used in Normally Open (NO) and Normally Closed
(NC) circuits. There is an argument to be made for either configuration. However, when limit
switches are used for safety purposes, it is always advised to have current circulating in the resting
state (i.e. a normally closed circuit). This matters because, if there is a problem in the circuit such
as a broken wire, the loss of current should trigger the safety function. If the circuit were instead
configured for no power in the resting state, a fault could go unnoticed and the circuit may fail to
prevent injury or damage.
Proximity Sensor
"Proximity Sensor" includes all sensors that perform non-contact detection in comparison
to sensors, such as limit switches, that detect objects by physically contacting them.
Proximity Sensors convert information on the movement or presence of an object into an
electrical signal.
There are three types of detection systems that do this conversion:
* systems that use the eddy currents that are generated in metallic sensing objects by
electromagnetic induction,
* systems that detect changes in electrical capacity when approaching the sensing object,
* systems that use magnets and reed switches.
Definition of non-contact position detection switches. JIS gives the generic name
"proximity switch" to all sensors that provide non-contact detection of target objects that
are close by or within the general vicinity of the sensor, and classifies them as inductive,
capacitive, ultrasonic, photoelectric, magnetic, etc.
This Technical Explanation defines all inductive sensors that are used for detecting metallic
objects, capacitive sensors that are used for detecting metallic or non-metallic objects, and
sensors that utilize magnetic DC fields as Proximity Sensors.
1. Proximity Sensors detect an object without touching it, and they therefore do not cause
abrasion or damage to the object. Devices such as limit switches detect an object by
contacting it, but Proximity Sensors are able to detect the presence of the object electrically,
without having to touch it.
2. No contacts are used for output, so the Sensor has a longer service life (excluding sensors
that use magnets). Proximity Sensors use semiconductor outputs, so there are no contacts
to affect the service life.
3. Unlike optical detection methods, Proximity Sensors are suitable for use in locations
where water or oil is used. Detection takes place with almost no effect from dirt, oil, or
water on the object being detected. Models with fluororesin cases are also available for
excellent chemical resistance.
4. Proximity Sensors provide a high-speed response compared with switches that require
physical contact.
5. Proximity Sensors can be used in a wide temperature range. Proximity Sensors can be
used in temperatures ranging from −40 to 200°C.
6. Proximity Sensors are not affected by colors. Proximity Sensors detect the physical
changes of an object, so they are almost completely unaffected by the object's surface color.
7. Unlike switches, which rely on physical contact, Proximity Sensors are affected by
ambient temperatures, surrounding objects, and other Sensors. Both Inductive and
Capacitive Proximity Sensors are affected by interaction with other Sensors. Because of
this, care must be taken when installing them to prevent mutual interference. (Refer to the
Precautions for Correct Use in the Safety Precautions for All Proximity Sensors.) Care
must also be taken to prevent the effects of surrounding metallic objects on Inductive
Proximity Sensors, and to prevent the effects of all surrounding objects on Capacitive
Proximity Sensors.
8. There are two-wire Sensors, in which the power line and signal line are combined. If only the
power line is wired, internal elements may be damaged, so always insert a load. (Refer to the
Precautions for Safe Use.)
CAPACITIVE SENSORS
Operating Principles
In these sensors, a high frequency oscillator creates a field in the surroundings of the sensing
surface. The presence of any capacitive object in these surroundings causes a change in the
oscillation amplitude, and a threshold circuit detects that change and generates the output. The
triggering distance depends on the size, shape, and material of the object. If the sensitivity to metals
is taken as 1.0, the sensitivity to water is also 1.0, plastic or glass is 0.5, and wood is 0.4. Usually
a screw is placed on the capacitive sensor, which allows regulation of the operating distance.
Capacitive sensors are more often used for linear than angular proximity measurements. Either the
dielectric or one of the capacitor plates is movable for displacement measurement. Capacitive
proximity sensors use the measured object as one plate, and the sensor contains the other plate.
The capacitance changes according to the equation

C = k / d

where k is a constant that depends on the area of the plates and the dielectric constant, and d is the
distance between the plates.
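The short Python sketch below (not part of the original notes) illustrates this inverse relation using the standard parallel-plate expression C = ε0·εr·A/d and shows how a threshold on C turns the sensor into a proximity switch; the plate area, dielectric and trigger level are assumed example values only.

    EPSILON_0 = 8.854e-12    # permittivity of free space, F/m
    EPSILON_R = 1.0          # relative permittivity of the gap (air assumed)
    PLATE_AREA = 1e-4        # plate area in m^2 (10 mm x 10 mm assumed)
    TRIGGER_LEVEL_F = 1e-12  # reference capacitance that operates the switch (1 pF assumed)

    def capacitance(distance_m):
        # Parallel-plate capacitance C = eps0 * epsr * A / d, in farads.
        return EPSILON_0 * EPSILON_R * PLATE_AREA / distance_m

    for d_mm in (5.0, 2.0, 1.0, 0.5):
        c = capacitance(d_mm / 1000.0)
        switched = c > TRIGGER_LEVEL_F  # switch operates once C exceeds the set level
        print(f"d = {d_mm:4.1f} mm -> C = {c * 1e12:6.3f} pF, switch ON: {switched}")

With these example numbers the capacitance rises from about 0.18 pF at 5 mm to about 1.77 pF at 0.5 mm, so the switch would operate only once the object comes within roughly 0.9 mm.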
Capacitive transducers are available with packaged signal conversion circuitry for DC output
operation. Capacitive sensors are widely used for dimensional inspections in large-volume
manufacturing operations, such as the filling of containers or the monitoring of the wearing of
moving surfaces. In nonconductive materials (glass, plastics, wood), the switch detects the change
in dielectric constant; in conductive materials, an additional signal is produced by terminal
conductivity. The proximity switches illustrated in Figure 7.14a can detect liquids, glass, plastic,
wood, or metallic objects. For the proximity switches shown, the sensing distance can be fixed or
adjustable between 0.1 and 1.0 in. (3 to 25 mm). Proximity switches provided with sensing plates
can operate over a range of 0.2 to 5 in. (5 to 127 mm), can detect capacitance changes down to
0.02 pF, and can detect more than 100 operations/s. The switch is operated when the capacitance
caused by the approaching object exceeds the reference level set to trigger the switch.
INDUCTIVE SENSORS
In this type of proximity switch, similar to the capacitive one, an electromagnetic field is generated
by a high-frequency (radio frequency) oscillator circuit in front of a coil. If a metallic object moves
inside the field generated by the sensor, an eddy current is induced in the object, which loads the
oscillator and causes a voltage drop in it. Figure 7.14b shows the sensing envelope of the switch
for a particular target size. The envelope increases with target size and decreases with nonferrous
metals. The target can enter this envelope axially or laterally and is detected when it first touches
the envelope. This switch is also called a self-contained proximity switch or an eddy-current
killed oscillator (ECKO) design. The outside appearance is similar to the capacitive units shown in
the figure. The sensing face of the probe contains the coil. The switch has no moving parts and
therefore its mean time between failures is long, about 200,000 hours. It is also immune to shock
and vibration and can be connected directly to programmable logic controllers. Detection ranges
can vary from 0.1 to 2 in. (2 to 50 mm). Typical applications include machine tools, material
handling, packaging, and conveyors.
MAGNETIC SENSORS
Magnetic sensors are actuated by the presence of permanent magnets. The magnetically actuated
reed switch consists of two low reluctance ferromagnetic reeds enclosed in glass bulbs filled with
inert gas. The reciprocal attraction of both reeds in the presence of a magnetic field, caused by
magnetic induction, closes an electric contact. For this design to function, the object to be detected
must contain a magnet. When the actuating magnet reaches the actuating distance from the reed
switch, the contact is closed. These switches can operate the loads directly (without relays) because
their contact ratings are around 15 VA. Their natural applications are in the area of counting the
rotation or reciprocation of objects. Their speed of closure can approach 100/s, and their life
expectancy is in the tens of millions of operations. A proximity switch that is used less often is the
variable reluctance sensor, which alters the voltage generated at its coil terminals as an object
distorts its magnetic flux. This principle is more often applied in connection with rotating
machinery, such as tachometers for speed measurement.
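As an illustration of the rotation-counting application mentioned above, the short Python sketch below (not from the original notes) converts reed-switch pulse counts into shaft speed; the one-magnet-per-revolution arrangement and the example numbers are assumptions.

    def rpm_from_pulses(pulse_count, window_s, pulses_per_rev=1):
        # Convert pulses counted in a fixed time window into revolutions per minute.
        revolutions = pulse_count / pulses_per_rev
        return revolutions * 60.0 / window_s

    # Example: 47 pulses counted over a 2 s window with one magnet on the shaft
    # gives 47 * 60 / 2 = 1410 RPM.
    print(rpm_from_pulses(pulse_count=47, window_s=2.0))

Because the reed contacts close only on the order of 100 times per second (see above), this simple counting method suits low and moderate shaft speeds.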
Hall-Effect Sensors
One of the most successful magnetic proximity switches is the Hall-effect switch, actuated by the
field of a magnet. Its most common actuator is a moving permanent magnet. As shown in the figure,
the magnet movement can be head-on or slide-by. The curves are based on a Microswitch standard
magnet, which is 1.25 in. (31.8 mm) long and 0.25 in. (6.4 mm) in diameter. The induction (gauss)
of the Hall-effect sensor varies with the distance to the magnet. This switch eliminates the contact-
bounce problem of mechanical limit switches and provides a directly computer-compatible output.
Speed of operation is about 25 kHz. The Hall-effect switch is not recommended for use in areas
where high magnetic fields are present, and its connecting wires should not be run in the same
conduit with high-power lines.
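For reference, the underlying Hall relation, a standard physics result not stated in these notes, for a sensing element of thickness t carrying current I in a perpendicular flux density B, with carrier density n and carrier charge q, is

V_H = \frac{I\,B}{n\,q\,t}

so the output voltage is proportional to B, which is why the response curves above depend on the distance between the sensor and the actuating magnet.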
OPTICAL SENSORS
These sensors consist of a light source (emitter) and light receiver and depend on light-sensitive
elements to detect the presence of objects. Three types are available:
1. Direct Reflection The emitter and receiver are housed together and use the reflected light directly
from the detected object.
2. Reflection with Reflector The emitter and receiver are housed together and require a reflector. In
this design, the object is detected when it interrupts the light beam between the sensor and the
reflector.
3. Thru Beam The emitter and receiver are housed separately and they detect the object when it
interrupts the light beam between them.
Photoelectric and laser devices are capable of measuring position, thickness, flatness, length, and
other dimension related properties. The available proximity switch designs can be grouped
according to the:
1. Light source (incandescent, light emitting diodes [LED], infrared, laser)
2. Detector used (photocells, photo-transducers)
3. Light path (thru-beam or the reflective mode, which can be implemented in the diffuse, specular,
retroreflective, or fiber-optic configurations)
Photoelectric sensors can detect the presence or absence of opaque or translucent objects at
distances from a few millimeters to several hundred feet or meters. They do not require physical
contact; are relatively inexpensive; and are well suited for counting, mail and package handling,
security surveillance, and many other applications
Here is a simple way to remember how to wire up a 3-wire DC PNP or NPN sensor:
PNP = Switched Positive
NPN = Switched Negative
Switched refers to which side of the controlled load (relay, small indicator, PLC input) is being
switched electrically. Either the load is connected to Negative and the Positive is switched
(PNP), or the load is connected to Positive and the Negative is switched (NPN). These diagrams
illustrate the differences between the two connections.
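The small Python sketch below (not part of the original notes) models the idealized output-terminal voltage of the two wiring schemes for a 24 V DC system; the 24 V level and the function name are assumptions used only to reinforce the rule above.

    SUPPLY_V = 24.0  # assumed DC supply voltage

    def output_terminal_voltage(sensor_type, target_detected):
        # Idealized voltage at the sensor's output wire, measured relative to 0 V.
        if sensor_type == "PNP":
            # Sourcing output: the load sits between the output and 0 V, and the
            # output switches the positive rail onto the load when a target is seen.
            return SUPPLY_V if target_detected else 0.0
        if sensor_type == "NPN":
            # Sinking output: the load sits between +24 V and the output, and the
            # output pulls the load down to 0 V when a target is seen.
            return 0.0 if target_detected else SUPPLY_V
        raise ValueError("sensor_type must be 'PNP' or 'NPN'")

    for kind in ("PNP", "NPN"):
        for detected in (False, True):
            v = output_terminal_voltage(kind, detected)
            print(f"{kind}: target detected = {detected} -> output at {v:.0f} V")

In both cases the load sees the full supply voltage when the sensor is active; the difference is simply which supply rail the sensor's output transistor switches onto the load.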