Example
A set of independent voltage measurements taken by four observers was recorded as 117.02 V, 117.11 V, 117.08 V, and 117.03
V. Calculate (a) the average voltage, (b) the range of error.
Solution
(a)
E_av = (E_1 + E_2 + E_3 + E_4) / N = (117.02 + 117.11 + 117.08 + 117.03) / 4 = 117.06 V
(b)
Range = E_max − E_av = 117.11 − 117.06 = 0.05 V
but also
E_av − E_min = 117.06 − 117.02 = 0.04 V
The average range of error is therefore (0.05 + 0.04) / 2 = ±0.045 ≈ ±0.05 V.
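The same calculation can be sketched in a few lines of Python; the reading list and variable names below are illustrative only and are not part of the text:

readings = [117.02, 117.11, 117.08, 117.03]    # the four voltmeter readings, in volts

e_av = sum(readings) / len(readings)           # average voltage E_av
dev_high = max(readings) - e_av                # deviation of the highest reading above E_av
dev_low = e_av - min(readings)                 # deviation of the lowest reading below E_av
range_of_error = (dev_high + dev_low) / 2      # average range of error

print(f"E_av = {e_av:.2f} V")                          # 117.06 V
print(f"range of error = +/-{range_of_error:.3f} V")   # approximately +/-0.045 V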
TYPES OF ERROR
No measurement can be made with perfect accuracy, but it is important to find out what the accuracy actually is and how different
errors have entered into the measurement. A study of errors is a first step in finding ways to reduce them. Such a study also allows
us to determine the accuracy of the final test result.
Errors may come from different sources and are usually classified under three main headings:
1-Gross errors: largely human errors, among them misreading of instruments, incorrect adjustment and improper application of
instruments, and computational mistakes.
2-Systematic errors: shortcomings of the instruments, such as defective or worn parts, and effects of the environment on the
equipment or the user.
3-Random errors: those due to causes that cannot be directly established because of random variations in the parameter or the
system of measurement.
Gross Errors
This class of errors mainly covers human mistakes in reading or using instruments and in recording and calculating measurement
results. As long as human beings are involved, some gross errors will inevitably be committed. Although complete elimination of
gross errors is probably impossible, one should try to anticipate and correct them. Some gross errors are easily detected; others
may be very elusive. One common gross error, frequently committed by beginners in measurement work, involves the improper
use of an instrument. In general, indicating instruments change conditions to some extent when connected into a complete circuit,
so that the measured quantity is altered by the method employed. For example, a well-calibrated voltmeter may give a misleading
reading when connected across two points in a high-resistance circuit. The same voltmeter, when connected in a low-resistance
circuit, may give a more dependable reading. These examples illustrate that the voltmeter has a "loading effect" on the circuit,
altering the original situation by the measurement process.
Example 1-7 A voltmeter, having a sensitivity of 1,000 Ω/V, reads 100 V on its 150-V scale when connected across an
unknown resistor in series with a milliammeter. When the milliammeter reads 5 mA, calculate (a) the apparent resistance of the
unknown resistor, (b) the actual resistance of the unknown resistor, (c) the error due to the loading effect of the voltmeter.
Solution
(a) The total circuit resistance equals R_T = V_T / I_T = 100 V / 5 mA = 20 kΩ. Neglecting the resistance of the milliammeter, the value of the unknown resistor is R_x = 20 kΩ.
(b) The voltmeter resistance equals R_V = 1,000 Ω/V × 150 V = 150 kΩ.
Since the voltmeter is in parallel with the unknown resistance, we can write
R_x = R_T R_V / (R_V − R_T) = (20 kΩ × 150 kΩ) / 130 kΩ = 23.08 kΩ
(c) % error = (actual − apparent) / actual × 100% = (23.08 − 20) / 23.08 × 100% = 13.3%
Example 1-8
Repeat Example 1-7 if the milliammeter reads 800 mA and the voltmeter reads 40 V on its 150-V scale.
Solution
(a) The total circuit resistance equals R_T = V_T / I_T = 40 V / 800 mA = 50 Ω. Neglecting the resistance of the milliammeter, R_x = 50 Ω.
(b) The voltmeter resistance equals R_V = 1,000 Ω/V × 150 V = 150 kΩ. Since the voltmeter is in parallel with the unknown resistance,
R_x = R_T R_V / (R_V − R_T) = (50 Ω × 150 kΩ) / 149,950 Ω = 50.02 Ω
(c) % error = (50.02 − 50) / 50.02 × 100% ≈ 0.03%
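As a rough Python sketch of the loading-effect arithmetic in Examples 1-7 and 1-8 (the function name and default arguments are assumptions for illustration, not part of the text):

def loading_effect(v_reading, i_reading, sensitivity=1000.0, v_range=150.0):
    # Apparent resistance, ignoring the current drawn by the voltmeter.
    r_apparent = v_reading / i_reading
    # Total voltmeter resistance = sensitivity (ohm/V) x full-scale range (V).
    r_voltmeter = sensitivity * v_range
    # The voltmeter is in parallel with the unknown resistor, so solve for R_x.
    r_actual = r_apparent * r_voltmeter / (r_voltmeter - r_apparent)
    error_pct = (r_actual - r_apparent) / r_actual * 100
    return r_apparent, r_actual, error_pct

print(loading_effect(100, 5e-3))   # Example 1-7: about (20000 ohm, 23077 ohm, 13.3 %)
print(loading_effect(40, 0.8))     # Example 1-8: about (50 ohm, 50.02 ohm, 0.03 %)

The two calls illustrate the point made below: loading is severe in the high-resistance case and negligible in the low-resistance case.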
Errors caused by the loading effect of the voltmeter can be avoided by using it intelligently. For example, a low-resistance
voltmeter should not be used to measure voltages in a vacuum tube amplifier. In this particular measurement, a high-input
impedance voltmeter is required.
A large number of gross errors can be attributed to carelessness or bad habits, such as improper reading of an instrument,
recording the result differently from the actual reading taken, or adjusting the instrument incorrectly. Consider the case in which a
multirange voltmeter uses a single set of scale markings with different number designations for the various voltage ranges. It is
easy to use a scale which does not correspond to the setting of the range selector of the voltmeter. A gross error may also occur
when the instrument is not set to zero before the measurement is taken; then all the readings are off.
Errors like these cannot be treated mathematically. They can be avoided only by taking care in reading and recording the
measurement data. Good practice requires making more than one reading of the same quantity, preferably by a different observer.
Never place complete dependence on one reading but take at least three separate readings, preferably under conditions in which
instruments are switched off and on.
Systematic Errors
This type of error is usually divided into two different categories: (1) instrumental errors, defined as shortcomings of the
instrument; (2) environmental errors, due to external conditions affecting the measurement.
Instrumental errors are errors inherent in measuring instruments because of their mechanical structure. For example, in the
d'Arsonval movement friction in bearings of various moving components may cause incorrect readings. Irregular spring tension,
stretching of the spring or reduction in tension due to improper handling or overloading of the instrument will result in errors.
Other instrumental errors are calibration errors, causing the instrument to read high or low along its entire scale. (Failure to set the
instrument to zero before making a measurement has a similar effect.)
There are many kinds of instrumental errors, depending on the type of instrument used. The experimenter should always take
precautions to ensure that the instrument he is using is operating properly and does not contribute excessive errors for the purpose
at hand. Faults in instruments may be detected by checking for erratic behavior and for stability and reproducibility of results. A
quick and easy way to check an instrument is to compare it to another with the same characteristics or to one that is known to be
more accurate.
Instrumental errors may be avoided by (1) selecting a suitable instrument for the particular measurement application; (2)
applying correction factors after determining the amount of instrumental error; (3) calibrating the instrument against a standard.
Environmental errors are due to conditions external to the measuring device, including conditions in the area surrounding the
instrument, such as the effects of changes in temperature, humidity, barometric pressure, or of magnetic or electrostatic fields.
Thus a change in ambient temperature at which the instrument is used causes a change in the elastic properties of the spring in a
moving-coil mechanism and so affects the reading of the instrument. Corrective measures to reduce these effects include air
conditioning, hermetically sealing certain components in the instrument, use of magnetic shields, and the like.
Systematic errors can also be subdivided into static or dynamic errors. Static errors are caused by limitations of the measuring
device or the physical laws governing its behavior. A static error is introduced in a micrometer when excessive pressure is applied
in torquing the shaft. Dynamic errors are caused by the instrument's not responding fast enough to follow the changes in a
measured variable.
Random Errors
These errors are due to unknown causes and occur even when all systematic errors have been accounted for. In well-designed
experiments, few random errors usually occur, but they become important in high-accuracy work.
CHAPTER 2
SYSTEMS OF UNITS OF MEASUREMENT
SYSTEMS OF UNITS
In 1790 the French government issued a directive to the French Academy of Sciences to study and to submit proposals for a
single system of weights and measures to replace all other existing systems. The French scientists decided, as a first principle, that
a universal system of weights and measures should not depend on man-made reference standards, but instead be based on
permanent measures provided by nature. As the unit of length, therefore, they chose the meter, defined as the ten-millionth part of
the distance from the pole to the equator along the meridian passing through Paris. As the unit of mass they chose the mass of a
cubic centimeter of distilled water at 4°C and normal atmospheric pressure (760 mm Hg) and gave it the name gram. As the third
unit, the unit of time, they decided to retain the traditional second, defining it as 1/86,400 of the mean solar day.
As a second principle, they decided that all other units should be derived from the aforementioned three fundamental units of
length, mass, and time. Next—the third principle—they proposed that all multiples and submultiples of basic units be in the
decimal system, and they devised the system of prefixes in use today. Table 2-1 lists the decimal multiples and submultiples.
The proposals of the French Academy were approved and introduced as the metric system of units in France in 1795.
Britain, in the meantime, had been working on a system of electrical units, and the British Association for the Advancement of
Science decided on the centimeter and the gram as the fundamental units of length and mass.
From this developed the centimeter-gram-second or CGS absolute system of units, used by physicists all over the world.
Complications arose when the CGS system was extended to electric and magnetic measurements because of the need to introduce
at least one more unit in the system. In fact, two parallel systems were established. In the CGS electrostatic system, the unit of
electric charge was derived from the centimeter, gram, and second by assigning the value 1 to the permittivity of free space in
Coulomb's law for the force between electric charges. In the CGS electromagnetic system, the basic units are the same and the
unit of magnetic pole strength is derived from them by assigning the value 1 to the permeability of free space in the inverse square
formula for the force between magnetic poles.
A more comprehensive system was adopted in 1954 and designated in 1960 by international agreement as the Système
International (SI). In the SI system, six basic units are used, namely, the meter, kilogram, second, and ampere of the MKSA
system and, in addition, the kelvin and the candela as the units of temperature and luminous intensity, respectively. The SI
units are replacing other systems in science and technology; they have been adopted as the legal units in France, and will become
obligatory in other metric countries. The six basic SI quantities and units of measurement, with their unit symbols, are listed in
Table 2-2.
CLASSIFICATION OF STANDARDS
An important step in the measurement of a quantity is defining the unit of measurement; for example, the unit of length
could be a yard, a meter, or some other chosen unit. Hence a physical standard of the unit for the measurement of length has to be
constructed. This is the meter bar. It is made of a special material, has a specific shape, and carries two lines engraved on it. The
distance between the two lines under controlled environmental conditions is one meter.
A standard is a physical representation of a unit of measurement. These standards are used to determine the values of other
standards of measurement.
It is desirable that any working standard of measurement of any particular parameter used daily, either in a laboratory or in
industry, be compared with a secondary standard which is more accurate and better maintained. This secondary standard is kept in
a regional testing or certifying laboratory.
This secondary standard in turn is compared to a national standard or primary standard kept at a standards institute or national
laboratory in the country. This primary standard is further compared to an international standard, which is a very accurately
preserved standard.
International Standards
International standards are defined by international agreement. They are periodically evaluated and checked by absolute
measurements in terms of the fundamental units of physics. They represent certain units of measurement to the closest possible
accuracy attainable by the science and technology of measurement. These international standards are not available to ordinary
users for measurement and calibration. Some of the electrical international standards are as follows.
Primary Standards
The principal function of primary standards is the calibration and verification of secondary standards. Primary standards are
maintained at the national standards laboratories in different countries. These laboratories are responsible for maintaining the
primary standards. Primary standards are calibrated by absolute measurements in terms of the fundamental units and the
mechanical and electrical units derived from them.
Primary standards are not available for use outside the national laboratory. They are absolute standards of high accuracy that
can be used as the ultimate reference.
Secondary Standards
Secondary standards are the basic reference standards used by measurement and calibration laboratories in industry. These are
maintained by the particular industry to which they belong. Each industry has its own secondary standards. In our country, the
Electronics Regional Test Laboratory (ERTL) maintains the secondary standards in Electronics and Electrical Engineering. Each
laboratory periodically sends its secondary standards to the national standards laboratory for calibration and comparison against
the primary standard. After comparison and calibration, the national standards laboratory returns the secondary standards to the
particular industrial laboratory with a certification of measuring accuracy relative to the primary standard.
Working Standards
Working standards are the principal tools of a measurement laboratory. These standards are used to check and calibrate laboratory
instruments for accuracy and performance. Working standards are the tools for day-to-day measurements. They are checked
periodically against secondary standards. The instruments in our laboratory are calibrated against working standards or are used to
compare measurements in industrial applications. For example, manufacturers of electronic components such as capacitors and
resistors use a working standard to check the values of the components being manufactured: a standard resistor is used to check
resistors being manufactured.
Electrical Standards
All electrical measurements are based on the fundamental quantities I, R, and V. Systematic measurements depend on the
definitions of these quantities. The quantities are related to each other by Ohm's law, V = I·R, so it is sufficient to define
only two of them to obtain the definition of the third. Hence, in electrical measurements, it is possible to assign the value of one
standard by defining the units of the other two. Standards of emf and resistance are usually maintained at the national
laboratory, and the base values of the other standards are derived from these two.
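A minimal Python sketch of this idea, using illustrative (not standardized) numerical values:

# With standards fixed for emf (V) and resistance (R), the unit of current
# follows from Ohm's law; no separate current standard needs to be defined.
v_standard = 1.018   # volts, an illustrative standard-cell-like emf value
r_standard = 1.000   # ohms, an illustrative standard resistor
i_derived = v_standard / r_standard
print(f"I = V / R = {i_derived:.3f} A")   # the current value follows from the other two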
PROBLEMS
Q1) A digital voltmeter has a read-out range from 0 to 9,999 counts. Determine the resolution of the instrument in volts when the
full scale reading is 9.999 V.
Q2) A voltmeter, having a sensitivity of 1,000 Ω/V, reads 40 V on its 150-V scale when connected across an unknown resistor in
series with a milliammeter. When the milliammeter reads 800 mA, calculate (a) apparent resistance of the unknown resistor, (b)
actual resistance of the unknown resistor, (c) error due to the loading effect of the voltmeter.
Q3) A set of independent current measurements was recorded as 10.03, 10.10, 10.11, and 10.08 A. Calculate (a) the average
current, and (b) the range of error.
Q4) List four sources of possible errors in instruments.
Q5) Define
(a) instrumental error, (b) limiting error,
(c) calibration error, (d) environmental error,
(e) random error.