
CHAPTER 1

MEASUREMENT AND ERROR


1-1 DEFINITIONS
Measurement generally involves using an instrument as a physical means of determining a quantity or variable. The instrument
serves as an extension of human faculties and in many cases enables a person to determine the value of an unknown quantity
which his unaided human faculties could not measure. An instrument, then, may be defined as a device for determining the
value or magnitude of a quantity or variable. The electronic instrument, as its name implies, is based on electrical or electronic
principles for its measurement function. An electronic instrument may be a relatively uncomplicated device of simple
construction such as a basic dc current meter. As technology expands, however, the demand for more elaborate and more
accurate instruments increases and produces new developments in instrument design and application. To use these instruments
intelligently, one needs to understand their operating principles and to appraise their suitability for the intended application.
Measurement work employs a number of terms which should be defined here.
Instrument: a device for determining the value or magnitude of a quantity or variable.
Accuracy: closeness with which an instrument reading approaches the true value of the variable being measured.
Precision: a measure of the reproducibility of the measurements; i.e.,
given a fixed value of a variable, precision is a measure of the degree to which successive measurements differ from one
another.
Sensitivity: the ratio of output signal or response of the instrument to a change of input or measured variable.
Resolution: the smallest change in measured value to which the instrument will respond (sensitivity and resolution are illustrated in the sketch following these definitions).
Error: deviation from the true value of the measured variable.
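To make the sensitivity and resolution definitions concrete, consider a minimal Python sketch for a hypothetical temperature sensor; the 2 mV/°C figure and the 1-mV display step are assumptions chosen for illustration, not values from the text.

# Hypothetical sensor: the output changes 20 mV for a 10 deg C change of input.
delta_input = 10.0                         # change in measured variable, deg C
delta_output = 20.0                        # change in instrument response, mV
sensitivity = delta_output / delta_input   # 2.0 mV per deg C

# If the display resolves no finer than 1 mV, the smallest change of the
# measured variable the instrument will respond to -- its resolution -- is:
display_step = 1.0                         # mV (assumed)
resolution = display_step / sensitivity    # 0.5 deg C
print(sensitivity, resolution)

Note that high sensitivity alone does not guarantee fine resolution; the readout must also be able to register the resulting change in output.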
Several techniques may be used to minimize the effects of errors. For example, in making precision measurements, it is
advisable to record a series of observations rather than rely on one observation. Alternate methods of measurement, as well as
the use of different instruments to perform the same experiment, provide a good technique for increasing accuracy. Although
these techniques tend to increase the precision of measurement by reducing environmental or random error, they cannot account
for instrumental error.
This chapter provides an introduction to the different types of error in measurement and to the methods generally used to express errors in terms of the most reliable value of the measured variable.

1-2 ACCURACY AND PRECISION


Accuracy refers to the degree of closeness or conformity to the true value of the quantity under measurement. Precision refers
to the degree of agreement within a group of measurements or instruments.
To illustrate the distinction between accuracy and precision, two voltmeters of the same make and model may be compared.
Both meters have knife-edged pointers and mirror-backed scales to avoid parallax, and they have carefully calibrated scales.
They may therefore be read to the same precision. If the value of the series resistance in one meter changes considerably, its
readings may be in error by a fairly large amount. Therefore the accuracy of the two meters may be quite different. (To
determine which meter is in error, a comparison measurement with a standard meter should be made.)
Precision is composed of two characteristics: conformity and the number of significant figures to which a measurement may be
made. Consider, for example, that a resistor, whose true resistance is 1,384,572 , is measured by an ohmmeter which
consistently and repeatedly indicates 1.4 M . But can the observer "read" the true value from the scale? His estimates from the
scale reading consistently yield a value of 1.4 M . This is as close to the true value as he can read the scale by estimation.
Although there are no deviations from the observed value, the error created by the limitation of the scale reading is a
precision error. The example illustrates that conformity is a necessary, but not sufficient, condition for precision because of the
lack of significant figures obtained. Similarly, precision is a necessary, but not sufficient, condition for accuracy.
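A minimal Python sketch of this precision error follows; the 0.1-MΩ smallest estimable scale division is an assumption chosen to reproduce the 1.4-MΩ reading above.

# True resistance from the example; the scale is readable only to 0.1 Mohm.
TRUE_VALUE = 1_384_572        # ohms
SCALE_STEP = 100_000          # ohms; smallest estimable division (assumed)

def scale_reading(true_value, step=SCALE_STEP):
    # Quantize the true value to the nearest scale division.
    return round(true_value / step) * step

readings = [scale_reading(TRUE_VALUE) for _ in range(5)]
print(readings)                           # [1400000, 1400000, ...]
print(max(readings) - min(readings))      # 0 -- no deviation at all

The readings agree perfectly with one another (conformity), yet their two significant figures keep them from expressing the true value.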
Too often the beginning student is inclined to accept instrument readings at face value. He is not aware that the accuracy of a
reading is not necessarily guaranteed by its precision. In fact, good measurement technique demands continuous skepticism as
to the accuracy of the results.
In critical work, good practice dictates that the observer make an independent set of measurements, using different instruments
or different measurement techniques, not subject to the same systematic errors. He must also make sure that the instruments
function properly and are calibrated against a known standard, and that no outside influence affects the accuracy of his
measurements.
Example 1-1
A set of independent voltage measurements taken by four observers was recorded as 117.02 V, 117.11 V, 117.08 V, and 117.03
V. Calculate (a) the average voltage, (b) the range of error.
Solution
(a) The average voltage equals
E_av = (E_1 + E_2 + E_3 + E_4)/N = (117.02 + 117.11 + 117.08 + 117.03)/4 = 117.06 V
(b) The range of error equals
Range = E_max − E_av = 117.11 − 117.06 = 0.05 V
but also
E_av − E_min = 117.06 − 117.02 = 0.04 V
The average range of error therefore equals
(0.05 + 0.04)/2 = ±0.045 ≈ ±0.05 V

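As a quick check of this arithmetic, here is a minimal Python sketch (not from the text):

# Readings from Example 1-1.
readings = [117.02, 117.11, 117.08, 117.03]
e_av = sum(readings) / len(readings)       # 117.06 V
high = max(readings) - e_av                # 0.05 V
low = e_av - min(readings)                 # 0.04 V
avg_range = (high + low) / 2               # 0.045 V, i.e. about +/-0.05 V
print(e_av, high, low, avg_range)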
When two or more measurements with different degrees of accuracy are added, the result is only as accurate as the least accurate measurement. Suppose, for example, that two resistances are added in series; the sum can be stated meaningfully only to the precision of the less accurately known resistance, as the sketch below illustrates.
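A minimal Python sketch of this rule; the resistance values are hypothetical, since the text's own worked example did not survive extraction.

# R1 is doubtful in the tenths place; R2 is known to the nearest 0.001 ohm.
r1 = 18.7      # ohms (hypothetical value)
r2 = 3.624     # ohms (hypothetical value)
total = r1 + r2                # 22.324 ohms as computed
print(round(total, 1))         # 22.3 -- digits beyond the tenths place are
                               # meaningless, because r1 is already doubtful there

Reporting the sum as 22.324 Ω would imply an accuracy that the measurement of r1 cannot support.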

1-4 TYPES OF ERROR


No measurement can be made with perfect accuracy, but it is important to find out what the accuracy actually is and how
different errors have entered into the measurement. A study of errors is a first step in finding ways to reduce them. Such a study
also allows us to determine the accuracy of the final test result.
Errors may come from different sources and are usually classified under three main headings:
Gross errors: largely human errors, among them misreading of instruments, incorrect adjustment and improper application of
instruments, and computational mistakes.
Systematic errors: shortcomings of the instruments, such as defective or worn parts, and effects of the environment on the
equipment or the user.
Random errors: those due to causes that cannot be directly established because of random variations in the parameter or the
system of measurement.
Each of these classes of errors will be discussed briefly and some methods will be suggested for their reduction or elimination.

1-4.1 Gross Errors


This class of errors mainly covers human mistakes in reading or using instruments and in recording and calculating
measurement results. As long as human beings are involved, some gross errors will inevitably be committed. Although
complete elimination of gross errors is probably impossible, one should try to anticipate and correct them. Some gross errors
are easily detected; others may be very elusive. One common gross error, frequently committed by beginners in measurement
work, involves the improper use of an instrument. In general, indicating instruments change conditions to some extent when
connected into a complete circuit, so that the measured quantity is altered by the method employed. For example, a well-
calibrated voltmeter may give a misleading reading when connected across two points in a high-resistance circuit (Example 1-7). The same voltmeter, when connected in a low-resistance circuit, may give a more dependable reading (Example 1-8). These
examples illustrate that the voltmeter has a "loading effect" on the circuit, altering the original situation by the measurement
process.
Example 1-7
A voltmeter, having a sensitivity of 1,000 Ω/V, reads 100 V on its 150-V scale when connected across an unknown resistor in
series with a milliammeter.
When the milliammeter reads 5 mA, calculate (a) apparent resistance of the unknown resistor, (b) actual resistance of the
unknown resistor, (c) error due to the loading effect of the voltmeter.
Solution
(a) The total circuit resistance equals
R_T = V_T / I_T = 100 V / 5 mA = 20 kΩ
Neglecting the resistance of the milliammeter, the apparent value of the unknown resistor is R_x = 20 kΩ.
(b) The voltmeter resistance equals
R_V = 1,000 Ω/V × 150 V = 150 kΩ
Since the voltmeter is in parallel with the unknown resistance, we can write
R_x = R_T R_V / (R_V − R_T) = (20 kΩ × 150 kΩ)/(150 kΩ − 20 kΩ) = 23.08 kΩ
(c) % error = (actual − apparent)/actual × 100% = (23.08 − 20)/23.08 × 100% = 13.3%

Example 1-8
Repeat Example 1-7 if the milliammeter reads 800 mA and the voltmeter reads 40 V on its 150-V scale.
Solution
(a) The total circuit resistance equals
R_T = V_T / I_T = 40 V / 800 mA = 50 Ω
Neglecting the resistance of the milliammeter, the apparent value of the unknown resistor is R_x = 50 Ω.
(b) The voltmeter resistance is again R_V = 150 kΩ. Since the voltmeter is in parallel with the unknown resistance,
R_x = R_T R_V / (R_V − R_T) = (50 Ω × 150,000 Ω)/(150,000 Ω − 50 Ω) = 50.02 Ω
(c) % error = (50.02 − 50)/50.02 × 100% ≈ 0.03%
Here the loading effect is negligible, because the voltmeter resistance is very large compared with the resistance being measured.
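A minimal Python sketch (not from the text) reproduces both examples; the helper name loading_error is an invention for illustration.

def loading_error(v_reading, i_reading, sensitivity_ohms_per_volt, scale_volts):
    # Apparent vs. actual resistance when the voltmeter loads the circuit.
    r_apparent = v_reading / i_reading                     # total resistance seen, ohms
    r_voltmeter = sensitivity_ohms_per_volt * scale_volts  # voltmeter resistance, ohms
    r_actual = r_apparent * r_voltmeter / (r_voltmeter - r_apparent)
    pct_error = (r_actual - r_apparent) / r_actual * 100
    return r_apparent, r_actual, pct_error

print(loading_error(100, 5e-3, 1000, 150))  # Example 1-7: 20 kohm, ~23.08 kohm, ~13.3 %
print(loading_error(40, 0.8, 1000, 150))    # Example 1-8: 50 ohm, ~50.02 ohm, ~0.03 %

The contrast between the two error figures shows why a voltmeter should be chosen so that its resistance is much larger than that of the circuit under test.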
Errors caused by the loading effect of the voltmeter can be avoided by using it intelligently. For example, a low-resistance

voltmeter should not be used to measure voltages in a vacuum tube amplifier. In this particular measurement, a high-input
impedance voltmeter (such as a VTVM or TVM) is required.
A large number of gross errors can be attributed to carelessness or bad habits, such as improper reading of an instrument,
recording the result differently from the actual reading taken, or adjusting the instrument incorrectly. Consider the case in
which a multirange voltmeter uses a single set of scale markings with different number designations for the various voltage
ranges. It is easy to use a scale which does not correspond to the setting of the range selector of the voltmeter. A gross error
may also occur when the instrument is not set to zero before the measurement is taken; then all the readings are off.
Errors like these cannot be treated mathematically. They can be avoided only by taking care in reading and recording the
measurement data. Good practice requires making more than one reading of the same quantity, preferably by a different
observer. Never place complete dependence on one reading but take at least three separate readings, preferably under conditions
in which the instruments are switched off and on.

1-4.2 Systematic Errors


This type of error is usually divided into two different categories: (1) instrumental errors, defined as shortcomings of the
instrument; (2) environmental errors, due to external conditions affecting the measurement.
Instrumental errors are errors inherent in measuring instruments because of their mechanical structure. For example, in the
d'Arsonval movement friction in bearings of various moving components may cause incorrect readings. Irregular spring
tension, stretching of the spring, or reduction in tension due to improper handling or overloading of the instrument will result in
errors. Other instrumental errors are calibration errors, causing the instrument to read high or low along its entire scale. (Failure
to set the instrument to zero before making a measurement has a similar effect.)
There are many kinds of instrumental errors, depending on the type of instrument used. The experimenter should always take
precautions to insure that the instrument he is using is operating properly and does not contribute excessive errors for the
purpose at hand. Faults in instruments may be detected by checking for erratic behavior and for the stability and reproducibility of
results. A quick and easy way to check an instrument is to compare it to another with the same characteristics or to one that is
known to be more accurate.
Instrumental errors may be avoided by (1) selecting a suitable instrument for the particular measurement application; (2)
applying correction factors after determining the amount of instrumental error; (3) calibrating the instrument against a standard.
Environmental errors are due to conditions external to the measuring device, including conditions in the area surrounding the
instrument, such as the effects of changes in temperature, humidity, barometric pressure, or of magnetic or electrostatic fields.
Thus a change in ambient temperature at which the instrument is used causes a change in the elastic properties of the spring in a
moving-coil mechanism and so affects the reading of the instrument. Corrective measures to reduce these effects include air
conditioning, hermetically sealing certain components in the instrument, use of magnetic shields, and the like.
Systematic errors can also be subdivided into static or dynamic errors. Static errors are caused by limitations of the measuring
device or the physical laws governing its behavior. A static error is introduced in a micrometer when excessive pressure is
applied in torquing the shaft. Dynamic errors are caused by the instrument's not responding fast enough to follow the changes in
a measured variable.

1-4.3 Random Errors


These errors are due to unknown causes and occur even when all systematic errors have been accounted for. In well-designed
experiments, few random errors usually occur, but they become important in high-accuracy work.

QUESTIONS
1. What is the difference between accuracy and precision?
2. List four sources of possible errors in instruments.
3. What are the three general classes of errors?
4. Define
(a) instrumental error, (b) limiting error,
(c) calibration error, (d) environmental error,
(e) random error, (f) probable error.

PROBLEMS
1. A 0-1-mA milliammeter has 100 divisions which can easily be read to the nearest division. What is the resolution of the
meter?
2. A digital voltmeter has a read-out range from 0 to 9,999 counts. Determine the resolution of the instrument in volts when the
full-scale reading is 9.999 V.
3. State the number of significant figures in each of the following:
(a) 542, (b) 0.65,
(c) 27.25, (d) 0.00005,
(e) 40 × 10⁶, (f) 20,000.
4. Four capacitors are placed in parallel. The capacitor values are 36.3 μF, 3.85 μF, 34.002 μF, and 850 nF, with an
uncertainty of one digit in the last place. What is the total capacitance? Give only the significant figures in the answer.
5. A voltage drop of 112.5 V is measured across a resistor passing a current of 1.62 A. Calculate the power dissipation of the
resistor. Give only significant figures in the answer.

Meter
6. What voltage would a 20,000-Ω/V meter on a 0-1-V scale show in the circuit of Fig. P1-6?
Figure P1-6

7. The voltage across a resistor is 200 V, with a probable error of ± 2 per cent, and the resistance is 42 Ω with a probable error
of ± 1.5 per cent. Calculate (a) the power dissipated in the resistor, (b) the percentage error in the answer.
8. The following values were obtained from the measurements of the value of a resistor: 147.2 Ω, 147.4 Ω, 147.9 Ω, 148.1 Ω,
147.1 Ω, 147.5 Ω, 147.6 Ω, 147.4 Ω, 147.6 Ω, and 147.5 Ω. Calculate (a) the arithmetic mean, (b) the average deviation, (c) the
standard deviation, (d) the probable error of the average of the ten readings.
9. Six determinations of a quantity, as entered on the data sheet and presented to you for analysis, are 12.35, 12.71, 12.48,
10.24, 12.63, and 12.58. Examine the data and on the basis of your conclusions calculate (a) the arithmetic mean, (b) the
standard deviation, (c) the probable error in per cent of the average of the readings.
10. Two resistors have the following ratings:
R1 = 36 Ω ± 5% and R2 = 75 Ω ± 5%
Calculate (a) the magnitude of error in each resistor, (b) the limiting error in ohms and in per cent when the resistors are
connected in series, (c) the limiting error in ohms and in per cent when the resistors are connected in parallel.
11. The resistance of an unknown resistor is determined by the Wheatstone bridge method. The solution for the unknown
resistance is stated as Rx = R1R2/R3,
where R1 = 500 Ω ± 1%
R2 = 615 Ω ± 1%
R3 = 100 Ω ± 0.5%
Calculate (a) the nominal value of the unknown resistor, (b) the limiting error in ohms of the unknown resistor, (c) the limiting
error in per cent of the unknown resistor.
12. A resistor is measured by the voltmeter-ammeter method. The voltmeter reading is 123.4 V on the 250-V scale and the
ammeter reading is 283.5 mA on the 500-mA scale. Both meters are guaranteed to be accurate within ± 1 per cent of full-scale
reading. Calculate (a) the indicated value of the resistance, (b) the limits within which you can guarantee the result.
13. In a dc circuit, the voltage across a component is 64.3 V and the current is 2.53 A. Both current and voltage are given with
an uncertainty of one unit in the last place. Calculate the power dissipation to the appropriate number of significant figures.
14. A power transformer was tested to determine losses and efficiency. The input power was measured as 3,650 W and the
delivered output power was 3,385 W, with each reading in doubt by ± 10 W. Calculate (a) the percentage uncertainty in the
losses of the transformer, (b) the percentage uncertainty in the efficiency of the transformer, as determined by the difference in
input and output power readings.
15. The power factor and phase angle in a circuit carrying a sinusoidal current are determined by measurements of current,
voltage, and power. The current is read as 2.50 A on a 5-A ammeter, the voltage as 115 V on a 250-V voltmeter, and the power
as 220 W on a 500-W wattmeter. The ammeter and voltmeter are guaranteed accurate to within ±0.5 per cent of full-scale
indication and the wattmeter to within ± 1 per cent of full-scale reading. Calculate (a) the percentage accuracy to which the
power factor can be guaranteed, (b) the possible error in the phase angle.
CHAPTER 2
SYSTEMS OF UNITS OF MEASUREMENT
2-2 SYSTEMS OF UNITS
In 1790 the French government issued a directive to the French Academy of Sciences to study and to submit proposals for a
single system of weights and measures to replace all other existing systems. The French scientists decided, as a first principle,
that a universal system of weights and measures should not depend on man-made reference standards, but instead be based on
permanent measures provided by nature. As the unit of length, therefore, they chose the meter, defined as the ten-millionth part
of the distance from the pole to the equator along the meridian passing through Paris. As the unit of mass they chose the mass
of a cubic centimeter of distilled water at 4°C and normal atmospheric pressure (760 mm Hg) and gave it the name gram. As
the third unit, the unit of time, they decided to retain the traditional second, defining it as 1/86,400 of the mean solar day.
As a second principle, they decided that all other units should be derived from the aforementioned three fundamental units of
length, mass, and time. Next—the third principle—they proposed that all multiples and submultiples of basic units be in the
decimal system, and they devised the system of prefixes in use today. Table 2-1 lists the decimal multiples and submultiples.
The proposals of the French Academy were approved and introduced as the metric system of units in France in 1795. The
metric system aroused considerable interest elsewhere and finally, in 1875, 17 countries signed the so-called Metre Convention,
making the metric system of units the legal system. Britain and the United States, although signatories of the convention,
recognized its legality only in international transactions but did not accept the metric system for their own domestic use.
Britain, in the meantime, had been working on a system of electrical units, and the British Association for the Advancement of
Science decided on the centimeter and the gram as the fundamental units of length and mass.

TABLE 2-1 Decimal Multiples and Submultiples

Name     Symbol   Equivalent
tera     T        10¹²
giga     G        10⁹
mega     M        10⁶
kilo     k        10³
hecto    h        10²
deca     da       10
deci     d        10⁻¹
centi    c        10⁻²
milli    m        10⁻³
micro    μ        10⁻⁶
nano     n        10⁻⁹
pico     p        10⁻¹²
femto    f        10⁻¹⁵
atto     a        10⁻¹⁸

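The prefix table maps directly onto powers of ten, which a few lines of Python can exploit for unit conversion; this sketch is illustrative and not from the text.

# Prefixes of Table 2-1 expressed as scale factors.
PREFIXES = {
    "tera": 1e12, "giga": 1e9, "mega": 1e6, "kilo": 1e3,
    "hecto": 1e2, "deca": 1e1, "deci": 1e-1, "centi": 1e-2,
    "milli": 1e-3, "micro": 1e-6, "nano": 1e-9, "pico": 1e-12,
    "femto": 1e-15, "atto": 1e-18,
}

def to_base_units(value, prefix):
    # Scale a prefixed quantity to its base unit, e.g. 1.4 megohm -> ohms.
    return value * PREFIXES[prefix]

print(to_base_units(1.4, "mega"))   # 1400000.0 ohms
print(to_base_units(850, "nano"))   # 8.5e-07 farad, for a capacitance in nF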
From this developed the centimeter-gram-second or CGS absolute system of units, used by physicists all over the world.
Complications arose when the CGS system was extended to electric and magnetic measurements because of the need to
introduce at least one more unit in the system. In fact, two parallel systems were established. In the CGS electrostatic system,
the unit of electric charge was derived from the centimeter, gram, and second by assigning the value 1 to the permittivity of free
space in Coulomb's law for the force between electric charges. In the CGS electromagnetic system, the basic units are the same
and the unit of magnetic pole strength is derived from them by assigning the value 1 to the permeability of free space in the
inverse square formula for the force between magnetic poles.
A more comprehensive system was adopted in 1954 and designated in 1960 by international agreement as the Système
International d'Unités (SI). In the SI system, six basic units are used, namely, the meter, kilogram, second, and ampere of the MKSA
system and, in addition, the Kelvin and the candela as the units of temperature and luminous intensity, respectively. The SI
units are replacing other systems in science and technology; they have been adopted as the legal units in France, and will
become obligatory in other metric countries. The six basic SI quantities and units of measurement, with their unit symbols,
are listed in Table 2-2.

TABLE 2-2 Basic SI Quantities, Units, and Symbols


Quantity Unit Symbol
Length meter m
Mass kilogram kg
Time second s
Electric current ampere A
Thermodynamic temperature Kelvin K
Luminous intensity candela cd
