Syllabus APSC 2028 Winter 2024

The document outlines the schedule, philosophy, requirements, and evaluation criteria for an Applied Sciences 2028 laboratory course. The course consists of eight required experiments taking place on Tuesday or Wednesday afternoons. Students are evaluated based on pre-lab questions, laboratory reports, and exit interviews.


Applied Sciences 2028

Winter 2024

Schedule and location


Two sections:

• Tuesday from 14:30 to 17:30 (FR01B)

• Wednesday from 14:30 to 17:30 (FR02B)

Location: P321 in the I.U.C. Physics building.

Philosophy and objectives


Laboratory work is an integral part of the learning process in the physical sciences. Reading
textbooks and doing problem sets are all very well, but there’s nothing like hands-on experience
to truly understand physics. The laboratory sessions complement your classwork. If you mentally
dissociate the two and view the labs as something to be ticked off a list, you are doing yourself a
great disservice, missing out on an excellent opportunity to learn more deeply.

In addition to allowing for course specific investigations related to certain physical concepts, lab
work is meant to develop analytical skills. Various factors may influence the outcome of an experi-
ment, resulting in data differing noticeably from the model predictions. A large part of experimental
science is learning to understand these outside influences and control them when possible and learn-
ing to evaluate the limits of the applicability of theoretical models. The key to any new advance
based on experiment is to be able to draw meaningful conclusions from data that agree or disagree
with predictions.

The same general specifications will apply for APSC 2028 as for the previous PHYS 1081 labs; however, you should keep in mind that while the PHYS 1081 labs were only a small component of a course, APSC 2028 is in itself a 2-credit-hour course, and therefore much more work is expected. This is reflected in the effort, time, and thought needed to prepare for, perform, and analyze the experiments.

Requirements
Generalities
You must complete all eight laboratory exercises. If you miss a lab session due to illness
(documentation required), you must make arrangements with the course instructors for completion
of the skipped experiment at some other time.

You should be able to complete the experimental work for each exercise within the
three-hour period. Careful preparation will make this much easier! Your analysis need not be complete (you can finish it for your report), but you should have done enough analysis to know that your data are good enough to support your conclusions. If you are unable to finish, you can arrange with the lab instructors to come back at another time, but you can only do this twice for free. After that, a 5% penalty will be applied to your lab report mark for each lab where you have to come back beyond two times.

Students will use a lab notebook in this course. It must be a black hardcover book as used in the PHYS 1081 labs. The notebook not only contains information about what was done and the measurements taken; it is also the place to carry out analysis calculations and to write out thoughts concerning the meaning of the results obtained. When analyzing and discussing results, always be as
explicit, detailed, and quantitative as possible.

The main point to keep in mind when writing in your notebook is that you, and others, should be
able to read your notebook weeks or months from now and understand what was done. Specifica-
tions on keeping a lab notebook for this course are given in Appendix A.

Evaluation
• Prelabs: (15%)

Obviously, when one is doing R&D work, one does not start fiddling with instruments before
thinking about why one wants the data and what one will do with them! With this in mind,
each experiment has associated preparatory questions which must be answered before coming
to lab. Answer the pre-lab questions on a separate piece of paper that you can later attach in
your notebook; a lab instructor or T.A. will grade it toward the beginning of the lab session.
You should allot a couple of hours for proper lab preparation.

• Reports: (70%)

Reports will be submitted on Crowdmark for each laboratory exercise. If you are in the Tuesday section (FR01B) then your report is due at 11:59 PM on Thursday. If you are in the Wednesday section (FR02B) then your report is due at 11:59 PM on Friday.

A well-kept lab notebook should make writing the report easy as the report should include
the same sections as you have maintained in your notebook (see Appendix A for details).

– Title, date, names of experimenters (you and your lab partner(s))

– Objective

– Equipment

– Methods and data [Do not duplicate the procedure from the lab manual, but do detail
any things that you and your partner(s) specifically chose to do in order to carry out
the procedure.]

– Analysis (including error analysis)

– Discussion

– Conclusions (make these as quantitative as possible)

Criteria for marking of the reports are:

– Data acquisition: Did you acquire and note all the information necessary for the analysis
and for the reconstruction of the experiment? This includes information about the
instrumentation used, set-up diagram, etc, in addition to data tables.

– Analysis: Is your analysis correct? Complete?

– Discussion and conclusions: Did you understand what the data were saying about phys-
ical processes? Did you understand the limitations of your data?

– Analysis skills: This includes uncertainties, tables, figures, etc

– Presentation: Is it easy to follow your work on read-through? Is the report clear (unambiguous) and concise?

• Exit interviews: (15%)

Once the experiment is complete, but before leaving the lab, each group must seek out one
of the instructors to complete a brief exit interview.

Intellectual Responsibility
UNB’s policy
The University of New Brunswick places a high value on academic integrity and has a policy on
plagiarism, cheating and other academic offences.

Plagiarism includes:

1. quoting verbatim or almost verbatim from any source, including all electronic sources, without
acknowledgement;

2. adopting someone else’s line of thought, argument, arrangement, or supporting evidence with-
out acknowledgement;

3. submitting someone else’s work, in whatever form without acknowledgment;

4. knowingly representing as one’s own work any idea of another.

Examples of other academic offences include: cheating on exams, tests, assignments or reports; im-
personating somebody at a test or exam; obtaining an exam, test or other course materials through
theft, collusion, purchase or other improper manner, submitting course work that is identical or
substantially similar to work that has been submitted for another course; and more as set out in
the academic regulations found in the Undergraduate Calendar.

Penalties for plagiarism and other academic offences range from a minimum of F (zero) in the
assignment, exam or test to a maximum of suspension or expulsion from the University, plus a
notation of the academic offence on the student’s transcript.

For more information, please see the Undergraduate Calendar, Section B, Regulation VII.A, or
visit http://nocheating.unb.ca. It is the student’s responsibility to know the regulations.

Appendices

A Lab notebook
Keeping a good lab notebook seems like a simple and obvious task, but it requires more care and
thought than most people realize. It is a skill that requires consistent effort and discipline and is
worth developing. Your lab notebook is your written record of everything you did in the lab.
Hence it includes not only your tables of data, but notes on your procedure, and your data analysis
as well. With practice, you will become adept at sharing your time fairly between conducting the
experiment and recording relevant information in your notebook as you go along.

A.1 Layout and presentation


The cardinal rule of keeping a lab notebook is this: give yourself plenty of room. Doing so makes
extending tables or descriptions of procedure easy, and typically also makes your notebook easier
to read. Use the right-hand pages for writing (notes, data tables, analysis, etc). The left-hand
pages are to be used for graphs, computer printouts (e.g. plots, computer code), or for rough work.
They can also be used to insert extra notes later on if need be.

Label everything clearly: sections, tables, plots. You or someone else should be able to pick up
your notebook weeks or months from completion of a given experiment and easily understand what
was done. Also, it should be straightforward for anyone to find specific pieces of information in
your notebook.

It is important that you leave the first couple of pages of your notebook available for a Table of
Contents. Number pages as you go along and keep your Table of Contents up to date. Note that
the table of contents should be more than a list of the experiment titles and where they start. You
should also list key elements of the individual experiments.

A.2 Required sections


The notebook for every experiment must include the following:

• Title, date, names of experimenters (you and your lab partner(s))

• Objective

• Equipment

• Procedure and data

• Analysis (including error analysis)

• Discussion

• Conclusion

A.2.1 Title, date, experimenters


You should begin each new experiment on a fresh page in your notebook. Start with a brief title for the experiment – just enough to remind you what that section of your notebook is about. Indicate the date on which the experiment was performed and the names of the experimenters (you and your lab partner(s)).

A.2.2 Equipment
List the equipment used, identifying large pieces of equipment with manufacturer’s name and the
model, and the serial number if available. With this information, you can repeat the experiment
with the identical equipment if for some reason you are interrupted and have to return to the
equipment much later. Or, if you are suspicious of some piece of equipment, having this information
will let you avoid that particular item. Also, it is not unheard of to discover at some later date
that a certain instrument has a “feature” which can affect measurements. Hence, you need to know
exactly which instruments you used if you do not want to have to redo an experiment just in case
you used that instrument and your data were affected.

A.2.3 Procedure and Data


The Procedure and Data section can be subdivided into several subsections if there are several
distinct sets of measurements. Each such subsection should be clearly labeled.

It is often very useful to make a quick sketch of the setup, or a schematic diagram (e.g. for electronics). This is not meant to be a pretty picture; it must convey the important physical information, e.g. the various lengths or angles measured should be shown on your diagrams, and the various measurement devices must also be clearly identified (e.g. you may use two multimeters in some experiment: which one was where?). Obviously, there are circumstances where you should have different diagrams for the different subsections and others where one diagram can be used for all subsections.

The next step is to present the data you acquired. You need not copy all the details of the data
acquisition procedure given in the lab manual but you do need to include information that explains
what the data that follow are about. Sometimes this will require a sentence or two. Often, a clear
subsection title combined with information about settings that apply to all data in the table (e.g.
the voltage at which all the data in the table were acquired) will be sufficient. Be certain that you
include enough information for the experiment to be reproducible.

Numerical data More often than not, a table is the best format in which to record data: it is
easy to consult and makes trends in the data more obvious. Of course, a table is only useful if it is presented in such a way that the information in it can be used at some later time. This implies that
columns (possibly rows also) and the table itself should be clearly labelled. Units must be specified;
this can be done either in the column headings or within the table proper, as appropriate. You
should also indicate the uncertainty to be associated with each measurement. If the uncertainty is
the same for all data of a certain set, you can simply indicate that uncertainty at the top of the
column for those data.

You will need at least two columns, one for the independent variable and one for each dependent
variable. It is also good to have an additional column, usually at the right-hand edge of the page,
labelled “Remarks.” That way, if you make a measurement and decide that you didn’t quite carry
out your procedure correctly, you can make a note to that effect in the “Remarks” column.

If you use a computer for data acquisition and/or analysis, be sure to note exactly where the data
are stored, including filenames. If files get moved or renamed at some later date, you must go back
into your notebook to indicate the change.

Graphs In many cases, it is useful to present your data graphically.

The point of a graph is to visually convey the results of an experiment. If you choose the scale of your graph such that the trend present in the data is not apparent, then your graph is not useful. Axes need not start at zero; choose a scale that allows you to show all your data and for which the trends are easily seen (ask yourself: “What is the smallest range of x and y that I could use to show all my data?”).

As with tables, a graph is not very useful if it is not properly labelled. Both the axes and the graph itself should be labelled. Always include the units in the axis labels. The title of the graph should be useful; it should not be a simple repetition of the axis titles.

Finally, an essential element of any graph is uncertainty bars (sometimes called “error bars”). See
the appendix on “Experimental Uncertainty Analysis” for more details.

A.2.4 Analysis
The analysis section includes all the data processing, all the calculations done using your data. Error
analysis is part of this. There may also be graphs in this section if the graphs show calculated
data rather than raw data. What the analysis section does not include is interpretation of the
results.

A.2.5 Discussion
Once you have completed the experiment and performed any necessary calculations in your notebook, you should stop to think about what your result means, what it implies. This is where you need to call on the physics you knew before performing this experiment.

If you are trying to reproduce a previous result — theoretical or experimental — you need to
comment on whether or not the difference between your result and the previous one is significant¹. If the difference is significant, you need to think about why that is. Did the theory you
are trying to check apply to the experimental conditions you had? Were the conditions of your
experiment sufficiently different from those of the previous experiment? Think carefully about your
experiment: where might there be systematic errors for which you have not accounted? Can you
correct for them now? If not, how might you check that your suspicions are correct and quantify
the effect so that you could later correct your data? This might not always be possible, but it is
important to give it some thought. If you were not trying to reproduce a result, analysis of your
data is often a matter of identifying patterns and trends.

In both cases, you should also consider the various sources of uncertainty and how they may have
affected your results (see the appendix on “Experimental Uncertainty Analysis” for information on
assessing the impact of the various sources of uncertainty). Finally, you might comment on how
the experiment could be improved, but if it is something you could have fixed at the time (e.g. this
experiment would have worked better if we had closed the window), then you should really have
done it at the time.

A.2.6 Conclusions
Conclusions are concise, specific and generally quantitative. They summarize your findings. You
should explain how your conclusions relate back to the results of your analysis.

¹ There are times when the accepted value or theoretical value of a parameter you are trying to measure is not given in the lab manual. In such circumstances, it is up to you to look it up. You should state the source of the expected/theoretical value that you use.

B Experimental Uncertainty Analysis
An intrinsic feature of every measurement is the uncertainty associated with the result of that
measurement. No measurement is ever exact. Being able to determine easily and assess intelligently
measurement uncertainties is an important skill in any type of scientific work. The measurement
(or experimental) uncertainty should be considered an essential part of every measurement.

Why make such a fuss over measurement uncertainties? Indeed, in many cases the uncertainties
are so small that, for some purposes, we need not worry about them. On the other hand, there are
many situations in which small changes might be very significant. A clear statement of measurement
uncertainties helps us assess deviations from expected results. For example, suppose two scientists
report measurements of the speed of light (in vacuum). Scientist Curie reports 2.99 × 108 m/sec.
Scientist Wu reports 2.98 × 108 m/sec. There are several possible conclusions we could draw from
these reported results:

1. These scientists have discovered that the speed of light is not a universal constant.

2. Curie’s result is better because it agrees with the “accepted” value for the speed of light.

3. Wu’s result is worse because it disagrees with the accepted value for the speed of light.

4. Wu made a mistake in measuring the speed of light.

However, without knowing the uncertainties, which should accompany the results of the measure-
ment, we cannot assess the results at all!

B.1 Expressing experimental uncertainties


Suppose that we have measured the distance between two points on a piece of paper. There
are two common ways of expressing the uncertainty associated with that measurement: absolute
uncertainty and relative uncertainty.

B.1.1 Absolute uncertainty


We might express the result of the measurement as

5.1 cm ± 0.1 cm

By this we mean that the result (usually an average result) of the set of measurements is 5.1 cm,
but given the conditions under which the measurements were made, the fuzziness of the points,
and the refinement of our distance measuring equipment, it is our best judgment that the “actual”
distance might lie between 5.0 cm and 5.2 cm.

B.1.2 Relative (or percent) uncertainty
We might express the same measurement result as

5.1 cm ± 2%.

Here the uncertainty is expressed as a percent of the measured value. Both means of expressing
uncertainties are in common use and, of course, express the same uncertainty.
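To see that the two forms are interchangeable, here is a minimal Python sketch (illustrative only, not part of the course materials) converting the 5.1 cm ± 0.1 cm example between the two notations:

```python
value = 5.1      # measured distance, cm
abs_unc = 0.1    # absolute uncertainty, cm

# Absolute -> relative: divide the uncertainty by the measured value
rel_unc_percent = 100 * abs_unc / value
print(f"{value} cm ± {rel_unc_percent:.0f}%")    # 5.1 cm ± 2%

# Relative -> absolute: 2% of 5.1 cm recovers the original uncertainty
print(f"{value} cm ± {0.02 * value:.1f} cm")     # 5.1 cm ± 0.1 cm
```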

B.1.3 Graphical Presentation: Uncertainty Bars


If we are presenting our data on a graph, it is traditional to add uncertainty bars (also misleadingly
but commonly called “error bars”) to the plotted point to indicate the uncertainty associated with
that point. We do this by adding an “I-bar” to the graph, with the I-bar extending above and
below the plotted point (if the numerical axis is vertical). If we are plotting a point on an (x, y)
graph and there is some uncertainty in both the x and y values, then we use a horizontal I-bar for
the x uncertainty and a vertical I-bar for the y uncertainty.

B.1.4 An aside on significant figures


The number of significant figures quoted for a given result should be consistent with the uncertainty in the measurement. In the previous example, it would be inappropriate to quote the result as 5 cm ± 0.1 cm (too few significant figures in the result) or as 5.132 cm ± 0.1 cm (too many significant figures in the result). The best policy is to match the result and its uncertainty by citing as the final digit of the result the one that is limited by the uncertainty, in this case 5.1 cm ± 0.1 cm. However, it is worth noting that some scientists prefer to give the best estimate of the next significant figure after the one limited by the uncertainty, for example 5.13 cm ± 0.1 cm. The uncertainties themselves, since they are estimates, are usually quoted with only one significant figure, or in some cases (for very high precision measurements, for example) with two significant figures.
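This matching rule can be automated. The helper below is hypothetical (not part of any course software); it rounds the uncertainty to one significant figure and the result to the same decimal place:

```python
import math

def round_to_uncertainty(value, unc):
    """Quote unc to one significant figure and value to the matching decimal place."""
    if unc <= 0:
        raise ValueError("uncertainty must be positive")
    place = math.floor(math.log10(unc))   # decimal place of the leading digit of unc
    return round(value, -place), round(unc, -place)

# 5.132 cm with an uncertainty of 0.1 cm quotes as 5.1 cm ± 0.1 cm
print(round_to_uncertainty(5.132, 0.1))
# The worked example of Section B.3.5 quotes as 5.12 cm ± 0.06 cm
print(round_to_uncertainty(5.12, 0.057))
```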

B.2 Systematic errors, precision and random effects


Why aren’t measurements perfect? The causes of measurement uncertainties can be divided into
three broad categories: systematic problems, limited precision, and random effects.

B.2.1 Systematic errors


Systematic errors occur when a piece of equipment is improperly constructed, calibrated, or used.
For example, suppose the stopwatch that you are using runs properly at 20 °C, but you happen to be using it where the temperature is closer to 30 °C, which (unknown to you) causes it to run 10% too fast.

One does not generally include systematic errors in the uncertainty of a measurement: if you
know that a systematic problem exists, you should fix the problem (for example, by calibrating
the stopwatch). The most appropriate thing to do with systematic problems in an experiment is
to find them and to eliminate them. Unfortunately, no well-defined procedures exist for finding
systematic errors: the best you can do is to be clever in anticipating problems and alert to trends
in your data that suggest their presence. In some cases, you are aware of systematic effects but
you are unable to determine their effects absolutely precisely. Then it would be appropriate to
include in the stated measurement uncertainty a contribution due to the imprecise knowledge of
the systematic effects.

B.2.2 Limited precision


Limited precision is present because no measurement device can determine a value to infinite
precision. Dials and linear scales (such as meter sticks, thermometers, gauges, speedometers, and
the like) can at best be read to within one tenth (or so) of the smallest division on the scale. For
example, the smallest divisions on a typical metric ruler are 1 mm apart: the minimum uncertainty
for any measurement made with such a ruler is therefore about ±0.1 mm. This last statement is
not an absolute criterion, but a rule-of-thumb based on experience. Commonly, an uncertainty of
±half of the smallest division on the instrument’s scale is quoted.

For measuring devices with a digital readout (a digital stopwatch, a digital thermometer, a digital
voltmeter, and so on), the minimum uncertainty is ±1 in the last displayed digit. For example,
if your stopwatch reads 2.02 seconds, the “true” value for the time interval may lie anywhere between 2.01 and 2.03 seconds. (We don’t know whether the electronics in the watch rounds up or down.) So, in this case, we must take the uncertainty to be at least ±0.01 second.

Accuracy and Precision. It is important to note that “accuracy” and “precision” are not in-
terchangeable terms. Accuracy refers to how well the experimental result reproduces the true value
whereas precision is a question of the internal consistency and repeatability of a set of measure-
ments without worrying about matching those measurements with other measurements or with
standard units. For example, if I am using a stopwatch that is running too fast by a fixed amount,
my set of timing measurements might be very precise if they are very repeatable. But they are not
very accurate because my time units are not well matched to the standard second.

B.2.3 Random effects


Random effects show up in the spread of results of repeated measurements of the same quantity.
For example, five successive stopwatch measurements of the period of a pendulum might yield the
results 2.02 s, 2.03 s, 2.01 s, 2.04 s, 2.02 s. Why are these results different? In this case, the
dominant effect is that it is difficult for you to start and stop the stopwatch at exactly the right
instant: no matter how hard you try to be exact, sometimes you will press the stopwatch button a bit too early and sometimes a bit too late. These unavoidable and essentially random measurement
effects cause the results of successive measurements of the same quantity to vary.

In addition, the quantity being measured itself may vary. For example, as the temperature in the
lab room increases and decreases, the length of the pendulum may increase and decrease. Hence,
its period may change. In laboratory experiments, we try to control the environment as much as
possible to minimize these fluctuations. However, if we are measuring the light radiated by a star,
there is no way to control the star and its inherent fluctuations. Furthermore, fluctuations in the
interstellar medium and in the earth’s atmosphere may cause our readings to fluctuate in a random
fashion.

In our analysis, we will assume that we are dealing with random effects, that is, we will assume that
we have eliminated systematic errors. For most experiments we try to remove systematic errors
and reduce calibration uncertainties and the effects of limited precision so that random effects are
the dominant source of uncertainty.

B.3 Determining experimental uncertainties


There are several methods for determining experimental uncertainties. Here we mention three
methods, which can be used easily in most of the laboratory measurements in this course.

B.3.1 Estimate Technique


In this method, we estimate the precision with which we can measure the quantity of interest,
based on an examination of the measurement equipment (scales, balances, meters, etc.) being used
and the quantity being measured (which may be “fuzzy,” changing in time, etc.). For example, if
we were using a ruler with 0.1 cm marks to measure the distance between two points on a piece of
paper, we might estimate the uncertainty in the measured distance to be about ±0.05 cm, that is,
we could easily estimate the distance to within ∼ 1/2 scale marking. Here we are estimating the
uncertainty due to the limited precision of our equipment.

B.3.2 Sensitivity Estimate


Some measurements are best described as comparison or “null” measurements, in which we balance
one or more unknowns against a known quantity. For example, in some electrical circuit exper-
iments, we determine an unknown resistance in terms of a known resistance by setting a certain
potential difference in the circuit to zero. We can estimate the uncertainty in the resulting resis-
tance by slightly varying the known resistance to see what range of resistance values leads to a
“zero potential difference condition” within our ability to check for zero potential difference.

B.3.3 Repeated Measurement (Statistical) Technique
If a measurement is repeated in independent and unbiased ways, the results of the measurements
will be slightly different each time. It is generally agreed that a statistical analysis of these results gives the “best” value of the measured quantity and the “best” estimate of the uncertainty to be associated with that result.

Mean Value (Average Value) The usual method of determining the best value for the result is to compute the “mean value” of the results: if $x_1, x_2, \ldots, x_N$ are the $N$ results of the measurement of the quantity $x$, then the mean value of $x$, usually denoted by $\bar{x}$, is defined as

$$\bar{x} = \frac{x_1 + x_2 + \cdots + x_N}{N} = \frac{1}{N}\sum_{i=1}^{N} x_i \qquad (1)$$

Standard Deviation The uncertainty in the result is usually expressed as the “root-mean-squared deviation” (also called the “standard deviation”), usually denoted as $\Delta x$ (read “delta x”) or as $\sigma_x$ (Greek letter sigma). [Note that here $\Delta x$ does not mean the change in $x$, but rather is a measure of the spread in $x$ values in the set of measurements.] Formally, the standard deviation is computed as

$$\Delta x = \sigma_x = \sqrt{\frac{(x_1 - \bar{x})^2 + \cdots + (x_N - \bar{x})^2}{N - 1}} \qquad (2)$$

Let’s decipher Eq. 2 in words. Eq. 2 tells us to take the difference between each of the measured values ($x_1, x_2, \ldots$) and the mean value $\bar{x}$. We then square each of the differences (so we count plus and minus differences equally). Next we find the average of the squared differences by adding them up and dividing by the number of measured values. (The $-1$ in the denominator of Eq. 2 is a mathematical refinement to reflect the fact that we have already used the $N$ values once to calculate the mean. In practice, the numerical difference between using $N$ and using $N-1$ in the denominator is insignificant for our purposes.) Finally, we take the square root of that result to give a quantity which has the same units as the original $x$ values. Although determining the standard deviation may be tedious for a large array of data, it is generally accepted as the “best” estimate of the measurement uncertainty.

The standard deviation gives the uncertainty in a single value, but we often want the uncertainty in the mean value. This quantity is called the standard deviation of the mean (SDOM) and is computed as

$$\mathrm{SDOM} = \frac{\sigma_x}{\sqrt{N}} \qquad (3)$$
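Equations (1)–(3) are straightforward to implement. The following Python sketch is purely illustrative (the course does not prescribe any particular software); it applies the three formulas to the five pendulum-period timings quoted in Section B.2.3:

```python
import math

def mean(xs):
    """Eq. (1): arithmetic mean of the N measurements."""
    return sum(xs) / len(xs)

def std_dev(xs):
    """Eq. (2): root-mean-squared deviation, with the N - 1 denominator."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def sdom(xs):
    """Eq. (3): standard deviation of the mean, sigma_x / sqrt(N)."""
    return std_dev(xs) / math.sqrt(len(xs))

# Five stopwatch measurements of a pendulum period (s), from Section B.2.3
periods = [2.02, 2.03, 2.01, 2.04, 2.02]
print(f"mean    = {mean(periods):.3f} s")
print(f"std dev = {std_dev(periods):.3f} s")
print(f"SDOM    = {sdom(periods):.3f} s")
```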

B.3.4 Interpretation
What meaning do we give to the uncertainty determined by one of the methods given above? The
usual notion is that it gives us an estimate of the spread of values we would expect if we repeated the measurements many times (being careful to make the repetitions independent of one another).
For example, if we repeated the distance measurements cited in the example (distance between
2 points on a piece of paper) many times, we would expect most (statisticians worry a lot about
how to make “most” more quantitative) of the measurements to fall within ±σx of the mean. See
example below.

95% Confidence Range Sometimes uncertainties are expressed in terms of what is called the
“95% Confidence Range” or “95% Confidence Limits.” These phrases mean that if we repeat the
measurements over and over many times, we expect 95% of the results to fall within the stated
range. (It also means that we expect 5% of the results to fall outside this range!) Numerically,
the 95% Confidence Range is about twice the standard deviation. Thus, we expect 95% of future
measurements of that quantity to fall in the range centred on the mean value.

Aside: The standard deviation corresponds to the “68% Confidence Range.” The actual multiplicative factor for the 95% Confidence Range is 1.96 if the measurements are distributed according to the so-called “normal” (“Gaussian”) distribution. But, for all practical purposes, using 2 is fine for estimating the 95% Confidence Range.
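As a quick illustration of the factor-of-two rule of thumb (a sketch only; the data are the pendulum timings from Section B.2.3):

```python
import statistics

# Pendulum-period measurements (s) from Section B.2.3
periods = [2.02, 2.03, 2.01, 2.04, 2.02]

m = statistics.mean(periods)
s = statistics.stdev(periods)     # sample standard deviation (N - 1 denominator)

# ~95% of future measurements are expected to fall within 2 standard deviations
low, high = m - 2 * s, m + 2 * s
print(f"95% confidence range: {low:.3f} s to {high:.3f} s")
```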

In general, we cannot expect exact agreement among the various methods of determining
experimental uncertainties. As a rule of thumb, we usually expect the different methods
of determining the uncertainty to agree within a factor of two or three.

B.3.5 Example
Suppose that five independent observers measure the distance between two rather fuzzy marks on
a piece of paper and obtain the following results:

d1 = 5.05 cm
d2 = 5.10 cm
d3 = 5.15 cm
d4 = 5.20 cm
d5 = 5.10 cm

If the observers were using a scale with 0.1 cm markings, the estimate technique would suggest an
uncertainty estimate of about ±0.05 cm. The statistical technique yields a mean value d = 5.12 cm
and for the standard deviation 0.057 cm ≈ 0.06 cm. We see that in this case we have reasonable
agreement between the two methods of determining the uncertainties. We should quote the result
of this measurement as 5.12 cm ± 0.06 cm or 5.12 cm ± 1%.
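The mean and standard deviation quoted in this example can be reproduced directly; a sketch using Python's standard library:

```python
import statistics

# The five distance measurements from the example (cm).
d = [5.05, 5.10, 5.15, 5.20, 5.10]

mean = statistics.mean(d)    # 5.12 cm
sd = statistics.stdev(d)     # about 0.057 cm, quoted as 0.06 cm
print(f"{mean:.2f} cm ± {sd:.2f} cm")   # prints 5.12 cm ± 0.06 cm
```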

In practice, it is not really meaningful to use the statistical estimation procedure unless
you have at least ten or so independent measurements.

It is worth re-emphasizing at this point that our analysis applies only to “random” uncertainties,
that is, essentially uncontrollable fluctuations in equipment or in the system being measured, that
collectively lead to scatter in our measured results. We have (implicitly) assumed that we have
eliminated (or will correct for) so-called systematic errors, that is, effects that are present that
may cause our results to be systematically (not randomly) high or low.

B.4 Assessing uncertainties and deviations from expected results


The primary reason for keeping track of measurement uncertainties is that the uncertainties tell us
how much confidence we should have in the results of the measurements.

If the results of our measurements are compared to results expected on the basis of
theoretical calculations or on the basis of previous experiments, we expect that, if no
mistakes have been made, the results should agree with each other within the combined
uncertainties. For the uncertainty we traditionally use the 95% Confidence Range (that
is, two times the standard deviation).

(Note that even a theoretical calculation may have an uncertainty associated with it because there
may be uncertainties in some of the numerical quantities used in the calculation or various
mathematical approximations may have been used in reaching the result.) As a rule of thumb, if the
measured results agree with the expected results within the combined uncertainties, we usually can
view the agreement as satisfactory. If the results disagree by more than the combined uncertainties,
something interesting is going on and further examination is necessary.

Example Suppose a theorist from MIT predicts the value of X in some experiment to be
333 ± 1 Nm/s. An initial experiment gives the result 339 ± 7 Nm/s, which overlaps the
theoretical prediction within the combined uncertainties. We conclude that there is satisfactory
agreement between the measured value and the predicted value given the experimental and theo-
retical uncertainties. However, suppose that we refine our measurement technique and get a new
result 335.1 ± 0.1 Nm/s. Now the measured result and the theoretical result do not agree. [Note
that our new measured result is perfectly consistent with our previous result with its somewhat
larger uncertainty.] We cannot tell which is right or which is wrong without further investigation
and comparison.
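The comparison in this example can be sketched as a small helper (`agree` is our own hypothetical function, and treating the quoted uncertainties as the combined half-widths is an assumption for illustration):

```python
def agree(value1, unc1, value2, unc2):
    """True if two results agree within their combined uncertainties."""
    return abs(value1 - value2) <= unc1 + unc2

print(agree(333, 1, 339, 7))      # initial experiment overlaps: True
print(agree(333, 1, 335.1, 0.1))  # refined result does not: False
```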

B.5 Propagating uncertainties


(Sometimes misleadingly called “propagation of errors”) In most measurements, some calculation
is necessary to link the measured quantities to the desired result. The question then naturally
arises: How do the uncertainties in the measured quantities affect (propagate to) the results? In
other words, how do we estimate the uncertainty in the desired result from the uncertainties in the
measured quantities?

B.5.1 “High-low Method”
One way to do this is to carry through the calculation using the extreme values of the measured
quantities, for example 5.06 cm and 5.18 cm from the distance measurement example above, to find
the range of result values. This method is straightforward but quickly becomes tedious if several
variables are involved.

Example Suppose that you wish to determine a quantity, X, which is to be calculated indirectly
using the measurements of a, b, and c, together with a theoretical expression: X = ab/c.

Suppose, further, that you have already determined that

a = 23.5 ± 0.2 m
b = 116.3 ± 1.1 N
c = 8.05 ± 0.03 s

The “best” value of X is computed from the best (mean) values of a, b, and c:
Xbest = (23.5 × 116.3) / 8.05 = 339.509 Nm/s

(We’ll clean up the significant figures later.) But X could be about as large as what you get by
using the maximum values of a and b and the minimum value of c:
Xhigh = (23.7 × 117.4) / 8.02 = 346.930 Nm/s

And similarly, we find


Xlow = (23.3 × 115.2) / 8.08 = 332.198 Nm/s

Notice that Xhigh and Xlow differ from Xbest by about the same amount, namely 7.3. Also note that
it would be silly to give six significant figures for X. Common sense suggests reporting the value of
X as, say, X = 339.5 ± 7.3 Nm/s or X = 339 ± 7 Nm/s.
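The high-low calculation above can be sketched in code (the function name `high_low` is ours):

```python
def high_low(a, da, b, db, c, dc):
    """High-low method for X = a*b/c: evaluate at the extreme values."""
    x_best = a * b / c
    x_high = (a + da) * (b + db) / (c - dc)   # maximise numerator, minimise denominator
    x_low = (a - da) * (b - db) / (c + dc)    # and vice versa
    return x_best, x_high, x_low

x_best, x_high, x_low = high_low(23.5, 0.2, 116.3, 1.1, 8.05, 0.03)
print(f"X = {x_best:.1f} +{x_high - x_best:.1f}/-{x_best - x_low:.1f} Nm/s")
```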

B.5.2 General Method


The general treatment of the propagation of uncertainties is given in detail in texts on the statistical
analysis of experimental data. A particularly good reference at this level is John Taylor, An
Introduction to Error Analysis. Here we will develop a very simple, but general method for
finding the effects of uncertainties.

Suppose we want to calculate some result R, which depends on the values of several measured
quantities x, y, z:
R = f (x, y, z) (4)

Let us also suppose that we know the mean values and the uncertainties (standard deviations, for
example) for each of these quantities. Then the uncertainty in R due to the uncertainty in x, for
example, is calculated from
∆x R = f (x + ∆x, y, z) − f (x, y, z) (5)
where the subscript on left-hand ∆ reminds us that we are calculating the effect due to x alone.
We might call this the “partial uncertainty.” Note that Eq. 5 is much like our “high-low” method
except that we focus on the effect of just one of the variables. In a similar fashion, we may calculate
the partial uncertainties in R due to ∆y and to ∆z.

By calculating each of these contributions to the uncertainty individually, we can find
out which of the variables has the largest effect on the uncertainty of our final result.
If we want to improve the experiment, we then know how to direct our efforts.

We now need to combine the individual contributions to get the overall uncertainty in the result.
The usual argument is the following: If we assume that the measurements of the variables are
independent so that variations in one do not affect the variations in the others, then we argue that
the net uncertainty is calculated as the square root of the sum of the squares of the individual
contributions:
∆R = √((∆x R)² + (∆y R)² + (∆z R)²)    (6)

The formal justification of this statement comes from the theory of statistical distributions and
assumes that the distribution of successive measurement values is described by the so-called Gaus-
sian (or, equivalently, normal) distribution.

In rough terms, we can think of the fluctuations in the results as given by a kind of “motion” in
a “space” of variables x,y,z. If the motion is independent in the x, y, and z directions, then the
net “speed” is given as the square root of the sum of the squares of the “velocity” components. In
most cases, we simply assume that the fluctuations due to the various variables are independent
and use Eq. 6 to calculate the net effect of combining the contributions to the uncertainties.

Note that our general method applies no matter what the functional relationship between R and
the various measured quantities. It is not restricted to additive and multiplicative relationships as
are the usual simple rules for handling uncertainties.

The method introduced here is actually just a simple approximation to a method that uses partial
derivatives. Recall that in computing partial derivatives, we treat as constants all the variables
except the one with respect to which we are taking the derivative. For example, to find the
contribution of x to the uncertainty in R, we calculate
∆x R = (∂f (x, y, z)/∂x) ∆x    (7)
with analogous expressions for the effects of y and z. We then combine the individual contributions
as above.
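Eqs. 5 and 6 translate directly into a finite-difference sketch (`propagate` is our own illustrative helper). Applied to the X = ab/c example from B.5.1, it gives about ±4.5 Nm/s, smaller than the high-low ±7, because the quadrature sum lets independent fluctuations partially cancel:

```python
def propagate(f, values, uncertainties):
    """Combine partial uncertainties (Eq. 5) in quadrature (Eq. 6)."""
    best = f(values)
    total_sq = 0.0
    for i, dx in enumerate(uncertainties):
        shifted = list(values)
        shifted[i] += dx                 # vary one variable at a time
        partial = f(shifted) - best      # Eq. 5: partial uncertainty
        total_sq += partial ** 2
    return best, total_sq ** 0.5         # Eq. 6: quadrature sum

best, dR = propagate(lambda v: v[0] * v[1] / v[2],
                     [23.5, 116.3, 8.05], [0.2, 1.1, 0.03])
print(f"X = {best:.1f} ± {dR:.1f} Nm/s")   # X = 339.5 ± 4.5 Nm/s
```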

B.5.3 Connection to the traditional simple rules for uncertainties


To see where the usual rules for combining uncertainties come from, let’s look at a simple functional
form:
R=x+y

Using our procedure developed above, we find that

∆x R = ∆x
∆y R = ∆y

and combining uncertainties yields


∆R = √((∆x)² + (∆y)²)    (8)

The traditional rule for handling an additive relationship says that we should add the two
(absolute) uncertainty contributions. We see that the traditional method overestimates the
uncertainty to some extent.

B.5.4 Traditional simple rules for uncertainties


For a sum Add the absolute uncertainties, i.e.

If A = B + C then ∆A = ∆B + ∆C

For a difference Add the absolute uncertainties, i.e.

If A = B − C then ∆A = ∆B + ∆C

For a product Add the relative uncertainties, i.e.

If A = B × C then ∆A/A = ∆B/B + ∆C/C
For a ratio Add the relative uncertainties, i.e.

If A = B/C then ∆A/A = ∆B/B + ∆C/C

For a square root Divide the relative uncertainty by 2.

If A = √B then ∆A/A = ∆B/(2B)
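As a sketch (the helper name is ours), applying the product/ratio rule to the earlier X = ab/c example reproduces roughly the high-low spread of about ±7.3 Nm/s:

```python
def product_ratio_rule(A, relative_uncs):
    """Traditional rule for products and ratios: add relative uncertainties."""
    return A * sum(relative_uncs)

X = 23.5 * 116.3 / 8.05
dX = product_ratio_rule(X, [0.2 / 23.5, 1.1 / 116.3, 0.03 / 8.05])
print(f"X = {X:.0f} ± {dX:.1f} Nm/s")   # close to the high-low ±7.3
```

This agreement is expected: the traditional rules are a linearised version of the high-low method, while the quadrature method of Eq. 6 gives a smaller uncertainty for independent variables.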
