What is a measurement?
A measurement tells you about a property of something you are investigating, giving it a number and a unit. Measurements are always made using an instrument of some kind. Rulers, stopclocks, chemical balances and thermometers are all measuring instruments.
Some processes seem to be measuring, but are not, e.g. comparing two lengths of string to see which one is longer. Tests that lead to a simple yes/no or pass/fail result do not always involve measuring.
The quality of measurements
Evaluating the quality of measurements is an essential step on the way to sensible conclusions. Scientists use a special vocabulary that helps them think clearly about their data. Key terms that describe the quality of measurements are:
- Validity
- Accuracy
- Precision (repeatability or reproducibility)
- Measurement uncertainty
Validity: A measurement is ‘valid’ if it measures what it is supposed to be measuring. What is measured must also be relevant to the question being investigated.
If a factor is uncontrolled, the measurements may not be valid. For example, if you were investigating the heating effect of a current (P = I²R) by increasing the current, the resistance of the wire may change as it is heated by the current to different temperatures. This would skew the results.
Correct conclusions can only be drawn from valid data.
Accuracy: This describes how closely a measurement comes to the true value of a physical quantity. The ‘true’ value of a measurement is the value that would be obtained by a perfect measurement, i.e. in an ideal world. As the true value is not known, accuracy is a qualitative term only.
Many measured quantities have a range of values rather than one ‘true’ value. For example, a collection of resistors all marked 1 kΩ will have a range of values, but the mean value should be 1 kΩ. You can have more confidence in a number of measurements of a sample than in an individual measurement. The variation enables you to identify a mean, a range and the distribution of values across the range.
Precision: The closeness of agreement between replicate measurements on the same or similar objects under specified conditions.
Repeatability and reproducibility: The extent to which a repeated measurement gives a consistent result. Repeatability refers to data collected by the same operator, in the same laboratory, over a short timescale. Reproducibility refers to data collected by different operators, in different laboratories. You can have more confidence in conclusions and explanations if they are based on consistent data.
Measurement uncertainty: The uncertainty of a measurement is the doubt that exists about its value. For any measurement – even the most careful – there is always a margin of doubt. In everyday speech, this might be expressed as ‘give or take…’, e.g. a stick might be two metres long ‘give or take a centimetre’.
The doubt about a measurement has two aspects:
- the width of the margin, or ‘interval’. This is the range of values one expects the true value to lie within. (Note this is not necessarily the range of values one might obtain when taking measurements of the value, which may include outliers.)
- the ‘confidence level’, i.e. how sure the experimenter is that the true value lies within that margin. Discussion of confidence levels is generally appropriate only in advanced level science courses.
Uncertainty in measurements can be reduced by using an instrument that has a scale with smaller scale divisions. For example, if you use a ruler with a centimetre scale then the uncertainty in a measured length is likely to be ‘give or take a centimetre’. A ruler with a millimetre scale would reduce the uncertainty in length to ‘give or take a millimetre’.
It is important not to confuse the terms ‘error’ and ‘uncertainty’. Error refers to the difference between a measured value and the true value of a physical quantity being measured. Whenever possible we try to correct for any known errors: for example, by applying corrections from calibration certificates. But any error whose value we do not know is a source of uncertainty.
Measurement errors can arise from two sources:
- a random component, where repeating the measurement gives an unpredictably different result;
- a systematic component, where the same influence affects the result for each of the repeated measurements.
Every time a measurement is taken under what seem to be the same conditions, random effects can influence the measured value. A series of measurements therefore produces a scatter of values about a mean value. The influence of variable factors may change with each measurement, changing the mean value. Increasing the number of observations generally reduces the uncertainty in the mean value.
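The effect of taking repeats can be shown with a short calculation. This is a sketch in Python; the five readings are invented for illustration:

```python
import statistics

# Five repeat readings of the same length (cm); the scatter is random error.
readings = [24.2, 24.6, 24.4, 24.1, 24.7]

mean = statistics.mean(readings)                     # best estimate of the value
spread = statistics.stdev(readings)                  # scatter of individual readings
uncertainty_in_mean = spread / len(readings) ** 0.5  # shrinks as the number of readings grows

print(f"mean = {mean:.2f} cm, uncertainty in mean = {uncertainty_in_mean:.2f} cm")
```

Because the uncertainty in the mean falls as the square root of the number of readings, four times as many observations are needed to halve it.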
Systematic errors (measurements that are either consistently too large, or too small) can result from:
- poor technique (e.g. carelessness with parallax when sighting onto a scale);
- zero error of an instrument (e.g. a ruler that has been shortened by wear at the zero end, or a newtonmeter that reads a value when nothing is hung from it);
- poor calibration of an instrument (e.g. every volt is measured too large).
Whenever possible, a good experimenter will try to correct for systematic errors, thus improving accuracy. For example, if it is known that a balance always reads 2 g greater than the true reading, it is perfectly possible to compensate for that error by simply subtracting 2 g from all readings taken.
Sometimes you can only find a systematic error by measuring the same value by a different method.
Errors that are not recognized contribute to measurement uncertainty.
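The balance example above can be written as a one-line correction. A minimal sketch; the 2 g zero error is the figure from the example, and the raw readings are invented:

```python
ZERO_ERROR_G = 2.0  # the balance is known to read 2 g high

def corrected_mass(reading_g):
    """Remove the known systematic (zero) error from a raw balance reading."""
    return reading_g - ZERO_ERROR_G

raw_readings = [52.0, 47.5, 103.2]
print([corrected_mass(r) for r in raw_readings])  # each value 2 g lower
```

Correcting a known systematic error improves accuracy, but any doubt about the 2 g figure itself remains a source of uncertainty.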
ASE/Nuffield booklet: The Language of Measurement
In 2010, following a series of meetings with Awarding Organisations, the ASE and Nuffield Foundation jointly published a booklet to enable teachers, publishers, awarding bodies and others in England and Wales to achieve a common understanding of key terms that arise from practical work in secondary science.
This webpage is based on the National Physical Laboratory's Good Practice Guide: A Beginner's Guide to Uncertainty of Measurements written by Stephanie Bell.
Rough and ready measurements
To many students, the image of science is one of exactness and perfection. And yet, good scientists make rough estimates again and again, sometimes without ever making a precise measurement. It is important to teach students that rough measurements are respectable.
Of course, high precision is of the essence in many cases. A modern mass spectrograph must yield measurements of high precision if tiny mass-differences between one atomic nucleus and another are to be interpreted as energy-differences using E = mc².
Yet when Chadwick measured the nuclear charges of copper, silver and platinum by alpha scattering in 1920, relatively rough measurements showed that Rutherford's atomic model was correct. Chadwick showed that the nuclear charge (in electron units) is equal to the atomic number, the number of the element in the periodic table, a series arranged in order of atomic masses. Those answers were suspected from the general pattern of theory, and had to be whole numbers since a complete atom (of nucleus plus outside electrons) is neutral. Much more precise measurements were neither needed nor, at the time, possible. Even before that, the first hint of atomic number measurements came in 1906, from Barkla's attempt to measure the number of electrons in a carbon atom by scattering X-rays. His measurements suggested about 6 electrons per atom, in fact somewhere between 5 and 7, yet this rough estimate enabled the development of atomic theory to proceed.
Galileo made the roughest measurements for his test of constant acceleration down an incline. He knew he was right in his simple summary of natural behaviour. He just wanted to convince some people by quoting an experiment.
Rough estimates are not just a misfortune peculiar to early, clumsy experimenters. They are the right thing in some parts of a growing science. Nuclear physicists and some cosmic ray physicists make very precise measurements. In other cases, they seek only a rough estimate to settle an essential point in the progress of their knowledge.
You cannot give the above examples to students if they do not know the science. In that case, the following may be some help.
"An invading army is about to go into a foreign land and the general wants to know the size of the enemy's forces. He learns that it is 18 000. Does it matter much to his plans if it is 19 000 or 15 000? What he wants to know is that it is about 18 000 and not 30 000. If he waits for his staff to carefully sift through reports and add up the guesses and check them and find that the enemy really has 18 473 men, then the general may set out too late to win the battle."
Other examples include:
- estimating how many snow ploughs are needed to clear a snowfall in the middle of the night;
- the Chancellor of the Exchequer making a clever guess at the number of road vehicle licences that will be paid for in the next year;
- a rough guess that the Sun is 300 000 times as massive as the Earth suffices to tell astronomers that the Earth is not massive enough to affect the orbit of the planet Venus significantly.
Straight line graphs
Drawing straight line graphs
Once you have plotted the points of a graph, checked for any anomalies and decided that the best fit will be a straight line:
- To select the best fit straight line, take a weighted average of your measurements, giving less weight to points that seem out of line with the rest.
- Use a ruler to draw the line.
Interpreting straight line graphs
Proportionality: A straight line through the origin represents direct proportionality between the two variables plotted, y = mx. If the plotted points (expressing your experimental results) lie close to such a line, then they show the behaviour of your experiment is close to that proportionality.
Linear relationships: In many experiments the best straight line fails to go through the origin. In that case, there is a simple linear relationship, y = mx + c. Historically, one of the most far-reaching examples is the graph of the pressure of a gas in a flask (constant volume) against temperature. The intercept on the temperature axis gives an absolute zero of temperature, and an estimate of its value.
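The pressure–temperature example can be tried numerically: fit y = mx + c by least squares and extrapolate to the temperature at which the pressure would reach zero. A sketch in Python; the readings are invented, chosen to lie close to the ideal-gas straight line:

```python
# Pressure (kPa) of a fixed volume of gas at several temperatures (deg C).
# Illustrative data only, lying close to a straight line p = m*T + c.
temps = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]
pressures = [101.3, 108.7, 116.1, 123.6, 131.0, 138.4]

n = len(temps)
mean_t = sum(temps) / n
mean_p = sum(pressures) / n

# Least-squares slope and intercept for p = m*T + c.
num = sum((t - mean_t) * (p - mean_p) for t, p in zip(temps, pressures))
den = sum((t - mean_t) ** 2 for t in temps)
m = num / den
c = mean_p - m * mean_t

# The fitted line crosses p = 0 at T = -c/m: an estimate of absolute zero.
absolute_zero = -c / m
print(f"p = {m:.3f} T + {c:.1f}; intercept on T axis = {absolute_zero:.0f} deg C")
```

With these figures, the intercept on the temperature axis comes out close to −273 °C, the accepted value of absolute zero.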
Identifying systematic errors: In some experiments, all measurements of one quantity are wrong by a constant amount. This is called a ‘systematic error’. (For example, in a pendulum investigation of T against l, all the lengths may be too small because you forgot to add the radius of the bob. Plotting T² against l will still give a straight line if every value of l is too short by the radius, but the line does not pass through the origin.) In such cases, the intercept can give valuable information.
Checking for constancy: Consider the acceleration of a trolley. If you plot s against t², where s is the distance and t is the total time of travel from rest, then you hope to get a straight line through the origin. [A straight line through the origin shows that s = constant × t².]
In fact we know that s is proportional to t² for any case of constant acceleration from rest. Simple mathematics leads from the statement that Δv/Δt = acceleration to s = ½at², provided a is constant. [Δv = change of velocity, Δt = time taken.]
If a is constant, then s = ½at² follows mathematically. So why might you plot the graph? To find out whether the trolley really did move with constant acceleration.
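The same check can be made by calculation rather than by plotting: if the acceleration is constant, s/t² should be nearly the same for every reading. A sketch with invented trolley data:

```python
# Distance s (m) from rest after total time t (s); invented readings with
# small scatter around s = 0.5 * a * t**2 for a = 2.0 m/s^2.
times = [0.2, 0.4, 0.6, 0.8, 1.0]
distances = [0.041, 0.158, 0.363, 0.642, 0.995]

# Near-constant ratios s/t^2 support constant acceleration.
ratios = [s / t ** 2 for s, t in zip(distances, times)]
print("s/t^2 for each reading:", [round(r, 3) for r in ratios])

# Since s = 0.5 * a * t**2, the acceleration is twice the mean ratio.
mean_ratio = sum(ratios) / len(ratios)
print(f"estimated acceleration = {2 * mean_ratio:.2f} m/s^2")
```

A large spread in the ratios, or a trend up or down, would suggest the acceleration was not constant after all.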
Cooling corrections
Experiments that involve changing the temperature of a material and measuring that change are necessarily subject to energy transfers between that material and the surrounding environment. These transfers will often not be accounted for and can cause inaccuracies. If the temperatures used are within 10°C or so of the surroundings, the inaccuracy is unlikely to be significant compared with other school laboratory errors. However, if you really want to make the correction, a number of methods can be used, all based upon Newton's law of cooling.
In some cases it is possible to cool an object before starting the experiment. You can arrange this so that its temperature difference with the surroundings is equal (but opposite in sign) after heating. It is then reasonable to assume that any energy transfer away from the object when it is above the temperature of its surroundings is countered by an energy transfer into the object when its temperature is below. This technique can be employed when mixing liquids, or when measuring the specific thermal capacity of metal blocks.
The formal Newton's law method assumes that the rate of loss of heat to the surroundings is proportional to the temperature excess above the surroundings, i.e.
dQ/dt = k(T − Troom)
- Where Q is the quantity of energy transferred in a time t,
- T and Troom are the temperatures of the cooling object and the surroundings respectively,
- and k is a constant of proportionality.
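The behaviour this equation describes can be illustrated with a short step-by-step (Euler) simulation of an object that is heated and then left to cool. All the numbers here are invented for illustration:

```python
# Euler simulation of dT/dt = (heating term) - k*(T - T_room),
# i.e. Newton's law of cooling plus a constant heater input.
T_room = 20.0        # deg C
k = 0.01             # cooling constant, per second (illustrative)
heating_rate = 0.5   # temperature rise rate due to the heater, deg C/s
dt = 1.0             # time step, s

T = T_room
history = []
for step in range(600):                    # heater on for 300 s, then off
    heat = heating_rate if step < 300 else 0.0
    T += (heat - k * (T - T_room)) * dt    # Newton cooling opposes the heating
    history.append(T)

print(f"maximum temperature ~ {max(history):.1f} deg C")
print(f"temperature after cooling ~ {history[-1]:.1f} deg C")
```

Notice that the temperature excess over the room decays away after the heater is switched off, exactly the behaviour exploited by the correction methods below.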
Measure the temperature of the object (block, calorimeter, etc.) at the start of the heating, t0. Read the temperature at about 30-second intervals until the maximum temperature has been passed, and for a significant time afterwards. The longer this time, the more accurate the correction.
Plot the temperature against time on graph paper. On the graph (see the diagram below), select times t1 and t3, equal times either side of the maximum temperature at t2. The energy transferred between t2 and t3 is found by integrating the equation above between these values to give:
Q = k∫(T − Troom)dt, the integral being taken between t2 and t3
The right-hand side of this equation is proportional to the area under the curve of (T − Troom) versus t, denoted by A2 in the diagram below.
The left-hand side, Q, the energy transferred to the surroundings in the interval from t2 to t3, is proportional to ΔT3, the drop in temperature during this time interval.
Remember that Q = mcΔθ, where m is the mass of the cooling body, c is its specific thermal capacity, and Δθ is the drop in temperature.
Therefore ΔT3 = KA2, where K is another constant.
Similarly, the drop in temperature due to cooling in the time interval between t1 and t2 is given by ΔT2 = KA1. (Note that, since the mechanism by which cooling takes place is the same between t1 and t2 as between t2 and t3, the constant of proportionality is the same for both regions.)
So ΔT2/ΔT3 = A1/A2
If T2 is the temperature observed at time t2, the temperature which the object would have reached had there been no thermal transfer to the surroundings is:
T2 + ΔT2 = T2 + ΔT3(A1/A2)
A1 and A2 can be measured by counting squares on graph paper.
Image courtesy of www.upscale.utoronto.ca/IYearLab/heatcap.pdf
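Counting squares can equally be done numerically. With the readings as lists, A1 and A2 become trapezium-rule integrals of (T − Troom), and the correction follows from ΔT2 = ΔT3 × (A1/A2). A sketch with invented readings, taking t1 and t3 three intervals either side of the maximum:

```python
T_room = 20.0
times = list(range(0, 330, 30))   # readings every 30 s: 0, 30, ..., 300
temps = [20.0, 28.0, 36.0, 43.0, 49.0, 54.0, 52.5, 51.2, 50.0, 48.9, 47.8]

def area(ts, Ts):
    """Area under (T - T_room) versus t, by the trapezium rule."""
    return sum((Ts[i] + Ts[i + 1] - 2 * T_room) / 2 * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))

i2 = temps.index(max(temps))      # t2: the time of maximum temperature
i1, i3 = i2 - 3, i2 + 3           # t1, t3: equal times either side of t2

A1 = area(times[i1:i2 + 1], temps[i1:i2 + 1])   # heating side of the maximum
A2 = area(times[i2:i3 + 1], temps[i2:i3 + 1])   # cooling side of the maximum
dT3 = temps[i2] - temps[i3]       # observed drop between t2 and t3
dT2 = dT3 * (A1 / A2)             # cooling correction at the maximum

print(f"A1 = {A1:.0f}, A2 = {A2:.0f}")
print(f"corrected maximum = {temps[i2] + dT2:.1f} deg C")
```

The trapezium rule plays the same role as counting squares on graph paper; closer-spaced readings give a better estimate of the two areas.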
If you are using a heater, a simpler method is as follows (courtesy of Frank Grenfell on the CAPT email discussion list):
- Observe (there is no need to record) the temperature as it rises, starting at t0. Turn off the heater and record the time, t1. You need this anyway to find the energy transferred to the object.
- Keep the clock running.
- Observe the temperature as it continues to rise, and reaches its maximum value (temperature Tmax) at time t2. Keep the clock running.
- Record the temperature (T) after a further 0.5t2 (i.e. half as long again as it took to reach the maximum temperature).
- The cooling correction to be added is (Tmax − T).
- Reasoning: the rate at which energy is transferred to the surroundings while the block is being heated is roughly half what it is at Tmax. So if you observe the temperature drop from Tmax over a time interval equal to half t2, that should be about right.