
A language for measurements

Teaching Guidance for 14-16 PRACTICAL PHYSICS

What is a measurement?

A measurement tells you about a property of something you are investigating, giving it a number and a unit. Measurements are always made using an instrument of some kind. Rulers, stopclocks, chemical balances and thermometers are all measuring instruments.

Some processes seem to be measurements, but are not: for example, comparing two lengths of string to see which one is longer. Tests that lead to a simple yes/no or pass/fail result do not always involve measuring.

The quality of measurements

Evaluating the quality of measurements is an essential step on the way to sensible conclusions. Scientists use a special vocabulary that helps them think clearly about their data. Key terms that describe the quality of measurements are:

  • Validity
  • Accuracy
  • Precision (repeatability or reproducibility)
  • Measurement uncertainty

Validity: A measurement is ‘valid’ if it measures what it is supposed to be measuring. What is measured must also be relevant to the question being investigated.

If a factor is uncontrolled, the measurements may not be valid. For example, if you were investigating the heating effect of a current (P = I²R) by increasing the current, the resistance of the wire may change as it is heated by the current to different temperatures. This would skew the results.

Correct conclusions can only be drawn from valid data.

Accuracy: This describes how closely a measurement comes to the true value of a physical quantity. The ‘true’ value of a measurement is the value that would be obtained by a perfect measurement, i.e. in an ideal world. As the true value is not known, accuracy is a qualitative term only.

Many measured quantities have a range of values rather than one ‘true’ value. For example, a collection of resistors all marked 1 kΩ will have a range of values, but the mean value should be 1 kΩ. You can have more confidence in a number of measurements of a sample than in an individual measurement. The variation enables you to identify a mean, a range and the distribution of values across the range.
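As a rough sketch of this idea (using hypothetical measured values, not data from the guide), the mean and range of a sample of nominally 1 kΩ resistors could be found like this:

```python
# Hypothetical measured resistances (in ohms) for resistors all marked 1 kΩ
values = [998, 1003, 996, 1005, 1001, 999, 1002, 996]

mean = sum(values) / len(values)          # central estimate for the batch
value_range = max(values) - min(values)   # spread across the sample

print(f"mean  = {mean:.1f} ohms")   # 1000.0
print(f"range = {value_range} ohms")  # 9
```

The mean here comes out at the marked value even though no individual resistor measures exactly 1 kΩ, which is why a sample gives more confidence than a single measurement.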

Precision: The closeness of agreement between replicate measurements on the same or similar objects under specified conditions.

Repeatability or reproducibility (precision): The extent to which a measurement replicated under the same conditions gives a consistent result. Repeatability refers to data collected by the same operator, in the same lab, over a short timescale. Reproducibility refers to data collected by different operators, in different laboratories. You can have more confidence in conclusions and explanations if they are based on consistent data.

Measurement uncertainty: The uncertainty of a measurement is the doubt that exists about its value. For any measurement – even the most careful – there is always a margin of doubt. In everyday speech, this might be expressed as ‘give or take…’, e.g. a stick might be two metres long ‘give or take a centimetre’.

The doubt about a measurement has two aspects:

  • the width of the margin, or ‘interval’. This is the range of values one expects the true value to lie within. (Note this is not necessarily the range of values one might obtain when taking measurements of the value, which may include outliers.)
  • the ‘confidence level’, i.e. how sure the experimenter is that the true value lies within that margin. Discussion of confidence levels is generally appropriate only in advanced level science courses.

Uncertainty in measurements can be reduced by using an instrument that has a scale with smaller scale divisions. For example, if you use a ruler with a centimetre scale then the uncertainty in a measured length is likely to be ‘give or take a centimetre’. A ruler with a millimetre scale would reduce the uncertainty in length to ‘give or take a millimetre’.
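A common convention (an illustration here, not a rule stated in the guide) is to quote the measurement together with its margin of doubt, taking one scale division as the uncertainty:

```python
# Quote a length together with its uncertainty, taken here as one
# scale division of the ruler used (illustrative values only).
def quote(length_mm, division_mm):
    return f"{length_mm} ± {division_mm} mm"

print(quote(2000, 10))  # centimetre-scale ruler: 2000 ± 10 mm
print(quote(2000, 1))   # millimetre-scale ruler:  2000 ± 1 mm
```

The same stick, measured with the finer scale, is reported with a margin ten times smaller.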

Measurement errors

It is important not to confuse the terms ‘error’ and ‘uncertainty’. Error refers to the difference between a measured value and the true value of a physical quantity being measured. Whenever possible we try to correct for any known errors: for example, by applying corrections from calibration certificates. But any error whose value we do not know is a source of uncertainty.

Measurement errors can arise from two sources:

  • a random component, where repeating the measurement gives an unpredictably different result;
  • a systematic component, where the same influence affects the result for each of the repeated measurements.

Every time a measurement is taken under what seem to be the same conditions, random effects can influence the measured value. A series of measurements therefore produces a scatter of values about a mean value. The influence of variable factors may change with each measurement, changing the mean value. Increasing the number of observations generally reduces the uncertainty in the mean value.
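This effect can be simulated. The sketch below (my illustration, assuming random effects that are normally distributed around a true value of 100 with spread 2) repeats a whole experiment many times and compares how much the mean scatters for small and large numbers of observations:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is repeatable

def mean_of_sample(n, true_value=100.0, spread=2.0):
    """Mean of n readings, each the true value plus a random effect."""
    readings = [random.gauss(true_value, spread) for _ in range(n)]
    return statistics.mean(readings)

# Repeat the experiment 1000 times and see how much the mean itself
# scatters, for 5 observations versus 50 observations per experiment.
scatter_5 = statistics.stdev(mean_of_sample(5) for _ in range(1000))
scatter_50 = statistics.stdev(mean_of_sample(50) for _ in range(1000))

print(f"scatter of mean, n=5:  {scatter_5:.2f}")
print(f"scatter of mean, n=50: {scatter_50:.2f}")
```

The scatter of the mean shrinks as the number of observations grows, which is the sense in which more observations reduce the uncertainty in the mean value.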

Systematic errors (measurements that are consistently either too large or too small) can result from:

  • poor technique (e.g. carelessness with parallax when sighting onto a scale);
  • zero error of an instrument (e.g. a ruler that has been shortened by wear at the zero end, or a newtonmeter that reads a value when nothing is hung from it);
  • poor calibration of an instrument (e.g. every volt is measured too large).

Whenever possible, a good experimenter will try to correct for systematic errors, thus improving accuracy. For example, if it is known that a balance always reads 2 g greater than the true reading, it is perfectly possible to compensate for that error by simply subtracting 2 g from all readings taken.
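The balance example can be sketched directly (the readings below are made up for illustration):

```python
# A balance known, e.g. from a calibration check, to read 2 g high:
# compensate by subtracting the known systematic error from each reading.
SYSTEMATIC_ERROR_G = 2.0  # assumed value from calibration

def corrected(reading_g):
    return reading_g - SYSTEMATIC_ERROR_G

readings = [52.0, 47.5, 60.0]
print([corrected(r) for r in readings])  # → [50.0, 45.5, 58.0]
```

Note that this only removes the error because its value is known; an unknown systematic error cannot be corrected and instead contributes to the uncertainty.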

Sometimes you can only find a systematic error by measuring the same value by a different method.

Errors that are not recognized contribute to measurement uncertainty.

ASE/Nuffield booklet: The Language of Measurement

In 2010, following a series of meetings with Awarding Organisations, the ASE and Nuffield Foundation jointly published a booklet to enable teachers, publishers, awarding bodies and others in England and Wales to achieve a common understanding of key terms that arise from practical work in secondary science. Order a copy or see extracts from the booklet.

This webpage is based on the National Physical Laboratory's Good Practice Guide: A Beginner's Guide to Uncertainty of Measurements written by Stephanie Bell.

