Radiating from source to absorber - Physics narrative
- Vibrations that travel
- How vibrations travel
- Absorption ends the journey
- Predicting beams
- Lenses and mirrors with rays
- Imaging with clocks
- The Doppler effect
- Remote sensing of velocity
- The speed of light connects time and distance
- Why these rays?
- Engineering with trip times
- Signalling with vibrations
- Amplitude modulation
- Frequency modulation
- Signal and noise
- Vibrating - radiating - absorbing
Physics Narrative for 14-16
A Physics Narrative presents a storyline, showing a coherent path through a topic. The storyline developed here provides a series of coherent and rigorous explanations, while also providing insights into the teaching and learning challenges. It is aimed at teachers but at a level that could be used with students.
It is constructed from various kinds of nuggets: an introduction to the topic; sequenced expositions (comprehensive descriptions and explanations of an idea within this topic); and, sometimes optional extensions (those providing more information, and those taking you more deeply into the subject).
Core ideas in the Radiations and Radiating topic:
- Travelling vibrations
- Multiple contributions
- Nuclear sources
- Ionising radiations
The ideas outlined within this subtopic include:
- Sources to medium to detector.
- Vibrations – continuously varying displacement of particular frequency and amplitude.
- Vibrations coupled to surroundings – do like me, but later, where later is set by trip time.
- Power in a beam accumulates or depletes energy in store(s).
- Absorption and detectors.
- Ray model accounts for phenomena of refraction, reflection, propagation.
- Predicted by rays: multiple paths, least time paths underlie rays.
- Doppler phenomena and modelling the expansion of the universe.
- Signal determines carrier by varying amplitude or by setting frequency of carrier.
- Transmission to detector.
- Detector vibration decoded to get signal.
Developing a description of vibrations that travel: radiations
Many things oscillate – that is, move to and fro about an origin. In the SPT: Sound topic, some of these oscillations were shown to be the source of sounds: things we hear. For that to be the case, three conditions had to be satisfied:
- The amplitude of the vibrations had to be large enough.
- The vibrations had to be within the range of frequencies that we can hear.
- The vibrations had to travel from the source to the ear.
If all three are satisfied the sound radiates from the source to our built-in detector – you can hear a sound. The source-medium-detector model was central to the model of sound.
It's a good and useful starting point. We'll build on this model of radiations as travelling oscillations, of particular frequency and amplitude. The idea that all such radiations terminate at a detector is too restrictive. So the source-medium-detector model is generalised to a source-radiating-absorber model. Some absorbers are detectors, but not all. Some radiations travel through a tangible medium, but not all.
Learning from SPT: Light and SPT: Sound
The vibrations that result in you hearing a sound, because of their nature, travel as variations in physical density, so these vibrations rely on a medium made of particles connecting the source to the detector. Other vibrations, very similar in nature, could still travel out from the source but not be detected. They could be above or below the frequency we can hear (so-called infra-sound or ultra-sound) or the amplitude could be just too small: so not enough power in the radiating pathway for the detector to respond.
The sound we hear is part of a family of vibrations: having a similar mechanism of transmission but varying in that different detectors respond differently. What is infra-sound for a bat is only sound to us. This idea of a family of radiations will be important. The SPT: Light topic places light as one of a family of electromagnetic radiations, selected from the family by being that range of frequencies detectable by our eyes.
How vibrations travel
Periodic variations that travel: do like me, but later
Do like me, but later, is the fundamental instruction that allows influence to propagate from source to detector. People don't much like spooky action-at-a-distance, so you might reasonably ask how this instruction really happens. Then you'd have a more convincing story.
You know the instruction can arrive, bringing a signal with it if that was encoded before the signal left the source – more on that later on in this episode. For now, every time you turn on a radio, or, more simply, listen to recorded music from a loudspeaker, you verify this functionality.
You'll also know that such an influence can cause an energy store to be depleted, associated with changes at the source, and another to be filled, associated with changes at the detector. Feel the Sun on your cheek on a summer's day or float amongst the waves just off a beach, and you'll be able to think of the stores being emptied and filled by a kind of remote working. The physical changes leading to the local store being filled are close, but the physical changes resulting in the distant store being filled may be a significant distance away – from metres, through hundreds of kilometres, to 8 light-minutes away in these examples.
This is all very abstract – and therefore of wide applicability, as it describes all kinds of radiating – without reference to the physical basis of whatever is vibrating.
Yet sometimes we'd suggest it's worth knowing about the mechanisms, even if this adds complexity, as such detail makes links to other bits and pieces of knowledge and helps show how they are all connected together. Seeing these interconnections is part of doing physics, and often the most creative and fruitful part, as such connections often help to illuminate new topics.
So, what's doing the vibrating – that is, what is the source setting into repetitive motion, where it is sensible to be able to say that it has both a frequency and an amplitude? Let's start with light and with sound. These are very different – physically different. One needs tangible particles to vibrate; the other needs only an electromagnetic field. Some of the properties of these two kinds of vibrations were elucidated in the SPT: Sound and SPT: Light topics. These two can serve as prototypes: sound for mechanical vibrations; light for the whole electromagnetic spectrum.
We'll deal with the mechanical first, but we'll still need to think about two different situations – fluids and solids. In these two the forces between the particles are rather different, so setting some particles moving and then seeing how that movement is imparted to adjacent particles might be expected to be different.
Linked masses and springs: modelling the essentials of things that vibrate
Choose a blade of grass, or a branch, and deflect it. It waggles to and fro, or vibrates, a few times before coming to rest. For it to vibrate like this there must be a force that restores the object to its resting location, starting the repetitive movement with an acceleration as it is released: this force is provided by the branch or blade. But when it reaches the resting location it overshoots, continuing past its undisturbed location. So there must also be some mass, so that, once moving, it keeps moving – this is again provided by the blade or branch.
There are two effects here, both apparently as a result of a single object. To build a simple model, we suggest assigning each effect to a separate element within the model: these will be provided by a spring and a mass. So we re-imagine the blade or branch as a mass between springs. You could get away with a single spring, but many people find it easier to appreciate the symmetry of the situation with a more symmetrical (although still one-dimensional) model.
This system of a mass-between-springs is our prototype vibrator or oscillator. Many, many systems can be modelled using this as a basis, on all kinds of scales (from absorption of particular frequencies by ammonia clouds in deep space, through human locomotion, to the response of buildings to earthquakes, and well beyond), but our concern here is to show how it can form the physical basis for one kind of wave. One such mass-between-springs system must be connected to another, so that what one does, another does later:
do like me, but later is the key to a propagating radiation. Then we'll have the how of one kind of travelling vibration.
All that is necessary is to link one vibrator to another, so that the deflection of one causes a deflection in the other. So when one deflects, it exerts a force on another. This sounds just like a job for a spring (again – why invent a new idea when you can reuse an old one with which you are familiar?): a change in distance between two things connected by a spring results in a force exerted by one on another. So add extra springs linking the two. And because the second vibrator in the chain takes time to accumulate a velocity and time to accumulate a displacement, there is automatically a delay built in. There's more detail on this in the SPT: Force and motion topic.
This now looks remarkably like the mass and spring model used in the SPT: Forces topic to explain warp forces. It's not surprising that solids can support radiations – or sounds, as we'd normally call them. These are longitudinal vibrations, as we've drawn here, resulting in density variations propagating through the solid. Sounds are travelling, or propagating, periodic variations in density.
There is nothing to prevent solids from supporting travelling transverse vibrations as the springs still provide a restoring force if we pluck an individual mass, so displacing it vertically. Exactly the same is true if we generalise to two and three dimensional lattices of masses and springs.
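The do like me, but later behaviour of such a chain can be sketched in a few lines of code. This is a minimal, illustrative model – the mass, stiffness and time-step values are arbitrary choices, not drawn from the narrative – but it shows the built-in delay: masses further along the chain start moving later.

```python
# A minimal sketch of a chain of equal masses linked by identical springs.
# All numbers (stiffness, mass, time step) are illustrative choices.
N = 40          # number of masses in the chain
k = 50.0        # stiffness of each linking spring (arbitrary units)
m = 1.0         # mass of each vibrator
dt = 0.01       # time step

x = [0.0] * N   # displacement of each mass from its resting location
v = [0.0] * N   # velocity of each mass
x[0] = 1.0      # deflect the first mass: this is the source

def step():
    """Advance the chain one time step (semi-implicit Euler)."""
    # Each mass feels spring forces from its two neighbours
    # (the ends are treated as anchored to fixed walls).
    a = []
    for i in range(N):
        left = (x[i - 1] - x[i]) if i > 0 else -x[i]
        right = (x[i + 1] - x[i]) if i < N - 1 else -x[i]
        a.append(k * (left + right) / m)
    for i in range(N):
        v[i] += a[i] * dt
        x[i] += v[i] * dt

# Record the first time step at which each mass has moved appreciably.
first_moved = [None] * N
for t in range(2000):
    step()
    for i in range(N):
        if first_moved[i] is None and abs(x[i]) > 0.01:
            first_moved[i] = t

# Masses further down the chain start moving later: do like me, but later.
print(first_moved[1], first_moved[5], first_moved[10])
```

The delay is not put in by hand: it emerges because each mass needs time to accumulate a velocity, and then a displacement, before it can tug on its neighbour.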
Plotting out propagating changes
You can plot the variations in density over time, or take a snapshot of the propagating changes. That is what we have done here, choosing to plot a variety of things. You can see a time trace being plotted out in the SPT: Light topic.
Now start with a completely different situation – a pair of charged balloons – as a model of two charged particles a long way apart. These are initially in equilibrium, so there are other electrical forces acting on the balloons. Again, there is a force between the balloons, so that moving one changes the force it exerts on the other. (More on the electric force in the SPT: Forces topic.)
There are always lots of electrical forces around as matter is electrical, so here we'll concentrate just on how the changes in the location of one balloon affect the forces on the second.
The force between the pair is radial, joining the centres. Now move the first balloon (the source) up and down, as if it is vibrating. What effect does this vibrating have on the detector (the second balloon)?
The effect on the second balloon will be that the electrical force points in a new direction. While this change is happening, there is a kink that travels outwards, as the information about the change in direction of the force moves out from the source. (The electric field was in one direction: now it's in a different direction. The kink is the change in the field.) This kink gets sharper the further you get from the source, as the radial component gets smaller and smaller. (Do you remember that the electrical force decreases as you get further from the charge?) So the kink becomes closer and closer to being at right angles to the line joining the balloons. The propagating change is at right angles to the original force. Waggling a charge up and down sends out a whole series of such kinks, and results in a series of transverse vibrations in the electric field that propagate outwards from the source, affecting charged particles as they go. This is an electric wave.
Linked changes: electromagnetic waves, electric forces and magnetic forces
Electrical changes are always linked to magnetic changes – just how was explained in the SPT: Energy and electricity topic. So a spreading electrical kink in the lines of electrical force – and so electrical waves, will always be linked to a spreading magnetic kink in the lines of magnetic force – magnetic waves. These kinks spread at the speed of light: they are electromagnetic radiations. These radiations can have many different frequencies, set by the source, as ever, but they are all members of the same family, because they all travel or propagate with the same mechanism.
The working you are doing in waggling one charge here shifts energy as another charge waggles a long way off. Does this sound familiar? It should do: in electric circuits, the charged particles in the bulb are worked on at a distance by the battery. The electromagnetic wave is a development of the treatment of alternating circuits.
There is another intriguing facet here: the electric vibrations must be at right angles to the direction of propagation, but that still leaves many different planes available for the vibrations – the full 360 degrees. This leads to an account of the phenomenon of polarisation, in episode 03.
Fluids: describing the vibrations as variations in density
Now for fluids. Take a bicycle pump. Put one finger over the end and push in the plunger. You have deflected the plunger just as you deflected the blade of grass. Release the plunger and it will spring back, driven by the spring of the air. As it does so, there is a rush of air particles, thus a moving mass. The pump will damp down any oscillations, but if there were no friction you could imagine that the rush of air particles would not stop at the resting location but move through that until pulled back by the accumulated effect of the force acting on these particles (another spring of the air – but this time more like a spring under compression rather than under tension).
So gases, and also liquids, can vibrate, and these vibrations can be moved from one region of the fluid to another. The displacement of the particles in the fluid from their resting position results in a change in density – more particles are packed into a smaller volume. This increased density leads to an increase in pressure. This variation in pressure from the normal provides a restoring force (see the SPT: Machines topic for more on the relationship between the pressure in a fluid and the forces acting on surfaces surrounding that fluid). This restoring force then causes the accelerated particles to overrun, resulting in a lower than average density, and so lower than average pressure. This variation in pressure from the normal results in another restoring force, still directed towards the resting position.
This rushing to and fro affects the surrounding particles, and so the vibration is communicated to the neighbours. A simple presentation of the end results can be found in the SPT: Sound topic.
However, because of the mechanism outlined here, fluids can only support push–pull, or longitudinal, waves. The variations in density only cause restoring forces in the direction of propagation of the radiation: the movement of particles and propagation is all in one plane.
Absorption ends the journey
Three possibilities as a beam strikes a material: reflection, absorption and transmission
When a beam strikes a block of material, there are three possibilities: reflection, absorption and transmission. All three may happen; you usually have to idealise to get only one. Which processes actually happen depends on the block: the material it is made from, and the surface treatment (and perhaps the internal structure as well); the frequency of the radiation striking the block; and the angle at which the beam strikes the block.
How complex structures within the material interact with particular frequencies makes for a fascinating study, accounting for iridescence on butterfly wings, among other things, but here we suggest building up a comparatively simple model, assuming that the block of material is solid and homogeneous. For beams that strike the surface head-on, this leaves only interactions at the surface (reflections): depending on how smooth the surface is and on the frequency of the radiation; and interactions in the body of the material, which determine the absorption. The mechanisms behind both are complex, and are still, in many cases, not completely understood. However, at the phenomenal but quantitative level, the patterns of more or less complete reflection and constant fractional absorption are well understood, and both of these are explored in this topic.
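The constant fractional absorption mentioned above can be sketched numerically: if each equal thickness of a homogeneous material absorbs the same fraction of the power reaching it, the transmitted power falls away exponentially with depth. The 10 per cent per slab figure below is purely illustrative, not a property of any real material.

```python
# Constant fractional absorption: each identical slab of a homogeneous
# material absorbs the same fraction of whatever power reaches it.
# The 10 % per slab figure is illustrative, not a real material property.
fraction_absorbed = 0.10
power = 100.0              # power entering the first slab (arbitrary units)

for slab in range(1, 6):
    power *= 1 - fraction_absorbed
    print(f"after slab {slab}: {power:.2f}")

# Equal thicknesses remove equal fractions, never "all the rest", so the
# transmitted power decays exponentially: 100 * 0.9**n after n slabs.
```

Notice that the power never reaches zero in this simple model: each slab takes a fraction of what is left, which is why thick absorbers attenuate strongly but never perfectly.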
Detectors are cunningly chosen absorbers
Vibrations travel out from a source (some need a particle-based medium and some do not) and after a journey these radiations get absorbed.
A description of absorption is, at root, based in thinking about energy. You should develop an energy description. To recap, one store of energy at the source is emptied and another store at the absorber is filled. For light, and other electromagnetic radiations, these two processes are linked by the heating by radiation pathway. (Sound has a somewhat different mechanism: speakers shunt air to-and-fro, and the air, somewhat later, shunts our ears to-and-fro. That suggests a mechanical working pathway as the most helpful way to think about the power radiated in sounding.) That is, radiating is a form of remote working: we can shift energy without having to shift matter, or without having to be adjacent to the location associated with the store. The Sun can warm you on a summer's day without having to be next to you – it's about 500 light-seconds away. The nuclear store of the Sun is emptied, and the thermal store of your cheek is filled. That's not the only thermal store: there are many that are filled by the radiations from the Sun. So although we wrote source-medium-detector and used it as the basis for the SPT: Light and SPT: Sound topics, a more encompassing description would be source-radiation-absorbers. The one-to-many alteration is one change, recognising that heating by radiation is dissipative, spreading the energy from one store to many. Then there is the more subtle change from absorber to detector.
Detectors are a subset of absorbers. The radiation, or at least a fraction of it, must come to a sticky end in order to be detected. But that is not enough. Both singing in the shower and singing into a microphone result in the radiations being absorbed. In the case of the microphone, there is something a bit special designed in: a designed-for change, exploiting the interaction of the vibrations with certain parts of the microphone. All detectors are like this, only in some cases, particularly ears and eyes, the design has evolutionary rather than intentional origins. But all detectors, whatever the design origins, are devices, switching from one pathway to another, and indeed many are transducers, switching to the electrical working pathway.
Many detectors need to select not only one physical vibration but also only a small range of frequencies. That is, they need to be differential absorbers. Why some materials absorb only certain frequencies is a topic that we return to in the next episode. For now, just note that we need to choose the material carefully, and perhaps do some engineering of the material to make a device that can function as a detector.
We are informed about our surroundings by reflections
Reflections are all around – most of the things we see are as a result of reflections: luminous objects are greatly outnumbered by non-luminous objects. But that pervasiveness does not make it any less intriguing, or at least with a little thought reflection can be made so.
In the SPT: Light topic, the rule about reflection from flat shiny surfaces was introduced and extended to show how everyday things can be seen from all kinds of angles, rather than just one special angle, where the angle of incidence is equal to the angle of reflection. There the connection was made to many angles of incidence and to many angles of reflection. The many angles are introduced as each surface is thought of as being made up of many small flat surfaces, but not all lined up. You can make a flat, shiny surface by lining them all up, or by polishing the surface to remove all the lumps and bumps. Aligning the surfaces to achieve special effects we'll come back to later, but the smooth polished surface, while rare, is simple, and so is a good place to ask questions to which you might get simple answers.
Here is a really simple question. A beam of radiation strikes such a surface. The angle at which it strikes the surface (the angle of incidence) is always the same as the angle at which it leaves the surface (the angle of reflection). Because it is a surface, this angle is measured from the normal, as this makes the angle unique. Find more detail on the need for this particular measurement technique in the SPT: Light topic. Just why is the reflected angle equal to the incident angle?
One answer is to appeal to the rules about rays. That's just how they're drawn, and once you have the ray, it predicts the beam. We hope that you do not find this a satisfying explanation, but want to know more. In particular, how is this behaviour linked to the rules for drawing rays that predict refraction? After all, they're both rules for the same kind of things – for radiations – so there really ought to be some commonality of behaviour.
As a beam is refracted, so the beam is deflected
Here you should focus on a single beam that is neither reflected nor absorbed but is transmitted from air, through a different material (not air), and back into air again, with a constant power in its pathway. So the beam changes the medium in which it is travelling twice: from one material (often air) to another (often glass or plastic) and back to air again.
Even with all these simplifying constraints, there is still an interesting phenomenon to explore here: refraction. A beam is typically bent if it strikes the surface between the two materials at anything other than a right angle. This bending is called refraction, and there are two angles to specify: the angle at which the beam approaches the interface and the angle at which it leaves. These are, of course, the angle of incidence and the angle of refraction. But, just as with reflection, you need to be careful about how the angle is measured. The surface between the two media where the beam strikes is a small plane; and the adjacent planes may be at an angle to the current plane. One place where refraction is certainly important is at the surface of lenses, and constructing the surface of a lens from many small planes requires that the angles between adjacent planes change in complex ways as you move across the surface. As a result of this complex change, measuring the angle between the surface and the beam can give many different values: measuring the angle between the normal and the beam gives only one, as each plane has only a single normal. You do need to work with normals: add these to the place where the beam strikes as the first step in constructing a diagram to account for refraction.
Both the angle of incidence and the angle of refraction are measured from this normal. Then there is another apparently brute law, unrelated to the law of reflection: the law of refraction. As a beam goes through a surface it will bend, so the angles of incidence and refraction will not be equal, unless both are zero. For common materials, as a beam goes from a medium where the measured speed of propagation is higher to one where it is lower, so the beam is bent towards the normal. That is, the angle of incidence is greater than the angle of refraction. This relationship between the angles is reversed as the beam makes its way from a low-speed medium to a high-speed medium: the beam bends away from the normal. And it's also frequency dependent: over the optical range, blue bends best (and so red bends least). OK, so that is the brute fact. You could reasonably ask two further questions to refine your understanding:
How, exactly, is the angle of incidence related to the angle of refraction?
Why does a beam or a ray bend like this – is there an underpinning reason?
The first is a matter of empirical investigation, the second a matter of inventing an explanatory model. The first is not trivial, nor is the relationship straightforward, so it has been named: Snell's law.
For any pair of materials, there is a constant relationship between the angles of incidence and refraction. Take the sines of these angles and divide one by the other.
The answer is a constant:
constant = sine(incident angle) / sine(refracted angle)
That is Snell's law.
Changing the pair of materials results in a new constant, so this is a property of the pair of materials. The greater the difference in speed between the materials, the greater the constant.
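Snell's law is easy to explore numerically. In the sketch below the constant is taken as 1.5, a typical value for an air-to-glass pair of materials; the angles tried are illustrative.

```python
import math

def refracted_angle(incident_deg, constant):
    """Angle of refraction predicted by Snell's law, in degrees."""
    sin_r = math.sin(math.radians(incident_deg)) / constant
    return math.degrees(math.asin(sin_r))

n = 1.5   # a typical constant for an air-to-glass pair of materials
for incident in (0, 20, 40, 60):
    r = refracted_angle(incident, n)
    # Going into the lower-speed medium the beam bends towards the normal,
    # so the refracted angle is never larger than the incident angle.
    print(f"incidence {incident:2d} deg -> refraction {r:4.1f} deg")
```

Running this shows the pattern described above: at zero incidence there is no bending at all, and at every other angle the beam is bent towards the normal.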
Rays are used to predict beams
There are simple rays that can be drawn, using simple geometrical rules, to predict what the beams will do. In each case we're working on one plane – defined by the surface and the normal.
Here's a checklist for reflection:
- Select a surface.
- Draw in the normal.
- Position the source.
- Draw in the inbound ray from the source to the point where the normal meets the surface, which allows you to measure the angle of incidence.
- Draw in the outbound ray so that the angle of reflection is equal to the angle of incidence.
Here's a checklist for refraction:
- Select a surface.
- Draw in the normal, right through the surface.
- Position the source.
- Draw in the inbound ray from the source to the point where the normal meets the surface, which allows you to measure the angle of incidence. Take the sine of this angle and divide it by the refractive index.
- Draw in the outbound ray so that the sine of the angle of refraction is equal to the sine of the angle of incidence divided by the refractive index.
All of these steps are shown in the SPT: Light topic.
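The reflection checklist can also be sketched in the vector form a graphics programmer might use: everything hinges on finding the normal at the point where the ray meets the surface. The setup below (a horizontal surface with its normal pointing straight up) is an illustrative choice.

```python
import math

def reflect(d, normal):
    """Reflect direction d about the unit normal: angle out = angle in."""
    dot = d[0] * normal[0] + d[1] * normal[1]
    return (d[0] - 2 * dot * normal[0], d[1] - 2 * dot * normal[1])

# An inbound ray travelling down and to the right, at 45 degrees to the
# normal of a horizontal surface whose normal points straight up.
incoming = (math.sqrt(0.5), -math.sqrt(0.5))
outgoing = reflect(incoming, (0.0, 1.0))
print(outgoing)   # up and to the right, still at 45 degrees to the normal
```

The single formula encodes the whole checklist: the component of the ray along the normal is reversed, the component along the surface is kept, and so the angle of reflection automatically equals the angle of incidence.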
These rules are exploited whenever programmers need to light a scene for a computer game: it's all about finding the normals, setting up the sources and then applying the rules to see how much lighting to display for that point of view. This is all fine, but why do these rules work – why is nature that way? That's a good question and it demands a deeper level of explanation.
Lenses and mirrors with rays
A lens: a carefully and systematically shaped block of refracting material
A lens is a simple block of glass through which light passes: it's refracted as it passes. The block must be very cunningly shaped in order to ensure that beams that strike the lens further from the centre line – the optical axis – are bent more than those which strike the surfaces closer to this axis. Usually this shaping is done by grinding off excess material – a slow process that used to require both patience and skill (Newton was a skilful lens grinder). Now it's done by machines, but lenses can be both beautiful and complex, so as to ensure perfect imaging. For any serious photographer, the major part of their investment is in the lenses that attach to the camera body, and these still call for very careful design and precise manufacturing. However, a camera lens is a compound of many single lenses: here you'll only study the very simplest of lenses.
How are these lenses built up? So far you have looked at refraction through rectangular blocks (and through prisms in the SPT: Light topic). Perhaps the best way to approach the finished product in our minds is to start with prisms – after all, these deflect beams of light: rectangular blocks can, at best, only introduce a sidestep in the beam. And the sharper the angle of the prism, the more the deviation. This suggests that the lens can be thought of as a series of prisms, symmetrically arranged about the optical axis, assembled so that those with the sharpest angles are furthest from the axis. To tidy the lens up, slice off the unneeded tops and bottoms of the prisms to get a set of trapezoids. Glue these together and you have a conventional converging lens. Of course, if you don't glue them together you have a Fresnel lens, which gives much the same effect, only without anything like as much glass. You'll find these in lighthouses and telescopes, as these are both places where the greater cost of manufacturing such an intricate shape is offset by the lower cost of keeping such a large mass of glass exactly where you want it. (Glass is not a good structural material, so large lenses may sag as a result of the force of gravity acting on them. This goes part of the way in explaining why most large astronomical telescopes depend on mirrors, and not lenses.)
If the lens can be moulded from plastic then it can be very lightweight, and the manufacturing costs come right down. Hill walkers may have a Fresnel lens in their pocket, for use as a magnifying glass, to discern the finer details on a map.
It is also possible to make diverging lenses. Here again you need a set of prisms, whose apex angles vary systematically as you move away from the optical axis. But now the beams striking the surfaces must be bent away from the optical axis; so flip each prism about its asymmetric axis. Top and tail the prisms as before, then glue to get a conventional lens, or space carefully to get a Fresnel lens.
Systematically varying the angles between the sides of the prisms, for a lot of prisms, is exactly equivalent to varying the curvature of the surface of the lens. (You may remember making circles from polygons in the programming language LOGO, by shortening the step size and reducing the angle through which the turtle turned.) Manufacturing lenses of variable curvature allows the light to be bent more or less. This has recently provided cheap spectacles for Africa that can be adjusted by the wearer, simply by pumping more fluid into a flexible bag. Some of the finer subtleties of the lens-grinder's art cannot be reproduced by this technique, but a bit of very simple lateral physics-based thinking is making a huge difference to thousands with this very appropriate technology.
Cunningly chosen rays predict lens behaviour
From the many, many rays that could be drawn that do strike a lens, there are three that are very easy to predict.
The first is one that goes straight through the centre of the lens. As the lens here is just about a parallel-sided slab, you can predict that the ray will not be deviated, but might be offset a little. The thinner the glass here, the less the offset.
The second and third are related and are fixed by the curvature of the lens. A lens is designed so that all rays parallel to the axis will deviate so that their exit line passes through the principal focus. This is therefore a rather special point on the principal axis, so we can use it with confidence: it is what defines the lens.
For a converging lens, these rays actually pass through the principal focus; for a diverging lens they deviate so that they seem to have come from the principal focus.
Modelling lenses with selected rays
Here are only a few examples of modelling with rays, and the predicted beams. There are two different sets of outcomes.
In the first set the rays – and so the beams – actually meet. At these points there will be a bright spot: the radiation will focus. Place a detector at that point and you'll detect a high power, dissipating the energy. These are real images.
In the second set the rays only appear to come from a single point. No radiation will be found at a detector placed at that single point. These are virtual images, just like the image in the plane mirror in the SPT: Light topic.
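Two of the easy rays are enough to locate a real image numerically. The sketch below places a thin converging lens at x = 0; the focal length, object distance and object height are illustrative numbers, and the geometry is the simple thin-lens idealisation rather than a full treatment.

```python
# Locating the image of an object's top, using two of the easy rays, for a
# thin converging lens at x = 0. All the numbers here are illustrative.
f = 10.0   # focal length of the lens
u = 30.0   # distance of the object in front of the lens
h = 2.0    # height of the object's top above the principal axis

# Ray 1 passes through the centre of the lens undeviated:
#   from (-u, h) through (0, 0), so y = -(h / u) * x beyond the lens.
# Ray 2 arrives parallel to the axis at height h, then leaves through the
#   principal focus (f, 0), so y = h - (h / f) * x beyond the lens.
# The rays cross where -(h / u) * x = h - (h / f) * x:
x_image = h / (h / f - h / u)
y_image = -(h / u) * x_image

print(x_image, y_image)
# The rays really do meet, below the axis and behind the lens:
# a real, inverted image, smaller than the object.
```

Place a screen where the rays cross and you would see a bright, sharp image of the object's top; that is what makes this a real image rather than a virtual one.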
Using rays to predict the action of mirrors of different shapes
For mirrors there is only one rule for specular reflection: find the normal, then use the relationship between the angle of incidence and the angle of reflection. So far, so simple. However, if we bend the mirror systematically, the normals will no longer be parallel to one another and we might get all kinds of interesting shapes of mirror, producing all kinds of patterns.
Here are three simple shapes:
- A spherical concave mirror.
- A spherical convex mirror.
- A parabolic mirror.
A focusing concave mirror can be curved so that it has a single principal focus, just like the lens. Here the curve is very simple – it's a part of a circle, or in three dimensions a part of a sphere.
The converse situation (get this by just flipping the mirror over, if it's shiny on both sides and so reflects from both the front and the rear surfaces) also has a principal focus.
This first pair of mirrors is easy to make predictions for because, just like lenses, there are three rays on which we can rely:
- One ray arrives parallel to the principal axis and leaves along a line that passes through the principal focus; by reversibility, a second arrives along a line through the principal focus and leaves parallel to the principal axis.
- One ray reflects about the principal axis, as its normal lies on that axis.
Parabolic mirrors – useful but not easy to draw rays for
The final case is a parabolic mirror, which is useful for generating or focusing parallel beams. In this case the curvature is more complex, and so finding simple rules for rays to make useful predictions is hard. To be useful, these predictions must contribute to showing the overall focusing pattern.
But there is a trade-off: these mirrors focus light from beams with a greater range of incident angles than the circular mirrors. Later we'll show you an easier line of attack for this problem.
Imaging with clocks
Physics Narrative for 14-16
Making pictures by timing
It's possible to build up a picture of the world just from a series of times. That's what bats do, by a process of active sensing: a bat sends out a pulse, then times how long until the echo arrives back. The longer the delay, the further away the object that produced that delay. (That's not all bats do – they can also map the velocities of things from the altered frequency of the pulse, but that's another story – they use the Doppler effect. More on that later.)
Ultrasonic rulers perform much the same simple trick. A pulse of sound is sent out, and the time between this emission and the detection of the echo provides information about the distance. You could use electromagnetic waves, but then the times would be much much shorter, and so you'd need a very accurate clock, unless the distance was very long. So radar is used to find out how far away the Moon is, and the distance to Venus. Much beyond that and the reflections are too dim: the amplitude is too small to be detected by any instruments we have so far managed to build.
If you do have a very accurate clock then you can use a series of electromagnetic pulses to build up a map of any planet, so long as you can put a satellite in orbit around it. Just time the interval between the emitted pulse and the reflection arriving back at the satellite to get information about the distance between the satellite and whatever is doing the reflecting (the planet's surface?). As you can use any part of the electromagnetic spectrum for the pulses, you can choose to image even where there is continuous cloud cover: that is how a topographic map of Venus was made.
Medical ultrasonic imaging is another example of active imaging, where the time between a pulse and its echo is interpreted as a distance, thus building up a map of what lies in front of the probe, which contains both source and detector.
In all these cases, you use distance travelled = speed × elapsed time.
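The echo-ranging arithmetic fits in a few lines. A sketch (textbook speeds; the halving accounts for the out-and-back trip):

```python
def echo_distance(round_trip_time, wave_speed):
    """Distance to the reflector: the pulse covers the gap twice."""
    return wave_speed * round_trip_time / 2

SPEED_OF_SOUND = 343.0        # m/s in air, roughly
SPEED_OF_LIGHT = 299_792_458  # m/s

# Ultrasonic ruler: an echo after 10 ms puts the wall about 1.7 m away.
d_wall = echo_distance(0.010, SPEED_OF_SOUND)     # ≈ 1.7 m

# Radar to the Moon: an echo after about 2.56 s gives a distance of
# roughly 384,000 km.
d_moon = echo_distance(2.56, SPEED_OF_LIGHT)      # ≈ 3.8e8 m
```

Notice why sound suits short ranges and radar suits astronomical ones: the same millisecond-resolution clock that locates a wall to a fraction of a metre with sound would be hopeless with light, which covers 300 km in that millisecond.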
Geological sensing is less subtle, as the emitted pulse is often a controlled explosion. However, the analysis required to interpret these collected reflections is considerable as there may be many different materials through which the vibrations have travelled, all with different speeds of propagation.
If only there were a universal speed that converted distance to time at a constant rate. There is: the speed of electromagnetic waves, which is a constant – always and everywhere. So any distance can now be mapped as a time. That's the beginning of Einstein's theory of relativity. More later.
Passive imaging is not so good at revealing distance information: you simply don't know when the pulses were emitted. Unless you can do some kind of echolocation, more indirect methods have to be used. So for estimating the distances to the galaxies, for example, the reflected pulses would simply be too low in amplitude to be detected, and, as it turns out, you'd have to wait several years for such a pulse to return (a deliberate understatement – it may well be several million years). The distances turn out to be huge.
How do we know? Justified guesswork: we see how bright things appear and then make assumptions about the brightness of the source. Then the inverse square law can be used to figure out just how far away the object is.
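That chain of reasoning is short enough to sketch. Assuming a "standard candle" whose power output we think we know, the inverse square law can be inverted to give the distance (the numbers below are purely illustrative, not real data):

```python
import math

def distance_from_brightness(source_power, apparent_brightness):
    """Invert the inverse square law:
    b = P / (4 * pi * d**2)  =>  d = sqrt(P / (4 * pi * b))."""
    return math.sqrt(source_power / (4 * math.pi * apparent_brightness))

# Illustrative numbers only: a source assumed to radiate 4e26 W,
# observed at 1.4e-9 W/m^2, comes out at about 1.5e17 m.
d = distance_from_brightness(4e26, 1.4e-9)
```

The weakest link is the assumption about the source power: get that wrong by a factor of four and every inferred distance is wrong by a factor of two.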
The Doppler effect
Physics Narrative for 14-16
The whistle of moving steam locomotives may appear to change pitch
Stand on the footplate of a speeding locomotive and, no matter what the speed at which the countryside passes, you'll always hear the same note from the whistle. But stand outside, trackside, in that same countryside and you'll hear something very different: the whistle will be of higher pitch as the train approaches and of lower pitch as it recedes. This change in pitch is the result of the Doppler effect, and you get the same shift in pitch from all radiations where there is relative movement between detector and source. The change in pitch depends on the relative speed of the source and detector, so the effect is not easy to detect until you have rather high speeds: walking won't do for sound, for example. Perhaps this explains why trains are so much a part of the history of the Doppler effect and why its discoverer, Christian Doppler (1803–1853), was alive during the railway age. In the 21st century there are so many high-speed objects that once pointed out, you'll notice Doppler effects everywhere.
Red shifting – pitch of vibrations depends on relative movement
As you move towards a source, or it moves towards you, so the pitch of the vibrations increases. As the source and detector separate, so the pitch of the vibrations decreases. This effect is not specific to sound: you can find exactly the same effect in electromagnetic radiations; and you can detect the same effect with ultraviolet, infrared, or X-ray radiation. If the detector and source are moving apart (there is no especially privileged point of view – no absolute axes – so it does not make sense to decide which is actually moving), the detected frequency of the radiation is lower than the emitted frequency.
If it is light, then it will appear redder. All frequencies are shifted – by the same ratio – towards the red end of the spectrum. Such an effect is called redshift: all frequencies are lower than they were.
Looking out into the universe – things appear redder
If, on average, whichever direction you look in, things appear redder than expected, then you might reasonably guess that those same things are moving away from you. If things are moving away from you in all directions, and you don't think that there is anything special about where you are standing (so if you moved a few paces, or perhaps light-years, to the left you'd see the same effect), then the best guess has to be that everything is moving apart from everything else. This is the chain of reasoning that leads to the belief that the universe is expanding.
Notice that it is not simple or straightforward: for example, there is quite a lot hidden in the statement that things appear redder than expected. On what basis do we have this expectation? It's quite an extrapolation from what you can observe in the laboratory to speculate how red we'd expect the emissions from a particular part of the universe to be with no relative movement between us and that part.
A physical analogue of the Doppler effect
In a bicycle time trial, the riders set off from the start at regular intervals and arrive at the finish at irregular intervals, because they travel at different speeds. Not so for photons, which always travel at the same speed, so if they set off from the source at regular intervals, they'll arrive at the detector at those same regular intervals. In fact this constancy of interval will also apply to any travelling radiations, since these also travel at a constant speed: blips from adjacent pulses leaving at a fixed interval will arrive with that same fixed interval intact. Unless something odd happens to the start or finish lines, the emitted frequency will be the same as the detected frequency.
Some bicycle races are run under handicap (the parents' race at school sports day is another example – often having a start line set by the decade of the starter): the start line, or the finish line, is not in the same place for all: not all riders cover the same distance, and this too can affect the interval between riders. This applies also to photons, even though they always travel at the speed of light, so covering the distance from source to detector at a constant rate.
Explaining the Doppler effect
One can even imagine races where the start line or finish line moves at a constant rate, so that the interval between photons, or blips, is increased or decreased by a constant amount. The larger the relative speed between source and detector, the greater this increase or decrease. This steady alteration of the distance that the radiations must traverse between source and detector is exactly what happens in the Doppler effect.
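The moving-finish-line picture can be played out numerically. In this sketch (our own set-up and names) pulses are emitted at a fixed interval while the detector recedes at a steady speed; each successive pulse has further to travel, so the arrival interval is stretched by a constant factor:

```python
WAVE_SPEED = 340.0   # m/s, roughly the speed of sound in air

def arrival_times(emit_interval, n_pulses, start_distance, recede_speed):
    """Emit pulses at regular intervals towards a detector that is
    moving away at a steady speed; return when each pulse catches it.
    A pulse emitted at t_e catches the detector when
    WAVE_SPEED * (t - t_e) = start_distance + recede_speed * t."""
    times = []
    for k in range(n_pulses):
        t_emit = k * emit_interval
        t_arrive = (start_distance + WAVE_SPEED * t_emit) / (WAVE_SPEED - recede_speed)
        times.append(t_arrive)
    return times

t = arrival_times(emit_interval=0.01, n_pulses=3,
                  start_distance=100.0, recede_speed=34.0)
gaps = [later - earlier for earlier, later in zip(t, t[1:])]
# Every arrival gap is emit_interval * WAVE_SPEED / (WAVE_SPEED - recede_speed):
# a steady stretch, so a steadily lowered detected frequency.
```

The stretch factor depends only on the relative speed, which is exactly the Doppler effect: the detected frequency is the emitted frequency divided by that factor.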
Remote sensing of velocity
Physics Narrative for 14-16
Finding the velocity of remote emitters
As the frequency shift is set by the velocity of the source, measurements of the frequency shift allow you to determine that velocity. So with a little calibration you can measure velocity rather directly, without having to measure the distance covered or time taken.
So one can determine the velocity of emitters, even if they are half a universe away, without having to lay down a tape measure of any kind to determine a distance. In fact you need only time events. Measuring the frequency of the radiations arriving requires skill, but you also need to have a reasoned guess at the emitted frequency, before being shifted by the motion – that is, what the frequency would be if the measurement was done by an experimenter moving along with the emitter.
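For shifts that are small compared with the wave speed, the inversion from frequency shift to velocity is simple. A sketch (non-relativistic approximation; function names and numbers are our own illustration):

```python
def recession_speed(emitted_frequency, detected_frequency, wave_speed):
    """Non-relativistic Doppler estimate: the fractional drop in
    frequency, times the wave speed, gives the recession speed."""
    shift = (emitted_frequency - detected_frequency) / emitted_frequency
    return shift * wave_speed

# A spectral line assumed (on theoretical grounds) to be emitted at
# 500 THz arrives at 499 THz: recession at about 600 km/s.
v = recession_speed(500e12, 499e12, 299_792_458)
```

Notice that the emitted frequency appears explicitly in the calculation: the measurement is only as good as the reasoned guess about what the source is doing.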
So this kind of velocity measurement relies on a network of theories about the universe, including those about emission.
On knowing the source frequency: necessary assumptions
There is a further complication. The emitters of photons are often atoms or molecules, and these move quite quickly and in random directions, so this needs a further layer of interpretation, as any measurement of radiations emitted by these atoms will catch some from the atoms heading towards you, some from those heading away from you, and some heading in random directions in between.
All of the discussion so far measures the velocity of the emitter, so it seems that we must be limited to determining the velocity only of things that radiate. However, you can direct radiations of known frequency at a moving target, and infer the velocity from the frequency of the waves reflected back off that moving target. The target need not itself be an emitter, only a reflector.
Remote and passive sensing of velocity demands that we know the frequency at which the radiations are emitted.
If you cannot go there (maybe it's inside the human body, or outside the range of spacecraft) then you have to make your best guess. What will your guess be? Either the universe is pretty homogeneous, so the same processes happen over there as over here, or it is not, and we just don't know what is happening over there. The only guess on which you can construct any knowledge is the first one, so choose this. It's also the way to keep things simple: a principle of simplicity, if you will. Only one explanation is needed for very different places in the universe (a kind of symmetry under translation through space or time). But it takes a long time – years at the very least – for the information to travel the light-years from faraway places. So you don't see things as they are now but only as they were when the radiations were emitted. As we have noticed, it may even be that the frequency has changed. That is how we detect the velocity of the object, after all.
Space is expanding: a natural way to explain redshift
As we look out in any direction, the galaxies are, on average, rushing away from us. We know that because the frequency of the light is shifted – you see the same patterns occurring as in the laboratory but at lower frequencies. These patterns are thought to be similar because we believe it's the same atoms and molecules producing the radiations – whether in the distant galaxy or the local laboratory. All the patterns shuffle along towards the red end of the spectrum. But there is another feature of this pattern: the farther away things are, the faster they are receding. This is Hubble's law, discovered in the 1920s.
Could there be a natural way to explain this?
Well, yes. The space between the galaxies is expanding uniformly. There is nowhere special in the universe, so you can start in any galaxy: from that viewpoint, galaxies further away from you are receding faster: it's just geometry.
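The "it's just geometry" claim can be made concrete with a one-dimensional toy universe: place galaxies along a line, stretch all separations by the same factor, and measure the apparent speed of each as seen from any chosen home galaxy (a sketch with made-up positions):

```python
def recession_speeds(positions, home_index, stretch, elapsed_time):
    """Uniformly scale all positions by the same factor, then compute
    each galaxy's speed as seen from the chosen home galaxy."""
    home = positions[home_index]
    speeds = []
    for x in positions:
        d_before = x - home
        d_after = stretch * x - stretch * home
        speeds.append((d_after - d_before) / elapsed_time)
    return speeds

positions = [0.0, 1.0, 2.0, 3.0, 5.0]   # arbitrary units
v = recession_speeds(positions, home_index=1, stretch=2.0, elapsed_time=1.0)
# From galaxy 1 the speeds are [-1, 0, 1, 2, 4]: speed proportional to
# distance, whichever galaxy you call home – exactly Hubble's law.
```

Changing `home_index` changes the individual numbers but never the pattern: speed stays proportional to distance, which is why no galaxy can claim to be at the centre.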
Cosmic redshift: pulses arrive further apart because space is stretching
But there is another consequence to this geometrical stretching: if space is stretching, and the speed of light is constant, then successive pulses will arrive further apart. With fewer pulses a second, the frequency will be lower. This is a redshift as a result of the stretching of space, not as a result of the relative velocity of the source and detector. It is called the cosmic redshift:
- redshift, because again everything ends up shuffling down the spectrum a little;
- cosmic, to remind us that it is not the same as Doppler redshift, caused by the relative movement between bodies.
As space has expanded a lot since some of the oldest light was emitted, so its frequency has dropped significantly: the frequency is now in the microwave part of the electromagnetic spectrum.
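The frequency drop tracks the stretch of space directly. A sketch (the stretch factor of roughly 1100 since the oldest light was emitted is a standard cosmological figure; the function name is our own):

```python
def redshifted_frequency(emitted_frequency, stretch_factor):
    """If space has stretched by some factor since emission, successive
    wave crests arrive that much further apart, so the frequency falls
    by the same factor."""
    return emitted_frequency / stretch_factor

# Light emitted at roughly visible frequencies (~3e14 Hz), stretched
# about 1100-fold, now arrives at ~2.7e11 Hz: in the microwave region.
f_now = redshifted_frequency(3e14, 1100)
```
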
The speed of light connects time and distance
Physics Narrative for 14-16
Time, straight lines and light
The speed of light is constant. It is the same everywhere, and is the top speed: a universal speed limit. The way we measure distances now depends on this speed: the speed of light has been defined as 299,792,458 metres per second since 1983. So a metre is now how far light travels in 1/299,792,458 second.
In more easily visualisable terms, one foot (approximately a 30 centimetre ruler) is very nearly the distance light covers in a nanosecond (one thousand-millionth of a second, or 1 × 10^-9 second).
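Both statements are easy to check with a two-line calculation:

```python
SPEED_OF_LIGHT = 299_792_458  # m/s, exact by definition since 1983

# A metre is the distance light covers in 1/299,792,458 of a second:
metre = SPEED_OF_LIGHT / 299_792_458          # exactly 1.0

# In one nanosecond light covers just under 30 cm – about one foot:
distance_per_ns = SPEED_OF_LIGHT * 1e-9       # ≈ 0.2998 m
```
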
But light is also used to define straight lines. In laser surveying, for example, beams of light are laid out across an archaeological site to define a regular grid. All other things being equal (that is, if the refractive index of the medium does not change), a beam of light reliably shows a straight line.
So the fastest way from A to B is always by electromagnetic waves, as these travel along a straight line, and at the top speed. That is, the shortest trip time is always the one taken by light or any other electromagnetic wave. Any path other than a straight line will take longer, as distance is converted to time at a constant rate, by the universal converter, the speed of light. This all seems far too obvious, and fine in a vacuum (where all other things are always equal), but there are two problems.
- How does the light know what the straight-line path will be before it arrives? That is, why does light travel in straight lines?
- Under some circumstances, light clearly does not travel in straight lines. When all other things are not equal, it certainly does not. You might, for example, put a prism or mirror in the beam. What then happens to the principle of shortest trip time?
These are both good questions as they seek yet deeper reasons for things that you may not have previously questioned.
You might suspect that trip times will turn out to be more important, as these link what happens at the source with what happens at the detector.
After all, a wave is essentially a regular vibration of given frequency and amplitude that is connected to other vibrations, so that the pattern at the source propagates at a certain speed: the delay between the source and the mimicry of the source by the detector is set by the trip time.
Why these rays?
Rules about rays: where do they come from?
Why is it true that the angle of incidence equals the angle of reflection? One answer explored earlier is an appeal to the rules about rays: that's just how you draw them, and once you have the ray, it predicts the beam. But this tells you how, and does not really even attempt to answer the why question. That requires another level of enquiry: why are the rays drawn in just that way, and no other? One way of approaching this is to try the alternatives and see what happens. Perhaps playing out what does not happen in our imagination will give an insight that'll suggest why it does not happen when played out in the physical world.
In refraction, the rule is again empirical: the ratio of the sines of the angles of incidence and refraction is a constant for a given pair of materials. Just why is this true? Again, just following the rule tells us how but not why.
Light travels in straight lines: that is, it takes the shortest path from A to B. How can we identify a straight line? Just picking up a ruler may not be reliable enough. One way is to try out a few paths along which the light could travel, and then choose the one with the shortest time – after all, that builds on the rather secure knowledge that the speed of light is constant.
Maybe it's worth imagining doing just this.
Here we're suggesting a series of theoretical explorations to find out what's special about the rays from source to detector, involving propagation, a reflection or a refraction. So we'll draw many possible paths, seeing what varies as we do so. You already know that the trip time is important for radiating, because it gives the delay between what the vibrations at the source are up to and the time when the detector will undergo the same vibration. The trip time is, of course, set by the speed at which the radiations propagate and the distance that they travel, so it is set by the geometry of the situation. The geometry defines the paths by linking the source, a single moveable waypoint and the detector. You could, of course, define more complex paths to try by adding as many waypoints as you like, but the principle would not change.
It turns out that the rays are always drawn along a path that corresponds to the shortest trip time. You could, of course, ask in turn why that is the case and so why the path of exactly least time is selected from amongst the many possible paths. That's indeed a good question. Perhaps it's just because nature is lazy, or maybe there is a deeper unifying principle at work. That's how it goes in physics – every good answer breeds further questions. We'll show you a bit more of the answer to this one in episode 03.
Predicting rays for reflections: why are there certain rules about rays?
Now model a mirror. Will light still follow the path of least time and, if it does, will this account for the rule that you've used without subjecting it to analytical enquiry? One way to answer the question is simply by drawing: a mathematical experiment. Draw a source, a detector and a mirror. Draw in as many paths to try as you like.
From source to mirror will be a straight line, as will from mirror to detector, as these are both straightforward propagation, and you've already established that light travels in straight lines under these circumstances – you don't need to try any of the many curves that link these pairs of points. But the point on the mirror is not pre-ordained, so there are still many pairs of lines to try. For each, measure a distance, and then convert this distance to a trip time, using the universal converter of the speed of light. Based on what we found out about propagation, the best guess would be that there is a pair of lines showing the rays where the angle of incidence is equal to the angle of reflection, and that this pair has the minimum trip time for the journey between source and detector. Then you can repeat for a variety of different locations for source, detector and mirror, so producing an overwhelming quantity of empirical evidence that minimising the trip time in these circumstances is indeed the underpinning principle that accounts for the brute fact. You might reasonably enlist the help of a computer for this kind of repetitive calculation. For proof, you'd need to enlist a mathematician who could do differential calculus and show that the minimum trip time for any situation would always predict the angles correctly. Maybe you have a colleague or clever A-level mathematician who needs a challenge…
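The mathematical experiment just described fits in a few lines. This sketch (our own geometry and names) scans waypoints along a flat mirror lying on the x-axis, keeps the one with the least trip time, and checks that it makes the angle of incidence equal the angle of reflection:

```python
import math

def trip_time(source, waypoint_x, detector, speed):
    """Two straight legs: source -> point on the mirror (y = 0) -> detector."""
    leg1 = math.dist(source, (waypoint_x, 0.0))
    leg2 = math.dist((waypoint_x, 0.0), detector)
    return (leg1 + leg2) / speed

source, detector, c = (0.0, 3.0), (10.0, 2.0), 3e8

# Try many waypoints and keep the one with the least trip time.
best_x = min((x * 0.001 for x in range(10001)),
             key=lambda x: trip_time(source, x, detector, c))

# Angles measured from the normal (here the y-direction):
incidence = math.atan2(best_x - source[0], source[1])
reflection = math.atan2(detector[0] - best_x, detector[1])
# The least-time waypoint is where the two angles are equal.
```

For this geometry the least-time point sits at x = 6, which is exactly where the usual mirror-image construction puts it: the two accounts agree.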
Rules about rays used to predict refractions – but where do the rules come from?
Thinking about refraction raised two questions, of which only the first was answered.
- How, exactly, is the angle of incidence related to the angle of refraction?
- Why are the angles related in this way?
Let's start with the empirical rule again.
The rule is that the ratio of the sine of the angle of incidence to the sine of the angle of refraction is a constant for a given pair of materials: the greater the difference in speed between the materials, the greater the constant. Sines are trigonometric functions, and so connect the angles to distances in triangles. So you have something comparing distances and something comparing speeds. Maybe trip times will again provide a unifying principle – they have already provided an account of the brute facts for both propagation and for the law of reflection. Maybe a further unification will be possible.
Time for further simple experiments with a pencil and paper, or perhaps for enlisting the assistance of a computer in doing the repetitive computations. Draw up a source and a detector and draw a surface representing a change of medium between them. Set different speeds of propagation in the two media. Then try out many different waypoints on the surface, one at a time. Calculate the trip time from source to detector. The path from source to waypoint can be a straight line, as can the path from waypoint to detector, as these are both propagation, and you already know that the shortest time for these legs will be a straight line. Find the waypoint and corresponding paths that make the trip time a minimum. Ink this pair in, making them rays. These rays will exactly predict the passage of the beam. The principle of minimum trip time has accounted for another brute fact.
Snell's law is a consequence of least time paths
Snell's law is a simple consequence of the least time principle. Check this by calculating the ratio of the sines of the angles of incidence and refraction for a range of different source and detector locations, so exploring a range of angles of incidence. Try some modelling using a computer. You'll always set a single constant for the chosen pair of speeds for the two different media. Remember that this choice corresponds to the beam traversing the surface between two materials. You could extend this to a situation where a beam traverses a block of a different medium, so there are two possible waypoints and three paths to consider. You'll get exactly the same mathematical prediction – that minimising the trip time can only be done by selecting paths that lie on the rays for which Snell's law is true. If the sides of the block are parallel, the incident beam will simply be offset from its original line but not deviate from this line.
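Here is that check as a short computation (our own geometry and names): minimise the trip time over waypoints on the surface, with different speeds above and below, and confirm that the ratio of sines equals the ratio of speeds:

```python
import math

def trip_time(source, waypoint_x, detector, v_above, v_below):
    """Source above the surface (y = 0), detector below; each leg is a
    straight line, but the two legs are covered at different speeds."""
    leg1 = math.dist(source, (waypoint_x, 0.0))
    leg2 = math.dist((waypoint_x, 0.0), detector)
    return leg1 / v_above + leg2 / v_below

source, detector = (0.0, 4.0), (6.0, -3.0)
v1, v2 = 3.0e8, 2.0e8   # light is slower in the denser medium

best_x = min((x * 0.001 for x in range(6001)),
             key=lambda x: trip_time(source, x, detector, v1, v2))

sin_i = (best_x - source[0]) / math.dist(source, (best_x, 0.0))
sin_r = (detector[0] - best_x) / math.dist((best_x, 0.0), detector)
# Snell's law drops out of the least-time choice:
# sin(i) / sin(r) equals the speed ratio v1 / v2.
```

Change the positions of source and detector and the best waypoint moves, but the ratio of sines stubbornly stays at v1/v2: one principle, many geometries.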
Next, minimise trip times for a beam traversing a block of material where the sides through which the beam enters and exits are not parallel (e.g. a prism). Here the prediction is that the beam deviates. You could have predicted this from applying the brute fact of Snell's law. The new geometry, where the two normals (one for entry and one for exit) are not at the same angle, has the consequence that the beam will deviate. As the deviation is frequency dependent (the speed in the glass depends on frequency), and the change is applied in the same sense both times – that is, the deviation rotates the beam in the same direction on entry and on exit (so there is no hint of undoing what was done) – you might also reasonably expect dispersion: the different frequencies present in the single incident beam now travel out in different directions from the prism.
Engineering with trip times
Physics Narrative for 14-16
Bending mirrors to make all path times the same
Angling two mirrors can focus two beams. You can predict where it will be brighter by drawing rays from two sources, and using the rules about reflection. That gives a rule-based approach to accounting for the behaviour. But earlier we started to suggest that least-time paths were all-important in predicting beams – that is, they predicted the rays. Now you've got a slightly different challenge in that the effect of two rays is being predicted (it could easily be more, as we might have a multi-segmented mirror, perhaps even an infinite number of segments, to give a smooth parabola, or any other shape that we choose, by altering the angle between the segments). Earlier, multiple paths were considered as possibilities, leading to a unifying principle: rays are to be drawn along the line where the path trip time is least.
This diversion is something of a preview of episode 03, where multiple paths contribute, and also a bit of underpinning for the rules introduced for drawing ray diagrams for mirrors (here) and lenses (to follow).
Let's assume that the vibrators are in step at the sources (that's a good plan, because we could just be modelling a spread out beam, by selecting the extremes of the beam). How can we guarantee that the beams will add up to a large contribution at the detector? Simple – arrange the paths so that the delay injected between source and detector is the same or nearly the same. And how do you do that? Make sure the trip times are the same. The interactive allows you to do just that, and see a parabolic mirror emerge as if by magic. Only it's not magic: it's only working with the essence of waves – do like me, but later – and geometry, which sets the trip times.
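You can check the equal-trip-time claim for a parabolic mirror directly. For the parabola y = x²/(4f), with its focus at (0, f), the path from any point on an incoming parallel wavefront down to the mirror and then up to the focus has the same length wherever it strikes (a numeric check, with our own set-up):

```python
import math

FOCAL_LENGTH = 2.0
WAVEFRONT_HEIGHT = 10.0   # incoming parallel rays all start on y = 10

def path_length(x):
    """Down from the wavefront to the parabolic mirror, then to the focus."""
    y_mirror = x**2 / (4 * FOCAL_LENGTH)
    down = WAVEFRONT_HEIGHT - y_mirror
    to_focus = math.dist((x, y_mirror), (0.0, FOCAL_LENGTH))
    return down + to_focus

lengths = [path_length(x) for x in (-3.0, -1.0, 0.0, 0.5, 2.5)]
# Every path has length WAVEFRONT_HEIGHT + FOCAL_LENGTH = 12, so every
# trip time is the same: all the contributions arrive at the focus in step.
```

Equal path lengths at one speed mean equal trip times: the mirror's shape is doing exactly the delay-matching job described above.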
Shaping a glass block to make all path times the same
Lenses are ground, and in a particular way, so that the trip times from source to detector are constant, if the detector is placed at the focus. That is, lenses are ground so that the contributions from each path are all in step. This happens when the trip times from source to detector are equal, or very nearly so.
If we had a slightly imperfect lens, or one that exhibited chromatic aberration, then one frequency of the radiations might be in step, and another not quite. But – near enough, perhaps, for the eye to be fooled. These kinds of judgements must be made in the craft of making high-quality lenses that have to work over a range of frequencies.
Here you can model the construction of a simple lens, for only a single frequency. The apparent speed of light in the glass varies with frequency, and this gives rise to differing trip times for the different frequencies, for a given curvature of the lens. That variation in speed cannot be modelled here, but you can magically assemble a lens by fixing the thicknesses of the different prisms that can be thought of as making up the lens.
Apart from the pleasure of seeing new connections – exploiting the essence of waves (do like me, but later) and geometry to set the trip times – there is another important message here. Where the differences between trip times are small, there interesting things happen. Hold that thought for episode 03, where contributions from different paths are essential to a discussion of superposition, held by many to be the real signature effect of waves.
Signalling with vibrations
Physics Narrative for 14-16
Signals enable information to move from source to detector
Signals communicate information, and the need to get that information over long distances and through adverse conditions has led to the use of a number of different solutions to the carrier problem. Carriers have ranged from running humans, through somewhat swifter carrier pigeons, to using the carrier that travels at the ultimate speed – light and other members of its electromagnetic family.
Just shouting loudly only gets the information so far, and cannot shift a message very rapidly. There is a limit both to the number of words a second that can be crammed into a shouted message while keeping it intelligible, and to how quickly that message, once set on its way, arrives at its destination. This trip time to destination is rather simply calculated, once we have chosen the kind of vibration.
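The trip-time arithmetic really is simple. A sketch comparing a shouted message with a radio transmission over the same distance (textbook speeds):

```python
SPEED_OF_SOUND = 343.0        # m/s in air, roughly
SPEED_OF_LIGHT = 299_792_458  # m/s

def trip_time(distance, speed):
    """Time for the chosen vibration to cover the distance."""
    return distance / speed

# Across a 686 m park: sound takes 2 s, radio about 2.3 microseconds.
t_shout = trip_time(686, SPEED_OF_SOUND)   # 2.0 s
t_radio = trip_time(686, SPEED_OF_LIGHT)   # ≈ 2.3e-6 s
```
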
Coding information using a vibration brings in new ideas (you must encode the information somehow) and the choices you make will determine how much information can be delivered by each second of the vibration. It will not determine how long that second of vibration (the carrier) takes to travel from source to detector. The signal is a physical manifestation of the information that needs transmitting; the carrier is the vibration that is modified to enable that transmission to occur.
Smoke signals use light but are digital: either there is smoke or there is not. Sender and receiver will need a shared code for the communication to work. For analogue signals one can think of having different brightnesses of light transmitted and that the brightness of the light could be made to follow the changes in the loudness of speech. In that way the speech could be encoded in the light signal. As light signals can often travel farther than sound, one problem of communication is solved. This is a possible solution, but light is obstructed by clouds, snow, fog, rain, trees and tall buildings. So this could work if we had a good pipe down which to send the light (e.g. an optical fibre). For fixed point to fixed point communication, where there is lots of information to be sent (e.g. changing pictures as well as changing sound), this is the preferred solution. However, the sender of the signal does not always know where the recipient will be, leading to the common use of electromagnetic waves to make a signal widely available. Here a frequency is chosen that is not blocked by rain, snow and buildings: radio and television frequencies are used to broadcast signals.
Each station has its own frequency that a radio or television can tune in to: the signal for that station is transmitted only at that frequency. This frequency is the carrier frequency: the charged particles in the transmitter vibrate at this frequency when there is no signal. The signal then modifies this vibration. Vibrations are characterised by amplitude and frequency, so the two straightforward possibilities are to allow the signal to alter the amplitude, or alter the frequency, but not by too much in either case, if the communication is to be successful.
Amplitude modulation
Physics Narrative for 14-16
Encoding a signal by changing the amplitude
Amplitude modulation is one way of encoding a signal on a carrier wave.
The carrier frequency remains constant but the amplitude of the carrier is modulated.
To encode digital signals on the carrier, an agreement that full amplitude is the code for a 1 and zero amplitude is the code for a 0 is sufficient. Any binary signal can then be carried from the source to any detector, where it can be decoded back into a string of ones and zeros.
To encode analogue signals, the carrier amplitude must vary smoothly. Do this by multiplication: the carrier amplitude (a constant) is multiplied by the signal amplitude to give the modulated amplitude. To avoid difficulties, the amplitude of the signal is first normalised so that it lies between 0 and 1.
Then a non-stop multiplication is carried out in the encoder.
You can see the process at work, encoding either your own signal, or a prepared signal, using the interactive. By looking at the model you can see the relationship between the input (the signal and carrier) and the output (the amplitude modulated carrier).
More technical details (optional extra depth on the mathematics)
The modulated amplitude can then alter between 0 and the carrier amplitude. However, transmitting for long periods where the signal is zero might well be confused with receiving no signal at all. So in practice this sum is modified by introducing another factor, which determines the depth of the modulation – again lying between 0 and 1. A value of 1 reproduces 100 % modulation. A value of 0.5 is much more useful.
This is the new sum, again carried out continuously at the encoder.
And that's it.
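The running multiplication, with the depth factor included, can be sketched in a few lines of Python. This is a minimal sketch: the function name, parameter names and sample values are illustrative choices, not taken from the text.

```python
import math

def am_encode(signal, carrier_freq=100.0, carrier_amp=1.0, depth=0.5,
              sample_rate=5000.0):
    """Amplitude-modulate a carrier with a signal normalised to lie in 0..1.

    depth = 1 gives 100% modulation (the amplitude swings from 0 up to
    carrier_amp); depth = 0.5 keeps the amplitude above half of carrier_amp,
    so a quiet signal is never mistaken for no transmission at all.
    """
    lo, hi = min(signal), max(signal)
    out = []
    for n, x in enumerate(signal):
        s = (x - lo) / (hi - lo)                           # normalise to 0..1
        amplitude = carrier_amp * ((1.0 - depth) + depth * s)  # the "new sum"
        t = n / sample_rate
        out.append(amplitude * math.sin(2 * math.pi * carrier_freq * t))
    return out

# A slow 3 Hz "speech" signal, sampled 5000 times over one second.
signal = [math.sin(2 * math.pi * 3 * n / 5000.0) for n in range(5000)]
modulated = am_encode(signal)
```

The multiplication runs once per sample, which is the non-stop calculation carried out in the encoder.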
Digital signals by amplitude modulation
Although either the amplitude or the frequency of a carrier can be changed to encode a digital signal, here we have chosen to show amplitude modulation only, simply because it is easier to show what is going on.
There are two essential steps in digitising the signal to the final transmittable form.
Step 1: the signal is sampled, converting the pattern of the vibration into a string of numbers.
Step 2: these numbers are converted to binary, resulting in a string of ones and zeros.
Once you have a string of such numbers, simply modulate the amplitude of the carrier, as before. Only this time you need only two levels: the first to represent a zero, and the second to represent a one.
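The two steps, followed by two-level modulation, can be sketched as below. Again a minimal sketch: the bit depth, sample counts and names are illustrative assumptions, not from the text.

```python
import math

def digitise(signal, n_bits=4):
    """Step 1: quantise each sample to the nearest integer level.
    Step 2: write each level out in binary, giving a string of ones and zeros."""
    levels = 2 ** n_bits - 1
    lo, hi = min(signal), max(signal)
    bits = []
    for x in signal:
        level = round((x - lo) / (hi - lo) * levels)   # nearest of 16 levels
        bits.extend(int(b) for b in format(level, f"0{n_bits}b"))
    return bits

def am_digital(bits, cycles_per_bit=4, samples_per_cycle=20):
    """Two-level amplitude modulation: full amplitude for a 1, zero for a 0."""
    out = []
    for bit in bits:
        for n in range(cycles_per_bit * samples_per_cycle):
            out.append(bit * math.sin(2 * math.pi * n / samples_per_cycle))
    return out

samples = [math.sin(2 * math.pi * k / 8) for k in range(8)]  # one cycle, 8 samples
bits = digitise(samples)        # 8 samples x 4 bits = 32 ones and zeros
carrier = am_digital(bits)      # carrier bursts for 1s, silence for 0s
```

The carrier simply switches between full amplitude and zero, one burst per bit.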
Encoding a signal by altering the frequency
Frequency modulation is another way of encoding a signal on a carrier wave.
The carrier amplitude remains constant but the frequency of the carrier is modulated. This change is usually constrained, so the carrier frequency changes over a small range.
To encode digital signals on the carrier, an agreement that the carrier frequency shifted by a small fixed amount is the code for a 1 and the carrier frequency itself is the code for a 0 is sufficient. Any binary signal can then be carried from the source to any detector, where it can be decoded back into a string of ones and zeroes.
To encode analogue signals, the carrier frequency must vary smoothly. Do this by addition – a term set by the instantaneous value of the signal is added to the carrier frequency (a constant) to give the modulated frequency. To avoid difficulties, the amplitude of the signal is first normalised.
So a non-stop sum is carried out in the encoder.
And that's it.
You can see the process at work, encoding either your own signal, or a prepared signal, using the interactive. By looking at the model, you can see the relationship between the input (the signal and carrier) and the output (the frequency modulated carrier).
More technical details (optional extra depth on the mathematics)
As a term set by the signal is added to the carrier frequency to find the modulated frequency, the modulated frequency swings either side of the carrier frequency. To control the range over which it can swing, the signal is first normalised (mapped so that it swings between +1 and –1) and a new parameter is added. The sum is then modified by introducing another factor that determines the swing of the modulation, setting the fraction of the carrier frequency by which the modulated frequency can alter. All this careful calculating restricts the range of frequencies that must be detected so that the full signal is available to decode. That's why FM (frequency modulated) radio stations cannot be too close in frequency – each needs a small range of frequencies, rather than one very well defined frequency, to transmit the information.
This is the new sum, again carried out continuously at the encoder.
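The sum, with the swing factor included, can be sketched as follows. As before, the names (including the swing parameter) are illustrative assumptions rather than standard terms from the text.

```python
import math

def fm_encode(signal, carrier_freq=100.0, swing=0.1, sample_rate=5000.0):
    """Frequency-modulate a carrier: the normalised signal (between -1 and +1)
    shifts the instantaneous frequency by at most swing * carrier_freq
    either way, so the frequencies stay within a small, known range."""
    lo, hi = min(signal), max(signal)
    phase, out = 0.0, []
    for x in signal:
        s = 2.0 * (x - lo) / (hi - lo) - 1.0          # normalise to -1..+1
        freq = carrier_freq * (1.0 + swing * s)        # the "new sum"
        phase += 2 * math.pi * freq / sample_rate      # accumulate phase
        out.append(math.sin(phase))
    return out

# The same slow 3 Hz signal as before, sampled 5000 times over one second.
signal = [math.sin(2 * math.pi * 3 * n / 5000.0) for n in range(5000)]
modulated = fm_encode(signal)
```

Note that the amplitude of the output never changes; only the spacing of the cycles does.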
Signal and noise
Getting the message through: the consequences of different encodings
There are choices to be made when encoding a signal on a carrier wave. You can alter the vibrations of the source in two ways:
- By using the signal vibration to alter the frequency of the carrier vibration.
- By using the signal vibration to alter the amplitude of the carrier vibration.
There is a further choice to be made, in how the signal is encoded before the carrier wave is altered:
- The signal can be represented as many different levels (analogue).
- The signal can be represented as a series of ones and zeroes (digital).
That allows four possible combinations: AM or FM; digital or analogue.
The encoded signal then has to travel from the transmitter to the receiver, often through adverse conditions in which it may be altered by being combined with other vibrations. So the amplitude or the frequency, or both, may be changed by noise being added to the signal.
The success in getting the information through is different for each of these four different options. That is, the robustness of the transmission as conditions get less favourable depends on the encoding. The extent to which the signal is still recoverable, or recognisable, as more and more noise is added, is one good measure of robustness. As general rules: digital is more robust than analogue, because you only have to judge between two different levels; frequency modulation is better than amplitude modulation because noise affects amplitude more readily than frequency.
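Why digital is the more robust choice can be shown with a toy decoding step. This is a sketch with made-up noise values, not a realistic channel model.

```python
# Each received value is the transmitted level plus noise picked up in transit.
bits = [1, 0, 1, 1, 0, 0, 1, 0]
noise = [0.30, -0.25, 0.20, -0.30, 0.28, -0.20, 0.25, 0.30]
received = [b + n for b, n in zip(bits, noise)]

# The receiver only has to judge "above or below halfway", so moderate noise
# is removed completely. An analogue level (say 0.62) would have no such
# threshold to snap back to, and would stay corrupted.
decoded = [1 if r > 0.5 else 0 for r in received]
```

So long as the noise stays smaller than half the gap between the two levels, the decoded string is identical to the transmitted one.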
Vibrating - radiating - absorbing
A vibration is mimicked elsewhere: its amplitude and frequency are copied, but with a time delay
Something vibrates. Later, somewhere else, something else mimics that vibration. The two objects are linked somehow. A wave theory is one way to provide an account of the links between the two. A wave is just a whole chain of things, vibrating one after the other. In the SPT: Sound topic and the SPT: Light topic lots of phenomena were presented, and some work was done on vibrations and what a time trace of the vibrations might look like. For sound it was rather easy to imagine a whole chain of links between source and detector; for light, rather less easy. In this episode, you've made a much more thorough and rigorous study of the connections between source and absorber, rather than just asserting "It's a wave". You've been restricted to a single beam: that'll change in episode 03. You've also been restricted to a pre-1900 view: that'll change in episode 02.
The central ideas:
- Everything that vibrates, or oscillates, has a frequency and an amplitude.
- Many vibrations are propagated from the source, often at a characteristic family speed, to their eventual terminus at an absorber. This is called radiating.
- Such radiating can be a way of shifting energy from the source to the absorber: the heating by radiation pathway.
- Some absorbers can function as detectors, because there is some change that we can interpret as the arrival of radiation.
- Rays can be used to predict beams.
- The behaviour of optical devices (lenses and prisms; mirrors, both plane and curved) can be accounted for using rules about rays.
- Careful thinking about steadily changing trip times and frequency accounts for the Doppler effect.
- Waves can be modulated (frequency or amplitude) to carry a signal, and so information.
- Considering many paths, and looking for a minimum in trip time, allows you to predict the rays.