[Relativity FAQ] - [Copyright]
Original 12-Oct-1995 by Michael Weiss
People sometimes argue over whether the Lorentz-Fitzgerald contraction is "real" or not. That's a topic for another FAQ entry, but here's a short answer: the contraction can be measured, but the measurement is frame-dependent. Whether that makes it "real" or not has more to do with your choice of words than the physics.
Here we ask a subtly different question. If you take a snapshot of a rapidly moving object, will it look flattened when you develop the film? What is the difference between measuring and photographing? Isn't seeing believing? Not always! When you take a snapshot, you capture the light-rays that hit the film at one instant (in the reference frame of the film). These rays may have left the object at different instants; if the object is moving with respect to the film, then the photograph may give a distorted picture. (Strictly speaking snapshots aren't instantaneous, but we're idealizing.)
Oddly enough, though Einstein published his famous relativity paper in 1905, and Fitzgerald proposed his contraction several years earlier, no one seems to have asked this question until the late '50s. Then Roger Penrose and James Terrell independently discovered that the object will not appear flattened [1,2]. People sometimes say that the object appears rotated, so this effect is called the Penrose-Terrell rotation.
Calling it a rotation can be a bit confusing though. Rotating an object brings its backside into view, but it's hard to see how a contraction could do that. Among other things, this entry will try to explain in just what sense the Penrose-Terrell effect is a "rotation".
It will clarify matters to imagine two snapshots of the same object, taken by two cameras moving uniformly with respect to each other. We'll call them his camera and her camera. The cameras pass through each other at the origin at t=0, when they take their two snapshots. Say that the object is at rest with respect to his camera, and moving with respect to hers. By analysing the process of taking a snapshot, the meaning of "rotation" will become clearer.
How should we think of a snapshot? Here's one way: consider a pinhole camera. (Just one camera, for the moment.) The pinhole is located at the origin, and the film occupies a patch on a sphere surrounding the origin. We'll ignore all technical difficulties(!), and pretend that the camera takes full spherical pictures: the film occupies the entire sphere.
We need more than just a pinhole and film, though: we also need a shutter. At t=0, the shutter snaps open for an instant to let the light-rays through the pinhole; these spread out in all directions, and at t=1 (in the rest-frame of the camera) paint a picture on the spherical film.
Let's call points in the snapshot pixels. Each pixel gets its color due to an event, namely a light-ray hitting the sphere at t=1. Now let's consider his & her cameras, as we said before. We'll use t for his time, and t' for hers. At t=t'=0, the two pinholes coincide at the origin, the two shutters snap simultaneously, and the light rays spread out. At t=1 for his camera, they paint his pixels; at t'=1 for her camera, they paint hers. So the definition of a snapshot is frame-dependent. But you already knew that. (Pop quiz: what shape does he think her film has? Not spherical!) (More technical difficulties: the rays have to pass right through one film to hit the other.)
So there's a one-one correspondence between pixels in the two snapshots. Two pixels correspond if they are painted by the same light-ray. You can see now that her snapshot is just a distortion of his (and vice versa). You could take his snapshot, scan it into a computer, run an algorithm to move the pixels around, and print out hers.
So what does the pixel mapping look like? Simple: if we put the usual latitude/longitude grid on the spheres, chosen so that the relative motion is along the north-south axis, then each pixel slides up towards the north pole along a line of longitude. (Or down towards the south pole, depending on various choices I haven't specified.) This should ring a bell if you know about the aberration of light: if our snapshots portray the night-sky, then the stars are white pixels, and aberration changes their apparent positions.
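The pixel-sliding can be made quantitative with the standard aberration formula, cos(theta') = (cos(theta) + beta)/(1 + beta cos(theta)), where theta is the angle from the motion axis and beta = v/c. Here is a minimal sketch in Python (the function name and sample speeds are my own illustrative choices):

```python
import math

def aberrate(theta, beta):
    """Relativistic aberration: map a pixel's polar angle theta (measured
    from the motion axis, in his frame) to the angle theta' seen in her
    frame, for relative speed beta = v/c.  Longitude is unchanged."""
    c = (math.cos(theta) + beta) / (1 + beta * math.cos(theta))
    return math.acos(c)

# Every pixel slides toward the pole: theta' < theta for 0 < theta < pi,
# while the poles themselves (theta = 0, pi) stay fixed.
beta = 0.6
for theta in (0.5, 1.5, 2.5):
    print(theta, "->", aberrate(theta, beta))
```

A pixel on the "equator" (theta = pi/2), for instance, slides up to acos(beta).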
Now let's consider the object--- let's say a galaxy. In passing from his snapshot to hers, the image of the galaxy slides up the sphere, keeping the same face to us. In this sense, it has rotated. Its apparent size will also change, but not its shape (to a first approximation).
The mathematical details are beautiful, but best left to the textbooks [3,4]. Just to entice you if you have the background: if we regard the two spheres as Riemann spheres, then the pixel mapping is given by a fractional linear transformation. Well-known facts from complex analysis now tell us two things. First, circles go to circles under the pixel mapping, so a sphere will always photograph as a sphere. Second, shapes of objects are preserved in the infinitesimally small limit. (If you know about the double covering of the Lorentz group by SL(2,C), that also comes into play; [3] is a good reference.)
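If you'd like to see the circles-go-to-circles fact numerically rather than through complex analysis, here is a small sketch; the particular Moebius coefficients and the circle-fitting helper are illustrative choices of mine, not anything from the references:

```python
import cmath

def mobius(w, a, b, c, d):
    """The linear fractional transformation w -> (aw+b)/(cw+d)."""
    return (a*w + b) / (c*w + d)

def circle_through(p, q, r):
    """Center and radius of the circle through three non-collinear
    complex points (the classical circumcenter formula)."""
    ax, ay, bx, by, cx, cy = p.real, p.imag, q.real, q.imag, r.real, r.imag
    det = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / det
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / det
    center = complex(ux, uy)
    return center, abs(p - center)

# An arbitrary sample map with ad - bc = 1, chosen so that its pole
# w = -d/c = -2 does NOT lie on the unit circle (otherwise the image
# would be a straight line, a "circle through infinity").
a, b, c, d = 1, 1, 1, 2

points = [cmath.exp(1j * k) for k in range(8)]        # on the unit circle
images = [mobius(w, a, b, c, d) for w in points]

# Fit a circle to the first three images; all the others land on it too.
center, radius = circle_through(*images[:3])
print(all(abs(abs(z - center) - radius) < 1e-9 for z in images))   # True
```

Shrink the source circle and the image approaches a circle of the same shape, which is the second fact (conformality in the small).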
[1] and [2] are the original articles. [3] and [4] are textbook treatments. [5] has beautiful computer-generated pictures of the Penrose-Terrell rotation. The authors of [5] later made a video [6] of this and other effects of "SR photography".
Addendum, 4 May 1998 by Chris Hillman
The above article on Penrose-Terrell rotations mentions in passing that every Lorentz transformation acts on the celestial sphere the same way that the corresponding Moebius transformation acts on the Riemann sphere, but it is not very explicit. This addendum is an expanded entry on the following fascinating facts, which to my mind are the most interesting things about the Lorentz group:
1. Vectors in R^4 with the Minkowski inner product, i.e. E(1,3), are in bijection with two by two Hermitian matrices according to the prescription

   (t,x,y,z) <---> [ t+z   x+iy ]
                   [ x-iy  t-z  ]
Lorentz transformations can be represented by SL2(C) matrices

   Q = [a b]
       [c d],   ad - bc = 1
where you can read off the Lorentz transformation L(Q) corresponding to Q by looking at X -> QX(Q*), where * denotes the conjugate transpose. Going in the other direction, L(Q) can be represented by either Q or -Q, but we have a bijection
+/-Q <---> L(Q) = L(-Q)
which gives a group isomorphism between the (proper, orthochronous) Lorentz group and PSL2(C), the group SL2(C) modulo its center, +/- the identity matrix.
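As a concrete check, the correspondence and the action X -> QX(Q*) can be coded directly. This is just a numerical sketch (the helper names are my own); it verifies that det X, which equals t^2 - x^2 - y^2 - z^2, is preserved whenever det Q = 1, and that Q and -Q give the same transformation:

```python
def mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def X(t, x, y, z):
    """Hermitian matrix of (t,x,y,z); note det X = t^2 - x^2 - y^2 - z^2."""
    return [[t + z, x + 1j*y], [x - 1j*y, t - z]]

def vec(M):
    """Recover (t,x,y,z) from its Hermitian matrix."""
    return ((M[0][0] + M[1][1]).real / 2, M[0][1].real,
            M[0][1].imag, (M[0][0] - M[1][1]).real / 2)

def lorentz(Q, v):
    """The Lorentz transformation L(Q): X -> Q X Q*."""
    return vec(mul(mul(Q, X(*v)), dagger(Q)))

def minkowski(v):
    t, x, y, z = v
    return t*t - x*x - y*y - z*z

Q = [[2, 1j], [1, (1 + 1j) / 2]]     # an arbitrary sample with det Q = 1
v = (2.0, 0.3, -0.4, 1.0)
w = lorentz(Q, v)
print(minkowski(v), minkowski(w))    # both 2.75, up to rounding

# L(Q) = L(-Q): negating Q gives the same Lorentz transformation.
Qneg = [[-e for e in row] for row in Q]
print(all(abs(a - b) < 1e-12 for a, b in zip(lorentz(Qneg, v), w)))   # True
```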
In particular,
   exp p/2 [1  0]   [ exp(p/2)      0     ]
           [0 -1] = [    0      exp(-p/2) ]   =  boost by p along z axis

   exp p/2 [i  0]   [ exp(ip/2)      0      ]
           [0 -i] = [    0      exp(-ip/2)  ]   =  rotation by p about z axis
The appearance of "half angles" p/2 is characteristic of the two-fold "spinorial covering" of the Lorentz group by SL2(C). For instance, matrices of the second form above yield a "one parameter subgroup" of SL2(C) as we let p vary, which "covers" the one parameter subgroup SO(2) of PSL2(C) or SO(1,3) the same way that the circular edge of a Moebius band covers its central circle.
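A quick numerical sanity check of both claims (a sketch, with names of my own choosing): acting with Q = diag(exp(p/2), exp(-p/2)) on the Hermitian matrix of (t,0,0,z) reproduces the familiar boost formulas, and setting p = 2*pi in the rotation matrix exhibits the half-angle:

```python
import math, cmath

def boost_via_sl2(t, z, p):
    """Act with Q = diag(exp(p/2), exp(-p/2)) on X = [[t+z, 0], [0, t-z]]
    via X -> Q X Q*, and read the new (t, z) off the diagonal."""
    a = math.exp(p) * (t + z)        # new t+z entry: exp(p/2)*(t+z)*exp(p/2)
    d = math.exp(-p) * (t - z)       # new t-z entry
    return (a + d) / 2, (a - d) / 2

p = 0.7
t2, z2 = boost_via_sl2(1.0, 0.0, p)
# Agrees with the familiar boost of (1,0,0,0): t' = cosh p, z' = sinh p.
print(abs(t2 - math.cosh(p)) < 1e-12, abs(z2 - math.sinh(p)) < 1e-12)

# The half-angle at work: a rotation by p = 2*pi gives Q = -I, not I,
# even though L(Q) = L(-Q) is the identity on spacetime.
print(cmath.exp(1j * 2 * math.pi / 2))   # exp(ip/2) at p = 2*pi: -1, up to rounding
```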
2. A Moebius transformation, or linear fractional transformation, of the complex numbers, or more properly of the Riemann sphere (C augmented by a "point at infinity"), is given by w -> (aw+b)/(cw+d). Moreover, the bijection

   +/-Q <---> M(Q) = M(-Q)

where M(Q) is the Moebius transformation

          a w + b
   w -> ---------,   ad - bc = 1
          c w + d

and

   Q = [a b]
       [c d],   ad - bc = 1
gives an isomorphism between PSL2(C) and the Moebius group. So, PSL2(C), the Lorentz group, and the Moebius group are all isomorphic as abstract groups.
3. To understand how the appearance of the night sky (the apparent positions of stars and galaxies on the celestial sphere) is altered by a Lorentz transformation, we must look at null lines (one dimensional subspaces spanned by null vectors). Such null lines represent the world lines of photons coming from distant stars and striking the viewer's retina "here and now". Every such null line passes through a unique point on the sphere t = -1, x^2 + y^2 + z^2 = 1, an "equator" of the light cone, and thus corresponds to a unique point on the celestial sphere, i.e. on the Riemann sphere. With one exception, every null line can be represented by a unique Hermitian matrix of the form
   N = [ |w|^2  w ]
       [  w*    1 ]
where w is a complex number which can be associated with a unique location on the Riemann sphere by inverse stereographic projection. Then, a simple computation shows that QN(Q*) has the effect of the Moebius transformation M(Q)
          a w + b
   w -> ---------
          c w + d
In other words, L(Q) transforms the celestial sphere (changes the appearance of the night sky) in exactly the same way that M(Q) transforms the Riemann sphere.
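Here is that simple computation carried out numerically, as a sketch (the particular matrix Q is an arbitrary SL2(C) sample of mine): build N(w), conjugate it by Q, read off the new label from the transformed matrix, and compare with M(Q) applied to w.

```python
def mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def N(w):
    """Hermitian matrix of the null line labeled by w."""
    return [[abs(w)**2, w], [w.conjugate(), 1]]

def transform_null(Q, w):
    """Apply X -> Q X Q* to N(w) and read off the new label w'.
    The result is N(w') up to a real scale, so w' = M[0][1] / M[1][1]."""
    M = mul(mul(Q, N(w)), dagger(Q))
    return M[0][1] / M[1][1]

def mobius(Q, w):
    """The Moebius transformation M(Q): w -> (aw+b)/(cw+d)."""
    (a, b), (c, d) = Q
    return (a*w + b) / (c*w + d)

Q = [[2, 1j], [1, (1 + 1j) / 2]]   # an arbitrary SL2(C) sample, det Q = 1
w = 0.3 - 0.8j
assert abs(transform_null(Q, w) - mobius(Q, w)) < 1e-12
print("Q N Q* moves w exactly as the Moebius map M(Q) does")
```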
4. Lorentz transformations may be classified into four types (elliptic, hyperbolic, loxodromic, and parabolic) according to their geometric effect on the night sky. For example, the parabolic type is represented by
   Q = exp p/2 [0 1]   [1  p/2]
               [0 0] = [0   1 ]

which corresponds via X -> QX(Q*) to

   t -> t + p(pt + 4x - pz)/8
   x -> x + p(t - z)/2
   y -> y
   z -> z + p(pt + 4x - pz)/8

Physically speaking, such a parabolic transformation may be realized by following a boost along, say, the x axis with a rotation about, say, the z axis; if we "mostly boost" we get a hyperbolic transformation; if we "mostly rotate" we get an elliptic transformation; and if we "balance" the amount of rotation and boosting just right we obtain the desired parabolic transformation.
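As a numerical check of the parabolic example (a sketch; I write the group parameter as p to keep it distinct from the coordinate t), one can verify the closed-form coordinate map directly against X -> QX(Q*); with the sign conventions of section 1 the x line comes out as x + p(t - z)/2:

```python
def mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def X(t, x, y, z):
    """Hermitian matrix of (t,x,y,z), as in section 1."""
    return [[t + z, x + 1j*y], [x - 1j*y, t - z]]

# Arbitrary sample parameter and event.
p = 0.8
t, x, y, z = 1.5, 0.2, -0.7, 0.4

Q = [[1, p/2], [0, 1]]              # exp of p/2 [0 1; 0 0]
M = mul(mul(Q, X(t, x, y, z)), dagger(Q))

# The closed-form parabolic map, with the same shift s on t and z.
s = p * (p*t + 4*x - p*z) / 8
t2, x2, y2, z2 = t + s, x + p*(t - z)/2, y, z + s

# Entry-by-entry, Q X Q* equals the matrix of the transformed event.
for lhs, rhs in zip(sum(M, []), sum(X(t2, x2, y2, z2), [])):
    assert abs(lhs - rhs) < 1e-12
print("closed form matches Q X Q*")
```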
5. An excellent, very readable, quite elementary, and widely available reference for the Moebius group is Konrad Knopp, Elements of the Theory of Functions, Dover, 1952.