## Tuesday, September 20, 2005

### Faster than light or not

I don't know about the rest of the world, but here in Germany Prof. Günter Nimtz is (in)famous for his demonstration experiments which he claims show that quantum mechanical tunneling happens instantaneously rather than in accordance with Einstein causality. In the past, he got a lot of publicity for this, and according to Heise online he has now at least issued a new press release.

All these experiments are similar: First of all, he is not doing any quantum mechanical experiments but uses the fact that the Schrödinger equation and the wave equation share similarities. And as we know, in vacuum, Maxwell's equations imply the wave equation, so he uses (classical) microwaves, as they are much easier to produce than the matter waves of quantum mechanics.

So what he does is send a pulse of these microwaves through a region where the waves are "classically" forbidden, meaning that they do not oscillate but decay exponentially. Typically this is a waveguide with a diameter smaller than the wavelength.

Then he measures what comes out at the other side of the waveguide. This is another pulse of microwaves, which is of course much weaker and therefore needs to be amplified. He then measures the time difference between the maximum of the weaker pulse and the maximum of the full pulse when the obstruction is removed. What he finds is that the weak pulse has its maximum earlier than the unobstructed pulse, and he interprets this as the pulse having travelled through the obstruction at a speed greater than the speed of light.

Anybody with a decent education will of course immediately object that the microwaves propagate (even in the waveguide) according to Maxwell's equations, which have special relativity built in. Thus, unless you show that Maxwell's equations no longer hold (which Nimtz of course does not claim), you will never be able to violate Einstein causality.

For people who are less susceptible to such formal arguments, I have written a little program that demonstrates what is going on. The result of this program is this little movie.

The program simulates the free 2+1 dimensional scalar field (of course again obeying the wave equation) with Dirichlet boundary conditions in a box that resembles the waveguide: At first, the field is zero everywhere in the strip-like domain. Then the field on the upper boundary starts to oscillate with a sine wave, and indeed the field propagates into the strip. The frequency is chosen such that the wave can in fact propagate in the strip.

(These are frames 10, 100, and 130 of the movie; further down are 170, 210, and 290.) Roughly in the middle, the strip narrows like the waveguide. You can see that the blob of field does enter the narrower region but dies down pretty quickly. In order to see anything at all, the display (like Nimtz's) amplifies the field in the lower half of the picture by a factor of 1000. After the obstruction ends, the field again propagates as in the upper part.
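My program itself is not reproduced here, but the mechanism can be sketched in one dimension: for a waveguide, the transverse cutoff frequency plays exactly the role of a mass, so I assume the narrow section acts like a mass term in a Klein-Gordon equation u_tt = u_xx - m(x)^2 u. All grid sizes, frequencies and the barrier strength below are made up for illustration.

```python
import numpy as np

# A 1+1d caricature of the setup (NOT the actual 2+1d program): the
# below-cutoff section is modelled by a mass term m(x) in
#   u_tt = u_xx - m(x)^2 u.
# Units with c = 1; leapfrog finite differences.

def run(nt=300, nx=400, dx=1.0, dt=0.9, barrier=False):
    m2 = np.zeros(nx)
    if barrier:
        m2[150:250] = 0.09           # m = 0.3 > drive frequency 0.2: evanescent
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    for n in range(nt):
        t = n * dt
        # drive the left end with a single sine pulse, then hold it at zero
        u[0] = np.sin(0.2 * t) if t < 2 * np.pi / 0.2 else 0.0
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u_next = 2 * u - u_prev + dt**2 * (lap - m2 * u)
        u_next[0], u_next[-1] = u[0], 0.0   # boundaries: driven / Dirichlet
        u_prev, u = u, u_next
    return u

free = run(barrier=False)   # unobstructed strip
tunn = run(barrier=True)    # with the "narrow" (massive) section
```

After nt steps the three-point stencil can have moved the front at most nt cells, in both runs alike: the wave front travels at the speed of light, barrier or no barrier.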

What this movie definitely shows is that the front of the wave (and this is what you would use to transmit any information) travels everywhere at the same speed (that of light). All that happens is that the narrow section acts like a high-pass filter: What comes out undisturbed is in fact just the first bit of the pulse, which more or less by accident has the same shape as a scaled-down version of the original pulse. So if you compare the timing of the maxima, you are comparing different things.

Rather, the proper thing to compare would be the time when the field first rises above a certain level, one that is actually reached by the weakened pulse. Then you would find that the speed of propagation is the same, independent of whether the obstruction is there or not.
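The difference between the two comparisons can be seen in a toy model: take a Gaussian pulse and an attenuated copy whose late part is suppressed (a crude stand-in for the high-pass filtering). All numbers here are invented for illustration.

```python
import numpy as np

t = np.arange(0.0, 100.0, 0.01)
g = np.exp(-(t - 50.0) ** 2 / 100.0)       # unobstructed pulse
h = 1e-3 * g * np.exp(-t / 20.0)           # attenuated, front-weighted "tunnelled" pulse

t_peak_g = t[np.argmax(g)]                 # maximum of the full pulse
t_peak_h = t[np.argmax(h)]                 # earlier: looks "faster than light"

level = 0.5 * h.max()                      # a level the weak pulse actually reaches
t_cross_g = t[np.argmax(g > level)]        # first time each pulse exceeds it
t_cross_h = t[np.argmax(h > level)]
```

Comparing maxima, t_peak_h < t_peak_g, the spurious superluminal signal; comparing fronts at a level both pulses reach, t_cross_h comes later, not earlier, and causality is intact.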

## Friday, September 16, 2005

### Negative votes and conflicting criteria

Yesterday, Matthijs Bogaards and Dierk Schleicher ran a session on the electoral system for the upcoming general election we are going to have on Sunday in Germany. I had thought I knew how it works, but I was proven wrong. Before, I was aware that there is something like Arrow's impossibility theorem, which states that there is a certain list of criteria an electoral system is supposed to fulfill but which cannot all hold at the same time in any implementation. What typically happens are cyclic preferences (there is a majority for A over B, one for B over C, and one for C over A), but I had thought all this was mostly academic and did not apply to real elections. I was proven wrong, and there is a real chance that a paradoxical situation is coming up.

Before explaining the actual problem, I should explain some of the background. The system in Germany is quite complicated because it tries to accommodate a number of principles: First, after the war, the British made sure the system contains some component of constituency vote: Each local constituency (electoral district for you Americans) should send one candidate to parliament who is in principle directly responsible to the voters in that district, so voters have something like "their representative". Second, proportional vote: the number of seats for a party should reflect the percentage of votes for that party in the popular vote. Third, Germany is a federal republic, so the sixteen federal states should each send their own representatives. Finally, there are some practical considerations, like the number of seats in parliament should be roughly 600 and you shouldn't need a PhD in math and political science to understand your ballot.

So this is how it works. Actually, it's slightly more complicated, but that shall not bother us here. And I am not going into the problem of how to deal with rounding errors (you can of course only have an integer number of seats), which brings with it its own paradoxes. What I am going to cover is how to deal with the fact that the number of seats has to be non-negative:

The ballot has two columns: In the first, you vote for a candidate from your constituency (who is nominated by his or her party). In the second, you vote for a party in the proportional vote. Each voter makes one cross in each column, one for a candidate from the constituency and one for a party in the proportional vote. There are half as many constituencies as there are seats in parliament, and these are filled immediately according to the majority vote in the first column.

The second step is to count the votes in the second column. If a party neither gets more than five percent of those votes nor wins three or more constituencies, its votes are dropped. The rest are used to work out how many of the total of 600 seats each of the parties gets.

Now comes the federal component: Let's consider party A and assume the popular vote says they should get 100 seats. We have to determine how these 100 seats are distributed between the federal states. This is again done proportionally: Party A in federal state (i) gets the percentage of the 100 seats that reflects the share of the votes for party A from state (i) in the total votes for party A in all of Germany. Let's say this is 10. Further assume that A has won 6 constituencies in federal state (i). Then, in addition to these 6 candidates from the constituencies, the top four candidates from party A's list for state (i) are sent to Berlin.
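The proportional division (both the federal one and the per-state one) can be sketched as a largest-remainder scheme; the actual law used the Hare-Niemeyer method, which is essentially this, but take the code as an illustration rather than a legal reference.

```python
from math import floor

def largest_remainder(votes, seats):
    # Hare quota / largest-remainder apportionment: give everybody the integer
    # part of their exact share, then hand the remaining seats to the largest
    # fractional remainders.
    total = sum(votes)
    quotas = [v * seats / total for v in votes]
    alloc = [floor(q) for q in quotas]
    leftover = seats - sum(alloc)
    order = sorted(range(len(votes)),
                   key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

exact = largest_remainder([50000, 30000, 20000], 10)   # shares come out even
rounded = largest_remainder([7, 6, 5], 6)              # remainders decide a seat
```

The same function would be applied twice: once to split the 600 seats among the parties, then again to split each party's seats among the sixteen states.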

So far, everything is great: Each constituency has "their representative" and the total number of seats for each party is proportional to its share of the popular vote.

Still, there is a problem: The two votes in the two columns are independent. And as the constituencies are determined by majority vote, except in a few special cases (Berlin Kreuzberg, where I used to live before moving to Cambridge, being one with the only constituency winner from the Green party), it does not make much sense to vote for a constituency candidate who is not nominated by one of the two big parties. Any other vote would likely be irrelevant, and effectively your only choice is between the candidates of SPD and CDU.

Because of this, it can (and in fact often does for the two big parties) happen that a party wins more constituencies in a federal state than it is entitled to for that state according to the popular vote. In that case (because there is no negative number of candidates from the list to balance this), the rule is that all the constituency winners go to parliament and nobody from that party's list. The parliament is enlarged by these "excess mandates". So that party gets more seats than its proportion of the popular vote.

This obviously violates the principle of proportional elections, but it gets worse: If this happens in a federal state for party A, you can hurt the party by voting for it: Take the same numbers as above but assume A has won 11 constituencies in (i). If there are no further excess mandates, in the end A gets 101 seats in an enlarged parliament of 601 seats. Now assume A gets one additional proportional vote. It is not impossible that this does not increase A's total share of 100 seats for all of Germany but does increase the proportional share of A's candidates in federal state (i) from 10 to 11. This changes nothing for the representatives from (i): still the 11 constituency candidates go to Berlin, but there is no excess mandate anymore. Thus, overall, A sends only 100 representatives to a parliament of 600, one fewer than without the additional vote!
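The arithmetic of this example condenses to a few lines (toy numbers: two states, with the shares as above):

```python
def party_seats(state_shares, constituency_wins):
    # Each state's delegation for the party is the larger of its proportional
    # share and the constituencies already won -- the number of list
    # candidates cannot be negative, so excess wins enlarge the parliament.
    return sum(max(s, w) for s, w in zip(state_shares, constituency_wins))

# Before the extra vote: state (i) is entitled to 10 seats but won 11
# constituencies -- one excess mandate.
before = party_seats([10, 90], [11, 0])

# The extra vote shifts one proportional seat into state (i) while the
# federal total of 100 stays fixed -- the excess mandate disappears.
after = party_seats([11, 89], [11, 0])
```

The vote for A moves the party from 101 seats down to 100: a vote with negative weight.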

As a result, in that situation a vote for A has negative weight: It decreases A's share of the parliament. Usually, this is not so much of a problem, because the weight of a vote depends on what other people have voted (which you do not know when you fill out your ballot), and chances are much higher that your vote has positive weight. So it is still safe to vote for your favourite party.

However, this year, there is one constituency in Dresden in the federal state of Saxony where one of the candidates died two weeks before election day. To ensure equal chances in campaigning, the election in that constituency has been postponed by two weeks. This means voters there will know the result from the rest of the country. Now, Saxony is known to be quite conservative, so it is not unlikely that the CDU will have excess mandates there. And this might just produce the situation above: Voters in Dresden might hurt the CDU by voting for it in the popular vote, and they would know if that were the case. It would still be democratic in a sense; it's just that if voters there prefer CDU or FDP they should vote for FDP, and if they prefer SPD or the Greens they should vote for CDU. Still, it's not clear whether you can explain that to voters in less than two weeks... I find this quite scary, especially since all polls predict this election to be extremely close and two very different outcomes are within one standard deviation.

If you are interested in alternative voting systems, Wikipedia is a good starting point. There are many different ones, and because of the above-mentioned theorem they all have at least one drawback.

Yesterday, there was also a brief discussion of whether one should have a system that allows fewer or more of the small parties into parliament. There are of course the usual arguments of stability versus better representation of minorities. But there is another argument against a stable two-party system that is not mentioned often. It is due to the fact that parties can actually change their policies to please more voters. If you assume that political orientation is well represented by a one-dimensional scale (usually called left-right), then the situation of ice cream salesmen on a beach can occur: There is a beach of 4 km with two competing ice cream sellers. Where will they stand? For the customers it would be best if they each stood 1 km from the two ends of the beach, so nobody would have to walk more than 1 km to buy an ice cream and the average walking distance is half a kilometre. However, this is an unstable situation, as each salesman has an incentive to move towards the middle of the beach to increase the number of customers to whom he is closer than his competitor.

So, in the end, both will meet in the middle of the beach, and customers have to walk up to 2 km, with an average distance of 1 km. And if that happens with two parties in the political spectrum, they will end up with indistinguishable political programs, and as a voter you no longer have a real choice. You could argue that this has already taken place in the USA or Switzerland (there for other reasons), but that would be unfair to the Democrats.
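This is of course Hotelling's classic location model, and the drift to the middle can be watched in a few lines of best-response iteration (beach discretised in steps of 10 m; the starting positions are made up):

```python
def share(p, q, beach=4.0):
    # Fraction of customers (uniform on [0, beach]) closer to a seller at p
    # than to a competitor at q.
    if p == q:
        return 0.5
    mid = (p + q) / 2
    return mid / beach if p < q else (beach - mid) / beach

def best_response(q, grid):
    # The position on the grid that captures the most customers against q.
    return max(grid, key=lambda p: share(p, q))

grid = [round(i * 0.01, 2) for i in range(401)]   # the 4 km beach
a, b = 1.0, 3.0                                   # start at the social optimum
for _ in range(200):
    a = best_response(b, grid)
    b = best_response(a, grid)
```

Each seller leapfrogs just inside the other until both sit at the 2 km mark, the middle of the beach.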

I should have had many more entries here about politics and the election, like my role models on the other side of the Atlantic. I don't know why these never materialised (virtualised?). So I have to be brief: If you can vote on Sunday, think about where the different parties actually have different plans (concrete ones, rather than abstract "less unemployment" or "more sunshine") and what the current government has done and whether you would like to keep it that way (I just mention the war in Iraq and foreign policy, nuclear power, organic food as a mass market, immigration policy, tax on waste of energy, gay marriage, student fees, reform of academic jobs, renewable energy); your vote should be obvious. Mine is.

Update:
The election is over and everybody is even more confused than before. As the obvious choices for coalitions do not have a majority, one has to look at the several colourful alternatives, and the next few weeks will show us which of the several impossibilities will actually happen. What will definitely happen is that in Dresden votes for the CDU will have negative weight (linked page in German, with an Excel sheet for your own speculations). So, Dresdeners, vote for the CDU if you want to hurt them (and you cannot convince 90% of the inhabitants to vote for the SPD).

## Wednesday, September 14, 2005

### Natural scales

When I talk to non-specialists and mention that the Planck scale is where quantum gravity is likely to become relevant, sometimes people get suspicious about this type of argument. If I have time, I explain that to probe smaller length details I would need so much centre-of-mass energy that I would create a black hole and thus still could not resolve them. However, if I have less time, I just say: Look, it's relativistic, gravitational and quantum, so it's likely that c, G and h play a role. Turn those into a length scale and there is the Planck scale.

If they do not believe this gives a good estimate, I ask them to guess the size of an atom: Atoms are quantum objects, so h is likely to appear; the binding is electromagnetic, so e (in SI units in the combination e^2/4 pi epsilon_0) has to play a role; and it comes out of the dynamics of electrons, so m, the electron mass, is likely to feature. Turn this into a length and you get the Bohr radius.

Of course, like all short arguments, this one has a flaw: there is a dimensionless quantity around that could spoil dimensional arguments: alpha, the fine-structure constant. So you also need to say that the atom is non-relativistic, so c is not allowed to appear.

You could similarly ask for a scale that is independent of the electric charge, and there it is: Multiply the Bohr radius by alpha and you get the electron Compton wavelength h/mc.

You could just as well ask for a classical scale, which should be independent of h: Multiply by another power of alpha and you get the classical electron radius e^2/4 pi epsilon_0 m c^2. At the moment, however, I cannot think of a real physical problem where this is the characteristic scale. (NB: alpha is roughly 1/137, so each scale is about two orders of magnitude smaller than the previous one.)
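Numerically, with rough CODATA values (and using hbar rather than h, so "Compton wavelength" below means the reduced one, hbar/mc, which differs from h/mc by 2 pi), the ladder of scales and the factors of alpha between its rungs look like this:

```python
import math

# Approximate SI values, quoted to a few digits only
hbar = 1.0546e-34      # J s
c    = 2.9979e8        # m / s
G    = 6.674e-11       # m^3 / (kg s^2)
me   = 9.109e-31       # kg
e    = 1.6022e-19      # C
eps0 = 8.8542e-12      # F / m

alpha   = e**2 / (4 * math.pi * eps0 * hbar * c)      # fine-structure constant
planck  = math.sqrt(hbar * G / c**3)                  # Planck length, ~1.6e-35 m
bohr    = 4 * math.pi * eps0 * hbar**2 / (me * e**2)  # Bohr radius, ~5.3e-11 m
compton = hbar / (me * c)                             # reduced Compton wavelength
r_e     = e**2 / (4 * math.pi * eps0 * me * c**2)     # classical electron radius
```

Algebraically, compton/bohr and r_e/compton are both exactly alpha, which the numbers confirm.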

Update: Searching Google for "classical electron radius" points to scienceworld and wikipedia, both calling it the "Compton radius". Still, there is a difference of a factor of alpha between the Compton wavelength and the Compton radius.

## Thursday, September 08, 2005

### hep-th/9203227

Reading through the arXiv's old news items, I became aware of hep-th/9203227, whose abstract reads

\Paper: 9203227
From: harvey@witten.uchicago.edu (J. B. Harvey)
Date: Wed 1 Apr 1992 00:25 CST 1992

A solvable string theory in four dimensions,
by J. Harvey, G. Moore, N. Seiberg, and A. Strominger, 30 pp
\We construct a new class of exactly solvable string theories by generalizing
the heterotic construction to connect a left-moving non-compact Lorentzian
coset algebra with a right-moving supersymmetric Euclidean coset algebra. These
theories have no spacetime supersymmetry, and a generalized set of anomaly
constraints allows only a model with four spacetime dimensions, low energy
gauge groups SU(3) and spontaneously broken SU(2)xU(1), and three families
of quarks and leptons. The model has a complex dilaton whose radial mode
is automatically eaten in a Higgs-like solution to the cosmological
constant problem, while its angular mode survives to solve the strong CP
problem at low energy. By adroit use of the theory of parabolic cylinder
functions, we calculate the mass spectrum of this model to all orders in
the string loop expansion. The results are within 5% of measured values,
with the discrepancy attributable to experimental error. We predict a top
quark mass of $176 \pm 5$ GeV, and no physical Higgs particle in the spectrum.
\

## Tuesday, September 06, 2005

### Local pancake and axis of evil

I do not read the astro-ph archive on a daily basis (nor any astro-* or *-ph archive), but I use Liferea to stay up to date with a number of blogs. This news aggregator shows a small window with the headline when a new entry appears in the blogs I have told it to monitor. This morning, it showed an entry from Physics Comments with the title "Local pancake defeats axis of evil". My first reaction was that this must be a hoax; there could not be a paper with that title.

But the paper is genuine. I read the four pages over lunch, and it looks quite interesting: When you look at the WMAP power spectrum (or COBE's for that matter), you realise that at very low l there is much less power than expected from the popular models. Actually, the plot starts at l=2, because l=0 is the 2.73 K uniform background and l=1 is the dipole, the vector attributed to the Doppler shift due to the motion of us (the sun) relative to the cosmic rest frame.

What I did not know is that the l=2 and l=3 modes have a preferred direction, and these actually agree (although not with the dipole direction; they are perpendicular to it). This fact was realised by Copi, Huterer, Starkman, and Schwarz (as I am reminded). I am not entirely sure what this means on a technical level, but it could be something like "when this direction is chosen as the z-direction, most power is concentrated in the m=0 component". This could be either a systematic error or a statistical coincidence, but Vale cites that the latter is unlikely at 99.9% confidence.

This axis encoded in the l=2 and l=3 modes has been termed the "axis of evil" by Land and "varying speed of light and I write a book and offend everybody at my old university" Magueijo. In the new paper, Vale offers an explanation for this preferred direction:

His idea is that gravitational lensing can mix the modes, and what appears to be l=2 and l=3 is actually the dipole, mixed into these higher l by the lensing. To first order, this effect is given by
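(The formula here is my reconstruction, not necessarily Vale's exact notation: presumably the standard first-order expansion of the lensed temperature field,)

```latex
A(\hat n) \;=\; T\bigl(\hat n + \nabla\Psi(\hat n)\bigr)
          \;\approx\; T(\hat n) + \nabla\Psi(\hat n)\cdot\nabla T(\hat n)
```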

(Jacques is right, I need TeX formulas here!) Here T is the true temperature field, A is the apparent temperature field, and Psi is a potential that summarises the lensing. All these fields are functions of theta and phi, the coordinates on the celestial sphere.

He then goes on and uses a spherical mass distribution of twice the mass of the Great Attractor, 30 Mpc away from us, to work out Psi and eventually A. The point is that the l=1 mode of A is two orders of magnitude stronger than l=2 and l=3, so a small mixing could be sufficient.

What I would like to add here is how to obtain some analytical expressions: As always, we expand everything in spherical harmonics. Then