Where should SETI look for little green men?

Scientists at SETI are always refining the way they go about attempting to communicate with aliens. One problem is that there are just too many planets in our galaxy: 100 billion+ by one reasonable estimate. They need to find ways to whittle down the possibilities in order to target the planets where their efforts are likely to be most fruitful.

[Image: the "little green man" alien trope.]

The whole business of finding little green men is complicated by something called the SETI Paradox. Broadly, this points out that an alien race has to be transmitting a signal in order for us to be able to receive it. That in itself is obvious, but it made scientists wonder which sorts of alien technologies should be transmitting.

An astronomer at the University of Manchester (UK) called Eamonn Kerins has come up with a bright idea about where we should be looking, and about which of any two given civilisations has the onus to transmit a signal.

Hmm, we’re anthropomorphising when we talk about an alien civilisation having an ‘onus’ to do something. We have no idea whether an alien species is driven by motivations similar to ours. That’s obvious even on this planet: humans have different motivations from pygmy marmosets, for example. One could argue that motivations are a consequence of intelligence — and thus an alien species of sufficient intelligence might have motivations similar to ours — but I think that’s still an anthropomorphic leap.

Nevertheless, they have to start somewhere and, as we’re the only technologically advanced species we know about, they may as well use us as a starting point.

Kerins suggests (link to his paper) we use something called Common Denominator Information (CDI) to filter our targets. This amounts to things that both civilisations would recognise regardless of their technological advancement. To qualify that, it assumes the civilisations in question are advanced enough to send and receive signals and are willing to communicate. This is a reasonable assumption to make even if one has doubts about its accuracy; if we don’t assume it, we might as well not bother with SETI. That’s a counterpoint to the reservations about anthropomorphism I mentioned above.

The particular example Kerins offers is the (tiny) amount of starlight blocked when a planet transits across its star. We can determine that for the Earth and the Sun, and aliens could determine it for their planet and star but, crucially, we could also determine it for their system and vice versa. This makes the information mutually accessible and it can therefore act as a common denominator.
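To get a feel for how tiny that blocked-starlight signal is, here’s a rough back-of-envelope sketch. The usual approximation (not taken from Kerins’s paper; it ignores refinements like limb darkening) is that the transit depth is the ratio of the planet’s disc area to the star’s disc area:

```python
# Back-of-envelope transit depth: fraction of starlight blocked when a
# planet passes in front of its star, approximated as the ratio of the
# two disc areas, i.e. (R_planet / R_star)^2.

R_SUN = 6.957e8    # metres (nominal solar radius)
R_EARTH = 6.371e6  # metres (mean Earth radius)

def transit_depth(r_planet: float, r_star: float) -> float:
    """Approximate fraction of the star's light blocked in transit."""
    return (r_planet / r_star) ** 2

depth = transit_depth(R_EARTH, R_SUN)
print(f"Earth blocks roughly {depth:.1e} of the Sun's light "
      f"({depth * 100:.4f}%)")
```

That works out at roughly 0.008% of the Sun’s light — which gives some idea of why transit photometry needs such precise instruments, and why the same figure is measurable, in principle, from either end of the exchange.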

The light blocked by a transiting planet depends on the planet’s size relative to its star and on the duration of the transit, so one civilisation will gain more information than the other. The civilisation gaining more information is classed as the superior civilisation.

The idea, then, is to point our telescopes at parts of the sky where planets could share this mutual information with us (they could determine our transit and we theirs). As we would both know one another’s transit information, and hence which of us has more of it, we could both determine which of us is superior, and must therefore transmit, and which is inferior, and must therefore receive.
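The decision rule itself is simple enough to sketch. This toy version collapses each side’s common-denominator information into a single number, which is a big simplification of Kerins’s actual framework (the function name and the tie-handling are my own inventions, not from the paper):

```python
def who_transmits(our_info: float, their_info: float) -> str:
    """Toy version of the mutual-detectability rule: whichever side
    holds more common-denominator information is 'superior' and takes
    on the onus to transmit; the other side listens.

    Tie-handling is an arbitrary choice for this sketch, not something
    specified in the paper.
    """
    if our_info > their_info:
        return "we transmit, they listen"
    if their_info > our_info:
        return "they transmit, we listen"
    return "tie: no clear onus"

# Example: if they can learn more about us than we can about them,
# the onus to transmit falls on them.
print(who_transmits(our_info=1.0, their_info=2.5))
```

The point of the game-theoretic framing is that both sides can run this same calculation independently and arrive at the same answer, so neither ends up waiting for a signal the other was never obliged to send.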

The whole idea is based on game theory and Kerins proves his point with various mathematical squiggles. What we will have done is to filter down our choices of where to look for intelligent life, and the remaining candidates are effectively cut in half, because we only need to listen to planets that are superior to us. Presumably we should be sending signals to those that are inferior.

It amazes me that astronomers might listen for signals from a tiny dot trillions of miles away, particularly since O2 can’t get a reliable 3G signal a few miles down the road to me.

I always worry about what aliens might make of us after observing us for a while: