In 1966 military sound engineer Frank Watlington heard something weird while recording underwater explosions.
Whale song.
Frank passed the recordings off to biologist Roger Payne. After repeated listening, Payne realized these weren’t random sounds but complex vocalizations by creatures possibly as smart as humans. Recordings weren’t the only data Payne shared with the world: he also printed sonograms of whale song, illustrating its structure of units, phrases, and themes.
Ever since Payne’s discovery of whale song’s properties, humanity’s fascination with whales has flourished. If not for his discovery, these great beasts could’ve become a fond memory, hunted to extinction. Fortunately, whales still swim among us, singing to each other and tantalizing us with the prospect of interspecies communication.
Eerily, the sonograms resembled the sheet music of Gregorian chants, written in neumes, the notation that evolved into today’s musical notes. Now David Rothenberg and Mike Deal have standardized whale song notation for human consumption.
The top row contains individual examples of each unit. The colored glyphs below were created by tracing the “averaged” shapes that resulted from overlaying the many occurrences of the same unit across Knapp’s recording.
Because standard musical notation is, in essence, a timeline of note symbols plotted against a vertical axis of pitch, we can match the whale sounds to their corresponding frequencies on the musical staves. Hopefully this gives the whale sound shapes a more familiar context.
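To make that staff mapping concrete, here is a minimal sketch of how one might snap a measured frequency to the nearest note, assuming 12-tone equal temperament with A4 = 440 Hz. The function name and example frequencies are illustrative, not taken from Rothenberg and Deal’s notation:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq_hz):
    """Snap a frequency to the nearest equal-temperament note (A4 = 440 Hz)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))  # 69 is MIDI A4
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1  # MIDI 60 -> C4, "middle C"
    return f"{name}{octave}"

print(freq_to_note(440.0))  # A4
print(freq_to_note(100.0))  # a low humpback moan lands around G2
```

The same lookup works for any pitch a sonogram reports, which is all that’s needed to place a whale unit on a staff.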
This isn’t humanity’s first attempt to put whale song into an anthropomorphic context. Marc Fischer uses wavelets, a mathematical function used in signal processing, to visualize sound. Over at Aguasonic Acoustics, he’s imaged whale and dolphin song into gorgeous mandalas like the one below. The best thing about them is that they still show the “rhymes” and units of speech that excited Payne.
Going a step further into the fringe, look at the soundwaves in the blue whale song video. If you squint, you can make out a face in parts of the sonogram. This might be a clue to how whales communicate. Whales use sound not only to communicate, but also to hunt and navigate with active sonar: emitting sound waves and listening for the echoes to “see” the world. Sonar is sensitive enough that dolphins can distinguish fish with their clicks and whistles.
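The “seeing” part of sonar reduces to simple arithmetic: sound travels through seawater at roughly 1,500 m/s, so the distance to a target is half the round-trip echo time multiplied by that speed. A back-of-the-envelope sketch, with an illustrative helper name:

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, a rough average; varies with depth and temperature

def echo_distance(round_trip_s):
    """Estimate range to a target from the round-trip echo time."""
    # The pulse travels out and back, so halve the round trip.
    return SPEED_OF_SOUND_SEAWATER * round_trip_s / 2

print(echo_distance(0.2))  # a 0.2 s echo puts the target about 150 m away
```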
But what if these vocalizations aren’t language as we know it, but images or sonic holograms?
Each moan, groan, click, and whistle, adjusted for pitch, rhythm, and tempo, could generate an image or animation. Instead of saying “A pod of orca killed ol’ Humphrey,” the witnesses would recreate the scene in a song. As the song propagates through pods, variation does occur. This might be evidence of whales collaborating, embellishing, or of entropy akin to a game of Telephone. That’s a huge leap of logic, but how could one test the hypothesis of whale song as an image?
Putting whale song back into a human context, consider each unit of whale song as a pixel. With enough pixels, an image will form, but only if one knows the correct dimensions for the raster. Take the Arecibo message as an example: it’s 73 rows by 23 columns, making up 1,679 pixels. If earthlings didn’t give those dimensions to aliens, the recipients might screw up the image, as below.
In this case, the correct dimensions are simply transposed, rendering the message as gibberish. If audio engineers play with the whale song, tuning it to whale-specific frequencies, an image might emerge. In short, humans need to think like a whale rather than like a human brain in a whale’s body. If we can communicate with cetaceans, that would be a huge step for SETI, should we ever intercept extraterrestrial communications.
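The raster problem is easy to demonstrate in miniature. The sketch below, with hypothetical helper names and a toy 15-bit “message,” slices the same bitstream two ways; only the intended dimensions yield a readable picture, just as 1,679 = 73 × 23 constrains the Arecibo message to two rectangular readings:

```python
def raster(bits, rows, cols):
    """Slice a flat bitstream into rows of a given width."""
    assert len(bits) == rows * cols
    return [bits[r * cols:(r + 1) * cols] for r in range(rows)]

def show(grid):
    """Render rows of '1'/'0' characters as '#'/'.' pixels."""
    return "\n".join("".join("#" if b == "1" else "." for b in row) for row in grid)

# A 15-bit stream drawing the letter "T" on a 5-row-by-3-column grid.
# 15 = 5 * 3, so (like 1679 = 73 * 23) only two rectangular readings exist.
bits = "111010010010010"

print(show(raster(bits, 5, 3)))  # correct raster: a readable "T"
print()
print(show(raster(bits, 3, 5)))  # same bits, transposed raster: gibberish
```

Running the analogous experiment on whale song would mean guessing the “width” of a song, the number of units per row, and looking for a reading where structure pops out.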