There is a moment in Finding Nemo in which Dory puts on a deep voice and announces that she can speak whale. In the real world, the chances that a Paracanthurus hepatus would ask a whale for help finding a clownfish are remote, but thanks to generative artificial intelligence, we are not that far from a fish speaking to us with the voice of Anabel Alonso.
Until now, everything we knew about animal communication we had learned with human eyes, human ears and a human brain. We have even immersed ourselves, along with our gadgets, in the ocean and in the jungle for weeks and years, approaching the impossible task of perceiving the world as a gorilla or a sperm whale does. "If a lion could speak," said the philosopher Ludwig Wittgenstein, "we could not understand him," because of the astronomical distance between our human mind and our human concepts and the sensory and conceptual landscape of creatures like an octopus or a bat.
Artificial intelligence aspires to become the missing link: the tool capable of finding points of convergence between our language and any other, in the same way we would try to do if an alien ship arrived on Earth. If it is already helping us translate Babylonian texts, why shouldn't it serve to convert a language and culture 34 million years old, like that of the sperm whale, into the phonemes of a species barely 2.5 million years old?
Dozens of projects are currently investigating the potential of artificial intelligence to build us an animal Google Translate, collecting acoustic, chemical, electrical and chromatic signals, vibrations, group dances, all at once, and putting cryptographers, linguists, marine biologists and robotics specialists to work together. And all while knowing that we will run into gaps that seem insurmountable, such as the ultraviolet range of the visual spectrum of some bees and birds, or the ultrasonic range of bats, dolphins and dogs.
The Earth Species Project (ESP) is cataloging, among other things, the calls of Hawaiian crows and the sounds of beluga whales in the St. Lawrence River; Project CETI (Cetacean Translation Initiative) aims to decipher the language of sperm whales; DeepSqueak is software that uses machine-learning algorithms to identify, process and categorize the ultrasonic sounds of rodents; and Communication and Coordination Across Scales (CCAS) has focused on the information flows of populations of meerkats, coatis and hyenas, which have been fitted with bio-logging collars. There are also the Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR) project and the Interspecies Internet. At a workshop of the latter in 2019, Roger Payne, one of the discoverers of humpback whale song, claimed that soon we could ask a dolphin whether it is afraid of boats or sharks, whether its mother is too, or which shark scares it the most. We could even, he explained, "find out if dolphins lie regularly like humans… I'd be surprised if they don't."