Welp, they are already beginning to decode what birds' brainwaves mean, and can use AI algorithms to reconstruct a bird's song from its brainwaves. How long until they can do this to humans? Humans' vocal muscles probably move in sync with our thoughts, much as the birds' do.
“We are moving away from focusing completely on the brain and neurons, and also paying attention to biomechanics, where the information of the nervous system is processed,” says Doppler. “This is a powerful idea. In some cases, looking at the biomechanics can give you insights that are not so clear in the nervous system.”
[...]
In the second study, a team led by Alan Bush, also a physicist in Mindlin’s group, found that they could manipulate the birds into flexing their vocal muscles by playing them versions of their own songs while they slept—a form of harmonic hypnosis. Bush was eager to study the patterns of how muscles actually fired in slumber. To him, the vocal organ is not just a puppet carrying out the brain’s set of master instructions—rather, it’s a creative liaison between brain and behavior that may add its own bells and whistles to the final product. “A lot of the complexity of the system is actually coming from the periphery, where the muscles are,” he explains.
Bush and his colleagues discovered that when the muscles are coaxed into activity, they behave in an all-or-nothing fashion. When played snippets of themselves trilling out tunes, the birds’ muscles would reliably twitch. Even synthetic versions of these songs, remixed in the lab, could sometimes elicit vocal organ responses. Often, the muscles were still—but when they were goaded into flexing, they would carry out the complete firing sequence of a vocalization.
https://www.smithsonianmag.com/science-nature/zebra-finches-dream-little-dream-melody-180969925/

This study describes the AI algorithm they used to 'decode' a bird's brainwaves into a song. (At the end of the paper there are sound files demonstrating the algorithm.)
Songbirds are an attractive model for learned complex vocal behavior. Birdsong shares a number of unique similarities with human speech and its study has yielded general insight into multiple mechanisms and circuits behind learning, execution, and maintenance of vocal motor skill. In addition, the biomechanics of song production bear similarity to those of humans and some nonhuman primates. Here, we demonstrate a vocal synthesizer for birdsong, realized by mapping neural population activity recorded from electrode arrays implanted in the premotor nucleus HVC onto low-dimensional compressed representations of song, using simple computational methods that are implementable in real time. Using a generative biomechanical model of the vocal organ (syrinx) as the low-dimensional target for these mappings allows for the synthesis of vocalizations that match the bird’s own song. These results provide proof of concept that high-dimensional, complex natural behaviors can be directly synthesized from ongoing neural activity.
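The pipeline the abstract describes — map recorded neural population activity onto a low-dimensional set of vocal-organ control parameters, then drive a synthesizer with them — can be illustrated with a toy sketch. Everything below is a hedged stand-in, not the paper's actual method: the "recordings" are synthetic, the decoder is plain ridge regression, and the "syrinx" is just a sine oscillator whose amplitude and frequency track two hypothetical control parameters (air-sac pressure and labial tension).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy data: synthetic stand-ins, not real HVC recordings. ---
T = 500                       # time bins
n_units = 32                  # simulated premotor units
t = np.linspace(0, 2 * np.pi, T)
# Two hypothetical low-dimensional control parameters for the vocal organ.
params_true = np.stack([0.5 + 0.4 * np.sin(3 * t),      # "air-sac pressure"
                        0.5 + 0.4 * np.cos(5 * t)], 1)  # "labial tension"

# Simulated population activity: a random linear encoding of the parameters plus noise.
W_enc = rng.normal(size=(2, n_units))
rates = params_true @ W_enc + 0.1 * rng.normal(size=(T, n_units))

# --- Decoder: ridge regression from population activity to control parameters. ---
# A linear map like this is "implementable in real time": one matrix multiply per bin.
lam = 1e-2
X = np.hstack([rates, np.ones((T, 1))])  # add a bias column
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ params_true)
params_hat = X @ W

# --- Toy "synthesizer": an oscillator driven by the decoded parameters. ---
fs = 8000.0
phase = np.cumsum(2 * np.pi * (400.0 + 800.0 * params_hat[:, 1]) / fs)
waveform = params_hat[:, 0] * np.sin(phase)

err = float(np.mean((params_hat - params_true) ** 2))
print(f"decoder MSE: {err:.4f}")
```

The real study replaces the sine oscillator with a generative biomechanical model of the syrinx, so that the decoded parameters are physically meaningful and the synthesized output can match the bird's own song.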
https://www.cell.com/current-biology/fulltext/S0960-9822(21)00733-8

----
Of course, who needs to read thoughts when they can just make everything up to frame or discredit an individual? First, Western Civilization gave us machine algorithms to make "deepfakes"; then it made machine algorithms to detect "deepfakes"; and next, someone will study the detector's algorithm to make new fakes it can't detect. Or just bribe whichever companies own the detector algorithms to 'certify' that a fake thing is real or a real thing is fake.
https://www.npr.org/2021/06/17/1007472092/facebook-researchers-say-they-can-detect-deepfakes-and-where-they-came-from