Intel is best known for inventing the microprocessor and for developing advanced technology, but the company's recent effort to give a voice to the world's first self-described cyborg was a game changer. In November 2016, Dr Peter Scott-Morgan was diagnosed with motor neurone disease (MND), the same incurable condition that took the life of Stephen Hawking.
The disease destroys the nerve cells that enable us to move, physically paralysing the person while the brain remains fully active. The individual is left trapped in a body that no longer responds to their intentions. Peter was given two years to live, but his yearning for life drove him to defy his prognosis. Peter had trained in robotics, which he turned into an academic career before his diagnosis. He earned the first PhD granted by a robotics faculty in the UK and published a book titled 'The Robotics Revolution'.
The idea for Peter 2.0
Drawing on that experience, he envisioned himself surviving, and flourishing, as a cyborg he called Peter 2.0. His design pushed the technology to a new level: the patient would avoid starvation by having nutrients piped directly into the stomach, and avoid suffocation by breathing oxygen through a tube. AI would let the patient's paralysed face be replaced by an avatar of their choosing, the body would be supported by an exoskeleton, and even the avatar's voice could be tailored to the user's preferences.
In 2019, when Peter gave a speech at a conference, one of the listeners was Lama Nachman, head of Intel's Anticipatory Computing Lab. Lama had her own experience with MND patients from her journey at Intel: her team was the one that had powered Stephen Hawking's iconic computerized voice.
How Intel helped Peter 2.0 get a voice
Intel attached an infra-red sensor to Hawking’s glasses that detected movements from his cheek, which he used to select characters on a computer. Over time, the system learned from Hawking’s diction to predict the next words he would want to use in a sentence. As a result, Hawking only had to type under 20% of all the characters he needed to talk. This helped him double his speech rate and dramatically improve his ability to perform everyday tasks, such as browsing the web or opening documents.
Intel named the software the Assistive Context-Aware Toolkit (ACAT). The company later released it to the public as open-source code so that developers could add new features to the system. Hawking had famously chosen to keep his synthetic voice. "I keep it because I have not heard a voice I like better and because I have identified with it," he said in 2006.
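The core idea behind ACAT's keystroke savings is next-word prediction: by learning which words tend to follow which, the system can offer likely completions so the user selects a whole word instead of typing every character. The sketch below is a minimal, illustrative bigram predictor, not Intel's actual code; the class and corpus are invented for the example.

```python
from collections import Counter, defaultdict


class WordPredictor:
    """Tiny bigram next-word predictor, illustrating the idea behind
    ACAT-style prediction (a hypothetical sketch, not Intel's ACAT code)."""

    def __init__(self, corpus: str):
        # Count how often each word follows each other word.
        self.bigrams = defaultdict(Counter)
        words = corpus.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, prev_word: str, k: int = 3):
        """Return up to k most likely next words after prev_word."""
        return [w for w, _ in self.bigrams[prev_word.lower()].most_common(k)]


# Toy training text standing in for a user's writing history.
corpus = "the universe is big the universe is expanding the universe began"
p = WordPredictor(corpus)
print(p.suggest("universe"))  # → ['is', 'began']
```

Selecting a predicted word costs one action rather than one keystroke per character, which is how a system like this can cut the characters a user must type to a fraction of the full text.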
Finding a voice
But Peter wanted to replicate the sound of his biological speech. Dr Matthew Aylett recorded Peter saying thousands of words, which he would use to create a replica voice. Peter would then use his eye movements to control an avatar that would speak in his own voice. Aylett had limited time to work: Peter would soon need surgery to let him breathe through a tube emerging from above his chest, and the operation meant that he could never speak again.
Three months before Peter was due to have surgery, the clone was ready, and Aylett gave Peter a demo of it singing a song: "Pure Imagination" from the 1971 film Willy Wonka & the Chocolate Factory. The operation was a success, but Peter would remain mute until his communication system was ready. The system soon arrived. It came with a keyboard he'd control by looking at an interface, and an avatar synchronized with his speech. Peter 2.0 was ready.
Upgrading the cyborg
There was another big difference between Peter's and Hawking's visions for their systems. While Hawking wanted to retain control over the AI, Peter was more concerned about the speed of communication. Ideally, Peter would choose exactly what the system said, but the more control the AI is given, the more it can help. Ceding that control, however, could come at a real human cost if it sacrifices a degree of Peter's agency.
Over time, the system starts to drift in a certain direction, because you're reinforcing the same behavior over and over again. One solution is training the AI to understand what Peter desires at any given moment. Ultimately, it could take temporary control when Peter wants to speed up a conversation, without making a permanent change to how it operates. Lama aims to strike that delicate balance in the next addition to Peter's transformation: an AI that analyzes his conversations and suggests responses based on his personality. The system could make Peter even more of a cyborg, which is exactly what he wants.
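The "temporary control without permanent change" idea above can be sketched as a simple mode toggle: the AI may draft replies, but it only sends one autonomously while the user has explicitly enabled a fast mode, and that mode is cleared at the end of the conversation. This is a hypothetical design for illustration, not Intel's implementation; the class, method names, and placeholder reply generator are all invented.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AssistiveDialog:
    """Hypothetical sketch of the control trade-off described above:
    the AI helps with speed, but control is ceded only temporarily."""

    fast_mode: bool = False                    # per-conversation flag, never persisted
    log: List[str] = field(default_factory=list)

    def draft_reply(self, incoming: str) -> str:
        # Placeholder for a personality-conditioned response generator.
        return f"Thanks, I heard: {incoming}"

    def respond(self, incoming: str, user_edit: Optional[str] = None) -> str:
        draft = self.draft_reply(incoming)
        if user_edit is not None:
            reply = user_edit                  # user keeps full control
        elif self.fast_mode:
            reply = draft                      # temporary ceding of control
        else:
            raise ValueError("manual confirmation required outside fast mode")
        self.log.append(reply)
        return reply

    def end_conversation(self) -> None:
        # Temporary control never becomes a permanent change in behavior.
        self.fast_mode = False
```

The key design choice is that autonomy is opt-in and scoped to a single conversation, so speeding up a chat never silently rewires how the system behaves afterwards.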