Controlling a computer, or any other device, with thought alone is now possible, as Tan Le, co-founder and CEO of Emotiv, explains. In our digital environment, converting thoughts into commands means being able to operate almost any object that has the appropriate technology and connectivity: just as machines communicate with each other in the Internet of Things (IoT), the human mind can connect to them without any physical contact.
For years, work has been done on semantic control technology so that electronic devices obey written or spoken commands. The future, however, seems to lie much closer to brain interfaces, which, among other things, directly overcome language barriers or, far more importantly, a user's reduced mobility.
The solution created by Emotiv is a five-channel electroencephalogram (EEG) headband that picks up signals from the frontal and temporal lobes. The frontal lobes are responsible for executive decision-making, attention and planning, while the temporal lobes mainly process auditory information, but also handle memory and language. Sensors at the back of the head detect signals from the parietal and occipital lobes, which deal with sensory processing and vision. Together they cover the key brain areas and record their natural electrical signals.
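To make this concrete, here is a minimal sketch, assuming a five-channel montage and a 128 Hz sampling rate (both illustrative, not Emotiv's published specification), of how raw readings from such a headband could be reduced to per-channel band-power features:

```python
import numpy as np

# Hypothetical 5-channel layout; the montage described above pairs frontal
# sensors (decision-making, attention) with temporal and rear ones.
# Channel names and sampling rate here are illustrative assumptions.
CHANNELS = ["AF3", "AF4", "T7", "T8", "Pz"]
SAMPLE_RATE_HZ = 128

def band_power(window: np.ndarray, low_hz: float, high_hz: float) -> np.ndarray:
    """Mean spectral power per channel in a frequency band.

    `window` has shape (n_samples, n_channels): raw microvolt readings.
    """
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].mean(axis=0)

# One second of simulated signal: in a real headset this buffer would be
# filled by the device driver's streaming API.
window = np.random.randn(SAMPLE_RATE_HZ, len(CHANNELS))
alpha = band_power(window, 8.0, 13.0)   # alpha band, linked to relaxation
beta = band_power(window, 13.0, 30.0)   # beta band, linked to active focus
print(dict(zip(CHANNELS, alpha.round(2))))
```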
Through algorithms, the machine learns to distinguish the different signals and to associate their patterns with particular thoughts or ideas. When we make a certain movement, our brain sends those orders to our body; once the device has detected them, the algorithm learns our signals and, the next time, can convert them directly into commands for a given mechanism without us having to make the movement at all.
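As an illustration of that learning step, the following sketch, not Emotiv's actual pipeline, trains an off-the-shelf classifier (scikit-learn's RandomForestClassifier) to map feature windows recorded during mental rehearsal to named commands; every label and dimension here is a placeholder:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative training data: each row is a feature vector (e.g. the band
# powers computed above) recorded while the user rehearses a mental command.
# The command names are placeholder assumptions, not Emotiv's vocabulary.
X_train = np.random.randn(200, 10)                             # 200 rehearsal windows
y_train = np.random.choice(["push", "lift", "neutral"], 200)   # rehearsed intents

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def to_command(features: np.ndarray) -> str:
    """Translate one new feature window into a device order."""
    return clf.predict(features.reshape(1, -1))[0]

# After training, the signal pattern of the thought alone triggers the order,
# with no physical movement required:
print(to_command(np.random.randn(10)))  # e.g. "push"
```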
Beyond decisions, feelings.
The ability to measure brain activity during our behavior also makes it possible to recognize emotions. This is usually associated with neuromarketing techniques, but it has a much friendlier side when applied to video games and virtual reality. Knowing how you feel in certain situations or in response to certain stimuli, the entertainment can interact according to your tastes and sensations, changing the content on a personalized basis.
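One way such a loop could look in practice, purely as a sketch with made-up metric names and thresholds, is a game that adjusts its difficulty from per-frame emotion readings:

```python
from dataclasses import dataclass

@dataclass
class EmotionReading:
    """Hypothetical per-frame emotion metrics, each scaled 0..1."""
    excitement: float
    stress: float
    boredom: float

def adapt_difficulty(current: float, mood: EmotionReading) -> float:
    """Nudge game difficulty toward the player's emotional sweet spot.

    Illustrative rule of thumb: ease off when stress runs high, ramp up
    when boredom creeps in. A real title would tune these thresholds.
    """
    if mood.stress > 0.7:
        return max(0.0, current - 0.1)
    if mood.boredom > 0.6 and mood.excitement < 0.4:
        return min(1.0, current + 0.1)
    return current

difficulty = 0.5
difficulty = adapt_difficulty(difficulty, EmotionReading(0.3, 0.2, 0.8))
print(f"new difficulty: {difficulty:.1f}")  # boredom detected, so harder level
```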
Medical applications also stand to benefit from this technology, which can help to treat sleep problems, monitor comas and prevent various pathologies, with the advantage of an affordable price far below that of conventional clinical and hospital equipment. Complemented by M2M systems that connect directly to remote computers, the only limit to this advance seems to be our imagination.
With just one look or less.
Until now, the most widespread interface allowing communication with machines for people with little or no mobility was based on visual monitoring systems widely used by patients with paralysis, such as the one used by the famous physicist Stephen Hawking: an infrared sensor mounted on his glasses detects the movements of his cheek, allowing him to select characters on a screen. Predictive software similar to that of mobile phones completes his sentences by learning the way he expresses himself, and a speech synthesizer converts the text into sound. Curiously, the physicist has expressly refused to change its characteristic metallic tone, reminiscent of early science fiction, which he considers his hallmark. He recently announced that, thanks to a collaboration with Intel, an improved version had been achieved that would be offered as open source for free distribution.
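The word-prediction part of such a system can be sketched in a few lines. The following toy bigram model is an assumption about the general approach, not a description of Hawking's actual software; it simply learns which word a user most often types after another:

```python
from collections import Counter, defaultdict

class Predictor:
    """Toy next-word predictor in the spirit of phone keyboards.

    It counts which word most often follows each word, so suggestions
    gradually personalize to the user's way of expressing themselves.
    """
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def learn(self, sentence: str) -> None:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, prev: str, k: int = 3) -> list[str]:
        return [w for w, _ in self.bigrams[prev.lower()].most_common(k)]

p = Predictor()
p.learn("the universe is expanding")
p.learn("the universe is not only stranger than we imagine")
print(p.suggest("is"))  # ['expanding', 'not'] after just two sentences
```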
The software used until now, called EZ Keys, also allowed Hawking to control the Windows mouse pointer and manage other applications on his computer.
Stephen Hawking in Cambridge by Doug Wheller (CC)
The possibility of converting our brain waves into complex orders that machines can execute has virtually limitless applications. Eduardo Miranda, a composer shocked by meeting a musician who had lost his mobility, surprised the world with a device capable of making music through a sensor-equipped helmet connected to a computer.
What differentiates his project from others, though, is the human factor. While he was testing one of the prototypes, the medical staff explained that what patients really wanted was to interact with other people, not with machines. Thanks to this, he evolved toward a system in which, while one musician generates a composition, another can interpret it, as he himself has put into practice: "My latest composition is for a string quartet in which eight people interact. Four of them generate the music and the others interpret it as it is generated, following the score on a monitor," he told CNN.
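A hedged sketch of the general brain-to-music idea (not Miranda's actual system) might map the relative strength of EEG frequency bands onto a musical scale that a second performer could then read from a score and play:

```python
# Map whichever EEG frequency band currently dominates onto a note of a
# pentatonic scale. The scale choice and the band-to-note rule are both
# illustrative assumptions.
PENTATONIC = [60, 62, 64, 67, 69]  # MIDI note numbers, C major pentatonic

def bands_to_note(band_powers: list[float]) -> int:
    """Pick the scale degree of the dominant band for one time step."""
    dominant = band_powers.index(max(band_powers))
    return PENTATONIC[dominant % len(PENTATONIC)]

# Five made-up band-power readings (delta through gamma) for one time step:
print(bands_to_note([0.2, 0.1, 0.9, 0.3, 0.4]))  # third band dominates, MIDI 64
```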
The drawback of these solutions was that even operating such interfaces requires a minimum of mobility and, above all, vision. Thanks to the technology led by Tan Le, one more obstacle is removed.
You can find out more about these interfaces in the interview conducted for Vodafone One and published in El País.
[youtube]https://youtu.be/O9nxjS2ypcE[/youtube]