
How Chatbots and AI Could Play Out in the Smart Future


In the week that Elon Musk talked up progress at his Neuralink brain-machine-interface business, debate about AI and where it will take us is raging. Will robots rely on human smarts? Will people tolerate robots in their homes? And how will chatbots evolve beyond text on a screen?

Robots have been big news in the media over the last few years, with Humans, the AMC and Channel 4 hit drama, depicting how they might fit into our lives, be treated (and mistreated) by people, and perhaps develop wants and needs of their own.

In the PlayStation 4 game Detroit: Become Human, players go on a similar interactive adventure with a cast of androids trying to find meaning and freedom in a world where humans, angry at the loss of their jobs, are pushing back against the rise of AI and technology.

All of which would be even more interesting, and create further dilemmas, if the machines in these tales were powered in some way by access to human brain-power and circuitry. That’s what Elon Musk is attempting with Neuralink, when he’s not busy launching rockets to Mars, filling the roads with self-driving electric cars, or trying to tunnel his way to Hyperloop public-transport success.

Check out the livestream replay of Musk’s quirky Neuralink event here, which explains the basics of the technology to us mortals. Is Musk really a modern-day Howard Hughes? That question will be left to history to decide. But, for now, he is diving into a future where the booming world of artificial intelligence, led by chatbots, self-driving cars and smarter digital services, could take its next step.

How will the arrival of human-brain-machine interfaces shake up this landscape?

Computers Talking With Brains

Musk’s experiments have already succeeded with rats and monkeys, and there have been plenty of earlier brain-computer interface experiments, with people controlling drones, artificial limbs and other objects using their minds.

Currently, the system uses threads, like nanowires, to interface with the brain via a simple procedure. Early use cases include helping people with brain damage, neurological conditions and similar problems, where AI could help “calm” the brain down or work around damaged areas to help patients suffering from epilepsy or Parkinson’s disease. Future plans also include preserving and improving the human brain, creating a new definition of lifespan.

Once those medical “wow” moments have taken place, and assuming the technology works and Neuralink and the other companies in this field don’t go bust, we will have access to a system that can link brains to machines, whether for information sharing, analysis, collaboration or something else.

There’s a long way to go with the big idea: making it work, proving it is legal and ethical, and then developing viable use cases may take decades, and potential roadblocks include how to record and store neural data and the long-term risk of any implant. But step forward a decade and chatbots could link directly to a human when needed, locating information or asking a question with no need for a clumsy text or mobile interface.

Robots could store useful parts of a human worker’s brain data and use that to answer complex questions from a customer, without needing to rush off to the cloud for deep research. There are many examples, both practical and fanciful: imagine being able to put questions to Elon Musk’s brain (or that of another scientist, philosopher or similar luminary) in 100 years’ time.
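As a rough sketch of that local-first idea, here’s a short Python illustration, assuming a robot carries a distilled snapshot of an expert’s knowledge and only escalates to a cloud service when the snapshot cannot answer. Every name here (ExpertSnapshot, query_cloud) is hypothetical, not a real API:

```python
# Hypothetical sketch: a service robot answers customer questions from a
# locally stored snapshot of an expert's captured knowledge, falling back
# to a (slower) cloud lookup only when the local store has no answer.

class ExpertSnapshot:
    """Local store of question/answer pairs distilled from a human expert."""

    def __init__(self, knowledge: dict[str, str]):
        self.knowledge = knowledge

    def answer(self, question: str) -> str | None:
        # Naive exact-match lookup; a real system would use semantic search.
        return self.knowledge.get(question.lower().strip())


def query_cloud(question: str) -> str:
    # Placeholder for a remote deep-research service.
    return f"(escalated to cloud research: {question!r})"


def handle_customer_question(question: str, snapshot: ExpertSnapshot) -> str:
    local_answer = snapshot.answer(question)
    if local_answer is not None:
        return local_answer          # answered instantly, no network round trip
    return query_cloud(question)     # slow path: "rush off to the cloud"


snapshot = ExpertSnapshot({"what battery should i buy?": "The 5000 mAh pack."})
print(handle_customer_question("What battery should I buy?", snapshot))
```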

More realistically, people could be paired with bots for work, each seeing what the other does, learning and responding. Imagine a fire-and-rescue worker using a drone without a touchscreen controller and the problems such controls create in dangerous environments: together they could share imagery, analyse risk, and investigate and rescue faster than current methods allow. A chatbot-style conversation streaming commands and questions directly through a brain link would be the fastest way to work, removing the need for controllers or typed commands.
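To make that conversational loop concrete, here’s a minimal Python sketch, assuming a decoder that turns raw neural signals into discrete intents and a drone that replies in plain language. Every interface here (Intent, decode_brain_signal, drone_respond) is an assumption for illustration; no real BCI or drone API is implied:

```python
# Hypothetical sketch of the chatbot-style exchange described above: commands
# arrive as decoded intents from a brain link instead of typed text, and the
# drone streams plain-language replies back to the worker.

from dataclasses import dataclass


@dataclass
class Intent:
    action: str          # e.g. "scan", "move", "report"
    target: str | None = None


def decode_brain_signal(raw_signal: bytes) -> Intent:
    # Stand-in for a decoder that maps neural activity to a discrete intent.
    mapping = {b"\x01": Intent("scan"), b"\x02": Intent("move", "north wing")}
    return mapping.get(raw_signal, Intent("report"))


def drone_respond(intent: Intent) -> str:
    if intent.action == "scan":
        return "Thermal scan complete: two heat signatures, low structural risk."
    if intent.action == "move":
        return f"Relocating to {intent.target}."
    return "Status: battery 72%, link stable."


# Conversation loop: signal in, chatbot-style reply out.
for signal in (b"\x01", b"\x02", b"\x00"):
    print(drone_respond(decode_brain_signal(signal)))
```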

And in the nearer term, modern cars use cameras to monitor drivers’ eyes and check whether they are falling asleep. That could soon be replaced by a brain-monitoring system that warns against distractions and refuses to let someone drive if they are intoxicated or otherwise impaired, switching instead to self-driving mode.
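A minimal sketch of that safeguard, assuming the car can expose an attention score and an impairment flag from a hypothetical brain-monitoring sensor; the threshold and names are purely illustrative:

```python
# Illustrative mode-selection logic: hand control to the driver only when
# the (hypothetical) brain-monitoring sensor reports them alert and sober.

ATTENTION_THRESHOLD = 0.6  # below this, treat the driver as too distracted


def choose_driving_mode(attention_score: float, impaired: bool) -> str:
    """Return 'manual' only when the driver is alert and unimpaired."""
    if impaired or attention_score < ATTENTION_THRESHOLD:
        return "self-driving"
    return "manual"


assert choose_driving_mode(0.9, impaired=False) == "manual"
assert choose_driving_mode(0.4, impaired=False) == "self-driving"
assert choose_driving_mode(0.9, impaired=True) == "self-driving"
```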

If humans can work at the speed of robots, that gives us more value in the robot economy that Detroit predicts. That speed could also help augmented humans work directly with machines, be they space probes, construction tools or other systems where a pure machine might not react fast enough to fluctuating conditions.