
Artificial Intelligence: What Is AI & Where Is It Going?


When you ask someone about AI, you tend to get the same responses. Killer robots. Evil computers. Cyborgs. Unfortunately, Hollywood has gotten out in front of actual applied AI, and warped the average person’s perception of the technology. What is actually an extremely useful and world-changing idea has been turned into something negative.

Here, we’ll be examining what AI is, how it started, how we’re interacting with it, and where it’s headed. As you’ll see, the future of AI is much brighter than what the Silver Screen has predicted.

What is AI?

Artificial Intelligence, or AI for short, is a technology that has evolved and changed through the years. So too has its definition. Upon its inception, it was referred to as the science and engineering of making intelligent machines. Now, it is more commonly thought of as the science of creating machines and technology that can think as a human does.

To be more specific, AI is less a single technology and more a collection of frameworks and methods that work together. The core components include:

Neural networks: Inspired by human brains, neural networks are systems that aim to process information in the same fashion as a human. This is done by constructing a series of connected nodes in multiple “layers”.

First is an input layer, where the data to be used is inserted. Next are a number of hidden layers, each of which analyzes a different factor. For example, if you were to input a data set consisting of pictures of pizzas, the first layer could look for common colours, while the second layer looks for common shapes, and so forth. As more data is processed by the hidden layers, the model will get better at determining the colours and shapes that make up pizza. Finally, labels can be applied to the data that is output. This network can now be used to identify what is and isn’t a pizza.
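To make this layered structure a little more concrete, here's a minimal sketch in Python using NumPy. The layer sizes, the random weights, and the pizza label are all invented for illustration; a real network would learn its weights from training data rather than starting from random values.

```python
import numpy as np

# Toy forward pass through a small neural network: an input layer,
# two hidden layers, and an output layer. All sizes and weights are
# arbitrary and untrained, purely for illustration.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Pretend the input is a tiny 8x8 greyscale image, flattened to 64 numbers.
image = rng.random(64)

# Randomly initialised weights; training would adjust these so the hidden
# layers come to respond to useful colours, shapes, and so on.
w1 = rng.normal(size=(64, 16))   # input layer    -> hidden layer 1
w2 = rng.normal(size=(16, 8))    # hidden layer 1 -> hidden layer 2
w3 = rng.normal(size=(8, 1))     # hidden layer 2 -> output

h1 = relu(image @ w1)            # first hidden layer
h2 = relu(h1 @ w2)               # second hidden layer
p_pizza = sigmoid(h2 @ w3)       # output: "probability" this is a pizza

print(f"P(pizza) = {p_pizza[0]:.2f}")
```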

Machine learning: This technology is what allows neural networks to process data and make sense of it. To break it down as simply as possible, machine learning labels things. It does this by using the nodes and layers of the neural network to build up a shared picture of what something is, based on the elements its examples have in common.

There are four commonly recognized types of machine learning: supervised, semi-supervised, unsupervised, and reinforcement learning. Each has its own uses, and which one you choose depends on the task you are trying to complete and the data set you are working with.
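As a rough illustration of the most common of these, supervised learning, here's a toy example using the open-source scikit-learn library. The features and labels are made up; the point is simply that labelled examples go in, and a model that can label new examples comes out.

```python
from sklearn.tree import DecisionTreeClassifier

# Supervised learning in miniature: every training example carries a label,
# and the model learns to map features to those labels.
# The features are invented: [diameter_cm, has_cheese (1 = yes, 0 = no)].
X = [[30, 1], [25, 1], [28, 1], [10, 0], [12, 0], [9, 0]]
y = ["pizza", "pizza", "pizza", "cookie", "cookie", "cookie"]

model = DecisionTreeClassifier().fit(X, y)

# The trained model can now label examples it has never seen.
print(model.predict([[27, 1], [11, 0]]))   # expected: ['pizza' 'cookie']
```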

Deep learning: The most powerful and complex of these technologies, deep learning is a little harder to pin down. Put simply, it uses neural networks with many layers that learn their own internal rules from raw data and apply those rules to produce an output. The algorithm works out which features matter on its own, without needing humans to hand-craft the shortcuts it uses to solve problems.

Though powerful, it remains difficult to access for a few reasons. First, it requires an extremely large data set to work with. Second, it also requires enormous amounts of computing power.
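For a sense of what "deep" looks like in practice, here's a hedged sketch using the open-source Keras API in TensorFlow. The architecture and layer sizes are arbitrary choices for illustration only; training a model like this would also require the large data set and heavy computing power just mentioned.

```python
import tensorflow as tf

# A "deep" network is the same layered idea, just with many more layers,
# which is what lets it learn its own features from raw inputs.
# Layer sizes here are arbitrary; a real model would need a large labelled
# dataset and substantial compute to train.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),          # raw RGB image pixels
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # early layers pick up colours and edges
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),  # later layers pick up shapes and textures
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # output: P(pizza)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```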

How did AI begin?

The term "artificial intelligence" was coined in 1955 by Professor John McCarthy, in a proposal for a conference at Dartmouth College that aimed to bring together like-minded academics with an interest in simulating human thought. After the conference ended, the name persisted and is still the go-to identifier.

Interestingly, the idea of a self-driving car, which remains a hot trend today, was one of the first applications he imagined for the technology. As McCarthy continued his work in the field, he also created LISP, a programming language that became a mainstay of AI design and research.

Beyond popularizing AI as a term and creating LISP, McCarthy remained a driving force behind the technology. He was a founder of two first-generation AI labs: the Artificial Intelligence Laboratory at MIT and the Stanford Artificial Intelligence Laboratory (SAIL). These labs went on to become major players in the advancement of important technologies like robotics.

Am I interacting with AI?

It may surprise you to learn that you’re probably already interacting with AI on a daily basis.

Imagine a day where you're awoken by the alarm you set with Siri. You hail an Uber, which follows the directions provided by Google Maps. During your ride, you scroll through your Facebook feed and check your Gmail inbox. It's not even 9 AM, and you've already used five services that incorporate AI.

Clearly, AI is not something we should be talking about as the future. It is the present.

Of the services listed above, Siri is perhaps the most obvious piece of AI. It is a one-to-one interface designed to execute commands and deliver information to the user while simulating a human persona. From classic characterizations like HAL 9000 to modern incarnations like Samantha in Her, this is what most people think of when they hear "AI".

These other services utilize AI in ways that are not quite as obvious.

Uber: Machine learning is leveraged to predict rider demand and match it with the right number of drivers.

Google Maps: Data on vehicle movement is anonymized and used to find faster routes avoiding traffic.

Facebook: Analyzes faces in photos to suggest friends you should tag.

Gmail: The process of sorting your emails into priorities is handled by AI.

These are clear, everyday uses. The fact that AI powers such massively used, customer-facing applications demonstrates how much influence it already wields.
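To give a flavour of the kind of machine learning sitting behind features like inbox sorting, here's a toy text classifier built with scikit-learn. This is not how Gmail actually works, and the emails and labels below are invented; it simply shows the general pattern of learning "priority" from labelled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A toy version of inbox sorting: learn from a handful of labelled
# emails, then guess whether a new one looks like a priority.
emails = [
    "Your invoice is overdue, please pay today",
    "Meeting moved to 3pm, please confirm",
    "Weekly newsletter: top 10 pizza recipes",
    "Flash sale! 50% off everything this weekend",
]
labels = ["priority", "priority", "not_priority", "not_priority"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

# "confirm" overlaps with a priority example above, so this should lean priority.
print(classifier.predict(["Can you confirm the contract by Friday?"]))
```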

Where is AI going?

Given AI’s massive potential for growth and the fact that 60+ years in we seem to just be getting up and running with it, the simple answer is: everywhere.

As AI proliferates, it will become more and more intertwined with global digital channels like those listed above. Efficiencies will be realized, and lessons learned in one part of a business will be shared across the rest. AI will become less of a specialized tool and shift towards a general-purpose application.

At the same time, it should logically become more accessible to smaller organizations and the general public. To help speed the evolution of AI technology, leading firms like Microsoft and Google have released open-source AI libraries and frameworks. These allow smaller players and hobbyists to experiment and build upon the technology. As a result, the big companies get to see new uses, while everyone else gets to learn the technology and put it to work for themselves.

Chatbots are an area in which AI has already been introduced, and is likely to thrive. These Chatbots are the semi-automated programs you’ve likely interacted with when you message a brand on social media, or receive help from the digital agent on a website. This style of Chatbot is popular and growing, with over 300,000 deployed on Facebook Messenger alone.

Currently, Chatbots rely mainly on pre-written scripts. When the script is followed, they are extremely useful at getting users the information they need quickly. However, human beings are prone to unpredictability, and will often stray from the pre-set route either accidentally or intentionally. Thankfully, Chatbot-building platforms like SnatchBot have already implemented AI that allows developers to deal with this. As AI advances, so too should the capabilities of these Chatbots.
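As a rough sketch of the gap that AI has to fill, here's a bare-bones scripted bot in Python: keyword matching plus a fallback. The intents and replies are invented, and real platforms rely on trained language models rather than keyword lists, but it shows how quickly a script runs out when users wander off the beaten path.

```python
# A bare-bones scripted chatbot: match keywords to canned replies,
# and fall back gracefully when the user strays off the script.
# Intents and replies are invented for illustration only.
SCRIPT = {
    ("hours", "open", "close"): "We're open 9am-9pm, seven days a week.",
    ("menu", "pizza", "toppings"): "You can browse our full menu at example.com/menu.",
    ("order", "delivery", "track"): "I can help with that. What's your order number?",
}
FALLBACK = "Sorry, I didn't catch that. Would you like to talk to a human agent?"

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in SCRIPT.items():
        if any(keyword in text for keyword in keywords):
            return answer
    return FALLBACK

print(reply("What time do you open?"))                    # on-script: hours reply
print(reply("I want to complain about my last visit"))    # off-script: fallback
```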

Ultimately, we needn't fear Skynet or the other dire predictions made by those who seek to use a fear of AI to amplify their own voices. Why? Think of the predictive text function on your phone. This is a piece of technology that sees virtually every message you send. It is built on cutting-edge software that leads the market. And how often is it correct? Rarely. If you don't believe that, have a look at this Twitter thread. So, before we worry about AI building an army of killer robots, let's first get to the point where it can write a competent sentence.
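To see why predictive text stumbles so often, consider a toy version of it: a bigram model that suggests whichever word most often followed the previous one in past messages. The training messages below are invented, but the limitation is real; with so little context, the "best" guess is frequently not the word you meant.

```python
from collections import Counter, defaultdict

# A toy version of predictive text: count which word most often follows
# each word in some past messages, then suggest that word.
history = (
    "see you at the office at nine "
    "see you at the pizza place "
    "running late see you soon"
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(history, history[1:]):
    following[current_word][next_word] += 1

def suggest(word: str) -> str:
    counts = following.get(word.lower())
    return counts.most_common(1)[0][0] if counts else "?"

print(suggest("you"))   # 'at' - right in one message, wrong in another
print(suggest("the"))   # 'office' or 'pizza'? It can only pick one.
```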

Based on all the above, it’s safe to say that AI is a general application technology that will play a vital role in all of our lives going forward, whether we recognize it or not. It’s our job to find the best implementations, and use it to bring the most good to our world.