Algorithms as the basis for AI are not new, but the massive improvement in data processing and storage capabilities is.
The insights and predictions machines can make are starting to have a significant influence in high-impact areas such as mobility, healthcare, energy and climate informatics.
Yet, a high level of uncertainty remains about how Artificial Intelligence is applied to solve real-world problems. As the technology is ‘democratised’ and made available to growing numbers of us, so the need to dispel the basic myths surrounding the subject grows.
So let’s take a step back, to go forward.
Mathematics not magic
There are several factors behind the renewed interest in Artificial Intelligence. The first is data: the algorithms that underpin AI are decades old, but the scale at which data can now be captured, processed and stored is new.
The other factor is compute power: when these mathematical models were first defined, they were out of reach for standard enterprises and required supercomputing capabilities to apply.
Compute
Then there’s the advent of the cloud.
Amazon launched its cloud service in 2006, providing highly elastic, scalable compute to anyone with a credit card. While the cloud delivered compute on demand, researchers were working in parallel on whether special-purpose hardware could accelerate the processing AI requires.
In 2009, Professor Andrew Ng of Stanford University used Graphics Processing Units (GPUs), rather than Central Processing Units (CPUs), to train deep belief networks over 70 times faster. We have the gaming community to thank for GPUs: specialised chips designed to render 3D graphics that turned out to be equally well suited to the vector and matrix mathematics required for training neural networks.
Ultimately, at the core of machine learning is a mathematical model. It’s about machines having the power to run algorithms that take millions of data points and push them through a neural network. Without GPUs, that training could take months, or even years.
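To make that difference concrete, here is a minimal sketch, assuming the open-source PyTorch library and, optionally, a CUDA-capable GPU; the matrix size, timings and function name are illustrative only and not from the article. It times the same matrix multiplication, the basic operation behind neural network training, on the CPU and on the GPU.

```python
# Illustrative sketch: time one large matrix multiplication on CPU and,
# if available, on a CUDA GPU. Sizes and results are purely indicative.
import time
import torch

def time_matmul(device: str, size: int = 4096) -> float:
    """Multiply two random size x size matrices on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    start = time.time()
    _ = a @ b                      # the vector/matrix maths GPUs excel at
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the asynchronous GPU kernel
    return time.time() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")
```

On typical hardware the GPU run finishes in a small fraction of the CPU time, which is exactly the gap that made training large neural networks practical.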
Despite this advance, though, AI was still the preserve of the big tech companies.
Availability of Frameworks
In the last few years Amazon, Google, Microsoft and Facebook open-sourced a number of machine learning frameworks, such as TensorFlow, PyTorch and the Cognitive Toolkit. The combination of these frameworks, GPUs and cloud data capabilities made AI accessible to all.
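As a sense of how low the barrier now is, here is a minimal sketch using one of those open-source frameworks, PyTorch; the network shape, batch of random stand-in data and hyperparameters are assumptions for illustration, not a real application.

```python
# Illustrative sketch: defining and updating a small neural network with
# the open-source PyTorch framework. The data here is random stand-in data.
import torch
from torch import nn

model = nn.Sequential(          # a small feed-forward classifier
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch; real use would load an actual dataset.
images = torch.randn(32, 28 * 28)
labels = torch.randint(0, 10, (32,))
optimiser.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()
print(f"loss after one step: {loss.item():.3f}")
```

A decade ago this kind of capability required specialist software and hardware; today it is a short script anyone can run.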
You might ask, ‘why did they do that?’ Ultimately, the big tech providers are trying to sell cloud services, and giving these tools away encourages organisations to use those services when implementing AI.
Losing control
There is a danger of being distracted by developments that propagate certain myths around AI, such as the fear of losing control of computer-based systems.
This dynamic is evident in the case of Google Duplex, an AI system for accomplishing real-world tasks over the phone, such as booking appointments. Based on the user’s instructions, the cloud-based app dials a phone number and holds a voice conversation with a person at the other end, who cannot tell they are speaking to a machine.
In reality, AI works best in narrow circumstances, when applied to specific problems. If, in that conversation, the person taking the appointment asked, ‘Did you watch the match last night?’, the machine wouldn’t be able to answer, because it does not have general intelligence.
Creating a broad Artificial Intelligence system, one that is good at text-to-speech, natural language processing and image recognition all at once, requires combining different AI architectures. That introduces complexity, so in practical terms we tend to focus on specific use cases: Narrow AI.
When businesses apply AI, they should start with the use case, then pick the neural network architecture and technique best suited to it.
Better and more convenient
Take financial services as an example: given the information banks have about us, what they offer is quite general; it’s not specific to us as individuals.
They could, on the other hand, use the information from various data points to make life more convenient by spotting patterns in our transactions. Applying AI to our data might help us pay off our mortgages quicker or get better returns on spare cash.
In fact, one growth area is robo-advisors, which use algorithms to democratise wealth management techniques previously available only to high-net-worth individuals.
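As a purely illustrative sketch of what ‘spotting patterns in our transactions’ could mean in practice, the snippet below groups invented card transactions by merchant and flags the ones that recur at a similar amount; the merchants, amounts and threshold are all hypothetical, and a real system would use far richer models than this rule of thumb.

```python
# Purely illustrative: invented transactions, not a real banking product.
# Flag merchants that charge us a similar amount three or more times.
import pandas as pd

transactions = pd.DataFrame({
    "merchant": ["GymCo", "GymCo", "GymCo", "CoffeeBar", "StreamFlix", "StreamFlix"],
    "amount":   [35.00,   35.00,   35.00,   4.20,        9.99,         9.99],
    "date": pd.to_datetime([
        "2024-01-02", "2024-02-02", "2024-03-02",
        "2024-02-14", "2024-02-20", "2024-03-20",
    ]),
})

def recurring(group: pd.DataFrame) -> bool:
    """A crude rule: three or more payments of a similar amount."""
    return len(group) >= 3 and group["amount"].std() < 1.0

subscriptions = transactions.groupby("merchant").filter(recurring)
print(subscriptions["merchant"].unique())   # e.g. ['GymCo']
```

Surfacing patterns like these is the kind of specific, narrow use case where applying AI to our data could make banking genuinely more personal.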
Conclusion
AI can help companies become smarter and deliver real business value, but it’s time to move beyond the hype.
At EY, Artificial Intelligence is impacting both our clients’ industries and our own organisation, which is why we focus on applying AI to solve specific business problems.