Artificial intelligence is changing the world

By Ashley Davis
CTO, Sortal

Have you heard about artificial intelligence and are you wondering what all the hype is about? In this post I’ll briefly explain what it is, where it’s at, where it came from and what to expect in the future.

Over the past decade, time, resources, energy and investment have been poured into AI by the world’s largest companies and institutions. In case you hadn’t noticed, AI is already here. It’s in the software we use every day, and it’s in software that makes decisions about us and on our behalf.

If you missed the rise of AI, I can’t blame you. It’s only been since 2012 that the use of AI in business has exploded off the back of new developments and breakthroughs in the machine learning space.

Why all the excitement? Why only now? Well, in 2012 a new type of deep learning algorithm won the yearly ImageNet challenge: Alex Krizhevsky’s convolutional neural network, which came to be known as AlexNet, broke new ground in accuracy, far surpassing the error rates of earlier approaches. Every winner since 2012 has also been a deep neural network, and the algorithms and models continue to grow in complexity and achieve new levels of predictive accuracy year on year. Such rapid progress has generated continued interest, excitement and, most importantly, funding for ongoing research.

Improving accuracy of machine learning models over time.

Source: https://chaosmail.github.io/deeplearning/2016/10/22/intro-to-deep-learning-for-computer-vision/

But what exactly is a neural network? A neural network, or more technically an artificial neural network (ANN), is a decision-making or predictive model whose construction is inspired by the way biological brains work. Unlike conventional computer programs, which we explicitly program to follow a sequence of rules, neural networks must be taught to perform complex tasks such as image classification (this is what Sortal does).

How a neuron is modeled as a mathematical function.

Source: http://cs231n.github.io/neural-networks-1/
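
To make the figure concrete, here’s a minimal sketch of a single artificial neuron in Python. The inputs, weights, bias and the choice of a sigmoid activation are all illustrative assumptions for the example, not taken from any real model:

```python
import math

def neuron(inputs, weights, bias):
    # A neuron computes a weighted sum of its inputs, plus a bias term...
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...then squashes the result into the range 0..1 with an activation
    # function (a sigmoid, in this sketch).
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Three inputs and three made-up weights; the output is the neuron's "opinion".
print(neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))
```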

ANNs learn through a mathematical and statistical training process; the aim is to produce a model that is highly accurate in the predictions or classifications it makes.
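
As a rough illustration of what “training” means, the toy loop below uses gradient descent to nudge a single weight until the neuron’s output matches a target value. The target, learning rate and step count are arbitrary choices for the sketch, not parameters of any real system:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = 0.0                 # the single weight we are going to learn
x, target = 1.0, 0.8    # one training example: input 1.0 should produce 0.8

for step in range(1000):
    y = sigmoid(w * x)                   # forward pass: make a prediction
    error = y - target                   # how wrong the prediction is
    gradient = error * y * (1 - y) * x   # slope of the squared error w.r.t. w
    w -= 0.5 * gradient                  # step the weight downhill

print(w, sigmoid(w * x))  # the output is now very close to the 0.8 target
```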

Ultimately, an artificial neural network just boils down to a pile of numbers and mathematical operations: a series of matrix multiplications and dot products whose output we must then interpret.
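
A whole layer of neurons is the same calculation expressed with matrices, which is exactly the kind of number crunching GPUs are good at. Here’s a sketch of one forward pass through two small layers using NumPy; the layer sizes and random weights are made up for the example:

```python
import numpy as np

def layer(inputs, weights, bias):
    # One layer: a matrix multiplication, then an element-wise activation.
    return np.tanh(inputs @ weights + bias)

rng = np.random.default_rng(42)
x = rng.normal(size=(1, 4))                      # one sample with 4 features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # layer 1: 4 inputs -> 8 outputs
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)    # layer 2: 8 inputs -> 3 outputs

output = layer(layer(x, w1, b1), w2, b2)
print(output)  # a pile of numbers we must interpret as the prediction
```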

Number crunching of this magnitude requires raw computing power. Fortunately, the AI industry has been able to co-opt the power of graphics processing units (GPUs), devices originally intended to power applications hungry for graphical horsepower. For a long time it has been the demand for better and better video games that has driven the development of ever more powerful GPUs.

So why is the AI industry booming now? In many ways it feels like it is only just starting to achieve prominent success, but if you look at the academic history of AI you can trace significant discoveries and milestones all the way back to the 1950s. This has truly been a multi-generational effort to get to where we are now.

Progress was slow in the beginning, but is moving very fast now. Why is that? Key advances in the science behind artificial neural networks in the last decade have unlocked the rapid pace of technology that we are currently experiencing.

The three Vs of big data: volume, velocity and variety.

It is the combination of academic progress and other enabling elements that is now moving us forward much more quickly than ever before. In the 1950s the internet was scarcely dreamed of, let alone cloud computing. Today we have the world’s knowledge at our fingertips and affordable access to computing power. The world is also collecting more raw data than ever before, and we need to make sense of it. Artificial intelligence can help here, but more importantly, we need access to significant amounts of data to train machine learning models to make accurate predictions.

Source: https://aboutdigitalcertificate.wordpress.com/2014/04/21/inner-big-data-3-vs-volume-velocity-and-variety-and-advantages/

There’s also been the open source revolution in the computing industry, in which participants have been willing to freely share program code and knowledge. This lets us build on the work of the academic giants in the field, has allowed small startups to compete, and has helped convert the stream of innovations into more of a flood.

In recent years we have redirected the mathematical ability of GPUs from physics and 3D rendering (as used in computer games) to training and evaluating machine learning models. Access to computing power means that models can grow larger and more complex: in 2012 the most advanced neural network had 8 layers; now they routinely have hundreds of layers. In fact, the complexity of neural networks is now pushing the boundaries of current GPUs. Will continued investment in AI drive the future enhancement of GPUs, much as the games industry has done in decades past? Or will we see a new generation of hardware that’s explicitly made for machine learning (AI processing units)? It’s always exciting to be on the cutting edge of technology.

There are more breakthroughs to come, but we can already see the fruits of the revolution before us. Self-driving cars are being trialled everywhere. Robots are doing jobs that are too dangerous for humans. Autonomous computer programs are identifying spam, preventing fraud and detecting criminal activity.

The technology that underpins artificial intelligence is moving quickly, and it’s going to move quicker than you think. Currently we have only enough computing power to simulate a fraction of a human brain, and there is much debate over when we might actually achieve full brain simulation. But it’s worth noting that we have a history of underestimating technological advances. Here Moore’s law can give us some kind of guideline for the trajectory of computational technology.

It predicts that our computing power doubles every two years or so. It doesn’t require that many doublings before we have the raw computing power required to emulate a full human brain.
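
For a back-of-the-envelope feel for that claim, here’s a small calculation. Both figures are loose assumptions: roughly 10^14 operations per second for a high-end GPU today, and roughly 10^18 for the (much debated) computational equivalent of a human brain:

```python
import math

gpu_ops = 1e14    # assumed: a high-end GPU today, operations per second
brain_ops = 1e18  # assumed: a contested estimate for the human brain

doublings = math.ceil(math.log2(brain_ops / gpu_ops))
years = doublings * 2  # Moore's law: one doubling roughly every two years

print(doublings, "doublings, roughly", years, "years")  # 14 doublings, ~28 years
```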

Of course, having the raw computational power doesn’t necessarily mean we understand how to make it happen; we may also need to invent a new computing paradigm to get there. So there are still many hurdles.

But that day will come - and probably sooner than you think. Until then, we have serious questions to consider on the ethics of artificial intelligence and what it means to be human.


About the author

Ashley Davis has over 20 years’ experience in software development, with many years spent on apps, web apps, backends, serious games, simulations and VR. He makes technology work for business by building bespoke software solutions that span multiple platforms. He is the creator of Data-Forge (www.data-forge-js.com).