In simple terms, AI is about creating programs or machines that can think – or at the very least, mimic cognitive functions – to perform tasks that would otherwise require human intelligence.
For many of us, probably the most famous example of AI is the exploits of chess-playing supercomputers like IBM's Deep Blue or, more recently, AlphaZero, the creation of Google's AI research subsidiary DeepMind.
Over the years, these much-publicised chess games have pitted human against machine in a battle of strategy. They give us a glimpse into how far AI has progressed, how intelligent it has become. These mind games have also given us pause to reflect on more existential questions: What happens when these machines are so smart they don't need us anymore? What happens when they truly start thinking for themselves?
This description from an article in Wired of AlphaZero’s evolved intelligence indicates how far AI has come:
AlphaZero taught itself chess (as well as go and shogi) starting with no knowledge about the game beyond the basic rules. It developed its chess strategies by playing millions of games against itself and discovering promising avenues of exploration from the games it won and lost. It also searches far fewer positions than Stockfish [an earlier chess engine] when it plays. The result was a chess player of superhuman strength with a style that is human-like.
At a more mundane level, AI is now part of our everyday lives at work and in the home. When we search for something on Google, AI-powered search bots look for what we want. AI powers digital assistants like Siri, Cortana and Alexa. An AI chatbot might ask us if we need help with something when we open our banking app. And of course, AI is used in industries like mining, manufacturing and agriculture to improve efficiency, product quality and the safety of employees.
The accelerated advances in AI over the past ten years have also led to increased awareness and discussion about its possible economic, societal and ethical repercussions. Some of the smartest people on the planet, like Elon Musk and Bill Gates, have issued dire warnings about a future in which humans become secondary or subservient to AI.
“With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out,” Musk said in 2014. Recently, Gates likened AI to nuclear energy in its potential to be weaponised. "The world hasn't had that many technologies that are both promising and dangerous — you know, we had nuclear energy and nuclear weapons."
With such a big and complex topic, it can be hard to cut through the noise. Getting a level-headed perspective is helpful, especially when media coverage seems more inspired by Hollywood's nightmare scenarios than a rational appraisal of what's going on.
So it was refreshing to hear Professor Jon Whittle speak at the most recent round of CEO Circle breakfast events in Melbourne and Sydney.
Professor Whittle is the Dean of the Faculty of Information Technology at Monash University. He is a world-renowned expert in software engineering and human-computer interaction (HCI), with a particular interest in IT for social good. Earlier in his career, he was a lead researcher at NASA on software engineering challenges for space systems.
Professor Whittle started by telling the audience he wanted to "cut through some of the hype about AI". He signalled he would focus on some of the ethical considerations raised by AI for businesses and society generally.
“You can barely pick up a newspaper, listen to the radio or go on social media these days without seeing a story about how AI is transforming the world. But I think it’s fair to say there’s a lot of hype around AI and certain promises about what AI will do in the future,” he said.
Professor Whittle gave a brief history of AI, from its beginnings with the Turing Test devised in 1950 and the invention of the first neural network in 1954, through to the high-velocity developments in areas like Deep Learning. Along this development path, Professor Whittle said, there have been AI 'winters': periods when hype about AI peaked, only for interest and funding to fall away again.
According to Professor Whittle, three factors are currently driving the momentum of AI.
He says different types of AI do different things. While the media has focused on advances in Deep Learning, we also see AI in forms like Language, Search, Planning, Modelling, and Computer Vision.
For business leaders, he reiterated that at the enterprise level it is crucial to focus on the particular problem you want to solve. He said that, at this stage, AI performs very well on specific, well-defined tasks, but poorly when the problem is too general.
"The successes we are seeing with AI in certain industries are where they are identifying the part of their business that is most relevant to be transformed by AI. So, for example, in financial services, incorporating AI into risk management is an obvious area where you would apply AI," he said. "Know what problem you are trying to solve."
Professor Whittle’s most critical message is that leaders have a moral duty to think about the ethics of AI as it applies to their business.
"Be responsible in your application of AI. You've got to think about issues of bias and ethics. That's not for somebody else to deal with … It's for everybody to worry about."
AI is here with us now. It's no longer science fiction, even if some of the scenarios we see in the media appear fantastical. It's being used in our businesses now, and it's already changing the way we live. As with the emergence of any profound technology, it is in our hands to decide how we use it.
By thinking deeply and listening to experts like Professor Whittle, business and community leaders will be better placed to use AI for our greater success and the greater good.
You can learn more about Professor Whittle and his work at the RHS Monash Data Futures website.