A recent opinion piece I read in Wired called for us to stop labelling our current, task-specific machine learning models as AI, because they are not intelligent. I respectfully disagree.

AI is not a new concept. The idea that a computer could ‘think’ like a human, and one day pass for one, has been around since Turing, and in some form long before him. The inner workings of the human brain, and how we carry out computational processes, were discussed by great philosophers such as Thomas Hobbes, who wrote in his 1655 book De Corpore that “by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract.” Over the years, AI has continued to capture the hearts and minds of great thinkers, scientists and, of course, creatives and artists.

The Matrix: a modern-day telling of René Descartes’ “Evil Demon” thought experiment

Visionary science fiction authors of the 20th century, Arthur C Clarke, Isaac Asimov and Philip K Dick, built worlds of fantasy inhabited by self-aware artificial intelligence systems and robots, some of which could pass for humans unless subjected to a very specific and complicated test. Endless films have been released that “sex up” AI: The Terminator series, The Matrix, Ex Machina; the list goes on. However, like all good science fiction, these stories paint marvellous and thrilling visions of futures that, even in 2016, are still in the future.

The science of AI is a hugely exciting place to be too (I would say that, wouldn’t I?). We have now mastered speech recognition, optical character recognition and machine translation well enough that I can visit Japan and communicate, via my mobile phone, with a local shopkeeper without either party having to learn the other’s language. We have arrived at a point where we can train machine learning models to do some specific tasks better than people, including driving cars and diagnostic oncology. We call these current-generation AI models “weak AI”. Computers that can solve any problem we throw at them (in other words, ones that have generalised intelligence, known as “strong AI” systems) are a long way off. However, that shouldn’t detract from what we have already solved with weak AI.

One of the problems with living in a world of 24/7 news cycles and clickbait headlines is that nothing is new or exciting any more. Every small incremental change in the world is reported straight away, across the globe. Every new discovery, every fractional increase in AI performance, gets a blog post or a news article. It makes everything seem boring. Oh, Tesla’s cars can drive themselves? So what? Google’s cracked Go? Whatever…

If you lose 0.2 kg overnight, your spouse probably won’t notice. Lose 50 kg and I can guarantee they would.

If you lose 50 kg over six months, your spouse is only going to notice when you buy a new shirt that’s two sizes smaller, or when they see a change in your build as you get out of the shower. A friend you meet up with once a year will see a huge change, because the last time they saw you, you were twice the size. In this day and age, technology moves on so quickly, in tiny increments, that we no longer notice the huge changes: we’re like the spouse, constantly seeing the tiny ones.

What if we did see huge changes? What if we could cut ourselves off from the world for months at a time? If you went back in time to 1982 and told people that every day you talk to your phone, using just your voice, and it tells you about your schedule and which restaurant to go to, would anyone question that what you describe is AI? If you told someone from 1995 that you can buy a self-driving car via a small glass tablet you carry around in your pocket, would they not wonder at the world we live in? We have come a long, long way, and we take it for granted. Most of us use AI on a day-to-day basis without even questioning it.

Another common criticism of current weak AI models is precisely their lack of the general reasoning skills that would make them strong AI.

DeepMind has surpassed the human mind on the Go board. Watson has crushed America’s trivia gods on Jeopardy. But ask DeepMind to play Monopoly or Watson to play Family Feud, and they won’t even know where to start.

That’s absolutely true. The AI/computer-science name for this constraint is the “No Free Lunch” theorem for optimisation: you don’t get something for nothing when you train a machine learning model. Averaged over all possible problems, no learning or search algorithm outperforms any other, so in training a weak AI model for a specific task you are necessarily hampering its ability to perform well at other tasks. I guess a human analogy would be the education system.
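The No Free Lunch result can be demonstrated on a toy scale (my own sketch, not from the original article): enumerate every possible 0/1-valued objective function on a tiny search space, then compare two fixed, non-repeating search strategies. Averaged over all objectives, each needs exactly the same number of evaluations to find a maximum; a strategy only “wins” on some functions by losing on others.

```python
from itertools import product

def evals_to_find_max(order, f):
    """Number of evaluations a fixed-order search needs to hit f's maximum."""
    best = max(f)
    for i, point in enumerate(order, start=1):
        if f[point] == best:
            return i

# Every possible objective function on 4 points with values in {0, 1}.
all_functions = list(product([0, 1], repeat=4))

order_a = [0, 1, 2, 3]   # one deterministic search strategy
order_b = [3, 1, 0, 2]   # a different one

avg_a = sum(evals_to_find_max(order_a, f) for f in all_functions) / len(all_functions)
avg_b = sum(evals_to_find_max(order_b, f) for f in all_functions) / len(all_functions)

print(avg_a, avg_b)  # → 1.6875 1.6875: averaged over all objectives, no strategy wins
```

The same averaging argument holds for any pair of non-repeating strategies on any finite space, which is why tuning a model for one task is always a trade against performance elsewhere.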

If you took away my laptop and told me to run cancer screening tests in a lab, I would look like this.

Aged 14, in a high school in the UK, I was asked which 11 GCSEs I wanted to take. At 16 I had to narrow this down to 5 A levels; at 18 I was asked to pick a single degree; and at 21 I had to decide which tiny part of AI/robotics (which I’d studied at degree level) I wanted to specialise in at PhD level. Now that I’m halfway through a PhD in Natural Language Processing in my late 20s, would you suddenly turn around and say, “actually, you’re not intelligent, because if I asked you to diagnose lung cancer in a child you wouldn’t be able to”? Does what I’ve achieved become irrelevant, paling against that which I cannot achieve? I do not believe that any reasonable person would make this argument.

The AI Singularity has not happened yet and it’s definitely a few years away. However, does that detract from what we have achieved so far? No. No it does not.