What is AI?

AI, a brief introduction by not a robot

Jul 22, 2019 | Article

So what is AI? Quick stab: Artificial Intelligence is when machines mimic human intelligence to solve problems that would take us too long. Something like that?

Here is a definition from John McCarthy, who coined the term ‘Artificial Intelligence’ back in the 1950s, taken from his 2004 Stanford University paper: “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

Getting clearer? A little maybe.

When thinking of AI, my initial gut reaction is that we should avoid the Judgement Day scenario from the Terminator films. As long as that doesn’t happen then surely AI is okay?

For the most part I’m sure it is. Probably even overwhelmingly so, given that AI is incredibly useful in so many cases: think online shopping, advertising, googling, cars, cybersecurity, health, manufacturing, farming, global warming, and so on.

But AI is growing fast and it’s tough to keep up with what it is doing to us. The more complex and seemingly clever AI systems get, the less transparent their decision-making becomes. And the longer we live with AI, the more we retrospectively understand how it has affected our lives.

From self-driving cars that need to decide who to avoid and who to run over, to the Facebook and Google engagement algorithms that have changed the way our news is delivered to us, AI has an impact on our lives, and we are consistently one step behind in realising what that impact is. This leads us to the many discussions this subject deserves.

Mentioning the words ‘autonomous’ and ‘AI’ together tends to get the ball rolling. Science fiction has taken care of that with plenty of scary depictions of Super AIs declaring war on the human race. Fortunately, for the most part autonomous robots are mundane and boring things that usually have something to do with manufacturing. Still, the idea that something can take control out of your hands for the promise of a more trustworthy decision, or a more responsible, efficient, productive, or safer one, is a scary idea. It is an idea that requires trust in the AI and trust in the group mentality that made it. If social science has taught us that we cannot trust our own objectivity as much as we like to think, then to what extent are AI decision systems simply reinforcing our own preconceptions and discriminatory biases? This is a trust with wildly varying repercussions depending on the system in question.

You may well be aware that Google Translate can sometimes produce amusingly inaccurate results. No doubt you can find some interesting ‘AI poetry’ examples online. They usually arise by teasing a translation AI with a barrage of copy-pasting between random languages until something wildly inaccurate comes out.
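If you want a feel for how that game works, here is a minimal, purely illustrative sketch in Python. The lossy word table below is invented as a stand-in for a real translation service (real NMT systems are far better than this), but the point survives: compose enough imperfect hops and the meaning drifts away from the original.

```python
# Toy stand-in for round-trip translation. The LOSSY table is invented
# for this sketch; a real version would call an actual translation API.
LOSSY = {
    "the": "a", "a": "the", "cat": "animal", "animal": "creature",
    "sat": "rested", "rested": "slept", "on": "upon", "upon": "over",
    "mat": "rug", "rug": "carpet",
}

def hop(sentence: str) -> str:
    """One 'translation' pass: push every word through the lossy table."""
    return " ".join(LOSSY.get(word, word) for word in sentence.split())

sentence = "the cat sat on the mat"
for i in range(3):
    sentence = hop(sentence)
    print(f"hop {i + 1}: {sentence}")
# hop 1: a animal rested upon a rug
# hop 2: the creature slept over the carpet
# hop 3: a creature slept over a carpet
```

Each pass is individually plausible, yet after a few of them the cat is gone for good. That is the essence of the ‘AI poetry’ trick.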

These wild inaccuracies come out of Neural Machine Translation (NMT) systems. An older translator would simply have been called a Machine Translation (MT) system. The difference is the use of an AI neural network that constructs its intelligence by recognizing and building upon patterns, just as facial-recognition systems do.

Such a system is loosely based on the operation of neurons in the human brain: a deep learning process builds and adjusts as many connections as it can to transform an input into an output. The result of many, many of these connections can be a very insightful or true output. Yet it takes a lot of time, data, and computing to reach this stage, and until that happens the system’s output is nonsense.
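To make that less abstract, here is a tiny hand-rolled neural network (NumPy only, all numbers invented for illustration). It learns the XOR function by repeatedly nudging its connection weights: at the start its outputs are nonsense, and only after many thousands of updates do they come to resemble the right answers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 2 inputs -> 4 hidden 'neurons' -> 1 output.
# Every entry of W1 and W2 is one of the 'connections' described above.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: the target outputs

for step in range(20001):
    # Forward pass: the input flows through the connections to an output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every connection to shrink the error a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)
    if step % 10000 == 0:
        print(step, out.ravel().round(2))

# Step 0 prints near-random guesses; by the final step the outputs
# typically sit close to the targets (0, 1, 1, 0).
```

Nothing in this loop knows what XOR means. It is just connection-nudging, and whether the current output is brilliant or garbage is something the network itself has no way of telling.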

Of course, we are not really talking ‘intelligence’ here; it’s more like impressive tricks masquerading as intelligence. As such, an AI neural network has no way of actually knowing whether it is at the ‘clever girl’ stage or the ‘total nonsense’ stage.

So the output of a complex neural system offers little decision-making transparency, and the AI cannot actually detect whether its output makes any sense or not. Yet by recognizing patterns in data, a neural AI can transform any number of input combinations into a desirable output.

This gives us very powerful ‘intelligent’ systems that are incredibly useful (neural networks have dominated image-recognition benchmarks since 2012, and have since matched or beaten human performance on some of them), but their way of being clever inherently contains flaws that should concern us when it comes to robots making autonomous moral decisions.

The output that we are trusting suffers from a lack of reliability, memory, and common sense. This means that the output could be incredibly wrong or incredibly right, with the system having no idea which is the case. It also means the AI has no concept of context: it cannot extrapolate contextual knowledge from prior connections, nor can it assess whether its output is in the realm of likely answers or impressively incorrect.

For most purposes, such as translation AIs, neural machines are not making moral decisions for us. Leave such a machine to build connections for long enough and you can verify that it is working by checking the final translation. This output is either correct or not. It’s not so simple with predictive systems such as accident avoidance in autonomous cars.

Although autonomous robots may sound scary, they are usually quite boring and safe. They often amount to things like assembly robots, or cobots that are designed to work alongside humans. The decisions they make are relatively dull and straightforward and have little to do with morality.

But should AI be allowed to make moral decisions? Aside from self-driving vehicles, the classic image you may have of life-and-death decisions being made by robots doesn’t really exist. However, things can change, and this subject is full of interesting debates and calls for protocols about this very scenario. Machines are becoming more intelligent every year, and some researchers even believe they could reach human-level intelligence in the coming decades. Once they reach this point, they could start to improve themselves and create other, even more powerful artificial intelligences, known as superintelligences.

If self-driving cars become commonplace, then robots (a.k.a. non-moral agents) all over the world will be making moral decisions for us every day. This is not something that everyone is comfortable with. And with the rapid implementation of facial recognition systems in many countries, we are putting trust in a technology that turns out to work much better on white faces than on people of colour, largely due to unrepresentative training data and the group-think dynamics and blind spots of those who made it.

With more than 37% of businesses today using AI in some way, there is a lively debate about artificial intelligence ethics and the moral consequences of AI systems. So, putting autonomy aside, there are plenty more things of interest regarding the role of AI in our lives.

For example, the engagement algorithms used by social media companies probably seemed like a logical and relatively innocent idea upon their conception and implementation. In retrospect, their impact on our collective behaviour must not be underestimated.

The engagement AI systems that filter and sort our daily media feed from the internet have changed the way in which we experience news. In essence, they may have even changed its definition a bit.

By building an information dissemination system that rewards engagement, these algorithms effectively discourage boring news in favour of news that travels well. Where truth has no inherent agenda, a lie will often sound better, because it can tell you what you want to hear. This is what Facebook’s AI is doing to your information diet: making sure that niche opinions receive more than their fair share of attention and structurally misrepresenting the actual balance of people who share them.
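As a caricature of the mechanism, imagine a feed ranker whose only objective is engagement. Everything here (the posts, the field names, the weights) is invented for illustration, but notice what the scoring function does and does not measure.

```python
# Invented posts with invented predicted engagement probabilities.
posts = [
    {"title": "Committee publishes budget report", "clicks": 0.02, "shares": 0.01},
    {"title": "SHOCKING truth THEY don't want you to know", "clicks": 0.30, "shares": 0.25},
    {"title": "Local vaccination rates hold steady", "clicks": 0.03, "shares": 0.01},
]

def engagement_score(post):
    # Reward whatever keeps people clicking and sharing. Note what is
    # missing: nothing here measures truth, fairness, or how widely the
    # post's viewpoint is actually held.
    return 0.6 * post["clicks"] + 0.4 * post["shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  {post['title']}")
# The sensational post takes the top slot every time, on merit it never had.
```

Optimise for that score at the scale of billions of users and the boring-but-true stories sink while the outrageous ones float, which is the amplification described below.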

This drives sensationalist and fake news towards the unsuspecting while self-fortifying its own value as real news. False narratives and disinformation that would ordinarily be embraced by just a tiny percentage of a population are thus catalyzed into major influences on the masses.

This mechanism lends itself to all aspects of our news digestion and public debates, with unsettling consequences that are many and varied: from conspiracy theories like QAnon that are much bigger than they should be, to political disinformation and election manipulation (think Brexit, Cambridge Analytica, the US elections, Russia, etc.).

Giving sensationalist and marginal narratives a daily hormone injection is not a good idea. Debates threaten to become overly focussed on polarization as the concept of trusted and diligent journalism comes increasingly under threat from normalized nonsense that competes with equal or even greater validation.

Simply put, engagement algorithms, Facebook’s in particular, have caused damage to our global society and continue to do so to this day. They steadily serve to undermine the democratic process and erode trust in public institutions, science, and academia.

That niche opinions are a fact of life, and freedom of speech an essential human right, does not detract from the dangers of a robot system that is distorting how those opinions are represented in our society.

In fact, this social media amplification has an even more devastating effect on countries that have typically struggled with freedom of speech or economic stability. In many countries, such as Myanmar, Sri Lanka, Ethiopia, and India, hateful and racist rhetoric has increased significantly and is incorrectly portrayed as a reflection of what the population actually thinks. Knowing the bias that our information delivery system is creating doesn’t seem to help, though, and given the nature of our engagement with information it may be only a matter of time before normalized sensation and false narratives start self-fulfilling their prophecies and become part of our lives.

Compounding this mechanism, we also have a market system that thrives on inequality. This gives us a number of catalysts that reinforce one another in creating, rewarding, and exaggerating discontent on a global level.

This ‘unintended’ repercussion of AI for profit does not bode well for the stability of democracies. (It was arguably unintentional at its conception, but it has been common knowledge for a long time now.) Our political offerings seem all too happy to exploit and manipulate these ‘profitable’ emotions in their efforts to appease a growing disgruntled electorate. Populist parties are among the main beneficiaries of engagement algorithms, with agendas embracing the marginal standpoints of just a tiny portion of the democracy they pretend to represent.

It is disconcerting to watch this amplification process reinforce itself, degrade the quality of our debates, and promote populist politics as an escape valve for feelings of injustice that drift ever further from reflection on their true causes.

This is just one example of the profound repercussions that an AI system can have on the world we live in. And although in this case we cannot blame autonomous robots, we could blame an autonomous Mark Zuckerberg.

There is plenty to discuss in the world of AI. If you are interested in learning more, I would recommend following Politico AI for the latest developments and conversations.

Text & illustrations: Steen Bentall
