
Is AI robbing us of our humanity?

Image: Adobe Stock / Connect world

As is tradition, yesterday I attended IP EXPO at the London ExCeL. Normally at these sorts of events I don't have time to make it to any of the various seminars taking place, but this time I made a conscious effort to carve some out, and I am pleased that I did. I managed to take in a very interesting talk from Dr Hannah Fry titled: How to be human in the age of the machine.

As I bumbled into an already packed-out Keynote Theatre, flat white and notepad in hand, I wasn't really sure what to expect. Tech isn't the sexiest thing in the world, but Dr Fry made the whole talk incredibly engaging with jokes, anecdotes, visuals and a healthy dose of audience participation.

Fry began by playing us two samples of music: Bach. One piece was composed by Johann Sebastian Bach himself and performed by an orchestra – the real deal. The other was created using an AI algorithm that, Fry told us, works much like the predictive text on your mobile phone, anticipating which chord is most likely to come next and conjuring up its own version of things.

Fry supplemented this with an amusing round of the 'predictive text' game. For the uninitiated, that is where you compose a sentence on your phone purely by continually hitting the middle predictive text suggestion in your messaging app. Part of Fry's result happened to be: 'I know you haven't got my emails and I don't have time for that.' Perhaps a load of nonsense, perhaps a psychologist's dream – who knows.
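Fry didn't go into the maths, but the trick behind both the Bach generator and the predictive-text game is essentially the same: learn which token (a chord, a word) tends to follow the current one, then repeatedly pick the most likely successor. Here's a minimal sketch in Python, assuming a simple bigram model – the chord names and the tiny 'corpus' are invented for illustration and are not the actual algorithm from the talk:

```python
from collections import Counter, defaultdict

def train(sequence):
    """Count how often each token follows each other token (a bigram model)."""
    counts = defaultdict(Counter)
    for current, following in zip(sequence, sequence[1:]):
        counts[current][following] += 1
    return counts

def generate(counts, start, length=8):
    """Repeatedly pick the most likely next token -- the 'middle button' strategy."""
    output = [start]
    for _ in range(length - 1):
        successors = counts.get(output[-1])
        if not successors:
            break
        output.append(successors.most_common(1)[0][0])
    return output

# Toy 'corpus' of chord symbols -- invented for illustration, not real Bach.
chorale = ["C", "G", "Am", "F", "C", "G", "C", "F", "G", "C"]
model = train(chorale)
print(generate(model, start="C"))  # -> ['C', 'G', 'C', 'G', 'C', 'G', 'C', 'G']
```

Real music generators are far more sophisticated, but the underlying idea – predict the next thing from what came before – is the same.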

Anyway, back to the music. Fry then asked for a show of hands. Could the real Johann Bach please stand up? Was piece number one the real thing, or piece number two? The results from the audience were pretty much a 50/50 split – basically, no one had a clue. It turned out to be option two, the one I had raised my hand for – obviously my ears are classically attuned, and I will not be thwarted by computer voodoo.

The likeness between the two pieces was alarmingly uncanny, however, and it is incredibly clever that an algorithm could achieve such accuracy, given the complexity of the music.

After this little experiment, Fry told us that when people hear the word 'algorithm', 85% of us want to "gouge our eyes out" while the remaining 15% end up "mildly aroused". Make of that what you will; I'm personally with the 85%.

What this experiment tells us is that simple algorithms like the one used in this case are very, very effective, and can certainly fool an audience into questioning what's human and what's not.

That said, Fry did highlight that as clever as algorithms can be, they need a lot of thought, and prior to their creation we need to ask ourselves: how are they going to be used by humans?

For example, when the police used algorithms to help quash the London riots, as useful as they might have been for that purpose, they could equally have ended up silencing perfectly peaceful protests. The algorithms simply can't distinguish between the two.

Fry then moved on to an example of algorithms and artificial intelligence used in a courtroom setting. With enough data, you can essentially predict how likely it is that a criminal will reoffend. To illustrate the point, a questionnaire used by one such courtroom algorithm was displayed on the big screen: a series of statements answered on a scale from 'strongly disagree' to 'strongly agree'. For instance:

  • I am really good at talking my way out of problems.
  • I have played sick to get out of something.
  • I have got involved in things I later wish I could’ve gotten out of.

I mean, haven’t we all done these things at least once in our lives? Okay, just me then. But does that make us bad people by algorithm standards? I don’t care for your algorithmic judgement.
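To make the mechanics concrete: tools like this typically turn each answer into a number, weight it, and map the total onto a risk band. The sketch below is a toy version of that idea – the weights and the cut-off are invented for illustration, and bear no relation to whatever proprietary tool Fry actually showed:

```python
# Toy Likert-scale risk scorer. The weights and cut-off below are invented
# for illustration; real courtroom tools are proprietary and far more elaborate.

ANSWERS = {"strongly disagree": 0, "disagree": 1, "agree": 2, "strongly agree": 3}

# Hypothetical weights: how much each statement contributes to the total score.
STATEMENTS = {
    "I am really good at talking my way out of problems.": 1.5,
    "I have played sick to get out of something.": 1.0,
    "I have got involved in things I later wish I could've gotten out of.": 2.0,
}

def risk_band(responses):
    """Sum the weighted answers and map the total onto a crude risk band."""
    score = sum(STATEMENTS[s] * ANSWERS[a] for s, a in responses.items())
    return "high risk" if score >= 7 else "low risk"

print(risk_band({
    "I am really good at talking my way out of problems.": "agree",
    "I have played sick to get out of something.": "strongly agree",
    "I have got involved in things I later wish I could've gotten out of.": "disagree",
}))  # -> high risk (score 8.0)
```

Note how arbitrary the output feels: nudge a weight or the cut-off and the same answers land in a different band.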

But this brought us on to an interesting question: if you were on trial, would you prefer a human judge or an algorithm to decide your fate? This question was again put to the audience and again rendered an almost 50/50 split. I erred on the human side, on the grounds that humans might have a bit of, well, humanity. The general consensus among those choosing the path of 'algorithm judge' was that it'd be easier to cheat.

But this scenario was of course used to underscore a point. Humans, by our very nature, aren't good at making consistent, rational, unbiased decisions. In fact, we're awful at it. Studies show that in America, a judge is more likely to refuse bail if their local football team recently lost. Truth be told, there is a lot of personal baggage that gets in the way of our judgement. However, my thinking is that surely a judge is trained to put that aside? More so than the average Joe, anyway. But I digress.

On the other side of the coin, algorithms make mistakes – massive mistakes. There was an example of a rape case wherein the accused was 19 years of age and his victim was 14. Due to his young age, he was classed by an algorithm as a 'high risk' offender and subsequently given 18 months in jail. However, had the perpetrator been 36 – 22 years older than the girl in question, which I personally think is worse – the same algorithm would've classed him as 'low risk' and he'd have escaped jail. Presumably the model leans on the statistical pattern that younger offenders reoffend more often, so age dominates the score regardless of the circumstances. Where's the logic?

Algorithms have no sense of context or understanding of the world, so how and why do we put such blind faith in them? And yet we do. Perhaps not quite to this extent, but a Japanese couple on a road trip plugged their intended destination into their trusty sat nav, as you do. Unfortunately, what the couple didn't realise was that the sat nav had decided to direct them as the crow flies, which happened to be straight across a huge body of water. Instead of questioning the technology, the couple drove straight in, submerging their car, nearly drowning and requiring rescue.

The lack of context within algorithms was further cemented by image recognition technology. A photograph of a hilly, green, typical countryside scene was displayed with the caption: 'Herd of cattle grazing on a green field'. There were no cattle in sight. Which raises the question: does AI actually know what a farmyard animal looks like? Did it just manifest imaginary cows? So it was put to the test. A series of images of animals 'out of context' was presented. Those weird goats that climb trees? AI caption: 'Birds in a tree'. A sheep on some stairs? 'Cat on stairs.' It doesn't have a clue.
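One plausible explanation – my gloss, not Fry's – is that captioning systems lean heavily on co-occurrence statistics learned from training data: things in trees are usually birds, things on stairs are usually cats, so the scene prior can simply override the visual evidence. A toy sketch of that failure mode, with entirely made-up numbers:

```python
# Toy illustration of a scene prior overriding visual evidence.
# All probabilities are invented; this is not a real captioning model.

# P(animal | scene): made-up co-occurrence statistics 'learned' from training data.
SCENE_PRIOR = {
    "tree":   {"bird": 0.90, "goat": 0.01, "sheep": 0.02, "cat": 0.07},
    "stairs": {"cat": 0.80, "goat": 0.05, "sheep": 0.05, "bird": 0.10},
}

def caption(scene, visual_evidence):
    """Score each candidate by prior * (weak) visual evidence; pick the best."""
    scores = {
        animal: prior * visual_evidence.get(animal, 0.01)
        for animal, prior in SCENE_PRIOR[scene].items()
    }
    best = max(scores, key=scores.get)
    return f"{best.capitalize()} on {scene}"

# The detector correctly (if weakly) favours the right animal, but the prior wins:
print(caption("tree", {"goat": 0.4, "bird": 0.3}))    # -> Bird on tree
print(caption("stairs", {"sheep": 0.5, "cat": 0.3}))  # -> Cat on stairs
```

Whether or not that is exactly what happened in Fry's examples, it captures the gist: the model is pattern-matching on context, not recognising goats.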

And then there is the utterly ridiculous. Fry showed us a website called 'Faception', which claims to use AI to analyse a face and determine whether or not its owner is a terrorist. What? She also told us that when she recently went appliance shopping, there was a fridge with a sticker stating it was 'AI ready'. What does that even mean?

On the one hand, we have algorithms that manipulate people's trust – the con artists of the technology world. On the other, we have those that hold real promise for society: algorithms that can detect cancer, and algorithms that can aid spectacularly in medical research, producing invaluable data in the process. In medicine, in fact, they can quite literally help predict what the future holds for us.

But the takeaway from the entire talk was that as soon as something is labelled 'AI' or an 'algorithm', it gains an air of authority that, a lot of the time, it quite simply doesn't deserve as a standalone concept. For AI and algorithms to actually aid us, they need us to work in tandem with them, playing to both their strengths and ours. To secure success, AI and algorithms need to be taken off their pedestal and, more importantly, given a human touch.
