The History and Background of AI

Blog
01.11.23

Artificial Intelligence has recently become ingrained in the wider public consciousness, having been publicised and hyped in the mainstream media. The latest incarnation showcases Large Language Models (LLMs), most widely recognised in the popular application ChatGPT (the Chat Generative Pre-trained Transformer), which we will examine later.

Artificial Intelligence news headlines usually come in two flavours. The first is fear-based, predicting the ‘near future’ eradication or obsolescence of certain jobs and, taken to the point of absurdity, the eradication or obsolescence of humans themselves. The other strikes a more optimistic tone, in which AI integrates harmoniously, increases productivity and releases people from dull, repetitive tasks… though not the elusive creative ones.


Consciousness

Before we examine Artificial Intelligence, we must first start with consciousness. I will tread very carefully here, because far greater philosophers have failed to define or explain what consciousness is, let alone demonstrate that it even exists, so this short section won’t even attempt to scratch the surface of this very complex metaphysical area. But I do want to use the idea of consciousness as a benchmark, to demonstrate how limited, or dare I say disappointing, the current iteration of AI is compared to human consciousness and intelligence, or even compared to something less conscious and as ‘simple’ as a plant.

Humans are self-aware, or as Descartes stated, ‘I think, therefore I am’ (or should it have been ‘I am, therefore I think’?). Debates aside, it is this knowledge of self that makes us different from any machine or inert matter, which computers of course are. To some degree, we as reasoning humans all have free will, and can, and do, make choices; mostly assumed to be rational or reasonable at the time, some less reasonable after reflection or hindsight (or a hangover). Computers in their current form are not conscious; they have no free will and can only execute commands from a set of instructions created by the programmer.

Since the late 1950s computers have certainly become a lot faster and smaller, but their inherent architecture hasn’t really changed, even with the advent of AI and quantum computers; the latter are cumbersome, take many hours to set up for even one herculean computational task, and require specialised cryogenically cooled environments to keep their delicate qubits stable.


A brief history

The idea of ‘Artificial Intelligence’ is certainly not new; it has been around for as long as humans have dreamed of mimicking and surpassing their own almost limitless capabilities. From the Ancient Greek legend of Talos, the bronze man built by Hephaestus, the Greek god of invention, to the fictional story of Frankenstein, to Hollywood films such as Fritz Lang’s 1927 masterpiece Metropolis, Stanley Kubrick’s 2001: A Space Odyssey with the paranoid ‘HAL’ (or should that be ‘IBM’?) stuck in a Möbius loop, Philip K. Dick’s Blade Runner Replicants, and the Spielberg film AI; not to mention the growing genre of dystopian science fiction where ‘technology’ in the wrong hands is used for malevolence.

Hollywood films

Science fiction has influenced and warped our common-sense understanding of what Artificial Intelligence is… what it can be… and, more importantly, what it can never be. I would argue that Hollywood has implanted intellectual blind spots in people who would otherwise act rationally and debunk a lot of the sensational and speculative nonsense. They would realise that the majority of these fears are irrational, not built on a sound foundation of understanding, and end up sounding more like conspiracy theories. We will examine some of these fears, but before we do, let’s start by clarifying the terms ‘Artificial’ and ‘Intelligence’.


Definitions

The Oxford English Dictionary definition of Artificial:

Adjective: ‘made or produced by human beings rather than occurring naturally, especially as a copy of something natural.’

The Oxford English Dictionary definition of Intelligence:

Noun: ‘The ability to acquire and apply knowledge and skills.’

The above definition of ‘Artificial’ gets to the heart of the AI confusion. Humans are the creators, operators or ‘prime movers’ of anything artificial. The word Artificial has its root in the word ‘Art’, which is mankind’s effort at imitating nature. For those who believe in a divine creator, the idea that a creator could create a better version of themselves is similar to the idea that man could create something more intelligent than him or herself; this would of course lead to an impossible infinite loop. Humanity has natural boundaries, limits or constraints, similar to the laws of nature, that have to be respected and cannot be broken. Humans can invent machines that fly, but cannot themselves fly. They can invent machines that aid us in thinking, but cannot create machines that think for themselves. Humans may not yet have met the limits of their creativity, but those limits do and must exist. Nature works in cycles, not in linear, infinite growth models. We are seeing this confusion play out in the West in many spheres at the moment, as we try to outwit the laws and constraints of nature… and fail.


The Turing Test

Part of the misplaced fear of AI among the general public is the uncanny ability of modern AI, including chatbots, to fool the end user into thinking the algorithm on the other end of the interaction is another human. Yet outside its target sector or conversational area, AI can often be tripped up, with comic or sometimes embarrassingly erroneous results for the companies involved.

This reminds me of the scene in the James Bond film ‘From Russia with Love’, where the Russian double agent pretends to be an English intelligence officer but mistakenly gives himself away by committing the cardinal sin of ordering red wine with a fish main course. This alerts the impeccably mannered James Bond to the double agent, and the game is given away. Similar tactics can be employed against chatbots and Large Language Model applications like ChatGPT.


More recently…

For those who are a bit greyer and older, ‘Deep Blue’ was one of the first computer-generated moral panics. In 1997 the computer giant IBM, at the peak of its powers, pitted ‘Deep Blue’, a chess-playing computer, against the world chess champion Garry Kasparov. The six-game match ended with two wins for IBM, one for the champion and three draws. The match lasted several days and received massive media coverage around the world. It was the classic plot line of man versus machine. The problem with this narrative is that, behind the scenes, there was a team of computer engineers and former chess champions helping with the algorithm. So what was billed as man versus machine was actually the best chess player in the world against teams of chess players, engineers and computing power. It’s actually incredible that Garry Kasparov did as well as he did in this very one-sided competition.

It’s interesting to note, and not to forget(!), that after the competition Garry Kasparov may have driven his car home, made a sandwich, read a novel, or pondered his own existence… something that Deep Blue could not do, even with a team of engineers and IBM’s vast computing power.


The growing prominence of Artificial Intelligence, particularly Large Language Models, has left an indelible mark on societal discussions. From ancient myths to Hollywood portrayals, humanity’s urge to replicate or exceed its abilities is longstanding. However, while AI can mimic specific human tasks, it remains far from human consciousness. It’s essential to differentiate between an AI’s ability to emulate tasks and true human cognition. As we explore this AI age, understanding and dispelling myths becomes crucial, recognising AI as a testament to human innovation, not its replacement.



Written by Daniel Wright