In today’s age of digital innovation, Artificial Intelligence (AI) is hailed as the crown jewel of technological advancement. It’s often associated with futuristic aspirations and sometimes even apocalyptic fears. But as we stand in awe or tremble in apprehension, it’s crucial to ask: how well do those driving the AI narrative truly understand its intricacies? The titans of tech, for all their business acumen, often lack a deep-rooted understanding of the code and algorithms that underpin AI. This blog delves into the profiles of tech moguls, the real nature of AI, and attempts to demystify some of the misconceptions around this transformative technology. Dive in, as we separate fact from fiction and explore the world of AI through the eyes of an informed skeptic.
I wonder how many of the suited executives, pundits or ‘visionaries’ of some of the large technology companies, who promote AI and, more importantly, speculate as to its future potential, have actually been software engineers or statistical modellers, or have even programmed a computer?
Let’s take a casual glance at some of these executives:
- Bill Gates – A far better businessman and licensing lawyer than coder. Yes, he was involved in MS-DOS, Microsoft’s first, primitive operating system, but by his own admission it was purchased from another company, Seattle Computer Products, as QDOS and then licensed to IBM – a very shrewd business move!
- Steve Jobs of Apple – Never coded, but was a visionary designer
- Larry Ellison of Oracle – another shrewd businessman who never coded
- Elon Musk of Tesla – an average coder by his own admission, but a great promoter, opportunist, showman and businessman
Do you see the pattern? Most of these now middle-aged executives had very little exposure to machine-learning statistical models, or to the kind of technology developed and used in AI today. They are akin to the racing driver who doesn’t really understand the engine management system, but tries to get the most out of it.
Have any of these advocates ever really pondered the problems that computer engineers attempt to solve today? Writing code is a serious discipline that takes many thousands of hours to become accomplished in. Experienced programmers are fully aware that even relatively ‘simple’ programs can go hopelessly ‘wrong’ or, more accurately, not behave as actually intended. Software bugs can lead to all sorts of unexpected outcomes and may require rigorous rounds of testing – at the unit, functional and user acceptance levels – before being rolled out into production.
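To make the point concrete, here is a toy illustration (entirely hypothetical, not from any real system) of how even a trivial program can quietly do something other than what was intended, and how a simple unit test exposes it:

```python
def total(values):
    """Intended to sum a list of numbers, but contains a subtle off-by-one bug."""
    result = 0
    for i in range(1, len(values)):  # bug: should be range(0, len(values))
        result += values[i]
    return result

# A unit test makes the unintended behaviour obvious long before production:
assert total([10, 20, 30]) == 50  # the first element is silently skipped
assert total([10, 20, 30]) != 60  # ...so the 'obvious' answer never appears
```

The program runs without any error message at all; only deliberate testing reveals that it is not doing what its author meant.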
Garbage in…garbage out
Whatever goes into an algorithm – an open-ended question, say – the result may not come out as expected. Programmes that attempt to answer questions of increasing complexity have a far greater likelihood of not performing as intended. Multiply this complexity by all of the other things that a human can do, or even by something as ‘simple’ as what an ant or a plant can do, and AI is nowhere near close, even today. I’m not knocking AI, but to worry about it taking over the world any time soon is plainly ridiculous.
Free Will and Agency?
Computers have what is called a BIOS, which stands for Basic Input Output System (on modern machines, including Apple Macs, its successor is called UEFI). I could end this article here, as this gives the game away that computers really have no agency or intelligence at all, and it is indeed ‘artificial’ at best. When a computer is switched on – either by a human, or by a program that was created by a human, which again needs to be told what to do by a human… you can see where I’m going here – the BIOS executes a standard set of procedural commands that checks and turns on, in a logical order, all the vital systems the machine needs, and then hands over to the software that manages everything else: the operating system.
MS-DOS was Microsoft’s early operating system; Apple’s equivalents today are macOS and iOS. Once these procedures have all been performed, including making sure all the input devices – keyboard, mouse, screen and so on – are up and running, it finally goes into a wait loop… it waits… and it waits… and it waits, until the intelligent human commands or wills it to do something. In terms of free will, this is ‘as good as it gets’ for any computer, and AI systems such as ChatGPT are no different.
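The boot-then-wait behaviour described above can be caricatured in a few lines. This is a deliberately crude sketch, not real firmware code – the point is simply that nothing happens until a human supplies a command:

```python
def boot_and_wait(user_commands):
    """Toy sketch of a start-up sequence followed by a wait loop (illustrative only)."""
    log = []
    for device in ("keyboard", "mouse", "screen"):  # start-up checks, in a fixed order
        log.append(f"check {device}: OK")
    log.append("waiting for input...")              # the machine now simply waits
    for cmd in user_commands:                       # it acts only when told to
        log.append(f"executing: {cmd}")
    return log

print(boot_and_wait([]))        # with no human input, the last entry is the wait
print(boot_and_wait(["dir"]))   # a command appears only because a human issued it
```

However long the wait loop runs, the machine originates nothing of its own.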
All Artificial Intelligence machines simply run computer programmes or algorithms:
The Oxford English Dictionary definition of Algorithm:
Noun: ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.’
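A classic textbook illustration of that definition (my example, not the dictionary’s) is Euclid’s algorithm: a fixed rule, followed mechanically, step after step, until a stopping condition is met:

```python
def gcd(a, b):
    """Euclid's algorithm: repeat one fixed rule until the remainder is zero."""
    while b:                 # the rule is applied blindly, with no 'understanding'
        a, b = b, a % b      # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 36))  # 12
```

Every computer program, however sophisticated, is ultimately built from rule-following of exactly this kind.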
Large Language Models
LLM algorithms usually consist of vast databases containing structured language rules and word groupings, or word associations. These databases have taken thousands of computer scientists and engineers many years to build, and they are continuously being tuned and refined today.
OpenAI, founded in 2015, built its LLMs by scouring, structuring, organising, grouping and ‘tokenising’ – breaking up – sentences, paragraphs and large chunks of text from the internet, then classifying and storing them in a large indexed language database. The algorithms are programmed to identify nouns, verbs, prepositions, pronouns and all the other grammatical categories you learned at school but have since forgotten, and to ‘learn’ which other words are commonly associated with them – like a more complex, predictive version of a spell checker.
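The ‘predictive spell checker’ idea can be sketched with a toy word-association model. This is a drastic simplification of a real LLM – just bigram counts over a made-up scrap of training text – but it shows the basic mechanism of predicting the next word from what most commonly followed it before:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows each word in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower: pure association, no understanding."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat' -- it follows 'the' most often
```

Real models use vastly more data and far richer statistics, but the flavour is the same: the output is driven by what has been seen before.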
The company recently released a free public version of ChatGPT, and in doing so let the unwitting general public test it. Those testers found plenty of problems and quirks – ‘edge cases’, as they are known in software user acceptance testing. The likelihood of these algorithms ever doing anything other than what they have been programmed to do is about the same as that of a tornado assembling a jumbo jet from a junkyard pile of debris.
Everything generated or output by these models eventually tends to the mean – to mediocrity. The statistical models, by design, have to be safe and ‘confident’ that what they are sending to the user is ‘as expected’, or has gone before. That is what statistical confidence models do: they predict the likelihood of something happening by finding consensus, based on the most voluminous results that share key attributes, not the likelihood of something unexpected happening. For example, if you ask the AI model a question, it will look at all of the candidate answers in its database and weigh their volume, count and popularity. The same is true of generated images or music. The consensus view is what is returned to the user, after it has been filtered and sense-checked, or ‘verified’, by further databases and algorithmic rules.
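The consensus behaviour described above can be caricatured in a few lines – a deliberately crude sketch with made-up data, not how any production model actually scores its answers:

```python
from collections import Counter

def consensus_answer(candidate_answers):
    """Return the most frequent candidate plus a naive 'confidence' score."""
    counts = Counter(candidate_answers)
    answer, votes = counts.most_common(1)[0]
    confidence = votes / len(candidate_answers)
    return answer, confidence

answers = ["Paris", "Paris", "Paris", "Lyon"]
print(consensus_answer(answers))  # ('Paris', 0.75) -- the popular answer wins
```

By construction, a rare or surprising answer can never beat the popular one – which is exactly the tendency toward the mean argued for here.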
So, to conclude: can AI be truly creative or inventive, conceiving a new idea from scratch? The answer, sadly, is no. AI can use existing media, originally created by humans, that are referenced in its database. It can construct differing versions of an original idea by merging, aggregating, summarising, filtering or refining – but not by originating.
Optimism, pessimism or realism?
I hope this article has dispelled some of the hype, mystique and plain old nonsense around AI. I also hope that it hasn’t disappointed you. The future of technology is bright. I am optimistic about some of the great uses for AI: how it can help us, make life less tedious, and free a lot of people from boring and repetitive tasks.
Is AI creative? No. Can it help humans become more creative? In some ways, yes. Can it be a force for evil? Yes – we are already seeing a lot of search engine results becoming filtered, manipulated and force-ranked, so that popular opinions are censored and only ‘allowed’ views make it through to the results. So a balance, as always, must be struck between the benefits and pitfalls of a benign technology that can become malevolent in the wrong hands – no different from the artisan’s hammer or the chef’s knives. In the next article we will examine some of the current and future uses for Artificial Intelligence.