AI: friend or foe?

Blog
04.08.22

We recently invited members of our tech-savvy, intellectual community – The Tech Society – to share their experiences with AI, question whether it is “the best or worst thing to happen to humanity”, and ponder what the future of AI really is.

Artificial intelligence (AI). The latest industry buzzword or a Hollywood plotline? In truth, AI is technology we use every day – perhaps without even realising it. From voice assistants to streaming apps, personalised marketing, facial recognition and smart input keyboards, useful AI applications are already all around us. In this article, we reflect on the discussions of The Tech Society to ascertain the impact AI will have on our world.

AI brings a faster pace of change

The scariest thought about AI is that robots will take over and leave humans redundant. But the chances are its impact will be no greater than that of past changes we have endured. From the shift from horse-drawn carriages to motorcars to the replacement of manual labour with farm machinery, the only thing AI is destined to do is increase the pace of change.

As with most macroeconomic challenges, some organisations will struggle to thrive when faced with AI, but the vast majority will benefit from productivity gains. There will naturally be certain tasks that are better suited to the automation AI can provide. But that’s ok. Humans can take on a different role, which is uniquely ours: creativity.

In the music industry, artists like Arca, Holly Herndon and Toro y Moi use AI to push their music in new and unexpected directions. But this isn’t AI showing creativity – it’s programmatic. Artists feed music samples into the AI technology, but the output isn’t ‘new’. The technology has simply repurposed what already existed to create a different outcome. 

Creativity is like art – it's subjective, inspired, and comes out of the blue. It produces something that has never existed before, the result of a biological brain process that machines cannot replicate – meaning it's exclusive to humans.

Today, AI is better suited to use cases where it boosts performance within clear limits. We see it a lot in the world of F1, where vast data sets are fed into AI systems programmed with well-defined constraints. The outcome is a new way to fine-tune the car's performance based on the system's calculations.
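
For a flavour of what "optimisation within well-defined constraints" means, here is a deliberately toy sketch. Nothing below comes from a real team: the lap-time model, the setup parameters and every number are invented purely for illustration.

```python
# Toy example: tune two invented setup parameters to minimise a made-up
# lap-time model, within rule-book-style bounds. Illustrative only.
from scipy.optimize import minimize

def predicted_lap_time(setup):
    """Invented surrogate model mapping car setup to lap time (seconds)."""
    wing_angle, ride_height = setup
    # More wing adds cornering grip but costs straight-line speed;
    # a lower car is faster until it risks bottoming out.
    return 90.0 + 0.02 * (wing_angle - 12.0) ** 2 + 0.05 * (ride_height - 30.0) ** 2

# The well-defined constraints: the optimiser may only search inside these.
bounds = [(5.0, 25.0),   # wing angle, degrees
          (25.0, 45.0)]  # ride height, millimetres

result = minimize(predicted_lap_time, x0=[15.0, 35.0], bounds=bounds)
wing, ride = result.x
print(f"Suggested setup: wing {wing:.1f} deg, ride height {ride:.1f} mm")
print(f"Predicted lap time: {result.fun:.3f} s")
```

The real systems are vastly more sophisticated, but the shape is the same: a model, a goal, and hard boundaries the optimiser cannot cross.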

Be realistic about AI capabilities

There are four types of AI:

  • Reactive machines – respond to the present input only, with no memory of the past (think chess engines)
  • Limited memory – use historical data to inform decisions (think self-driving cars)
  • Theory of mind – would understand the emotions, beliefs and intentions of others; still theoretical
  • Self-awareness – a hypothetical machine consciousness

Unless you are tech-savvy and notice AI everywhere in the world around you, your greatest exposure to the technology is probably through Hollywood's interpretation – as seen in films like The Terminator, Blade Runner, The Matrix, Ex Machina, and I, Robot. But this is AI at one extreme of the spectrum: self-awareness.

Certain engineers may argue we have already hit the point where AI is sentient. But the reality is we're still much closer to the other end of the spectrum – programmatic, limited-memory systems such as Tesla's self-driving cars.

It’s important to focus on where AI makes sense now and make it the best it can be. Take chatbots as an example. They are designed to let customers self-serve, or to direct them towards the right team for help. But where a lot of chatbots fail is that they haven’t been built to understand natural language. This makes them no more useful than a person in a call centre who is constrained to a script. With a little more time and effort, however, it’s possible to tweak the chatbot to prevent it from becoming a barrier to a great customer experience.
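
To make that concrete, here is a deliberately minimal sketch – not any real product's code – contrasting a script-bound bot with one that matches the user's intent on keywords. The intents, phrases and responses are all invented for illustration:

```python
import re

# Invented scripted phrases and answers for this sketch.
SCRIPTED = {
    "track my order": "Your order status is under My Account > Orders.",
    "reset my password": "Use the 'Forgot password' link on the login page.",
}

# Keyword-based intent matching: a small step towards handling natural
# language instead of demanding an exact scripted phrase.
INTENT_KEYWORDS = {
    "order_status": {"order", "delivery", "parcel", "shipped", "tracking"},
    "password_reset": {"password", "login", "locked", "reset"},
}
RESPONSES = {
    "order_status": "Your order status is under My Account > Orders.",
    "password_reset": "Use the 'Forgot password' link on the login page.",
}

def scripted_reply(message: str) -> str:
    # Fails unless the user types the exact phrase the script expects.
    return SCRIPTED.get(message.lower().strip(), "Sorry, I don't understand.")

def intent_reply(message: str) -> str:
    # Tokenise loosely, then look for any overlap with a known intent.
    words = set(re.findall(r"[a-z]+", message.lower()))
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return RESPONSES[intent]
    return "Let me put you through to a human agent."

print(scripted_reply("Where is my parcel?"))  # Sorry, I don't understand.
print(intent_reply("Where is my parcel?"))    # Order status answer.
```

A production bot would use a trained natural-language model rather than keyword sets, but the failure mode is the same: if the matching is too rigid, the bot is just a script with a text box.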

It’s our fault if AI is prejudiced

When the Apple iPhone X launched, there were reports that its AI-powered facial recognition struggled to tell some Chinese users apart – reportedly because it had been tested predominantly on white faces.

Bias is one of the biggest challenges to overcome with AI technologies, because the potential exists to inadvertently program it in. Consider that most IT teams around the world are male-dominated. That shouldn’t affect the outcome – nobody is consciously setting out to build something for men only – but subconsciously teams can fail to think through the problem in its entirety. For example, Apple’s Health application initially failed to account for women and didn’t allow reproductive health to be tracked.

To limit prejudice in AI, we need to start by creating more diverse teams to build the technology, and then test it with more diverse audiences.
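
One hedged sketch of what that testing step can look like in practice: rather than reporting a single headline accuracy figure, break results down per group, so a subgroup the system barely works for becomes visible. The groups and numbers below are invented.

```python
# Minimal sketch of "testing with more diverse audiences": evaluate a
# model's accuracy per demographic group rather than in aggregate.
# All data here is invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, prediction, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Invented test results: a headline accuracy of ~63% would hide how
# badly the system serves group_b.
results = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

for group, acc in accuracy_by_group(results).items():
    print(f"{group}: {acc:.0%} accurate")
# group_a: 100% accurate
# group_b: 25% accurate
```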

Furthermore, we need to flip the script. Technology is always pushed out the door as quickly as possible because teams are measured by outputs instead of the quality of the outcome. In an attempt to hit their KPIs, important aspects may be missed or overlooked – like checking you’ve included the health requirements for 50% of your audience. If we shift the thinking to focus on outcomes it’s likely we’d end up with a better result. And if we monitor the outcomes to identify instances when something isn’t right, it’s possible to go back and understand how the tech is working to fix the unconscious bias.

The scariest thing about AI…

Most AI technology development is driven by economic greed. According to Gartner, a third of technology providers plan to invest $1m+ in AI over the next two years, to create new products and services that expand their customer base and generate new revenue. 

It doesn’t seem right that sectors like healthcare are lagging behind in AI adoption when the technology has so much potential to do good. Already, it’s started to create a tech divide between the ‘innovators’ who are more tech savvy, and the ‘laggards’ who aren’t. Long-term, this could have devastating consequences for people who don’t (or can’t) keep pace with change. Ideally we need to see more research and social policies that drive equality to ensure AI is made accessible to all, and help solve wider societal issues.

Good AI starts with good ethics

For AI to be a positive force in the world, we need to assign responsibility. However, the software industry as a whole has a bad reputation for not taking responsibility for its actions. Social media is notoriously bad for mental health and wellbeing – studies show a correlation between its usage and disrupted sleep, depression, and poor academic performance. The providers know their social platforms are ‘toxic’, but refuse to address the issue. 

But what happens when the AI doesn’t work as intended and someone gets hurt? Who do we hold to account – the company that owns the technology’s IP? The programmer who wrote the code? Do we pass the blame on to the user?

Eventually, the lawmakers will get involved (hopefully with A LOT of industry input) to regulate the use of AI and ensure its safety. For now, responsibility sits collectively with the organisation building the product. From the CEO down to the people who clean the office at the end of the day, it is a team effort. For AI to become the best thing to happen to humanity, it needs to start with good people – a diverse team with strong ethics, and a courageous leader who remains focused on the outcome. Only then will AI be used to solve problems that raise the quality of life for all.

Want to get involved in The Tech Society?

At S&S we believe in the power of community, which is why we have established specialist groups in The Tech Society – to share best practice and learn from our peers. Our events are by invitation only, but if you’d like to find out how to join, please get in touch.