Does your AI strategy protect you from these three major risks?

CreateFuture, Nov 01, 2024

Since the next generation of AI hit the market, the headlines have been brimming with high-profile faux pas – from AI providers and AI users alike.

Luckily, this means that other businesses have learnt these lessons so you don’t have to – giving you the information you need to prepare better, evolve your strategy and avoid falling into the same traps.

Let’s take a look at some of the most important lessons we’ve learnt about AI in the last few years.

AI can sometimes invent facts and figures

Examples of AI ‘hallucinations’ – instances where AI models generate nonsensical or incorrect output – are everywhere. Google’s AI-powered search misinterpreted a sarcastic Reddit comment and told users to add 1/8 cup of non-toxic glue to their pizza sauce to stop the cheese sliding off. And Microsoft’s Bing Chat AI made a catastrophic error during its first public demo, providing completely inaccurate summaries of the financial data in clothing brand Lululemon’s annual report.

That’s why ‘human in the loop’ approaches – where humans and AI work hand-in-hand, instead of leaving AI to work unsupervised – are so important.

Without a human to fact-check, those flawed facts, figures and suggestions can work their way into your business decisions, or even your external communications.

What’s the lesson here?

Don’t rely on AI for accuracy. Make sure your people thoroughly check any ‘facts’ a generative AI tool provides. You can even ask your AI tool to provide references for any claims it makes – just make sure you check that a) the source is reputable and b) the AI is interpreting it accurately.
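In practice, you can build that expectation into the way you prompt the model. Below is a minimal sketch, assuming the official OpenAI Python SDK; the model name, system prompt and question are all illustrative, not a recommended configuration.

```python
# A minimal sketch of asking a model to cite sources for every claim.
# Assumes the official OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model your plan includes
    messages=[
        {
            "role": "system",
            "content": (
                "Cite a source (title and URL) for every factual claim you "
                "make. If you are unsure, say so rather than guessing."
            ),
        },
        {"role": "user", "content": "Summarise this company's annual results."},
    ],
)

print(response.choices[0].message.content)
```

Bear in mind that the citations themselves can be hallucinated, so a human still needs to open each source and confirm it says what the model claims it says.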

AI tools might be storing your data

Large language models (LLMs) like ChatGPT and Meta’s Llama are constantly being trained and improved by their makers.

And how do they train them? Some base their training partly on the data you enter into their chat windows. For a business, that could mean anything from financial figures to sensitive HR information.

And that data doesn’t just sit in a vault somewhere. It’s actively used to shape the model’s outputs – which means that, one way or another, your data could work its way into others’ hands.

Not to mention that many of these companies didn’t prepare their security for the large volumes of sensitive data they’re now storing – OpenAI, for example, has suffered multiple data breaches, some of which exposed customer information.

Crucially, most free LLMs use your prompts and the responses they generate for training. If you want to keep your data out of a training dataset, you’ll usually need to upgrade to the paid business or enterprise version of tools like Google’s Gemini or ChatGPT. But if you’re not prepared, it’s still easy for your sensitive internal data to make its way into a database you never even knew existed.

What’s the lesson here?

If you’re using an LLM, double-check how your data is being used and give your people strict instructions about what they can – and cannot – share. Or, better yet, invest in a centrally managed enterprise AI tool that guarantees it won’t store your prompts or use them for training.

AI can magnify bias and put your reputation at risk

AI tools are built on human data. Which means their training data contains our best insight, empathy and understanding.

But they also contain the worst of our biases.

That’s why, in 2023, the EU Commissioner for Competition, Margrethe Vestager, said that AI’s potential to amplify bias or discrimination is a significant cause for concern.

Imagine that – like 42% of companies surveyed by IBM – a company starts using AI in its recruiting and HR processes.

That might mean that, for example, the company begins using AI to review CVs for a software development role. 

At many businesses, the AI tool could easily look at previous hiring and pay-review data and decide that hiring a man is always preferable to hiring a woman. That bias in the company’s hiring might not have been visible to the human eye – but the AI picks up on it right away and assumes it’s best practice.

Even a slight bias can become magnified into a codified way of working for your AI tool. If this happens, it’s not entirely the fault of your data – bias can easily be encoded into the model from day one – but it can still have a catastrophic impact on your processes and reputation.
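To make that concrete, here’s a minimal sketch using entirely synthetic data: a simple scikit-learn model trained on a biased hiring history quietly reproduces that bias. Every feature and number here is invented for illustration – this is not a real recruitment system.

```python
# A minimal sketch of how a model trained on biased hiring history
# reproduces that bias. All data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)           # genuinely job-relevant signal
gender = rng.integers(0, 2, size=n)  # 1 = man, 0 = woman

# Historical decisions: skill mattered, but past reviewers also favoured men.
hired = (skill + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0.8

# Train on the biased history, with gender available as a feature.
model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two equally skilled candidates now get very different scores.
print(model.predict_proba([[0.5, 1]])[0, 1])  # man: higher
print(model.predict_proba([[0.5, 0]])[0, 1])  # woman: lower
```

And simply deleting the gender column often isn’t enough, because other features – career gaps, hobbies, even postcodes – can act as proxies for it.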

What’s the lesson here?

When using any kind of LLM – whether it’s open- or closed-source – make sure you’re constantly guarding against bias.

This will partly depend on old-fashioned critical thinking. You’ll need to be on the lookout for any signs that bias is creeping in. This might mean creating policies and frameworks to help you regularly check your outputs, using input from a diverse range of people. You can also run post-hoc analysis on how AI recommendations have shaped your strategy.
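One simple post-hoc check is to compare selection rates across groups in your AI tool’s output. Here’s a minimal sketch with invented numbers, using the ‘four-fifths rule’ – a common heuristic for spotting disparate impact – to trigger a manual review.

```python
# A minimal sketch of a selection-rate check on AI screening decisions.
# The decisions below are invented; the 80% threshold is the common
# 'four-fifths rule' heuristic for disparate impact.
from collections import Counter

# (group, shortlisted?) pairs, as an AI CV screener might produce them.
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

shortlisted = Counter(group for group, ok in decisions if ok)
totals = Counter(group for group, _ in decisions)
rates = {group: shortlisted[group] / totals[group] for group in totals}
print(rates)  # {'men': 0.75, 'women': 0.25}

# Flag any group selected at under 80% of the best-performing group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Review needed: {group} selected at {rate:.0%} vs {best:.0%}")
```

A check like this won’t tell you why the gap exists – but it tells you exactly where a human needs to look.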

Some companies even ask their AI tools to check themselves for bias. AI-based tools that are specifically designed to uncover bias in AI outputs are becoming more common, too.

AI can be a powerful tool. Let’s treat it like one.

AI is changing everything and the size of its impact is only growing.

Which means we need to take it seriously.

That might not sound groundbreaking. But a surprising number of companies find themselves sliding into AI adoption – or even jumping head first into it – without ever really understanding the risks.

In reality, AI needs to be approached just like any other technology: with a healthy respect for its weaknesses, as well as its strengths.

Of course, this isn’t a ‘one and done’ job. These risks might be the most prevalent now, but AI technology and best practices are evolving constantly. To make sure AI delivers the benefits you’re looking for and avoids the risks, you’ll need to hold on to a ‘continuous learning’ mindset that keeps you up to date on the latest advancements and best practices.

That can be hard work, but it pays off. Because, once you understand the risks, you can avoid them.