Hype comes before a fall: Navigating new AI and its potential security pitfalls

News | Artificial Intelligence
Jeff Watkins | Jan 31, 2025

The starting pistol has been fired on a global AI race this week, supposedly ‘shocking’ global markets that thought the US had got this one sewn up. That’s right: China has launched its very own ChatGPT equivalent in the form of DeepSeek.

Although there’s been a groundswell of AI platforms coming to market over the last couple of years, all offering to help us with the most niche of tasks, this latest entrant has ruffled a few Silicon Valley feathers.

Not only does DeepSeek claim that its R1 model was developed at a fraction of the cost of its main US equivalents, but also that its models require less computational power, relying on fewer and less advanced chips than those used by US tech giants. This efficiency purportedly reduces dependency on high-end hardware and lowers operational costs.

In addition, DeepSeek runs on an open-source basis, meaning developers worldwide can access, modify, and build upon its technology, promoting broader collaboration and innovation. In the words of Microsoft CEO Satya Nadella, “super impressive.”

So, is this guileless blue killer whale that’s making a BIG splash in the vast ocean of super AI platforms all it’s cracked up to be? Should companies risk chasing the hype whilst exposing their data to be chomped on like fresh baby seals?

Anthropic CEO Dario Amodei doesn’t think so. Amodei has downplayed DeepSeek’s capabilities, arguing that its models are not necessarily competitive with leading Western AI systems and warning against overhyping their significance.

What’s more, in recent days, reports have revealed that DeepSeek has left a critical database exposed on the internet, compromising over a million records, including user prompts and API authentication tokens. As Wired reported, this is not just a failure of AI ethics, but a fundamental cybersecurity lapse.

While organisations rush to integrate the latest AI tools, critical questions surrounding this new kid on the block serve as a stark reminder: hype should never come before security.

The security risks organisations are ignoring

AI adoption often moves faster than the security measures needed to protect it. Many organisations assume that if an AI model is functional, it’s also safe. But in reality, AI security is still an underdeveloped field, with many risks being overlooked.

In the case of DeepSeek, the exposed database wasn’t the result of a sophisticated exploit; it was a basic security failure. This suggests a wider issue: AI platforms are being launched at speed, often without adequate security testing.
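To make that concrete, here is a minimal sketch of the kind of basic check that catches this class of failure: a database HTTP interface that answers queries with no credentials at all. The host, port, and query here are illustrative assumptions, not details of DeepSeek’s actual deployment.

```python
import urllib.request

# Minimal sketch: check whether a database's HTTP interface answers queries
# with no credentials. Host and port are hypothetical placeholders, not
# details of any real deployment; 8123 is the default ClickHouse HTTP port.
HOST = "db.example.com"
PORT = 8123

def is_openly_queryable(host: str, port: int) -> bool:
    """Return True if the endpoint executes a query without authentication."""
    url = f"http://{host}:{port}/?query=SHOW%20DATABASES"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            # A 200 response with a body means anyone on the internet can
            # read the data behind this endpoint.
            return resp.status == 200
    except Exception:
        return False  # connection refused, auth required, or timed out

if __name__ == "__main__":
    if is_openly_queryable(HOST, PORT):
        print("WARNING: database answers queries with no authentication")
```

If a script this simple can find the problem, so can an attacker scanning the internet.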

Beyond accidental data exposure, other risks include:

  • Lack of encryption: Sensitive AI interactions could be intercepted if not properly secured (see the verification sketch after this list).

  • Unverified data storage locations: Where is your data actually being processed and stored? For some platforms, this remains unclear.

  • Geopolitical concerns: The origins of certain AI platforms could introduce regulatory and compliance risks, especially regarding data privacy laws.
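On the first of those risks, encryption in transit is at least straightforward to check from the outside. The sketch below, using only Python’s standard library, reports which TLS version and cipher a provider’s API endpoint negotiates; the hostname is a hypothetical placeholder, not a real provider.

```python
import socket
import ssl

# Minimal sketch: verify that an AI provider's API endpoint negotiates a
# modern TLS version. The hostname is a hypothetical placeholder.
HOST = "api.example-ai.com"
PORT = 443

def tls_details(host: str, port: int) -> tuple[str, str]:
    """Connect and return the negotiated TLS version and cipher suite."""
    ctx = ssl.create_default_context()  # also verifies cert and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, _protocol, _bits = tls.cipher()
            return tls.version(), cipher_name

version, cipher = tls_details(HOST, PORT)
assert version in ("TLSv1.2", "TLSv1.3"), f"weak protocol: {version}"
print(f"{HOST} negotiated {version} with {cipher}")
```

Note that this only covers data in transit; encryption at rest and data residency (the second bullet) still need answers from the provider itself.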

Why AI security isn’t just an IT problem

A common misconception is that AI security is the sole responsibility of IT teams. In reality, it’s a strategic risk that should be addressed at the executive level. The rapid deployment of AI without a clear risk mitigation strategy could result in regulatory fines, reputational damage, and even legal action.

To avoid this, organisations should adopt a proactive approach by asking the right questions before integrating any new AI tool (see the sketch after this list):

  • Where is my data going? Ensure transparency in data storage and processing.

  • What security measures are in place? Verify encryption, access controls, and compliance with industry standards.

  • Who is responsible for AI security? Create cross-functional teams that include cybersecurity experts, legal advisors, and AI specialists.
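These questions carry more weight when the answers are recorded rather than just discussed. As a purely illustrative sketch (the field names and vendor are assumptions, not any industry standard), the pre-integration questions could be encoded as a structured assessment so unanswered ones surface as explicit gaps:

```python
from dataclasses import dataclass, field

# Illustrative sketch: record the pre-integration questions as structured
# data so unanswered ones surface as explicit gaps. All names here are
# hypothetical, not an industry standard.
@dataclass
class AIVendorAssessment:
    vendor: str
    data_storage_regions: list[str]   # Where is my data going?
    encryption_in_transit: bool       # What security measures are in place?
    encryption_at_rest: bool
    certifications: list[str] = field(default_factory=list)  # e.g. SOC 2
    security_owner: str = ""          # Who is responsible for AI security?

    def gaps(self) -> list[str]:
        issues = []
        if not self.data_storage_regions:
            issues.append("data residency unknown")
        if not (self.encryption_in_transit and self.encryption_at_rest):
            issues.append("encryption guarantees incomplete")
        if not self.security_owner:
            issues.append("no named security owner")
        return issues

# Usage: an assessment with unanswered questions flags them as gaps.
assessment = AIVendorAssessment(
    vendor="ExampleAI",
    data_storage_regions=[],          # unanswered
    encryption_in_transit=True,
    encryption_at_rest=False,
)
print(assessment.gaps())
# ['data residency unknown', 'encryption guarantees incomplete',
#  'no named security owner']
```

The point is not the tooling; it is that an unanswered question becomes a visible blocker rather than a silent assumption.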

Balancing AI adoption with security

Innovation should not come at the expense of security. While AI presents significant opportunities, it must be adopted responsibly. DeepSeek’s reported security lapse is just one example of what can go wrong when organisations prioritise speed over safety.

Before you jump on the next AI trend, take a step back. Ensure your security policies are aligned with your AI strategy, and don’t assume that every AI provider has done the work for you.

Because when security fails, it’s not just data that’s at risk - it’s trust, reputation, and the very foundation of digital transformation itself.