A recent study by digital transformation firm ANS suggests that UK organisations may be overestimating their ability to secure artificial intelligence systems. ANS, Microsoft UK Partner of the Year, surveyed more than 2,000 senior IT decision-makers to understand the disconnect between confidence and practical action in AI security.
The study found that 85% of organisations assert that their investments are sufficient to support the safe adoption of AI, yet few take the critical steps to genuinely secure these systems. Only 42% of respondents reported that security is proactively incorporated into AI initiatives, and merely 37% regard it as a priority during implementation. This indicates a reactive approach, with organisations addressing AI security concerns only after systems are deployed or risks become evident.
This implementation gap persists despite consistent cybersecurity investment. The survey found that more than half of organisations direct 11% to 30% of their IT budgets to security, although this spending rarely targets risks specific to AI systems.
Only 39% of decision-makers intend to invest in securing AI model and algorithm training over the next three years, yet these capabilities are essential for mitigating threats such as model poisoning, data leakage, and the manipulation of AI outputs.
Employee training in the secure use of AI also appears limited: only 34% of organisations plan to upskill staff, which is particularly concerning given that employees are often the most vulnerable entry point for cyber attackers.
Kyle Hill, Chief Technology Officer at ANS, remarked on the issue: "AI is transforming how organisations operate, but it also introduces entirely new attack surfaces and vulnerabilities. Many businesses assume their existing cybersecurity measures automatically extend to AI, but that simply isn’t the case," he said. "This overconfidence is creating a false sense of security. Without targeted investment in areas like model security, governance frameworks, and employee training, organisations risk leaving their AI systems exposed to misuse, manipulation, and emerging threats."
Hill emphasised that AI security should sit at the core of responsible adoption practices, urging organisations to take a proactive approach. He concluded that those who embed risk-led strategies into their AI initiatives will be best placed to harness the technology's potential while maintaining safety.