The Emerging Risks of Generative AI: Why Ethical Implementation is Non-Negotiable

Stephanie Gradwell, Managing Partner


In the wake of devastating cyber attacks on Marks & Spencer and Co-op that cost millions in lost revenue and compromised customer data, one truth stands starkly apparent: organisations that implement generative AI without robust ethical frameworks and security protocols risk existential damage.

The UK government’s recent analysis, “Safety and security risks of generative artificial intelligence to 2025,” provides a sobering assessment of how generative AI can enhance threat actor capabilities, accelerate attack frequency, and dramatically increase the sophistication of security breaches. Implementing AI ethically has become essential protection against a rapidly evolving threat landscape.

The New Cyber Battlefield: AI Transforms the Risk Horizon

The cyber attacks on M&S and Co-op in the first half of 2025 represent a watershed moment in the UK’s cybersecurity landscape. These weren’t conventional breaches but rather sophisticated campaigns potentially leveraging generative AI’s capabilities. The M&S attack in April 2025 severely disrupted operations for over three weeks, exposed customer data, and wiped approximately £1.3 billion off the company’s market value. Evidence suggests it originated through “advanced social engineering, most likely targeting staff credentials through AI-crafted phishing”.

Similarly, Co-op’s breach in May impacted 2,300 food stores across the UK and compromised data from an estimated 20 million membership scheme participants. These attacks exemplify what the government’s assessment described as generative AI’s ability to “significantly increase risks to safety and security by enhancing threat actor capabilities and increasing the effectiveness of attacks.”

What makes these attacks particularly noteworthy is how they align with the government paper’s prediction that by 2025, “generative AI is more likely to amplify existing risks than create wholly new ones, but it will increase sharply the speed and scale of some threats.” The paper presciently warned that “the rapid proliferation and increasing accessibility of these technologies will almost certainly enable less-sophisticated threat actors to conduct previously unattainable attacks.”

Why Business Leaders Should Pay Attention

The implications for business leaders extend far beyond the immediate headline-grabbing breaches. Several key observations from the government assessment deserve particular attention:

First, the barrier to entry for sophisticated cyber attacks has dramatically lowered. As the paper states, “Generative AI will almost certainly continue to lower the barriers to entry for less sophisticated threat actors seeking to conduct previously unattainable attacks.” This democratisation of attack capabilities means that organisations of all sizes are now viable targets, not just high-profile retailers.

Second, the attack surface has expanded. The integration of generative AI into critical functions and infrastructure presents “a new attack surface through corrupting training data (‘data poisoning’), hijacking model output (‘prompt injection’), extracting sensitive training data (‘model inversion’), misclassifying information (‘perturbation’) and targeting computing power.”
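
To make one of these mechanisms concrete, the sketch below shows a minimal, purely illustrative screen for the kind of prompt-injection inputs described above. The patterns and function name are hypothetical examples, not a real product or the government paper’s method; production defences would layer classifiers, output filtering, and privilege separation rather than rely on pattern matching alone.

```python
import re

# Illustrative patterns only -- real injection attempts are far more varied,
# and attackers adapt quickly to static blocklists.
SUSPECT_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def screen_prompt(text: str) -> bool:
    """Return True if the input resembles a known prompt-injection phrasing."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPECT_PATTERNS)
```

A screen like this would sit in front of the model as one early, cheap control; flagged inputs can be logged and routed for review rather than silently dropped, which also builds the audit trail that governance frameworks require.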

Third, the nature of attacks is evolving from pure data theft to operational disruption. The M&S attack reveals this shift, suggesting that “ransomware groups increasingly aim for reputational leverage and business continuity failure rather than just data monetization”. This represents a fundamental shift in threat actor strategy that requires a corresponding evolution in defensive posture.

Finally, supply chain vulnerabilities have become more pronounced. The attack on Peter Green Chilled, which disrupted deliveries to major supermarkets like Tesco and Sainsbury’s, demonstrates how “attackers are increasingly targeting supply chains as a means of causing widespread disruption”. A breach at one point can cascade throughout an entire network of businesses.

From Understanding to Action: Implementing AI Ethically

The government assessment makes clear that while generative AI presents significant risks, it also offers substantial benefits if managed appropriately. The key lies in ethical, responsible implementation. Based on the discussion paper and emerging best practices, here are essential recommendations for business leaders:

  • Conduct rigorous AI risk assessments: Before implementing any generative AI solution, conduct thorough risk assessments that consider potential vulnerabilities, attack vectors, and failure modes. This should include evaluating the entire lifecycle from data collection through model deployment and monitoring.
  • Establish robust governance frameworks: Create clear governance structures for AI oversight, compliance, and risk management. This includes defining roles and responsibilities, establishing review processes for high-risk AI applications, and ensuring accountability throughout the organisation.
  • Invest in security by design: Embed security considerations into AI development from the earliest stages rather than treating it as an afterthought. This includes implementing strong authentication, encryption, and access controls around AI systems and the data they use.
  • Develop AI-aware security protocols: Update security training and protocols to address AI-specific threats. This includes educating staff about sophisticated AI-generated phishing attempts and social engineering attacks, which were potentially factors in the M&S breach.
  • Implement model monitoring and evaluation: Establish continuous monitoring of AI systems to detect anomalies, performance drift, or potential security breaches. Regular evaluations should assess not just technical performance but also security posture.
  • Foster transparency and explainability: Prioritise AI systems that provide transparency in their operations and decisions. This helps identify potential vulnerabilities and builds trust with stakeholders.
  • Prepare for incidents: Develop comprehensive incident response plans specifically addressing AI-related security breaches. The costly three-week disruption to M&S’s operations highlights the importance of business continuity planning.
  • Engage with regulatory developments: Stay informed about evolving AI regulations and security standards. The government paper notes that “Global regulation is incomplete, falling behind current technical advances and highly likely failing to anticipate future developments.”
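
As a concrete illustration of the monitoring recommendation above, here is a minimal drift check that compares current model scores against a baseline window. The function name, the z-score heuristic, and the threshold of three standard deviations are all assumptions for the sketch; real monitoring stacks would use richer statistical tests (such as PSI or Kolmogorov–Smirnov) and track many signals, not just mean score.

```python
import statistics

def drift_alert(baseline: list[float], current: list[float],
                z_threshold: float = 3.0) -> bool:
    """Flag drift when the current mean score sits more than z_threshold
    baseline standard deviations away from the baseline mean.

    A deliberately simple heuristic: it ignores sample size and
    distribution shape, so treat it as a tripwire, not a verdict.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return False  # no variation in baseline; heuristic not applicable
    z = abs(statistics.mean(current) - mu) / sigma
    return z > z_threshold
```

Wired into a scheduled job, a tripwire like this turns “continuous monitoring” from a policy statement into an alert that a human reviewer can act on before drift becomes a breach.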

Moving Forward Together

As we navigate this new frontier of AI-enhanced security threats, the path forward requires both vigilance and collaboration. The convergence of AI with other technologies will continue to create risks beyond what we can currently envision.

The attacks on M&S and Co-op serve as sobering reminders of what’s at stake. Yet they also offer valuable lessons that can strengthen our collective resilience. By implementing AI ethically and responsibly with governance, security, and human oversight at the core, organisations can work to harness the benefits while mitigating the risks.