In 2026, a board member at a major corporation will lose their job amid rising breaches and legal scrutiny. They may be removed by their company or the courts, or step down of their own volition, as organisations recognise that cyber risk is a business risk that CISOs cannot shoulder alone.
We have already seen evidence of this trend: digital chiefs/CIOs at both Qantas and M&S resigned following last year's cyberattacks, as breach-related consequences have mounted during a turbulent 12 months for security.
Retailers, manufacturers, and airlines suffered major breaches, while regulations such as DORA, NIS2, and the EU AI Act tightened accountability. With AI and a growing number of connected ‘things’ expanding the attack surface, breaches now carry severe financial fallout: Ingram Micro lost an estimated $136 million per day following its breach, and JLR’s compromise is expected to cost the UK economy more than £2 billion.
Security keeps getting harder, and regulators have needed to step in to stop a single point of failure from impacting entire interconnected supply chains. Class actions and legal penalties against companies are also becoming a worrying new normal, driven by a regulatory focus on preventing systemic failures.
As these consequences continue to rise, the industry will move away from the instinct to treat CISOs as convenient scapegoats. The role itself is evolving: CISOs with the visibility to share accountability across the organisation will be better positioned to influence outcomes, ensure those responsible are answerable, and achieve cyber resilience. This visibility also strengthens their ability to defend their decisions when the business chooses to accept greater risk, rather than shouldering the blame alone if that risk leads to a breach.
AI puts a new face on attack and defence
As AI continues to evolve rapidly in 2026, it will transform operations for both cyberattackers and defenders. Familiar attack vectors such as phishing, ransomware delivery, deepfake fraud and software exploitation will be scaled and refined through agentic AI, making it a powerful multiplier for the challenges facing CISOs. Even when autonomously launched attacks contain errors, their sheer volume increases the likelihood of success – attackers only need to succeed once.
For defenders, AI will enhance behavioural detection, predictive analytics and incident forensics, while also driving a shift toward proactive cybersecurity that anticipates and mitigates threats before they materialise. From identifying gaps and measuring control effectiveness to translating complex data into insights for business leaders, AI will enable security teams to act with greater precision and speed.
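To make a capability like behavioural detection concrete, here is a deliberately simplified Python sketch of the underlying idea: flag a user whose activity deviates sharply from their own baseline. The data, threshold, and event type are hypothetical, chosen purely for illustration; real systems would use far richer signals and models.

```python
from statistics import mean, stdev

# Hypothetical per-user daily event counts (e.g., failed logins) over the past week.
history = {"alice": [3, 2, 4, 3, 2, 3, 4], "bob": [1, 2, 1, 2, 1, 2, 1]}
today = {"alice": 4, "bob": 19}

def is_anomalous(baseline: list[int], observed: int, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations above the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed > mu  # flat baseline: any increase is notable
    return (observed - mu) / sigma > threshold

for user, count in today.items():
    if is_anomalous(history[user], count):
        print(f"ALERT: unusual activity for {user}: {count} events today")
```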
Ultimately, AI will follow the same trajectory as other technologies, such as cloud: it will answer some questions but raise many more of its own, and it won’t be the silver bullet for security that many in the industry expect. Governance issues around AI will also persist; as AI weaves its way into existing technologies and expands the attack surface, organisations risk becoming overwhelmed by the very technology designed to protect them.
The deployment of AI in defence, and anywhere within an organisation, must be underpinned by robust governance. In the absence of comprehensive state regulation, organisations must establish clear oversight for all AI-driven security initiatives.
GRC becomes continuous
In response to a rapidly changing threat and regulatory landscape, next year will see a fundamental shift in how organisations think about governance, risk and compliance (GRC), moving from quarterly and monthly reporting to real-time, continuous monitoring. After a tumultuous year where major attacks have dominated the front pages, it is clear that security is getting significantly harder for CISOs. Businesses need constant information to make informed risk-based decisions, and boards, auditors and regulators need constant reassurance about organisations’ risk posture.
CISOs who stick to rigid, traditional models of GRC will find themselves constantly behind the curve. New threats, regulations and technological developments – such as fresh forms of AI and post-quantum security – will evolve rapidly, outpacing reporting cycles and the speed at which CISOs can effect change.
Governance and security platforms will need to communicate in real time to inform security policies and daily operations, while ensuring that both internal and external stakeholders are satisfied that security meets the required standard. This autonomous compliance will require the silos between people, process and technology to be broken down, ensuring that every element draws from a single system of record, with trusted data setting the ground truth.
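As a rough illustration of what drawing from a single system of record could look like in practice, the Python sketch below continuously recomputes control coverage across an asset inventory and flags gaps against a policy threshold. The asset fields, control names, and threshold are all assumptions for the example, not a description of any specific platform.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    controls: set[str]  # security controls currently observed on this asset

REQUIRED_CONTROLS = {"edr", "patching", "disk_encryption"}  # illustrative policy
COVERAGE_THRESHOLD = 0.95  # e.g., a board-agreed minimum per control

def coverage_report(assets: list[Asset]) -> dict[str, float]:
    """Return the fraction of assets on which each required control is present."""
    return {
        control: sum(control in a.controls for a in assets) / len(assets)
        for control in REQUIRED_CONTROLS
    }

# Hypothetical inventory pulled from the system of record.
assets = [
    Asset("laptop-01", {"edr", "patching", "disk_encryption"}),
    Asset("laptop-02", {"edr", "disk_encryption"}),
    Asset("server-01", {"edr", "patching"}),
]

# Run on every data refresh, rather than once a quarter.
for control, coverage in coverage_report(assets).items():
    status = "OK" if coverage >= COVERAGE_THRESHOLD else "GAP"
    print(f"{control}: {coverage:.0%} [{status}]")
```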
This will reflect a broader change: the growth of regulations worldwide, with broadly the same aims, will drive standardisation as bodies agree on best practices and organisations target the most stringent requirements to ensure global compliance. It will also drive a ‘back-to-basics’ approach, with security teams focusing on established cyber hygiene to prevent the vast majority of attacks, using AI to compress development cycles, and uniting with compliance to focus their energies on the controls that mitigate the most complex and dangerous risks.
The return of the AI Wild West
In 2026, agentic AI will return businesses to an experimental phase, where risks rise as fast as opportunities. The early days of GenAI resembled a tech ‘Wild West’: organisations experimented with AI without fully understanding its limitations, resulting in frequent errors, such as engineers sharing valuable source code with ChatGPT and effectively leaking IP to the entire world. Over time, however, organisations brought much of this chaos under control through stronger governance, clearer policies, and more mature operational practices.
Agentic AI will open the gates again, shifting the risk landscape and raising the stakes even higher. Because these systems operate autonomously, there is even less oversight and a greater risk of chaos spiralling out of control. Even minor issues, such as authentication faults or misconfigurations within the AI system or its dependent processes, could cascade across companies, exposing sensitive data or triggering unintended actions.
As organisations increasingly delegate decision-making to AI agents, these mistakes will be amplified, making proactive governance critical. Gaining complete visibility over the systems AI interacts with, enforcing strict access controls, upskilling teams, and embedding robust governance frameworks will be essential to sustaining innovation while controlling risk and avoiding a return to the bad old days of the AI Wild West.
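As a minimal sketch of what ‘strict access controls’ for AI agents might mean in practice, the example below gates every agent action against an explicit allow-list and records the decision for audit. The agent names, actions, and logging approach are hypothetical, used only to illustrate the principle of least privilege.

```python
# Each agent is granted an explicit allow-list of actions; anything else is denied.
AGENT_PERMISSIONS = {
    "ticket-triage-agent": {"read_ticket", "add_comment"},
    "patch-agent": {"read_inventory", "schedule_patch"},
}

audit_log: list[str] = []

def execute(agent: str, action: str) -> bool:
    """Permit an action only if it is explicitly allowed for this agent."""
    allowed = action in AGENT_PERMISSIONS.get(agent, set())
    audit_log.append(f"{agent} -> {action}: {'ALLOWED' if allowed else 'DENIED'}")
    return allowed

# A misconfigured or compromised agent attempting an out-of-scope action is
# denied and logged, rather than cascading into unintended changes.
execute("ticket-triage-agent", "add_comment")     # allowed
execute("ticket-triage-agent", "schedule_patch")  # denied
print("\n".join(audit_log))
```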
Find out more at www.panaseer.com