AI augmentation in security software and the resistance to IT

February 2024 Security Services & Risk Management, Information Security


Paul Meyer.

According to Gartner augmented cybersecurity leadership ties human talent to technology capabilities to balance organisational growth aspirations and cyber risk. Gartner says future security and risk management leaders will be AI-enabled, human-centric decision-makers who effectively steer through turbulent times.

The global research house found organisations are increasingly focused on personalised engagement as an essential component of an effective security behaviour and culture program (SBCP). They list key findings, including:

• When cybersecurity efforts are harmonised with business changes, the agency of the cybersecurity leader is improved.

• Accelerated digital transformation is now dependent on predictable operations; however, fragmented responsibility leads to higher costs, drops in quality, exposure to threat actors and non-compliance with regulations.

• Cybersecurity leaders and their teams are suffering from widespread burnout and attrition, which erodes effectiveness and increases organisational cyber risk.

• New laws and precedents expose cybersecurity leaders to personal liability, similar to that of more traditional officer roles.

AI can bring many advantages, including negating the noise from massive volumes of data and preventing human error due to fatigue, etc. This also speaks to skills attrition in companies that leave huge gaps in analytic capabilities – AI can fill those gaps. Herein lies the root of resistance, people do not understand that human intervention will always be required, even in an age where Quantum computing is rapidly expanding. Sycamore is one example – this is Google’s Quantum computer that can do calculations in seconds, whereas the supercomputer Frontier would take 47 years.

A Forrester report notes generative AI exploded into consumer awareness with the release of Stable Diffusion and ChatGPT, driving enterprise interest, integration, and adoption. The report details the departments most likely to adopt generative AI, their primary use cases, threats, and what security and risk teams will need to do if they are to defend against this emerging technology.

According to this report, discussions around generative AI are dominated by interest, anxiety, and confusion. The release of these platforms went viral almost immediately, garnering wide attention and speculation, along with plenty of concerns from security researchers. Forrester advises security and risk teams to adapt to how their enterprise plans to use generative AI, or they will find themselves unprepared to defend it.

Resistance to augmented AI in security software

Forrester says today’s security leaders worry about the impact on their security team first. They agree it will change how security programs operate, but it will change workflows for other enterprise functions well before that happens. They go on to note that, unfortunately, many CISOs tune out news about new technologies, considering it a distraction. The caveat with that approach is that it can lead to tomorrow’s emergency when the security program learns, for example, that the marketing team plans to use a large language model (LLM) to produce marketing copy and expects it to do so securely.

They advise us to think in terms of code, not natural language, and note that one of the interesting ways to subvert or make unauthorised use of generative AI is finding creative ways to structure questions or commands. While bypassing safety controls online is reportedly fun for hobbyists, those same bypasses could allow generative AI to leak sensitive data such as trade secrets, intellectual property, or protected data.

It is noted that security and risk professionals know the danger and complexities inherent in managing suppliers. Emerging technologies create new supply chain security and third-party risk management problems for security teams and introduce additional complexity given that the foundational models are so large that detailed auditing of them is impossible.

It is widely acknowledged that for AI success, you need to deploy modern security practices. Many security technologies that will secure your firm’s adoption of generative AI already exist within the cybersecurity domain. Two examples include API security and privacy-preserving technologies. These technologies are introducing new controls to secure generative AI.

Static application security testing (SAST), machine-learning–assisted auditing of SAST results unlock and reproduce contextual awareness and security expertise, thereby eliminating the need for human auditing. There are many positives attached to this. SAST analyses an application’s source code, bytecode, or binary code for security vulnerabilities. The US government agency, the National Institute of Standards and Technology (NIST), notes that static analysis tools are one of the last lines of defence to eliminate software security vulnerabilities during development or after deployment. The detailed discussion around SAST will be the topic of my next article on the benefits of augmented AI in security.




Share this article:
Share via emailShare via LinkedInPrint this page



Further reading:

AI and ransomware: cutting through the hype
AI & Data Analytics Information Security
It might be the great paradox of 2024: artificial intelligence (AI). Everyone is bored of hearing it, but we cannot stop talking about it. It is not going away, so we had better get used to it.

Read more...
Local manufacturing is still on the rise
Hissco Editor's Choice News & Events Security Services & Risk Management
HISSCO International, Africa's largest manufacturer of security X-ray products, has recently secured a multi-continental contract to supply over 55 baggage X-ray screening systems in 10 countries.

Read more...
NEC XON shares lessons learned from ransomware attacks
NEC XON Editor's Choice Information Security
NEC XON has handled many ransomware attacks. We've distilled key insights and listed them in this article to better equip companies and individuals for scenarios like this, which many will say are an inevitable reality in today’s environment.

Read more...
Detecting humans within vehicles without opening the doors
Flow Systems News & Events Security Services & Risk Management
Flow Systems has introduced its new product, which detects humans trying to hide within a vehicle, truck, or container. Vehicles will be searched once they have stopped before one of Flow Systems' access control boom barriers.

Read more...
A standards-based, app approach to risk assessments
Security Services & Risk Management News & Events
[Sponsored] Risk-IO is web-based and designed to consolidate and guide risk managers through the whole risk process. In this article, SMART Security Solutions asks Zulu Consulting to tell us more about Risk-IO and how it came to be.

Read more...
Cybercriminals embracing AI
Information Security Security Services & Risk Management
Organisations of all sizes are exploring how artificial intelligence (AI) and generative AI, in particular, can benefit their businesses. While they are still figuring out how best to use AI, cybercriminals have fully embraced it.

Read more...
Integrate digital solutions to reduce carbon footprint
Facilities & Building Management Security Services & Risk Management
As increasing emphasis is placed on the global drive towards net zero carbon emissions, virtually every industry is being challenged to lower its carbon footprint and adopt sustainable practices.

Read more...
Cybersecurity and AI
AI & Data Analytics Information Security
Cybersecurity is one of the primary reasons that detecting the commonalities and threats of what is otherwise completely unknown is possible with tools such as SIEM and endpoint protection platforms.

Read more...
Visualise and mitigate cyber risks
Security Services & Risk Management
SecurityHQ announced its risk and incident management capabilities for the SHQ response platform. The SHQ Response Platform acts as the emergency room, and the risk centre provides the wellness hub for all cyber security monitoring and actions.

Read more...
Eighty percent of fraud fighters expect to deploy GenAI by 2025
Security Services & Risk Management
A global survey of anti-fraud pros by the ACFE and SAS reveals incredible GenAI enthusiasm, according to the latest anti-fraud tech study by the Association of Certified Fraud Examiners (ACFE) and SAS, but past benchmarking studies suggest a more challenging reality.

Read more...