Bytes Blog: The Power of Security Automation

Monday 13th May 2024

 
Nicole Chesworth
Pre-Sales Cyber Security Consultant
Author
 
Giuseppe Damiano
Pre-Sales Solution Consultant
Co-Author
 
Daniela Miccardi
Cyber Security Marketing Manager
Editor

The need for automation in information security is certainly not new. Fewer available specialists, shrinking budgets and an uncertain economic climate all contribute to the demand for better automation across all areas of IT.

The availability of the first Generative AI tools - made possible by technological advances in Large Language Models (LLMs) - effectively unlocked people's ability to interact with, and draw value from, 'intelligent' algorithms through a natural language prompt, with applications spanning cyber security and beyond.

The ground-breaking aspect of Gen AI is undoubtedly the way in which people can benefit from it without the need for any specialist knowledge. But while the potential capabilities of this technology are vast, concerns have started to rise around its accuracy, its reliability and the risk of unwanted outside influence on the results it presents.

While some governments are rushing to create suitable legislation to control the negative effects of implementing Generative AI, organisations of all sizes and verticals are scrutinising this technology to see if, and how much, value can be drawn from the tools already available.

One key example of Gen AI implementation is in the Security Operations Centre (SOC), where it promises to reduce the effort required of analysts while maximising their ability to deal with alerts and incidents quickly and effectively.

Security analysts face a number of challenges: a growing volume of increasingly sophisticated attacks, complex or disjointed toolsets and processes, and burnout driven by a shortage of staff and skills to support incident response.
They need a new approach.

While many security vendors are accelerating the development of automation through AI, some are leading the way with solutions already available today.

CrowdStrike Charlotte AI

CrowdStrike's Charlotte AI is a generative AI-based security assistant, introduced on 30 May 2023 and built into the core of the CrowdStrike Falcon® solution.

It uses industry-leading generative AI technologies to let users ask plain-language questions and quickly surface data within the Falcon platform, providing real-time insight into an organisation's risk profile.

Charlotte AI is designed to democratise security and help every user — from novice to security expert — operate like a power user of the Falcon platform. It automates repetitive tasks like data collection, extraction, and basic threat search and detection. This automation allows security experts to focus on more advanced security actions, speeds up detection and response, and helps close the cybersecurity skills gap.

Keen to learn more about CrowdStrike's Charlotte AI? Join Bytes and CrowdStrike at the Digital Modernisation & AI Summit this May! Live from 30 Euston Square - click here to register.

Microsoft Copilot for Security

Microsoft Copilot for Security promises the ability to quickly sift through and understand large amounts of log data, delivering fast results sorted by risk in response to natural language queries entered at a prompt.

It integrates with the end-to-end Microsoft Security portfolio of solutions and - with the addition of plugins to ingest third-party telemetry via Sentinel - proves invaluable for organisations already embedded in the Microsoft world.

The primary use cases are incident summarisation and reporting, impact analysis of a breach or incident, reverse engineering of scripts, and guided responses and support to resolve those incidents; all using natural language. Importantly, Copilot strengthens a team's expertise through cyber skills and promptbooks, so that everyone can work at the same level and as efficiently as possible.

All of this is supported by OpenAI's advanced models, Microsoft's hyperscale infrastructure, a security-specific orchestrator and built-in threat intelligence.

Check Point Infinity AI Copilot

Check Point announced their take on Gen AI in February 2024 at their European CPX event. As well as explaining how the tool was created, trained and developed, they revealed that it was already available for customers and partners to access and experience first-hand.

The Infinity AI Copilot can provide information about any aspect of a firewall solution in response to a chat prompt. Additionally, a single toggle switch lets the same tool actually implement configuration changes in response to a phrased request, without requiring any in-depth knowledge or expertise in Check Point's management tools.

Considerations

  • AI is a new and complex technology, making it difficult to predict potential vulnerabilities and track the consequences of improper usage. It’s important to understand the risks associated with AI, such as data breaches, adversarial attacks, ethical implications, and complex vulnerability management.
  • Clearly define your objectives for incorporating automation. Assess your current security setup to pinpoint where automation could yield benefits.
  • Educate users and employees about the potential risks associated with AI in cybersecurity and provide training on how to use AI-driven tools securely.
  • SecOps, DevOps, and Governance Risk & Compliance teams should collaborate to lead the development and implementation of AI security practices.
  • Ensure comprehensive visibility into your AI systems to detect any unusual activity or potential threats.

Best practices 

The following list is intended to provide a set of best practices to maximise the value of implementing AI, as well as any other form of automation, in the security operations centre.

1. Test 

The first step should always be to test the automation, much like typing 1+1 on a new pocket calculator and verifying that the result is indeed 2. Identify a set of relevant use cases and compare the results obtained manually and through the automation to ensure they are consistent. Some organisations might go as far as procuring penetration tests or even red team exercises to test the AI's response to them.
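
As a rough illustration, the comparison can be as simple as the following Python sketch. The use-case names, verdicts and data structures are entirely hypothetical, and no vendor API is assumed:

    # Minimal sketch: compare manual analyst verdicts with verdicts from
    # an AI-assisted pipeline across a set of test use cases.
    # All case names and verdicts below are hypothetical examples.

    manual_verdicts = {
        "phishing-email-01": "true_positive",
        "failed-logins-05": "false_positive",
        "powershell-exec-09": "true_positive",
    }

    ai_verdicts = {
        "phishing-email-01": "true_positive",
        "failed-logins-05": "false_positive",
        "powershell-exec-09": "false_positive",
    }

    # Flag every case where the two disagree; each mismatch needs human
    # review before the automation is trusted any further.
    mismatches = [case for case, verdict in manual_verdicts.items()
                  if ai_verdicts.get(case) != verdict]

    agreement = 1 - len(mismatches) / len(manual_verdicts)
    print(f"Agreement: {agreement:.0%}")
    for case in mismatches:
        print(f"Review: {case} (manual={manual_verdicts[case]}, "
              f"ai={ai_verdicts[case]})")

Any disagreement is a prompt for human investigation, not an automatic failure: it may reveal a weakness in the automation, or an inconsistency in the manual process it is replacing.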
  
2. Measure 

The point of introducing automation in a SOC is arguably to reduce the overall number of alerts - not just false positives - for analysts to process. Using the same use cases mentioned earlier, compare the alerts generated before and after, as well as the time required to process them, to understand the level of improvement. While this will undoubtedly improve over time as the model continues to be trained, initial results should still show a noticeable improvement.
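
To make the comparison concrete, the before-and-after figures can be reduced to a couple of simple metrics, as in this sketch (all numbers are invented for illustration, not real benchmarks):

    # Hedged sketch: alert volumes and handling times below are
    # illustrative placeholders, not measured results.

    baseline = {"alerts": 1200, "minutes_per_alert": 18}   # before automation
    automated = {"alerts": 430, "minutes_per_alert": 11}   # after automation

    alert_reduction = 1 - automated["alerts"] / baseline["alerts"]
    hours_before = baseline["alerts"] * baseline["minutes_per_alert"] / 60
    hours_after = automated["alerts"] * automated["minutes_per_alert"] / 60

    print(f"Alert volume reduced by {alert_reduction:.0%}")
    print(f"Analyst effort: {hours_before:.0f}h -> {hours_after:.0f}h per period")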
 
3. Scale up

As the scope of automation widens to encompass all available areas, the aim is to ensure that the time required of SOC analysts stays the same or is even reduced. This is a crucial step, and it should also encourage analysts to develop their prompt engineering skills in order to keep interactions with the tool as efficient as possible. While the AI constantly learns, people need to adapt how they leverage its power.
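
One simple way to keep this in check is to record analyst effort each time a new area is brought into scope, and flag any point where effort grows instead of shrinking. The figures in this sketch are invented for illustration:

    # Illustrative check: analyst hours per week, recorded each time the
    # automation scope grows. All figures are invented examples.

    effort_by_scope = {  # areas automated -> analyst hours per week
        2: 160,
        4: 150,
        6: 155,
        8: 140,
    }

    scopes = sorted(effort_by_scope)
    for prev, curr in zip(scopes, scopes[1:]):
        if effort_by_scope[curr] > effort_by_scope[prev]:
            print(f"Warning: effort rose as scope grew from {prev} "
                  f"to {curr} areas - review prompts and playbooks")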

Conclusion

The human brain is not fit to perform tedious, repetitive tasks accurately and at an ever-increasing rate. That’s what we invented machines for.

While Generative AI brings both excitement and perplexity, it is becoming clear that this technology is here to stay, and the key to drawing its benefits is to implement it correctly.
 

