How to Leverage AI and Mitigate Risks in the SOC 


This is a guest post featuring insight from Mitch Mellard, Threat Intelligence Consultant at Talion, one of Devo’s MSSP partners.

The security landscape is evolving at an unprecedented, exciting, and, to some, concerning pace. It will certainly continue to do so in 2024. And if one term stole the show, it was artificial intelligence (AI). From AI jailbreaks and WormGPT to employee education, the technology has introduced both risks and opportunities to the security operations center (SOC). 

Security executives and analysts alike must understand the potential consequences of this technological advancement in order to defend against malicious AI and plan proactive strategies within the SOC. In this post, we will examine real-world scenarios and actionable strategies for navigating this ever-changing threat landscape in 2024 and beyond.

The dark side of AI in cybersecurity 

AI has been a major factor in technological advancements over the past year, but with that, demand for AI jailbreaks – techniques adversaries use to bypass the guardrails of AI chat systems – has also increased. What was built for opportunity has become a playground for those with very different intentions – enter the cybercriminal.

AI models can be manipulated to generate malicious output. Perhaps the best-known example is WormGPT – a tool built explicitly for cybercrime that works much like ChatGPT, but without guardrails. Even security novices can use something like WormGPT with some degree of success. Unfortunately, 2024 will be no different – the trend raises further concerns about the misuse of AI for phishing messaging, malware development, and more.

At Devo’s recent SOC Appreciation Day, Mitch Mellard (Threat Intelligence Consultant at Talion) took the virtual stage to share his thoughts on staying hyper-aware of AI, watching what information goes into these systems, and weighing convenience and efficiency against moral direction – and long-term gain against potential loss.

“With AI, someone can simply ask ‘Can you write me a Python script that does X, Y, Z?’ and it spits something out that works. We never used to have access to something like that before. Really, it’s like magic. The main risk when information is shared with AI tools is that it’s also shared with other users,” says Mellard. “We’ve seen this with the Samsung incident, where employees were feeding information into an AI model without realizing it was a source that other people could query. As a result, sensitive information about proprietary company technology was freely available to anyone giving the AI the correct prompt.”

Mellard also spoke about the boom in machine learning and large language models (LLMs):

“Behavioral analytics have been available and widely used for some time now, but LLMs are a type of AI that has reached the mainstream consciousness. I think the main reason they have managed to bridge that gap is the usability and availability to non-technical individuals. While code written by LLMs isn’t going to replace a skilled software developer immediately, it does allow a level of access to the technical landscape that non-technical people simply haven’t had before, acting like a translator that bridges technical jargon and code to plain English in the same way a translator bridges actual language barriers.”

Combat malicious AI with education and clear boundaries

Mellard says that while the advancements in AI are undoubtedly promising and enticing for a layperson, it’s crucial to be hyper-aware of how to mitigate the risks associated with LLMs and AI. Organizations should prioritize educating every employee that, while you may be able to “talk” with an LLM in plain English, any information you feed it is like placing it in a database that other people can query – which carries data control and privacy concerns.
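To make those boundaries concrete, here is a minimal, hypothetical sketch of scrubbing obviously sensitive values from text before it ever reaches an external LLM. The patterns and example values below are illustrative assumptions only – a real deployment needs a vetted data loss prevention policy, not a short regex list.

```python
# Minimal sketch: redact likely sensitive substrings before text
# leaves the organization. The patterns are illustrative assumptions,
# not an exhaustive or production-ready filter.
import re

REDACTIONS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[REDACTED_IP]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
    (re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
     "[REDACTED_CREDENTIAL]"),
]

def scrub(text: str) -> str:
    """Replace likely sensitive substrings with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("admin@corp.example logged in from 10.0.0.5 with password=hunter2"))
# -> [REDACTED_EMAIL] logged in from [REDACTED_IP] with [REDACTED_CREDENTIAL]
```

The design choice here mirrors Mellard’s “database others can query” warning: if a value would be risky sitting in a shared database, it should be stripped before the prompt is sent.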

Chaz Lever, Sr. Director of Security Research at Devo, has kept close tabs on the advancement of AI over the last year, especially how security teams can use the technology effectively given where it is today. Lever recommends that security applications focus on tasks such as data summarization in non-adversarial environments. In other words, the technology is not yet ready to replace the key tactics and actions that security analysts must take themselves.
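As a rough illustration of that summarization use case, here is a minimal sketch that asks an LLM to condense a batch of SIEM alerts for an analyst. It assumes the OpenAI Python client and an OPENAI_API_KEY environment variable; the model name and alert fields are hypothetical placeholders, not a Devo or Talion integration.

```python
# Minimal sketch: summarizing a batch of SOC alerts with an LLM
# in a non-adversarial setting. Assumes the OpenAI Python client
# (pip install openai); model name and alert fields are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical alerts pulled from a SIEM -- never include customer data,
# credentials, or proprietary details in the prompt.
alerts = [
    {"rule": "Multiple failed logins", "host": "web-01", "count": 42},
    {"rule": "Outbound traffic to rare domain", "host": "db-03", "count": 3},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You summarize security alerts for a SOC analyst. "
                    "Be concise and flag anything that needs triage first."},
        {"role": "user", "content": json.dumps(alerts)},
    ],
)

print(response.choices[0].message.content)
```

Note what the sketch does not do: it makes no blocking or response decisions. The model only condenses data the team already has, which is consistent with Lever’s point about keeping today’s AI in non-adversarial, assistive roles.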

Takeaways

Employers and employees alike must remain aware of these rising concerns as we move into 2024. While it can be easy to fall for the AI magic Mellard speaks of, it can be a rabbit hole – and we don’t quite know what’s down there yet.

As Mellard concluded during his SOC Appreciation Day talk, keeping an eye on the information going into AI systems will always be beneficial. Ultimately, by combining our experience with developments in the cybersecurity space, we can make an impact through intelligence and effective decision-making.

For more insights on cyber trends over the year and what we can expect in 2024, read Talion’s 12 Days of Cyber Christmas: 2023 Trends and 2024 Outlook.
