Cybersecurity professionals have reacted to the world’s embrace of generative AI with well-founded caution. The technology is in its early days and a lot of the implications have yet to be worked out. That being said, the radical efficiencies of generative AI clearly have enormous potential to strengthen enterprise security and ease the strain on cybersecurity teams that are struggling to keep up with the nonstop growth of threats.
Two big advantages of generative AI today are its capacity to process huge amounts of data at high speed and its ability to communicate in clear natural language. This combination is driving the uptake of AI coding companions to make software developers’ lives easier. Now similar generative AI security companions are emerging for cybersecurity platforms.
AI security assistants can translate alerts into user-friendly language, analyze and interpret code and commands, and search for threats based on natural-language queries like, “Tell me what this bubble map is showing.” As they evolve and mature, they will go even further in reducing the risk of breaches, speeding up mitigation, optimizing performance and costs, and improving organizations’ internal understanding of cybersecurity issues.
Preventing breaches and mitigating threats with generative AI security
Automation has streamlined many manual cybersecurity tasks, but security analysts still have to roll up their sleeves and sift through piles of log entries to evaluate possible threats when their security information and event management (SIEM) systems raise alerts. A generative AI security assistant can save them the trouble by providing the same assessment in seconds. With a single prompt, the AI will scour logs and other data and report back immediately on what may be a threat and what isn’t.
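To make that workflow concrete, here is a minimal sketch of how an assistant might triage a SIEM alert against its surrounding log entries. It is an illustration only, not any vendor’s implementation: the alert format and sample data are invented, and call_llm is a placeholder for whatever generative AI service the platform exposes.

```python
import json

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to the generative AI service your platform provides.
    return "(model assessment would appear here)"

def triage_alert(alert: dict, related_log_lines: list[str]) -> str:
    """Ask the model whether a SIEM alert looks like a real threat or a false positive."""
    prompt = (
        "You are assisting a SOC analyst. Given the alert and log excerpts below, "
        "say whether this looks like a genuine threat or a likely false positive, "
        "and explain your reasoning in plain language.\n\n"
        f"Alert:\n{json.dumps(alert, indent=2)}\n\n"
        "Related log entries:\n" + "\n".join(related_log_lines)
    )
    return call_llm(prompt)

# Invented sample data for illustration
print(triage_alert(
    {"rule": "Multiple failed logins followed by success", "host": "srv-web-01"},
    ["02:14 sshd: Failed password for admin from 203.0.113.7",
     "02:16 sshd: Accepted password for admin from 203.0.113.7"],
))
```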
Generative AI can also de-obfuscate malicious scripts. If an endpoint is running code that seems suspect but isn’t obviously harmful, an AI companion can break the code down, analyze it, and make a determination about its intent, all much faster than a human analyst could.
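A hedged sketch of that kind of analysis, assuming the same placeholder call_llm helper as above: the deterministic decoding step mirrors how PowerShell encoded commands are commonly packed (base64-wrapped UTF-16LE text), and the model is then asked to explain the decoded script’s intent.

```python
import base64

def call_llm(prompt: str) -> str:
    # Placeholder for a real generative AI call (see the triage sketch above).
    return "(model's analysis of the script's intent would appear here)"

def analyze_encoded_command(encoded_cmd: str) -> str:
    """Decode an encoded command locally, then ask the model what it is trying to do."""
    try:
        # PowerShell -EncodedCommand payloads are base64-encoded UTF-16LE text.
        decoded = base64.b64decode(encoded_cmd).decode("utf-16-le", errors="replace")
    except Exception:
        decoded = "(local decoding failed; ask the model to attempt de-obfuscation)"
    prompt = (
        "The following command was found running on an endpoint. Explain step by step "
        "what it does and whether its intent appears malicious.\n\n"
        f"Encoded form:\n{encoded_cmd}\n\nDecoded form:\n{decoded}"
    )
    return call_llm(prompt)
```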
The ability of generative AI to ingest and logically analyze huge quantities of data not only speeds up mean-time-to-detection but also allows security professionals to prioritize risks more efficiently by almost instantly assessing which corporate assets are most vulnerable to a specific threat.
Improving team performance and optimizing costs
Speed isn’t the only benefit of generative AI security. Its interpretive capabilities provide context and understanding that allow less-senior personnel to contribute to a stronger overall security posture. By the same token, it will increasingly free up senior personnel to focus on higher-value issues. Both help overcome the difficulty many organizations face finding sufficiently skilled cybersecurity professionals.
Generative AI will also give cybersecurity teams a way to scale. With threat volumes growing exponentially, this is critical. Two years ago, Trend Micro blocked 60 billion threats; in 2022, we blocked 140 billion. It’s not an option for enterprises to simply add headcount to keep up.
Leaner, more focused teams, smarter automation, and less reliance on senior experts all add up to better cybersecurity performance at lower overall cost to the enterprise.
Communicating strategic priorities with the help of generative AI security
Corporate boards are taking more and more interest in cybersecurity as part of their risk management mandates. And cybersecurity insurance providers are asking more questions that organizations need to be able to answer.
In the past, CISOs were basically restricted to reporting performance stats. “We blocked 10,000 spam messages last month.” But those kinds of backward-looking data points don’t really say much about an organization’s present-day security status—or readiness for the future.
‘Classic’ AI and machine learning can produce risk scores based on discovery and assessment that rank the relative vulnerabilities of endpoints, assets, accounts, and more. Generative AI will be able to go even further and produce plain-language reports CISOs can share with boards and executives to show clearly where the organization is strong and where cybersecurity investment is needed. And eventually generative AI security will also be able to offer recommendations on how the most pressing risks can be addressed, for example by suggesting patches for Exchange servers vulnerable to known threats.
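As a rough illustration of how such a report might be produced (the sample scores and the call_llm placeholder are invented for this sketch, not drawn from any product), classic-AI risk scores could be handed to a generative model with instructions to write a non-technical briefing:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real generative AI call.
    return "(executive briefing would appear here)"

def board_briefing(risk_scores: list[dict]) -> str:
    """Turn classic-AI risk scores into a plain-language briefing for the board."""
    lines = [f"- {r['asset']}: risk {r['score']}/100 ({r['top_issue']})" for r in risk_scores]
    prompt = (
        "Write a short, non-technical briefing for a corporate board. Highlight where the "
        "organization is strong, where cybersecurity investment is needed, and recommend "
        "the top remediation actions.\n\nAsset risk scores:\n" + "\n".join(lines)
    )
    return call_llm(prompt)

# Invented sample scores for illustration
print(board_briefing([
    {"asset": "Exchange server EX-02", "score": 87, "top_issue": "missing security patch"},
    {"asset": "Finance file share", "score": 64, "top_issue": "over-broad access rights"},
    {"asset": "Developer laptops", "score": 22, "top_issue": "patched, EDR enabled"},
]))
```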
At the start of the road
Generative AI has many benefits to offer cybersecurity in terms of effectiveness, efficiency, and understanding. Beyond today’s search and assessment functions, the near term is likely to bring additional advanced capabilities: AI-generated cybersecurity guidance, custom reporting, sandboxing, and specialized threat detection applications—for example, to identify socially engineered emails more effectively than classic AI/machine learning.
As the costs of generative AI come down, the way will open further for new tools to be developed. Some of these should focus on helping enterprises manage how employees use generative AI, preventing data leakage by controlling access privileges and the content that can be fed into AI engines.
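One simple form such a tool could take, sketched here with a few illustrative patterns (a real deployment would use the organization’s own data-loss-prevention rules), is a check that screens prompts before they are sent to an external AI engine:

```python
import re

# Illustrative patterns only; substitute the organization's own DLP rules.
BLOCKED_PATTERNS = {
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal classification label": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
    "private key material": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any blocked patterns found before text is sent to an AI engine."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

violations = screen_prompt("Please summarize this CONFIDENTIAL product roadmap...")
if violations:
    print("Blocked - prompt contains:", ", ".join(violations))
else:
    print("OK to send")
```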
Of course, cybersecurity solution vendors aren’t the only ones exploring the possibilities of generative AI. Threat actors are looking for ways to misuse the technology. There again, generative AI security tools will likely be the remedy. As Marc Andreessen wrote in “Why AI Will Save the World”, offsetting the risk of bad actors requires government and business to work together and “...vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities...”
In other words, generative AI is not only an advantage in combating today’s threats; it can also be expected to be our best defense against AI-generated malware in the future.
Next steps
All-in on generative AI security
At Trend Micro, we’ve been working with AI and machine learning since 2005. We’ve built a generative AI companion into our Trend Vision One™ cybersecurity workbench that can generate risk assessments for IP addresses, IT assets, and accounts. To develop our team’s AI expertise, we’ve held an annual employee AI contest since 2018—and earlier in 2023, our threat research team defined 40 different potential ‘misuse’ cases for generative AI.
For more Trend Micro thought leadership on generative AI security, check out these other resources: