5 AI Security Takeaways featuring Forrester
Highlights from the recent discussion between Trend Micro’s David Roth, Chief Revenue Officer (CRO) for Enterprise America, and guest speaker Jeff Pollard, VP and Principal Analyst at Forrester, about AI hype versus reality and how to secure AI in the workplace.
Depending on who you ask, generative AI could either be the salvation of humankind or the bringer of our doom. For cybersecurity specifically, it’s been positioned both as a kind of panacea and the source of unprecedented new threats.
David Roth and guest speaker Jeff Pollard recently sat down for a “not so fast” webinar to disentangle the truth about AI from the haystack of hype. Roth is Trend Micro’s Chief Revenue Officer for Enterprise America; Pollard is a Vice President and Principal Analyst at Forrester.
The two acknowledged that AI and machine learning are hardly new to security, even if generative AI is, and looked at areas where genAI is practical and useful for cybersecurity teams today. This blog distills key parts of their conversation.
“I hesitate to say we've reached peak AI hype. We have to be close or I think we're all going to have a terrible time in the industry because there are so many false promises or expectations being laid out that just aren't being met right now.”
Jeff Pollard
#1 – Beware of the AI hype
It’s not just fascination with a new ‘shiny object’ that has fueled the hype about generative AI for cybersecurity. Security teams are desperate for relief: they’re over-pressured and under-resourced, squeezed by a years-long skills shortage, and faced with threats that keep multiplying and changing.
It’s understandable, then, that when genAI hit the scene, many people latched onto fantasies of fully autonomous security operations centers (SOCs) powered by Terminator-style malware-hunting agents.
Yet today’s genAI systems aren’t effective enough to operate without human intervention and oversight. Far from magically solving the skills shortage, genAI could exacerbate it in the short term by introducing new training needs. Culture is also a factor. While experienced practitioners may learn new AI tools quickly, it can still take weeks or months for them to change their habits and integrate those tools into their workflows.
Despite all that, there are compelling security use cases for current genAI systems. By augmenting existing capabilities, AI can help teams do more and get better results with less drudgery, especially in areas such as application development and detection and response.
“The faster you’re able to [generate reports], the more time you’re spending working an event.”
Jeff Pollard
#2 – Grab the quick wins
One task where security teams stand to gain quickly from genAI is generating documentation. Action summaries, event write-ups, and other types of reports are tedious and time-consuming to produce, but they need to get done. GenAI can produce them on the fly, freeing up security practitioners to immerse themselves in more events and work more incidents.
One caveat is that security professionals need strong communication skills to perform their roles. AI-generated reports may save time, but that can’t come at the cost of professional development.
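To make the idea concrete, here is a minimal sketch of AI-assisted report drafting, assuming the OpenAI Python SDK and an illustrative model name; any chat-style LLM service could fill the same role, and the output is a first draft for human review, not a finished report.

```python
# Minimal sketch: drafting an incident write-up with an LLM.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_event_writeup(events: list[str]) -> str:
    """Turn raw SOC event notes into a first-draft report for analyst review."""
    notes = "\n".join(f"- {event}" for event in events)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute your approved model
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Summarize the events below "
                        "into a concise incident write-up. Flag anything uncertain."},
            {"role": "user", "content": notes},
        ],
    )
    # The draft goes back to an analyst for review and sign-off.
    return response.choices[0].message.content
```

The pattern matters more than the tool: the model produces the first draft in seconds, and the analyst’s review preserves both accuracy and the writing practice the role still demands.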
GenAI can also recommend next-best actions and query existing knowledge bases to surface usable information more quickly than a human could. The key in these cases is to make sure AI outputs align with the organization’s needs and methodologies. If a defined process has seven steps and an AI companion recommends four, a human analyst needs to make sure all seven are followed, both to achieve the desired outcomes and to ensure the actions taken comply with corporate policies and external regulations. Inadvertent shortcuts could have serious consequences.
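To illustrate the seven-steps-versus-four problem, here is a minimal sketch of a check that compares an AI-recommended plan against a defined playbook before anyone acts on it; the playbook contents and the string-matching logic are assumptions made for the example, not a prescribed methodology.

```python
# Minimal sketch: verify that an AI-recommended plan covers every step
# of a defined response playbook. Playbook steps are illustrative.
PLAYBOOK = [
    "isolate host",
    "capture memory image",
    "collect logs",
    "reset credentials",
    "scan for persistence",
    "notify stakeholders",
    "document incident",
]

def missing_steps(ai_recommendations: list[str]) -> list[str]:
    """Return the playbook steps the AI's plan skipped."""
    recommended = {step.strip().lower() for step in ai_recommendations}
    return [step for step in PLAYBOOK if step not in recommended]

# An AI companion recommends four steps out of a seven-step process.
ai_plan = ["isolate host", "collect logs", "reset credentials", "document incident"]
gaps = missing_steps(ai_plan)
if gaps:
    # A human analyst closes the gaps; shortcuts risk non-compliance.
    print("AI plan omitted required steps:", ", ".join(gaps))
```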
“It's not like the ‘Minority Report’ pre-crime unit, but at least [you] can see enough across a huge volume of data about what the likelihood of an attack path could or would look like.”
David Roth
#3 – Be more proactive with AI
GenAI has the potential to turn the ‘big data problem’ into a big data opportunity, allowing security teams to become much more proactive than they are today by identifying changes in an attack surface and running attack path scenarios. While it might not predict exactly what will happen, it could position security teams to get ahead of threats that would otherwise slip by.
How effective this turns out to be in practice depends on how well an organization knows its systems, configurations, and current states. If there are gaps in that knowledge, the AI will have the same gaps. Today those gaps are unfortunately common, with data and documentation scattered across multiple spreadsheets on different computers, even in large enterprises.
This points to the importance of good, AI-ready data hygiene and applying standardized approaches to data management. The better the raw material, the more AI can do with it.
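As a simplified illustration of spotting attack-surface changes, the sketch below diffs two asset inventory snapshots; the host-to-open-ports format is an assumption made for the example, and, per the point above, the result is only as complete as the inventory data feeding it.

```python
# Minimal sketch: flag attack-surface changes by diffing two asset
# inventory snapshots (host -> set of open ports). In practice the
# snapshots would come from a CMDB or attack surface management tool.
def diff_attack_surface(previous: dict[str, set[int]],
                        current: dict[str, set[int]]) -> dict[str, list[str]]:
    """Compare two inventory snapshots and report what changed."""
    changes: dict[str, list[str]] = {"new_hosts": [], "new_ports": []}
    for host, ports in current.items():
        if host not in previous:
            changes["new_hosts"].append(host)
        else:
            for port in sorted(ports - previous[host]):
                changes["new_ports"].append(f"{host}:{port}")
    return changes

yesterday = {"web-01": {80, 443}, "db-01": {5432}}
today = {"web-01": {22, 80, 443}, "db-01": {5432}, "staging-03": {8080}}
print(diff_attack_surface(yesterday, today))
# {'new_hosts': ['staging-03'], 'new_ports': ['web-01:22']}
```

Each flagged change becomes a candidate input for attack path analysis, which is where the proactive value described above comes from.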
“If you have enterprise software in your business, it has AI now, whether it's SAP Joule, Salesforce with Einstein GPT, Microsoft with Copilot, or the dozens of other names out there. That’s another area you need to worry about because this changes the ways users interact with company data.”
Jeff Pollard
#4 – Watch out for shadow AI
Enterprises have legitimate concerns about AI leaking sensitive company or customer information. This can happen through employee use of unauthorized tools or via sanctioned enterprise software, which is increasingly being augmented with AI capabilities. In the past, a bad actor would need to know how to hack into an ERP system to access unauthorized data inside it; with AI, the right prompt could all too easily make the same information available.
Enterprises need to secure themselves against employee shadow AI and illegitimate use of approved AI tools. They also need to take care when using large language models (LLMs) to build applications for themselves. The underlying data needs to be secured along with the application being built, the LLM itself, and the prompt system.
These risks boil down to three new classes of problem: bring-your-own-AI, enterprise app, and product security or innovation problems. Each type requires its own protective measures and expands CISO responsibilities, even though CISOs aren’t in charge of the related projects.
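One illustrative mitigation for the prompt-leakage risk described above is a redaction guard that scrubs obviously sensitive values before a prompt leaves the enterprise boundary. The patterns below are assumptions made for the sketch; a real deployment would pair redaction with access controls on the underlying data, the application, the LLM, and the prompt system.

```python
# Minimal sketch: redact obviously sensitive values from a prompt
# before it is sent to an LLM. Patterns are illustrative, not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive matches with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

raw = "Summarize the ticket from jane.doe@example.com about key-a1b2c3d4e5f6g7h8."
print(redact_prompt(raw))
# Summarize the ticket from [REDACTED-EMAIL] about [REDACTED-API_KEY].
```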
“It almost reminds me of the early hyper-growth cloud days.... Being able to enforce through governance and security controls is harder when you’re not keeping up with the speed of the business.”
David Roth
#5 – You need an AI strategy
There are helpful parallels between the present state of AI and the shadow IT app frenzy of the early cloud days. While security leaders called unsanctioned apps ‘shadow IT’, business leaders and investors saw their use as ‘product-led growth’. And security teams learned quickly that they couldn’t ban those innovations: any attempt to clamp down just drove usage underground, where it couldn’t be managed at all.
Security teams need to accept and adapt to the AI reality. Even if it isn’t currently fulfilling all the hopes and dreams of its most fervent champions—and even if it encounters some setbacks—in two to three years AI will be much more mature and powerful than it is today. Organizations can’t afford to try to secure it after it’s adopted.
That means the time to get ready is now: develop security-oriented AI strategies, learn about the technology, and be prepared for when it really takes off. Many observers feel security teams got caught flat-footed with cloud even though they had plenty of lead time. Given the potency and complexity of AI, they can’t afford to repeat that mistake.
Take it with a grain of salt, but take it seriously
GenAI may not have lived up to the hype of its transformative potential yet, but it nonetheless has meaningful applications in cybersecurity. It won’t solve the skills shortage in the near term, but it can lift some of the burden off security teams; and the better organizations manage and maintain their IT data, the more AI will be able to detect and even prevent over time. By taking lessons from recent experiences with shadow IT and cloud adoption, security teams can prepare effectively for the day when AI does start to realize its wilder dreams, and keep their enterprises safe.
Explore more AI perspectives
Check out these additional resources: