Risk Management
Generative AI: What Every CISO Needs to Know
New technologies always change the security landscape, but few are likely to have the transformative power of generative AI. As platforms like ChatGPT continue to catch on, CISOs need to understand the unprecedented cybersecurity risks they bring—and what to do about them.
The ‘disruptive’ part of disruptive innovations often comes from the unexpected consequences they bring. The printing press made it easy to copy text, but in doing so re-wove the social, political, economic, and religious fabric of Europe. By revolutionizing human mobility, the car reshaped community design, spawning suburbs and a 20th-century driving culture. More recently, the world wide web completely transformed how people connect with each other and access information, reframing questions of privacy, geopolitical boundaries, and free speech in the process.
Generative AI seems poised to be every bit as transformative as any of these, with large language models like ChatGPT and Google Bard and image generators like DALL-E capturing outsize interest in the span of just months.
Given the rapid uptake of these tools, CISOs urgently need to understand the associated cybersecurity risks—and how those risks are radically different from any that have come before.
Unbridled uptake
To say companies are excited by the possibilities of generative AI is a massive understatement. According to one survey, just six months after the public launch of ChatGPT, 49% of businesses said they were already using it, 30% said they planned to use it, and 93% of early adopters intended to use it more.
What for? Everything from writing documents and generating computer code to carrying out customer service interactions. And that’s barely scratching the surface of what’s to come. Proponents claim AI will help solve complex problems like climate change and improve human health—for example by accelerating radiology workflows and making X-ray, CT scan, and MRI results more accurate, while improving outcomes with fewer false positives.
Yet any new technology brings risks, including novel vulnerabilities and attack modalities. Amid all the noise and confusion surrounding AI today, those risks are not yet well understood.
What makes generative AI different?
Machine learning (ML) and early forms of AI have been with us for some time. Self-driving cars, stock trading systems, logistics solutions, and more are powered today by some combination of ML and AI. In security solutions like XDR, ML identifies patterns and benchmarks behaviors, making anomalies more detectable. AI acts as a watchdog, monitoring activity and sniffing out potential threats based on that ML-derived picture of what normal, non-threatening activity looks like, triggering automated responses when needed.
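To make that concrete, here is a minimal sketch of the kind of baseline-and-flag logic such tools rely on—not any specific product's implementation. The telemetry, field names, and three-sigma threshold are illustrative assumptions.

```python
# Illustrative sketch only: learn a baseline of "normal" behavior, then flag outliers.
# The data source (daily login counts) and the 3-sigma threshold are assumptions.
from statistics import mean, stdev

def build_baseline(daily_login_counts: list[int]) -> tuple[float, float]:
    """Summarize a user's historical daily login counts."""
    return mean(daily_login_counts), stdev(daily_login_counts)

def is_anomalous(todays_count: int, baseline: tuple[float, float], sigmas: float = 3.0) -> bool:
    """Flag activity that deviates sharply from the learned baseline."""
    mu, sd = baseline
    return sd > 0 and abs(todays_count - mu) > sigmas * sd

# Example: a user who normally logs in a handful of times a day
history = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]
baseline = build_baseline(history)
if is_anomalous(42, baseline):
    print("Unusual login volume detected; trigger an automated response or alert")
```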
But ML and simpler forms of AI are ultimately limited to working with the data that’s been fed into them. Generative AI is different because its models aren’t fixed or static the way traditional ML models typically are: they keep evolving, building on the system’s past ‘experiences’ as part of its learning and allowing it to create completely new information.
Up to now, bad actors have largely avoided ML and more limited forms of AI because their outputs aren’t especially valuable for exploitation. But combining the data-processing capacity of ML with the creativity of generative AI makes for a far more compelling attack tool.
Security risks: Key questions
The British mathematician and computer scientist Alan Turing conceived of a test in the 1950s to see whether a sufficiently advanced computer could pass for a human in natural-language conversation. Google’s LaMDA AI system was widely reported to have passed that test in 2022, highlighting one of the major security concerns about generative AI: its ability to imitate human communication.
That capability makes it a powerful tool for phishing schemes, which up to now have relied on phony messages often rife with spelling mistakes. AI-created phishing texts and emails, on the other hand, are polished and error-free and can even emulate a known sender such as a company CEO issuing instructions to her team. Deepfake technologies will take this a step further with their ability to mimic people’s faces and voices and create whole ‘scenes’ that never happened.
Generative AI can do this not only on a one-to-one basis but also at scale, interacting with many different users simultaneously for maximum efficiency and a better chance of penetration. And behind those phishing schemes could be malicious code, also generated by AI, for use in cyberattacks.
Many companies have piled onto the AI chatbot bandwagon without fully considering the implications for their corporate data—especially sensitive information, competitive secrets, or records governed by privacy legislation. In fact, there are currently no clear protections for confidential information that gets entered into public AI platforms, whether that consists of personal health details provided to schedule a medical appointment or proprietary corporate information run through a chatbot to generate a marketing handout.
Inputs to a public AI chatbot become part of the platform’s experience and could be used in future training. Even if that training is moderated by humans and privacy-protected, conversations still have potential to ‘live’ beyond the initial exchange, meaning corporations do not have full control of their data once it’s been shared.
AI chatbots have famously proven susceptible to so-called hallucinations, generating false information. Reporters from The New York Times asked ChatGPT when their paper first reported on artificial intelligence and the platform conjured up an article from 1956—title and all—that never existed. Taking AI outputs on faith and sharing them with customers, partners, or the public, or building business strategies on them, is clearly a strategic and reputational corporate risk.
Equally concerning is the susceptibility of generative AI to misinformation. All AI platforms are trained on datasets, making the integrity of those datasets vitally important. Increasingly, developers are moving toward using the live, real-time Internet as a continuously updated dataset, putting AI programs at risk of exposure to bad information—either innocently erroneous or else planted maliciously to skew AI outputs, possibly creating safety and security risks.
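To ground the dataset-integrity concern, here is a minimal sketch of one possible safeguard a training pipeline could apply before ingesting web content: a domain allow-list plus optional checksum pinning. The domains, pinned hashes, and helper names are hypothetical.

```python
# Illustrative sketch: vet web-sourced training documents before ingestion.
# The allow-list, pinned hashes, and document handling are hypothetical.
import hashlib
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"example-trusted-source.org", "docs.example.com"}  # assumption
PINNED_HASHES = {}  # e.g. {"https://docs.example.com/guide": "<sha256 hex digest>"}

def is_trusted_source(url: str) -> bool:
    """Accept only documents from explicitly allow-listed domains."""
    return urlparse(url).hostname in ALLOWED_DOMAINS

def matches_pinned_hash(url: str, content: bytes) -> bool:
    """If a hash was pinned for this URL, require the content to match it."""
    expected = PINNED_HASHES.get(url)
    if expected is None:
        return True  # nothing pinned; rely on the domain allow-list alone
    return hashlib.sha256(content).hexdigest() == expected

def vet_document(url: str, content: bytes) -> bool:
    """Only documents passing both checks would be added to the training corpus."""
    return is_trusted_source(url) and matches_pinned_hash(url, content)
```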
What can be done about generative AI security risks?
Many security companies plan to use AI to combat AI, developing software to recognize AI-generated phishing scams, deep fakes, and other false information. These kinds of tools will become increasingly important going forward.
Even so, businesses need to bring their own vigilance, especially because generative AI may erode the traditional information silos that passively keep information protected. While the cloud has given businesses a kind of dry run in dealing with the liabilities of distributed data responsibilities and open systems, generative AI introduces new levels of complexity that need to be addressed with a combination of technological tools and informed policies.
Imagine, for example, a company that has historically kept its customer payment card information (PCI) separate from other datasets. If someone in the business uses a public AI platform to identify sales growth opportunities based on customers’ past spending patterns, that PCI data could become part of the AI knowledge base and crop up in other places.
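One illustrative guardrail, sketched below under the assumption of a hypothetical internal helper, is to screen outbound prompts for likely payment card numbers (regex candidates validated with the Luhn checksum) before anything reaches an external AI service.

```python
# Illustrative sketch: block or redact likely payment card numbers (PANs)
# before a prompt leaves the organization. The helper names are hypothetical.
import re

CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used to validate card number candidates."""
    digits = [int(d) for d in number[::-1]]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def redact_card_numbers(prompt: str) -> str:
    """Replace Luhn-valid card number candidates with a placeholder."""
    def _maybe_redact(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group(0))
        return "[REDACTED-PAN]" if luhn_valid(digits) else match.group(0)
    return CARD_CANDIDATE.sub(_maybe_redact, prompt)

# Example usage before calling any external AI API (the API call itself is omitted)
safe_prompt = redact_card_numbers("Top customer card 4111 1111 1111 1111 spent ...")
```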
One of the most important steps a business can take to protect itself is to avoid assuming that because it doesn’t own or sanction the use of AI tools, it’s not at risk. Employees, partners, and customers may all be using public AI platforms and wittingly or unwittingly feeding them potentially compromising corporate information.
Clearly, generative AI brings much to consider from a cybersecurity perspective, and we are just at the start of where this new technology will lead. In our next blog post, we’ll take a closer look at what organizations can do to protect themselves.
Next steps
For more Trend Micro thought leadership on generative AI, check out these resources: