AI Comes Into Its Own
2024 may go down as the year AI stopped being a technological novelty and became, more consequentially, a Fact of Life. Big names like Microsoft, Salesforce, and Intuit built AI into mainstream enterprise solutions; specialised AI apps and services sprang up for everything from copywriting to data analysis; and governments, think tanks, and regulators poured effort into setting up meaningful guardrails for AI development and use. Meanwhile, bad actors wasted no time finding new ways to dupe, intimidate, and extort using AI tools.
This special issue of AI Pulse looks back over the AI trends in 2024 and what they mean for the year ahead.
AI Trends in 2024
AI Advances by Leaps and Bounds
Our previous AI Pulse was dedicated mostly to agentic AI—for good reason. Autonomous, cooperative machine-based problem solving is widely seen as an essential step along the path to artificial general intelligence (AGI). All the big AI players spotlighted R&D efforts in the agentic arena over the course of 2024—and non-AI players moved in to offer AI agents as a service (AIAaaS).
Teaching computers to use computers
One of the year’s big agentic releases was the public beta of Computer Use for Anthropic’s Claude 3.5 Sonnet model. As the name suggests, Computer Use allows Claude 3.5 Sonnet to use a computer by ‘looking’ at the screen, manipulating the cursor, clicking on links, and entering text. Other developers are also working on web-savvy agents, though assessing their performance at scale is a widely recognised challenge. ServiceNow is aiming to change that with its AgentLab offering: an open-source Python package launched in December that can run large-scale web agent experiments in parallel across a diverse range of online environments.
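Under the hood, screen-using agents follow a simple observe-act loop: capture the screen, ask the model what to do next, carry out the action, and repeat. The sketch below illustrates that loop in Python using pyautogui for screen capture and simulated input; the plan_next_action() call is a hypothetical placeholder for a model API (Computer Use or otherwise), not Anthropic’s actual interface.

```python
import time
import pyautogui  # library for screenshots and simulated mouse/keyboard input

def plan_next_action(screenshot, goal):
    """Hypothetical stand-in for a model call (e.g. a Computer Use-style API).

    A real agent would send the screenshot and goal to the model and get back
    a structured action such as {"type": "click", "x": 400, "y": 300}.
    """
    raise NotImplementedError("wire this up to your model provider")

def run_agent(goal, max_steps=20):
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()          # 'look' at the screen
        action = plan_next_action(screenshot, goal)  # decide what to do next
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])        # move/click the cursor
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.02)   # enter text
        elif action["type"] == "done":
            return action.get("result")
        time.sleep(1)  # let the UI settle before the next observation

# run_agent("Find the latest AgentLab release notes")
```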
From RAGs to AI riches
AI systems need relevant data to solve problems effectively. Retrieval-augmented generation (RAG) provides that by giving systems access to contextually significant information instead of broad, unfocused data sets. On its own, RAG has been found to reduce AI hallucinations and outperform alternative approaches such as long-context transformers and fine-tuning. Combining RAG with fine-tuning produces even better results.
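At its core, RAG is retrieval bolted onto generation: index a corpus, pull back the passages most relevant to the question, and prepend them to the prompt so the model answers from context rather than from memory alone. Below is a minimal sketch using scikit-learn’s TF-IDF retrieval; the call_llm() function is a hypothetical placeholder for whichever model endpoint you use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "RAG retrieves contextually relevant passages before generation.",
    "Fine-tuning adjusts model weights on domain-specific data.",
    "Long-context transformers accept very large prompts directly.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)  # index the corpus once

def retrieve(query, k=2):
    """Return the k passages most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def call_llm(prompt):
    raise NotImplementedError("plug in your preferred LLM client here")

def answer(query):
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)  # hypothetical model call -- substitute your own client
```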
Anthropic announced its own spin on RAG earlier this fall with “contextual retrieval”—said to make information retrieval more successful—and a new Model Context Protocol (MCP) for connecting AI assistants to data systems in a reliable and scalable way.
Trend Micro has found RAG isn’t without its risks. Exposed vector stores and LLM-hosting platforms can lead to data leaks and unauthorised access, and security issues such as data validation bugs and denial-of-service (DoS) attacks are common across RAG components. Beyond authentication, Trend recommends implementing transport layer security (TLS) encryption and zero-trust networking to prevent unauthorised access and manipulation.
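As one illustration of those recommendations, any call from an application to a retrieval or vector-store service should at minimum verify the server’s certificate, carry authentication, and validate what comes back before it reaches a prompt. The sketch below shows that pattern with the requests library; the endpoint, CA bundle path, and token variable are illustrative assumptions, not a specific product’s API.

```python
import os
import requests

# Hypothetical retrieval endpoint inside a RAG deployment
RETRIEVAL_URL = "https://vector-store.internal.example/search"

def search_vector_store(query: str, top_k: int = 5):
    resp = requests.post(
        RETRIEVAL_URL,
        json={"query": query, "top_k": top_k},
        headers={"Authorization": f"Bearer {os.environ['RETRIEVAL_TOKEN']}"},
        # Pin trust to an internal CA bundle rather than disabling verification
        verify="/etc/ssl/certs/internal-ca.pem",
        timeout=10,
    )
    resp.raise_for_status()  # surface auth or availability failures early
    results = resp.json()
    # Basic response validation before retrieved text is placed into a prompt
    if not isinstance(results, list):
        raise ValueError("unexpected response shape from vector store")
    return results
```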
‘Smallifying’ AI models
Hand in hand with the shift to agentic AI is the need for smaller, nimbler, faster models purpose-built for specific tasks. Again, lots of work went into this in 2024. In October, Meta released updates to its Llama AI models that are as much as four times faster and 56% smaller than their precursors, enabling sophisticated AI features on devices as small as smartphones. And Nvidia released its Nemotron-Mini-4B-Instruct small language model (SLM), which gets VRAM usage down to about 2GB for far faster inference than larger models.
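For a sense of what running a small model locally looks like, the sketch below loads an instruction-tuned SLM with 4-bit quantization via Hugging Face transformers and bitsandbytes, which keeps weight memory in the low single-digit gigabytes. The model ID and generation settings are assumptions; check the model card for the exact usage your chosen model expects.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "nvidia/Nemotron-Mini-4B-Instruct"  # assumed Hugging Face model ID

# 4-bit quantization keeps the weights small enough for a few GB of VRAM
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Summarise the benefits of small language models in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```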
Smaller models aren’t only more nimble: they’re also more energy-efficient than LLMs—and more affordable, too. That in turn makes them more widely accessible. All of this aligns well with the UN Sustainable Development Goals.
AI Fraud and Cybercrime: Seeing is no Longer Believing
Most experts agree AI can’t yet generate wholly novel threats, but in 2024 it certainly proved it can make existing attack vectors a lot more potent—especially large-scale, highly targeted phishing schemes. Deepfake propaganda took a toll on the public discourse. AI-abetted cybercrimes cost businesses millions if not billions. And phenomena like virtual kidnappings ushered in a new era of do-it-from-your-desktop extortion.
Deception gets an upgrade
2024 kicked off with the story of an employee in Hong Kong who paid US$25 million to fraudsters because he thought his CEO told him to—when really it was a video deepfake. Scammers in India put a businessman under ‘house arrest’ and staged a fake online court proceeding to fleece him of more than US$800,000. Virtual kidnappings became a real-world threat with deepfake media convincing victims their loved ones had been abducted and would be harmed unless a ransom was paid. And in November, Forbes profiled a new deepfake tool that can circumvent two-factor authentication, allowing criminals to open illegitimate accounts to access credit and loans, claim government benefits, and more.
Cases like these prompted the U.S. Financial Crimes Enforcement Network (FinCEN) to issue an alert in November about deepfake fraud targeting financial institutions and customers. Trend Micro continues to track the growing spread of pig butchering in particular, a form of investment fraud and romance scam that increasingly uses fake images and phoney portfolios to trick individuals out of their money.
Democracy, deepfake-style
More than 40 countries held elections in 2024. Anticipating that slate, tech industry leaders signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections at the Munich Security Conference in February, committing to counter harmful AI-generated content by working together on tools for detection and intervention, educational initiatives and more.
Despite the Accord, deepfake images repeatedly shaped public perceptions in elections throughout the year—including, in the U.S., AI-generated shots of Taylor Swift fans declaring support for then-candidate Donald Trump, and depictions of Kamala Harris leading a communist rally.
In December, it emerged that a far-right candidate in Romania had been boosted by paid content on TikTok that violated both the platform’s policies and Romanian law. The reportedly anti-NATO, pro-Putin candidate may also have benefited from data thefts and cyberattacks, some of which were traced back to Russian cybercrime platforms.
Bottling the Genie: AI Regulations
Aware of the current and potential risks—including ‘rogue AI’ systems that might act against human interests—governments and regulators took steps to constrain AI development and use throughout 2024. Some observers felt the new measures didn’t go far enough; others argued they went too far, at a risk to innovation.
We are the world: Global views on AI
The Global Digital Compact (GDC) adopted at the end of September’s UN Summit of the Future is a framework for overseeing AI and other digital technologies. Its five goals are to:
- Bridge digital divides
- Make the digital economy inclusive
- Ensure digital spaces protect human rights
- Advance good data governance
- Enhance “international governance of artificial intelligence for the benefit of humanity”
The GDC’s incorporation into the UN’s larger Pact for the Future was proof that AI safety and digital equity are seen as important at the highest levels.
The OECD also weighed in on AI in 2024, with its Expert Group on AI Futures publishing a November report on top risks and policy priorities. Among the risks the group highlighted, more sophisticated cyberattacks ranked at the top.
Putting people first: New measures in the EU
In March, the EU approved an Artificial Intelligence Act to ensure safety, fundamental human rights, and AI innovation. The Act bans specific applications that threaten human rights, such as using biometrics to “categorise” people, building facial recognition databases from the internet and CCTV footage, and using AI for social scoring, predictive policing, or human manipulation.
That was followed in December by the EU’s Cyber Resilience Act, which requires digital product manufacturers, software developers, importers, distributors, and resellers to design in cybersecurity features such as incident management, data protection, and support for updates and patches. Product makers must also address any vulnerabilities as soon as they’re identified. Violations can result in heavy penalties and sanctions.
Also in December, the EU updated its Product Liability Directive (PLD) to include software—unlike other jurisdictions such as the U.S. that don’t see software as a ‘product’. This makes software companies liable for damages if their solutions are found to contain defects that cause harm, including, by implication, AI models.
Born in the USA: AI regulation stateside
The back half of the year was busy at the federal level in the U.S., with the White House issuing its very first National Security Memorandum on AI in October. The memo called for “concrete and impactful steps” to:
- Ensure U.S. leadership in the development of safe, trustworthy AI
- Advance U.S. national security with AI
- Drive international agreements on AI use and governance
In November, the National Institute of Standards and Technology (NIST) formed a task force, Testing Risks of AI for National Security (TRAINS), to deal with AI’s national security and public safety implications. TRAINS members represent the Departments of Defense, Energy, and Homeland Security as well as the National Institutes of Health, and will facilitate coordinated assessment and testing of AI models in areas of national security concern such as radiological, nuclear, chemical, and biological security, cybersecurity, and more.
Also in November, the Departments of Commerce and State co-convened the International Network of AI Safety Institutes for the first time, focusing on synthetic content risks, foundation model testing, and advanced AI risk assessment.
Across the equator: AI regs in Latin America
Most Latin American countries have taken steps to deal with AI risks while embracing its potential. According to White & Case, Brazil and Chile are amongst those with the most detailed proposals while others, such as Argentina and Mexico, have come at the issue more generally. Some are focused on mitigating risks, either through prohibitions or regulatory constraints, while others see opportunity in taking a freer approach that invites innovation and international investment.
Know Thy Enemy: AI and Cyber Risk
To regulate AI, it’s important to know what the risks actually are. In 2024, OWASP, MIT, and others dedicated themselves to the task of identifying and detailing AI vulnerabilities.
OWASP’s LLM chart-toppers
The Open Worldwide Application Security Project (OWASP) unveiled its 2025 Top 10 risk list for LLM applications. Back again are old chestnuts like prompt injection risks, supply chain risks, and improper output handling. New additions include vector and embedding weaknesses, misinformation, and unbounded consumption (an expansion of the previous DoS risk category).
OWASP expanded its concerns about ‘excessive agency’ largely because of the rise in semi-autonomous agentic architectures. As OWASP puts it, “With LLMs acting as agents or in plug-in settings, unchecked permissions can lead to unintended or risky actions, making this entry more critical than ever.”
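A common mitigation for excessive agency is to stop trusting the model’s tool choices outright and enforce an explicit allow-list plus a confirmation step outside the model. The sketch below shows that pattern in plain Python; the tool names and the confirm() hook are illustrative assumptions, not part of any specific agent framework.

```python
# Tools the agent is explicitly allowed to call, and which need human sign-off
ALLOWED_TOOLS = {"search_docs", "read_ticket", "send_email"}
REQUIRES_CONFIRMATION = {"send_email"}

class ToolPolicyError(Exception):
    pass

def confirm(tool_name, arguments) -> bool:
    """Illustrative human-in-the-loop hook; replace with your own approval flow."""
    answer = input(f"Allow {tool_name} with {arguments}? [y/N] ")
    return answer.strip().lower() == "y"

def execute_tool_call(tool_name, arguments, registry):
    if tool_name not in ALLOWED_TOOLS:
        raise ToolPolicyError(f"tool '{tool_name}' is not on the allow-list")
    if tool_name in REQUIRES_CONFIRMATION and not confirm(tool_name, arguments):
        raise ToolPolicyError(f"tool '{tool_name}' was not approved")
    return registry[tool_name](**arguments)  # run with least-privilege credentials
```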
MIT also contributed to the effort to track AI risks. In August, it launched a public AI Risk Repository with more than 700 risks based on over 40 different frameworks, with citations and risk taxonomies.
AI Can do Good, Too
While it’s important to be clear about the risks of AI, it’s just as important to stay mindful of the benefits—and a number of efforts sought to highlight those positive capabilities in 2024.
Beating the bad guys to it
Using AI to discover vulnerabilities and exploits got a fair bit of attention throughout the year. While AI isn’t always needed, in situations where complexity is high and unknowns abound, it can deliver excellent results. The Frontier Model Forum found vulnerability discovery and patching is an emerging area of AI strength, due partly to increased use of coding examples in post-training and partly because of expanding context windows. AI can also support open-source intelligence gathering and reporting through real-time monitoring and analysis, trend identification, and more.
As predicted by Trend Micro for 2025, agentic AI could expand on those capabilities with a combination of tooling, data, and planning that reduces the amount of human brain time involved. Combining agentic use of reverse-engineering tools such as IDA, Ghidra, and Binary Ninja with code similarity, architectural RAG, and algorithm identification for compiled code could be a powerful lever in the cybersecurity arms race.
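As a deliberately simplified illustration of the code-similarity piece, the sketch below compares a decompiled function against a small corpus of known-vulnerable patterns using token-level Jaccard similarity. Real pipelines would use learned embeddings over disassembly or decompiler output; the snippets and threshold here are illustrative assumptions only.

```python
import re

# Illustrative corpus of decompiled snippets previously linked to known flaws
KNOWN_VULNERABLE = {
    "unchecked buffer copy": "char buf[64]; strcpy(buf, user_input);",
    "format string issue": "printf(user_input);",
}

def tokens(code: str) -> set:
    return set(re.findall(r"[A-Za-z_]\w*", code.lower()))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_similar(decompiled: str, threshold: float = 0.3):
    """Return known-vulnerable patterns the decompiled code resembles."""
    query = tokens(decompiled)
    return [
        (name, round(jaccard(query, tokens(snippet)), 2))
        for name, snippet in KNOWN_VULNERABLE.items()
        if jaccard(query, tokens(snippet)) >= threshold
    ]

print(flag_similar("char dst[32]; strcpy(dst, argv[1]);"))
```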
Promoting public peace
Trend took part in the 2024 Paris Peace Forum and announced its partnership with the Forum on developing guidance for secure AI adoption and implementation. As Martin Tisné, CEO of the AI Collaborative, said at the Forum meeting, what’s most important is to ensure that AI is outcomes-based from the start, so that its development and uses coincide with the good it can bring to society.
What’s ahead?
This time of year is rife with predictions, and we’ll be sharing more of our own in the weeks to come. What’s clear from the AI trends of 2024 is that innovation won’t be slowing down anytime soon: the full agentic revolution is still about to hit, and with it will come new choices for regulators, new capabilities for cybercriminals to weaponise, and new opportunities for cyber-defenders to proactively secure the digital world.
More perspective from Trend Micro
Check out all our 2024 issues of AI Pulse:
- AI Pulse: Siri Says Hi to OpenAI, Deepfake Olympics & more
- AI Pulse: Brazil Gets Bold with Meta, Interpol’s Red Flag & more
- AI Pulse: Sticker Shock, Rise of the Agents, Rogue AI
- AI Pulse: What's new in AI regulations?
- AI Pulse: Election Deepfakes, Disasters, Scams & more
- AI Pulse: The Good from AI and the Promise of Agentic