AI Pulse: The Good from AI and the Promise of Agentic AI
The perils of AI get a lot of airtime, but what are the upsides? This issue of AI Pulse looks at some of the good AI can bring, from strengthening cybersecurity to driving health breakthroughs—and how the coming wave of agentic AI is going to take those possibilities to a whole new level.
What Good is AI?
As a cybersecurity company, we at Trend Micro spend a lot of time in the ‘risk zone’: anticipating threats and figuring out how to neutralize them. But we appreciate it’s also important to take a step back and consider the tremendous power of AI to bring positive change to humanity.
This issue of AI Pulse looks at some of the collective action being taken to make AI safe and beneficial, including efforts to strengthen cybersecurity through AI-enhanced vulnerability hunting and threat detection. We also consider how agentic AI will usher in whole new possibilities for improving people’s lives as our civilization continues down the road toward artificial general intelligence (AGI).
What’s New: AI as a Force for Good
In France, you can’t spell peace—“paix”—without AI
The Paris Peace Forum sees peace as more than just the absence of war: it’s a cooperative approach to tackling challenges such as climate change, migration, and cyber threats, big issues that affect all countries and have no regard for national borders. This year’s Forum, held November 11–12, made it clear that those threats (and opportunities) very much include AI. Sessions across the two days covered everything from responsible AI governance and bridging the AI divide to disinformation and women in AI.
Trend was there and announced plans to partner with the Forum on developing guidance for secure AI adoption and implementation. Trend will also take part in the AI Action Summit led by French President Emmanuel Macron this coming February, sharing insights into AI threats and how to meet them with advanced cybersecurity tools and practices.
Is agentic AI good for your health?
According to MIT Technology Review, it very well could be. The magazine recently ran a story about the potential of agent-driven AI to unlock breakthroughs in biology. The piece was notable for a couple of reasons. First, it asserted that AI agents’ unstructured, ‘generalist’ problem-solving approach stands to penetrate the complexity that has long been a barrier for human researchers. Second, the piece itself was written by an AI system developed by Owkin, the company branded as “the world’s first end-to-end AI biotech.”
AI’s dual ability to drive discovery and to contribute to high-quality scientific communication has many observers excited. Of course, there are risks to manage: The Verge recently reported on hallucinated passages cropping up inappropriately in AI-generated medical transcripts, and Physics World cited a study that found certain computer-aided diagnoses were less accurate for some patient groups than others because of biased datasets. But on balance, the promise of agentic AI to improve lives and boost general scientific literacy looks like a net positive and a good use of transformative technology.
Gossip on the AI grapevine
Every leap forward amplifies AI’s capacity to do good, and the rumor mill is churning right now with hints of new releases from three major players. OpenAI CEO Sam Altman has denied it, but word on the street is that the company will launch a new platform, possibly GPT-5, currently codenamed ‘Orion’, as soon as December. An exec fueling the rumors has teased that the new platform could be 100 times more powerful than GPT-4. According to The Verge, OpenAI’s goal is to move closer to AGI by combining its various LLMs (though some sources claim the company has already achieved AGI internally).
Anthropic is working on the next iteration of its platform, Claude 3.5 Opus. CEO Dario Amodei is being similarly coy about a release date, but observers expect to see something before the turn of the year. No specific enhancements have been announced, but “faster, better, smarter” seems a fair expectation, along with further proof of Anthropic’s commitment to ethical AI development.
Google is also said to be preparing the launch of Gemini 2.0 before the end of the year. Again, details are scant, though TechRadar suggests “smarter answers, faster processing, support for longer inputs, and more reliable reasoning and coding” are likely. A little farther on the horizon is the next release of Meta’s Llama platform. A company spokesman has said Llama 4 can be expected early in the new year with enhancements that approach autonomous machine intelligence (AMI)—“perception, reasoning, and planning” via techniques such as chain-of-thought. It's clear from these rumors that competition remains fierce, with AGI the prize everyone is ultimately after.
To be open or not open, that is the question
As the AI majors labor away on their new models, debate continues to swirl around the pros and cons of open-source AI. The con side worries that open-source AI can too easily be misused, a risk China’s co-opting of Llama 2 for military use suggests is genuine. The pro camp counters with the success of open source in domains such as internet infrastructure and music streaming, arguing that openness tends to produce stronger, more trustworthy solutions. A recent Economist opinion piece applauded the at least partial openness of today’s major AI models and urged governments to encourage open-source development through regulation and IP laws.
Trends in AI-enabled Cybersecurity
[Image generated with Bing: an AI agent as a pest exterminator, hunting for bugs]
AI goes bug hunting
As the AI ‘arms race’ continues to escalate, defenders are leveraging every advantage they can. They now have a new tool at their disposal: AI for automated vulnerability hunting. Back in October, Protect AI launched Vulnhuntr: an AI-enhanced Python static code analyzer that can “find and explain complex, multistep vulnerabilities”—and that has reportedly discovered more than a dozen zero-day vulnerabilities in open-source AI projects.
In early November, Google announced its AI-driven discovery of a zero-day vulnerability in the SQLite open-source database engine, the first to be identified with Google’s LLM-assisted Big Sleep framework. Both discoveries underscore an advantage of open-source development: finding vulnerabilities is much easier with access to source code than without.
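For a sense of how these tools operate, here is a minimal sketch of the underlying pattern: gather source code, hand it to an LLM with an auditor prompt, and ask for findings with full call chains. This is an illustration only, not the actual Vulnhuntr or Big Sleep implementation; the gpt-4o model name, the prompt, and the hunt() helper are assumptions.

```python
# Illustrative sketch of LLM-assisted vulnerability hunting (NOT the actual
# Vulnhuntr or Big Sleep implementation): gather source files, hand them to
# a model with an auditor prompt, and ask for findings with call chains.
from pathlib import Path

from openai import OpenAI  # any chat-style LLM client would work here

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

AUDITOR_PROMPT = """You are a security auditor. Review the Python code below
for remotely exploitable issues (injection, path traversal, SSRF, unsafe
deserialization). For each finding report: file, line, vulnerability class,
and the full call chain from user input to the dangerous sink."""

def hunt(repo_dir: str, model: str = "gpt-4o") -> str:
    """Naively concatenate a repo's Python sources and request findings."""
    sources = [
        f"# --- {path} ---\n{path.read_text(errors='ignore')}"
        for path in Path(repo_dir).rglob("*.py")
    ]
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": AUDITOR_PROMPT},
            {"role": "user", "content": "\n\n".join(sources)},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(hunt("path/to/ai-project"))
```

Real tools are considerably smarter about context than this naive concatenation: Vulnhuntr, for instance, reportedly analyzes call chains step by step rather than dumping a whole repository into a single prompt.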
Enlisting AI as a cyber defender
A recent article by McKinsey found that 17 of the top 32 cybersecurity offerings now include advanced AI use cases, with the majority of enterprise customers (more than 70%) “highly willing” to pay for AI-based security tools. Security operations is reportedly a big focus area, especially threat detection and response—including using AI to query large data sets and recommend protective actions.
Other opportunity areas include cloud security, endpoint security, and the use of AI assistants for autofilling questionnaires and reports. Google, meanwhile, has been promoting its three-month AI for Cybersecurity Growth program to help startups “innovate responsibly” when developing AI-enabled cybersecurity solutions.
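As a toy illustration of the threat detection and response use case mentioned above, the sketch below flags anomalous login events with an unsupervised scikit-learn model and prints a recommended protective action. The features, data, and suggested responses are invented for illustration; production tools combine far more signals and richer models.

```python
# Toy sketch of AI-based threat detection: flag anomalous login events with
# an unsupervised model. Features, data, and suggested actions are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, failed_logins, MB_transferred, is_new_device]
events = np.array([
    [9, 0, 12.0, 0], [10, 1, 8.5, 0], [11, 0, 15.2, 0],  # typical workday
    [14, 0, 9.8, 0], [15, 0, 11.1, 0], [16, 1, 10.4, 0],
    [3, 7, 950.0, 1],                                     # 3 a.m. transfer
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(events)

for event, label in zip(events, detector.predict(events)):  # -1 = anomaly
    if label == -1:
        print(f"ALERT {event.tolist()}: recommend forcing re-authentication "
              "and isolating the host pending review")
```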
Taking a stand for AI safety
Our September issue of AI Pulse looked at some of the collective and regulatory efforts underway to fight AI threats. Since then, the landscape has continued to evolve.
In early October, MITRE announced its AI Incident Sharing Initiative, which gives a community of member companies access to anonymized data about incidents affecting AI-enabled systems. At the end of that same month, Anthropic posted a strongly worded call for targeted AI regulation, saying governments need to act in the next 18 months or risk losing control of AI. The post also promoted Anthropic’s responsible scaling policy (RSP) as “an adaptive framework for identifying, evaluating, and mitigating catastrophic risks.” While the RSP framework is meant to be adopted voluntarily, Anthropic has explicitly recognized the need for enforceable regulation as well.
Toward the end of November, the U.S. Commerce Department and U.S. State Department co-hosted the International AI Safety Institute Summit in San Francisco, gathering AI experts from the International Network of AI Safety Institutes and other government-supported scientific offices to advance collaboration and knowledge sharing on AI safety. Network members include Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK, and the U.S. The November meeting was another precursor to the February 2025 AI Action Summit in Paris.
What’s Next: Agentic AI
AI that can think for itself
AI agents featured briefly in our August AI Pulse but deserve a deeper look as the widely acknowledged Next Big Thing on the path to AGI.
In the simplest terms, agentic AI is artificial intelligence that can make decisions and take actions without human intervention. Gartner has called it the most important strategic technology of the next few years, projecting that AI agents will be making about 15% of daily work decisions by 2028.
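A minimal sketch makes that definition concrete: an agent is a loop in which the model decides whether to act (by calling a tool) or to finish, with no human in between. The run_agent() helper, the search_flights tool, and the gpt-4o model name below are placeholder assumptions, not any particular vendor’s design.

```python
# Minimal sketch of an agentic loop: the model decides whether to act (call
# a tool) or finish, with no human in between. Tool, model name, and step
# budget are placeholder assumptions, not any vendor's actual design.
import json

from openai import OpenAI

client = OpenAI()

def search_flights(destination: str) -> str:
    """Stand-in tool; a real agent would call a flight-search API."""
    return json.dumps({"destination": destination, "price_usd": 420})

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_flights",
        "description": "Look up the cheapest flight to a destination.",
        "parameters": {
            "type": "object",
            "properties": {"destination": {"type": "string"}},
            "required": ["destination"],
        },
    },
}]

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):  # autonomous until done or out of budget
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=TOOLS,
        ).choices[0].message
        messages.append(reply)
        if not reply.tool_calls:        # model chose to finish
            return reply.content
        for call in reply.tool_calls:   # model chose to act: run the tool
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": search_flights(**args),  # dispatch by name IRL
            })
    return "Step budget exhausted."

print(run_agent("Find the cheapest flight to Paris and summarize the result."))
```

Most agent frameworks are elaborations of some variant of this loop; they differ mainly in how they manage tools, memory, and guardrails around it.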
You get an agent! And you get an agent!
October was a big month for agentic AI. Microsoft announced Copilot Studio users can now create autonomous agents and added nearly a dozen agents to its Dynamics 365 ERP/CRM suite. Anthropic rolled out AI agents that can ‘read’ computer screen contents and execute tasks in software applications and on the internet. Salesforce unveiled ‘Agentforce’, which lets users create their own agents to carry out business tasks and has already been adopted by OpenTable, Saks, and Wiley. Intuit announced the launch of Intuit Assist, an agentic offering that provides personalized recommendations to users of TurboTax, Credit Karma, QuickBooks, and Mailchimp.
As companies create and deploy agentic capabilities, the search is on for ways to keep them streamlined, efficient, and coordinated—scaling up their problem-solving power by working together. OpenAI is testing out a ‘Swarm’ approach for lightweight agent coordination, and Trend experts have been following the concept of agentic meshes, which would interconnect AI agents to cooperatively pursue human-prompted goals.
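The handoff idea behind such lightweight coordination schemes can be distilled to a few lines: each agent either completes a task or hands it to a better-suited peer, with a hop limit so delegation terminates. The sketch below is a plain-Python distillation under those assumptions; the Agent class and keyword routing are invented for illustration, not OpenAI’s Swarm API or any agentic-mesh specification.

```python
# Distilled sketch of lightweight agent handoff, the idea behind schemes
# like Swarm and agentic meshes: each agent answers or hands off to a
# better-suited peer. Classes and routing here are invented, not any
# vendor's API; a real system would let an LLM pick the handoff target.
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Agent:
    name: str
    handle: Callable[[str], Union[str, "Agent"]]  # answer or handoff

def billing(task: str) -> str:
    return f"[{task}] resolved by billing: refund issued"

def support(task: str) -> str:
    return f"[{task}] resolved by support: workaround provided"

billing_agent = Agent("billing", billing)
support_agent = Agent("support", support)

def triage(task: str) -> Union[str, Agent]:
    # Keyword routing stands in for an LLM's handoff decision.
    return billing_agent if "refund" in task.lower() else support_agent

def run(agent: Agent, task: str, max_hops: int = 3) -> str:
    for _ in range(max_hops):   # hop limit guarantees delegation terminates
        result = agent.handle(task)
        if isinstance(result, str):
            return result
        agent = result           # handoff: a new agent takes over
    return "handoff limit reached"

print(run(Agent("triage", triage), "Customer wants a refund for overbilling"))
```

Each hop in such a scheme is also an obvious place to attach logging and audit hooks, which matters for the trust-building discussed next.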
With agentic meshes, trust-building mechanisms will be critical, along with ways for humans to supervise and audit agentic AI activity. This will become even more important as AI agents start creating agents of their own, propagating autonomy ‘invisibly’ throughout an increasingly machine-made environment.
The golden ring: AGI
Self-directed AI agents are considered by many to be an essential steppingstone toward AGI, which Anthropic’s Dario Amodei prefers to call “powerful AI”. In his essay, “Machines of Loving Grace”, Amodei says it’s “critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for....”
Amodei notes a handful of areas where post-agentic AGI stands to dramatically improve human life: biology and physical health, neuroscience and mental health, economic development and poverty reduction, peace and governance, and work and meaning.
Not surprisingly, OpenAI’s Sam Altman is similarly optimistic, envisioning a future in which everyone has their own “personal AI team” of “virtual experts in different areas, working together to create almost anything we can imagine.”
With agentic AI upon us—and about to unleash the next wave of the AI revolution in 2025—Amodei and Altman may have it right: that the best way forward is to allow ourselves to be inspired by, and excited about, the good AI can do, even as we remain alert to the risks we collectively need to manage.
More perspective from Trend Micro
Check out these additional resources: