Generative AI continues to be misused and abused by malicious individuals. In this article, we dive into new criminal LLMs, criminal services with ChatGPT-like capabilities, and deepfakes being offered on criminal sites.
This report discusses the state of generative artificial intelligence (AI) in the cybercriminal underground: how cybercriminals are using ChatGPT, how they're adding ChatGPT features to their criminal products, and how they're trying to strip its restrictions so they can ask ChatGPT anything.
From articles to hackathons, cybercriminals are resorting to crowdsourcing to find more ways to exploit systems. In this article, we tackle these contests, explore their results, and anticipate their possible impacts on the work of cybersecurity defenders.
Cybercriminal groups cannot operate independently. To keep their operations up and running, they need specific services provided by third parties. We investigate one such business that has been integral to the activities of a number of high-profile gangs.
Innovators are diving into a new and immersive virtual space, but with new technology comes new threats. We highlight potential issues that metaverse pioneers should be wary of.
We examine an emerging business model that involves access brokers selling direct access to organizations and stolen credentials to other malicious actors.
Our two-year research provides insights into the life cycle of exploits, the types of exploit buyers and sellers, and the business models that are reshaping the underground exploit market.
We take a closer look at an emerging underground market that is driven by malicious actors who sell access to a gargantuan amount of stolen data, frequently advertised in the underground as a “cloud of logs.”
Our underground monitoring revealed several ways in which criminals have been entertaining themselves during isolation, with everyday activities that offer cybercrime-related prizes.