Artificial Intelligence

AI Assistants in the Future: Security Concerns and Risk Management

December 06, 2024

Digital Assistants (DAs) are AI-driven software, sometimes embedded in dedicated hardware and integrated with multiple systems, that understand natural language and use it to perform various tasks.


  • December 04, 2024
How can misconfigurations help threat actors abuse AI to launch hard-to-detect attacks with massive impact? We reveal how AI models stored in exposed container registries could be tampered with, and how organizations can protect their systems.
  • November 28, 2024
This analysis investigates the security risks that deepfake attacks pose to eKYC systems, highlighting the diverse strategies cybercriminals employ to bypass eKYC security measures.
  • October 24, 2024
    Elections are not just an opportunity for nation states to use Generative AI tools to damage a politician’s reputation. They are also an opportunity for cybercriminals to use Generative AI tools to orchestrate social engineering scams.
  • September 26, 2024
    Businesses worldwide are increasingly moving to the cloud. The demand for remote work, for instance, has driven a surge in cloud-based services, offering companies the flexibility and efficiency that traditional data centers often lack.
  • September 19, 2024
Elections are the cornerstone of modern democracy, an exercise where a populace expresses its political will through the casting of ballots. But as electoral systems adopt and embrace technology, they take on significant cybersecurity risks, not only to the infrastructure supporting an election but also to the people lining up in polling booths.
  • July 30, 2024
The cybercriminal abuse of generative AI (GenAI) is developing at a blazing pace. Only a few weeks after we reported on GenAI and how it is used for cybercrime, new key developments have emerged. Threat actors are expanding their offerings of criminal large language models (LLMs) and deepfake technologies, ramping up the volume and extending their reach.
  • July 25, 2024
The adoption of large language models (LLMs) and Generative Pre-trained Transformers (GPTs), such as ChatGPT, by leading firms like Microsoft, Nuance, Mix, and Google CCAI Insights is driving the industry toward a series of transformative changes. As the use of these new technologies becomes prevalent, it is important to understand their key behaviors, advantages, and the risks they present.
  • June 04, 2024
    This article discusses the importance of properly identifying and protecting AI model files and their associated assets, such as labels, from malicious or even unintended tampering.
  • May 08, 2024
    Generative AI continues to be misused and abused by malicious individuals. In this article, we dive into new criminal LLMs, criminal services with ChatGPT-like capabilities, and deepfakes being offered on criminal sites.