Cybersecurity of AI-Agents and AI-Agents in cybersecurity // AI vs AI
Both sides: the cybersecurity risks of AI agents, and the use of AI agents to prevent cybersecurity risks.
IBM has introduced a new generative AI-powered Cybersecurity Assistant within its managed Threat Detection and Response Services, aimed at enhancing security operations for clients. Built on the watsonx data and AI platform, this assistant helps IBM Consulting analysts quickly identify, investigate, and respond to security threats. It integrates historical correlation analysis and a conversational AI engine to accelerate threat investigations, streamline operational tasks, and improve the overall security posture for clients.
The speaker discusses the rapid growth and potential of AI agents, specifically mentioning a framework called CrewAI, which has executed over 10 million agents in the past month. AI agents, built using large language models (LLMs) like GPT, can make autonomous decisions and perform complex tasks, streamlining processes that traditionally required manual engineering. The speaker explains how CrewAI facilitates the creation and orchestration of these agents, leading to more efficient automation across various functions, such as marketing, lead qualification, and documentation.
CrewAI has garnered significant attention, with a large community and industry support, and the speaker emphasizes the transformative potential of AI agents, comparing their impact to that of the internet. The talk concludes with announcements of new CrewAI features, including automated tool-building, agent training, and a universal platform for integrating third-party agents. The speaker also introduces CrewAI Plus, an enterprise offering that allows users to deploy AI agents as scalable APIs, promising quick and efficient integration into existing systems.
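As a rough illustration of what defining and orchestrating agents with CrewAI looks like, here is a minimal sketch; the roles, goals, and task descriptions below are invented for this example, so check the CrewAI documentation for the current API details:

```python
# Minimal CrewAI sketch: two agents collaborating on a small task pipeline.
# Roles, goals, and task text are illustrative only.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Threat Intelligence Researcher",
    goal="Summarize recent phishing techniques relevant to our organization",
    backstory="An analyst who tracks phishing campaigns and attacker tooling.",
)

writer = Agent(
    role="Security Awareness Writer",
    goal="Turn threat research into a short advisory for employees",
    backstory="A technical writer focused on clear, actionable guidance.",
)

research_task = Task(
    description="Collect and summarize the three most common phishing lures this quarter.",
    expected_output="A bullet-point summary of phishing lures and their indicators.",
    agent=researcher,
)

advisory_task = Task(
    description="Write a one-page employee advisory based on the research summary.",
    expected_output="A short advisory with concrete do/don't guidance.",
    agent=writer,
)

# The Crew orchestrates the agents and runs the tasks in order.
crew = Crew(agents=[researcher, writer], tasks=[research_task, advisory_task])
result = crew.kickoff()
print(result)
```

Each agent owns a narrow role and the Crew decides the execution order, which is what lets the same pattern scale from two agents to larger pipelines.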
You can also use the educational mode to learn more about social engineering and cybersecurity threats, such as scams and phishing.
Some platforms, such as email services, constantly improve their spam filters, but cybercriminals keep learning to bypass them. Other channels, such as direct calls, SMS, and messengers, are harder to protect and are widely used for phishing, scams, and vishing.
The BRAMA agent helps users get additional context about suspicious emails, messages, and SMS.
The agent utilizes Anthropic Claude models to analyze the context. Combined with information from third-party APIs and resources, the agent can provide additional context:
Whether the URL has been marked as malicious or used for phishing
Whether the phone number calling or sending SMS to you has been used for scams, phishing, or other suspicious activities
Whether the sender's email address and the message content look suspicious
Whether the domain's Domain-based Message Authentication, Reporting, and Conformance (DMARC) record is present and valid
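For example, the DMARC check comes down to a DNS TXT lookup on the `_dmarc` subdomain. A minimal sketch using the dnspython library (an illustration, not necessarily how BRAMA implements it):

```python
# Hedged sketch of a DMARC record check using dnspython.
import dns.resolver

def get_dmarc_record(domain: str):
    """Return the DMARC TXT record for a domain, or None if it is missing."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode("utf-8")
        if record.lower().startswith("v=dmarc1"):
            return record
    return None

print(get_dmarc_record("example.com"))  # e.g. "v=DMARC1; p=reject; ..."
```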
After processing, the BRAMA agent returns a comprehensive analysis of the message or content, giving you the information you need to make an informed decision.
If the input contains multiple data types to check (domain, phone number, email), the agent calls its tools one by one. Note: some cases may still produce errors; error handling needs to be improved.
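A minimal sketch of that one-by-one dispatch, assuming hypothetical check functions and the Anthropic Python SDK for the final summary (an illustration, not BRAMA's actual code):

```python
# Hedged sketch: detect which data types the input contains, run the matching
# checks one by one, then ask Claude to summarize the findings for the user.
# check_url, check_phone, and check_email are hypothetical placeholders here.
import re
import anthropic

URL_RE = re.compile(r"https?://\S+")
PHONE_RE = re.compile(r"\+?\d[\d\s\-()]{7,}\d")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def check_url(url: str) -> str:
    # Placeholder: in practice this would query a URL-reputation / phishing-list API.
    return f"URL {url}: reputation lookup not implemented in this sketch"

def check_phone(number: str) -> str:
    # Placeholder: in practice this would query a scam phone-number database.
    return f"Phone {number}: reputation lookup not implemented in this sketch"

def check_email(address: str) -> str:
    # Placeholder: in practice this would inspect the sender domain and its DMARC record.
    return f"Email {address}: sender check not implemented in this sketch"

def analyze(text: str) -> str:
    findings = []
    for pattern, check in [(URL_RE, check_url), (PHONE_RE, check_phone), (EMAIL_RE, check_email)]:
        for match in pattern.findall(text):
            try:
                findings.append(check(match))
            except Exception as exc:  # keep one failing lookup from aborting the whole run
                findings.append(f"Check failed for {match}: {exc}")

    # Ask Claude to turn the raw findings into an explanation for the user.
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # model name is an assumption, not from the project
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": f"Message:\n{text}\n\nFindings:\n{findings}\n\n"
                       "Explain whether this looks like phishing or a scam, and why.",
        }],
    )
    return response.content[0].text
```

Wrapping each individual check in its own try/except is one simple way to keep a single failing lookup from aborting the whole analysis, which is the kind of error handling the note above says still needs work.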
A collection of agents that use Large Language Models (LLMs) to perform tasks common in day-to-day cybersecurity work.