AI agents are changing the cybersecurity landscape forever // red, blue, purple, and every other team

Advanced AI tools can be used by attackers and defenders alike; the question is who will use them better.

sbagency
10 min read · Aug 2, 2024

Why AI agents? AI agents are a high-level abstraction that provides remarkable flexibility for cognitive tasks that were previously impossible to automate. How about a CISO AI agent (an AI Chief Information Security Officer)?

https://www.columbusglobal.com/en/blog/ai-fighting-ai-the-future-of-cybersecurity

In “AI Fighting AI: The Future of Cybersecurity — Are You Ready?” Magnus Oxenwaldt discusses the necessity of incorporating AI into cybersecurity to keep pace with rapidly evolving threats that surpass human capabilities. As cyber threats become more sophisticated and AI-powered, traditional human-centric security methods are insufficient. AI offers proactive protection by predicting and neutralizing threats before they cause harm and should be integrated from the design phase of digital systems to ensure robust security. However, trusting AI in cybersecurity involves critical considerations, such as transparency, human oversight, and the selection of appropriate AI solutions tailored to specific security needs. The article emphasizes the importance of maintaining accurate documentation and understanding complex security landscapes, advocating for a collaborative approach where AI and human expertise work together. Challenges like transparency and robust governance are highlighted as essential for building trust in AI systems. Ultimately, embracing AI in cybersecurity is portrayed as crucial for safeguarding the digital future, urging organizations to invest in AI-driven solutions, train security teams to work with AI, and develop clear policies for AI use in cybersecurity.

https://eviden.com/publications/digital-security-magazine/ai-and-cybersecurity/ai-agents-system-2-thinking/

The article explores the transformative potential of Artificial Intelligence (AI) in cybersecurity, focusing on the concept of System 2 thinking — complex, reflective cognitive processes — and its application through multi-agent systems. It discusses how multi-agent frameworks, such as AutoGen and CrewAI, could enhance cybersecurity operations by automating tasks in Identity and Access Management (IAM), Security Information and Event Management (SIEM), and Managed Detection and Response (MDR) workflows. The article also highlights the dual nature of these advancements, noting both the opportunities for improved efficiency and the risks, including potential legal and ethical issues. As AI continues to evolve, these multi-agent systems promise to revolutionize the field of cybersecurity, although they come with challenges that need to be carefully managed.

https://arxiv.org/pdf/2402.06664v1

In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the rise in capabilities of these agents, recent work has speculated on how LLM agents would affect cybersecurity. However, not much is known about the offensive capabilities of LLM agents. In this work, we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs.
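The agent pattern the paper describes — a model that can call tools, observe the results, and decide its next step — can be sketched as a simple loop. Everything below is illustrative: the "model" is a hard-coded stub standing in for an API call such as GPT-4, and the single tool is a harmless placeholder.

```python
# Minimal sketch of an LLM agent loop. The model and tool are stubs;
# a real agent would call an LLM API and real tools (HTTP, shell, etc.).

def fetch_page(url: str) -> str:
    """Placeholder tool: a real agent would issue an HTTP request."""
    return f"<html>contents of {url}</html>"

TOOLS = {"fetch_page": fetch_page}

def stub_model(history: list[str]) -> dict:
    """Stand-in for an LLM: requests one tool call, then finishes."""
    if not any(msg.startswith("observation:") for msg in history):
        return {"action": "fetch_page", "arg": "http://example.test"}
    return {"action": "finish", "arg": "done"}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Loop: ask the model for an action, execute it, record the result."""
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        decision = stub_model(history)
        if decision["action"] == "finish":
            history.append(f"answer: {decision['arg']}")
            break
        result = TOOLS[decision["action"]](decision["arg"])
        history.append(f"observation: {result}")
    return history

transcript = run_agent("summarize the page")
```

The paper's point is that once this loop is driven by a sufficiently capable frontier model, multi-step tasks like schema extraction emerge without the operator specifying the vulnerability in advance.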

https://arxiv.org/pdf/2407.13093v1

SIEM systems are prevalent and play a critical role in a variety of analyst workflows in Security Operations Centers. However, modern SIEMs face a major challenge: they still cannot relieve analysts of the repetitive work involved in analyzing CTI (Cyber Threat Intelligence) reports written in natural language. This project aims to develop an AI agent to take over those labor-intensive, repetitive tasks. The agent exploits the capabilities of LLMs (e.g., GPT-4) but requires no human intervention.
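The repetitive work in question — pulling indicators of compromise (IOCs) out of free-text CTI reports — is easy to illustrate. The sketch below uses plain regexes for the structured indicators; the agent described in the paper would additionally use an LLM to extract unstructured context such as threat actors and TTPs. Patterns and the sample report are invented.

```python
import re

# Toy IOC extractor: regexes cover the structured indicators in a
# CTI report. Patterns are deliberately simplified for illustration.
IOC_PATTERNS = {
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "domain": r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.(?:com|net|org|io)\b",
}

def extract_iocs(report: str) -> dict[str, list[str]]:
    """Map each indicator type to every match found in the report."""
    return {name: re.findall(pat, report) for name, pat in IOC_PATTERNS.items()}

sample = "C2 at 203.0.113.7, payload hosted on evil-cdn.net, hash " + "a" * 64
iocs = extract_iocs(sample)
```

A production pipeline would then normalize these indicators into the SIEM's schema, which is exactly the step the paper proposes automating end to end.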

https://arxiv.org/pdf/2405.01674

The dawn of Generative Artificial Intelligence (GAI), characterized by advanced models such as Generative Pre-trained Transformers (GPT) and other Large Language Models (LLMs), has been pivotal in reshaping the field of data analysis, pattern recognition, and decision-making processes. This surge in GAI technology has ushered in not only innovative opportunities for data processing and automation but has also introduced significant cybersecurity challenges. As GAI rapidly progresses, it outstrips the current pace of cybersecurity protocols and regulatory frameworks, leading to a paradox wherein the same innovations meant to safeguard digital infrastructures also enhance the arsenal available to cyber criminals. These adversaries, adept at swiftly integrating and exploiting emerging technologies, may utilize GAI to develop malware that is both more covert and adaptable, thus complicating traditional cybersecurity efforts. The acceleration of GAI presents an ambiguous frontier for cybersecurity experts, offering potent tools for threat detection and response, while concurrently providing cyber attackers with the means to engineer more intricate and potent malware. Through the joint efforts of Duke Pratt School of Engineering, Coalfire, and Safebreach, this research undertakes a meticulous analysis of how malicious agents are exploiting GAI to augment their attack strategies, emphasizing a critical issue for the integrity of future cybersecurity initiatives. The study highlights the critical need for organizations to proactively identify and develop more complex defensive strategies to counter the sophisticated employment of GAI in malware creation.

https://www.horizon3.ai/
https://www.dropzone.ai/
https://www.helpnetsecurity.com/2024/08/05/ai-soc-analysts/

AI is playing an increasingly pivotal role in cybersecurity operations, especially within Security Operation Centers (SOCs). The primary challenge for SOC analysts is managing vast amounts of data and identifying genuine threats among numerous false positives. AI can alleviate this burden by rapidly processing large data sets, detecting patterns indicative of malicious behavior, and automating routine tasks such as alert triaging, log analysis, and vulnerability scanning. This allows human analysts to focus on more complex and strategic tasks like threat hunting and incident response.
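Alert triage of the kind described can be sketched as a scoring pass that filters obvious noise before a human looks at the queue. All fields, weights, and the threshold below are invented for illustration; real SOC triage models are far richer.

```python
# Toy alert triage: score alerts and keep only those worth analyst time.
# Field names, weights, and threshold are invented for illustration.

def triage_score(alert: dict) -> int:
    score = {"low": 1, "medium": 3, "high": 5}.get(alert.get("severity"), 0)
    if alert.get("asset_critical"):
        score += 3           # alerts on critical assets matter more
    if alert.get("seen_before"):
        score -= 2           # repeated, previously benign pattern
    return score

def triage(alerts: list[dict], threshold: int = 4) -> list[dict]:
    """Escalate only alerts whose score clears the threshold."""
    return [a for a in alerts if triage_score(a) >= threshold]

queue = [
    {"id": 1, "severity": "low", "seen_before": True},
    {"id": 2, "severity": "high", "asset_critical": True},
    {"id": 3, "severity": "medium", "asset_critical": True},
]
escalated = triage(queue)
```

This is the mechanical part AI can absorb; deciding what to do with the escalated alerts remains, as the article argues, a human call.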

Despite its strengths, AI has limitations, notably the need for human oversight to ensure accuracy and relevance of insights. AI struggles with complex contextual decisions and lacks the strategic thinking necessary for nuanced threat management. Consequently, human judgment remains crucial for interpreting AI findings and making high-stakes decisions.

The integration of AI into cybersecurity does not threaten human jobs but transforms them. Historical parallels, such as the introduction of Excel, show that while automation can reduce the need for certain roles, it also creates new, specialized positions. Similarly, AI in cybersecurity will lead to the emergence of roles like Security Automation Specialists, AI Security Engineers, and AI Security Researchers. These roles will focus on optimizing AI tools, developing AI-powered security solutions, and innovating new approaches to counter evolving threats.

Ultimately, AI enhances the efficiency of SOCs, enabling organizations to manage threats more effectively with existing resources. It fosters a synergistic environment where AI handles repetitive tasks, and human analysts apply their expertise to more critical issues. This collaboration ensures that human expertise remains central to cybersecurity, leveraging AI to augment rather than replace human capabilities.

https://finance.yahoo.com/news/firecompass-unveils-industrys-first-agent-120000965.html

FireCompass has launched Generative-AI powered Agent AI for Ethical Hacking and Autonomous Penetration Testing, natively integrated with the FireCompass Platform to autonomously execute the entire penetration testing workflow. Unlike typical generative AI tools, Agent AI autonomously finds organization-specific vulnerabilities, generates tailored attack plans, and executes specific attack playbooks, increasing testing coverage and enhancing productivity. This new capability addresses the severe cybersecurity challenges faced by the US, with a 78% increase in breaches and a global shortage of 3.4 million cybersecurity professionals. FireCompass’ CEO highlights that while traditional penetration testing covers only the top 20% of assets annually, FireCompass GenAI & Agentic AI can target all assets continuously, achieving 10 to 100 times more frequency and cost efficiency. By automating complex, multi-stage attacks, FireCompass significantly improves the speed, depth, and breadth of testing, addressing the limitations of conventional methods.

https://firecompass.com/

Organizations typically secure their most important assets but often fail to test pre-production assets that contain production data, making them attractive targets for hackers who exploit vulnerabilities in these peripheral assets. While companies conduct pen-tests quarterly, hackers continuously attack, exploiting new vulnerabilities within 24 hours to 12 days, whereas organizations can take up to 30 days to address these issues, creating a significant risk window. Additionally, pen-testing remains largely manual, fragmented, and resource-intensive, resulting in high costs, false-positive fatigue, and overstretched security teams.

FireCompass offers an AI-powered platform for automated pen testing, red teaming, and next-gen attack surface management. It provides real-time monitoring and discovery of risky assets through passive and active reconnaissance, alerting you to exposed IPs immediately. The platform helps prioritize vulnerabilities and reduces false positives by 99%, allowing focus on critical security gaps. FireCompass automates complex multi-stage attack paths, completing in seconds what would take human pen testers days. It also tests patched vulnerabilities against real-world threats like ransomware and nation-state actors. As a Pen Test as a Service (PTaaS) solution, FireCompass reduces complexity and cost by combining AI and managed services for comprehensive pen testing. The platform ensures prioritized alerts with no false positives through a mix of passive recon and active testing, validated by both humans and automation.
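Prioritization of the sort described — surfacing the small set of findings that actually matter — can be sketched as a ranking over severity, exposure, and exploitability. The multipliers and fields below are invented and not FireCompass's actual scoring model.

```python
# Toy vulnerability prioritization: rank findings so exposed, exploitable
# issues surface first. Weights and field names are invented.

def risk(finding: dict) -> float:
    score = finding.get("cvss", 0.0)
    if finding.get("internet_exposed"):
        score *= 1.5         # reachable attack surface
    if finding.get("exploit_public"):
        score *= 2.0         # a working exploit exists in the wild
    return score

def prioritize(findings: list[dict]) -> list[dict]:
    return sorted(findings, key=risk, reverse=True)

findings = [
    {"id": "internal-misconfig", "cvss": 9.0},
    {"id": "exposed-rce", "cvss": 8.0, "internet_exposed": True,
     "exploit_public": True},
]
ranked = prioritize(findings)
```

Note how the nominally lower-severity finding outranks the higher CVSS score once exposure and exploitability are weighed in — the intuition behind "prioritize vulnerabilities and reduce false positives."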

https://www.afcea.org/signal-media/cyber-edge/cyber-heist-20-ais-role-new-age-hacking

The proliferation of large language models (LLMs) and generative AI has lowered the barriers to entry for cybercriminals, enhancing their ability to conduct sophisticated attacks such as automated exploitation, malware evasion, and optimized brute force attacks. While AI improves the efficiency and scale of these malicious activities, security measures and guardrails are also advancing to counter these threats. Experts like Chris Cullerot and Gaurav Pal highlight that AI can now accomplish in hours what used to take days, making hacking more accessible and efficient. However, the same technology can also be leveraged for defense, with strategies such as integrating AI into automated security testing and using cloud infrastructure to strengthen defenses. Despite the enhanced capabilities of AI-equipped hackers, the cybersecurity field is adapting by improving defenses and emphasizing the importance of staying ahead of evolving attack vectors.

https://zeropath.com/

ZeroPath provides automatic vulnerability patching that integrates seamlessly with your existing SAST tools to verify and remediate vulnerabilities. It offers an easy setup via a GitHub app, requiring no credit card, and supports communication through email, Discord, or scheduled meetings. ZeroPath enables quick discovery and remediation of vulnerabilities with almost zero false positives by linking to your SAST tools and filtering out ~95% of false positives. It automatically patches vulnerabilities, allowing users to approve PRs for patch application. Users can interact with ZeroPath’s AI for edits in PR comments and avoid SAST vendor lock-in through ZeroPath Connect.

https://www.clearly-ai.com/

Accelerate Security & Privacy

First, we build a knowledge base personalized to your stack. Then we get to the good stuff:

* Detect and track risks
* Automate privacy impact assessments
* Automate vendor questionnaires
* Apply security and privacy policies
* Configure workflow automation for repetitive security work
* Continuously scan for new threats and 0-days
* Achieve compliance with security and privacy from first principles

https://www.codeant.ai/

Static Code Analysis, SAST, IaC, and CSPM collectively ensure robust code quality and security across all stages of development and infrastructure management. Static Code Analysis helps maintain a clean and reliable codebase by auto-fixing quality issues with each change, while SAST focuses on identifying and addressing critical vulnerabilities to comply with industry standards. IaC scans infrastructure code to prevent misconfigurations and vulnerabilities, and CSPM provides a comprehensive scan of cloud resources for major security risks. Code Governance enforces company-specific practices across various development tools, and reporting offers deep insights and executive summaries on code health and issues. Integrations like the Control Center, Pull Request Reviewer, and Shift Left tools streamline the process of identifying, fixing, and reviewing code and security issues, enhancing overall efficiency and security.
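A static-analysis pass of the kind these tools run can be illustrated with Python's own `ast` module: the toy rule below parses source code and flags calls to `eval`, a classic SAST finding. Real analyzers apply hundreds of such rules plus data-flow tracking.

```python
import ast

# Toy SAST rule: walk the syntax tree and flag calls to eval(),
# one of the classic findings a static analyzer reports.

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of eval() calls in the given source."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

snippet = "x = 1\ny = eval(input())\n"
hits = find_eval_calls(snippet)
```

Rules like this run on every change, which is what makes "auto-fixing quality issues with each change" feasible at all.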

https://www.unboundsecurity.ai/

Our service offers secure management of consumer Gen AI apps, ensuring compliance with AI policies and protecting sensitive information by monitoring app usage in real-time. It integrates smoothly with existing systems without requiring additional infrastructure or network modifications.

https://shiboleth.ai/
https://tracecat.com/

Turn security alerts into solvable cases

Automate SecOps using pre-built actions (API calls, webhooks, data transforms, AI tasks, and more). No code required.

Open cases direct from workflows. Track and manage security incidents all-in-one platform.
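A no-code workflow of pre-built actions ultimately compiles down to something like the dispatcher below: an ordered list of named steps, each mapped to a handler, run over an alert. The action names and alert payload are invented, not Tracecat's actual schema.

```python
# Toy SecOps workflow engine: named actions run in order over an alert.
# Action names and the alert payload are invented for illustration.

def enrich(alert: dict) -> dict:
    """Pretend enrichment step (a real one might do a geo/IP lookup)."""
    return {**alert, "geo": "lookup-pending"}

def open_case(alert: dict) -> dict:
    """Open a tracking case for the alert."""
    return {**alert, "case_id": f"CASE-{alert['id']}"}

ACTIONS = {"enrich": enrich, "open_case": open_case}

def run_workflow(alert: dict, steps: list[str]) -> dict:
    """Apply each named action to the alert in sequence."""
    for step in steps:
        alert = ACTIONS[step](alert)
    return alert

case = run_workflow({"id": 7, "type": "phishing"}, ["enrich", "open_case"])
```

The no-code layer is essentially a visual editor over this steps list, with webhooks and API calls as the pre-built actions.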

https://www.nuanced.dev/

AI-generated spam, abuse, and fraud are on the rise.

* User-generated content platforms are contending with an increased amount of fraud, deepfakes, and inauthentic content.

The distinction between human and AI-generated content has become increasingly blurred as advanced models produce highly convincing and authentic text. To stay ahead of these developments, our algorithms are designed to detect AI-generated content effectively, while our research team continuously refines these models. We prioritize user privacy with a zero PII approach, ensuring that personal data is not compromised. Our API offers a simple and seamless integration with existing systems, catering to various platforms, from moderation and fraud detection to user-generated content and abuse mitigation, allowing you to enhance your services without sacrificing privacy or performance.


sbagency

Tech/biz consulting, analytics, research for founders, startups, corps and govs.