Malicious AI/ML // no surprise
The document discusses the discovery of around 100 malicious AI/machine learning models on the Hugging Face platform that can grant attackers control over victims’ machines through backdoors. It highlights the potential risks posed by poisoned open-source repositories. Additionally, it mentions research on techniques like BEAST for generating harmful prompts from large language models (LLMs), the development of a generative AI worm called Morris II capable of stealing data and spreading malware, and the ComPromptMized attack that injects malicious prompts into applications relying on generative AI services. The document emphasizes the growing threat of adversarial attacks against AI models through techniques like prompt injection and adversarial perturbations in multi-modal inputs.