According to Decrypt, attackers planted malicious code in a Mistral AI software package distributed via PyPI. When developers use the package on Linux systems, the malicious code executes automatically, downloading a file named transformers.pyz from a remote server and running it in the background; the file name mimics the widely used Hugging Face Transformers library. Microsoft's threat intelligence team said the malware primarily steals developer login credentials and access tokens. Mistral confirmed that one developer's machine was compromised but said the company's infrastructure was unaffected.
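The article does not describe Mistral's remediation, but a standard defense against tampered PyPI artifacts is hash-pinned installation (for example, pip's `--require-hashes` mode), which refuses to install a package whose digest does not match the one the maintainer published. A minimal sketch of the underlying check, with hypothetical file paths and digests:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Return the hex SHA-256 digest of a downloaded artifact."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large wheels don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded package against a published digest
    (e.g. a --hash= value from a pinned requirements file).
    A mismatch means the artifact was altered and must not be installed."""
    return sha256_of(path) == expected_sha256
```

This is the same comparison pip performs internally in hash-checking mode; pinning hashes would not stop a maintainer-side compromise, but it does block a swapped or modified artifact from installing silently.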
Related News
AI software supply chain attacked on two fronts: Mistral's package and fake OpenAI models both compromised
Google: Large language models are being used in real-world attacks; AI can bypass two-factor authentication mechanisms
Google reveals the first AI-generated zero-day vulnerability: hackers aim to bypass 2FA for large-scale exploitation