Startup Develops Tool to Detect Hidden Malware in Open-Source AI Models

An AI security startup has introduced a tool aimed at protecting organizations from hackers who are increasingly embedding malware in open-source AI models. The development matters because attackers have become adept at concealing malicious code inside foundation models, leaving any organization that relies on open-source models vulnerable to cyberattacks.

Protect AI, founded in 2022, has launched its Guardian scanning tool to help companies detect hidden trojan malware in AI models before those models are incorporated into their networks. Building on the company’s existing open-source tool, ModelScan, Guardian operates as an intermediary between public model repositories and an organization’s infrastructure. It scans AI models for indications of tampering, such as suspicious file formats and embedded function calls that can signal malware.
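
Protect AI has not published Guardian’s internals, but one well-known risk in this space is that pickle-serialized model files (such as many PyTorch checkpoints) can execute arbitrary code when loaded. The sketch below, written for illustration only, shows the kind of static check a scanner of this sort might perform: walking a pickle file’s opcode stream with Python’s standard pickletools module and flagging references to modules that have no business appearing in model weights. The denylist and function names are hypothetical.

```python
# Illustrative only; Guardian's actual checks are not public.
# Walk the pickle opcode stream of a serialized model file and flag
# global references to modules commonly abused for code execution.
import pickletools

# Hypothetical denylist for this example; real tools use broader rules.
SUSPICIOUS_MODULES = {"os", "subprocess", "sys", "builtins", "runpy", "socket"}

def scan_pickle_file(path: str) -> list[str]:
    """Return a list of suspicious references found in a pickle file."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # GLOBAL carries "module name" as a single space-joined string.
            module = str(arg).split(" ")[0].split(".")[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"GLOBAL reference to {arg!r}")
        elif opcode.name == "STACK_GLOBAL":
            # STACK_GLOBAL resolves its import target from the stack, so this
            # sketch conservatively flags it for manual review.
            findings.append("STACK_GLOBAL opcode (dynamic import)")
    return findings

if __name__ == "__main__":
    import sys
    for issue in scan_pickle_file(sys.argv[1]):
        print("suspicious:", issue)
```

A conservative check like this trades false positives for safety: legitimate pickles saved with newer protocols also use STACK_GLOBAL, so a production scanner would resolve each import target rather than flag the opcode outright.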

Beyond malware scanning, Guardian also evaluates whether a model aligns with a company’s internal AI policies, including those governing data collection and permitted use cases. If any issues are identified, Guardian halts the download and reports the problems it detected.
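
Guardian’s policy format is not public, so the following is a hypothetical sketch of how such a policy gate might work: model metadata is compared against an organization’s rules, and the download is blocked if any rule fails. All field and function names here (ModelPolicy, check_model, and the metadata keys) are invented for the example.

```python
# Hypothetical policy gate for illustration; not Guardian's actual schema.
from dataclasses import dataclass, field

@dataclass
class ModelPolicy:
    allowed_licenses: set = field(default_factory=lambda: {"apache-2.0", "mit"})
    allowed_formats: set = field(default_factory=lambda: {"safetensors"})
    blocked_use_cases: set = field(default_factory=lambda: {"biometric-id"})

def check_model(metadata: dict, policy: ModelPolicy) -> list[str]:
    """Compare model metadata against policy; return reasons to block."""
    violations = []
    if metadata.get("license") not in policy.allowed_licenses:
        violations.append(f"license {metadata.get('license')!r} not allowed")
    if metadata.get("format") not in policy.allowed_formats:
        violations.append(f"format {metadata.get('format')!r} not allowed")
    if metadata.get("intended_use") in policy.blocked_use_cases:
        violations.append("intended use case is blocked by policy")
    return violations

# Example: a GPL-licensed, pickle-format model fails both checks.
metadata = {"license": "gpl-3.0", "format": "pickle", "intended_use": "chat"}
issues = check_model(metadata, ModelPolicy())
if issues:
    # Mirror the described behavior: halt the download and report problems.
    print("Download blocked:")
    for issue in issues:
        print(" -", issue)
```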

Protect AI plans to use Huntr, an AI-focused bug bounty program it acquired in August, to expand the range of vulnerabilities Guardian can scan for. According to Protect AI researchers, 3,354 models containing malicious code have appeared on platforms such as Hugging Face since August. Alarmingly, 1,347 of those models were not flagged as “unsafe” by Hugging Face’s security scans.

Ian Swanson, CEO and co-founder of Protect AI, emphasized the risks associated with downloading and using models that may contain malicious code, highlighting the potential for data theft or system compromise.

Creating proprietary AI models requires substantial resources, including terabytes of training data and significant financial investment. As a result, many organizations turn to open-source foundation models. While these models are readily available on platforms such as Hugging Face, the platforms’ security scans may not be deep enough to detect every vulnerability a hacker can introduce. Moreover, even when a repository flags a security issue, the affected models are often not removed and remain available for download.

Protect AI’s Guardian scanning tool aims to mitigate these risks and enhance the security of open-source AI models, protecting organizations from potential cyber threats.
