
When loading a machine learning model means loading an assailant

Organisations often underestimate the risks of downloading and loading machine learning models, treating them with far less caution than they would an unfamiliar email attachment or a random app. A recent study by researchers at Politecnico di Milano showed that loading a shared model can be as risky as executing untrusted code. Their tests uncovered six previously unknown vulnerabilities in popular machine learning tools, each capable of handing an attacker control of a system the moment a model is loaded. This points to a new kind of supply chain threat embedded in the very models organisations are eager to adopt. The study also found that security controls vary widely across platforms: some scan for known threats, while others rely on user discretion or isolated environments. Even supposedly safer formats can permit harmful code execution, depending on how they are processed.
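The underlying mechanism is well documented: Python's pickle format, on which many model-serialisation pipelines are built, allows a crafted file to execute arbitrary code the moment it is deserialised. The sketch below illustrates the principle only; the MaliciousPayload class and the model.pkl filename are invented for this example, not taken from the study.

```python
import os
import pickle


class MaliciousPayload:
    # When unpickled, __reduce__ tells pickle to call os.system with an
    # attacker-chosen command instead of rebuilding a harmless object.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran at model load time'",))


# An attacker saves this object as if it were a model file...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and the command runs the instant the victim loads the file.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

Loading a pickled model is, in effect, running whatever code its author chose to embed, which is why the researchers compare it to executing untrusted software.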

The perception of security among machine learning practitioners does not always match reality. In a survey of 62 practitioners, 73 per cent said they felt more secure loading models from well-known hubs that advertise built-in security scanning. That trust can be misplaced: the study demonstrated that some scanning tools failed to detect malicious models, while others incorrectly labelled files as safe because of format limitations. This gap between perceived and actual protection breeds overconfidence and leaves systems exposed. To reduce the risk, Chief Information Security Officers (CISOs) should treat machine learning models like any other code entering their environment: require trusted sources, enforce strict isolation during testing, keep frameworks up to date, and establish clear policies for model scanning and approval.
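As a practical starting point, teams working in Python with PyTorch can limit exposure by refusing full pickle deserialisation and preferring weights-only formats. The snippet below is a sketch rather than a complete defence: the file names are placeholders, and it assumes a recent PyTorch (which supports the weights_only flag) and the safetensors package are installed.

```python
import torch
from safetensors.torch import load_file

# Restrict torch.load to tensors and primitive containers so that a
# crafted pickle payload is rejected instead of executed.
state_dict = torch.load("model.pt", map_location="cpu", weights_only=True)

# Better still, distribute and consume weights-only formats such as
# safetensors, which store raw tensors and metadata with no code to run.
safe_state = load_file("model.safetensors")
```

Neither step replaces sandboxed testing or scanning policies; it simply narrows the most obvious route to code execution at load time.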
