For a new report, research firm Forrester surveyed decision-makers responsible for AI security and found that they expect machine learning (ML) projects to play a critical or important role in their companies' revenue generation, customer experience and business operations over the next 18 months.

The Forrester report, titled “It’s Time For Zero Trust AI,” was commissioned by HiddenLayer, an AI application security company. Forrester supplemented its research with a survey of 151 AI security decision-makers at the director level or above, drawn from industries including telecommunications services, tech services, financial services and insurance. The custom survey was fielded and completed in February.

Possibly the biggest barrier to ML security today is the lack of automated processes and tools, according to the report. As organizations increase their ML workloads, their threat landscape expands, and they need to manage and mitigate threats quickly; however, the majority of respondents still rely on manual processes.

Forrester's research found that for threats and controls such as insider threats, model theft and Zero Trust controls, 40 to 52 percent of respondents are either using a manual process or still discussing how to address them.

Eighty-six percent of respondents were concerned or extremely concerned about their organization’s ML model security. To address this challenge, 80 percent expressed interest in investing in a solution that manages ML model integrity and security within the next 12 months. According to the report, respondents are prioritizing solutions that not only run in the cloud, on-premises and at the edge, but also integrate with their current technology stack and align with their overall Zero Trust processes.

As AI becomes a critical technology for business success, the report emphasized the necessity for organizations to invest in Zero Trust, automated ML solutions to enable AI and security teams. With the increasing complexity of ML models and the rise in attacks against them, traditional enterprise security teams are not currently keeping pace with the evolving threat landscape. 

Zero Trust is a cybersecurity strategy based on the principle of “never trust, always verify.” It is a security model that assumes all users, devices and applications are untrusted and must be verified before being granted access to resources. This approach differs from traditional security models that rely on perimeter-based defenses, such as firewalls and VPNs.
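In code terms, the "never trust, always verify" principle means every request is checked for identity, device posture and per-resource permission, regardless of where it originates. The sketch below illustrates the idea in Python; the names (`VERIFIED_USERS`, `HEALTHY_DEVICES`, `PERMISSIONS`, the `ml-model-registry` resource) are hypothetical stand-ins for calls to an identity provider, device inventory and policy engine, not any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_id: str
    resource: str

# Hypothetical allow-lists standing in for external identity,
# device-health and policy services.
VERIFIED_USERS = {"alice"}
HEALTHY_DEVICES = {"laptop-42"}
PERMISSIONS = {("alice", "ml-model-registry")}

def authorize(req: Request) -> bool:
    """Zero Trust check: verify identity, device posture and
    resource-level permission on every single request; nothing
    is trusted by default based on network location."""
    return (
        req.user in VERIFIED_USERS
        and req.device_id in HEALTHY_DEVICES
        and (req.user, req.resource) in PERMISSIONS
    )

# A verified user on a healthy device with permission is allowed;
# the same user on an unknown device is denied.
print(authorize(Request("alice", "laptop-42", "ml-model-registry")))      # True
print(authorize(Request("alice", "unknown-device", "ml-model-registry"))) # False
```

The point of the sketch is that denial is the default: a request passes only when every check succeeds, in contrast to perimeter models where anything inside the firewall is implicitly trusted.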

Zero Trust AI aims to safeguard shared data and prevent it from being decrypted by any external party, including service providers.

“As ML projects continue to increase and play a role in business success, ML integrity and security is key. Businesses need to manage and mitigate risk associated with ML models to be successful,” the report states. 

Among other findings, the report found that ML security is a shared responsibility. More than 3 in 4 respondents say IT operations are responsible for their firm’s ML integrity and security, followed by security professionals (61 percent), data engineers (43 percent), ML engineers (40 percent), and data scientists (40 percent). While most respondents identify IT operations as an owner, 97 percent select more than one role, indicating that this is a collaborative responsibility. In fact, 71 percent of respondents say that responsibility for ML model security is dispersed at their organization.

The complete report can be downloaded here.