Arcjet Extends Runtime Policy Engine to Block Malicious Prompts

Arcjet today added the ability to detect and block risky prompts before they reach a large language model (LLM) embedded within an application.

The Arcjet AI prompt injection protection capability is based on an LLM the company has trained specifically to detect patterns indicative of risky prompts, which can then be blocked by a runtime policy engine built with WebAssembly (Wasm). That approach makes it simpler to embed the Arcjet policy engine into application code and apply it to endpoints built with JavaScript, Python or frameworks such as the Vercel AI software development kit (SDK) or LangChain.
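The guard-before-the-model pattern described here can be sketched in a few lines of Python. Everything below is illustrative: the regular expressions stand in for Arcjet's trained detection model, and `is_risky`, `guarded_completion` and `call_llm` are hypothetical names, not Arcjet's actual API.

```python
import re

# Hypothetical patterns standing in for Arcjet's trained model; a real
# deployment would consult the Wasm policy engine, not a regex list.
RISKY_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
    re.compile(r"[;&|]\s*(rm|cat|curl|wget)\b", re.I),  # crude shell-injection tell
]

def is_risky(prompt: str) -> bool:
    """Return True if the prompt matches a known-risky pattern."""
    return any(p.search(prompt) for p in RISKY_PATTERNS)

def guarded_completion(prompt: str, call_llm) -> str:
    """Run the policy check before the prompt ever reaches the model."""
    if is_risky(prompt):
        return "Request blocked by prompt policy."
    return call_llm(prompt)
```

The point of the pattern is placement: the check runs inline at the endpoint, so a blocked prompt is never forwarded to the model at all.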

Arcjet CEO David Mytton said that the overall goal is to prevent malicious prompts from being used to, for example, discover the underlying components of an application environment or delete data. A prompt might also expose sensitive data to an AI model in a way that shouldn’t be allowed.

Initially, Arcjet is focused on prompt-extraction and shell-injection protection, but over time it will add additional layers of protection against malicious prompts, he added.

As more organizations build applications that embed AI models, cybercriminals are creating hostile instructions that are deliberately designed to override application behavior, expose hidden prompts or extract data. The only way to discover these issues after an application has been deployed is to analyze logs, by which point a DevSecOps team is already too late to prevent anything malicious from occurring, noted Mytton.

The Arcjet runtime policy engine now prevents those prompts from ever reaching the AI model, while adding less than 100 milliseconds of overhead to an application, he added.

While application developers are clearly relying on AI to write code, how many are embedding AI functionality into their applications at this point is unclear. However, at some point soon, just about every application will have some type of AI capability. The challenge then becomes protecting the AI models embedded within an application from malicious prompts that are trivial to create. The more AI functionality embedded within an application, the more tempting a target it is likely to become.

Ultimately, just about every application is going to need to implement policies as code to ensure security in an era where the cost of creating a malicious prompt is essentially zero. While there are multiple ways to achieve that goal, Arcjet is making the case for a Wasm-based engine that makes it easier to apply the same policies across applications written in multiple programming languages.
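A minimal sketch of what "policies as code" can mean in practice: the policy is plain data, so the same definition could in principle be enforced by one portable engine regardless of whether the application is written in JavaScript or Python. The rule IDs and the `decide` function below are illustrative assumptions, not Arcjet's actual policy format.

```python
import json

# A hypothetical policy expressed as data rather than code, so one
# engine could enforce the same definition across languages.
POLICY = json.loads("""
{
  "rules": [
    {"id": "prompt-extraction", "action": "block"},
    {"id": "shell-injection",   "action": "block"},
    {"id": "pii-in-prompt",     "action": "log"}
  ]
}
""")

def decide(detected_rule_ids, policy=POLICY):
    """Map detector findings to one decision; 'block' wins over 'log'."""
    actions = {r["id"]: r["action"] for r in policy["rules"]}
    decisions = [actions.get(rid, "allow") for rid in detected_rule_ids]
    if "block" in decisions:
        return "block"
    if "log" in decisions:
        return "log"
    return "allow"
```

Keeping the policy declarative is what lets teams version it, review it and apply it uniformly, rather than re-implementing checks per service.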

Hopefully, DevSecOps teams will be more proactive about securing AI applications than they might have been when it came to preventing application security issues in the past. Unfortunately, it appears that many application development teams are once again moving quickly to add new capabilities and functions without considering the security implications.

At this point, DevSecOps teams should assume that as more AI is embedded within their application portfolio, it’s only a matter of time before there will be one or more incidents. The challenge, and the opportunity, is to get ahead of AI security issues now rather than after there has been a cataclysmic event.
