Securing LLM Applications: NeMo Guardrails + Fiddler Guardrails Integration
Discover the Fiddler Guardrails integration with NVIDIA NeMo Guardrails — delivering industry-leading <100ms response time, making it the fastest guardrails for hallucination, toxicity, and jailbreak attempts available in the NeMo Guardrails ecosystem and in the industry. This video demonstrates how Fiddler delivers the fastest guardrails solution that can be securely deployed in your environment.
[00:00:00] Fiddler is proud to announce our integration with NVIDIA's NeMo Guardrails, a powerful combination that elevates the safety and reliability of LLM applications. NeMo Guardrails is an open source toolkit that allows developers to add programmable Guardrails to LLM-based applications, providing real-time control over inputs, retrievals, and outputs.
[00:00:19] Fiddler Guardrails, part of the Fiddler Trust Service, can proactively detect and mitigate risks, such as hallucination, safety violations, and prompt injection attacks.
[00:00:28] Fiddler Guardrails are fast with industry leading latencies of under a hundred milliseconds are more cost-effective than other LLMs-as-a-Judge options and are highly enterprise, secure and scalable, deployable even within the most secure enterprise environments.
[00:00:42] Customers can now quickly set up Fiddler Guardrails directly through NeMo Guardrails.
[00:00:46] Here's a quick demo.
[00:00:47] I've set up the NeMo Guardrails chat interface that comes out of the box from NeMo with Fiddler Guardrails, I've also created a sample configuration file within NeMo Guardrails here that uses Fiddler Guardrails.
[00:00:57] It's really this easy to set up, just four to five lines of code.
[00:01:02] I'll now use the chat interface to send some messages. Let's say I'm looking for some quick healthy dinner options. Let's quickly put that into the chat and as you'd expect the chat is fine and our Guardrails didn't take action.
[00:01:15] NeMo takes care of calling Fiddler Guardrails for you, but let's say I wanted to do something a little bit more nefarious, like asking the chatbot how to split poison into the meal. Let's see what happens. And you can see it was blocked on the chat. NeMo used Fiddler Guardrails, judged the input as unsafe and toxic, and blocked the prompt from even getting into the LLM.
[00:01:36] You can also use Fiddler Guardrails to protect against prompt injection attacks. Here, I've set up a common prompt injection attack around ignoring previous instructions, to give me a social security number. And again, NeMo used Fiddler Guardrails to block this prompt injection attack.
[00:01:52] We're really excited to see you use this integration to launch better, safer experiences for your customers.