Securing GenAI Usage with Versa’s GenAI Firewall

By Anusha Vaidyanathan
Sr. Director, Product Management
May 6, 2024

Versa GenAI Firewall safeguards sensitive data from being uploaded into Generative AI tools (e.g. ChatGPT) while limiting shadow use cases of GenAI. It manages, monitors, and reports how your organization uses GenAI – including assessments on the riskiness of apps, controlling access, and preventing unauthorized data movement.

Securing Generative AI Applications: Beyond LLMs

While ensuring security for large language models (LLMs) is necessary to facilitate the adoption of GenAI applications within organizations, it is equally crucial to address broader concerns related to generative AI. Let’s delve into additional considerations:

User-to-Application Access Control:

  • Context: Generative AI tools are often accessed by various users within an organization, including developers, data scientists, marketing and end-users.
  • Challenge: Ensuring that only authorized users can interact with these tools is essential. Blind spots in access control can lead to unauthorized usage, data leaks, or misuse.
  • Solution: A generative AI firewall that regulates user access based on roles, permissions, and context. Such a firewall authenticates users, enforces fine-grained access controls, and monitors usage patterns.

Benefits:

  • Security: Prevents unauthorized access and protects sensitive data.
  • Compliance: Helps organizations meet regulatory requirements.
  • Visibility: Provides insights into who is using generative AI tools and how.
  • Adaptability: Can dynamically adjust access based on changing requirements.
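The role-based access control described above can be sketched in a few lines. This is a minimal illustration only; the role names, tool names, and category mappings below are invented for the example and do not reflect Versa's actual policy model or API:

```python
# Hypothetical policy tables: roles mapped to the GenAI tool categories
# they may access. A real deployment would source these from the
# platform's identity and app-risk data, not hard-coded dicts.
ROLE_PERMISSIONS = {
    "developer": {"code-assistant"},
    "marketing": {"content-production"},
    "data-scientist": {"code-assistant", "chat"},
}

TOOL_CATEGORIES = {
    "github-copilot": "code-assistant",
    "jasper": "content-production",
    "chatgpt": "chat",
}

def is_access_allowed(role: str, tool: str) -> bool:
    """Return True if the user's role may use the tool's category."""
    category = TOOL_CATEGORIES.get(tool)
    if category is None:
        return False  # unknown tools are denied by default
    return category in ROLE_PERMISSIONS.get(role, set())

# Only marketing may use content-production tools:
print(is_access_allowed("marketing", "jasper"))   # True
print(is_access_allowed("developer", "jasper"))   # False
```

The default-deny behavior for unknown tools mirrors the goal of eliminating access-control blind spots: an app that has not been categorized is blocked rather than silently allowed.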

Simple Configurations

Today, our community of users is actively using it to secure GenAI use cases ranging from content-production apps (e.g. Jasper) to code-repository copilots (e.g. GitHub Copilot), across an extended landscape of platforms including web interfaces, chat systems, knowledge bases, and APIs. Given this spectrum of use cases, users benefit from its native integration into the platform's SSE product line, with unified DLP policies and SaaS protection spanning Generative AI tools and SaaS apps. As an example, one of our users created three rules:

  • A GenAI-sanctioned rule that allows usage but logs it for visibility.
  • A GenAI-unsanctioned rule that blocks usage and generates an alert requesting a justification; upon access, data loss prevention policies are enforced.
  • A GenAI rule for tolerated or moderate-risk applications that enforces an intermediate action, such as requiring users to JUSTIFY access instead of blocking it.

This both allows and restricts access depending on the category of the application, all while balancing productivity – in this case, a group of applications is tolerated, but with restrictions on data movement or on which groups of users can use it (e.g. only marketing teams can use content-production tools).
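As a rough sketch, the three rule categories above map to an evaluation like the following. The app names, category table, and action names are illustrative assumptions, not the product's configuration syntax:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"      # permit and log for visibility
    BLOCK = "block"      # deny and raise an alert with a justification request
    JUSTIFY = "justify"  # prompt the user for a business justification

# Hypothetical category assignments; a real deployment would pull these
# from the platform's app-risk assessments.
APP_CATEGORY = {
    "chatgpt": "sanctioned",
    "unknown-ai-tool": "unsanctioned",
    "jasper": "tolerated",
}

def evaluate_policy(app: str) -> Action:
    """Map an app's sanction category to its enforcement action."""
    # Uncategorized apps fall through to unsanctioned: a default-deny posture.
    category = APP_CATEGORY.get(app, "unsanctioned")
    if category == "sanctioned":
        return Action.ALLOW
    if category == "tolerated":
        return Action.JUSTIFY
    return Action.BLOCK

print(evaluate_policy("jasper"))  # Action.JUSTIFY
```

The JUSTIFY action is what keeps productivity intact for moderate-risk apps: users are not blocked outright, but each access is recorded alongside a stated reason.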

Expanded Visibility

These users are then able to create a Generative AI Analytics report. Out-of-the-box reports are ready on day one, and the analytics are fully configurable. Commonly observed use cases include:

  • Top Generative AI tools and applications
  • Top GenAI users
  • Riskiest behaviors in GenAI tools
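Conceptually, these reports are aggregations over GenAI usage logs. A minimal sketch, assuming a simplified log format – the records and field names below are invented for illustration, not the platform's export schema:

```python
from collections import Counter

# Hypothetical access-log records; a real report would consume the
# platform's exported GenAI usage logs.
logs = [
    {"user": "alice", "tool": "chatgpt", "action": "upload"},
    {"user": "bob",   "tool": "chatgpt", "action": "prompt"},
    {"user": "alice", "tool": "jasper",  "action": "prompt"},
    {"user": "alice", "tool": "chatgpt", "action": "prompt"},
]

# Top GenAI tools and top users, by event count.
top_tools = Counter(e["tool"] for e in logs).most_common(2)
top_users = Counter(e["user"] for e in logs).most_common(2)

# Riskiest behaviors, e.g. data uploads into a GenAI tool.
risky = [e for e in logs if e["action"] == "upload"]

print(top_tools)  # [('chatgpt', 3), ('jasper', 1)]
print(top_users)  # [('alice', 3), ('bob', 1)]
```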

We invite our current community and users to see how this extends existing SaaS app and SSE controls while enhancing visibility into emerging risks from shadow GenAI use, unauthorized GenAI access, and data leakage.
