Versa GenAI Firewall safeguards sensitive data from being uploaded into Generative AI tools (e.g., ChatGPT) while limiting shadow use of GenAI. It manages, monitors, and reports on how your organization uses GenAI – including assessing the riskiness of apps, controlling access, and preventing unauthorized data movement.
Securing Generative AI Applications: Beyond LLMs
While securing large language models (LLMs) is necessary to enable the adoption of GenAI applications within organizations, it is equally crucial to address broader generative AI concerns. Let's delve into additional considerations:
User-to-Application Access Control:
Benefits:
Simple Configurations
Today, our community of users is actively securing GenAI use cases that range from content-production apps (e.g., Jasper) to code-repository copilots (e.g., GitHub Copilot) across an extended landscape of platforms, including web interfaces, chat systems, knowledge bases, and APIs. Across this spectrum of use cases, users benefit from its native integration into the platform's SSE product line and from unified DLP policies and SaaS protection spanning Generative AI tools and SaaS apps. As an example, one of our users created three rules:
This both allows and restricts access depending on the category of the application, all while balancing productivity – in this case, a group of applications is tolerated but with restrictions on data movement or on the groups of users who can use them (e.g., only marketing teams can use content-production tools).
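The pattern above – allow or block by app category, then narrow by user group and data movement – can be sketched as a small rule evaluator. Note that the rule fields, category names, and verdicts here are invented for illustration; they do not reflect Versa's actual policy syntax or configuration model.

```python
# Hypothetical illustration of category-based GenAI access rules.
# Field names, categories, and actions are assumptions for this sketch,
# not Versa's real policy schema.

RULES = [
    # Block unsanctioned, high-risk GenAI apps outright.
    {"category": "high-risk-genai", "action": "block"},
    # Tolerate content-production tools, but only for marketing,
    # and with uploads (data movement) disabled.
    {"category": "content-production", "action": "allow",
     "groups": ["marketing"], "allow_upload": False},
    # Allow a sanctioned coding copilot for engineering.
    {"category": "code-copilot", "action": "allow",
     "groups": ["engineering"], "allow_upload": True},
]

def evaluate(category, group, wants_upload):
    """Return the verdict for a user's request to a GenAI app."""
    for rule in RULES:
        if rule["category"] != category:
            continue
        if rule["action"] == "block":
            return "block"
        if "groups" in rule and group not in rule["groups"]:
            return "block"
        if wants_upload and not rule.get("allow_upload", False):
            return "block-upload"
        return "allow"
    return "block"  # default-deny for uncategorized apps

print(evaluate("content-production", "marketing", wants_upload=False))  # allow
print(evaluate("content-production", "marketing", wants_upload=True))   # block-upload
print(evaluate("content-production", "finance", wants_upload=False))    # block
```

The key design point the example captures is that access and data movement are decided separately: an app can be reachable for a tolerated group while uploads into it are still blocked.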
Expanded Visibility
These users can then create a Generative AI Analytics report. Out-of-the-box reports are ready on day one, and the analytics are fully configurable. Commonly observed use cases include:
We invite our current community and users to see how this extends existing SaaS app and SSE controls while improving visibility into the emerging risks of shadow GenAI use, GenAI access, and data leakage.