WitnessAI is building guardrails for generative AI models


Generative AI makes things up. It can be biased. Sometimes it spits out toxic text. So can it be “safe”?

Rick Caccia, the CEO of WitnessAI, believes it can.

“Securing AI models is a real problem, and it’s one that’s especially shiny for AI researchers, but it’s different from securing use,” Caccia, formerly SVP of marketing at Palo Alto Networks, told TechCrunch in an interview. “I think of it like a sports car: having a more powerful engine — i.e., model — doesn’t buy you anything unless you have good brakes and steering, too. The controls are just as important for fast driving as the engine.”

There’s certainly demand for such controls in the enterprise, which — while cautiously optimistic about generative AI’s productivity-boosting potential — has concerns about the tech’s limitations.

Fifty-one percent of CEOs are hiring for generative AI-related roles that didn’t exist until this year, an IBM poll finds. Yet only 9% of companies say that they’re prepared to manage threats — including threats pertaining to privacy and intellectual property — arising from their use of generative AI, per a Riskonnect survey.

WitnessAI’s platform intercepts activity between employees and the custom generative AI models that their employer is using — not models gated behind an API like OpenAI’s GPT-4, but more along the lines of Meta’s Llama 3 — and applies risk-mitigating policies and safeguards.

“One of the promises of enterprise AI is that it unlocks and democratizes enterprise data to the employees so that they can do their jobs better. But unlocking all that sensitive data too well — or having it leak or get stolen — is a problem.”

WitnessAI sells access to several modules, each focused on tackling a different type of generative AI risk. One lets organizations implement rules to prevent staffers from particular teams from using generative AI-powered tools in ways they’re not supposed to (e.g., asking about pre-release earnings reports or pasting internal codebases). Another redacts proprietary and sensitive info from the prompts sent to models and implements techniques to shield models against attacks that might force them to go off-script.
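WitnessAI hasn’t published implementation details, but the two modules described above amount to a policy check and a redaction pass applied to each prompt before it reaches a model. The sketch below is a hypothetical illustration of that idea only — the team names, patterns and the `screen_prompt` function are invented for the example and aren’t WitnessAI’s actual API.

```python
import re

# Hypothetical sketch of a prompt gateway: a policy check per team,
# then redaction of sensitive strings before the prompt is forwarded.

# Topics each team is blocked from asking the model about (per-team policy).
BLOCKED_TOPICS = {
    "finance": [re.compile(r"pre-release earnings", re.I)],
    "engineering": [],
}

# Patterns treated as sensitive and masked before the prompt leaves the gateway.
REDACTIONS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\b\d{13,16}\b"), "[CARD]"),
]


def screen_prompt(team: str, prompt: str) -> str:
    """Raise on a policy violation, then return the prompt with sensitive data masked."""
    for pattern in BLOCKED_TOPICS.get(team, []):
        if pattern.search(prompt):
            raise PermissionError(f"Prompt blocked by policy for team '{team}'")
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt


if __name__ == "__main__":
    # Redaction: the email address is masked before the prompt is forwarded.
    print(screen_prompt("engineering", "Draft a reply to alice@example.com"))
    # Policy: a finance staffer asking about pre-release earnings is blocked.
    try:
        screen_prompt("finance", "Summarize the pre-release earnings report")
    except PermissionError as err:
        print(err)
```

In a real deployment this logic would sit in an inline proxy between employees and the model endpoint, which is where the identity-based policies and prompt-injection defenses Caccia describes below would also live.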

“We think the best way to help enterprises is to define the problem in a way that makes sense — for example, safe adoption of AI — and then sell a solution that addresses the problem,” Caccia said. “The CISO wants to protect the business, and WitnessAI helps them do that by ensuring data protection, preventing prompt injection and enforcing identity-based policies. The chief privacy officer wants to make sure that existing — and incoming — regulations are being followed, and we give them visibility and a way to report on activity and risk.”

But there’s one tricky thing about WitnessAI from a privacy perspective: All data passes through its platform before reaching a model. The company is transparent about this, even offering tools to monitor which models employees access, the questions they ask the models and the responses they get. But it could create its own privacy risks.

In response to questions about WitnessAI’s privacy policy, Caccia said that the platform is “isolated” and encrypted to prevent customer secrets from spilling out into the open.

“We’ve built a millisecond-latency platform with regulatory separation built right in — a unique, isolated design to protect enterprise AI activity in a way that’s fundamentally different from the usual multi-tenant software-as-a-service services,” he said. “We create a separate instance of our platform for each customer, encrypted with their keys. Their AI activity data is isolated to them — we can’t see it.”

Perhaps that will allay customers’ fears. As for workers worried about the surveillance potential of WitnessAI’s platform, it’s a tougher call.

Surveys show that people don’t generally appreciate having their workplace activity monitored, regardless of the reason, and believe it negatively impacts company morale. Nearly a third of respondents to a Forbes survey said they might consider leaving their jobs if their employer monitored their online activity and communications.

But Caccia asserts that interest in WitnessAI’s platform has been and remains strong, with a pipeline of 25 early corporate users in its proof-of-concept phase. (It won’t become generally available until Q3.) And, in a vote of confidence from VCs, WitnessAI has raised $27.5 million from Ballistic Ventures (which incubated WitnessAI) and GV, Google’s corporate venture arm.

The plan is to put the tranche of funding toward growing WitnessAI’s 18-person team to 40 by the end of the year. Growth will certainly be key to beating back WitnessAI’s rivals in the nascent space for model compliance and governance solutions, not only from tech giants like AWS, Google and Salesforce but also from startups such as CalypsoAI.

“We’ve built our plan to get well into 2026 even if we had no sales at all, but we’ve already got almost 20 times the pipeline needed to hit our sales targets this year,” Caccia said. “This is our initial funding round and public launch, but safe AI enablement and use is a new area, and all of our features are developing with this new market.”

