It’s now widely accepted that artificial intelligence will permeate nearly every aspect of our lives. That presents new challenges related to AI threats, enterprise AI management and adapting security programs for an increasingly AI-driven world.
What matters most in assessing your risk exposure is knowing what sort of AI is being used. There’s a direct correlation: the more modern the generative and agentic AI, the greater the threats it can expose us to. A lack of transparency in gen AI apps makes it harder for security teams to gain vital visibility into where potentially sensitive data goes.
The convenience and accessibility of AI tools often outweigh users’ perceived security risks, creating a cycle of adoption and potential exposure. To counter these evolving threats effectively, organizations must adapt their security programs. With that in mind, let’s explore strategies to keep these challenges from manifesting across various organizational domains.
Fostering flexibility
A key area for transformation is rethinking approval workflows for AI use. Traditional security models, often characterized by binary yes/no or allow/block decisions, are too rigid for the dynamic nature of AI. Instead, security teams must embrace more flexible approaches, potentially including opt-in/opt-out models for certain AI functions, especially when customer or regulated data is involved.
One strategy I’m implementing with my team is setting clear parameters for what users can and can’t use. The goal is to shift security from a bottleneck that impedes innovation to an enabler of secure AI adoption, with clear lines between in-bounds and out-of-bounds activities.
Yes, there will always be a layer of shadow AI and agentic AI to keep an eye out for as those expand your attack surface, but there’s also a need to respond to legitimate requests from business units. It’s important to have a system that reduces the friction of processing and auditing requests for new tools as they come in from other teams within the organization.
I started by searching for trusted platforms and partners, and finding ways to process and approve requests for those tools faster. I call this a “yellow light” process, in which you have the option to speed up or hit the brakes. This means finding the two questions that absolutely must be answered about a tool for a trusted platform or partner to be approved. For example, we may ask, “Are you learning from my data?” and “What controls do you have in place for us to be able to turn this tool on or off if we need to?”
This allows us to compress a review that once took days into just 15 minutes. Now, teams have a degree of flexibility to use the tools they need without sacrificing the vital layers of security.
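To make the “yellow light” idea concrete, here is a minimal sketch of how a two-question triage like this might look if codified. The request fields, question wording and decision labels are hypothetical illustrations under my stated assumptions, not an actual policy engine or vendor API.

```python
# Hypothetical sketch of a "yellow light" triage for new AI tool requests.
# Field names, gating questions and decision labels are illustrative only.
from dataclasses import dataclass


@dataclass
class ToolRequest:
    tool_name: str
    vendor_trains_on_our_data: bool   # "Are you learning from my data?"
    can_be_disabled_on_demand: bool   # "Can we turn this tool on or off?"


def triage(request: ToolRequest) -> str:
    """Return a fast-track decision: 'accelerate', 'escalate' or 'hold'."""
    if request.vendor_trains_on_our_data:
        # Training on our data is a hard stop until contracts say otherwise.
        return "hold"
    if not request.can_be_disabled_on_demand:
        # No kill switch: hit the brakes and route to a full security review.
        return "escalate"
    # Both gating questions pass: speed the review through.
    return "accelerate"


print(triage(ToolRequest("ExampleCopilot", False, True)))  # -> accelerate
```

The point of the sketch is the shape of the decision, not the specific checks: two well-chosen questions can replace a days-long queue for most requests while still flagging the ones that need a full review.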
The power of AI ambassadors
Many of us are familiar with the concept of having “security champions” within an organization, but I’m also a big proponent of “AI ambassadors.” These programs seek to engage and empower business units to take on a greater share of AI governance responsibilities.
AI ambassadors are people from across teams and departments who know the rules around AI tools and can encourage their teams to follow them. Essentially, they operate as an extension of the security team, bringing a layer of accountability that ensures their colleagues follow the proper procedures when selecting and using AI tools. They can inventory their team’s apps and request reviews through the approval process, giving them more investment in ensuring the ways their own team uses AI are secure and in line with broader security policies.
By training and equipping AI ambassadors within different departments, organizations can decentralize some of the initial security review processes. The ambassadors are responsible for understanding and adhering to AI governance policies, ensuring that security considerations are integrated from the outset with any new tool brought into the organization.
Security champions and AI ambassadors aren’t the same. Keeping the two groups distinct from each other fosters a culture of shared responsibility, enabling faster deployment of AI solutions while maintaining customer-focused governance and a robust security posture.
Ultimately, AI doesn’t require major changes, but rather smaller strategic adjustments to existing strategies that alleviate some of the friction for end users and foster a stronger security culture.
By understanding the true nature of AI-driven threats, addressing the unique challenges of managing AI and fostering a culture of shared security responsibility, organizations can not only mitigate risks but also harness the technology’s full potential.
James Robinson is the chief information security officer at Netskope Inc. He wrote this article for SiliconANGLE.

