A gaggle of distinguished tech executives will join the Artificial Intelligence Safety and Security Board, a panel tasked with advising the federal government on using AI in critical infrastructure.
The Wall Street Journal reported the event today. In response to the paper, the panel comprises not only representatives of the tech industry but additionally academics, civil rights leaders and the chief executives of several critical infrastructure firms. In all, the Artificial Intelligence Safety and Security Board could have nearly two dozen members.
Microsoft Corp. Chief Executive Satya Nadella, Nvidia Corp. CEO Jensen Huang and OpenAI’s San Altman are among the many participants. They shall be joined by their counterparts at Advanced Micro Devices Inc., Amazon Web Services Inc., Anthropic PBC, Cisco Systems Inc., Google LLC and IBM Corp.
Secretary of Homeland Security Alejandro Mayorkas is leading the panel. In response to the Journal, the Artificial Intelligence Safety and Security Board will advise the Department of Homeland Security on find out how to safely apply AI in critical infrastructure. The panel’s members will convene every three months starting in May.
Besides providing advice to the federal government, the panel may even produce AI recommendations for critical infrastructure organizations. The trouble is ready to concentrate on firms akin to power grid operators, manufacturers and transportation service providers. The panel’s recommendations will reportedly concentrate on two principal topics: ways of applying AI in critical infrastructure and the potential risks posed by the technology.
Multiple cybersecurity firms have observed hacking campaigns that make use of generative AI. In a few of the campaigns, hackers are leveraging large language models to generate phishing emails. In other cases, AI is getting used to support the event of malware.
The Artificial Intelligence Safety and Security Board was formed through an executive order on AI that President Joe Biden signed last 12 months. The order also called on the federal government to take a lot of other steps to deal with the technology’s risks. The Commerce Department will develop guidance for identifying AI-generated content, while the National Institute of Standards and Technology is working on AI safety standards.
The chief order established recent requirements for personal firms as well. Particularly, tech firms developing advanced AI must now share data about recent models’ safety with the federal government. This data includes the outcomes of so-called red team tests, evaluations that assess neural networks’ safety by simulating malicious prompts.
Several of the AI ecosystem’s largest players have made algorithm safety a spotlight of their research efforts. OpenAI, for instance, in December revealed that it’s developing an automatic approach to addressing the risks posed by advanced neural networks. The tactic involves supervising a complicated AI model’s output using a second, less capable neural network.
Image: Unsplash
Your vote of support is very important to us and it helps us keep the content FREE.
One click below supports our mission to supply free, deep, and relevant content.
Join our community on YouTube
Join the community that features greater than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and plenty of more luminaries and experts.
THANK YOU