Estimated reading time: 8 minutes
Given some of artificial intelligence's (AI) challenges right now, it might be tempting to say that AI isn't the panacea that everyone expected it to be. Personally, I think we're still very early in the AI adoption curve, so organizations must continue to pay attention to what's developing and conduct experiments to see how it works.
In the past, we've talked about the need for organizations to develop an AI strategy. Today, I want to talk about developing an internal AI policy. I had the chance to hear our friend Carrie Cherveny speak at SHRM's 2024 Annual Conference on "Getting Smart About AI", which was very informative. So, I asked Carrie if we could talk about developing AI policy and thankfully, she said yes.
Having an AI policy is a fundamental step toward being "ready" for AI in your workplace. An AI policy is now just as essential as, for instance, your anti-harassment or Family and Medical Leave Act (FMLA) policies.
Carrie Cherveny is chief compliance officer and senior vice president of strategic solutions at HUB International. In her role, Carrie works with clients to develop strategies that ensure compliance and risk mitigation when it comes to benefits and employment practices. As always, please remember that her comments should not be construed as legal advice or as pertaining to any specific factual situations. If you have detailed questions, they should be addressed directly with your friendly neighborhood employment attorney.
Carrie, thanks for being here. Why do organizations need to consider having an internal AI policy (in addition to an AI strategy)?
[Cherveny] Today AI is everywhere. Did you catch any of the Olympic games? It seemed like more than half the ads were for AI platforms. In fact, on June 10, 2024, Apple announced the upcoming launch of Apple Intelligence – its new artificial intelligence technology that will be integrated into the release of iOS 18. According to the Apple press release, "It harnesses the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks". Ready or not – AI is here. Having an AI policy is a fundamental step toward being "ready" for AI in your workplace. An AI policy is now just as essential as, for instance, your anti-harassment or Family and Medical Leave Act (FMLA) policies.
Employers have some decisions to make. They need to decide whether they will allow the use of AI in the workplace and whether AI will be limited to a specific platform. Likewise, employers will need to identify the departments and roles that are permitted and/or prohibited from using AI. Well-crafted policies are designed to specifically address these questions and more.
When it comes to drafting policies, human resources departments often take the lead. Who should be involved in helping to develop an AI policy?
[Cherveny] AI has the potential to affect every corner of your organization. That means your organization's AI policy should be multifaceted and include various subject matter disciplines. Organizations should establish an AI committee that includes, at a minimum:
- Legal/in-house counsel
- Human Resources
- Finance/Accounting
- Operations
Other subject matter expert (SME) committee members will depend on the nature of the business. For example, a healthcare organization would likely include its Health Insurance Portability and Accountability Act (HIPAA) Privacy Officer. A financial services firm may include its compliance department along with a data privacy officer. Employers with union employees may want to include a union representative.
Once we determine who should be involved in helping to develop an AI policy, is there a framework they can follow to identify key areas of consideration?
[Cherveny] Not only should the AI committee work together to develop a comprehensive policy, but the committee should also be charged with vetting the AI tools. For example, the committee should develop a robust discovery process to better understand the vendor's reputation, how it handles the information entered into its system, and its data security and cybersecurity measures.
The organization should draft comprehensive, clear, and unambiguous "rules of the road" for the use of AI in the workplace including, for example:
- Prohibited uses of AI. Consider the types of information that employees may never put into an AI platform, such as Personally Identifiable Information (PII), Protected Health Information (PHI), and company confidential information (financials, methodologies, trade secrets, attorney-client privileged information, etc.).
- Permitted uses of AI. When may an employee use AI in the performance of their job? For example, AI may create efficiencies for general research, creating/identifying sample documents, wordsmithing a written document, or job aids (such as skill development, or learning a new system or a tool within a system, such as Excel pivot tables).
- Required safeguards. Will employees be required to "fact check" data or findings sourced through AI? We've all read about the attorneys who submitted briefs to the courts that were full of fictitious cases and citations. Employees should be required to check reliable sources to ensure the AI findings are accurate and credible. Some AI platforms will also provide the citations and sources for their findings – Microsoft CoPilot, for example. Nevertheless, even when the AI provides its sources, the end user should also check sources outside of the AI references to ensure complete, thorough, and accurate work.
- Required notices and disclosure. Will you require your employees to disclose when they use AI? For example, under a new New York state law, users are required to disclose when they use AI. Notices and disclosures are quickly becoming a best practice in AI policy requirements. Employers may want to consider requiring employees to disclose the purpose or reason for using AI, identify the platform(s) used, and provide a summary of the results that were included in the work product.
- Required attribution and citations. Will you require employees to identify the specific AI tools on which they relied when developing their work product? This is a little like a notice or disclosure policy but requires the employee to specifically identify the AI tool or platform – not simply the use of AI.
Since we’re having an AI conversation, I asked Microsoft CoPilot for a ‘sample employer AI use policy for the workplace’. Here’s what it produced:
AI Tool Usage Policy [template created by Microsoft CoPilot]
Policy Brief & Purpose: Our company's AI tool usage policy aims to outline the responsible and secure use of artificial intelligence (AI) tools in the workplace. This is especially important when handling sensitive data and proprietary company and customer information.
Scope: AI tools are revolutionizing our work by automating tasks, enhancing decision-making, and providing insights. However, they also introduce challenges regarding information security and data protection.
Purpose: The purpose of this policy is to ensure that all employees use AI tools in a secure, responsible, and confidential manner, adhering to the requirements for evaluating security risks and protecting confidential data.
Policy Statement: We recognize the potential risks AI tools pose to our operations and customers. We’re committed to maintaining the confidentiality, integrity, and availability of all company and customer data.
Security Best Practices: Employees are expected to follow these best practices when using AI tools:
- Evaluation of AI Tools: Before using any AI tool, evaluate its security measures, terms of service, and privacy policy. Check the tool developer's reputation and any third-party services involved.
- Protection of Confidential Data: Do not upload or share confidential, proprietary, or regulated data without prior approval from the appropriate department.
- Access Control: Do not grant access to AI tools outside the company without proper approval and compliance with security requirements.
- Use of Reputable AI Tools: Only use AI tools that are reputable and meet our security and data protection standards.
Compliance: All employees must comply with this policy as part of their employment terms. Any violation may result in disciplinary action up to and including termination of employment.
Consider this template a starting point, and modify it based on your specific needs and legal requirements. It's also advisable to consult with legal counsel to ensure compliance with all applicable laws and regulations. Remember, an effective policy is one that is clear, comprehensive, and enforceable.
I want to thank Carrie for sharing her knowledge with us. And I love that she included the sample AI policy template to get our thinking started! If you want to learn more, check out this archived webinar from HUB International on "Humanizing HR in the Age of AI: Embracing the Technology Revolution".
Once organizations decide that they need to create an AI policy, the challenge begins of determining what to include in the policy. Carrie mentioned some initial considerations here, but in our next article, we're going to do a deeper dive into the components of an artificial intelligence policy. Stay tuned!
Image created by DALL-E demonstrating the importance of human oversight in AI
The post Why Organizations Need an Artificial Intelligence Policy [Part 1] appeared first on hr bartender.