HR Policy: How to create an AI policy
By the time this piece is finished, artificial intelligence (AI) might have morphed into something new; it's moving that fast. While nearly everyone is finding ways to use AI to enhance productivity, efficiency, and decision-making, AI also brings challenges. Employers must comply with applicable law while protecting the privacy, confidentiality, integrity, and availability of company information.
These challenges include ethics, bias, data security, and cybersecurity compliance. To harness AI's potential while managing its risks, employers benefit from a well-defined policy that governs employees' use of AI. While it's best to create policies that address emerging workplace issues before situations arise, many employers may have been caught without a needed AI policy.
Employees are likely already using AI without realizing it. Simply searching online can trigger responses from AI sources. Employers might even leverage AI to help write an AI workplace policy.
One reason to create policies is to anticipate situations before they arise. When crafting an AI policy, remember that most states have laws governing data security, and a few have laws specifically focused on AI usage.
One of the first steps in crafting an AI policy is to get input from appropriate stakeholders, such as the company's IT department. An IT department might, for example, maintain a list of AI tools approved for use and, depending on the software setup, the users or computers authorized to use each tool. The NIST AI Risk Management Framework can also provide useful guidance.
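For illustration, here is a minimal sketch of what such a registry might look like if an IT department tracked it in code, assuming a simple Python structure; the tool names, vendor, fields, and usernames below are hypothetical, not a real product list:

```python
# Hypothetical sketch of an approved-AI-tools registry. All names and
# fields are illustrative assumptions, not a recommendation of any tool.
from dataclasses import dataclass, field

@dataclass
class ApprovedTool:
    name: str
    vendor: str
    approved_uses: list[str]                                  # e.g., "drafting"
    authorized_users: set[str] = field(default_factory=set)   # empty = all employees

REGISTRY = {
    "example-chat-assistant": ApprovedTool(
        name="example-chat-assistant",
        vendor="ExampleVendor",
        approved_uses=["drafting", "summarization"],
        authorized_users={"jdoe", "asmith"},
    ),
}

def is_authorized(tool: str, user: str) -> bool:
    """Return True only if the tool is vetted and the user may run it."""
    entry = REGISTRY.get(tool)
    if entry is None:          # not in the registry: unvetted ("shadow AI")
        return False
    return not entry.authorized_users or user in entry.authorized_users

if __name__ == "__main__":
    print(is_authorized("example-chat-assistant", "jdoe"))  # True
    print(is_authorized("unvetted-tool", "jdoe"))           # False
```

A registry like this can also give the "shadow AI" provision discussed under policy violations below a concrete hook: any tool absent from it is unvetted by definition.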
The policy should cover all types of AI, including generative AI (GAI), augmented AI (sometimes called intelligence amplification, or IA), and algorithmic AI (AAI).
An AI policy should set strict guidelines on how data is collected, stored, shared, and processed, ensuring that customer and employee privacy remains protected.
Components of a policy
Employers might want to consider the following for inclusion in an AI policy:
Definitions: This section can define the company (including affiliates and subsidiaries), who counts as an AI user, what company information and resources are covered, and what qualifies as an AI tool. The definitions might also list which AI tools the company has vetted and approved for employee use.
Principles: The policy should provide a clear framework for ethical AI use: when the policy applies, and when and how AI may be used to enhance rather than harm the workforce and the company's reputation. AI can "hallucinate," producing incorrect information, so employees should verify AI responses against vetted sources before using them.
AI-produced results may also reflect biased or incomplete data sets on which the tools were trained. Employees should not use AI tools blindly for decision making or content creation, and should never rely on them alone for important inquiries.
Publishing the output of AI tools could violate the intellectual property rights of third parties. Before publishing or distributing AI-generated content, employees must obtain approval.
Responsibilities: The policy should clearly state user responsibilities. Users of an approved AI tool should be held responsible for verifying the accuracy and appropriateness of its output. They should also, for example, know not to paste into an AI prompt any information they don't want to share with the world, including content they intend to copyright, company secrets, and so on (see the sketch after this list).
AI users might be required to provide a disclaimer or other notice that the output was generated by AI.
Users of an approved AI tool might be required to contact their leader immediately if they become aware of a possible violation of the policy; a breach of data privacy, confidentiality, integrity, or availability; or a circumstance in which an AI tool is generating erroneous, incomplete, misleading, offensive, harassing, or discriminatory output.
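To make the "don't paste sensitive information" responsibility concrete, here is a minimal sketch of a prompt scrubber that masks a few obvious patterns before text is sent to an external AI tool, assuming a Python-based workflow; the patterns are illustrative assumptions and far from exhaustive:

```python
# Minimal sketch of a prompt "scrubber". The patterns below are
# illustrative only; a real deployment would need much broader rules.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com about SSN 123-45-6789."
    print(scrub(raw))
    # Contact [REDACTED-EMAIL] about SSN [REDACTED-SSN].
```

A scrubber like this is a technical backstop, not a substitute for the user responsibilities above; employees remain accountable for what they submit.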
Policy violations
Like many policies, an AI policy should spell out the repercussions for violating it. This might include repercussions for using "shadow" AI: tools employees may be using that the employer has not yet vetted and approved.
Review it
Once a policy is written, it should be reviewed. Stakeholders involved in its creation should be included in the review process. This is when employers can resolve any questions and make the required adjustments.
Be sure the policy is clear and easy to read. Employers can make policies readable through clear formatting and simple language.
As with any new policy, having knowledgeable counsel familiar with the organization review it can be a big help. While all policies should be reviewed regularly, given the speed at which AI is evolving, an AI policy might benefit from more frequent reviews.
Tell employees about it
If employees aren't aware of a new or revised policy, holding them to it can be challenging. Therefore, employers should make employees aware of the policy. Simply sending out a bulk email announcing it might not be enough. Include a link to the policy along with a contact for any questions.
Managers and supervisors can help introduce and explain a policy and help funnel any related questions to the appropriate people.
Try to anticipate potential questions and be prepared to address them. Explain the reason behind the policy and why it is beneficial. This can help earn respect and buy-in from the workforce.
Employees might need some training on the policy, and if so, this training should be provided before or shortly after the policy is shared.
Audit it
While most policies are audited annually or when changes occur, given the rapid pace of AI development, this policy may warrant more regular review.
The content of this piece was not written with the help of AI.