After an international law firm blocked general access to several AI tools following a ‘significant increase in usage’ by staff not in line with its AI policy, Su Apps, Employment Law Partner at Ashfords, looks at what a good workplace AI policy should include.
AI adoption seems almost inevitable for every business today, with billions being pumped into its development and deployment across different sectors.
The message is clear – we must embrace it or get left behind. But what happens when businesses lean into it a little too much?
It’s a fine line, and one which the law firm Hill Dickinson found itself grappling with recently, when a ‘significant increase in usage’ – including more than 32,000 hits on ChatGPT in a single week – prompted it to block general access to AI tools, permitting staff to use them only on request.
Whilst it appears that Hill Dickinson had an AI policy in place to help it address these issues, what is concerning is the number of businesses that do not. Research conducted by Ashfords last year amongst 250 British businesses revealed that 41% did not have a documented AI policy in place, and a further 12% – more than one in 10 – did not know they needed one.
Incredibly, almost one in five (18%) also described AI as a passing ‘fad’ which they did not believe would affect them.
The effective use of Generative AI is a potential game-changer for businesses of all sizes, but if not used responsibly it also carries risk. It is therefore vital that employers protect themselves by providing clear guidance on how it should be used.
What makes a good workplace AI policy
AI tools are becoming part of the everyday in many jobs, and it is incumbent upon employers to ensure responsible usage by explaining what the rules are and why they’re in place.
An AI policy doesn’t need to be onerous or overly complicated, but it does need to be clear on:
• Explaining what ‘AI tools’ are – particularly those used in your business – from chatbots such as ChatGPT and automation tools to predictive analytics and image recognition.
• Which tools can and cannot be used for specific tasks within your business. You should also stipulate whether this applies only to business accounts or also to individuals’ personal accounts if they are being used to carry out work-related tasks.
• Your rights as an employer to monitor employees’ usage of AI in the workplace and what form this might take.
• Why only general and/or anonymised data may be used, and why entering personal, private or confidential data into AI applications is prohibited.
• Human accountability. Employees should understand that anything produced by AI must be considered a draft and, as such, they retain full responsibility and accountability for the end result. You may wish to also include details of how outputs should be audited by employees to confirm their accuracy. AI-generated content is getting more sophisticated all the time, but it is still far from perfect and blaming it for mistakes is no defence.
• What the consequences will be for anyone found to be in breach of the AI policy, up to and including disciplinary action. Detailing this may also act as a deterrent.
• How the policy interacts with other policies such as the privacy policy, cybersecurity policy or employees’ code of conduct.
Businesses may, as part of their policy, provide general guidelines for the proper use of AI at work so that it is both effective and appropriate. Employers may also take the belt-and-braces approach of listing exactly how AI tools should not be used, spelling out the risks, and even requiring staff to keep a record of each time they use AI and why.
When drafting an AI policy, it is advisable to consult with your business’s legal advisor to ensure that it complies with all the various rules and regulations and anticipates and addresses potential legal issues.
Larger companies may appoint a Chief AI Officer, but for those not operating at such a scale, it’s worth considering whether an existing member of staff could act as the main point of contact for AI-related queries.
How you communicate your policy to employees is key and, as well as utilising all the channels at your disposal, leadership buy-in is crucial in helping spread the message and getting staff on board.
Ongoing training should also be made mandatory: it is essential to ensure that employees understand how to use and apply permitted AI tools to get the most out of them, whilst minimising areas of risk.
Finally, a good workplace AI policy should be reviewed regularly. At the moment, every six months may be considered appropriate in such a rapidly evolving field. AI audits are also considered good practice to ensure compliance and provide a useful barometer of the quality and accuracy of the tools being used, as well as highlighting any newer tools which may prove more effective.
Help on the horizon?
SME leaders across the UK have urged the government to offer subsidised training to help them navigate the rapid advancements in AI.
It is one of a series of recommendations put forward in a new report from Goldman Sachs, Generation Growth: The Growth Agenda, which sought input from hundreds who took part in the bank’s 10,000 Small Businesses management training programme.
Training would undoubtedly increase confidence in the implementation of AI, with our own research highlighting the ability to use it safely and effectively as one of the main concerns for businesses.
Hill Dickinson was somewhat unlucky in that its staff memo was leaked to the BBC and became a news story, but the firm should in fact be lauded for having a policy and procedures in place which enabled it to quickly identify and take steps to address an increase in usage.
Businesses that have not made sufficient preparations to regulate the use of AI could find themselves facing a hefty fine from the Information Commissioner’s Office for data protection breaches arising from AI misuse, or even hauled before the courts, so it is well worth laying the foundations by putting a robust policy in place now.