AI and the workplace of the future

Artificial intelligence in the workplace is here to stay, but the different contexts in which it is used each bring potential legal pitfalls for employers. This feature highlights some issues
19th June 2023 | Sarah Leslie, Morgan McSherry

Over recent months, ChatGPT has hit the news – and many of our phone and laptop screens – as we come to terms with the rapid advances in artificial intelligence (“AI”) technology.

In the workplace, there has been a gradual shift towards AI in recent years, with its use expanding into areas such as recruitment, management and performance review.

This article explores the current uses of AI and considers what legal pitfalls employers should be aware of to ensure their use of the technology complies with all relevant legislation.

A comprehensive framework?

Although there is no overarching legislation or regulatory body for AI, and none is imminently anticipated, employers should be aware of the following laws, which can affect the use of AI in the workplace:

  • The Human Rights Act 1998 – covers the right not to be discriminated against as well as rights of autonomy, privacy, and transparency.
  • The Employment Rights Act 1996 – deals with the need for fair procedures for grievances and dismissals.
  • The Equality Act 2010 – provides further protection against discrimination.
  • The General Data Protection Regulation (GDPR) – provides that individuals have a “right to object to automated individual decision making”, as well as “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.

When introducing any AI-driven processing of personal data, employers must update employee and applicant GDPR privacy notices so that those whose data is processed are informed that the processing is carried out by AI.

Current workplace uses and potential pitfalls

Recruitment

The most prevalent use of AI in the recruitment process is “CV screening”. AI is also used to advertise jobs, drawing on online databases to display vacancies and, through algorithms, processing information to direct a particular job to certain people. Using AI to sort through job applications and create an interview shortlist can save time and expense by disregarding applicants with little prospect of success.
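The shortlisting step described above can be sketched in a few lines. The following is a hypothetical keyword-matching screener, not any vendor’s actual product: it scores each CV by overlap with the job’s required skills and keeps the top candidates (all names and skills are invented for illustration).

```python
# Hypothetical sketch of keyword-based CV screening: score each
# application by how many of the job's required skills it mentions,
# then shortlist the highest-scoring candidates.
def shortlist(applications, required_skills, top_n=2):
    """applications: dict of candidate name -> set of skills on the CV."""
    scored = {
        name: len(skills & required_skills)
        for name, skills in applications.items()
    }
    # Highest overlap first; ties broken alphabetically for determinism.
    ranked = sorted(scored, key=lambda name: (-scored[name], name))
    return ranked[:top_n]

apps = {
    "Ahmed": {"contract law", "litigation", "negotiation"},
    "Beth": {"conveyancing", "negotiation"},
    "Chen": {"contract law", "negotiation", "drafting"},
}
print(shortlist(apps, {"contract law", "negotiation", "drafting"}))
# ['Chen', 'Ahmed']
```

Real screening systems are far more elaborate, but the core pattern, reducing each applicant to a score and cutting below a threshold before any human reads the CV, is the same.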

As businesses become more familiar with AI, some have expanded its use to conduct psychometric testing or assessments before the interview stage. The results of these tests can be used to further whittle down applications for consideration, before there is any human engagement. As a final step, AI can even conduct interviews with applicants, potentially as an initial further screening procedure before proceeding with face-to-face interviews.

The obvious benefit for employers is efficiency and cost saving. Although there is an initial cost in purchasing and implementing the technology, once it is in place there are usually minimal further costs. For example, Unilever estimates that automated screening has cut its average recruitment time by 75%, with projected cost savings of around £250,000 in the first year of using the technology.

There is also an argument that this way of recruiting new staff will reduce discrimination, as removing the human element should remove any ingrained bias. In practice, however, this is rarely the case. The algorithm used to screen candidates is only as good as what it has been taught or has learnt from its analysis of historic data. This means that any bias ingrained in a company’s recruitment process is likely to be replicated by the AI.
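How biased training data propagates can be shown with a toy example (every figure here is invented): a screening model “trained” on historic hiring decisions in which a proxy attribute happens to correlate with rejection will reproduce that pattern, even though the attribute has nothing to do with ability.

```python
# Illustrative sketch with invented data: a screener "trained" on biased
# historic hiring decisions learns to reproduce the bias. Here "netball"
# only appears on historically rejected CVs, "rugby" on accepted ones.
from collections import defaultdict

history = [  # (keywords on CV, was the candidate hired?)
    ({"python", "rugby"}, True),
    ({"python", "rugby"}, True),
    ({"python", "netball"}, False),
    ({"python", "netball"}, False),
]

# "Training": score each keyword by the historic hire rate of CVs containing it.
totals, hires = defaultdict(int), defaultdict(int)
for keywords, hired in history:
    for kw in keywords:
        totals[kw] += 1
        hires[kw] += hired

def score(cv_keywords):
    # Average historic hire rate across the CV's keywords.
    rates = [hires[kw] / totals[kw] for kw in cv_keywords if totals[kw]]
    return sum(rates) / len(rates) if rates else 0.0

# Two equally qualified candidates differ only in an irrelevant hobby,
# yet the learned scores diverge: the historic bias is replicated.
print(score({"python", "rugby"}))    # 0.75
print(score({"python", "netball"}))  # 0.25
```

This is, in miniature, the mechanism behind the Amazon episode described below: the model never sees a protected characteristic directly, but learns its proxies from the outcomes it was trained on.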

As an example, Amazon had built an automated algorithm to streamline its recruitment process. It programmed the AI using its own historic recruitment data in order to teach the program what the company valued in its recruits. In doing this, Amazon taught the algorithm to favour male candidates over female candidates. Partly in consequence of this, Amazon later abandoned the project.

New technology has also emerged in recent years, allowing AI to conduct video interviews, recording applicants’ responses and analysing their answers and vocal and facial cues. This can expose employers to potential indirect discrimination claims, as these algorithms may discriminate against candidates with autism or other conditions that affect facial expression. Facial recognition technology has also been widely criticised for failing to recognise non-white faces, creating the potential for an indirect race discrimination claim.

Management

The potential use of AI in the workplace does not end with the recruitment process. Applicants who are successful may still be subject to decisions made by AI. AI is increasingly used in the management of employees, particularly in shift and work allocation, potentially dramatically reducing the management time spent on these tasks.

Technology has been developed that will analyse customer footfall, workloads, orders and deadlines to determine accurately how many members of staff will be required to meet the needs of the business on a particular day. A good example is the Thorpe Park tourist attraction, where AI analyses the number of tickets sold on a similar day and allocates an appropriate level of staff. These programs facilitate shift swaps, remove favouritism and can even analyse employees’ skills and allocate them to the best-suited shift.
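A demand-based staffing rule of the kind described might be sketched as follows. This is a minimal illustration, not the Thorpe Park system; the per-staff capacity figure and minimum staffing floor are assumptions invented for the example.

```python
# Hypothetical sketch of demand-based shift allocation: estimate staff
# needed from ticket sales on comparable past days, assuming each staff
# member can serve a fixed number of visitors per day.
import math

VISITORS_PER_STAFF = 50  # assumed service capacity per staff member
MIN_STAFF = 5            # assumed floor for safety/coverage reasons

def staff_needed(similar_day_tickets):
    """Average ticket sales on comparable days, divided by capacity."""
    if not similar_day_tickets:
        return MIN_STAFF  # no comparable data: fall back to the floor
    expected = sum(similar_day_tickets) / len(similar_day_tickets)
    return max(MIN_STAFF, math.ceil(expected / VISITORS_PER_STAFF))

print(staff_needed([4800, 5200, 5000]))  # 100
```

The legal exposure discussed below arises from everything such a rule leaves out: it optimises headcount against demand with no notion of individual flexibility or adjustment needs.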

However, one glaring issue with removing the human element of shift allocation is just that: removing the human element. Although the algorithm will not play favourites, it will also not show any compassion to employees who may require flexibility. In some cases, this could also lead to discrimination claims of indirect sex discrimination due to a lack of flexibility for caring commitments, or potential failure to make reasonable adjustments for employees with disabilities.

These same algorithms can also allocate tasks to certain workers, using the same technology to analyse workloads, skills and availability. Delivery drivers, for example, will be allocated their delivery destinations, the route they should take and the timeframe in which they should complete their work. Although this saves time and is possibly more efficient, it runs the risk of creating a high-pressure system in which employees are trying to meet their targets to impress the algorithm.

Using another example from Amazon, workers in its warehouses are now provided with a “wearable haptic feedback device” which tells them which items to pick, where they will find those items and the number of seconds in which they should be found. While improving efficiency, such systems seriously reduce the control and autonomy of individual workers, which may lead to a reduction in morale and pride in their work.

Performance review

Some employers are also using AI for performance analysis of their workforce, for example by monitoring employees’ activities. The baseline for any such algorithm is programming the system with how the employer believes an employee should be performing. This use of AI can clearly expose an employer to risk, not least direct and indirect sex and disability discrimination claims arising from setting an unrealistic target for all staff.

Normally in such cases, AI collects data about employees’ activity from customers, colleagues and tracking/monitoring software. These technologies range from monitoring how long an employee spends on a certain task, or how long they are away from their computer, to more advanced models involving wearable devices that monitor conversations, tone of voice and engagement with others. This type of technology is increasingly common in call centres.

One particular AI programme, Cogito, provides recorded and live assessments of each employee’s performance to their manager, along with voice analysis of their conversations and feedback on matters such as speed and tone of voice.

Technology such as this comes with many challenges, particularly around consent: even allowing employees the choice of whether they wish to participate in activity monitoring can lead to pressure to agree, out of fear of suffering detriment if they do not. If introducing software like this for the first time, employers should be sure to inform their workforce appropriately, and consult with any relevant employee groups or trade unions. As a cautionary tale, the Telegraph newspaper recently introduced software intended to monitor, among other things, how long employees spend at their desks, but failed to inform staff that they were going to do so. Predictably, this led to backlash from the National Union of Journalists.

The human factor

As well as performance monitoring on a day-to-day basis, some companies are using this technology in performance reviews, either to replace the role of a manager entirely by carrying out the review itself, or to aid a manager-led review using collected data. The use of AI in this way has many benefits: it can remove racial and gender bias, remove “recency bias” (i.e. it will give the same weight to something that happened nine months ago as to something that happened one month ago), control “contrast bias” (where one employee is compared to another), and produce an overall more objective review.
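The “recency bias” point can be made concrete with a small, invented example: a human-style review implicitly discounts older months, while an automated review can weight every month equally. The decay factor and score scale below are assumptions for illustration.

```python
# Sketch of controlling "recency bias" in a performance review:
# weight every month's score equally, rather than discounting older months.
def flat_average(monthly_scores):
    # Equal weight: month 1 counts the same as month 12.
    return sum(monthly_scores) / len(monthly_scores)

def recency_weighted(monthly_scores, decay=0.8):
    # Human-style recency bias: each older month counts `decay` times less.
    n = len(monthly_scores)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    total = sum(w * s for w, s in zip(weights, monthly_scores))
    return total / sum(weights)

# Strong early performance, weaker recent months (hypothetical 1-5 scores):
scores = [5, 5, 5, 5, 2, 2]
print(flat_average(scores))                 # 4.0
print(round(recency_weighted(scores), 2))   # 3.54
```

The same six months of work yields a lower mark under recency weighting; an equal-weight review removes that distortion, which is the benefit the paragraph above describes.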

This being said, the consequences of this use of technology should also be taken into account. The GDPR creates a robust right for an individual not to be subjected to a wholly AI-driven decision-making process, meaning that employers must ensure there is at least some human involvement and, preferably, that the final outcome is determined by a human decision-maker.
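In engineering terms, that requirement translates into a “human in the loop” design: the system may recommend, but must refuse to produce a final outcome until a named human has decided. The sketch below is a hypothetical workflow illustrating that pattern, not a compliance template.

```python
# Minimal sketch (hypothetical workflow) of keeping a human in the loop:
# the algorithm only *recommends*; a named human records the final
# decision, so no outcome rests solely on automated processing.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewDecision:
    ai_recommendation: str            # e.g. "meets_target" / "below_target"
    human_decision: Optional[str] = None
    reviewer: Optional[str] = None

    def finalise(self, decision: str, reviewer: str) -> None:
        self.human_decision = decision
        self.reviewer = reviewer

    @property
    def outcome(self) -> str:
        # Refuse to expose an outcome until a named human has decided.
        if self.human_decision is None:
            raise ValueError("No outcome: human review still pending")
        return self.human_decision

case = ReviewDecision(ai_recommendation="below_target")
case.finalise("meets_target", reviewer="line_manager")
print(case.outcome)  # meets_target
```

The design point is that the human review is structurally unavoidable, and the reviewer is recorded, rather than a rubber stamp bolted on after the algorithm has effectively decided.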

Here to stay

It is clear that AI in the workplace is here to stay. The implications for both employer and employee are vast. When used correctly, it can potentially increase productivity and efficiency, save money, control bias, and create training and development opportunities for staff. However, when AI is used for the wrong reasons, or the algorithms are not trained well enough, employers can open themselves up to risks such as discrimination claims, violation of employees’ privacy, and breaches of regulations such as the GDPR.

With the growing use of AI in day-to-day tasks, it is expected that many jobs once carried out by people will instead be done by AI. As some roles are replaced by AI, others will change significantly, and new roles may appear which we don’t currently see the need for. It will be interesting to see in the coming years just what level of impact this will have on the job market.

The Authors

Sarah Leslie is an associate, and Morgan McSherry a trainee solicitor, in the Employment team at Shepherd and Wedderburn LLP
