5 best practices for implementing AI in the workplace

As more and more companies begin to utilize artificial intelligence (AI) in the workplace, it becomes increasingly important for employers to understand both the risks and rewards that accompany this new technology. While the use of AI can be an efficient and cost-effective means for employers to handle tasks such as talent acquisition, compensation analysis, and the completion of administrative duties, it is not without its challenges.

As discussed below, the use of AI may also bring with it the potential for implicit bias and disparate impact on protected categories, particularly in the context of gender and age. In addition, if AI is not properly introduced to the workforce, it may foster concerns among employees that the company no longer values their work or create anxiety about job security. This article sets forth a high-level overview of some of the more prevalent challenges employers may encounter when deploying AI in the workplace, along with guidance on the proactive steps employers should consider when implementing or using the technology.

The growing use of AI

AI is often used in the workplace to assist employers with recruitment, relying on algorithms to help make hiring decisions. According to a 2017 survey by the talent software firm CareerBuilder, approximately 55 percent of U.S. human resource managers said that AI will become a regular part of their work within the next five years.

Similarly, as reported by the Society for Human Resource Management following a 2018 survey of more than 1,100 in-house counsel, human resource professionals and C-suite executives, 49 percent of respondents said that they already use AI and advanced data analytics for recruiting and hiring. While the use of AI may assist these companies, the technology may not always eliminate bias in the recruitment process.

The potential for implicit bias and disparate impact

Title VII of the Civil Rights Act of 1964, as amended, prohibits employers from discriminating against an individual on the basis of race, color, sex, national origin or religion with respect to all aspects of employment. According to the Bureau of Labor Statistics' Monthly Labor Review report, Occupational Employment Projections to 2022, employment in computer science and engineering jobs is projected to grow at more than double the national average rate. Despite the surge in this field, women and minorities continue to be underrepresented.

In 2016, the U.S. Equal Employment Opportunity Commission stated that diversity in the high-tech sector is “a timely and relevant topic for the Commission to investigate and address.” Since then, some companies have evaluated using AI in the recruitment process to increase diversity in their workforce. As discussed below, however, the technology may ultimately have the opposite effect.

As reported by Reuters, in 2017 the online tech giant Amazon scrapped an experimental hiring tool it had been developing for several years. Amazon had hoped to use the tool to review job applicants’ resumes and streamline the search for top talent. The company discovered, however, that the program showed a bias against women when it came to recruitment for software developer jobs and other technical positions.

According to Reuters, Amazon trained its computer programs to vet applicants by studying patterns in resumes submitted to the company over a 10-year period. Because the tech industry remains a male-dominated field, the majority of resumes submitted during that time came from men. As a result, the AI system determined that male candidates were preferable, penalizing resumes that included the word “women’s” and downgrading graduates of certain all-women’s colleges. Although Amazon edited the programs to prevent these outcomes, the company ultimately decided to discontinue the tool, noting that its recruiters never used the software to evaluate candidates.

Similarly, employers considering the implementation of AI in the workplace should be cognizant of the potential for age discrimination claims. The Age Discrimination in Employment Act (ADEA) prohibits age-based discrimination against applicants or employees age 40 or older. The use of AI in the workplace to streamline certain activities could result in a disparate impact on an older workforce and potentially expose a company to discrimination claims. Specifically, if older workers struggle to adapt to new technology, or if implicit bias creates the perception that younger employees are better suited to handle the changes than their older counterparts, employees age 40 or older may face adverse employment actions.

Bias could also arise if a company undergoes a reduction in force as a result of introducing AI into the workplace: older workers may be laid off at a disproportionate rate relative to their younger counterparts if the AI is not programmed to account for age-related considerations.
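To make the disparate impact concept more concrete, the sketch below applies the four-fifths (80 percent) guideline from the EEOC's Uniform Guidelines on Employee Selection Procedures to hypothetical selection and retention figures. It is a minimal illustration only, assuming made-up counts, group labels and helper function names; it is not legal advice and is no substitute for a formal validation study or counsel's review of an AI tool's outcomes.

```python
# Minimal illustrative sketch (not legal advice): screening the outcomes of an
# AI-assisted hiring or reduction-in-force process with the four-fifths (80%)
# guideline. All counts, group labels and function names are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Share of a group that was selected (or retained)."""
    return selected / total


def adverse_impact_ratios(outcomes: dict) -> dict:
    """Compare each group's selection rate to the most-favored group's rate.

    `outcomes` maps a group label to a (selected, total) tuple. Ratios below
    0.8 are generally regarded as evidence of adverse impact under the
    Uniform Guidelines on Employee Selection Procedures.
    """
    rates = {group: selection_rate(sel, total) for group, (sel, total) in outcomes.items()}
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical pass counts from an AI resume-screening tool, by gender.
    hiring = {"men": (48, 200), "women": (30, 200)}
    # Hypothetical retention after an AI-informed reduction in force, by age band.
    rif = {"under 40": (90, 100), "40 and over": (70, 100)}

    for label, outcomes in (("hiring", hiring), ("reduction in force", rif)):
        for group, ratio in adverse_impact_ratios(outcomes).items():
            flag = "review" if ratio < 0.8 else "ok"
            print(f"{label} | {group}: impact ratio {ratio:.2f} ({flag})")
```

An impact ratio below 0.8 does not by itself establish discrimination, but it flags outcomes that warrant closer review of the tool, its training data and the selection criteria before decisions are finalized.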

So how can employers reap the benefits of AI without also exposing themselves to the potential for liability? Below are some best practices for employers to keep in mind when using or implementing AI in the workplace.

Best practices

  1. Engage third parties to assist in selecting AI software utilized for recruiting to ensure that the programs selected mitigate the effect of unconscious bias;
  2. Devise an action plan on how best to present the topic to current employees without creating an alarmist environment;
  3. Keep in mind the implications of the federal Worker Adjustment and Retraining Notification (WARN) Act and similar state laws that require an employer to provide advance notice of job loss;
  4. Be aware of the protection afforded to workers under the National Labor Relations Act for engaging in concerted activities in response to changes in the workplace;
  5. Be cognizant of invasion of privacy claims stemming from the over-collection of data through AI.

The use of AI in the workforce continues to grow, and a recent McKinsey Global Institute study estimated that as much as one-third of the United States workforce could be displaced by automation by 2030. The shift to automation will not happen overnight, however, affording time for policy changes and increased regulation in areas such as layoffs, severance pay and training. Against this backdrop, employers should consider the implications of AI and the best practices recommended above when implementing new and innovative solutions in the workplace.


O’Kelly E. McWilliams III, a member at Mintz, advises U.S. and international companies on a wide array of business and employment law issues. He focuses his practice on employment agreements, disputes and compensation matters, regularly provides guidance on managing employee relationships, and has helped many companies investigate and respond to allegations of employer misconduct.

Jennifer R. Budoff, an associate at Mintz, provides clients with representation and counsel on a broad range of employment matters. She has significant experience advising and defending employers in discrimination, retaliation, harassment, and wrongful termination matters, including the representation of employers in actions before administrative agencies and state and federal courts.