AI compliance in HR: A Q&A with Sultan Saidov
"Allowing AI to inform the talent lifecycle also increases and improves the level of diversity within applicant pools," says Sultan Saidov.
AI has been a hot topic lately. Employers are unsure how to approach this new technology and the compliance obligations that come with it. Some studies have shown that AI can lead to bias in hiring, so how do employers avoid that?
Sultan Saidov, co-founder and president of Beamery, an AI-powered talent platform, believes that allowing AI to inform the talent lifecycle can increase and improve the level of diversity within applicant pools.
What does the NYC AEDT law mean for employers and HR teams? For internal and external candidates?
The NYC Automated Employment Decision Tools (AEDT) law requires companies in New York City that use recruiting tools which may automate employment decisions to disclose this fact to candidates and to submit those tools to an annual bias audit conducted by an independent third party. These audits aim to reveal whether recruiting technology is unbiased, safe, and fair. Consequently, they’re a forcing function to ensure employers are hiring talent based on a candidate’s genuine merit and potential, as opposed to making decisions based on pedigree or connections.
This legislation underscores the importance of equalizing the recruitment process and employment opportunities across industries. It also introduces a level of accountability and transparency to an organization’s hiring practices and gives internal and external job candidates an extra layer of protection throughout the hiring process.
Of course, it’s worth noting that legislation on equal employment already exists. The new regulations are not designed to block organizations from using AI, but rather to provide the guidelines necessary to encourage the use of AI and technologies that reduce bias and increase transparency. AI tools can, in practice, dramatically help businesses adhere to these existing anti-discrimination laws and make fairer, more objective talent-related decisions. However, leveraging AI to increase fairness and reduce bias requires the AI to be built and used responsibly, and to be audited to verify those bias-reducing outcomes.
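For readers curious about what a bias audit actually measures, here is a minimal sketch of the kind of selection-rate impact ratio such audits typically report. It assumes a small pandas DataFrame of hypothetical screening outcomes with made-up column names; the official NYC rules define the required calculations and categories in far more detail, so treat this only as an illustration of the idea.

```python
# Illustrative sketch only (not the official NYC DCWP audit methodology):
# computing selection-rate impact ratios, the kind of metric a bias audit reports.
# Column names and data below are hypothetical.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's selection rate."""
    rates = df.groupby(group_col)[selected_col].mean()  # selection rate per group
    return rates / rates.max()                          # impact ratio per group

# Hypothetical screening outcomes produced by an automated tool
outcomes = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F", "M", "F"],
    "selected": [1,    0,   1,   1,   0,   1,   1,   0],
})

print(impact_ratios(outcomes, "gender", "selected"))
# Ratios well below 1.0 (for example, under the 0.8 "four-fifths" benchmark)
# flag groups that an independent auditor would examine more closely.
```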
How can businesses get ahead of the introduction of new regulations and remain compliant?
As AI capabilities continue to develop and evolve, it’s important that regulatory measures keep pace to maintain the integrity of new technologies. With this in mind, it’s essential that business leaders proactively evaluate and audit their own organization’s AI practices, and neither assume that introducing new technologies will create more risks nor assume that vendor self-assessments are sufficient. A key step, for example, is to consider how to audit for bias today, even if no new technologies are introduced, and to partner with a third party to audit both existing and new technologies and processes for potential bias, even where that technology is not yet regulated as a tool that explicitly automates employment decisions.
How are emerging laws around the use of AI in HR and talent management impacting the workforce at scale? Has the emergence of AI like ChatGPT created a greater need for regulation?
The introduction of AI legislation, such as the NYC AEDT law, helps protect individual employees and job candidates from bias and discrimination that may stem from AI. These laws are establishing safeguards that ensure AI technology doesn’t overlook ethics, fairness, and the need to create equal opportunities throughout the workforce.
On a global scale, tech leaders and decision-makers are increasingly collaborating to assess the significance of AI’s effect on the workforce and to develop regulations that can evolve alongside the technology. We have identified two key themes: the need for responsible and accountable AI auditing, and the need for transparency in conveying AI standards to consumers.
The UK’s AI guidance, for example, outlines five principles for the safe use of AI: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Similarly, this past May the U.S. Equal Employment Opportunity Commission (EEOC) announced new guidance on employer use of AI in HR, and how the misuse of the technology could violate Title VII of the Civil Rights Act of 1964.
How will AI help talent functions? How can it help employers, talent management and HR leaders close the skills gap and improve DEI efforts within their organization?
When relevant AI technologies are deployed responsibly, businesses benefit not only from efficiency and productivity gains, but also from more inclusive, bias-minimizing outcomes. From an applicant perspective, AI-powered tools can give prospective employees a tailored and more inclusive recruitment experience, helping them find roles based on their potential rather than their credentials, and reducing traditional factors that discourage people from under-represented backgrounds from applying, such as requirements for years of experience or educational degrees that can erode candidates’ confidence to apply or keep them from being considered.
This kind of bias can be reduced by encouraging skills-based role requirements rather than traditional job descriptions, and by suggesting roles to prospective candidates based on their interests, skills and potential. For existing employees, AI can uncover an individual’s unique skills that address a current need within the business, opening new opportunities for career mobility within their organization.
More broadly, AI can serve as a valuable resource for talent management and HR teams. When built around objective talent data, it can help identify and close skills gaps within a company by directing HR teams toward candidates with the skill sets needed at any given time.
Allowing AI to inform the talent lifecycle also increases and improves the level of diversity within applicant pools. This keeps candidates in serious consideration who, before AI, may have been removed from the process too soon.
If humans work effectively with AI to improve the recruitment process and remove inherent biases, business leaders will eventually reach a point where they hire employees based on their skills and potential rather than their certificates and connections. Organizations will then begin to see a more diverse talent population across their workforces.