AI, machine learning bring risk of discrimination and bias
Designing algorithms for machine learning systems carries a real risk of discrimination, including the potential for exclusionary health insurance systems.
While well-designed machine learning systems can help minimize human bias in decision-making, such systems can also reinforce systemic bias and discrimination, according to the World Economic Forum white paper, “How to Prevent Discriminatory Outcomes in Machine Learning.”
The WEF report cites real-life examples and “what-if” scenarios in which algorithm design poses a risk of discrimination, including the potential for exclusionary health insurance systems, Erica Kochi, co-leader of UNICEF’s Innovation Unit, writes in Quartz.
At least two private multinational insurance companies operating in Mexico are using machine learning to maximize the efficiency and profitability of their operations, according to the report.
“We can easily imagine a scenario in which these multinational insurance companies, in Mexico and elsewhere, can use machine learning to mine a large variety of incidentally collected data — from shopping history, public records, demographic data, etc. — to recognize patterns associated with high-risk customers and charge those customers exorbitant and exclusionary costs for health insurance,” Kochi writes in Quartz. “Thus, a huge segment of the population—the poorest, sickest people—would be unable to afford insurance and deprived of access to health services.”
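The mechanics of that scenario are straightforward to sketch. The minimal example below (hypothetical data and feature names, not drawn from the WEF report) shows how a risk model trained only on “incidental” signals such as shopping history can learn a proxy for a protected attribute and score one group as far riskier:

```python
# Illustrative sketch only -- hypothetical data and feature names, not from
# the WEF report. Shows how a model trained on "incidental" data can learn
# a proxy for a protected attribute and price one group out of coverage.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g., living in a low-income area),
# never given to the model directly.
low_income = rng.binomial(1, 0.3, n)

# "Incidental" features: shopping history and public records correlate
# with income, so they act as proxies for the protected attribute.
discount_store_purchases = rng.poisson(lam=2 + 6 * low_income)
address_changes = rng.poisson(lam=0.5 + 1.5 * low_income)

# Historical claims label; poorer customers file more claims here partly
# because of deferred care, which the model cannot distinguish from "risk."
filed_claim = rng.binomial(1, 0.1 + 0.25 * low_income)

X = np.column_stack([discount_store_purchases, address_changes])
model = LogisticRegression().fit(X, filed_claim)

# Predicted "risk" drives pricing; compare mean scores by group.
risk = model.predict_proba(X)[:, 1]
print(f"mean predicted risk, low-income group: {risk[low_income == 1].mean():.3f}")
print(f"mean predicted risk, other customers:  {risk[low_income == 0].mean():.3f}")
```

Nothing in the training data names the protected group, yet the proxy features carry it through to the predicted risk, and from there to the price. That is the pattern the report warns against.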
The WEF white paper offers a number of recommendations for companies to lessen the risk of discrimination when designing algorithms for machine learning systems. At the top of the checklist is developing and enhancing industry-specific standards for fairness and non-discrimination in machine learning, as the Partnership on AI, the World Wide Web Foundation, the AI Now Institute, IEEE, FATML, and others have begun to do.
“The potential for machine learning systems to amplify discrimination is not going away on its own,” Kochi writes. “Companies need to actively teach their technology to not discriminate.”
Companies should also let their workers know if they are using machine learning systems for recruiting and other HR purposes, according to HRDive.
“Employers also must keep in mind that where there’s discrimination, there’s the risk of liability, whether or not the violator is human,” HRDive writes. “For HR, this means asking the right questions before adopting new tech.”
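One of those “right questions” can be answered with a small amount of arithmetic. The sketch below applies the EEOC’s four-fifths rule, a conventional adverse-impact test, to hypothetical selection rates from an automated resume screener; the numbers are illustrative only:

```python
# Minimal adverse-impact check (the EEOC "four-fifths rule") for an
# automated screening tool; the selection data here is hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical outcomes from an ML resume screener, split by group.
rate_group_a = selection_rate(selected=90, applicants=200)  # 0.45
rate_group_b = selection_rate(selected=50, applicants=200)  # 0.25

impact_ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)
print(f"impact ratio: {impact_ratio:.2f}")

# A ratio below 0.80 is the conventional red flag for adverse impact
# and a cue to investigate the tool before relying on it.
if impact_ratio < 0.80:
    print("Potential adverse impact -- review the screening model.")
```

A check this simple will not settle the liability question, but running it before deployment is exactly the kind of diligence HRDive describes.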