By Josh Millet
Hiring has always been an inexact science. Traditionally a laborious, inefficient, and only occasionally effective enterprise, it is a quintessentially human process. That isn’t exactly a compliment. As it turns out, humans make a lot of mistakes, particularly when judging other humans.
Not surprisingly, the technology sector has spent quite a bit of energy and capital attempting to correct these mistakes—to hack hiring, in other words. Although the industry has made great strides in areas such as document management and data processing, tech tools for hiring professionals can fall short in critical ways. Artificial intelligence has become an increasingly common engine under the hoods of these machines, and because AI tools require data sets and inputs generated by humans, they can still exhibit some telltale human character flaws. A key example: bias.
AI and algorithm-based tactics are nothing new in the hiring profession, and neither are accusations of racism, sexism, and other forms of bias leveled against them. But recently, and for the first time, some legislative muscle has been thrown behind the issue: In New York City, the Department of Consumer and Worker Protection adopted what has most commonly been referred to as the AEDT rule, implementing the city’s Local Law 144. According to the DCWP, the law “prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.”
Sound fair and equitable? On the surface, yes. But “automated employment decision tool,” as defined in New York City’s new rule, can be interpreted quite broadly, extending regulation well beyond AI, and even beyond technology as most people understand it, to longstanding HR tools considered fundamental to the hiring process. How will this new rule affect those hiring tools? Has the burden of proof on business grown too great? And will the rest of the nation get behind this legislation, expanding these questions of ethics and practicality from a local issue to a national debate?
Establishing Bias in Hiring
Humans are inherently biased. It’s in our nature. We are designed to categorize and classify—to prejudge—for our own safety. But because recognizing the threat of poisonous berries and saber-toothed tigers has far less value than it once did, and no bearing at all on our ability to fill an opening in accounting, this feature has evolved into something of a design bug.
Yet it clearly exists. A recent audit by research economists found systemic discrimination in hiring across large U.S. companies. And if anything, those findings represent an improvement over past decades, when hiring biases were more pronounced and employment disparities even wider.
One promise of artificial intelligence in hiring is to mitigate that human bias. But when AI-driven hiring technology is trained on bias-influenced historical data sets, the machine reproduces those same tendencies in its output. And because a reported 65 percent of employers use AI in their hiring processes, the tech, through no fault of its own, winds up perpetuating generations of hiring bias.
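To make that mechanism concrete, here is a minimal sketch in Python with entirely invented data: two groups with identical underlying ability, historical hiring decisions that penalized one group, and a model trained only on ostensibly neutral features. The variable names and numbers are hypothetical; the point is that a feature correlated with group membership lets the model reconstruct the historical penalty even though the protected attribute is never shown to it.

```python
# Minimal illustration (hypothetical data): a model trained on biased
# historical hiring decisions learns to reproduce the bias, even though
# the protected attribute is excluded from its inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)     # true job-relevant ability, identical across groups

# A proxy feature correlated with group membership (e.g., alma mater tier).
proxy = skill + 1.5 * group + rng.normal(0, 0.5, n)

# Historical decisions: skill mattered, but group 0 was systematically penalized.
hired = (skill - 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train only on "neutral" features; the protected attribute is never used.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Recommended hire rates still differ sharply by group, because the proxy
# feature lets the model recover the historical penalty.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: recommended hire rate = {pred[group == g].mean():.2%}")
```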
The New York AEDT Rule: Pros and Cons
The spirit behind the New York City AEDT rule is unquestionably well-intentioned. Research consistently links diverse workforces to stronger company performance and financial outcomes. More importantly, everyone deserves a fair shake.
Yet there are legitimate concerns about complications stemming from the AEDT rule, including its enforcement and organizations’ ability to execute on it. Its broad definition of automated employment decision tools means that audits will be required of “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues … a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions.”
What tools might such a rule encompass? In theory, it could include any evidence-based approach or any tool that has been developed on the basis of data, including traditional cognitive and personality assessments.
Such a broad interpretation of the law could have perverse consequences. An organization using a tool whose scoring rubric is based on intuition and has never been checked against actual data would have no compliance requirements. The tool could be massively biased, but the absence of a professional development and validation process means the law does not apply. Meanwhile, depending on how broadly the law is interpreted, some large organizations may find they are using hundreds of different “AEDTs,” each of which requires an annual bias audit. A shift away from professionally developed, data-informed prehire assessments is surely not what the authors of the law intended, but it may yet be its result.
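What does such an audit actually involve? DCWP’s implementing rules center on two statistics: the selection rate for each demographic category and the impact ratio of each category relative to the highest-rated one. A minimal sketch of that arithmetic, using hypothetical category names and applicant counts (the 0.8 flag below reflects the EEOC’s familiar four-fifths rule of thumb, which the AEDT rule itself does not impose as a pass/fail threshold):

```python
# Minimal sketch of the core bias-audit arithmetic: selection rates per
# category and impact ratios relative to the highest-rate category.
# Category names and counts are hypothetical.
applicants = {"category_a": 400, "category_b": 300, "category_c": 300}
selected   = {"category_a": 120, "category_b": 60,  "category_c": 75}

selection_rate = {c: selected[c] / applicants[c] for c in applicants}
best_rate = max(selection_rate.values())

for category, rate in sorted(selection_rate.items()):
    impact_ratio = rate / best_rate
    flag = "  <-- below 0.8 (four-fifths rule of thumb)" if impact_ratio < 0.8 else ""
    print(f"{category}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}{flag}")
```

Multiply that calculation by every category the rule covers, for every tool an organization uses, every year, and the compliance burden comes into focus.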
The fallout of AEDT legislation, then, could force new costs and burdens on companies, further slowing hiring decisions. New positions, and perhaps entire departments, may be needed to ensure compliance. Even advocates fiercely dedicated to removing hiring bias must acknowledge the unanswered questions the legislation raises. Worse still, it may cause organizations to shy away from any objective or evidence-based approach and revert to some version of “trusting their gut.” The result would inevitably be more bias and poorer outcomes for both diversity and business performance.
What the Future Holds for AI Hiring Bias Regulation
Whether or not you believe in the need for hiring bias regulation, the movement to ensure the equitable deployment of AI as part of best hiring practices is underway and only gaining momentum. The New York rule sets a strong precedent—one that could be adopted at the state and federal levels.
To ensure consistent compliance without stretching companies’ resources to the breaking point, more research and, eventually, a codified approach are necessary. Rooting out the biases in human hiring inputs and removing them from (or correcting) affected data sets is the quickest and most reliable way to reduce bias and promote fairness in the hiring system. And if our goal is to reduce bias in hiring, upending the status quo should remain a top priority: a system that leaves too much room for subjectivity and privileges markers like experience and educational pedigree, which we know yield poor diversity outcomes and subpar business results, is not serving organizations well either.
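What might “correcting” an affected data set look like in practice? One standard technique, offered here purely as an illustration rather than as any particular vendor’s method, is reweighing: assigning each historical record a weight so that group membership and the hiring outcome become statistically independent before a model is trained on them. A minimal sketch with hypothetical counts:

```python
# Minimal sketch of "reweighing" (Kamiran & Calders), one standard way to
# correct a biased data set: weight each record so that group and outcome
# become statistically independent. Data are hypothetical.
from collections import Counter

# (group, hired) pairs from a biased historical data set:
# group "a" was hired 60% of the time, group "b" only 30%.
records = [("a", 1)] * 60 + [("a", 0)] * 40 + [("b", 1)] * 30 + [("b", 0)] * 70

n = len(records)
group_counts   = Counter(g for g, _ in records)
outcome_counts = Counter(y for _, y in records)
joint_counts   = Counter(records)

# w(g, y) = P(g) * P(y) / P(g, y): up-weights under-represented combinations
# (here, hired members of group "b") and down-weights over-represented ones.
weights = {
    (g, y): (group_counts[g] / n) * (outcome_counts[y] / n) / (joint_counts[(g, y)] / n)
    for (g, y) in joint_counts
}

for (g, y), w in sorted(weights.items()):
    print(f"group={g}, hired={y}: weight {w:.2f}")
```

The resulting weights can be passed as sample weights when a model is fit, so that the corrected distribution, rather than the biased history, drives its recommendations.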
Josh Millet is the founder and CEO of Criteria, which provides hiring and talent screening tools for a variety of international enterprise clients.