New York City Proposes Bill to Regulate Algorithms Used in Hiring

By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans.

This morning another New York story caught my eye, after the post I wrote yesterday on the odious Andrew Cuomo’s proposals to increase sin taxes – a notoriously inefficient and regressive form of taxation – to make up some of the cavernous COVID-19 budget shortfalls. Today, my focus is on reporting featured in Wired.com, New York City Proposes Regulating Algorithms Used in Hiring:

The New York City Council is the source of this proposal:

This bill would regulate the use of automated employment decision tools, which, for the purposes of this bill, encompass certain systems that use algorithmic methodologies to filter candidates for hire or to make decisions regarding any other term, condition or privilege of employment. This bill would prohibit the sale of such tools if they were not the subject of an audit for bias in the past year prior to sale, were not sold with a yearly bias audit service at no additional cost, and were not accompanied by a notice that the tool is subject to the provisions of this bill. This bill would also require any person who uses automated employment assessment tools for hiring and other employment purposes to disclose to candidates, within 30 days, when such tools were used to assess their candidacy for employment, and the job qualifications or characteristics for which the tool was used to screen. Violations of the provisions of the bill would incur a penalty.

The Wired piece usefully reminds us:

In 1964, the Civil Rights Act barred the humans who made hiring decisions from discriminating on the basis of sex or race. Now, software often contributes to those hiring decisions, helping managers screen résumés or interpret video interviews.

That worries some tech experts and civil rights groups, who cite evidence that algorithms can replicate or magnify biases shown by people. In 2018, Reuters reported that Amazon scrapped a tool that filtered résumés based on past hiring patterns because it discriminated against women.

Let me restate that: just because one outsources part of decision-making to a supposedly ‘neutral’ technological process does not mean one has successfully eliminated the embedded biases that are the raison d’être for anti-discrimination frameworks in the first instance.

New York State and New York City have their own anti-discrimination frameworks, with requirements that exceed the federal baseline (as, for that matter, do other states, including California). And it is states and cities that have taken the lead in limiting the use of algorithms as well, such as facial recognition software. Now, some may attribute the lack of federal attention to these issues to the generally anti-regulation bias of the departing Trump administration, and look to the incoming Biden administration to devote attention to them. To them, I advise: don’t hold your breath. Biden, I predict, will be every bit as much the servant of Big Tech as his recent predecessors have been – if we allow him to continue on that course.

Wired certainly falls into that trap:

The [New York city proposal] is a part of a recent movement at all levels of government to place legal constraints on algorithms and software that shape life-changing decisions—one that may shift into new gear when Democrats take control of the White House and both houses of Congress.

More usefully, Wired provides a précis of recent state attempts to control the use of facial recognition software:

More than a dozen US cities have banned government use of face recognition, and New York state recently passed a two-year moratorium on the technology’s use in schools. Some federal lawmakers have proposed legislation to regulate face algorithms and automated decision tools used by corporations, including for hiring. In December, 10 senators asked the Equal Employment Opportunity Commission to police bias in AI hiring tools, saying they feared the technology could deepen racial disparities in employment and hurt economic recovery from COVID-19 in marginalized communities. Also last year, a new law took effect in Illinois requiring consent before using video analysis on job candidates; a similar Maryland law restricts use of face analysis technology in hiring.

Implementation and Enforcement

The Wired piece notes that it’s often par for the course for a city or state to pass a measure – as San Francisco did with facial recognition – and then fail to amend its existing statutory framework to incorporate the new rule. Still, the New York measure does appear promising:

The New York City proposal launched by Democratic council member Laurie Cumbo would require companies using what are termed automated employment-decision tools to help screen candidates or decide terms such as compensation to disclose use of the technology. Vendors of such software would be required to conduct a “bias audit” of their products each year and make the results available to customers.

The Wired piece flags a key issue: the NYC measure might serve as a form of virtue signalling, allowing a potentially discriminatory algorithm to be used once it has cleared the fairness audit bar:

Some civil rights groups and AI experts also oppose the bill—for different reasons. Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, organized a letter from 12 groups including the NAACP and New York University’s AI Now Institute objecting to the proposed law. Cahn wants to regulate hiring tech, but he says the New York proposal could allow software that perpetuates discrimination to get rubber-stamped as having passed a fairness audit.

Others, including Julia Stoyanovich, director of the Center for Responsible AI at New York University, echo similar concerns:

Like Cahn, Stoyanovich is concerned that the bill’s auditing requirement is not well defined. She still thinks it’s worth passing, in part because when she organized public meetings on hiring technology at Queens Public Library, many citizens were surprised to learn that automated tools were widely used. “The reason I’m in favor is that it will compel disclosure to people that they were evaluated in part by a machine as well as a human,” Stoyanovich says. “That will help get members of the public into the conversation.”
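What such an audit would actually have to measure is, as Stoyanovich notes, left undefined by the bill. One long-established yardstick – and only one of many possible ones – is the EEOC’s “four-fifths rule,” which flags possible adverse impact when one group’s selection rate falls below 80 percent of the highest group’s rate. The Python sketch below is purely illustrative of that kind of check; the function names and sample numbers are my own assumptions, not anything specified in the proposed law.

```python
# Illustrative sketch only: one metric a "bias audit" might report, the
# EEOC four-fifths (80%) rule. The NYC bill does not define the audit;
# these names and numbers are hypothetical.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) pairs from a screening tool."""
    totals, selected = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Example: the tool advanced 50% of group A but only 30% of group B.
rates = selection_rates([("A", True)] * 50 + [("A", False)] * 50 +
                        [("B", True)] * 30 + [("B", False)] * 70)
print(four_fifths_check(rates))  # {'A': True, 'B': False} -> possible adverse impact for B
```

A check like this is trivial to compute; the hard questions the bill leaves open are which groups, outcomes, and thresholds an auditor must examine, and who verifies the vendor’s numbers.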

And in addition to the enforcement issues, two other regular bugbears that at minimum distort and often outright neuter effective regulation make a familiar appearance: the imbalance between the resources the regulated industry can bring to bear and those the public entity – in this case, New York City – can muster, and the suggestion that an industry can be trusted to self-regulate. Unlike Holden, I’m not so sure that invoking the transparency mantra will be sufficient – no matter how necessary:

Robert Holden, chair of the City Council’s Committee on Technology, has his own concerns about the cash-strapped city government’s capacity to define how to scrutinize hiring software. He’s also been hearing from envoys from companies whose software would fall under the proposed rules, which have prompted more industry engagement than is usual for City Council business. Some have assured him the industry can be trusted to self-regulate. Holden says what he’s learned so far makes clear that more transparency is needed. “It’s almost like the Wild West out there now,” Holden says. “We really have to provide some transparency.”

Holden says the bill likely faces some negotiations and rewrites, as well as possible opposition from the mayor’s office, before it could be scheduled for a final vote by the council. If passed, it would take effect January 2022.

****

We’ll Allow You to Opt Out, But on Our Sweet Time

And while I’m thinking about facial recognition, let me mention another tangential point, not directly relevant but one that also distorts discussions of regulation: the idea that once a new technology is adopted, one can simply opt out. I read with great amusement a recent piece in Conde Nast Traveler, How to Opt Out of Facial Recognition at the Airport (and I may revisit this issue at greater length later this week). For now, let me tell you why I was so amused. These opt-out provisions in reality prove to be a sick joke. When I passed through LAX a week before Christmas, I was obviously physically unable at that time to go through the full body scanner, so I politely requested to opt out. I waited. And waited. And waited for Homeland Security to provide a hand pat-down, even though the airport was nearly empty late on that Friday morning. Homeland Security was sending a not so subtle message: we may allow you to opt out. But you do so on our sweet time.
