Candice’s latest rejection email for a housing rental in Washington DC looked much like all her others – a generic response with no proper explanation of the decision.

“It will just say, ‘unfortunately, right now we didn’t accept your application,’” said Candice, 36, who has lived in the US capital for most of her life and is now looking for a larger home for herself and her three children.

“I felt it was computer-generated. And of course, computers – they’re faulty,” she told Context, asking to be identified by only her first name.

Candice, who said she had received about 10 such rejections in recent months, is currently unemployed but benefits from welfare assistance that would cover her rent in full.

She and other would-be tenants think their applications for rental housing are falling foul not of landlords, but of automated screening programmes that scan credit scores, eviction or criminal histories and even social media activity to determine if an applicant is a rental risk.

The widely used programmes are facing increased scrutiny from lawmakers in Washington DC and beyond amid broader concern about the potential of algorithms to lock in bias and perpetuate inequality.

Susie McClannahan, who manages the fair housing rights programme at the Equal Rights Center civil rights group and has worked with Candice, calls it the “black box of algorithmic discrimination.”

Rental applicants are “being denied at properties for reasons they don’t know, and that the provider might not even know,” McClannahan said, adding that some third-party screening systems used data they were banned from using, such as old criminal convictions.

“For renters with housing vouchers and low incomes, it’s making it harder for them to find housing in a city that’s already in the midst of a housing crisis,” she said.

A sign is seen outside of a home in Washington in July. Credit: Sarah Silbiger/Reuters

City lawmakers are taking note. In September, they debated legislation to ban “discrimination by algorithms,” including in housing – one of several efforts nationwide.

And last month, the White House released a “Blueprint for an AI Bill of Rights,” warning that “discrimination by algorithms” is unacceptable.

Regulatory action on the issue is likely in the coming year, said Ben Winters, counsel at the Electronic Privacy Information Center watchdog group.

“We’re at a transition point,” he said.

A booming business

The tenant-screening industry, worth around $1 billion, is drawing interest from tech startups and venture capital, according to the Tech Equity Collaborative, a watchdog group.

There are hundreds of tenant-screening tools available in the United States, supplanting a process traditionally undertaken by landlords, said Cynthia Khoo, a senior associate with Georgetown University’s Center on Privacy & Technology.

While that process was also open to discrimination, she said today’s automated tools operate far more efficiently, at greater scale and greater speed, and with access to far more data.

“These are new technological tools being used to carry out the same age-old discrimination we’re familiar with,” she said, adding that they were even less transparent.

As regulators in California and Colorado, and at the Federal Trade Commission, work on the issue, many are watching the capital’s Stop Discrimination by Algorithms Act as a potential blueprint.

“This is the most robust legislation in the US,” Khoo said of the bill.

The current draft states that algorithms cannot discriminate against any groups already protected under local law, said Winters, while applicants would have to be alerted to the use of these systems and given explanations if rejected.

Most firms using these tools would have to audit their algorithms to make sure they knew what the programmes were doing, he said, and applicants would be able to sue over potential infractions.

In response to a request for comment, the Consumer Data Industry Association, a trade group, referred to testimony it gave in opposition to the Stop Discrimination by Algorithms Act, as well as a letter sent to the DC Council in October by nine financial services groups.

The letter noted that companies were already prohibited from discrimination in credit or other financial services, and that the DC bill would increase the potential for fraud and hit credit access.

“Algorithms make credit decisions more accurate, fair, faster and more affordable by judging applicants on their credit worthiness,” the groups said.

“Algorithms also eliminate some of the risk of the biases that can be found in human interactions and can help identify products and services designed to benefit communities, including historically underserved populations, helping close the racial wealth gap.”

A person types on a laptop computer in Manhattan in September 2020. Credit: Andrew Kelly/Reuters.

‘Tainted’ data

Yet some question whether algorithms drawing on public data can be objective when the data itself is tainted, said Catherine D’Ignazio, an associate professor of urban science and planning at the Massachusetts Institute of Technology.

Data that seems objective, such as credit scores, is often the result of decades of racism and marginalisation – baking bias into the math, she said.

The idea of algorithmic fairness suggests that “everyone starts equally and is treated equally. But history hasn’t treated people equally,” she said.

Still, recognising this disconnect offers an opportunity for change for the better, D’Ignazio said.

“Tainted” historical data can also skew home valuations, said John Liss, founder of True Footage, whose company launched last year with an eye to addressing appraisal gaps between white and minority homeowners by using a combination of automation and human oversight.

For years, home appraisals often did not seem tied to data, Liss said – to the particular detriment of Black and Hispanic homeowners.

While bringing automation into the appraisal process helps to address this in part, he said, “automated valuation models are extremely dangerous because they’re tainted” by historical data.

For True Footage, he said, the key is to have human appraisers, increasingly drawn from historically marginalised communities, involved in interpreting the data.

“There’s a place for technology,” Liss said. “(But) having a human at the wheel to interpret the data is much more accurate.”

This article first appeared on Context, powered by the Thomson Reuters Foundation.