Social media company Facebook has started assigning its users a reputation score that predicts their trustworthiness on a scale from zero to one, The Washington Post reported on Tuesday. The company has developed this system over the past year to help it identify malicious actors.

The company developed this assessment mechanism as part of its effort against fake news, Tessa Lyons, the product manager in charge of fighting misinformation, told the newspaper in an interview. She, however, did not explain how the scores are calculated.

A user’s trustworthiness score is not meant to be an absolute indicator of a person’s credibility, and users are not assigned a single unified reputation score, she said. The score is one measurement among thousands of new behavioural clues that the company now takes into account as it attempts to understand risk and malevolent actors using its platform. The company is also monitoring which users often flag content published by others as problematic, and which users or publishers are considered trustworthy by others.

“One of the signals we use is how people interact with articles,” Lyons told The Washington Post. “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true.”
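The signal Lyons describes could, in principle, be modelled as a smoothed confirmation rate: the more often a user’s past flags are confirmed false by fact-checkers, the more weight their future flags carry. The sketch below is purely illustrative; the function name, the neutral prior and the smoothing constant are assumptions, and nothing here reflects Facebook’s actual implementation, which the company has not disclosed.

```python
def reporter_weight(confirmed_flags: int, total_flags: int,
                    prior: float = 0.5, smoothing: int = 2) -> float:
    """Return a trust weight in [0, 1] for a user's 'false news' flags.

    Uses Laplace-style smoothing: a new user starts near the neutral
    prior (0.5), and the weight drifts toward the share of their flags
    that fact-checkers confirmed. A careful reporter approaches 1;
    an indiscriminate one sinks toward their low confirmation rate.
    """
    return (confirmed_flags + prior * smoothing) / (total_flags + smoothing)

# A careful reporter: 9 of 10 flags were confirmed false by fact-checkers.
careful = reporter_weight(confirmed_flags=9, total_flags=10)

# An indiscriminate reporter: only 10 of 100 flags were confirmed.
indiscriminate = reporter_weight(confirmed_flags=10, total_flags=100)

assert careful > indiscriminate
```

Under these assumptions the careful reporter scores about 0.83 while the indiscriminate one scores about 0.11, matching the behaviour Lyons outlines: a history of confirmed flags earns more influence over future rankings.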

Some experts, however, have voiced concerns about Facebook’s new move. “Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said Claire Wardle, the director of First Draft, a research lab within the Harvard Kennedy School that helps the social media company fact-check content. “But the irony is that they cannot tell us how they are judging us – because if they do, the algorithms that they built will be gamed.”

Last week, the company admitted that it had been “too slow” to prevent the spread of misinformation and hate on its platform in Myanmar, where security forces have carried out the ethnic cleansing of the Rohingya minority.

The company has been under intense scrutiny in the past few months after it became public that British political consulting firm Cambridge Analytica had accessed the private information of 8.7 crore (87 million) Facebook users, including over five lakh Indian users. The company also failed to identify alleged Russian interference in the 2016 presidential election in the United States.