Face recognition raises ethical concerns! Is your face under surveillance?

by YCPress

Face recognition systems have brought many conveniences to our cities. In many countries, however, resistance to the technology is growing: researchers, civil liberties advocates, and legal scholars are all troubled by its rise.

They are tracking its use, exposing its harms, and launching campaigns to seek safeguards or even an outright ban on the technology.

Yet the momentum of technological development is powerful, and many people consider the spread of this technology "inevitable". Even so, the moral and ethical issues behind it deserve serious consideration.

Recently, a series of reports in Nature examined the ethics of facial recognition. Some scientists are analyzing the technology's inherent inaccuracies and biases, warning about the discrimination that can result, and calling for stronger oversight and greater transparency.

A Nature survey of 480 researchers working in facial recognition, artificial intelligence, and computer science shows that researchers are broadly concerned about the ethics of facial recognition research, though opinions differ.

Some data is gathered without consent

For a facial recognition algorithm to work well, it must be trained and tested on a large image data set. Ideally, these images are captured repeatedly, under different lighting conditions and from different angles.

In the past, scientists generally recruited volunteers and photographed them from various angles; now, most face images are collected without the subjects' permission.

When the 480 respondents in the Nature survey were asked about research that uses facial recognition to identify or predict personal characteristics (such as gender, age, or race) from appearance, about two-thirds said such research should be carried out only with the informed consent of the people whose faces are used, or after discussion with representatives of groups that might be affected.

Most respondents believe that research using facial recognition software should be approved in advance by an ethics review body (such as an institutional review board).

Respondents were most uncomfortable with facial recognition being used for real-time surveillance in schools and workplaces, or by private companies monitoring public places, but they generally supported its use by police in criminal investigations.

Legally, it is not yet clear whether European scientists can collect photos of people's faces for biometric research without their consent; the EU's General Data Protection Regulation does not give researchers a clear legal basis for doing so.

In the United States, some states have made it illegal for commercial companies to use a person's biometric data without their consent.

Respondents strongly believe that additional regulations should govern the use of facial recognition by public institutions, and more than 40% want real-time mass surveillance to be banned.

There is gender and racial bias

Facial recognition systems are usually proprietary and kept confidential, but experts say most involve a multi-stage pipeline in which deep learning is used to train large neural networks on massive amounts of data.
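To make that multi-stage pipeline concrete, here is a minimal sketch of the embedding-and-compare step in Python. It is illustrative only: the FaceEmbedder network, the 112x112 input size, and the similarity threshold are assumptions made for the example, not any vendor's actual system, and a real pipeline would add face detection, alignment, and training on millions of labelled images.

```python
# Minimal sketch of an embedding-based face recognition step (illustrative only).
# Assumes faces are already detected and cropped to 112x112 RGB.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FaceEmbedder(nn.Module):
    """Toy convolutional network that maps a face crop to a 128-d embedding."""

    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)  # unit-length embeddings


def same_person(model: nn.Module, face_a: torch.Tensor, face_b: torch.Tensor,
                threshold: float = 0.6) -> bool:
    """Declare a match when the cosine similarity of the two embeddings exceeds a threshold."""
    with torch.no_grad():
        emb = model(torch.stack([face_a, face_b]))
        similarity = (emb[0] * emb[1]).sum().item()
    return similarity > threshold


if __name__ == "__main__":
    model = FaceEmbedder().eval()
    probe = torch.rand(3, 112, 112)    # stand-ins for real aligned face crops
    gallery = torch.rand(3, 112, 112)
    print("match:", same_person(model, probe, gallery))
```

The key design point is that the network never stores raw photos at comparison time: it reduces each face to a fixed-length "facial fingerprint" and decides matches by thresholding a similarity score, which is where the accuracy and bias questions discussed below arise.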

In a report released at the end of last year, the US National Institute of Standards and Technology (NIST) stated that the accuracy of facial recognition has improved significantly and that deep neural networks are effective at image recognition.

But NIST also confirmed that most facial recognition systems are more accurate for white male faces than for people of color or women. In particular, faces classified in the NIST database as African American or Asian were 10 to 100 times more likely to be misidentified than those classified as white, and women's faces were more likely to be misidentified than men's.
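To illustrate what a "10 to 100 times" disparity means in practice, the short sketch below computes per-group false match rates from hypothetical counts. The group labels and numbers are invented for the example and are not NIST's data.

```python
# Illustrative arithmetic only: comparing per-group false match rates (FMR).
# The counts below are made up; real evaluations use millions of image pairs.
from collections import namedtuple

GroupResult = namedtuple("GroupResult", "impostor_pairs false_matches")

# Hypothetical impostor comparisons (pairs of different people) per demographic group.
results = {
    "group_A": GroupResult(impostor_pairs=1_000_000, false_matches=50),
    "group_B": GroupResult(impostor_pairs=1_000_000, false_matches=2_500),
}

fmr = {group: r.false_matches / r.impostor_pairs for group, r in results.items()}
for group, rate in fmr.items():
    print(f"{group}: FMR = {rate:.6f}")

# The kind of ratio such reports cite: how many times more often one group is falsely matched.
print("disparity ratio:", fmr["group_B"] / fmr["group_A"])  # -> 50x in this invented example
```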

Craig Watson, an electrical engineer who leads the NIST imaging team, believes these inaccuracies most likely reflect imbalances in the composition of each company's training database, and says some companies may already have begun to address the problem.

Awaiting stricter legislation and oversight

Researchers working on facial recognition and analysis point out that the technology has many uses: finding missing children, tracking criminals, making smartphones and ATMs easier to use, and helping robots interact with humans by recognizing their identities and emotions. In some medical research, it can also help diagnose illness or remotely monitor consenting participants.

Facial recognition has benefits, but those benefits must be weighed against its risks, which is why the technology needs careful and appropriate oversight.

At present, many researchers, along with companies such as Google, Amazon, IBM, and Microsoft, are calling for stricter regulation of facial recognition systems.

Woodrow Hartzog, a computer scientist and law professor at Northeastern University in Boston, Massachusetts, who studies facial surveillance, has called facial recognition "the most dangerous invention in history". He argues that if US lawmakers allow companies to use facial recognition, they should write rules prohibiting the collection and storage of "facial fingerprints" in places from gyms to restaurants, and banning the use of facial recognition in conjunction with automated decision-making (such as predictive policing, ad targeting, and hiring).

Careful study and reflection are needed

Anil Jain, a computer scientist at Michigan State University in East Lansing, said: "In our society, we need a lot of legitimate and legal facial and biometric applications." But some scientists say researchers must also recognize that a technology that can remotely identify or classify people's faces without their knowledge is fundamentally dangerous, and that they should resist its use to control people.

The Neural Information Processing Systems conference, one of the premier conferences in artificial intelligence, is requiring this kind of ethical reflection for the first time this year: scientists submitting papers on facial recognition must add a statement discussing the ethical issues and potential negative consequences of their work.

In addition, the journal Nature Machine Intelligence is trialling a requirement that the authors of some machine learning papers include a statement considering broader societal impacts and ethical concerns.

Karen Levy, a sociologist at Cornell University in Ithaca, New York, who studies the ethics of technology, believes that scholars working on facial recognition are becoming aware of these moral and ethical issues, which "feels like a real awakening in the scientific community."