As face recognition becomes more widely used, strengthening privacy safeguards is imperative

With the accumulation of data, leaps in computing power, and the optimization of algorithms, artificial intelligence is making life more efficient. Voice recognition and image recognition make identity authentication more reliable, proving that "you are you" in just a few seconds; intelligent diagnosis and autonomous driving let people see new opportunities to overcome disease and reduce accidents; artificial intelligence can also defeat the masters of Go and write beautiful verses... Its autonomy and creativity are blurring the line between human and machine.

However, as privacy violations, data leaks, algorithmic bias, and similar incidents emerge one after another, people have to reflect: the continuous advancement and widespread application of artificial intelligence bring enormous benefits, but making it truly beneficial to society requires equal attention to value guidance, ethical norms, and risk regulation.


"Face swiping" is more widely used, and the threat to privacy deserves attention

"Swiping your face" to enter the station, to pay, to sign in, even for law enforcement... Face recognition technology is entering ever broader application scenarios. Compared with fingerprints, irises, and other biometrics, the human face is a biological feature with weak privacy, so the threat this technology poses to the protection of citizens' privacy deserves particular attention. "Face images and videos are also data in the broad sense. If they are not properly kept and reasonably used, they can easily infringe on users' privacy," said Duan Weiwen, a researcher at the Institute of Philosophy, Chinese Academy of Social Sciences.

By collecting data and using machine learning to "profile" users' characteristics and preferences, Internet service providers can then offer personalized services and recommendations. Viewed positively, this is a kind of exchange between supply and demand. But for consumers, the exchange is not equal. Judging from the frequent infringements of personal data, the balance between individuals' data rights and institutions' data rights has been lost: in the collection and use of data, consumers are passive while enterprises and institutions are active. "Data has in fact become a resource monopolized by enterprises and a factor driving the economy," Duan Weiwen said. If businesses act only in their own interest, they will inevitably overuse or improperly disclose personal data.
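The profiling the article describes can be illustrated with a minimal sketch. The data and function names here are hypothetical, not from any real provider's system: a "profile" is built from nothing more than a log of a user's behavior, and a recommendation falls out of it.

```python
# Minimal sketch (hypothetical data): how a provider can "profile" a user
# from a simple log of their actions, as described in the article.
from collections import Counter

# Each entry is a content category the user clicked on; in reality this
# sediment of data accumulates from every action taken online.
click_log = ["sports", "finance", "sports", "travel", "sports", "finance"]

def build_profile(log):
    """Rank categories by their observed share of the user's activity."""
    counts = Counter(log)
    total = len(log)
    return {category: n / total for category, n in counts.most_common()}

def recommend(profile):
    """Recommend the category the profile ranks highest."""
    return max(profile, key=profile.get)

profile = build_profile(click_log)
print(profile["sports"])     # 0.5 -- half of all clicks
print(recommend(profile))    # sports
```

The asymmetry the article points to is visible even here: the user supplies the raw behavior, but the provider alone decides how it is aggregated and what is done with the result.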

"In the era of big data, every action an individual takes online leaves a sediment of data, and the aggregation of these data may ultimately lead to the leakage of personal privacy," said Li Lun, director of the Institute of Artificial Intelligence and Ethical Decision-Making at Hunan Normal University. Users have become objects of observation, analysis, and surveillance.

Algorithms should be more objective and transparent, to avoid discrimination and "big data price gouging" of loyal customers

In an era of information explosion, much data processing, analysis, and application is carried out by algorithms, and more and more decisions are delegated to them. From content recommendation to advertising, from credit-limit assessment to crime-risk assessment, algorithms are everywhere: the autonomous vehicle they drive may be safer than a human driver, and the diagnoses they produce may be more accurate than a doctor's. More and more people are growing accustomed to a "scored" society constructed by algorithms.

As a form of information technology, algorithms clear away the "fog" of information and data, but they also face ethical challenges: when artificial intelligence is used to assess crime risk, the algorithm can affect sentencing; when a self-driving car is in danger, the algorithm can decide which party to sacrifice; an algorithm applied to a weapons system can even determine the target of an attack... This raises a question that cannot be ignored: how can the fairness of algorithms be ensured?

Cao Jianfeng, a senior researcher at the Law Research Center of Tencent Research Institute, believes that even though an algorithm is a mathematical expression, it is essentially "an opinion expressed in mathematics or computer code." An algorithm's design, model, purpose, criteria for success, and use of data are all subjective choices made by programmers, and prejudice can be embedded in the algorithm, intentionally or not, and thereby encoded. "Algorithms are not objective, and algorithmic discrimination is not uncommon in the many areas where algorithmic decision-making plays a role."

"Algorithmic decision-making is mostly prediction: past data is used to forecast future trends. The algorithm's model and its input data determine the outcome of the prediction, so these two elements have become the main sources of algorithmic discrimination," Cao Jianfeng explained. Besides subjective factors, the data itself also affects an algorithm's decisions and predictions. "Data is a reflection of social reality. It may be incorrect, incomplete, or outdated, and the training data itself may be discriminatory. An algorithmic system trained on such data will naturally bear the imprint of discrimination."
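The mechanism Cao Jianfeng describes can be shown in a few lines. The data and model below are entirely hypothetical and deliberately simplistic: a "predictor" trained on skewed historical decisions reproduces the skew, treating two identically qualified applicants differently.

```python
# Minimal sketch (hypothetical data and model): an algorithm that predicts
# the future from past data inherits whatever bias the past data contains.

# Historical loan decisions: (qualification_score, group, approved).
# Applicants from group "B" were historically held to a stricter bar.
history = [
    (0.9, "A", 1), (0.7, "A", 1), (0.4, "A", 0), (0.8, "A", 1),
    (0.9, "B", 0), (0.7, "B", 0), (0.4, "B", 0), (0.8, "B", 1),
]

def train(history):
    """Learn a per-group approval threshold from past decisions."""
    model = {}
    for group in {g for _, g, _ in history}:
        approved_scores = [s for s, g, a in history if g == group and a == 1]
        # Approve only at or above the lowest historically approved score.
        model[group] = min(approved_scores) if approved_scores else float("inf")
    return model

def predict(model, score, group):
    return 1 if score >= model[group] else 0

model = train(history)
# Identical qualifications, different outcomes: the model has simply
# encoded the discrimination present in its training data.
print(predict(model, 0.75, "A"))  # 1 (approved)
print(predict(model, 0.75, "B"))  # 0 (rejected)
```

Nothing in the code mentions prejudice explicitly; the discrimination enters entirely through the two elements the article names, the model's design and the data it is fed.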

In March 2016, Microsoft launched its artificial intelligence chatbot Tay. In the course of interacting with netizens, it "went astray" within a short time, picking up sexist and racist language, and in the end Microsoft had to take it offline. Cao Jianfeng believes that algorithms tend to solidify or even amplify discrimination, allowing it to persist throughout the algorithmic process. Therefore, if algorithms are applied to crime assessment, credit and lending, employment evaluation, and other settings that touch people's vital interests, any discrimination that occurs may harm the interests of individuals and even of society.

In addition, deep learning remains a typical "black box" algorithm: even its designers may not know how it reaches a decision, so it is technically difficult to discover whether discrimination exists in a system and what its root cause is. "The 'black box' nature of algorithms leaves their decision logic lacking transparency and interpretability," Li Lun said. With the emergence of big data "price gouging" of regular customers and of algorithmic discrimination, society has grown increasingly skeptical of algorithms. In using data, governments and enterprises must increase transparency toward the public and return the right of choice to individuals.

Strengthen verification and supervision, and increase penalties for data abuse and other violations

In July 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan" (hereinafter, the "Plan"). The Plan emphasizes promoting self-discipline in the artificial intelligence industry and its enterprises, effectively strengthening management, and increasing penalties for data abuse, infringement of personal privacy, and violations of ethics.

"Although 'face swiping' applications are multiplying, artificial intelligence is still in its infancy. We need to strengthen the protection of data and privacy, and guard against decision-making errors and social injustice caused by the abuse of algorithms." On protecting personal data rights, Duan Weiwen suggested holding all parties to a data transaction responsible for their actions, so that everyone knows how their data is processed, especially when it is used for other purposes, thereby reducing data abuse and letting people know clearly whether their "face" is safe.

Duan Weiwen believes it is necessary to further strengthen the ethical design of artificial intelligence: to question and verify the theoretical assumptions, internal mechanisms, and practical contexts of algorithms throughout the entire process; and, starting from unjust outcomes and impacts of algorithmic decisions, to trace back whether the mechanism and process contain deliberate or unconscious misreading and misleading, expose the problems found, and prompt their correction and improvement.

In Cao Jianfeng's view, addressing the ethical issues raised by artificial intelligence requires, first, building internal and external constraint mechanisms for algorithmic governance, embedding human society's laws, ethics, and values into artificial intelligence systems; second, implementing ethical principles for AI research and development, encouraging developers to abide by basic ethical codes; third, carrying out necessary supervision of algorithms, improving the transparency of algorithm code and of algorithmic decision-making; and fourth, providing legal remedies for algorithmic decisions and discrimination, as well as for the personal and property damage they cause.

"We live in an era of human-machine symbiosis, in which conflicts and contradictions between humans and machines are inevitable and cannot be fully resolved by laws and institutions alone," Li Lun said. Individuals should work to improve their scientific literacy and actively protect their own rights, and society should establish a public platform for discussing the ethical issues of artificial intelligence as soon as possible, so that all parties can fully express their views and a consensus can take shape.
