Discussions about the risks of artificial intelligence are not new. In the past few years, many researchers have addressed the potential hazards and risks of artificial intelligence (and its closely related technologies, especially machine learning and big data). It is noteworthy that in public discourse, most popular discussions of the ethical hazards and risks of intelligent technology treat “human beings” as the potential victims, and a large number of narratives are constructed in the form of “machines versus humans”.

The human-versus-machine confrontation has been discussed endlessly, inspiring a large number of novels and films.

Discussions about how smart technology is dominated by, and in turn reinforces, the current social power structure, thereby deepening the oppression of vulnerable and marginalized groups, are sparse and unsystematic. This paper will introduce the ethical hazards and risks of smart technology, and focus on how it reinforces social and economic inequities.



Imagining a Future World: A Discussion of the Annihilation of Mankind

Some of the discussion about the ethical hazards and risks of smart technology has entered the pop-culture space and aroused public interest. It usually takes the form of futurology: attempts to predict what the world will look like once AI technology matures, especially once it becomes capable of improving itself. Discussions of superintelligence, of artificial intelligence destroying humanity, and of the technological singularity have triggered widespread public concern about the ethical issues involved.

Works on this topic sell well, and the public is paying growing attention to the ethics of artificial intelligence.

Other related ethical hazards and risks are unfolding in ways far more immediate than the grand, slightly sci-fi narrative of human survival. The impact of smart technology on employment, for example, is a recurring theme. Kevin Kelly believes that the replacement of human labor by machines is an irreversible process: for now there is work that only humans can do, or that humans still do better than machines, but machines will eventually outperform humans, and within a few decades most people will no longer need to engage in productive work. A McKinsey study went a step further and analyzed which occupations are most likely to be replaced by machines, with “predictable physical tasks” such as assembly-line work and food preparation and packaging among the most vulnerable.

Beyond widespread structural unemployment, there are also concerns about a battle between cyberspace and physical space: will AI systems that control vast resources, and even automated weapons, attack what they should not? The danger need not even take the form of war: given the important role these systems already play in the social economy, their misbehavior (or even just an inappropriate optimization objective) could have extremely bad consequences. It should be noted that the common discussions above generally emphasize the impact of AI on humanity as a whole, rather than the different impacts it has on different groups in present-day society.

The same trend appears in the analysis of the risks of smart technology. Many of these analyses focus on technology rather than taking social and political factors into account. For example, researchers from Stanford University believe that statistical models obtained through machine learning, especially deep learning, have the following characteristics and security risks:

  • Opacity: it is difficult or impossible to understand their internal logic;

  • Indivisibility: the relationship between inputs and outputs cannot be understood by decomposing the model into parts;

  • Vulnerability: small changes in inputs can cause significant and unpredictable changes in outputs (see the sketch after this list);

  • Not fully understood: even their designers lack a complete account of how they behave.
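
To make the “vulnerability” point concrete, here is a minimal sketch, assuming nothing beyond plain NumPy and entirely invented data, with a linear classifier standing in for a learned model. A small, targeted change to the input flips the model’s decision; deep networks exhibit the same brittleness at far smaller perturbation sizes (the well-known adversarial-example phenomenon).

```python
# Minimal sketch of the "vulnerability" property: a small, targeted input
# perturbation flips the decision of a classifier. The linear model and the
# data are synthetic stand-ins for a trained system.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=50)   # weights of a (hypothetical) trained linear model
x = rng.normal(size=50)   # an input to classify
score = w @ x             # decision score; predicted class = sign(score)

# For a linear model the gradient of the score with respect to x is simply w,
# so the shortest step that crosses the decision boundary points along w.
eps = 1.2 * abs(score) / (w @ w)
x_adv = x - np.sign(score) * eps * w

print(f"original:  score = {score:+.3f} -> class {int(np.sign(score))}")
print(f"perturbed: score = {w @ x_adv:+.3f} -> class {int(np.sign(w @ x_adv))}")
print(f"relative input change: {np.linalg.norm(x_adv - x) / np.linalg.norm(x):.1%}")
```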

In contrast, Cathy O’Neil, author of Weapons of Math Destruction, identifies the dangerous characteristics shared by many of the intelligent algorithmic tools that now shape people’s daily work and lives:

  • They are secret, often the trade secrets of a company;

  • They’re opaque, and the people they affect don’t understand how these algorithms work;

  • They have a wide range of applications;

  • Their definition of “success” is questionable, and the people they influence may not agree with it;

  • They create harmful feedback loops (see the toy simulation after this list).
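
As a toy illustration of the last point, the following simulation, with entirely invented numbers, shows how a “data-driven” patrol-allocation loop can lock in an initial recording gap between two districts whose true crime rates are identical: the algorithm sends patrols where past data shows crime, and patrols generate more recorded crime.

```python
# Toy simulation (invented numbers) of a harmful feedback loop: patrols are
# allocated according to recorded crime, and more patrols mean more crime
# gets recorded, so an initial recording gap between two otherwise identical
# districts is never corrected -- the data keeps confirming itself.
import numpy as np

rng = np.random.default_rng(1)

true_rate = np.array([10.0, 10.0])  # identical true crime rates in districts A and B
recorded = np.array([12.0, 8.0])    # a small historical recording gap

for step in range(10):
    patrols = 20 * recorded / recorded.sum()       # allocation follows the data
    detection = np.clip(0.05 * patrols, 0.0, 1.0)  # more patrols -> higher detection
    recorded = recorded + rng.poisson(true_rate * detection)

print("final recorded totals:", recorded)  # the gap persists and compounds
print("final patrol split:", (20 * recorded / recorded.sum()).round(1))
```

The loop never receives evidence that the two districts are alike, because the data it learns from is produced by its own allocation decisions.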





Is the Future Folding?

Compared with the previous list, the features O’Neil identifies have one noteworthy element: specific people. She points out something extremely important, but by no means always obvious: smart technology affects people differently, and the same technology can benefit some people while hurting others. Take, for example, a teacher performance evaluation algorithm introduced in Illinois in 2010, which provoked widespread opposition, and even demonstrations, by teachers in Chicago.

As Linnet Taylor has perceptively pointed out, in ethical assessments people tend to talk about the potential harms of intelligent technology in the abstract and its benefits in the concrete, so the tangible benefits always trump the vague, unknown harms and the project passes the assessment. By bringing social and political factors into the discussion, O’Neil’s focus on specific populations gives us an important perspective for examining the damage and risks that smart technology can bring.

The first thing to note from this perspective is that the impact of smart technology on the labor market is not uniform. Erik Brynjolfsson and Andrew McAfee show in Race Against the Machine that less-educated, lower-paid workers are more likely to be replaced by smart technology and less able to acquire new vocational skills, thus aggravating structural unemployment. Paul Krugman was right to point out that a world of all-powerful, efficient workbots would not necessarily be a better place, because those without the means to own robots would be miserable. Research in this area is still scarce, but existing studies suggest that in highly automated, intelligent work environments, workers with little education and low skill levels face deteriorating working conditions, rising labor intensity, falling incomes, and a lack of labor and social-security protections; the phenomenon is especially common in the “sharing economy”. In extreme cases, workers are alienated into “ghosts in the digital machine” and “slaves on the production line”.

The scholar Jack Linchuan Qiu discusses this new form of slavery under the name “iSlave”.

In addition, smart technology may be deepening prejudice and discrimination against the socially disadvantaged. Wendy Chun argues that “machine learning is like the laundering of prejudice”. Through machine learning, prejudice and discrimination are packaged into models and algorithms, making injustice more covert and far-reaching: LinkedIn’s search engine favored male job applicants, Google’s AdSense advertising platform showed racial bias, controversial predictive policing is structurally discriminatory against African-Americans and Muslims, and low-income people will find it even harder to escape poverty because of smart technology. Gender, race, religion, income… every kind of real-world prejudice and discrimination seems to find a foothold in smart technology.
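
A toy sketch of this “laundering”, assuming nothing beyond plain NumPy and fully synthetic data: the protected attribute is never given to the model, yet a correlated proxy feature lets a simple logistic regression reproduce the bias baked into its historical training labels.

```python
# Toy sketch of "bias laundering": historical hiring labels are biased against
# group 1. The model never sees the group label, but a correlated proxy (say,
# neighborhood) lets it reproduce the bias anyway. All data is synthetic.
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

group = rng.integers(0, 2, n)            # protected attribute (never a feature)
proxy = group + rng.normal(0.0, 0.5, n)  # correlates strongly with group
skill = rng.normal(0.0, 1.0, n)          # the legitimately job-relevant feature

# Biased history: at equal skill, group 1 was hired less often.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

# "Fairness through unawareness": train on skill and proxy only.
X = np.column_stack([skill, proxy, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):  # plain logistic regression via gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - hired) / n

pred = (X @ w) > 0
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
```

Dropping the protected attribute does not remove the bias, because the proxy carries the same information; the model simply launders the historical pattern into an apparently neutral score.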

Smart technology is also being used to steer public sentiment. By manipulating what users saw in their news feeds, Facebook managed to modulate the emotional tone of their posts, demonstrating that emotions can be contagious among large numbers of online users. According to a leaked document, JTRIG has been using YouTube, Facebook, Twitter, blogs, forums, emails, text messages, its own websites, and other channels to manipulate public sentiment in order to eliminate “criminal, security and defence threats”. In politics, smart technology can induce voters to make one-sided judgments (Cathy O’Neil, 2015); in business, it indoctrinates consumers into “manufactured slaves” addicted to an ever-changing supply of consumer goods.

JTRIG (the Joint Threat Research Intelligence Group) is a unit of GCHQ, the UK Government Communications Headquarters.


As early as the mid-1980s, researchers debated whether computer ethics is a unique field. According to Deborah Johnson, computer ethics merely presents standard moral problems in a new form, forcing us to extend old moral norms into a new domain; it is not itself a unique new subject. James Moor, by contrast, holds that computer ethics is a genuinely new field, because computers transform and intensify existing ethical issues while creating new ones that never appeared in the past.

These two viewpoints are both instructive for a comprehensive understanding of the ethical issues of intelligent technology. We should fully appreciate the uniqueness of intelligent technology and its distinctive impact on ethical questions, while also recognizing the old conflicts, struggles, and ethical norms hidden behind the new technology, so that we can accurately grasp the ethical direction of intelligent technology and steer its development in a direction that benefits the general public.
