Artificial intelligence (AI) has become so ubiquitous that it now touches most aspects of our lives: which books we read, which flights we book, what we buy online, whether a job application succeeds, whether we receive a bank loan and even how we treat cancer, according to the BBC. All of these things can now be determined automatically by sophisticated software systems. With the stunning advances AI has made in the past few years, there are many ways it could change our lives for the better.

Giiso Information, founded in 2013, is a leading Chinese technology provider in the field of "artificial intelligence + information", with top-tier technologies in big data mining, intelligent semantics, knowledge graphs and other fields. Its products include information, editing and writing robots. On the strength of its technology, the company received angel-round investment shortly after its founding, followed by a $5 million pre-A round from GSR Ventures in August 2015.

Over the past two years, the rise of AI has been unstoppable. Money is pouring into AI start-ups, and many established tech companies, including giants like Amazon, Microsoft and Facebook, have opened new research labs. It is no exaggeration to say that software increasingly means AI. Some predict that AI will change our lives dramatically, even more than the Internet did.

Image: AI is already proving its worth in many practical tasks, from tagging images to diagnosing diseases

We asked technologists about the impact of a rapidly changing world full of brilliant machines. It is worth noting that almost all of their answers revolved around ethics. For Peter Norvig, Google’s head of research and a machine learning pioneer, the key question raised by the recent successes of data-driven AI is how to ensure that these new systems improve society as a whole, not just those who control them. “AI is already proving its worth in many practical tasks, from tagging images and understanding language to helping diagnose diseases,” says Norvig. “The challenge now is to make sure everyone can benefit from this technology.”

The biggest problem is that the complexity of the software often makes it almost impossible to explain exactly why an AI system makes the decisions it does. Today’s AI is largely built on a successful technique called machine learning, but you can’t take the lid off and see inside its workings. For this reason, we can only choose to trust it. That creates the challenge of finding new ways to monitor and audit these systems in the many areas where AI now plays a big role.
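The black-box concern can be made concrete with a toy model. The sketch below is a hypothetical, self-contained illustration (not any production system): it trains a tiny logistic-regression classifier on the logical-AND function by gradient descent. Even in this minimal case, the only "explanation" the model offers for a decision is a handful of learned numbers; modern deep networks have millions of such parameters, which is why their decisions resist inspection.

```python
# Toy illustration of why learned models are hard to explain:
# the decision logic lives entirely in numeric weights.
import math

# Toy data: predict whether both inputs are 1 (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input feature
b = 0.0          # bias term
lr = 0.5         # learning rate

def predict_prob(x):
    """Probability that x belongs to class 1, via the logistic sigmoid."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Train with plain gradient descent on the log-loss.
for _ in range(2000):
    for x, y in data:
        err = predict_prob(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

predictions = {x: round(predict_prob(x)) for x, _ in data}
print(predictions)   # the model classifies AND correctly
print(w, b)          # but its "explanation" is only these raw numbers
```

The learned weights do separate the classes, yet nothing in them reads as a rule a human could audit; scaling this opacity up to deep networks is what makes the monitoring problem hard.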

For Jonathan Zittrain, a professor of Internet law at Harvard Law School, one of the big dangers is that increasingly sophisticated computer systems may escape the scrutiny they need. “Our systems are getting more complex with the help of technology,” he said. “I’m concerned that human autonomy is being reduced. If we set up a system and then forget about it, the system will evolve on its own, with consequences we may regret later. There is no clear moral rationale for this.”

Photo: Artificial intelligence will let robots do more complex jobs, like this shopping assistant robot serving customers in Japan

Other technology experts share this worry. “How can we prove that these systems are safe?” asks Missy Cummings, director of the Humans and Autonomy Lab at Duke University. Cummings, who was the Navy’s first female fighter pilot, is now an expert on drones.

AI does need to be regulated, but it is not yet clear how. “Right now, we have no universally accepted methodology, no industry standard for testing these systems,” Cummings said. “It’s very difficult to impose broad regulation on these technologies.” In a fast-changing field, regulators often find themselves helpless. In key areas such as criminal justice and healthcare, companies are already exploring the use of AI to make parole decisions or diagnose diseases. But by leaving such decisions to machines, we risk losing control. Who can guarantee that the machine will make the right decision in every case?

“A lot of serious questions about values are being written into these AI systems,” said Danah Boyd, principal researcher at Microsoft Research. “Who is ultimately responsible? Regulators, civil society and social theorists increasingly want these technologies to be fair and ethical, but those concepts are vague.”

One area fraught with ethical questions is employment: AI will help robots perform ever more complex tasks, displacing more human workers. China’s Foxconn, for example, plans to replace 60,000 workers with robots, and Ford’s factory in Cologne, Germany, already has robots working alongside human employees.

Photo: In many factories, human workers are already working alongside robots. Some believe this could have a huge impact on human mental health

What’s more, if increasing automation has a major impact on employment, it will also harm people’s mental health. Ezekiel Emanuel, a bioethicist and former medical adviser to President Obama, says: “If you think about what makes people’s lives meaningful, you find three things: meaningful relationships, intense interests and meaningful work. Meaningful work is one of the most important elements that defines a person’s life. In some areas, losing jobs when factories close leads to an increased risk of suicide, substance abuse and depression.”

As a result, we may need to place more demands on ethics. “Companies are following market incentives, and that’s not a bad thing, but we can’t rely on ethics alone to keep them in check,” says Kate Darling, an expert on law and ethics at the Massachusetts Institute of Technology. “It helps to have regulation in place. We’re already seeing this with privacy and with other new technologies, and we need to figure out how to deal with it.”

Darling noted that many big-name companies, such as Google, have set up ethics committees to oversee the development and deployment of AI, and she argues that such mechanisms should be widely adopted. “We don’t want to stifle innovation, but at some point we might want to create some kind of structure,” Darling says.

Few details are known about who will sit on Google’s ethics committee or what it can actually do. But in September 2016, Facebook, Google and Amazon formed a joint group with the goal of finding solutions to the security and privacy threats posed by AI. OpenAI is a similar organization that aims to develop and promote open-source AI for the benefit of all. Google’s Norvig said: “It’s important that machine learning technologies are studied openly and disseminated through open publications and open-source code, so that we can all share the rewards.”

If we can develop industry standards and ethical norms, and fully understand the risks of AI, then it will be important to establish regulatory mechanisms with ethicists, technologists and business leaders at their core. That is the best way to use AI for the benefit of humanity. “Our job is to reduce the sci-fi-movie worry about robots taking over the world and focus more on how technology can be used to help people think and make decisions, rather than replace them entirely,” Zittrain said.