This article is about 3,239 words and should take roughly 10 minutes to read.

Photo source: Unsplash

For most internet users, Google has been the best place to get questions answered, Facebook the best place to find friends and family, Amazon the best place to shop, and YouTube the best place to find almost any video.

Among these companies, whatever criticism its management may attract, Google's research and innovation capacity far surpasses that of most other enterprises, and it has consistently stayed at the forefront of high-tech research and development, a fact that is widely recognized.


Google has launched a new service for explainable artificial intelligence (XAI) in response to this AI trend. There aren't many tools on offer yet, but the broad direction is right.


Making artificial intelligence explainable


AI will increase global productivity, change working patterns and lifestyles, and create enormous wealth.


According to Gartner, a leading information technology research and consulting firm, the global AI economy will grow from about $1.2 trillion in 2018 to about $3.9 trillion by 2022, while McKinsey believes the global AI economy will reach about $13 trillion by 2030.


Artificial intelligence technologies, especially deep learning (DL) models, are revolutionizing business and technology with jaw-dropping performance in applications such as image classification, object detection, object tracking, gesture recognition, video analysis, and synthetic image generation, and these are just the tip of the iceberg.

Photo source: freelang.blog.sohu.com

The technology has applications in healthcare, information technology services, finance, manufacturing, autonomous driving, video games, scientific discovery, and even the criminal justice system.


However, deep learning differs from classical machine learning (ML) algorithms and techniques. Deep learning models use millions of parameters and build extremely complex, highly non-linear internal representations of the images or data they are trained on.


Therefore, deep learning is often called the ultimate black-box ML technique. After being trained on large data sets, such models can produce very accurate predictions, yet it remains very difficult to understand which internal features and representations the model relies on to classify a specific image or data point into a particular class.


Source: CMU ML blog
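To make the problem concrete, here is a minimal sketch of gradient-based saliency, one simple attribution technique (not necessarily the method any particular vendor uses). It assumes TensorFlow 2.x and a pre-trained MobileNetV2 classifier; a random image stands in for a real photo so the snippet stays self-contained.

```python
# A minimal sketch of gradient-based saliency, one simple attribution
# technique for peeking inside a black-box image classifier.
# Assumes TensorFlow 2.x; a random image stands in for a real photo.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

# In practice this would be a preprocessed photo; random values keep the
# sketch self-contained.
image = tf.convert_to_tensor(
    np.random.uniform(-1.0, 1.0, size=(1, 224, 224, 3)).astype("float32")
)

with tf.GradientTape() as tape:
    tape.watch(image)
    predictions = model(image)
    top_class = tf.argmax(predictions[0])
    top_score = tf.reduce_max(predictions[0])  # score of the top class

# Large gradient magnitudes mark the pixels that most influenced the score.
saliency = tf.reduce_max(tf.abs(tape.gradient(top_score, image)), axis=-1)[0]
print("predicted class index:", int(top_class))
print("saliency map shape:", saliency.shape)  # (224, 224)
```

The idea is that pixels whose gradients have large magnitude had the most influence on the predicted score, which is one crude way to look inside the black box.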


What is explainable artificial intelligence (XAI)?


Quite simply, as the name suggests, you want the model not only to give a prediction, but also to give some explanation of why the prediction turned out the way it did.

But why?


The main reasons for building explainability into AI systems are:


· Improved understandability and transparency

· Verifying that the decisions the machine makes are reasonable

· Helping to assign responsibility and accountability for decisions

· Avoiding discrimination

· Reducing social bias


There is still a lot of debate about this, but a consensus is emerging that post-hoc rationalization of decisions is not enough: the goal of interpretability should be built into the AI model/system as an integral part of the core design phase, not bolted on as an accessory.


Here are some common methods:


· Fully understand the data: make the distinguishing features more intuitive

· Fully understand the model: visualize the activations of the neural network's layers (a minimal sketch follows this list)

· Understand user psychology and behavior: combine behavioral models with statistical learning, so that explanations are generated as part of the data and modeling process
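As an illustration of the second point, here is a minimal sketch of activation visualization, assuming TensorFlow/Keras; the tiny untrained CNN and its layer names are purely illustrative, standing in for whatever trained model you actually want to inspect.

```python
# Minimal sketch: inspecting the intermediate activations of a Keras model.
# The tiny, untrained CNN below is purely illustrative; in practice you
# would load your own trained model.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu", name="conv1"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu", name="conv2"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Re-wire the model so it also returns the intermediate feature maps.
activation_model = tf.keras.Model(
    inputs=model.input,
    outputs=[model.get_layer("conv1").output, model.get_layer("conv2").output],
)

dummy_image = np.random.rand(1, 64, 64, 3).astype("float32")
conv1_maps, conv2_maps = activation_model(dummy_image)
# Each channel of these tensors can be rendered as a grayscale image.
print("conv1 feature maps:", conv1_maps.shape)  # (1, 62, 62, 8)
print("conv2 feature maps:", conv2_maps.shape)  # (1, 29, 29, 16)
```

Rendering each channel of those feature maps as a grayscale image shows which patterns each layer responds to.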


Even DARPA has launched an entire program to design XAI principles and algorithms for future AI/ML-driven defense systems.


Interpretability goals should be incorporated into the AI model/system at the core design phase.

Google has launched a new service to address this problem

As business analysts and economists predicted, Google (or its parent company Alphabet) has a big stake in the trajectory of the vast AI economy.


Back in 2017, Google was known for its “AI first” strategy.


As a result, Google may be feeling pressure to take the industry lead by offering explainable AI services that make AI less mysterious and more accessible to the general user base.

Photo source: fawww.elecfans.com


Google Cloud: aiming to lead in XAI technology

Google is a leader in attracting artificial intelligence and machine learning talent, and it is an undisputed giant of today's information economy. However, its cloud service ranks only third, behind Amazon and Microsoft.


Top Cloud Providers 2019

However, as this article points out, while the traditional "infrastructure-as-a-service" wars have largely been settled, new technologies such as artificial intelligence and machine learning have opened up new themes, strategies and approaches for users to experiment with.


Along those lines, at an event in London this week, Google's cloud computing division unveiled a new service that it hopes will help it overtake Microsoft and Amazon.


Professor Andrew Moore, a leading artificial intelligence researcher, introduced the service in London.

(Source: Official Google blog)

Professor Andrew Moore announcing the launch of Google Cloud's explainable AI service in London


"Explainable AI is a set of tools and frameworks that help you develop interpretable and inclusive machine learning models and deploy them with confidence. With it, you can understand feature attributions in AutoML Tables and AI Platform, and visually investigate model behavior using the What-If Tool."


At the start: limited goals

Initially, the objectives and scope are rather limited: the service provides some information about the performance and potential shortcomings of face-detection and object-detection models.


Over time, however, GCP hopes to provide broader insight and visibility, making the inner workings of AI systems less mysterious so that everyone can trust them more.


New technologies such as artificial intelligence and machine learning are opening up new themes, strategies and approaches for cloud service providers.


Prof Moore concedes that interpretability is a problem Google itself has struggled with at times.


"One of the things that has both fascinated and frustrated us at Google is that we can build highly accurate machine learning models, yet we still have to understand what they are doing. In many of our large systems, whether models for smartphones, search-ranking systems or question-answering systems, we have had to work hard to understand what is going on inside them."


With "model cards", Google hopes to give users a clearer account of how each model behaves and where it falls short.


Google Face Detection Model Card (source: ZDNet)

Google has also introduced the What-If Tool, a scenario-analysis framework, and encourages users to combine the new explainability tools with it.


"The explanation capabilities of Explainable AI can be combined with the What-If Tool to get a complete picture of a model's behavior," said Tracy Frey, director of strategy at Google Cloud AI.

Google AI's What-If Tool
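As a rough illustration (not Google's official snippet), the open-source What-If Tool can be launched in a Jupyter notebook against an arbitrary classifier through a custom prediction function; the feature names, examples, and scoring function below are placeholders, and the snippet assumes the witwidget package is installed.

```python
# Minimal sketch: opening the open-source What-If Tool in a Jupyter notebook
# against an arbitrary classifier via a custom predict function. The feature
# names, examples, and scorer below are made up for illustration.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(income, age):
    # Pack a hypothetical loan applicant into a tf.Example for the tool.
    return tf.train.Example(features=tf.train.Features(feature={
        "income": tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
        "age": tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
    }))

examples = [make_example(52000.0, 34.0), make_example(18000.0, 51.0)]

def predict_fn(examples_to_score):
    # Placeholder scorer: return [P(rejected), P(approved)] for each example.
    return [[0.3, 0.7] for _ in examples_to_score]

config = (WitConfigBuilder(examples)
          .set_custom_predict_fn(predict_fn)
          .set_label_vocab(["rejected", "approved"]))
WitWidget(config, height=600)  # renders the interactive tool in the notebook
```

From the resulting interactive widget you can edit individual examples, flip feature values, and watch how the (here hypothetical) predictions change.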

Explainable AI is currently offered as a free add-on: the explanation tools come at no extra cost to users of AutoML Tables or AI Platform.


All in all, it seems like a good start. However, even within Google, not everyone is enthusiastic about XAI.

The bigger problem: bias

Photo source: Unsplash

Previously, Peter Norvig, Google's director of research, said of explainable AI:


"You can ask a human, but what cognitive psychologists have discovered is that when you do, you're not really getting at the decision process. They make a decision first, then you ask, and then they generate an explanation, which may not be the true explanation."


So, essentially, explanations of decisions are limited by human psychology, and demanding the same kind of after-the-fact explanation from a machine does not change that. Do we really need to re-engineer these mechanisms for machine intelligence, and what if the explanations they produce turn out not to suit users at all?


Instead, he argues, more attention should be given to tracking and identifying biases and fairness in machine decision-making.


For that purpose, looking at the inner workings of the model is not necessarily the best starting point. Instead, we can examine all of the output decisions the system makes over time and look for the patterns that reveal hidden bias mechanisms.
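To make that concrete, here is a minimal sketch of such an output-level audit, with hypothetical group labels and decisions: rather than opening the model, it simply compares decision rates across groups.

```python
# Minimal sketch of an output-level audit: compare the model's decisions
# across groups instead of inspecting its internals. Columns and data are
# hypothetical stand-ins for a real decision log.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   0,   0,   1,   0],
})

# Approval rate per group; a large, persistent gap hints at hidden bias.
rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("Demographic parity gap:", abs(rates["A"] - rates["B"]))
```

A persistent gap in approval rates across groups, measured on real decision logs, is exactly the kind of pattern Norvig suggests watching for.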


Should bias and fairness matter more to future AI systems than individual explanations?


If you apply for a loan and are rejected, an explainable AI service might pop up a statement that reads: "Your loan application was rejected due to a lack of adequate proof of income." Anyone who has ever built an ML model, however, knows that the process is not that one-dimensional: the decision depends on the specific structure and weights the mathematical model has drawn from the data it was trained on, and that data may well carry biases against certain groups in society, for whom income flows and economic circumstances look very different.
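As a rough, hypothetical illustration of why such a one-line explanation is an oversimplification, the sketch below fits a tiny logistic-regression loan model on made-up data and shows that the decision is really a weighted sum of several learned feature contributions, each shaped by whatever biases the training data carries.

```python
# Hypothetical loan model: the "reason" for a rejection is really a weighted
# sum of learned feature contributions, not a single factor. Data is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "years_employed", "debt_ratio"]
X = np.array([[60, 5, 0.2], [25, 1, 0.6], [40, 3, 0.4],
              [15, 0, 0.8], [80, 10, 0.1], [30, 2, 0.5]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = rejected

model = LogisticRegression().fit(X, y)

applicant = np.array([22, 1, 0.55])
# Per-feature contribution to the decision score (coefficient * value).
contributions = model.coef_[0] * applicant
for name, c in zip(features, contributions):
    print(f"{name:>15}: {c:+.3f}")
print("approval probability:", model.predict_proba([applicant])[0, 1])
```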


Thus, the debate should revolve around relative importance: is it more valuable for a system simply to present a preliminary, watered-down explanation, or to be built so that it is less biased and more fair in the first place?

Photo source: Unsplash

This may sound simple, but it is hard to achieve. Whatever standard and degree of fairness is chosen, bias keeps creeping in, because people differ in their viewpoints and sensitivities and are influenced by time, place, circumstance and emotion. Driving bias all the way down to zero is extremely difficult.


Let's wait and see what Google does next…

Like, comment, and follow

We share practical content on AI learning and development. You are welcome to follow the AI-focused we-media account "core reading technology" on all platforms.



(Add WeChat: DXSXBB to join the readers' circle and discuss the latest artificial intelligence technology.)