As the saying goes, there are a thousand Hamlets in a thousand people's eyes. There may not be a thousand versions of Google, but everyone has a slightly different idea of the company.

It started out as a search engine, moved into email with Gmail, built an enormous advertising business, and owns Android, the world's most popular mobile operating system (with more than 2 billion active devices). In recent years Google has also started selling phones, TV sticks, tablets, and smart speakers…

It’s getting harder and harder to define Google.

At Google I/O, the company's annual developer conference, CEO Sundar Pichai said today that Google has always done what it does best: using cutting-edge computing to solve the world's most complex problems, "problems that affect people's daily lives."

By embracing mobile computing early, Google reaped the dividends of the PC-to-mobile transition. For most smartphone users, Google has become the most important part of their daily digital lives. Only numbers can capture just how much people love Google: Google Maps navigates more than a billion miles a day, and users spend more than a billion hours a day on YouTube.

And even in countries where Google's services are temporarily unavailable, Google has found another way in: thanks in part to China, the number of active Android devices worldwide recently passed 2 billion.

However, a new paradigm shift is coming, and this time the keyword is artificial intelligence. Pichai says the advent of AI is once again forcing Google to rethink how it conceives its products. Slowly, you will find AI in all of Google's products.

Take Google Photos, launched two years ago, which has become the most popular cloud photo service with more than 500 million users, thanks to free upload storage, face detection, and automatic photo organization powered by image-recognition technology.

Or take Google Search, which has long gone beyond text queries. To meet users' growing expectations, it accepts voice input and image searches, and can answer questions directly and accurately rather than returning a pile of web pages of uncertain reliability.

Or Gmail, a simple email system. How can it be made more useful? Google found it works much better to flag and deal with spam automatically rather than have users flag it manually, and to recognize the content of an email and offer several contextual default replies instead of making you type one.
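To make the idea of contextual default replies concrete, here is a toy sketch in Python. This is not Gmail's actual system, which uses learned sequence models; the keyword rules and canned replies below are invented purely for illustration.

```python
# Toy sketch of contextual reply suggestions. Gmail's real Smart Reply
# uses machine-learned models; these hand-written rules are hypothetical.
CANNED_REPLIES = {
    "question": ["Let me check and get back to you.", "Good question.", "Yes."],
    "meeting":  ["Works for me.", "Can we do it later?", "I'll be there."],
    "thanks":   ["You're welcome!", "Happy to help.", "Anytime."],
}

def suggest_replies(email_body: str) -> list[str]:
    """Return up to three canned responses based on simple keyword cues."""
    text = email_body.lower()
    if "?" in text:
        return CANNED_REPLIES["question"]
    if any(w in text for w in ("meet", "meeting", "schedule")):
        return CANNED_REPLIES["meeting"]
    if any(w in text for w in ("thank", "thanks", "appreciate")):
        return CANNED_REPLIES["thanks"]
    return []  # no suggestion for unrecognized content

print(suggest_replies("Can we schedule a call on Friday"))
```

The real feature replaces these brittle rules with a model trained on billions of emails, but the product shape is the same: classify the incoming message, then rank a set of short candidate replies.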

Other Google products revolutionized by machine learning include YouTube, Maps, Android, Chrome, and more. In fact, over the past two years Google has consolidated the machine learning technology it develops and uses internally into TensorFlow, a framework that covers many common deep learning techniques, features, and paradigms and is used in almost all of Google's products.

"Every core Google product you can think of has machine learning and deep learning behind it," Pichai said in his I/O 17 keynote.

This year, Google decided to go all in on artificial intelligence. Pichai announced a change in the company's core strategy from Mobile First to AI First. Almost all of the big announcements at this year's conference had to do with artificial intelligence.

The first is the Cloud TPU. The Tensor Processing Unit (TPU) is a processor designed for Google's deep learning framework TensorFlow and installed in servers in its data centers. A year ago, Google revealed that it had built a dedicated TPU deep learning processor, which attracted a great deal of attention.

Today's second-generation product, the Cloud TPU, did not disappoint: when Pichai officially announced it, the audience erupted in applause.

The Cloud TPU uses a computing architecture developed entirely by Google. One board carries four TPU chips and reaches a theoretical 180 teraflops of compute, which can significantly accelerate both the training and the serving of machine learning models.

Until now, GPUs, championed by companies such as Nvidia, have been hailed as the most useful deep learning processors, and Google has mainly used general-purpose GPU computing (GPGPU) for its internal research and production workloads. But with the advent of new deep neural network models, GPUs, which must remain general-purpose, sometimes can't keep up. That is why the TPU has become Google's weapon for replacing GPUs in deep learning.

If you think the Cloud TPU is just another processor, you're underestimating it: like Lego bricks, the boards can be pieced together into a more powerful supercomputer. A pod currently supports up to 64 boards, for a whopping total of over 11 petaflops.
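The pod figure follows from simple multiplication. Here is a quick back-of-the-envelope check, assuming peak numbers simply add across boards (real-world throughput would also depend on interconnect overhead):

```python
# One Cloud TPU board is rated at 180 teraflops; a pod links up to 64 boards.
TFLOPS_PER_BOARD = 180
BOARDS_PER_POD = 64

pod_tflops = TFLOPS_PER_BOARD * BOARDS_PER_POD  # 64 boards at peak
pod_pflops = pod_tflops / 1000                  # convert teraflops to petaflops

print(pod_tflops, "TFLOPS =", pod_pflops, "PFLOPS")  # 11520 TFLOPS = 11.52 PFLOPS
```

That 11.52 petaflops is the "over 11 PFLOPS" headline number for a full pod.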

Image recognition, an important application of artificial intelligence, also featured prominently in this year's announcements.

Many in the audience were excited about a new camera feature called Google Lens. It handles basic recognition tasks, such as identifying a flower from a photo (presumably flowers aren't the only objects it can recognize) and scanning a Wi-Fi name/password label or barcode to connect the phone to the network automatically, with no need to find the network and type the password by hand.

Even more interesting: if you're in a new city and don't know which restaurant to pick, you can open Google Lens and point it at any restaurant. It will automatically pull up the relevant information from Google's database, including its name, dishes, ratings, hours, and so on.

While other companies are busy chasing high scores in image-recognition benchmarks, Google is thinking more about how to turn the technology into genuinely useful features. A good example is Google Photos: product managers found that people take lots of photos at parties and often forget to share them with friends. So they built a new feature called Suggested Sharing that automatically identifies the faces in your photos, finds the matching friends, and asks, "Would you like to share this with them?"

If the person is close to you, such as a family member, a new feature called Shared Libraries makes sharing family photos even easier. Also based on face recognition, it automatically syncs photos containing a given face, from a chosen date onward, to a family member. Shared photo collections are nothing new, of course; iOS shared albums have been around for years. What makes Google Photos different is that you don't have to scroll through hundreds of photos by hand: artificial intelligence makes it effortless.
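Conceptually, the matching step behind features like Suggested Sharing can be viewed as nearest-neighbor search over face embeddings. The sketch below is purely illustrative: real systems use learned embedding models with far more dimensions, and the three-dimensional vectors, names, and threshold here are made up.

```python
import math

# Illustrative only: real face recognition uses deep models that map a face
# to a high-dimensional embedding. These tiny 3-D vectors are fabricated.
KNOWN_FACES = {
    "Alice": [0.9, 0.1, 0.2],
    "Bob":   [0.1, 0.8, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(embedding, threshold=0.9):
    """Return the best-matching known person, or None below the threshold."""
    name, score = max(
        ((n, cosine(embedding, v)) for n, v in KNOWN_FACES.items()),
        key=lambda t: t[1],
    )
    return name if score >= threshold else None

print(identify([0.88, 0.12, 0.18]))  # a vector close to Alice's
```

Once a face in a new photo is matched to a known person, the product layer can suggest sharing that photo with them, which is the part users actually see.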

At Google, the rethinking of products with artificial intelligence has gone to the system level.

Android O, the new version of Android due out later this year, already has a number of small machine learning-based features built in. One that surprised me is called Smart Text Selection. You'll probably agree that copying and pasting on a phone is an excruciating experience: it's hard to select exactly what you want with a finger on a few inches of screen, and if you miss you have to start all over again.

In Android O, double-tap on something you want to copy, such as a person's name, a proper noun, an address, or a phone number, and the system intelligently highlights exactly the span you meant. If it's an address, a pop-up menu offers to navigate there in Maps; if it's a phone number, you can place a call or send a text message.
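The flow can be thought of as two steps: classify the tapped text into an entity type, then map that type to a suggested action. Android O actually does this with on-device machine-learned models (exposed through its text-classification APIs); the regex rules below are a hypothetical stand-in, just to illustrate the shape of the feature.

```python
import re

# Hypothetical sketch of Smart Text Selection's classification step.
# Android's real implementation uses on-device ML models, not regexes.
RULES = [
    ("phone",   re.compile(r"^\+?[\d\s\-()]{7,}$"),        "Call or send a message"),
    ("email",   re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),  "Compose an email"),
    ("address", re.compile(r"\d+\s+\w+\s+(St|Ave|Rd|Blvd)\b"), "Navigate in Maps"),
]

def classify_selection(text: str) -> tuple[str, str]:
    """Map a tapped string to an (entity type, suggested action) pair."""
    for entity, pattern, action in RULES:
        if pattern.search(text):
            return entity, action
    return "text", "Copy"  # fall back to plain selection

print(classify_selection("+1 (650) 253-0000"))  # classified as a phone number
```

The point of the real feature is that both steps run on the device, so the selection snaps to a whole entity (the full phone number, the full address) and the menu changes to match.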