According to Bloomberg News on May 8, the White House will host an AI summit on May 10, inviting technology companies such as Google, Amazon, Facebook, Microsoft, Nvidia, Intel, and Tesla, along with US corporate giants such as financial services company Mastercard and pharmaceutical company Pfizer. Discussions will cover AI development, specific applications, and regulatory policy.

 

It is worth noting that the speed of AI development and the scope of its future influence have not only attracted the attention of governments around the world, including the United States, but have also become the main battlefield for a new wave of competition among global giants. Moreover, this AI war will for the first time span every industry, and no one will be immune.

 

Google’s developer conference, Google I/O 2018, which kicked off on Monday (U.S. time), covers everything from healthcare, media, home entertainment, mobile devices, and enterprise applications to transportation.

 

If one phrase could set the tone for Google’s next step, “from AI First to AI Only” would be the best answer.

 

But just before Google’s most important annual conference, self-driving safety was thrust back into the spotlight when a vehicle from Waymo, the self-driving car company spun off from Google, was hit by a car traveling in the wrong direction during a road test in Chandler, Arizona. Fortunately, Waymo was the victim rather than the perpetrator in the crash, so neither Google nor Waymo suffered much damage.

 

Google, which will soon mark its 20th anniversary, has grown into a giant over that period. It has established an unshakable position in Internet services, digital advertising, digital media and entertainment such as YouTube, cloud computing, and cutting-edge technologies such as autonomous driving, artificial intelligence, and quantum computing. The search site of the past has become the sprawling Alphabet empire.

 

Figure | Alphabet empire map (data compiled by DeepTech)

 

Google CEO Sundar Pichai opened his Google I/O keynote with a joke about last year’s burger emoji controversy: Google’s version placed the cheese at the bottom of the stack, sparking plenty of Internet discussion. Turning to the point, he said that technology should be more practical, accessible, and beneficial to society, which is why Google wants to bring AI to everyone. As an example, he cited Google’s work published last year on using machine learning to detect and prevent blindness caused by diabetes (diabetic retinopathy).

 

Figure | Google’s burger emoji meme

As an example of accessibility, he noted that TV subtitles tend to be poor when multiple people are talking at once. Google is trying to solve the problem with closed captioning and machine learning, for instance by identifying who is speaking and generating captions even when several people are talking, or even shouting, over one another.

 

Figure | Live demonstration of multi-person conversation recognition

In addition, for people with certain disabilities, Google demonstrated applying machine learning to Morse code input: its Gboard keyboard will support Morse code with predictive suggestions to help people with disabilities communicate.

Sundar Pichai pointed out that more than 5 billion photos are viewed in Google Photos every day. A new AI feature called Smart Actions recognizes who is in a photo and, beyond suggesting you share it with them, can also fix brightness, recognize documents, and convert them to PDF.

On the machine learning chip side, Google released TPU 3.0, the latest version of its Tensor Processing Unit. Not only is it eight times more powerful than last year’s version, but Google has also designed a liquid cooling system for it; the new architecture can run larger, more complex, and more accurate models and tackle harder problems.

Then Scott Huffman, vice president of Google Assistant, showed a viral video of a grandmother struggling with a Google Home smart speaker, pointing out that the user experience still has plenty of room for improvement. He then demonstrated several new features, including Multiple Actions, which strengthens the voice assistant’s ability to “talk” with humans naturally, back and forth.

Figure | A back-and-forth “dialogue” with Google Assistant

Google has also updated the voice models behind Google Assistant to make it sound more like a real person. Google Assistant is now deployed on more than 500 million devices worldwide, spanning 5,000 different device models and more than 40 car brands.

In addition to improving natural language processing, another area Google is focusing on is visual assistance. Lilian Rincon, a Google Assistant product management lead, gave an example: if you ask about a Starbucks coffee shop, the phone will display the cafe’s menu at the same time.

Figure | Lilian Rincon demonstrating visual assistance

Not only that, but Google Assistant can also make phone calls on your behalf to book hair appointments and restaurant reservations. The live demonstrations were remarkable: not only did Google Assistant sound like a real human voice, the dialogue also flowed very smoothly.

Figure | Sundar Pichai demonstrates Google Assistant booking a hair appointment by phone

Next, the spotlight turned to Android, introduced by Dave Burke, VP of Android engineering. “Android P is an important pillar of Google’s integration of mobile and artificial intelligence,” he said. “Smartphones should be smarter; they should learn from the user and adapt to you.” To that end, Google teamed up with DeepMind to build Adaptive Battery, which uses AI to manage battery life.

Figure | Android P

Google also released the new Android P developer preview today, the Android P Beta. Besides Google’s own Pixel phones, selected devices from seven other manufacturers can test the release: Nokia, Vivo, Oppo, Xiaomi, OnePlus, Sony, and Essential.

Cloud + AI + chips: head-to-head with Microsoft, Amazon, and Apple

 

As artificial intelligence has risen to prominence in recent years, all the technology giants have thrown themselves into it, and competition among them is growing ever fiercer. To develop chips, phones, and other hardware, Google has hired away a number of chip experts from Apple, including John Bruno, who created and led Apple’s silicon competitive analysis team.

Figure | John Bruno

In early April, John Giannandrea, Google’s head of AI and search, left the company; the next day, Apple CEO Tim Cook announced that Giannandrea had joined Apple. Jeff Dean, head of Google Brain, was promoted to lead Google AI, overseeing the company’s machine learning and AI strategy, a reorganization that separated Google’s search business from its AI division. And Ashwin Ram, formerly a senior manager of Amazon’s Alexa AI, joined Google in March as technical director of AI for Google’s cloud services.

 

Among the three tech giants, Google and Amazon share similar AI strategies: both began by applying the AI technologies they developed to optimize internal businesses, then expanded to the cloud and to end devices. Apple’s AI, by contrast, is currently focused on end devices.

 

Machine learning has long been an important technology supporting Google’s own businesses and products. “As an AI-first company,” Google CEO Sundar Pichai noted on a recent earnings call, “AI is in everything we do.” Accordingly, AI is now being researched across units within Google, and the related technologies have been applied to products including Search, cloud services, Google Assistant, Google Photos, Google Lens, and even AR.

 

On the cloud side, Google turns AI tools and trained models into cloud services. In 2016, Google unveiled the TPU, an application-specific chip designed for machine learning and originally intended to better support the TensorFlow framework. The next year it released a new generation, Cloud TPU, and this February the Cloud TPU beta officially launched, turning the TPU from an internal tool into a pay-as-you-go service.

 

Since TensorFlow is currently the most widely used deep learning framework, commercializing Cloud TPU should attract even more users to the service. Other companies see the threat: last August, Microsoft followed Google’s lead in deep learning chips by jointly launching Project Brainwave with Intel. Built on Altera FPGAs and equipped with Microsoft’s own deep neural network (DNN) software, the Project Brainwave acceleration platform emphasizes low latency and real-time processing of AI tasks.

 

At Microsoft’s Build conference, the company announced a preview of Azure machine learning hardware-accelerated models powered by Project Brainwave, its first step toward offering FPGA AI chip services externally, with image recognition acceleration as the first application. FPGA programming has historically been difficult, and Microsoft emphasized that Project Brainwave greatly reduces that complexity. Current customers include Jabil, a large contract manufacturer.

 

Microsoft’s Project Brainwave clearly aims to take on Google. AI cloud services in particular will be a crucial battlefield: from AI hardware and modules to application services for every scenario, AI is becoming “AI as a Service.” Enterprises no longer need to spend enormous amounts of money and time building hardware infrastructure and training models from scratch. As the world’s second-largest cloud services provider, Microsoft is unlikely to cede this opportunity to Google. Despite the first-mover advantage of the TensorFlow ecosystem and TPU hardware, Google will still face mounting pressure as rival solutions emerge.

 

On the device side, Google has launched TensorFlow Lite, a lightweight solution for mobile and embedded devices, to address the growing need for edge computing. Developers convert a trained TensorFlow model into the TensorFlow Lite file format with the TensorFlow Lite Converter, then deploy it inside Android and iOS applications, as sketched below.
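To make the deployment step concrete, here is a minimal Kotlin sketch of loading a converted model in an Android app through the TensorFlow Lite interpreter. The model path and the tensor shapes are hypothetical placeholders (they depend entirely on the model being shipped); the conversion itself happens beforehand in Google’s Python tooling.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

fun runOnDeviceInference() {
    // Load a model previously produced by the TensorFlow Lite Converter.
    // The path is a hypothetical placeholder; a real app would usually
    // bundle the .tflite file in its assets.
    val interpreter = Interpreter(File("/data/local/tmp/model.tflite"))

    // Shapes are hypothetical and must match the converted model:
    // here, one flattened 224x224 RGB image in, 10 class scores out.
    val input = Array(1) { FloatArray(224 * 224 * 3) }
    val output = Array(1) { FloatArray(10) }

    // Run one inference pass entirely on-device; no network round trip.
    interpreter.run(input, output)
    interpreter.close()
}
```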

 

In addition, Google puts its self-developed Pixel Visual Core chip in Pixel smartphones to greatly improve the camera, and Google Home, its smart speaker with a built-in voice assistant, relies heavily on machine-learning-trained voice and language models. According to Google’s latest figures, Google Assistant supports more than 5,000 smart home devices and can execute more than 1 million commands. Since hardware and chips are Apple’s strengths, competition between the two companies will only grow fiercer.

 

Targeting AI and hardware talent in China

 

At the end of last year, Google set up an AI China Center in Beijing, led by Fei-Fei Li, chief scientist of Google Cloud, and Jia Li, head of R&D for Google Cloud. The move was seen as a return to China after a seven-year absence. However, some industry sources told DeepTech that Google’s main purpose is not to bring its Internet services back to the Chinese market: “BAT’s services are already deeply rooted and widespread; Google knows it is too late. It’s not for the market, it’s for the talent.”

 

China now ranks among the world’s leading AI powers, and its AI strength is widely recognized internationally, making it a crucial stop in Google’s strategy. Beyond recruiting from the large pool of local AI talent, AI research also needs huge amounts of data. According to the China Internet Network Information Center (CNNIC), China has 751 million Internet users, and the public uses mobile Internet applications far more frequently than in Europe or the United States. Such enthusiastic usage habits generate more data, a vital resource for an AI-first company like Google.

 

Google has also been especially active in chips and hardware in recent years, with Taiwan and mainland China becoming key regions for gathering talent. It not only acquired part of HTC’s smartphone engineering team, but Nest, the smart home hardware company it previously acquired, has also opened a number of positions in Taiwan. More importantly, earlier this year Google opened a new office in Shenzhen, the hardware capital, rumored to reach as many as 300 people, likewise aimed at hardware development and supply chain talent.

 

Android P supports full-screen designs

 

Android is the most popular operating system for mobile devices in the world. It has been nearly ten years since Android 1.0 came out in September 2008. As the system has matured, Android updates are more focused on refining existing technologies.

Google first released the Android P Developer Preview (DP) in early March, with the final version expected in Q3 2018. The DP1 build of Android P brings major changes to the user interface (UI), most notably support for a “DisplayCutout” screen notch, the iPhone X-style sensor housing colloquially known as “bangs.”

 

Why did Apple put a sensor housing at the top of the screen? To create a full-screen phone, Apple removed the physical Home button, replaced fingerprint recognition with facial recognition, and pushed the display area out to the edges. But then where do the sensors and camera that support facial recognition go? The answer is the small black area in the middle of the top of the panel: what netizens call “bangs” and the industry calls the “notch.”

 

With new laser cutting processes, manufacturers can cut a full-screen phone panel into almost any shape around the sensor array; the notch houses multiple sensors and the front-facing camera while still allowing a full-screen design. Google is following the trend: Android P supports full screens with a special cutout, called “DisplayCutout,” and offers manufacturers and developers three design options, as the sketch below illustrates from the app developer’s side.
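As a rough sketch of what this means for app developers, the Kotlin code below uses the DisplayCutout APIs introduced in Android P (API 28) to let an Activity draw into the notch area while keeping content inside the safe insets. The Activity name and the padding decision at the end are hypothetical illustrations, not from the original article.

```kotlin
import android.app.Activity
import android.os.Build
import android.os.Bundle
import android.view.WindowManager

class FullScreenActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
            // Opt in: let the window extend into the cutout ("notch")
            // area along the short edge of the screen.
            val lp = window.attributes
            lp.layoutInDisplayCutoutMode =
                WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
            window.attributes = lp

            // Query the cutout geometry so critical UI stays clear of it.
            window.decorView.setOnApplyWindowInsetsListener { view, insets ->
                insets.displayCutout?.let { cutout ->
                    // Hypothetical choice: push content below the notch.
                    view.setPadding(0, cutout.safeInsetTop, 0, 0)
                }
                view.onApplyWindowInsets(insets)
            }
        }
    }
}
```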

In addition, Android P adds support for the IEEE 802.11mc Wi-Fi protocol, also known as Wi-Fi Round-Trip-Time (RTT), which enables indoor positioning; a short sketch of the ranging API follows this paragraph. Other changes include tighter integration with Google Assistant, a new notification display, and more detailed privacy settings.
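For a sense of how Wi-Fi RTT enables indoor positioning, here is a minimal Kotlin sketch of ranging against nearby 802.11mc-capable access points with Android P’s WifiRttManager. Permission handling and the Wi-Fi scan that produces `scanResults` are omitted, and the logging in the callback is a hypothetical illustration.

```kotlin
import android.content.Context
import android.net.wifi.ScanResult
import android.net.wifi.rtt.RangingRequest
import android.net.wifi.rtt.RangingResult
import android.net.wifi.rtt.RangingResultCallback
import android.net.wifi.rtt.WifiRttManager

// Requires API 28+ and the ACCESS_FINE_LOCATION permission (not shown).
fun rangeToAccessPoints(context: Context, scanResults: List<ScanResult>) {
    val rttManager =
        context.getSystemService(Context.WIFI_RTT_RANGING_SERVICE) as WifiRttManager

    // Keep only access points that advertise 802.11mc (RTT) support.
    val rttCapable = scanResults.filter { it.is80211mcResponder }

    val request = RangingRequest.Builder()
        .addAccessPoints(rttCapable)
        .build()

    rttManager.startRanging(request, context.mainExecutor,
        object : RangingResultCallback() {
            override fun onRangingResults(results: List<RangingResult>) {
                // Each successful result reports a distance in millimeters;
                // combining distances to several APs yields an indoor fix.
                results.filter { it.status == RangingResult.STATUS_SUCCESS }
                    .forEach { println("AP ${it.macAddress}: ${it.distanceMm} mm") }
            }

            override fun onRangingFailure(code: Int) {
                println("Ranging failed: $code")
            }
        })
}
```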

 

Figure | Testing DisplayCutout screen designs in the emulator

   

Can Google in its 20s hold true to its original vision?

 

After 20 years of steady growth, strong earnings, and leadership in many technologies, Google’s future looks bright.

 

But can the company, with its vast resources and technological power, stay true to its founding spirit, above all its famous “Don’t Be Evil” motto? When Facebook’s data scandal erupted, Google was widely named as the next company likely to step on a data protection landmine. Since 2013, Google has been repeatedly fined by the US government and especially by European governments for violating data protection laws. Even more alarming than privacy intrusions, though, is Google’s involvement in the Pentagon’s plans to develop AI weapons.

 

The reason is that the computer vision technology behind Project Maven is deployed in war zones such as Iraq and Syria: the Pentagon uses Google’s TensorFlow APIs to analyze images captured by drones. Moreover, Alphabet board member Eric Schmidt also sits on the Defense Innovation Board (DIB).

 

For these reasons, more than 3,100 Google employees signed an open letter demanding that Google stay out of the business of war, that Google and its ecosystem not develop warfare technology, and that the company withdraw from Project Maven, the US Department of Defense project. Speaking for himself and not for any company, Schmidt later said that “Silicon Valley has to work on providing AI services to the military,” and suggested that “companies agree on acceptable norms.”

 

“The widespread adoption of AI creates new opportunities, but such powerful tools also bring new questions and responsibilities as they are woven into the social fabric of the world,” Google co-founder Sergey Brin wrote in his annual founders’ letter.

Figure | Sergey Brin

He began with a famous quote from Charles Dickens: “It was the best of times, it was the worst of times.” The letter noted that Alphabet was thinking hard about issues including how AI could affect jobs, the challenges of developing fair and transparent algorithms, and concerns that the technology could be used to “manipulate humans.”

 

When a company becomes a behemoth, its every move is held to a higher standard, both by society and within the company itself. Given Google’s financial and technical strength, it will undoubtedly play a major role in the coming decade. These questions may not fit the festive atmosphere of Google I/O, and they may not even be discussed much there, but as Sergey Brin says: “I am optimistic about the use of technology, but we are walking a path that requires a great deal of responsibility, care and humility.” That is also what the outside world expects of Google in the years ahead.

-End-