Google, founded in 1998, is recognized as the world's largest search-engine company. In August 2015 the multinational technology giant announced a corporate restructuring, creating the umbrella holding company Alphabet; Google became an Alphabet subsidiary and completed its transformation into a high-tech enterprise spanning many fields.


This year, Google CEO Sundar Pichai announced that Google would shift from "Mobile First" to "AI First." In fact, Google has long made heavy use of AI (artificial intelligence) and machine learning (one of the core technologies of AI) in its core products, and it has kept acquiring AI companies and developing new AI products. What Google has achieved with AI so far gives a strong impression that Google is having fun!

DeepMind acquisition: a bargain

Why does it look as if Google is having such a good time? That may come down to its acquisition of DeepMind, an artificial intelligence company. DeepMind was founded in 2010 by Demis Hassabis, an artificial intelligence programmer and neuroscientist, and acquired by Google in 2014 for £400m (about $600m at the exchange rate at the time).

DeepMind's AlphaGo beat Fan Hui, the European Go champion, 5-0 in 2015.

In March, AlphaGo beat former world No. 1 Lee Sedol 4-1;

On November 7, Fan Hui, who has since joined the AlphaGo team, announced that AlphaGo will return to competition next year; whether it will face China's Ke Jie remains to be seen.

Earlier this month, it was announced that DeepMind is working with Blizzard to develop tools that will allow third parties to teach AI to play the real-time strategy game StarCraft II.

DeepMind, it seems, has been busy teaching AI to play games, and doing it exceptionally well. Of course, DeepMind is a company Google acquired rather than Google headquarters itself, but Google paid £400m for it, and its research can hardly be independent of headquarters. How could DeepMind spend its time this way if headquarters had not agreed? Ostensibly DeepMind is playing games; at heart, this is a distinctive way of developing technology.

At present, AI is still at the stage of weak (narrow) artificial intelligence, and many hard problems remain to be solved before it sees deeper and wider practical application. It is an area of immense potential, but also a challenging one that requires armies of brilliant minds to tackle. Demis Hassabis told The Verge in an interview this year that DeepMind's goal is not just to win games, but to have fun and be inspired. Games are another kind of test-bed: a platform on which to write algorithms and test them. Eventually, the hope is to apply the technology to real-world problems.

Hassabis wants to create a "general intelligence" that, like a human, can learn to perform any task. AlphaGo, currently the best Go player in the world, has many hurdles to clear before it reaches that level; above all, it must master the messy real world. So the DeepMind team turned to StarCraft. The game is an interesting testing environment for current AI research and provides a useful bridge to the real world. In other words, DeepMind sees the game as an important transition for AI on its way to the real world, because StarCraft's complexity is closer to real-world problems than chess or Go, and such composite systems remain a serious challenge even for the most powerful computers. The insights gained from StarCraft could enable AI to solve problems faster and more efficiently than before. There is another difference: chess and Go are turn-based, whereas StarCraft is played in real time. That is why we now see Google's AI playing StarCraft II.
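To make the "games as test-bed" idea concrete, here is a minimal sketch of the agent-environment loop that reinforcement learning research revolves around. It uses the OpenAI Gym toolkit and its simple CartPole task purely as a stand-in (DeepMind's own StarCraft II tooling was not yet public at the time of writing), and the random "policy" is illustrative only, not anyone's actual agent.

```python
# A toy reinforcement-learning loop: observe, act, receive a reward, repeat.
# The environment plays the role StarCraft II would play for DeepMind.
import gym

env = gym.make("CartPole-v0")        # simple stand-in for a game environment
observation = env.reset()            # start a new "game"

total_reward = 0.0
for step in range(200):
    action = env.action_space.sample()              # random player for now
    observation, reward, done, info = env.step(action)
    total_reward += reward                          # the score to optimise
    if done:                                        # the game is over
        break

print("episode return:", total_reward)
```

A real agent would replace the random action with a learned policy and use the stream of rewards to improve it, which is exactly what makes games such convenient, fast-feedback laboratories.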

For DeepMind, Google's support is essential: without Google there would be no DeepMind as we know it today, and the relationship is a win-win. DeepMind has grown rapidly since the acquisition, adding dozens of researchers and, in October 2014, acquiring Dark Blue Labs and Vision Factory, two companies founded by Oxford University professors. Google has given DeepMind considerable autonomy so that its research can progress at full speed; DeepMind, for its part, needs large amounts of hardware to train AlphaGo and to develop the different versions of it that run in the cloud. Without the resources Google provides, DeepMind could not make progress this quickly.

Google's support for DeepMind, one of its most powerful AI arms, comes as the company tries to solve all of its other problems by first solving AI. DeepMind contributes to many Google products as well as pursuing its own independent research. Hassabis says DeepMind interacts weekly with Google Brain, an AI project launched in the Google X lab in 2011. DeepMind focuses on reinforcement learning, while Google Brain concentrates more on deep learning.

To justify calling DeepMind a bargain, it is worth mentioning some of its other achievements. It is indeed a technologically formidable AI company:

In July, Hassabis said DeepMind's AI systems were helping Google save huge sums on its electricity bills. The system cuts power consumption by controlling parts of the data center, operating equipment such as servers and cooling systems. Google also says the technology improves power usage effectiveness (PUE) by 15%. Google consumed 4,402,836 megawatt-hours of electricity in 2014, much of it in data centers. According to the U.S. Energy Information Administration, businesses pay roughly $25 to $40 per megawatt-hour; by either figure, cutting data-center power consumption by 10% could eventually save Google hundreds of millions of dollars in electricity (so the $600 million Google spent acquiring DeepMind may well pay for itself). "The AI system controls 120 variables in the data center, including fans, cooling systems and windows," Hassabis said. DeepMind may in future ask Google to add sensors to its data centers to gather richer information and save even more power.
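For a rough sense of scale, here is a back-of-the-envelope calculation using only the figures quoted above: the 2014 consumption of 4,402,836 MWh, the EIA price range of $25 to $40 per MWh, and an assumed 10% reduction. Google's actual rates and the share of consumption the system controls are not public, so treat the output as an order-of-magnitude illustration only.

```python
# Order-of-magnitude estimate of Google's electricity bill and potential savings,
# based solely on the figures cited in the article.
annual_mwh = 4_402_836               # Google's reported 2014 consumption
price_low, price_high = 25, 40       # EIA price range, dollars per MWh
reduction = 0.10                     # assumed cut in consumption

for price in (price_low, price_high):
    annual_cost = annual_mwh * price
    savings = annual_cost * reduction
    print(f"at ${price}/MWh: bill is roughly ${annual_cost/1e6:.0f}M/yr, "
          f"savings at 10% roughly ${savings/1e6:.0f}M/yr")
```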

In healthcare, DeepMind recently signed a deal with Moorfields Eye Hospital in the UK to use AI to study the case histories of 1.6 million patients at London hospitals. The goal is to teach a computer program to recognize two common eye diseases, diabetic retinopathy and age-related macular degeneration. Ophthalmologists, who diagnose these diseases by analyzing medical images and asking patients about their condition, still get it wrong roughly 10 to 20 percent of the time. AI can scan millions of documents and records, learn from them, and make faster and more accurate diagnoses. This is DeepMind's second collaboration with the NHS; the previous project monitored patients' kidney function via a smartphone app at the Royal Free Hospital in London.

In September, media reports said DeepMind had made a huge leap in speech generation: its WaveNet system halves the gap between machine-generated speech and human voice quality. The advance brings forward the day when machines can sound naturally human.

In a paper published online in Nature in October, the DeepMind team proposed a hybrid computing system that combines the strengths of neural networks and digital computers. Called a differentiable neural computer (DNC), it couples a neural network with an external memory that it can read from and write to, overcoming neural networks' inability to store data over long periods. Herbert Jaeger, a professor at Jacobs University Bremen in Germany, believes the DNC's expanded memory could extend deep learning to big-data applications.
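To give a flavour of what "a neural network with an external memory that can be read and written" means, here is a deliberately simplified sketch of content-based memory reading, one of the DNC's core mechanisms. This is an illustration in NumPy under my own simplifying assumptions, not DeepMind's implementation: the controller network is reduced to a single query vector, and only the read path is shown.

```python
# Simplified content-based read from an external memory matrix.
# A query ("key") is compared against every memory row; a softmax over the
# similarities gives differentiable read weights, and the read vector is the
# weighted average of the rows. Writing works analogously in the full DNC.
import numpy as np

def cosine_similarity(memory, key, eps=1e-8):
    # memory: (slots, width), key: (width,)
    dots = memory @ key
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps
    return dots / norms

def content_read(memory, key, sharpness=10.0):
    scores = sharpness * cosine_similarity(memory, key)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # soft, differentiable "address"
    return weights @ memory, weights

memory = np.random.randn(16, 8)                    # 16 slots of width 8
key = memory[3] + 0.05 * np.random.randn(8)        # noisy query for slot 3
read_vector, weights = content_read(memory, key)
print("most strongly addressed slot:", int(weights.argmax()))   # usually 3
```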

Google’s powerful creativity has injected vitality into AI

In fact, alongside DeepMind's strides, Google's famously creative headquarters is also pushing AI development ahead at full speed. Let's take a look at what Google has done so far (I have only sketched the main points; covering them all would take a much longer article):

Open-source deep learning system: Google set up its AI effort in 2011, and in November 2015 it open-sourced its second-generation deep learning system, TensorFlow. Built on DistBelief, the deep learning infrastructure Google developed in 2011, TensorFlow is faster, smarter and more flexible than its predecessor and can be applied to everything from speech recognition to photo recognition. In addition, code written for TensorFlow can run on a variety of heterogeneous systems with little or no change. By open-sourcing it, Google lets any engineer help modify and improve the technology. TensorFlow has now marked its first year as an open-source project, and the Google blog summarizes the results: more than 480 people have contributed directly to the system, which is now the most popular machine learning project on GitHub. TensorFlow has been improved, brought to iOS and the Raspberry Pi, integrated with big-data architectures, and given bindings for languages such as Go, Rust and Haskell. The system has also driven significant improvements in Google Translate, among other products.
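As a concrete illustration of what "runs on a variety of heterogeneous systems with little change" looks like in practice, here is a minimal TensorFlow program in the 1.x graph-and-session style of that period. It is a generic sketch rather than anything from Google's own codebase: you first describe a computation graph, then hand it to a session, and the same graph can be placed on a CPU, GPU or other device.

```python
# Minimal TensorFlow (1.x-style) example: define a graph, then run it.
import tensorflow as tf

x = tf.placeholder(tf.float32, name="x")   # input fed in at run time
W = tf.Variable(3.0, name="W")             # learnable parameters
b = tf.Variable(-1.0, name="b")
y = W * x + b                              # the computation graph: y = W*x + b

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: 2.0}))  # prints 5.0
```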

Infuse AI into products and services: There are already many machine learning features in Google Search, Google Now, Gmail and the open-source Android mobile operating system. For example, deep learning is used to improve the search engine, recognize voice commands on Android phones, and identify images on the Google+ social network, and convolutional neural networks power the speech recognition system on Android phones.

Developing its own AI chip: To improve the performance of its AI software, Google is also developing and using a new microprocessor, the TPU (Tensor Processing Unit), which it says gives it a roughly seven-year lead over the processors commonly used for machine learning today and can compute 10 times faster. Google has also said it is likely to develop more dedicated processors for specific AI tasks.

Developing quantum computers: Google is trying to build quantum computers that are far more powerful than conventional machines, though it has not said exactly how. According to media reports, Google bought a D-Wave computer in 2013 and in 2014 hired John Martinis of the University of California, Santa Barbara, to design superconducting qubits; Google's Quantum Artificial Intelligence Laboratory has also announced that its D-Wave 2X runs 100 million times faster than simulators running on conventional chips. According to New Scientist on August 31, Google's quantum computer could appear sooner than expected, perhaps by the end of next year. (A quantum computer is a physical device that follows the laws of quantum mechanics to perform high-speed mathematical and logical operations and to store and process quantum information. It is not only extremely fast but can also tackle far more complex problems than ordinary computers; its way of finding solutions is in many respects similar to humans', allowing it to take on tasks that previously only humans could handle.)

Release of an intelligent assistant: At Google I/O, the company's annual developer conference in May, Google announced Google Assistant, an intelligent voice assistant that combines speech recognition, artificial intelligence and natural language processing and will be opened to third-party developers across Google's hardware and software products. The assistant will go head-to-head with Amazon's Alexa, Apple's Siri and Microsoft's Cortana.

Smart speaker launch: Google I/O also announced a physical home for Google Assistant, the Google Home speaker, which will sit at the heart of Google's smart home. People can talk to it to play music, check the weather, plan their day and control other smart devices in the home. Unlike Amazon's Echo, the speaker will draw on Google's database to understand users' needs.

Driverless car: The Google X lab is developing the Google driverless car, which can start, drive and stop automatically. On May 8, 2012, the Nevada Department of Motor Vehicles issued a red license plate to Google's self-driving car, three months after the state allowed driverless cars on the road. On September 23, 2016, a Google self-driving car collided with a commercial truck in Mountain View.

It is not hard to see that Google is indeed transforming itself comprehensively in the direction of "AI First." Not only has AI technology been injected into its core business, but its other businesses are also being expanded around, or built on top of, AI.

Throughout this transition, Google's philosophy has not changed: it still believes that technological superiority is business superiority, and that "if we do it well, there will be business opportunities." That is why, while many other companies are rushing AI-branded products to market, Google stays more focused on the technology itself. I do believe that once Google has a head start in AI technology, it will not be too hard to turn it into real products. Google's leadership has also realized that "the best software works best when it meets the best hardware."

However, developing technology requires long-term, abundant financial, human and material resources. What matters most is whether Google can withstand pressure from all sides and see this road through to the end. At the same time, in product application and design it still has things to learn from its competitors: in areas such as security, privacy protection, product design and promotion, the other giants have strengths of their own.

Finally, I would like to mention Google's culture. I believe Google's unique culture fosters curiosity and creativity, and has thus become an important driving force behind its technology. Google's culture, in short, puts people first. Specifically, Google cares deeply about its employees, believing they are the source of the company's creativity; if a company hits a wall, it is because it cannot hire the brightest and most capable people fast enough. Google therefore believes it must look after its employees so that they stay and serve the company for the long term. It has always offered generous pay and benefits and paid attention to employees' wellbeing. Projects are usually carried out in small groups, with flexible working styles and hours. Google provides employees with excellent dining facilities, a gym, laundry, massage rooms, a barber shop, shuttle buses and more; almost everything a hard-working engineer needs can be found at Google headquarters. Google engineers can also spend 20 percent of their working time on projects of their own choosing, with the aim of encouraging every creative employee to develop their ideas.


To close, here is Google co-founder Larry Page: "I've seen a lot of companies, and why I think they haven't stood the test of time... often it's simply because they've missed the future. I was just focused on that, thinking: where is the future really going, how do we create that future, how do we get everybody in the company to really focus on it and act on it now? I believe only curiosity can do that: people willing to do things that haven't been done before, to study things that haven't been studied before, to take risks. That is the real value of the company." (The remarks are excerpted from an interview Page gave at TED two years ago.)

Next time, let's keep an eye on how another tech giant is making headway in artificial intelligence. Your comments, guidance and corrections are welcome.