By Sundar Pichai, CEO of Google and Alphabet

It’s great to be back at our I/O developer conference this year. Driving onto our Mountain View campus this morning, I felt a sense of normalcy I hadn’t felt in a long time. Of course, it isn’t the same without our developer community here in person. Over the past year, COVID-19 has had a profound impact on communities around the world and continues to take a toll. Countries such as Brazil and my home country, India, are now going through their most difficult moments of the pandemic yet. Our thoughts are with everyone who has been affected, and we hope for better days ahead.

The past year has put a lot into perspective. For Google, it has also given renewed meaning to our mission of organizing the world’s information and making it universally accessible and useful. We continue to pursue that mission with a single focus: building a more helpful Google for everyone. That means being there for people in the moments that matter, and giving everyone the tools to grow their knowledge, succeed, stay healthy and be happier.

Helping in moments that matter

Sometimes that means helping in big moments, like this past year, when Google Classroom helped 150 million students and educators stay connected and keep learning online. Other times it means helping in the small moments that add up to big differences for everyone. For example, we’re adding safer routing to Maps: an AI-powered capability that identifies road, weather and traffic conditions where you’re likely to brake suddenly, with the goal of reducing such events by up to 100 million a year.
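To make the idea concrete, here is a minimal, purely illustrative sketch of ranking candidate routes by predicted hard-braking risk. The route names, segments and risk scores are invented for this example; this is not how Maps is actually implemented.

```python
# Hypothetical sketch: pick the route with the fewest predicted hard-braking
# events. All names and risk values below are made up for illustration.

def predicted_hard_braking_events(route):
    """Sum each segment's predicted probability of a hard-braking event."""
    return sum(seg["hard_braking_risk"] for seg in route["segments"])

def pick_safer_route(routes):
    """Return the candidate route with the lowest total predicted risk."""
    return min(routes, key=predicted_hard_braking_events)

candidate_routes = [
    {"name": "Highway 101", "segments": [{"hard_braking_risk": 0.04},
                                         {"hard_braking_risk": 0.12}]},
    {"name": "El Camino Real", "segments": [{"hard_braking_risk": 0.03},
                                            {"hard_braking_risk": 0.05}]},
]

print(pick_safer_route(candidate_routes)["name"])  # -> "El Camino Real"
```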

Reimagine the future of work

One of the most significant ways we can help is by reimagining the future of work. Over the past year we’ve seen work transform in ways we’ve never seen before, as offices and coworkers were replaced by kitchen countertops and pets. Many companies, including Google, will continue to offer flexibility even once it’s safe to be in the same office again. Collaboration tools have never been more critical, and today we’re announcing a new Smart Canvas experience in Google Workspace that enables richer collaboration.

Smart Canvas and Google Meet

The next generation of responsible artificial intelligence

Over the past 22 years we’ve made remarkable progress thanks to advances in some of the most challenging areas of AI, including translation, images and speech. These advances have powered improvements across Google products, making it possible to have a conversation with someone speaking a different language using Google Assistant’s interpreter mode, relive cherished memories in Google Photos, or use Google Lens to solve a tricky math problem.

We’ve also improved the core Search experience for billions of people with the help of AI, taking advantage of huge leaps in computers’ ability to process natural language. Still, there are moments when computers simply don’t understand us. That’s because language is endlessly complex: we use it to tell stories, crack jokes and share ideas, weaving together concepts we’ve learned over the course of our lives. The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges.

Today, I’m pleased to share our latest work in natural language understanding: LaMDA. LaMDA is a language model for dialogue applications. It’s open domain, which means it’s designed to converse on any topic. For example, LaMDA understands quite a lot about Pluto, so if a student wants to learn more about space, they can ask it questions about Pluto and get sensible answers, making learning more interesting and engaging. If the student then wants to switch to a different topic, say, how to make a paper airplane, LaMDA can carry the conversation forward without any retraining.
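The key property is that a single model carries a conversation across arbitrary topics without retraining. Below is a minimal sketch of such a dialogue loop; `generate_reply` is a hypothetical placeholder standing in for a conversational model like LaMDA, not its real interface.

```python
# Minimal sketch of an open-domain dialogue loop. `generate_reply` is a
# hypothetical stand-in for a conversational language model, not a real API.

def generate_reply(history: list[str]) -> str:
    """Pretend model call: would condition on the full conversation history."""
    return "(model reply conditioned on: " + " | ".join(history[-3:]) + ")"

history = []
for user_turn in [
    "Tell me something interesting about Pluto.",
    "Why is it called a dwarf planet?",
    "New topic: how do I fold a good paper airplane?",  # topic switch, no retraining
]:
    history.append("User: " + user_turn)
    reply = generate_reply(history)  # the same model handles every topic
    history.append("Model: " + reply)
    print(reply)
```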

This is one of the reasons we believe LaMDA can fundamentally make information and computing more accessible and easier to use.

We’ve been developing language models for years. We’re committed to ensuring that LaMDA meets our extremely high standards for fairness, accuracy, safety and privacy, and that it’s developed consistently with our AI Principles. We look forward to adding dialogue capabilities to products such as Google Assistant, Search and Workspace, and to exploring how we can open up LaMDA’s capabilities to developers and enterprise customers.

LaMDA is a huge step forward in natural conversation, but it’s still trained only on text. When people communicate with one another, they do so across images, text, audio and video. That’s why we need to build multimodal models (MUM) that let people naturally ask questions across different types of information. With multimodal models, you might one day plan a road trip by asking Google to “find a route with beautiful mountain views.” This is one example of how we’re moving toward more natural and intuitive ways of interacting with Search.

Pushing the frontiers of computing

Advances in translation, image recognition and speech recognition laid the foundation for complex models such as LaMDA and multimodal models. Our compute infrastructure is how we drive and sustain these advances, and TPUs, our custom machine learning processors, are a big part of that. Today we’re announcing our next generation of TPUs: TPU v4. Powered by the v4 chip, it’s more than twice as fast as the previous generation. A single TPU v4 pod delivers more than an exaflop of computing power, the equivalent of roughly 10 million laptops combined. This is the fastest system we’ve ever deployed, and a historic milestone for us; previously, reaching an exaflop required building a custom supercomputer. We’ll soon have many TPU v4 pods deployed in our data centers, many of which will operate at or near 90% carbon-free energy. TPU v4 will be available to Google Cloud customers later this year.
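As a rough sanity check on the “10 million laptops” comparison, assume a typical laptop sustains on the order of 100 GFLOPS; that per-laptop figure is our assumption, not a number from the announcement.

```python
# Back-of-the-envelope check of the "10 million laptops" comparison.
# The laptop figure is an assumption (~100 GFLOPS per laptop).

pod_flops = 1e18       # > 1 exaflop per TPU v4 pod, per the announcement
laptop_flops = 100e9   # assumed sustained throughput of a typical laptop

print(pod_flops / laptop_flops)  # -> 10000000.0 (ten million laptops)
```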

(Left) TPU v4 chip tray; (right) a TPU v4 pod in our Oklahoma data center

It’s particularly exciting to see this pace of innovation. Looking further ahead, there are types of problems that traditional computing will not be able to solve in a reasonable amount of time. Quantum computing can help. Achieving our quantum milestone was a tremendous accomplishment, but it’s only the beginning. We’ll continue working toward our next milestone in quantum computing: building an error-corrected quantum computer, which could help us improve battery efficiency, create more sustainable energy and advance drug discovery. To that end, we’ve opened a new state-of-the-art Quantum AI campus, which includes our first quantum data center and our quantum processor chip fabrication facility.

Our new Quantum AI campus

Keeping users safer

At Google, we know our products can only be as helpful as they are safe, and advances in computer science and AI allow us to keep improving on both. We keep more users safe by blocking malware, phishing, spam and potential cyber attacks than anyone else in the world.

Our focus on data minimization pushes us to do more with less data. Two years ago at I/O, I announced auto-delete, which encourages users to have their activity data automatically and continuously deleted. Since then, we’ve made auto-delete the default for all new Google accounts. Activity data older than 18 months is now deleted automatically, and users can always choose a shorter timeframe. Auto-delete is now active for more than 2 billion Google accounts.
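As a sketch of what an 18-month retention policy means in practice (purely illustrative, not Google’s implementation):

```python
# Illustrative 18-month auto-delete policy: records older than the retention
# window are dropped. Record contents and dates are invented.

from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=18 * 30)  # ~18 months; a shorter window could be chosen

def apply_auto_delete(records, now=None):
    """Keep only activity records newer than the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["timestamp"] <= RETENTION]

activity = [
    {"query": "paper airplane designs", "timestamp": datetime(2021, 4, 1, tzinfo=timezone.utc)},
    {"query": "old search",             "timestamp": datetime(2019, 1, 1, tzinfo=timezone.utc)},
]
# Only the recent record survives; the 2019 entry is past the window.
print(apply_auto_delete(activity, now=datetime(2021, 5, 18, tzinfo=timezone.utc)))
```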

All of our products are guided by three important principles. With one of the world’s most advanced security infrastructures, our products are secure by default. We collect and use data strictly and responsibly, so every product we build is private by design. And we create easy-to-use privacy and security settings so that users stay in control.

Long-term research: Project Starline

Thanks to videoconferencing, we’ve been able to stay in touch with family and friends over the past year and continue our studies and work. But there is no substitute for face-to-face contact.

A few years ago we started Project Starline to explore what more is possible with technology here. It uses high-resolution cameras and custom depth sensors to capture a person’s shape and appearance from multiple angles, then fuses them into an extremely detailed, real-time 3D model. The resulting data runs to thousands of megabits per second, so to send imagery of this size over existing networks we developed new compression and streaming algorithms that reduce the data by a factor of more than 100. We also developed a breakthrough light field display that shows a lifelike representation of the person on the other side of the screen. The technology is complex, but it fades into the background, so people can focus on what matters most.
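The bandwidth arithmetic helps explain why the compression work matters. Taking an assumed raw rate within “thousands of megabits per second” and the stated reduction factor of more than 100:

```python
# Rough arithmetic behind the compression claim. The raw rate is an assumed
# example value; the announcement gives only the >100x reduction factor.

raw_mbps = 3000           # assumed raw capture rate, in megabits per second
compression_factor = 100  # "a factor of more than 100"

compressed_mbps = raw_mbps / compression_factor
print(compressed_mbps)    # -> 30.0 Mbps
```

At roughly 30 Mbps, the compressed stream would fit comfortably within an ordinary broadband connection.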

We’ve spent thousands of hours testing the technology in our own offices, and the results are promising; our key enterprise partners are excited about it too. We’re also working with partners in healthcare and media to gather early feedback. In pushing the boundaries of remote collaboration, we’ve made technical advances that will improve our entire suite of communications products. We look forward to sharing more in the months ahead.

A person is talking to someone through Project Starline

Solving complex sustainability challenges

Another area of research is our work to drive forward solutions to sustainability challenges. Sustainability has been a core value for us for more than 20 years. In 2007 we became the first major company to go carbon neutral. In 2017 we began matching 100% of our operations’ energy use with renewable energy, and we have continued to do so every year since. Last year we eliminated our entire carbon legacy.

Our next goal is even more ambitious: to operate on carbon-free energy by 2030. This represents a significant shift from today’s approach and is a challenge on the scale of quantum computing. It also presents an enormously difficult problem to solve: sourcing carbon-free energy, around the clock, everywhere we operate.

Last year we launched the first carbon-intelligent computing platform, and we will soon be the first company to implement carbon-intelligent load shifting across both time and place within our data center network. By this time next year, we will be shifting more than a third of our non-production compute to times and places with greater availability of carbon-free energy. We are also applying Cloud AI together with advanced drilling and fiber-optic sensing techniques to deliver geothermal power in more places, starting with our Nevada data centers next year.
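Carbon-intelligent load shifting is essentially a scheduling problem: flexible jobs move to the hour and site where carbon-free energy is most abundant. The sketch below illustrates the idea with invented forecast numbers; it is not Google’s scheduler.

```python
# Hypothetical sketch of carbon-intelligent load shifting: flexible
# (non-production) jobs run at the site and hour with the highest forecast
# share of carbon-free energy (CFE). Forecast values are invented.

def pick_greenest_slot(cfe_forecast):
    """cfe_forecast maps (site, hour) -> forecast carbon-free energy fraction."""
    return max(cfe_forecast, key=cfe_forecast.get)

forecast = {
    ("oklahoma", "02:00"): 0.91,
    ("oklahoma", "14:00"): 0.78,
    ("finland",  "02:00"): 0.66,
    ("finland",  "14:00"): 0.94,
}

site, hour = pick_greenest_slot(forecast)
print(f"Run flexible batch jobs at {site} around {hour}")  # -> finland, 14:00
```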

Investments like these are needed to reach 24/7 carbon-free operations, and that work is happening right here in Mountain View, California, too. We are building our new campus to the highest sustainability standards. When complete, the buildings will feature a first-of-its-kind dragonscale solar skin with 90,000 silver solar panels capable of generating nearly 7 megawatts. They will also house the largest geothermal pile system in North America, designed to heat the buildings in winter and cool them in summer. It’s been amazing to see it come to life.

(Left) A rendering of the new Charleston East campus in Mountain View, California; (right) a model of the dragonscale solar skin

A celebration of technology

I/O is not just a technology event; it’s a celebration of the people who use and build technology, including the millions of developers around the world joining us online today. Over the past year, we’ve seen people use technology in remarkable ways: to stay healthy and safe, to keep learning and growing, to connect with one another, and to help each other through difficult times. Seeing this has been inspiring, and it makes us more committed than ever to being helpful.

I look forward to seeing you in person at next year’s I/O. Until then, I wish you all health and safety.

If you’d like to review the Google I/O 2021 keynote, check out the video below:

https://www.qq.com/video/r003…