Sundar Pichai, CEO of Google and Alphabet

We are delighted to be back with our I/O developer conference this year. Driving onto the Mountain View campus this morning, I felt a sense of normalcy I hadn't felt in a long time. Of course, it isn't quite the same without our developer community here in person. The COVID-19 pandemic has had a profound impact on communities around the world over the past year and continues to take its toll. Countries like Brazil, and India, where I come from, are going through their most difficult moments of the pandemic. Our thoughts are with everyone affected by COVID-19, and we hope for better days ahead.

The past year has put a lot into perspective. For Google, it has also given renewed meaning to our mission to organize the world's information and make it universally accessible and useful. We continue to pursue that mission with a singular goal: building a more helpful Google for everyone. That means being helpful in the moments that matter, and giving everyone the tools to increase their knowledge, success, health and happiness.

Helping in moments that matter

Sometimes that means helping in big moments, like over the past year, when Google Classroom helped 150 million students and educators keep learning online. At other times it means helping in small moments that add up to big changes for everyone. For example, we're adding safer routing to Maps, an AI-powered capability that identifies road, weather and traffic conditions where you are likely to brake suddenly; our goal is to reduce up to 100 million of these hard-braking events each year.

Reimagine the future of work

One of the most important ways we can help is by reimagining the future of work. Over the past year, we've seen work transform in unprecedented ways, as offices and coworkers were replaced by kitchen countertops and pets. Many companies, including Google, will continue to embrace flexible work even when it's safe to share the same office again. Collaboration tools have never been more critical, and today we're announcing a new Smart Canvas experience in Google Workspace that enables people to collaborate in richer ways.

Smart Canvas and its integration with Google Meet

Responsible next-generation AI

Over the past 22 years, we've made remarkable progress thanks to advances in some of the most challenging areas of AI, including translation, images and speech. These advances power improvements across Google products, making it possible to converse with someone in another language using Google Assistant's interpreter mode, relive cherished memories in Google Photos, or solve math problems with Google Lens.

Leveraging giant leaps in computers' ability to process natural language, we've also used AI to improve the core Search experience for billions of people. Still, there are times when computers simply don't understand us. That's because language is endlessly complex: we use it to tell stories, crack jokes and share ideas, weaving in concepts we've learned over the course of our lives. The richness and flexibility of language make it one of humanity's greatest tools and one of computer science's greatest challenges.

Today, I am excited to share our latest research in natural language understanding: LaMDA. LaMDA is a language model for dialogue applications. It's open domain, which means it is designed to converse on any topic. For example, LaMDA already understands quite a lot about Pluto, so if a student wanted to learn more about space, they could ask the model questions about Pluto and it would give sensible responses, making learning more fun and engaging. If that student then wanted to switch to a different topic, say how to make a paper airplane, LaMDA could continue the conversation without any retraining.

This is one of the reasons we believe LaMDA can make information and computing radically more accessible and easier to use.

We’ve been developing language models for years. We are committed to ensuring that LaMDA meets our very high standards for fairness, accuracy, security and privacy, while adhering to our AI principles. We look forward to adding conversational capabilities to products like Google Assistant, Search, and Workspace, and exploring how to open up the capabilities of the LaMDA model to developers and enterprise customers.

LaMDA is a huge step forward in natural conversation, but it is still trained only on text. When people communicate with each other, they do so across images, text, audio and video. So we need to build multimodal models like MUM (Multitask Unified Model) that let people naturally ask questions across different types of information. With multimodal models, one day you could plan a road trip by asking Google to "find a route with beautiful mountain views." This is one example of how we're moving toward more natural and intuitive ways of interacting with Search.

Pushing the frontiers of computing

Translation, image recognition and speech recognition lay the foundation for complex models like LaMDA and multimodal models. Our compute infrastructure is how we drive and sustain these advances, and TPUs, our custom-built machine learning processors, are a big part of that. Today we announced our next generation: TPU v4. Powered by the v4 chip, it is more than twice as fast as the previous generation. A single TPU v4 pod delivers more than one exaflop of computing power, the equivalent of roughly 10 million laptops combined. This is the fastest system we have ever deployed, and a historic milestone for us. Previously, getting to an exaflop required building a custom supercomputer. We will soon have many TPU v4 pods in our data centers, most of which will operate at or near 90% carbon-free energy. TPU v4 will be available to Google Cloud customers later this year.
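The "10 million laptops" comparison can be sanity-checked with quick arithmetic. This is a back-of-the-envelope sketch, and the per-laptop throughput figure is my assumption for illustration, not a number from the keynote:

```python
# One exaflop = 10^18 floating-point operations per second.
EXAFLOP = 1e18

# Assumption (not from the keynote): a typical laptop sustains
# roughly 100 GFLOP/s of floating-point throughput.
LAPTOP_FLOPS = 100e9

# How many such laptops would it take to match one exaflop?
laptops_equivalent = EXAFLOP / LAPTOP_FLOPS
print(f"{laptops_equivalent:,.0f} laptops")  # 10,000,000 laptops
```

Under that assumption, the arithmetic lands exactly on the 10 million figure cited in the keynote.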

Left: a TPU v4 chip tray. Right: TPU v4 pods in our Oklahoma data center.

The pace of innovation here is particularly exciting. Looking ahead, there are some kinds of problems that classical computing will not be able to solve in a reasonable amount of time. Quantum computing can help. Achieving our quantum milestone was a huge accomplishment, but it is just the beginning. We will keep working toward the next big milestone in quantum computing: building an error-corrected quantum computer, which could help us increase battery efficiency, create more sustainable energy and improve drug discovery. To that end, we have opened a state-of-the-art Quantum AI campus, which includes our first quantum data center and quantum processor chip fabrication facilities.

Our new Quantum AI campus

Keeping users safer

At Google, we know our products can only be helpful if they keep users safe. Advances in computer science and AI allow us to keep improving on that front. We keep more users safe by blocking malware, phishing attempts, spam messages and potential cyberattacks than anyone else in the world.

Our focus on data minimization pushes us to do more with less. Two years ago at I/O, I announced Auto-Delete, which gives users the option to have their activity data deleted automatically and continuously. Since then, we have made it the default for all new Google accounts: activity records older than 18 months are deleted automatically, and users can choose even shorter retention periods. It is now available for more than 2 billion Google accounts.

All of our products are guided by three important principles. With one of the world's most advanced security infrastructures, our products are secure by default. We strictly uphold responsible data practices, so every product we build is private by design. And we create easy-to-use privacy and security settings so users are in control.

Long-term research: Project Starline

Thanks to video conferencing, we have been able to stay in touch with family and friends, and to keep studying and working, over the past year. But there is no substitute for being together in person.

A few years ago we kicked off a project, Project Starline, to explore what more technology could make possible. Using high-resolution cameras and custom depth sensors, it captures a person's shape and appearance from multiple perspectives, then fuses them into an extremely detailed, real-time 3D model. The resulting data amounts to several gigabits per second, so to send imagery of this size over existing networks we developed novel compression and streaming algorithms that reduce the data by a factor of more than 100. We have also developed a breakthrough light-field display that shows a life-like, three-dimensional image of the person you are talking to. As complex as the technology is, it fades into the background, so people can focus on what matters most.
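Some quick arithmetic shows why that compression factor matters. The raw-rate figure below is an assumption for illustration (the keynote only says gigabits per second and a greater-than-100x reduction):

```python
# Assumed raw capture rate for the fused 3D model (illustrative only).
raw_gbps = 4.0

# The keynote states the algorithms reduce data by more than 100x.
compression_ratio = 100

# Resulting stream rate in megabits per second.
compressed_mbps = raw_gbps * 1000 / compression_ratio
print(f"{compressed_mbps:.0f} Mbit/s")  # 40 Mbit/s
```

A stream in the tens of megabits per second is within reach of ordinary broadband connections, whereas the uncompressed rate would saturate most existing networks.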

We have spent thousands of hours testing the technology in our own offices, and the results are promising; our key enterprise partners are excited as well. We are also working with partners in healthcare and media to get early feedback. In pushing the boundaries of remote collaboration, we have made technical advances that will improve our entire suite of communications products. We look forward to sharing more in the months ahead.

A person is talking to someone through Project Starline

Solving complex sustainability challenges

Another way we strive to help is by tackling sustainability challenges through our work. Sustainability has been a core value for us for more than 20 years. In 2007, we became the first major company to go carbon neutral. In 2017, we became the first major company to match our operations with 100% renewable energy, and we have done so every year since. Last year, we eliminated our entire carbon legacy.

Our next ambition is even bigger: operating on carbon-free energy by 2030. This represents a fundamental shift from how things are done today, and it is a moonshot on the same scale as quantum computing. It means solving a very hard problem: sourcing carbon-free energy in every place we operate, at every hour of every day.

Last year, we launched our carbon-intelligent computing platform, and we will soon be the first company to implement carbon-intelligent load shifting across both time and place within our data center network. By this time next year, we will be shifting more than a third of non-production compute to times and places with greater availability of carbon-free energy. We are also applying Cloud AI with novel drilling techniques and fiber-optic sensing to deliver geothermal power in more places, starting at our Nevada data centers next year.
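The core idea behind carbon-intelligent load shifting can be sketched in a few lines: flexible compute jobs are scheduled for the region and hour where the grid's forecast carbon intensity is lowest. This is a minimal illustration under assumed data; the forecast values, region names and `schedule_flexible_job` function are hypothetical, and the production system is far more sophisticated:

```python
# Hypothetical hourly carbon-intensity forecasts (gCO2/kWh, lower is cleaner)
# per region. Real systems use actual grid forecasts.
forecast = {
    "us-central": [450, 430, 390, 210, 180, 220],
    "europe-west": [300, 280, 260, 240, 310, 330],
}

def schedule_flexible_job(forecast):
    """Pick the (region, hour) with the lowest forecast carbon intensity."""
    return min(
        ((region, hour)
         for region, hours in forecast.items()
         for hour in range(len(hours))),
        key=lambda rh: forecast[rh[0]][rh[1]],
    )

region, hour = schedule_flexible_job(forecast)
print(region, hour)  # us-central 4  (intensity 180 is the global minimum)
```

Shifting across both time (which hour) and place (which data center) is what distinguishes this from the earlier time-only version of the platform.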

Investments like these are needed to reach 24/7 carbon-free operation, and in Mountain View, California, we are putting them into practice. We are building our new campus to the highest sustainability standards. When completed, the buildings will feature a first-of-its-kind dragonscale solar skin, equipped with 90,000 silver solar panels and the capacity to generate nearly 7 megawatts. They will house the largest geothermal pile system in North America, helping to heat the buildings in winter and cool them in summer. It has been amazing to watch it come to life.

Left: renderings of the new Charleston East campus in Mountain View, California. Right: a model of the dragonscale solar skin.

A celebration of technology

I/O isn't just a celebration of technology, it's a celebration of the people who use it and build it, including the millions of developers from around the world who joined us virtually today. Over the past year we have seen people use technology in remarkable ways: to stay healthy and safe, to keep learning and growing, to connect with one another, and to help each other through hard times. It is inspiring to see, and it makes us more committed than ever to being helpful.

I look forward to seeing all of you in person at next year's I/O. Until then, stay safe and well.

If you’d like to review the Google I/O 2021 keynote, check out the video here.