Li Zi, Xiao Cha | QbitAI report | WeChat official account QbitAI

Google’s annual TensorFlow developer conference took place in California early this morning. Nominally a software event, it felt more like a hardware launch.

Google unveiled two pieces of AI hardware at the conference: a development board with a TPU that costs roughly 1,000 yuan, and a USB stick that accelerates machine learning inference on Linux machines. They stole the show from the software.

Of course, there is also the usual software update, TensorFlow 2.0 Alpha, with a bumped version number and a logo redesigned in the current flat style.

With AI models on mobile devices going mainstream, TensorFlow Lite, the framework for deployment on edge devices, has finally reached version 1.0.

Here’s a look back at some of the highlights.

Coral

The Coral Dev Board is a $150 Raspberry Pi-style minicomputer with a removable system-on-module that carries Google’s custom Edge TPU chip.

The Edge TPU inside Coral is about a quarter the size of a coin. The board comes with 1 GB of LPDDR4 memory and 8 GB of eMMC storage, runs Mendel Linux (a Debian derivative) or Android, and performs computation locally, offline.

It does not train machine learning models; it only runs inference with TensorFlow Lite, which makes it far more power-efficient than running the full framework.

Coral can run deep feedforward neural networks on high-resolution video at 30 frames per second, or a single model such as MobileNet V2 at more than 100 frames per second.

A presenter showed off an interesting image-classification demo built with Coral. All it takes is a board, a camera, and a few buttons.

The four buttons on the right each represent a category.

Show the AI an orange and press the yellow button a dozen times.

The next time it sees an orange, the yellow light comes on.

Then show the AI the TensorFlow logo, which is also orange, and press the red button a dozen times.

When the AI sees the logo again, the red light goes on.

At this point, even with a real orange in front of the camera, the AI is not confused: it turns on the yellow light without hesitation.
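Under the hood, a demo like this can be built as few-shot classification: each button press stores the current frame’s feature embedding under that button’s label, and new frames are assigned to the nearest stored class. Here is a framework-free sketch with made-up 2-D “embeddings” standing in for real camera features (all names and numbers are hypothetical):

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_class(embedding, centroids):
    """Return the label whose centroid is closest (Euclidean) to the embedding."""
    return min(centroids, key=lambda label: math.dist(embedding, centroids[label]))

# Pressing a button a dozen times = collecting labeled embeddings for that class.
samples = {
    "yellow": [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]],  # orange, the fruit
    "red":    [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]],  # the TF logo
}
centroids = {label: centroid(vs) for label, vs in samples.items()}

print(nearest_class([0.88, 0.12], centroids))  # a new orange-like frame
```

With well-separated embeddings, the orange-like frame lands on the yellow class even after the red class has been taught, which matches the behavior in the demo.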

Google also announced the Coral USB Accelerator, which likewise contains an Edge TPU and works with any 64-bit ARM or x86 platform running Debian Linux.

The Coral USB Accelerator costs $75 and speeds up machine learning inference on Raspberry Pi and other Linux systems.

Google is not the first to ship such a product. Intel released a USB neural network accelerator a few years ago, but that stick only supported Caffe, whereas Coral supports Caffe, TensorFlow, and ONNX.

And since PyTorch models can be converted to ONNX, Coral can effectively support PyTorch as well.

“Nice feature rundown at Hackaday,” tweeted Yann LeCun of rival Facebook.

There’s also a $25, 5-megapixel camera accessory.

Both pieces of hardware can be ordered now, and detailed technical documentation is available, at: https://coral.withgoogle.com/

TF 2.0 Alpha

The TensorFlow team expressed their love for Keras.

Using the tf.keras high-level API greatly simplifies working with TensorFlow.

But, the team noted, tf.keras was designed for building small models. What about scaling up?

For large-scale training, Estimators have been the powerful tool.

In 2.0, tf.keras has absorbed the power of Estimators:

This way, you don’t have to choose between a simple API and an extensible API.
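As a sketch of what “simple but scalable” can mean in practice (assuming a TF 2.x-style install; the tiny model and random data here are made up), the same tf.keras code used for a small experiment can be wrapped in a distribution strategy scope to scale across devices:

```python
import numpy as np
import tensorflow as tf

# The same Sequential model, small or scaled: only the strategy scope changes.
strategy = tf.distribute.MirroredStrategy()  # one replica on a CPU-only machine
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Toy training data in place of a real dataset.
x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 3, size=64)
model.fit(x, y, epochs=1, verbose=0)
```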

Duplicate functionality across Keras and other APIs has also been trimmed away.

Eager execution also becomes the default: operations run immediately, and debugging gets much easier:

“Objects such as variables, layers, and gradients can be examined using Python debuggers.”

Quick start

For a smooth start with the TF 2.0 Alpha, head to TensorFlow’s freshly redesigned and friendly website, which hosts tutorials and guides:

https://www.tensorflow.org/alpha

Take a look at the “Hello World” examples for beginners and veterans:

The beginner version uses the Keras Sequential API, the simplest way to get started;

The veteran version shows how to write forward propagation imperatively, how to write custom training loops with GradientTape, and how to get automatic compilation with a single line of tf.function.
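For readers without TensorFlow at hand, here is a framework-free sketch of what such a custom training loop computes: a forward pass, a loss, gradients (derived by hand here, where GradientTape would derive them automatically), and a parameter update. The one-parameter model and toy data are made up:

```python
# Minimal hand-rolled training loop fitting y = 2x with a single weight.
# In TF 2.0 the gradient line below is what tf.GradientTape automates.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = 0.0    # single trainable parameter
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x               # forward pass
        loss = (pred - y) ** 2     # squared-error loss
        grad = 2 * (pred - y) * x  # d(loss)/dw, derived by hand
        w -= lr * grad             # optimizer step

print(round(w, 3))  # converges toward 2.0
```

The real TF 2.0 version wraps the forward pass and loss in a `with tf.GradientTape() as tape:` block and calls `tape.gradient(loss, variables)` instead of the hand-derived line.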

Then, read Effective TensorFlow 2.0 and other guides.

And there’s the AutoGraph Guide, the Code Upgrade Guide, and other Keras-related guides, too.

New courses

Along with TF 2.0 Alpha, two deep learning courses were also released, both suitable for complete beginners.

One of them, which Andrew Ng helped develop, is called Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning:

This is a practical course that teaches you how to build neural networks in TensorFlow, train a network for computer vision, and improve it with convolutions.

The course is divided into four weeks.

Week 1: a new programming paradigm.

Week 2: an introduction to computer vision.

Week 3: enhancing computer vision with CNNs.

Week 4: feeding the network real-world images.

The other course, free on Udacity, is called Intro to TensorFlow for Deep Learning.

Lesson 1 is still covering the basics; by Lesson 3 you are already training your own model:

The first four lessons are online now; Lesson 5 is still to come.

TF Lite for mobile phones

After introducing TF 2.0, Google TensorFlow Lite engineer Raziel Alvarez took the stage to launch TF Lite 1.0.

TensorFlow Lite is a cross-platform solution for mobile and embedded devices. Google wants to make TensorFlow available on more devices.

Beyond PCs and servers, many devices in our lives could use machine learning models, such as phones, smart speakers, and smartwatches, yet none of them can run full TensorFlow.

The challenge is that these devices have limited compute, limited storage, and limited battery life.

A lite framework is needed to deploy machine learning models on mobile and IoT devices.

That’s why TensorFlow Lite was born. First introduced at Google’s I/O developer conference in May 2017, it has since been deployed on more than 2 billion devices, mainly through Google’s own apps and two Chinese apps, iQiyi and NetEase.

Google then invited Lin Huijie, a machine learning engineer at China’s NetEase, to introduce how TensorFlow Lite is used in Youdao.

Lin Huijie said NetEase achieved a 30 to 40 percent speedup in image translation with it.

Google says deploying to mobile with TF Lite is straightforward: build the model in TensorFlow, then convert it with the TF Lite converter.

After TF Lite’s optimizations, on-device CPU performance reaches 1.9 times the original, and on the Edge TPU the improvement is up to 62 times.
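That build-then-convert workflow can be sketched end to end with a toy Keras model (assuming a TF 2.x-style install; a real deployment would start from a trained mobile model and might further quantize it for the Edge TPU):

```python
import numpy as np
import tensorflow as tf

# 1. Build (or load) a TensorFlow model. This tiny one is a stand-in.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# 2. Convert it to the TF Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# 3. Run it with the lightweight TF Lite interpreter, the piece that ships
#    on phones and embedded boards.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 8).astype("float32"))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result.shape)
```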

Other

Beyond the hardware and software above, Google today also announced TensorFlow Federated and TensorFlow Privacy.

TensorFlow Federated is an open-source framework for training AI models on data held in different locations. TensorFlow Privacy makes it easier for developers to train models with strong privacy guarantees.
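The core idea behind federated training can be illustrated without the framework: each client computes an update on its private data, and only model parameters, never the raw data, travel to the server for averaging. A toy, framework-free sketch of one federated-averaging round (TensorFlow Federated itself does far more, and every name and number here is made up):

```python
def local_update(weights, local_data, lr=0.1):
    """One pass of gradient descent on a client's private data
    (model: y = w * x, squared-error loss)."""
    w = weights
    for x, y in local_data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Each client trains locally; the server averages the resulting weights."""
    updates = [local_update(global_w, data) for data in clients]
    return sum(updates) / len(updates)

# Two clients whose private data both follow y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # approaches 2.0
```

The data never leaves each client; only the scalar weight is shared, which is what makes the scheme privacy-friendly.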

Also unveiled at the event: TensorFlow.js 1.0 and Swift for TensorFlow 0.2.

- End -
