The holidays are almost over and it's time to get back to work. Let's ease back in with some news.
For more articles, follow the WeChat account "AI Front" (ID: AI-front)

Content introduction:

  • Chinese Academy of Sciences "gait recognition" technology: no need to see your face, it can pick you out of a crowd from 50 meters away

  • Oracle releases autonomous database

  • Apple poaches init.ai's AI team to work on Siri

  • DeepMind is opening a new AI lab in Montreal

  • Google's new open source project: machine learning with just a camera and a browser

  • Amazon releases a new compiler for AI frameworks

Chinese Academy of Sciences "gait recognition" technology: no need to see your face, it can pick you out of a crowd from 50 meters away

Experts from the Institute of Automation of the Chinese Academy of Sciences recently introduced a new biometric technology, gait recognition: in the time it takes to blink twice, a camera can accurately identify a specific person up to 50 meters away just from the way they walk. According to Huang Yongzhen, an associate researcher at the Institute, iris recognition usually requires the subject to be within 30 centimeters and face recognition within 5 meters, while gait recognition works at distances of up to 50 meters and, with ultra-high-definition cameras, completes within 200 milliseconds.

In addition, gait recognition does not require the subject's active cooperation. Even if a person wearing a mask walks casually with their back to an ordinary surveillance camera dozens of meters away, the gait recognition algorithm can still determine their identity.

Gait recognition can also estimate crowd density over a wide area, counting a crowd of 1,000 people across 1,000 square meters from 100 meters away in real time. These technologies can be widely applied in security, public transportation, commerce, and other scenarios.
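To make the idea concrete, a classic textbook approach to gait matching averages a walking cycle's binary silhouettes into a single "gait energy image" (GEI) and identifies people by nearest-neighbor comparison between templates. The sketch below is purely illustrative, with toy 2x2 "silhouettes"; it is not the Institute of Automation's actual algorithm.

```python
# Conceptual sketch of gait matching via Gait Energy Images (GEI):
# average a walking cycle's binary silhouettes into one template,
# then identify a probe by nearest-neighbor distance to enrolled templates.

def gait_energy_image(silhouettes):
    """Average a sequence of same-sized binary silhouette frames."""
    n = len(silhouettes)
    h, w = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(f[i][j] for f in silhouettes) / n for j in range(w)]
            for i in range(h)]

def l1_distance(a, b):
    """Sum of absolute pixel differences between two templates."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def identify(probe_gei, gallery):
    """Return the enrolled identity whose template is closest to the probe."""
    return min(gallery, key=lambda name: l1_distance(probe_gei, gallery[name]))

# Toy 2x2 "silhouettes": two enrolled walkers with different gait templates.
alice = gait_energy_image([[[1, 0], [1, 0]], [[1, 0], [0, 1]]])
bob   = gait_energy_image([[[0, 1], [0, 1]], [[0, 1], [1, 0]]])
gallery = {"alice": alice, "bob": bob}

probe = gait_energy_image([[[1, 0], [1, 0]], [[1, 0], [1, 0]]])
print(identify(probe, gallery))  # → alice
```

A real system would first segment silhouettes from video and normalize for scale and viewpoint, which is where most of the engineering difficulty lies.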

The reporter learned from the Institute of Automation of the Chinese Academy of Sciences that China is currently at the forefront of gait recognition technology worldwide. Galaxy Water Drop Technology, for example, is a world leader in gait data and algorithms, with an outdoor gait database nearly 100 times larger than the second-largest one.

News sources

http://tech.sina.com.cn/it/2017-10-03/doc-ifymkwwk8032386.shtml

Oracle releases autonomous database

Speaking at the OpenWorld conference in San Francisco, Oracle Chief Technology Officer Larry Ellison said Oracle's autonomous database and its network security system were developed together. Why? Because the security system can alert the database, which then patches itself in real time.

Ellison also touted Oracle's next-generation autonomous database as more secure and less labor-intensive. More automation won't take DBAs' jobs, Ellison says, but will free those professionals to handle more important tasks such as security. "It's not like these administrators are sitting around doing nothing," Ellison said.

Ellison said Oracle 18C could cut customers' AWS costs in half, and that Oracle will put that promise in writing. Oracle 18C runs on premises, in Oracle's public cloud, and in customers' own clouds. A data warehouse version of Oracle 18C will be available in December; the OLTP version will not arrive until June 2018.

News sources

https://techcrunch.com/2017/10/02/larry-ellison-pokes-aws-while-unveiling-intelligent-database-service-at-oracle-openworld-keynote/
Apple poaches init.ai's AI team to work on Siri

Apple this week "bought" init.ai, and init.ai's AI team is joining Apple. Init.ai is a startup that designs intelligent assistants for customer service representatives; its product can automatically handle some communications and interactions with users. The startup's R&D focus is on building artificial intelligence that supports natural language processing and machine learning, and that learns by analyzing conversations between humans.

Apple did not buy init.ai outright, nor did it acquire any of the company's patents, which means Apple will not use init.ai's technology in its products. What Apple wanted was init.ai's AI team, which will work on Siri.

News sources

http://nlp.hivefire.com/articles/87600/apple-acquires-init-ai-to-help-siri-get-smarter/

http://www.cnbeta.com/articles/tech/657889.htm

DeepMind is opening a new AI lab in Montreal

The lab will be led by Doina Precup, a professor in McGill’s Computer Science department. Precup will continue her work at the university and split her time with DeepMind.

Precup said, "The lab will focus on reinforcement learning, which is my specialty, on deep learning, and on these kinds of algorithms."

“We try to build rewards so that automated algorithms can learn what the right thing to do is, especially when it comes to completing a series of actions,” Precup said.

Precup, whose work has touched on applications in power systems, robotics, and gaming, said the lab will not focus on developing specific applications of artificial intelligence. "We're a basic research lab, so the focus is on developing algorithms and understanding their properties," Precup said.

The lab will start as two teams, but Precup says she expects it to grow over the next year.

This will be DeepMind’s second research laboratory in Canada.

News sources

http://montrealgazette.com/business/local-business/google-affiliated-ai-company-deepmind-to-open-research-lab-in-montreal

Google's new open source project: machine learning with just a camera and a browser

Google recently announced a new project called Teachable Machine.

The project lets users collect data and train a machine learning model using nothing but a camera. From helping users find their favorite photos to sorting cucumbers for farmers in Japan, machine learning is changing the way people use code to solve problems.

The designers wanted to lower the barrier to entry, so they created Teachable Machine, which lets users collect data through the browser's camera and train a machine learning model without writing any code. Teachable Machine is built on a library called deeplearn.js, which makes it easy for web developers to train and run neural networks in the browser. The code has been open-sourced to help developers run new experiments of their own.
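The core idea behind this kind of "record examples, then classify" workflow can be sketched as nearest-neighbor voting over feature vectors recorded for each class. The sketch below is a conceptual Python illustration only: the actual project runs in JavaScript on deeplearn.js, and the feature vectors here are toy values, not real camera features.

```python
# "Teach by example" classification, sketched with k-nearest neighbors:
# the user records a few feature vectors per class, then new samples are
# classified by a majority vote among the closest recorded examples.
from collections import Counter
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(sample, examples, k=3):
    """examples: list of (feature_vector, label) pairs recorded by the user."""
    nearest = sorted(examples, key=lambda ex: distance(sample, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# The user shows the camera a few frames per class while holding a button
# (toy 2-D features standing in for what a real model would extract).
examples = [
    ([0.9, 0.1], "wave"), ([0.8, 0.2], "wave"), ([0.85, 0.15], "wave"),
    ([0.1, 0.9], "thumbs-up"), ([0.2, 0.8], "thumbs-up"), ([0.15, 0.85], "thumbs-up"),
]
print(knn_classify([0.88, 0.12], examples))  # → wave
```

The appeal of this design is that "training" is instant: adding a class is just recording more examples, with no gradient descent required.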

Project Website:

https://teachablemachine.withgoogle.com/

Open source address:

https://github.com/googlecreativelab/teachable-machine

News sources

https://www.blog.google/topics/machine-learning/now-anyone-can-explore-machine-learning-no-coding-required/

Amazon releases a new compiler for AI frameworks

Amazon is addressing the challenges of AI development with a new end-to-end compiler solution. The NNVM compiler, developed by AWS together with a team of researchers from the University of Washington's School of Computer Science and Engineering, aims to make deep learning frameworks deployable across many platforms and devices.

"You can choose from multiple artificial intelligence (AI) frameworks to develop AI algorithms, and from a variety of hardware to train and deploy AI models. This diversity of frameworks and hardware is critical to the health of the AI ecosystem, but it also poses several challenges for AI developers," Mu Li, chief scientist for AI at AWS, wrote in a post.

According to Amazon, AI developers face three major challenges today: users switching between AI frameworks, framework developers maintaining multiple hardware back ends, and hardware vendors supporting multiple AI frameworks. The NNVM compiler addresses this by compiling front-end workloads directly to hardware back ends. "Today, AWS is pleased to announce, together with the UW research team, an end-to-end compiler based on the TVM stack that compiles workloads directly from various deep learning front ends into optimized machine code." The TVM stack developed by the team is an intermediate representation stack designed to bridge the gap between deep learning frameworks and hardware back ends.

"While deep learning is becoming indispensable on platforms ranging from mobile phones and data-center GPUs to the Internet of Things and specialized accelerators, considerable engineering challenges remain in deploying these frameworks," said an Allen School Ph.D. student. "Our TVM framework makes it possible for developers to quickly and easily deploy deep learning on a wide range of systems. With NNVM we provide a solution that works across all frameworks, including MXNet and model interchange formats such as ONNX and CoreML, with significant performance improvements."

According to Amazon, the NNVM compiler consists of two components from the TVM stack: NNVM for computation graphs and TVM for tensor operators.

"NNVM provides specifications of the computation graph and operators together with graph-level optimizations, while TVM implements and optimizes the operators for target hardware. We have worked hard to show that this compiler can match or even exceed state-of-the-art performance on two radically different kinds of hardware: ARM CPUs and Nvidia GPUs," the post continues. "We expect the NNVM compiler to greatly simplify the design of new AI front-end frameworks and back-end hardware, and to help deliver consistent results to users across a variety of front ends and back ends."
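The two-level division of labor described above can be sketched as a toy pipeline: a graph layer that applies whole-graph optimizations such as operator fusion, and a tensor layer that lowers each resulting operator for a chosen hardware target. This is purely illustrative; the real NNVM/TVM APIs look nothing like this, and the operator and target names below are made up.

```python
# Toy sketch of a two-level compiler: an "NNVM-like" graph pass fuses
# operators, then a "TVM-like" tensor pass lowers each fused operator
# into a kernel for a specific hardware target.

def fuse_graph(ops):
    """Graph-level pass: merge adjacent ("conv", "relu") pairs into one op,
    so the back end can emit a single fused kernel for them."""
    fused, i = [], 0
    while i < len(ops):
        if i + 1 < len(ops) and ops[i] == "conv" and ops[i + 1] == "relu":
            fused.append("conv+relu")
            i += 2
        else:
            fused.append(ops[i])
            i += 1
    return fused

def lower(op, target):
    """Tensor-level pass: pick a target-specific kernel for one operator."""
    return f"{op}@{target}"

def compile_model(ops, target):
    """End to end: one front-end graph in, target-specific kernels out."""
    return [lower(op, target) for op in fuse_graph(ops)]

# The same front-end workload compiles for two different back ends.
graph = ["conv", "relu", "pool", "dense"]
print(compile_model(graph, "arm_cpu"))
print(compile_model(graph, "nvidia_gpu"))
```

The point of the split is that graph-level optimizations are written once and shared by every front end, while only the per-operator lowering has to know about each hardware target.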

News sources

Amazon releases new compiler for AI frameworks


Today's recommendation

Click on the image below to read it

Revealing Haier real-time computing platform: Technology selection and practice in multi-business scenarios



Recommended course

With zero background in deep learning, how can you, in just 9 weeks, pick up practical skills such as building dialogue and "Fight the Landlord" game bots with convolutional networks, classifying images with TensorFlow, recognizing faces, and imitating a master painter's style?

We have invited two instructors, Gao Yang, a senior big data expert, and Wei Zheng, a senior software architect, to teach "Deep Learning: From Beginner to Master". The course takes students from getting started to advanced deep learning in the most accessible way possible. For more information, scan the QR code on the poster, or click "Read the original article" to go directly to the course link.