As a media company focused on the AI industry, we were excited when the Kirin 970 and Apple A11 were announced. But more than a month has passed, and beyond the manufacturers' spec sheets and the media frenzy, we have heard little of substance.

Will mobile AI bring revolutionary new experiences to phones? Will chips like the Kirin 970 and Apple's A11 become the most important drivers of mobile AI? To avoid empty speculation, we interviewed developers working in a BAT company's AI business unit, in mobile client development, on live-streaming products, and in deep-learning image processing, and asked what mobile AI chips look like in their eyes.


BAT developer: Model quality matters more than device-side deployment

As we have discussed before, traces of cloud AI are everywhere in today's apps, from selfie beautification to voice assistants; AI is already on mobile devices. In practice, though, machine-learning developers design algorithms, crunch data, and train models on one side, while mobile developers handle the app's data structures and client code on the other. The two are connected through a cloud server but otherwise have little overlap.

When it comes to mobile AI, most developers are concerned with how to port their models into applications: whether doing so will bloat the install package, and whether every model update will require submitting a new package to the app store for review.

Many big companies today are pursuing AI almost regardless of cost, and the sheer amount of data at their disposal has spoiled their developers: they train complex models on huge datasets, update them constantly, and keep trying to make them better.

For these large enterprises, deciding where to deploy, cloud or device, seems premature before they have a model that satisfies both themselves and the industry. A product operator on one of Alibaba's smart products told us that, for now, they care more about polishing the algorithm than about comparing deployment environments.

This illustrates the current state of AI development: while the field is still algorithm-led rather than product-led, developers have little incentive to move from one development environment to another until users vote with their feet. Large enterprises with superior resources, in particular, worry more about falling behind in algorithms than about nuances of user experience.

Developers in transition: The endless possibilities for custom AI

But a developer who lived through the PC-to-mobile transition suggests that mobile AI may be more than just "offline AI."

In the process of moving from PC to mobile, developers found that user behavior varies greatly from device to device. The same holds across today's mobile devices (phones, tablets, smartwatches, even smart speakers): users are more likely to open videos on a tablet and to read text content on a phone, for example. Yet in cloud-based recommendation algorithms, data from these devices is usually mixed together.

Once mobile devices can process data locally, developers can tailor recommendation algorithms to each kind of device. Judging from the specs announced for the Apple A11 and Kirin 970, there is little point in moving an entire recommendation algorithm, which depends on the network anyway, onto the device. But sending the cloud's inference results down to the device for further local processing could yield a very different user experience.

This also means algorithms can integrate more closely with devices and, by extension, with individual users. Thanks to the rich sensors on mobile devices, data that used to be hard to exploit, such as location, weather, ambient light, usage duration, the user's travel schedule, even temperature and heart rate, can be fed into existing models through local computation. In a future with mobile AI, a food-delivery app could put your favorite hot pot at the top of the list on a rainy day, and the fastest-delivered meals at the top when you are at work.
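As a hypothetical sketch of what such "further local processing" could mean, the snippet below re-ranks a list of cloud-scored items using signals only the device can see. The item names, boost weights, and context features are invented for illustration and are not drawn from any of the interviewed companies.

```python
# Hypothetical sketch of on-device re-ranking: the cloud still produces the base
# scores, and the device adjusts them with context it alone can observe.
# Item names, boost weights, and context signals are invented for illustration.

from dataclasses import dataclass

@dataclass
class Recommendation:
    item_id: str
    cloud_score: float  # inference result sent down from the cloud model

def rerank_on_device(recs, is_raining: bool, minutes_until_lunch: int):
    """Adjust cloud scores with local context, then sort best-first."""
    scored = []
    for rec in recs:
        score = rec.cloud_score
        if is_raining and rec.item_id == "hotpot":
            score += 0.3   # made-up boost for hot pot on rainy days
        if minutes_until_lunch < 30 and rec.item_id.endswith("_fast"):
            score += 0.2   # made-up boost for quick delivery near lunchtime
        scored.append((score, rec))
    return [rec for _, rec in sorted(scored, key=lambda pair: -pair[0])]

recs = [Recommendation("hotpot", 0.55), Recommendation("salad_fast", 0.60)]
print([r.item_id for r in rerank_on_device(recs, is_raining=True, minutes_until_lunch=20)])
# -> ['hotpot', 'salad_fast']
```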

In short, from a developer’s perspective, the greatest value of mobile AI is the transformation from “general purpose AI” to “custom AI.”

Developers migrating in practice: Mobile AI needs a more unified programming environment

While the concept of “custom AI” is fascinating, when it comes to practical development, it’s not as easy as you might think.

A senior engineer at a live-streaming platform told us that mobile GPU/AI chips are still a relatively new concept, with chaotic APIs and no unified programming or compute model.

For now, the Apple A11 exposes its GPU capabilities only through a framework called Core ML, whereas most developers work in mainstream frameworks such as TensorFlow and Caffe. This has obvious implications for future migrations: should developers cut down their existing framework to fit, or rewrite their work on Core ML?
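In many cases, converting an existing model is possible instead of rewriting it. Below is a minimal sketch using Apple's open-source coremltools converter, with a toy Keras model standing in for the developer's real network; the unified ct.convert() call shown here is from later coremltools releases, not the original 2017-era workflow.

```python
# A minimal conversion sketch using Apple's open-source coremltools package.
# The toy Keras model stands in for a developer's real network; the unified
# ct.convert() API shown here is from later coremltools releases.

import coremltools as ct
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

mlmodel = ct.convert(model, convert_to="mlprogram")  # Core ML model object
mlmodel.save("classifier.mlpackage")                 # drop into an Xcode project
```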

If each vendor exposes only its own high-level abstraction, mobile AI developers will have to adapt to each hardware layer separately. If vendors instead expose low-level abstractions, the differences can be masked by a common higher-level layer, which improves development efficiency.
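As a purely illustrative sketch of what such a common higher-level layer could look like, the interface below is what app code would target while vendor specifics hide behind it. The backend classes and their methods are hypothetical placeholders, not real vendor APIs.

```python
# Illustrative sketch of a thin hardware-abstraction layer. CoreMLBackend and
# NNAPIBackend are hypothetical placeholders; the point is that app code talks
# to one interface and vendor-specific details stay behind it.

from abc import ABC, abstractmethod

class InferenceBackend(ABC):
    @abstractmethod
    def load(self, model_path: str) -> None: ...
    @abstractmethod
    def run(self, inputs: dict) -> dict: ...

class CoreMLBackend(InferenceBackend):       # would wrap Core ML on iOS
    def load(self, model_path): print(f"loading {model_path} via Core ML")
    def run(self, inputs): return {"scores": [0.0]}

class NNAPIBackend(InferenceBackend):        # would wrap NNAPI / a vendor SDK on Android
    def load(self, model_path): print(f"loading {model_path} via NNAPI")
    def run(self, inputs): return {"scores": [0.0]}

def get_backend(platform: str) -> InferenceBackend:
    return CoreMLBackend() if platform == "ios" else NNAPIBackend()
```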

This is especially true for systems like Android, which have a chaotic ecosystem.

Take today's common CPUs as an example: different vendors implement different power and thermal management logic. The same neural network may run smoothly on one CPU but trigger overheat protection on another, causing its computing performance to fluctuate dramatically.

As a result, most developers only dare to embed small, computationally cheap networks or features that are called infrequently, and cannot use recurrent neural networks in continuously running features.
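The kind of defensive logic this implies might look like the following sketch: time each on-device inference and fall back to a cheaper path when latency spikes, a common symptom of throttling. The latency budget and the run_local/run_fallback callables are illustrative assumptions, not anything the interviewed engineer described.

```python
# Hypothetical throttle guard: route inference locally until measured latency
# suggests thermal throttling, then degrade gracefully. Threshold and callables
# are illustrative assumptions.

import time

LATENCY_BUDGET_MS = 50.0   # illustrative budget for one inference call

class ThrottleGuard:
    def __init__(self, run_local, run_fallback):
        self.run_local = run_local        # e.g. the on-device model
        self.run_fallback = run_fallback  # e.g. a smaller model or a cloud call
        self.avg_ms = 0.0                 # exponential moving average of local latency

    def __call__(self, inputs):
        if self.avg_ms > LATENCY_BUDGET_MS:
            self.avg_ms *= 0.95           # decay so the local path gets retried later
            return self.run_fallback(inputs)
        start = time.perf_counter()
        result = self.run_local(inputs)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        self.avg_ms = 0.8 * self.avg_ms + 0.2 * elapsed_ms
        return result
```

Reacting to measured latency rather than to vendor-specific thermal APIs keeps such a guard portable across the fragmented hardware the engineer describes.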

Deep learning developers: Chips are just the beginning of mobile AI

For users, mobile AI chips promise a better product experience on the device; for developers, a chip alone does not make mobile AI happen overnight.

We interviewed Jason, CEO of Shenhei Technology (often described as China's answer to Prisma). He told us that most mobile AI chips are optimized for machine learning in general but not for certain specific kinds of computation; the convolution operations that deep learning relies on, for example, are still better suited to the cloud.
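To get a feel for why convolution strains an on-device budget, here is a rough back-of-the-envelope count for a single convolutional layer; the layer shape is an illustrative assumption, not a figure from the interview.

```python
# Rough FLOP count for one convolutional layer, to show why convolution
# dominates on-device compute budgets. The layer shape is an illustrative assumption.

h_out, w_out = 112, 112        # output feature-map size
c_in, c_out = 64, 128          # input / output channels
k = 3                          # kernel size (3x3)

# Each output element needs k*k*c_in multiply-adds; count 2 FLOPs per MAC.
flops = 2 * h_out * w_out * c_in * c_out * k * k
print(f"{flops / 1e9:.1f} GFLOPs for one layer")   # ~1.8 GFLOPs
```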

When planning a cloud-to-device migration, developers are also constrained by the framework they started with. TensorFlow, for example, deploys easily to both iOS and Android, which makes migration straightforward. Frameworks maintained by smaller teams, such as Caffe and Torch, are comparatively weak on productization and currently offer little help moving code to the two mobile platforms. This gives TensorFlow an additional advantage, especially for consumer applications.
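For reference, TensorFlow's mobile path boils down to converting a trained model into a single artifact that both platforms can load. The sketch below uses the later TensorFlow Lite converter API (in 2017 the equivalent was TensorFlow Mobile with frozen graphs), with a toy model standing in for a real one.

```python
# Sketch of TensorFlow's mobile deployment path, shown with the later
# TensorFlow Lite converter API. The toy model stands in for a real network.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable weight quantization
tflite_model = converter.convert()

with open("classifier.tflite", "wb") as f:
    f.write(tflite_model)   # one artifact loadable by both Android and iOS runtimes
```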

Mobile AI chips won't solve every problem, and many teams are trying to accelerate mobile AI deployment from the software side. Seattle-based startup Xnor.ai, which raised funding earlier this year, uses binary neural networks to shrink model storage, making computation faster and cheaper, so that deep learning models can run on embedded devices with no network dependence.
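Here is a toy illustration of the storage argument behind binary networks, assuming per-tensor scaling in the style of the XNOR-Net paper; real binary networks also binarize activations and replace multiplications with XNOR/popcount kernels, which this sketch does not attempt.

```python
# Toy illustration: keep only the sign of each weight and pack 8 signs per byte,
# roughly a 32x reduction versus float32 storage. Per-tensor scaling follows the
# XNOR-Net idea; activations and fast bitwise kernels are not modeled here.

import numpy as np

weights = np.random.randn(256, 256).astype(np.float32)   # a toy weight matrix

signs = (weights >= 0)                        # True where the weight is non-negative
packed = np.packbits(signs)                   # 8 binary weights per byte
scale = np.abs(weights).mean()                # per-tensor scale factor

print(f"float32: {weights.nbytes} bytes, packed binary: {packed.nbytes} bytes")

# Reconstruct an approximation for inference on devices without bit kernels.
unpacked = np.unpackbits(packed)[: weights.size].reshape(weights.shape)
approx = scale * np.where(unpacked == 1, 1.0, -1.0).astype(np.float32)
```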

Hardware is just one cornerstone of the mobile AI ecosystem. Beyond it, the ecosystem still needs API compatibility, software optimization, and, further down the road, 5G networks.

So, are mobile AI chips useless?

After talking to these developers, we came back to the question of what mobile AI chips actually mean to them. The arrival of the hardware does not create an ecosystem overnight, but the phone makers' bet on mobile AI has clearly given developers confidence and will draw more people into the space.

More importantly, mobile AI chips give developers the possibility of launching lightweight products without worrying about cloud-computing costs or server outages as their user base grows. In that sense, the emergence of mobile AI chips has lowered the barrier to entry for consumer AI applications.

Back to the two chips themselves. The A11 shows no obvious performance weaknesses, but Apple's decision to expose GPU power only through Core ML and Metal 2 (its graphics framework, used mainly for games) has clearly dampened many developers' enthusiasm. The approach reinforces the integrity and stability of the iOS ecosystem, but it suits high-barrier, custom development better than broad experimentation.



The above is just my humble opinion (Chen Ruchu); apologies if I have offended anyone.
