Experience has long told us that good tools get you twice the result with half the effort, while bad tools cause trouble at every turn.

With AI in full swing across all kinds of scenarios, many people are eager to catch the wave, to give their own companies a buff and to contribute their share. Yet even among those already "standing in the wind", plenty still hesitate: they understand the technology, but putting it into application is genuinely hard. Dig into the reasons and the answers come down to this: latecomers have few resources, and the complexity and investment required are high.

AI development, and in particular the deep learning technology that pushes AI into industrial production, is complex, expensive and time-consuming. So what can you do? At this point you need tools that save development time, support large-scale data training, and let you deploy flexibly across different devices and hardware. They need to be open source, mature, and capable of supporting industry-grade applications. There are plenty of deep learning frameworks out there, so let's pick one that is right for you.

First of all, it has to be independently developed. Experience tells us that the international situation is dazzling and ever-changing, and any choice that leaves control of the underlying technology in other countries' hands invites tragedy. Having the rug pulled out from under you is painful for every person and every enterprise. Second, real knowledge comes from practice. AI is ultimately a combination of science and business that should be honed in practice and, in the end, serve practice. So it matters that a tool like a deep learning framework is mature and easy to learn and use. And the most fundamental element that determines whether a framework is "mature and usable" is the scenario: for a framework to work, the scenario comes first. Does the company behind the framework have scenarios? Are the scenarios rich? Is there a large amount of scenario data? These are all things to weigh when choosing a framework.

Finally, look at deployment. If the essence of business is to create value, then the essence of AI development is to create value indirectly. The goal of all AI development ultimately comes back to value, which means landing in real applications, in other words, deployment. Finding a development tool that adapts easily to all kinds of hardware saves a great deal of work in that last step.

What is AI development like with PaddlePaddle Framework 2.0?

Take PaddlePaddle, the framework hailed as the pride of domestic software, as an example. First, it satisfies the requirement of independent development: there is no "wrap a foreign framework's kernel and call it homegrown" behavior. Second, on the development side, the framework, which has just been upgraded to the official 2.0 release, lets users complete the development of all kinds of model algorithms in deep-learning-related fields using dynamic graphs. This marks the point at which PaddlePaddle's dynamic graph capability has become mature and complete. Powerful, isn't it?
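
To make that concrete, here is a minimal sketch of what eager-mode (dynamic graph) development looks like with the paddle 2.0 Python API; the tiny network, the random data and the hyperparameters are made up purely for illustration and are not from any official example.

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

# PaddlePaddle 2.0 runs in dynamic-graph (imperative) mode by default,
# so the model executes eagerly, line by line, like ordinary Python.
class TinyClassifier(nn.Layer):
    def __init__(self, in_dim=16, hidden=32, num_classes=3):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

model = TinyClassifier()
opt = paddle.optimizer.Adam(learning_rate=1e-3, parameters=model.parameters())
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random data, just to show the eager workflow.
x = paddle.randn([8, 16])
y = paddle.randint(0, 3, [8])
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
opt.clear_grad()
print(float(loss))
```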

In addition, according to the officially released information, PaddlePaddle's official algorithm model library has grown to more than 270 models, covering computer vision, natural language processing, speech, recommendation and other fields, which essentially covers the mainstream application requirements of most industries.
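
As a hedged illustration of what using the built-in model zoo can look like, the sketch below loads a pretrained ResNet-50 from paddle.vision.models, one of the vision models shipped with the framework; the dummy input and the choice of model are assumptions for demonstration only.

```python
import paddle
from paddle.vision.models import resnet50

# Load a pretrained backbone from the built-in vision model zoo
# (weights are downloaded on first use).
model = resnet50(pretrained=True)
model.eval()

# Run inference on a dummy 224x224 RGB image batch.
fake_image = paddle.randn([1, 3, 224, 224])
logits = model(fake_image)
print(logits.shape)  # [1, 1000] ImageNet classes
```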

On the training side, PaddlePaddle 2.0 supports training hundred-billion-scale dense-parameter models on top of its existing support for trillion-scale sparse parameters. Put simply, it can train on data and models so large that a single machine is hopelessly inefficient and cannot even fit them: trillions of samples and super-large models. To deliver that "silky smooth" training experience efficiently and with high quality, PaddlePaddle can shard parameters across multiple GPU cards for training, genuinely supporting hundred-billion-scale dense-parameter model training in different scenarios.
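
As a rough sketch of how multi-GPU training is typically wired up with PaddlePaddle's Fleet API (launched via python -m paddle.distributed.launch), the example below wraps a placeholder model and optimizer for collective training; real hundred-billion-parameter runs involve far more configuration (parameter sharding strategies, cluster setup) than this shows.

```python
# Launch with: python -m paddle.distributed.launch --gpus "0,1,2,3" train.py
import paddle
import paddle.nn as nn
from paddle.distributed import fleet

fleet.init(is_collective=True)  # collective mode for multi-GPU training

model = nn.Linear(1024, 1024)   # placeholder model for illustration
opt = paddle.optimizer.Adam(learning_rate=1e-4, parameters=model.parameters())

# Wrap the model and optimizer so gradients are synchronized across cards.
model = fleet.distributed_model(model)
opt = fleet.distributed_optimizer(opt)

for _ in range(10):
    x = paddle.randn([32, 1024])
    loss = model(x).mean()
    loss.backward()
    opt.step()
    opt.clear_grad()
```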

After development and training comes deployment, rather like a video that has been edited and is ready for release: whether it goes out in landscape or portrait, at what resolution, within what size limit, all depends on the platform that carries it. AI model deployment is the same; the adaptation requirements of different hardware place many constraints on the model. Currently, many chip manufacturers, including Intel, NVIDIA and ARM, have rolled out support for PaddlePaddle. PaddlePaddle is deeply adapted to CPUs such as Phytium, Hygon, Kunpeng, Loongson and Sunway, works with the Kylin, UOS and Puhua operating systems, is deeply integrated with AI chips such as Baidu Kunlun, Hygon DCU, Cambricon, Bitmain, Rockchip, Qualcomm and NVIDIA, and cooperates with server vendors such as Inspur and Sugon to form a full-stack AI infrastructure that integrates hardware and software. To date, the number of chip or IP models that PaddlePaddle has adapted or is adapting has reached 29, putting it in a leading position in the industry.
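
As a hedged sketch of the step that usually precedes hardware deployment, the example below converts a dynamic-graph model into a saved static inference model with paddle.jit.to_static and paddle.jit.save, which Paddle's inference toolchains can then load on the target hardware; the placeholder model and output path are assumptions for illustration.

```python
import paddle
import paddle.nn as nn
from paddle.static import InputSpec

# Placeholder model standing in for a trained network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

# Trace the dynamic graph into a static program and save an inference model
# (produces .pdmodel/.pdiparams files that deployment toolchains consume).
static_model = paddle.jit.to_static(
    model, input_spec=[InputSpec(shape=[None, 16], dtype="float32")]
)
paddle.jit.save(static_model, "./inference/tiny_model")
```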

Full speed ahead, without detours. Of course, it ultimately depends on your own development habits, but in general PaddlePaddle's model production capability is very strong, combining ease of use with performance. For developers who want to move into AI and catch the wave, trying things out with PaddlePaddle once you have the language basics down is a good option. If you're interested, go ahead and give it a try!