Xiao Cha, reporting from Aofei Temple | QbitAI, official account QbitAI

“I’m going to PyTorch!”

After seeing the new 1.3 features, some developers took to Twitter to shout.

PyTorch 1.3 was announced on the first day of the PyTorch Developer Conference.

The new version not only supports mobile deployment on Android and iOS, but also lets users call Cloud TPUs from rival Google's Colab.

For Chinese developers who can't easily take advantage of Google's free offerings, PyTorch has also been integrated into Alibaba Cloud (Aliyun), making it easier for Aliyun users to adopt it.

There's also a wave of new tools for interpretability, encryption, and image and speech processing.

With new features paired with new tools like these, it's no wonder diehard fans are becoming even more loyal to Facebook's open source library.

Awesome! Personally, I think Facebook's open source libraries are much better than Google's, and the support is the best.

React vs. Angular, PyTorch vs. TensorFlow. These are just two examples. Facebook's frameworks came to market late but with strong support and continuous improvement, while Google's came out early, received few compatible upgrades, and ended up being abandoned.

My loyalty to Facebook’s open source library has been growing.

New features in PyTorch 1.3

PyTorch 1.3 brings three experimental new features.

Named tensor

Named tensors make tensors easier to use by allowing users to name tensor dimensions, so that dimensions can be referred to by name instead of tracked by position.

Before you upgrade, you need to name tensors with comments in your code:

# Tensor[N, C, H, W]
images = torch.randn(32, 3, 56, 56)
images.sum(dim=1)
images.select(dim=1, index=0)

After the upgrade, you can write directly in the code, which greatly improves the readability:

NCHW = ['N', 'C', 'H', 'W']
images = torch.randn(32, 3, 56, 56, names=NCHW)
images.sum('C')
images.select('C', index=0)

In addition, this feature improves safety: names are checked automatically when APIs are called, and dimensions can be rearranged by name.

Tensors can be named in several ways when they are created, and align_to can then be used to reorder dimensions by name. Isn't it more convenient?
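As a quick sketch (assuming PyTorch 1.3+ with the experimental named tensor API), naming and name-based reordering might look like this:

```python
import torch

# Name dimensions when creating the tensor...
images = torch.randn(32, 3, 56, 56, names=('N', 'C', 'H', 'W'))

# ...or attach names to an existing, unnamed tensor with refine_names.
plain = torch.randn(32, 3, 56, 56)
named = plain.refine_names('N', 'C', 'H', 'W')

# align_to reorders dimensions by name instead of by position.
nhwc = named.align_to('N', 'H', 'W', 'C')
print(nhwc.names)  # ('N', 'H', 'W', 'C')
print(nhwc.shape)  # torch.Size([32, 56, 56, 3])
```

Because the reorder is done by name, the code stays correct even if the underlying dimension order changes.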

Quantization support

When developing ML applications, it is important to make efficient use of computing resources on the server side and on the device.

To support more efficient deployment on servers and edge devices, PyTorch 1.3 now supports 8-bit model quantization in eager mode. Quantization refers to techniques for performing computation and storage at reduced precision.

Current experimental quantization features include support for post-training quantization, dynamic quantization, and quantization-aware training. It leverages the latest FBGEMM and QNNPACK quantized kernel backends, for x86 and ARM CPUs respectively, which are integrated with PyTorch and now share a common API.

Specific API documentation can be found at pytorch.org/docs/master…

Facebook also offers developers a practical quantization tutorial: pytorch.org/tutorials/a…
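As an illustrative sketch (a made-up toy model, not taken from the tutorial above), post-training dynamic quantization with the torch.quantization API might look like this:

```python
import torch

# A small float model; Linear layers are what dynamic quantization targets.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 4),
)

# Post-training dynamic quantization: weights are stored as int8,
# and activations are quantized on the fly at inference time.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 4])
```

No retraining is required here, which is why dynamic quantization is often the easiest of the three modes to try first.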

Mobile support

In addition, to efficiently run machine learning on edge devices, PyTorch 1.3 supports end-to-end workflows from Python to iOS and Android.

Of course, this feature is still an early experimental release, optimized for end-to-end workflows. The new version focuses on:

1. Size optimization: build-level optimization and selective compilation according to user needs.

2. Improved performance on mobile CPUs and GPUs.

3. High-level APIs: extending the mobile native APIs to cover the common preprocessing and integration tasks needed to bring machine learning into mobile applications, such as computer vision or NLP tasks.

Mobile deployment details: pytorch.org/mobile/home…
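On the Python side, that workflow boils down to exporting a TorchScript model that the Android/iOS runtimes can load. A minimal sketch (the module and filename here are made up for illustration):

```python
import torch

class TinyNet(torch.nn.Module):
    """A toy module standing in for a real mobile model."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

# TorchScript is the bridge from Python to the mobile runtimes:
# script the model, save it, then load the .pt file from the
# Android (Java) or iOS (Objective-C/Swift) PyTorch Mobile API.
scripted = torch.jit.script(TinyNet())
scripted.save("tiny_net.pt")

# The saved archive can be reloaded anywhere libtorch runs.
reloaded = torch.jit.load("tiny_net.pt")
x = torch.randn(1, 4)
assert torch.allclose(scripted(x), reloaded(x))
```

The same .pt artifact is what the mobile APIs consume, so no separate export format is needed.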

New tools

Interpretability tool Captum

As AI models become more complex, it becomes increasingly important to develop new approaches to model interpretability. To meet this need, Facebook launched Captum.

Captum can help developers using PyTorch understand why their model generates a particular output. Captum provides advanced tools to understand how specific neurons and layers influence the predictions made by models.

Captum's algorithms include Integrated Gradients, Conductance, SmoothGrad, VarGrad, and DeepLift.

The following example shows how a model interpretability algorithm can be applied to a pre-trained ResNet model, with the result visualized by overlaying each pixel's attribution on the image.

from captum.attr import IntegratedGradients, NoiseTunnel
from captum.attr import visualization as viz
import numpy as np

# model, input, transformed_img, default_cmap and pred_label_idx are assumed defined
integrated_gradients = IntegratedGradients(model)
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(
    np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1, 2, 0)),
    np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1, 2, 0)),
    ["original_image", "heat_map"],
    ["all", "positive"],
    cmap=default_cmap,
    show_colorbar=True)

Learn more: www.captum.ai/

Encryption tool CrypTen

Practical use of ML through cloud or machine learning as a service (MLaaS) platforms presents a number of security and privacy challenges.

Users of these platforms may not want, or may not be able, to share unencrypted data, preventing them from taking full advantage of machine learning tools. To address these challenges, the machine learning community is exploring technological approaches of varying maturity, including homomorphic encryption, secure multiparty computation, trusted execution environments, on-device computation, and differential privacy.
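To give a flavor of one of these building blocks, here is a toy additive secret sharing scheme in plain Python (purely illustrative; this is not CrypTen's API). Each value is split into random shares so that no single party learns the secret, yet the parties can still compute a sum on the hidden values:

```python
import random

P = 2**31 - 1  # modulus; all arithmetic is done mod P

def share(secret, n_parties=3):
    """Split a secret into n random additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the hidden value."""
    return sum(shares) % P

a_shares = share(42)
b_shares = share(100)
# Each party adds its own two shares locally; reconstructing the
# resulting shares yields a + b without revealing a or b.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 142
```

Real secure multiparty computation systems like CrypTen build far more machinery on top of this idea (multiplication protocols, fixed-point encoding of tensors, and so on), but the principle of computing on shares is the same.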

To better understand how to apply some of these technologies, Facebook has launched CrypTen, a new community-based research platform designed to advance the field of privacy-preserving ML.

CrypTen more details: ai.facebook.com/blog/crypte…

CrypTen is now open source on GitHub: github.com/facebookres…

Multimodal AI system tools

Digital content on the web today often comes not in a single form but as a combination of text, images, audio, and video. PyTorch provides a new ecosystem of tools and libraries for building such multimodal machine learning systems.

Detectron2

Detectron2 is an object detection library implemented in PyTorch. It aids computer vision research with increased flexibility, and improves maintainability and scalability to support production use cases.

More details: ai.facebook.com/blog/-detec…

GitHub: github.com/facebookres…

Fairseq speech extension

Language translation and audio processing are key components in systems and applications such as search, translation, voice, and assistant. Thanks to the development of new architectures such as transformers and the development of large-scale pre-training methods, great strides have recently been made in these areas.

Facebook has extended Fairseq (a seq2seq application framework for language translation) with end-to-end learning support for speech and audio recognition tasks. These extensions to Fairseq can speed up the exploration and prototyping of new speech research.

GitHub: github.com/pytorch/fai…

Cloud support

Another exciting announcement at today's developer conference: PyTorch now fully supports Google Cloud TPUs and has been integrated into Alibaba Cloud. In addition, PyTorch expands its hardware ecosystem with support for two AI accelerators.

Google Cloud TPU

Thanks to the work of engineers from Facebook, Google, and Salesforce, the new PyTorch includes Cloud TPU support, with experimental support for Cloud TPU Pods supercomputers. Google Colab also provides PyTorch support on Cloud TPUs.

Alibaba Cloud

Alibaba Cloud's integration involves a one-click solution for PyTorch 1.x, a Data Science Workshop notebook service, distributed training using Gloo/NCCL, and seamless integration with Alibaba IaaS offerings such as OSS, ODPS, and NAS.

Hardware ecosystem expansion

In addition to the main GPU and CPU partners, the PyTorch ecosystem also supports dedicated machine learning accelerators, such as Intel's recently unveiled NNP-I inference chip and Habana Labs' AI processor.

Both companies have separately blogged about their hardware support for the PyTorch Glow optimized compiler, enabling developers to take advantage of these market-specific solutions.

Giving back to the AI community

Finally, Facebook would like to thank the AI community for its contribution to the PyTorch ecosystem, which has benefited from some of the developers’ excellent open source projects over the past few years. Such as:

1. Mila SpeechBrain: The All-in-one Speech Toolkit

"Bye, Kaldi! The PyTorch speech toolkit SpeechBrain is here, supporting multiple speech tasks"

2. spaCy: an advanced software library for NLP

3. HuggingFace PyTorch-Transformers: the latest pre-trained model library for NLP.

Call 27 NLP pretraining models in one API: BERT, GPT-2 all covered, as Easy as Importing NumPy

4. PyTorch Lightning: an ML library similar to Keras

“Keras on PyTorch, Distributed Training out of the box, say goodbye to Endless Debug”

Recently, Facebook held its first online global PyTorch Summer hackathon, with nearly 1,500 developers competing. Facebook announced the winners today. They are:

1. Torchmeta: Provides an extension to PyTorch to simplify the development of meta-learning algorithms in PyTorch.

2. Open-Unmix: a system for end-to-end music source separation built with PyTorch.

3. Endless AI-Generated Tees: an online shop selling AI-designed T-shirts, generated with a StyleGAN built in PyTorch and trained on modern art.

The author is a contracted writer for NetEase News · NetEase "Each Has Its Own Attitude".

- End -

Machine learning beginner

The official account created by Dr. Huang Haiguang, who has more than 22,000 followers on Zhihu and ranks in the top 120 on GitHub worldwide (31,000 stars). This account is dedicated to popular-science articles on artificial intelligence, providing learning paths and basic materials for beginners. Original works include Personal Notes on Machine Learning, Notes on Deep Learning, etc.

Past highlights

  • All those years of academic charity: you're not alone

  • Conscience recommendation: a summary of introductory machine learning materials and learning suggestions

  • Machine learning course notes and resources (GitHub 12,000+ stars, Baidu Cloud mirror provided)

  • Andrew Ng's deep learning notes, videos, and other resources (GitHub 8,500+ stars, Baidu Cloud mirror provided)

  • Python implementations of Statistical Learning Methods (GitHub 7,200+ stars)

  • Carefully organized and translated mathematical materials for machine learning

  • Introduction to Deep Learning: Python Deep Learning, with Chinese-annotated original code and ebook

  • Word2vec (original translation)

Note: To join our WeChat group or QQ group, please reply "Add group".

To join Knowledge Planet (4,200+ members, ID: 92416895), please reply "Knowledge Planet".