1. Basic theory

The deep residual shrinkage network is built from three components: the residual network, the attention mechanism, and soft thresholding.

Its functional features include:

1) Since soft thresholding is a common step in signal denoising algorithms, the deep residual shrinkage network is particularly suitable for data with strong noise and high redundancy. In addition, the gradient of soft thresholding is either 0 or 1, which is consistent with the ReLU activation function (a small sketch of the function and its gradient follows this list).

2) Since the threshold of soft thresholding is set adaptively through an attention mechanism similar to SENet, the deep residual shrinkage network can set a separate threshold for each sample according to that sample's own noise level, so it is applicable when the noise content differs from sample to sample.

3) When the noise in the data is very weak, or there is no noise at all, the deep residual shrinkage network may still be applicable. The premise is that the threshold can be trained to a value very close to zero, so that soft thresholding effectively becomes a no-op.

4) It is worth noting that the threshold of the soft threshold function must not be too large, otherwise all outputs would become 0. For this reason, the attention module of the deep residual shrinkage network is specially designed, and differs noticeably from that of the standard SENet.
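To make items 1) and 4) concrete: soft thresholding maps an input x to sign(x) * max(|x| - tau, 0) for a threshold tau. Below is a minimal PyTorch sketch (the helper name soft_threshold is ours, not from the paper) showing the function and its 0-or-1 gradient:

import torch

def soft_threshold(x, tau):
    # sign(x) * max(|x| - tau, 0): entries with |x| <= tau are zeroed,
    # larger entries are shrunk toward zero by tau
    return torch.sign(x) * torch.clamp(torch.abs(x) - tau, min=0.0)

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0], requires_grad=True)
y = soft_threshold(x, tau=1.0)
y.sum().backward()
print(y)       # small entries zeroed, large entries shrunk by 1.0
print(x.grad)  # 1 where |x| > tau, 0 elsewhere -- like ReLU's gradient

In the paper's attention module, the threshold is obtained as (sigmoid output) x (average of |x| over the feature map), which keeps it positive and strictly below the average absolute activation, so the all-zero failure mode described in item 4) cannot occur.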

Literature source for this method:

M. Zhao, S. Zhong, X. Fu, B. Tang, M. Pecht, "Deep residual shrinkage networks for fault diagnosis," IEEE Transactions on Industrial Informatics, vol. 16, no. 7, pp. 4681-4690, 2020. (https://ieeexplore.ieee.org/d)…

2. PyTorch code

The PyTorch code in this article is based on modifications to an existing codebase (https://github.com/weiaicunza)…, so first download that code locally. The main changes are to the models/resnet.py (https://github.com/weiaicunza)… and utils.py (https://github.com/weiaicunza)… files.

The core code of the residual shrinkage network, on the other hand, comes from a Zhihu article on applying residual shrinkage networks to fault diagnosis (https://zhuanlan.zhihu.com/p/)….

Specifically, the resnet.py file is renamed to rsnet.py, short for Residual Shrinkage Network. The modified rsnet.py code is as follows:

import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.shrinkage = Shrinkage(out_channels, gap_size=(1, 1))
        # residual function
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * BasicBlock.expansion, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels * BasicBlock.expansion),
            self.shrinkage
        )
        # shortcut
        self.shortcut = nn.Sequential()

        # if the shortcut output dimension does not match the residual function,
        # use a 1x1 convolution to match the dimension
        if stride != 1 or in_channels != BasicBlock.expansion * out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * BasicBlock.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels * BasicBlock.expansion)
            )

    def forward(self, x):
        return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))


class Shrinkage(nn.Module):
    def __init__(self, channel, gap_size):
        super(Shrinkage, self).__init__()
        self.gap = nn.AdaptiveAvgPool2d(gap_size)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel),
            nn.BatchNorm1d(channel),
            nn.ReLU(inplace=True),
            nn.Linear(channel, channel),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x_raw = x
        x = torch.abs(x)
        x_abs = x
        x = self.gap(x)
        x = torch.flatten(x, 1)
        # average = torch.mean(x, dim=1, keepdim=True)
        average = x
        x = self.fc(x)
        # threshold = scaling coefficient in (0, 1) times the channel-wise average of |x|
        x = torch.mul(average, x)
        x = x.unsqueeze(2).unsqueeze(2)
        # soft thresholding: sign(x_raw) * max(|x| - threshold, 0)
        sub = x_abs - x
        zeros = sub - sub  # a zero tensor with the same shape as sub
        n_sub = torch.max(sub, zeros)
        x = torch.mul(torch.sign(x_raw), n_sub)
        return x


class RSNet(nn.Module):
    def __init__(self, block, num_block, num_classes=100):
        super().__init__()
        self.in_channels = 64
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True))
        # we use a different input size than the original paper,
        # so conv2_x's stride is 1
        self.conv2_x = self._make_layer(block, 64, num_block[0], 1)
        self.conv3_x = self._make_layer(block, 128, num_block[1], 2)
        self.conv4_x = self._make_layer(block, 256, num_block[2], 2)
        self.conv5_x = self._make_layer(block, 512, num_block[3], 2)
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, out_channels, num_blocks, stride):
        """make rsnet layers (here a "layer" is not a single neural-network
        layer such as a conv layer; one layer may contain more than one
        residual shrinkage block)

        Args:
            block: block type, basic block or bottleneck block
            out_channels: output depth channel number of this layer
            num_blocks: how many blocks per layer
            stride: the stride of the first block of this layer

        Return:
            a rsnet layer
        """
        # we have num_blocks blocks per layer; the stride of the first block
        # could be 1 or 2, the other blocks always have stride 1
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        output = self.conv1(x)
        output = self.conv2_x(output)
        output = self.conv3_x(output)
        output = self.conv4_x(output)
        output = self.conv5_x(output)
        output = self.avg_pool(output)
        output = output.view(output.size(0), -1)
        output = self.fc(output)
        return output


def rsnet18():
    """ return a RSNet 18 object """
    return RSNet(BasicBlock, [2, 2, 2, 2])


def rsnet34():
    """ return a RSNet 34 object """
    return RSNet(BasicBlock, [3, 4, 6, 3])
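As a quick sanity check (a minimal sketch, assuming rsnet.py is saved under models/), the network can be run on a random batch of CIFAR-sized images:

import torch
from models.rsnet import rsnet18

net = rsnet18()
x = torch.randn(4, 3, 32, 32)  # a random batch of four 32x32 RGB images
y = net(x)
print(y.shape)  # expected: torch.Size([4, 100])

Note that the batch size must be greater than 1 here, because the Shrinkage module's BatchNorm1d cannot normalize a single-sample batch in training mode.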

Then, lines 62-64 in the utils.py file:

    elif args.net == 'resnet18':
        from models.resnet import resnet18
        net = resnet18()

are amended to:

    elif args.net == 'rsnet18':
        from models.rsnet import rsnet18
        net = rsnet18()
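
Since rsnet.py also defines rsnet34, a parallel branch can be added to the same elif chain if desired (a sketch following the same pattern; not part of the minimal modification):

    elif args.net == 'rsnet34':
        from models.rsnet import rsnet34
        net = rsnet34()

Training would then be started analogously with python train.py -net rsnet34 -gpu.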

Then, type the following in the terminal:

python train.py -net rsnet18 -gpu

The program is now ready to run.

3. Other code

The paper's authors provide TFLearn and Keras code on GitHub; see: https://github.com/zhao62/Dee…

TensorFlow 2.0 code has also been written by others; see: https://blog.csdn.net/qq_3675…