As shown above, the log is divided into three parts

Part ONE

The first line shows the initial weights being loaded. The second line reports:

  1. Learning Rate: the current learning rate, printed in scientific notation once it has more than four decimal places.
  2. Momentum: the current momentum parameter.
  3. Decay: the current weight-decay regularization coefficient.

The third line, Resizing, reports that the input images are being resized to the network's input size. The fourth line appears when random=1 is set in the .cfg file (enabling random multi-scale training): the network input size (width = height) is drawn at random from 320 to 608 in multiples of 32, and changes every 10 batches.
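The multi-scale behavior above can be sketched as follows. This is an illustrative Python rendering, not darknet's actual C code; the function name and the deterministic per-window reseeding are my own devices to show that the size stays fixed within each 10-batch window.

```python
import random

def pick_network_size(batch_index, low=320, high=608, step=32, interval=10):
    """Pick a square network input size for random multi-scale training.

    Mimics what happens when random=1: every `interval` batches a new
    width = height is drawn from {low, low+step, ..., high}.
    (Illustrative sketch, not darknet's actual implementation.)
    """
    # Seed per 10-batch window, so the size only changes every `interval` batches.
    rng = random.Random(batch_index // interval)
    n_choices = (high - low) // step + 1   # 320..608 in steps of 32 -> 10 sizes
    return low + step * rng.randrange(n_choices)

# Within one 10-batch window the chosen size is constant:
sizes = {pick_network_size(i) for i in range(10)}   # one window -> one size
```

Note that all candidate sizes are multiples of 32, matching the network's stride.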

Part TWO

1. On the overall amount of output

One iteration processes one full batch of training images. How the batch is split is determined by the subdivisions parameter in the .cfg file. For example, with batch=12 and subdivisions=4 in the .cfg file, the 12 images are split into 4 groups, and 3 images are sent through the network at a time. That is why, in the screenshot, the Region 82, Region 94, and Region 106 lines appear in four groups, one per subdivision.
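The batch/subdivisions arithmetic is simple enough to state directly (the helper name below is mine, for illustration):

```python
def minibatch_layout(batch, subdivisions):
    """Return (images per mini-batch, number of forward passes per iteration).

    One logged iteration processes `batch` images, split into `subdivisions`
    mini-batches that are forwarded through the network one at a time.
    """
    assert batch % subdivisions == 0, "batch must be divisible by subdivisions"
    return batch // subdivisions, subdivisions

per_group, groups = minibatch_layout(batch=12, subdivisions=4)
# 3 images per group, 4 groups per iteration
```

Larger subdivisions values trade speed for memory: each forward pass holds fewer images on the GPU.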

Region 82, Region 94, and Region 106 are the three YOLO detection layers (layers 82, 94, and 106 of the YOLOv3 network), each predicting boxes at a different scale; the statistics that follow are reported separately for each scale.

2. Parameter descriptions

| Parameter | Description |
| --- | --- |
| Loaded | Time taken to load this batch of images. |
| Avg IOU | Average IoU between predicted and ground-truth boxes for the samples in the current subdivision; expected to approach 1. |
| Class | Classification accuracy on labeled objects; expected to approach 1. |
| Obj | Average objectness confidence on positive samples; expected to approach 1. |
| No Obj | Average objectness confidence on background; expected to shrink over training, but not to reach zero. |
| .5R | Recall at IoU threshold 0.5 (Recall/count): the fraction of actual positives in the subdivision's samples detected by the current model; expected to approach 1. |
| .75R | Recall at IoU threshold 0.75, computed the same way but with the stricter overlap requirement. |
| count | Number of positive-sample labels in the current subdivision's images. |

(Some sources say count is the number of positive-sample pictures, but it is sometimes larger than the number of pictures, so it is taken to be the number of labels; if this is wrong, please leave a comment.)
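A small parser makes these fields easy to track across a long log. The exact line layout can differ between darknet versions, so treat the sample line and regex below as an assumption based on the fields described above:

```python
import re

# A sample Region line in the shape this section describes (an assumption;
# check it against your own log before relying on the pattern).
LINE = ("Region 82 Avg IOU: 0.789912, Class: 0.932424, Obj: 0.657231, "
        "No Obj: 0.004567, .5R: 1.000000, .75R: 0.500000, count: 4")

def parse_region_line(line):
    """Extract per-scale statistics into a dict (layer and count as ints)."""
    pattern = (r"Region (?P<layer>\d+) Avg IOU: (?P<avg_iou>[\d.]+), "
               r"Class: (?P<cls>[\d.]+), Obj: (?P<obj>[\d.]+), "
               r"No Obj: (?P<no_obj>[\d.]+), \.5R: (?P<r50>[\d.]+), "
               r"\.75R: (?P<r75>[\d.]+), count: (?P<count>\d+)")
    m = re.match(pattern, line)
    if m is None:
        return None          # line is not a Region statistics line
    stats = {k: float(v) for k, v in m.groupdict().items()}
    stats["layer"] = int(stats["layer"])
    stats["count"] = int(stats["count"])
    return stats
```

Averaging `avg_iou` per layer over many iterations gives a rough per-scale training curve.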

Part THREE

Batch output: a summary of the results of this training batch.

| Field | Description |
| --- | --- |
| 22201 | Number of iterations trained so far. |
| 0.907749 | Total loss for this batch. |
| 0.907749 avg | Average loss; the smaller the better. Training is generally terminated once it falls below 0.060730. |
| 0.000001 rate | Current learning rate. |
| 5.067780 seconds | Total time spent training the current batch. |
| 1776080 images | Total number of images processed so far. |

(The 0.060730 threshold is a figure circulating online; if you know its origin, please leave a comment.)

(The images figure is not the number of distinct pictures: images are reused across epochs, so it equals the iteration count times the batch size.)
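The fields above can be pulled apart with a short parser; the line shape is an assumption pieced together from the values this section lists, so verify it against your own log. It also shows how the images count relates to the iteration count:

```python
def parse_batch_summary(line):
    """Split the per-iteration summary line into named fields.

    Assumed shape (based on the fields described above):
    '22201: 0.907749, 0.907749 avg, 0.000001 rate, 5.067780 seconds, 1776080 images'
    """
    head, rest = line.split(":", 1)
    parts = [p.strip() for p in rest.split(",")]
    return {
        "iteration": int(head),
        "loss": float(parts[0]),
        "avg_loss": float(parts[1].split()[0]),
        "rate": float(parts[2].split()[0]),
        "seconds": float(parts[3].split()[0]),
        "images": int(parts[4].split()[0]),
    }

info = parse_batch_summary(
    "22201: 0.907749, 0.907749 avg, 0.000001 rate, 5.067780 seconds, 1776080 images")
# images grows by `batch` each iteration, so images / iteration recovers batch:
batch = info["images"] // info["iteration"]   # 1776080 / 22201 = 80
```

Here 1776080 / 22201 = 80, which is consistent with images being iterations times the batch size for a run with batch=80.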