In HEVC, only a translational motion model is used for motion-compensated prediction, whereas real-world motion is far more varied, e.g. scaling (zoom in/out), rotation, perspective motion, and other irregular motions. VTM5 therefore adopts block-based affine transform motion-compensated prediction. As shown in the figure below, the affine motion field of a block is described by the motion vectors of two control points (4-parameter model) or three control points (6-parameter model).

 

The block-based affine motion compensation method is as follows:

1. The block is first divided into 4×4 luma sub-blocks.

2. For each luma sub-block, the motion vector of its center sample is derived from the control-point motion vectors using the formulas below and rounded to 1/16 fractional-sample accuracy.

For the 4-parameter affine motion model, the motion vector of the sub-block whose center sample is at (x, y) is calculated as follows:

$$\begin{cases} mv_x = \dfrac{mv_{1x}-mv_{0x}}{W}\,x + \dfrac{mv_{0y}-mv_{1y}}{W}\,y + mv_{0x} \\[6pt] mv_y = \dfrac{mv_{1y}-mv_{0y}}{W}\,x + \dfrac{mv_{1x}-mv_{0x}}{W}\,y + mv_{0y} \end{cases}$$

For the 6-parameter affine motion model, the motion vector of the sub-block whose center sample is at (x, y) is calculated as follows:

$$\begin{cases} mv_x = \dfrac{mv_{1x}-mv_{0x}}{W}\,x + \dfrac{mv_{2x}-mv_{0x}}{H}\,y + mv_{0x} \\[6pt] mv_y = \dfrac{mv_{1y}-mv_{0y}}{W}\,x + \dfrac{mv_{2y}-mv_{0y}}{H}\,y + mv_{0y} \end{cases}$$

where (mv0x, mv0y), (mv1x, mv1y) and (mv2x, mv2y) are the motion vectors of the top-left, top-right and bottom-left control points, respectively, and W and H are the width and height of the block.

3. After the motion vector of each sub-block is obtained (as shown in the figure below), the prediction of each sub-block is generated by motion-compensated interpolation filtering with that motion vector.

 

4. The chroma components are also divided into 4×4 sub-blocks; the motion vector of a 4×4 chroma sub-block is the average of the motion vectors of the four corresponding 4×4 luma sub-blocks. A simplified sketch of the luma sub-block MV derivation follows this list.
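To make steps 1–3 concrete, here is a minimal sketch of the luma sub-block MV derivation for the 4-parameter model. It is not the VTM code (VTM works in fixed point and rounds/clips the result); the function and type names here are made up for illustration:

#include <vector>

// Simplified sketch of 4x4 sub-block MV derivation for the 4-parameter affine
// model. All names are illustrative, not taken from VTM.
struct MvF { float x, y; };

// cpmv0: top-left control-point MV, cpmv1: top-right control-point MV,
// w, h: block width/height in luma samples.
std::vector<MvF> deriveSubblockMvs4Param( MvF cpmv0, MvF cpmv1, int w, int h )
{
  std::vector<MvF> subMvs;
  const float a = (cpmv1.x - cpmv0.x) / w;   // horizontal gradient of the affine field
  const float b = (cpmv1.y - cpmv0.y) / w;   // vertical gradient of the affine field

  for (int y = 0; y < h; y += 4)
  {
    for (int x = 0; x < w; x += 4)
    {
      // center of the 4x4 sub-block
      const float cx = x + 2.0f;
      const float cy = y + 2.0f;

      // 4-parameter affine model (rotation + scaling + translation)
      MvF mv;
      mv.x = a * cx - b * cy + cpmv0.x;
      mv.y = b * cx + a * cy + cpmv0.y;

      // In VTM the result is then rounded to 1/16-sample precision and the
      // prediction is generated by interpolation filtering with this MV.
      subMvs.push_back( mv );
    }
  }
  return subMvs;
}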

Similar to conventional inter motion vector prediction, affine motion vectors have two prediction modes: affine merge mode and affine AMVP mode.

Affine Merge Prediction

The AF_MERGE mode can be used for CUs with both width and height greater than or equal to 8. In this mode, the CPMVs (control point motion vectors) of the current CU are derived from the motion information of its spatially neighboring CUs. Up to five CPMV candidates can be generated, and an index is signalled to indicate which candidate is ultimately used. The affine merge list is built from the following three types of CPMV candidates:

1. Inherited CPMV candidates, extrapolated from the CPMVs of neighboring affine-coded CUs.

2. Constructed CPMV candidates, built from the translational MVs of neighboring CUs.

3. Zero MVs.

There are at most two candidates of type 1 in VTM5: one inherited from the left neighboring CUs and one inherited from the above neighboring CUs. As shown in the figure below, the scan order on the left is A0 -> A1, and on the top it is B0 -> B1 -> B2. For each side, only the first affine-coded CU found in scan order is used. No pruning is performed between inherited candidates.

 

When a neighboring CU is selected, its CPMVs are used to derive a candidate for the current CU's affine merge list. As shown in the figure below, if block A at the bottom left is selected: when A uses the 4-parameter affine model, the two CPMVs of the current CU are calculated from v2 and v3; when A uses the 6-parameter affine model, the three CPMVs of the current CU are calculated from v2, v3 and v4. The extrapolation is sketched below.
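The idea behind this extrapolation can be sketched as follows. This is a simplified floating-point illustration of what VTM's xInheritedAffineMv does, not the actual implementation; all names are illustrative, and for brevity only the two CPMVs needed by a 4-parameter current CU are derived:

struct MvF { float x, y; };

// Derive the current CU's control-point MVs from a neighboring affine CU.
// (xNb, yNb, wNb, hNb): position/size of the neighbor; vLT/vRT/vLB its CPMVs.
// (xCur, yCur, wCur): position/width of the current CU. Simplified sketch only.
void inheritAffineCpmv( int xNb, int yNb, int wNb, int hNb,
                        MvF vLT, MvF vRT, MvF vLB, bool neighIs6Param,
                        int xCur, int yCur, int wCur,
                        MvF &cpmv0, MvF &cpmv1 )
{
  // affine parameters of the neighboring CU
  float a = (vRT.x - vLT.x) / wNb;
  float b = (vRT.y - vLT.y) / wNb;
  float c, d;
  if (neighIs6Param)
  {
    c = (vLB.x - vLT.x) / hNb;   // 6-parameter: independent vertical gradient
    d = (vLB.y - vLT.y) / hNb;
  }
  else
  {
    c = -b;                      // 4-parameter: vertical gradient is tied to
    d =  a;                      // the horizontal one (rotation/scaling model)
  }

  // evaluate the neighbor's affine motion field at the current CU's corners
  auto eval = [&]( int x, int y ) {
    MvF mv;
    mv.x = a * (x - xNb) + c * (y - yNb) + vLT.x;
    mv.y = b * (x - xNb) + d * (y - yNb) + vLT.y;
    return mv;
  };
  cpmv0 = eval( xCur,        yCur );   // top-left CPMV of the current CU
  cpmv1 = eval( xCur + wCur, yCur );   // top-right CPMV of the current CU
}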

 

For type 2 (constructed) candidates, each control point is derived from a specific spatial or temporal neighbor, as shown in the figure below. CPMVk (k = 1, 2, 3, 4) denotes the k-th control point. CPMV1 is taken from the MV of the first available block in the order B2 -> B3 -> A2; CPMV2 from the first available block in B1 -> B0; CPMV3 from the first available block in A1 -> A0; CPMV4, if available, is derived by TMVP.

 

After the MVs of the four control points are obtained, the constructed affine merge candidates are built from the following combinations of control-point MVs:

{CPMV1, CPMV2, CPMV3}, {CPMV1, CPMV2, CPMV4}, {CPMV1, CPMV3, CPMV4}, {CPMV2, CPMV3, CPMV4}, {CPMV1, CPMV2}, {CPMV1, CPMV3}

A combination of 3 CPMVs forms a 6-parameter affine merge candidate, and a combination of 2 CPMVs forms a 4-parameter affine merge candidate. To avoid MV scaling, a combination is discarded if its control points do not all use the same reference picture, as sketched below.
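The availability and reference-picture check for one combination might look like the following sketch (illustrative only; the names are not taken from VTM):

#include <array>

// Illustrative sketch: a constructed affine merge candidate is kept only if
// all control points in the combination are available and use the same
// reference picture, so that no MV scaling is needed.
struct CpInfo { bool available; int refIdx; };   // per control point (LT, RT, LB, RB)

bool combinationIsValid( const std::array<CpInfo, 4> &cp, const int *combo, int numCp )
{
  // every control point in the combination must be available ...
  for (int i = 0; i < numCp; i++)
  {
    if (!cp[combo[i]].available)
      return false;
  }
  // ... and refer to the same reference picture as the first one
  for (int i = 1; i < numCp; i++)
  {
    if (cp[combo[i]].refIdx != cp[combo[0]].refIdx)
      return false;
  }
  return true;
}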

If the affine merge list is still not full after adding type 1 and type 2 candidates, it is padded with zero MVs.

Affine AMVP Prediction

Affine AMVP mode can be used for CUs whose width and height are both 16 or greater. In merge mode the predicted CPMVs are used directly, whereas in AMVP mode the differences between the optimal CPMVs of the current CU and their predictors must also be signalled (a small sketch of this reconstruction follows the list below). The affine AMVP candidate list contains two candidates, generated from the following four types of CPMV candidates:

1. Inherited CPMV candidates, extrapolated from the CPMVs of neighboring affine-coded CUs.

2. Constructed CPMV candidates, built from the translational MVs of neighboring CUs.

3. Translational MVs of neighboring CUs.

4. Zero MVs.
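Following the description above, the decoder-side CPMV reconstruction in affine AMVP can be pictured as predictor plus signalled difference. This is a conceptual sketch, not the VTM code; the names are illustrative:

// Sketch: in affine AMVP the decoder reconstructs each control-point MV as
// predictor + signalled difference (MVD). Names are illustrative only.
struct MvI { int x, y; };

void reconstructCpmvs( const MvI pred[3], const MvI mvd[3], int numCp, MvI cpmv[3] )
{
  for (int i = 0; i < numCp; i++)   // numCp = 2 (4-parameter) or 3 (6-parameter)
  {
    cpmv[i].x = pred[i].x + mvd[i].x;
    cpmv[i].y = pred[i].y + mvd[i].y;
  }
}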

Type 1 candidates are constructed in the same way as in affine merge; the only difference is that the neighboring CU must use the same reference picture as the current CU.

Type 2 candidates are also constructed in the same way as in affine merge, except that the reference picture index of the neighboring block is checked as well: only the first inter-coded block in scan order whose reference picture is the same as that of the current CU is used. If the current CU uses the 4-parameter affine model and both mv0 and mv1 are available, they are added as one candidate to the affine AMVP list; if the current CU uses the 6-parameter affine model and all three CPMVs are available, they are added as one candidate. Otherwise, the type 2 candidate is regarded as unavailable.

If the affine AMVP list still contains fewer than two candidates after type 1 and type 2 have been added, mv0, mv1 and mv2 are added in order as translational MVs to predict all CPMVs of the current CU. Finally, if the list is still not full, it is padded with zero MVs.

In VTM5, the affine AMVP predictor candidates are stored in the following structure, which holds one CPMV array per control-point corner (top-left, top-right, bottom-left):

struct AffineAMVPInfo
{
  Mv       mvCandLT[ AMVP_MAX_NUM_CANDS_MEM ];  ///< array of affine motion vector predictor candidates for left-top corner
  Mv       mvCandRT[ AMVP_MAX_NUM_CANDS_MEM ];  ///< array of affine motion vector predictor candidates for right-top corner
  Mv       mvCandLB[ AMVP_MAX_NUM_CANDS_MEM ];  ///< array of affine motion vector predictor candidates for left-bottom corner
  unsigned numCand;                       ///< number of motion vector predictor candidates
};

Here is the VTM code that builds the affine AMVP candidate list (PU::fillAffineMvpCand):

void PU::fillAffineMvpCand(PredictionUnit &pu, const RefPicList &eRefPicList, const int &refIdx, AffineAMVPInfo &affiAMVPInfo)
{
  affiAMVPInfo.numCand = 0;

  if (refIdx < 0)
  {
    return;
  }

  //!< CPMV candidates inherited from neighboring affine CUs
  // insert inherited affine candidates
  Mv outputAffineMv[3];
  Position posLT = pu.Y().topLeft();
  Position posRT = pu.Y().topRight();
  Position posLB = pu.Y().bottomLeft();

  // check left neighbor
  if ( !addAffineMVPCandUnscaled( pu, eRefPicList, refIdx, posLB, MD_BELOW_LEFT, affiAMVPInfo ) )
  {
    addAffineMVPCandUnscaled( pu, eRefPicList, refIdx, posLB, MD_LEFT, affiAMVPInfo );
  }

  // check above neighbor
  if ( !addAffineMVPCandUnscaled( pu, eRefPicList, refIdx, posRT, MD_ABOVE_RIGHT, affiAMVPInfo ) )
  {
    if ( !addAffineMVPCandUnscaled( pu, eRefPicList, refIdx, posRT, MD_ABOVE, affiAMVPInfo ) )
    {
      addAffineMVPCandUnscaled( pu, eRefPicList, refIdx, posLT, MD_ABOVE_LEFT, affiAMVPInfo );
    }
  }

  if ( affiAMVPInfo.numCand >= AMVP_MAX_NUM_CANDS )
  {
    for (int i = 0; i < affiAMVPInfo.numCand; i++)
    {
      affiAMVPInfo.mvCandLT[i].roundAffinePrecInternal2Amvr(pu.cu->imv);
      affiAMVPInfo.mvCandRT[i].roundAffinePrecInternal2Amvr(pu.cu->imv);
      affiAMVPInfo.mvCandLB[i].roundAffinePrecInternal2Amvr(pu.cu->imv);
    }
    return;
  }

  //!< CPMV constructed from translational MVs of neighboring CUs
  // insert constructed affine candidates
  int cornerMVPattern = 0;

  //------------------- V0 (START) -------------------//
  AMVPInfo amvpInfo0;
  amvpInfo0.numCand = 0;

  // A->C: Above Left, Above, Left
  addMVPCandUnscaled( pu, eRefPicList, refIdx, posLT, MD_ABOVE_LEFT, amvpInfo0 );
  if ( amvpInfo0.numCand < 1 )
  {
    addMVPCandUnscaled( pu, eRefPicList, refIdx, posLT, MD_ABOVE, amvpInfo0 );
  }
  if ( amvpInfo0.numCand < 1 )
  {
    addMVPCandUnscaled( pu, eRefPicList, refIdx, posLT, MD_LEFT, amvpInfo0 );
  }
  cornerMVPattern = cornerMVPattern | amvpInfo0.numCand;

  //------------------- V1 (START) -------------------//
  AMVPInfo amvpInfo1;
  amvpInfo1.numCand = 0;

  // D->E: Above, Above Right
  addMVPCandUnscaled( pu, eRefPicList, refIdx, posRT, MD_ABOVE, amvpInfo1 );
  if ( amvpInfo1.numCand < 1 )
  {
    addMVPCandUnscaled( pu, eRefPicList, refIdx, posRT, MD_ABOVE_RIGHT, amvpInfo1 );
  }
  cornerMVPattern = cornerMVPattern | (amvpInfo1.numCand << 1);

  //------------------- V2 (START) -------------------//
  AMVPInfo amvpInfo2;
  amvpInfo2.numCand = 0;

  // F->G: Left, Below Left
  addMVPCandUnscaled( pu, eRefPicList, refIdx, posLB, MD_LEFT, amvpInfo2 );
  if ( amvpInfo2.numCand < 1 )
  {
    addMVPCandUnscaled( pu, eRefPicList, refIdx, posLB, MD_BELOW_LEFT, amvpInfo2 );
  }
  cornerMVPattern = cornerMVPattern | (amvpInfo2.numCand << 2);

  outputAffineMv[0] = amvpInfo0.mvCand[0];
  outputAffineMv[1] = amvpInfo1.mvCand[0];
  outputAffineMv[2] = amvpInfo2.mvCand[0];

  outputAffineMv[0].roundAffinePrecInternal2Amvr(pu.cu->imv);
  outputAffineMv[1].roundAffinePrecInternal2Amvr(pu.cu->imv);
  outputAffineMv[2].roundAffinePrecInternal2Amvr(pu.cu->imv);

  if ( cornerMVPattern == 7 || (cornerMVPattern == 3 && pu.cu->affineType == AFFINEMODEL_4PARAM) )
  {
    affiAMVPInfo.mvCandLT[affiAMVPInfo.numCand] = outputAffineMv[0];
    affiAMVPInfo.mvCandRT[affiAMVPInfo.numCand] = outputAffineMv[1];
    affiAMVPInfo.mvCandLB[affiAMVPInfo.numCand] = outputAffineMv[2];
    affiAMVPInfo.numCand++;
  }

  if ( affiAMVPInfo.numCand < 2 )
  {
    // check corner MVs
    for ( int i = 2; i >= 0 && affiAMVPInfo.numCand < AMVP_MAX_NUM_CANDS; i-- )
    {
      if ( cornerMVPattern & (1 << i) ) // MV i exist
      {
        affiAMVPInfo.mvCandLT[affiAMVPInfo.numCand] = outputAffineMv[i];
        affiAMVPInfo.mvCandRT[affiAMVPInfo.numCand] = outputAffineMv[i];
        affiAMVPInfo.mvCandLB[affiAMVPInfo.numCand] = outputAffineMv[i];
        affiAMVPInfo.numCand++;
      }
    }

    // Get Temporal Motion Predictor
    if ( affiAMVPInfo.numCand < 2 && pu.cs->slice->getEnableTMVPFlag() )
    {
      const int refIdxCol = refIdx;

      Position posRB = pu.Y().bottomRight().offset( -3, -3 );

      const PreCalcValues& pcv = *pu.cs->pcv;

      Position posC0;
      bool C0Avail = false;
      Position posC1 = pu.Y().center();
#if !JVET_N0266_SMALL_BLOCKS
      bool C1Avail = ( posC1.x < pcv.lumaWidth ) && ( posC1.y < pcv.lumaHeight );
#endif
      Mv cColMv;
      if ( ((posRB.x + pcv.minCUWidth) < pcv.lumaWidth) && ((posRB.y + pcv.minCUHeight) < pcv.lumaHeight) )
      {
        Position posInCtu( posRB.x & pcv.maxCUWidthMask, posRB.y & pcv.maxCUHeightMask );

        if ( (posInCtu.x + 4 < pcv.maxCUWidth) &&           // is not at the last column of CTU
          (posInCtu.y + 4 < pcv.maxCUHeight) )              // is not at the last row of CTU
        {
          posC0 = posRB.offset( 4, 4 );
          C0Avail = true;
        }
        else if ( posInCtu.x + 4 < pcv.maxCUWidth )           // is not at the last column of CTU But is last row of CTU
        {
          // in the reference the CTU address is not set - thus probably resulting in no using this C0 possibility
          posC0 = posRB.offset( 4, 4 );
        }
        else if ( posInCtu.y + 4 < pcv.maxCUHeight )          // is not at the last row of CTU But is last column of CTU
        {
          posC0 = posRB.offset( 4, 4 );
          C0Avail = true;
        }
        else //is the right bottom corner of CTU
        {
          // same as for last column but not last row
          posC0 = posRB.offset( 4, 4 );
        }
      }
#if JVET_N0266_SMALL_BLOCKS
      if ( ( C0Avail && getColocatedMVP( pu, eRefPicList, posC0, cColMv, refIdxCol ) ) || getColocatedMVP( pu, eRefPicList, posC1, cColMv, refIdxCol ) )
#else
      if ( (C0Avail && getColocatedMVP( pu, eRefPicList, posC0, cColMv, refIdxCol )) || (C1Avail && getColocatedMVP( pu, eRefPicList, posC1, cColMv, refIdxCol ) ) )
#endif
      {
        cColMv.roundAffinePrecInternal2Amvr(pu.cu->imv);
        affiAMVPInfo.mvCandLT[affiAMVPInfo.numCand] = cColMv;
        affiAMVPInfo.mvCandRT[affiAMVPInfo.numCand] = cColMv;
        affiAMVPInfo.mvCandLB[affiAMVPInfo.numCand] = cColMv;
        affiAMVPInfo.numCand++;
      }
    }

    //!< zero vector
    if ( affiAMVPInfo.numCand < 2 )
    {
      // add zero MV
      for ( int i = affiAMVPInfo.numCand; i < AMVP_MAX_NUM_CANDS; i++ )
      {
        affiAMVPInfo.mvCandLT[affiAMVPInfo.numCand].setZero();
        affiAMVPInfo.mvCandRT[affiAMVPInfo.numCand].setZero();
        affiAMVPInfo.mvCandLB[affiAMVPInfo.numCand].setZero();
        affiAMVPInfo.numCand++;
      }
    }
  }

  for (int i = 0; i < affiAMVPInfo.numCand; i++)
  {
    affiAMVPInfo.mvCandLT[i].roundAffinePrecInternal2Amvr(pu.cu->imv);
    affiAMVPInfo.mvCandRT[i].roundAffinePrecInternal2Amvr(pu.cu->imv);
    affiAMVPInfo.mvCandLB[i].roundAffinePrecInternal2Amvr(pu.cu->imv);
  }
}

Here is the VTM code that builds the affine merge candidate list (PU::getAffineMergeCand):

void PU::getAffineMergeCand( const PredictionUnit &pu, AffineMergeCtx& affMrgCtx, const int mrgCandIdx )
{
  const CodingStructure &cs = *pu.cs;
  const Slice &slice = *pu.cs->slice;
  const uint32_t maxNumAffineMergeCand = slice.getMaxNumAffineMergeCand();

  for ( int i = 0; i < maxNumAffineMergeCand; i++ )
  {
    for ( int mvNum = 0; mvNum < 3; mvNum++ )
    {
      affMrgCtx.mvFieldNeighbours[(i << 1) + 0][mvNum].setMvField( Mv(), -1 );
      affMrgCtx.mvFieldNeighbours[(i << 1) + 1][mvNum].setMvField( Mv(), -1 );
    }
    affMrgCtx.interDirNeighbours[i] = 0;
    affMrgCtx.affineType[i] = AFFINEMODEL_4PARAM;
    affMrgCtx.mergeType[i] = MRG_TYPE_DEFAULT_N;
    affMrgCtx.GBiIdx[i] = GBI_DEFAULT;
  }

  affMrgCtx.numValidMergeCand = 0;
  affMrgCtx.maxNumMergeCand = maxNumAffineMergeCand;

  bool enableSubPuMvp = slice.getSPS()->getSBTMVPEnabledFlag() && !(slice.getPOC() == slice.getRefPic(REF_PIC_LIST_0, 0)->getPOC() && slice.isIRAP());
  bool isAvailableSubPu = false;
  if ( enableSubPuMvp && slice.getEnableTMVPFlag() )
  {
    MergeCtx mrgCtx = *affMrgCtx.mrgCtx;
    bool tmpLICFlag = false;

    CHECK( mrgCtx.subPuMvpMiBuf.area() == 0 || !mrgCtx.subPuMvpMiBuf.buf, "Buffer not initialized" );
    mrgCtx.subPuMvpMiBuf.fill( MotionInfo() );

    int pos = 0;
    // Get spatial MV
    const Position posCurLB = pu.Y().bottomLeft();
    MotionInfo miLeft;

    //left
    const PredictionUnit* puLeft = cs.getPURestricted( posCurLB.offset( -1, 0 ), pu, pu.chType );
    const bool isAvailableA1 = puLeft && isDiffMER( pu, *puLeft ) && pu.cu != puLeft->cu && CU::isInter( *puLeft->cu );
    if ( isAvailableA1 )
    {
      miLeft = puLeft->getMotionInfo( posCurLB.offset( -1, 0 ) );

      // get Inter Dir
      mrgCtx.interDirNeighbours[pos] = miLeft.interDir;

      // get Mv from Left
      mrgCtx.mvFieldNeighbours[pos << 1].setMvField( miLeft.mv[0], miLeft.refIdx[0] );

      if ( slice.isInterB() )
      {
        mrgCtx.mvFieldNeighbours[(pos << 1) + 1].setMvField( miLeft.mv[1], miLeft.refIdx[1] );
      }
      pos++;
    }

    mrgCtx.numValidMergeCand = pos;

    isAvailableSubPu = getInterMergeSubPuMvpCand( pu, mrgCtx, tmpLICFlag, pos
      , 0
    );
    if ( isAvailableSubPu )
    {
      for ( int mvNum = 0; mvNum < 3; mvNum++ )
      {
        affMrgCtx.mvFieldNeighbours[(affMrgCtx.numValidMergeCand << 1) + 0][mvNum].setMvField( mrgCtx.mvFieldNeighbours[(pos << 1) + 0].mv, mrgCtx.mvFieldNeighbours[(pos << 1) + 0].refIdx );
        affMrgCtx.mvFieldNeighbours[(affMrgCtx.numValidMergeCand << 1) + 1][mvNum].setMvField( mrgCtx.mvFieldNeighbours[(pos << 1) + 1].mv, mrgCtx.mvFieldNeighbours[(pos << 1) + 1].refIdx );
      }
      affMrgCtx.interDirNeighbours[affMrgCtx.numValidMergeCand] = mrgCtx.interDirNeighbours[pos];

      affMrgCtx.affineType[affMrgCtx.numValidMergeCand] = AFFINE_MODEL_NUM;
      affMrgCtx.mergeType[affMrgCtx.numValidMergeCand] = MRG_TYPE_SUBPU_ATMVP;
      if ( affMrgCtx.numValidMergeCand == mrgCandIdx )
      {
        return;
      }

      affMrgCtx.numValidMergeCand++;

      // early termination
      if ( affMrgCtx.numValidMergeCand == maxNumAffineMergeCand )
      {
        return;
      }
    }
  }

  if ( slice.getSPS()->getUseAffine() )
  {
    ///> Start: inherited affine candidates
    const PredictionUnit* npu[5];
    int numAffNeighLeft = getAvailableAffineNeighboursForLeftPredictor( pu, npu );
    int numAffNeigh = getAvailableAffineNeighboursForAbovePredictor( pu, npu, numAffNeighLeft );
    for ( int idx = 0; idx < numAffNeigh; idx++ )
    {
      // derive Mv from Neigh affine PU
      Mv cMv[2][3];
      const PredictionUnit* puNeigh = npu[idx];
      pu.cu->affineType = puNeigh->cu->affineType;
      if ( puNeigh->interDir != 2 )
      {
        xInheritedAffineMv( pu, puNeigh, REF_PIC_LIST_0, cMv[0] );
      }
      if ( slice.isInterB() )
      {
        if ( puNeigh->interDir != 1 )
        {
          xInheritedAffineMv( pu, puNeigh, REF_PIC_LIST_1, cMv[1] );
        }
      }

      for ( int mvNum = 0; mvNum < 3; mvNum++ )
      {
        affMrgCtx.mvFieldNeighbours[(affMrgCtx.numValidMergeCand << 1) + 0][mvNum].setMvField( cMv[0][mvNum], puNeigh->refIdx[0] );
        affMrgCtx.mvFieldNeighbours[(affMrgCtx.numValidMergeCand << 1) + 1][mvNum].setMvField( cMv[1][mvNum], puNeigh->refIdx[1] );
      }
      affMrgCtx.interDirNeighbours[affMrgCtx.numValidMergeCand] = puNeigh->interDir;
      affMrgCtx.affineType[affMrgCtx.numValidMergeCand] = (EAffineModel)(puNeigh->cu->affineType);
      affMrgCtx.GBiIdx[affMrgCtx.numValidMergeCand] = puNeigh->cu->GBiIdx;

      if ( affMrgCtx.numValidMergeCand == mrgCandIdx )
      {
        return;
      }

      // early termination
      affMrgCtx.numValidMergeCand++;
      if ( affMrgCtx.numValidMergeCand == maxNumAffineMergeCand )
      {
        return;
      }
    }
    ///> End: inherited affine candidates

    ///> Start: Constructed affine candidates
    {
      MotionInfo mi[4];
      bool isAvailable[4] = { false };
#if JVET_N0481_BCW_CONSTRUCTED_AFFINE
      int8_t neighGbi[4] = { GBI_DEFAULT };
#endif
      // control point: LT B2->B3->A2
      const Position posLT[3] = { pu.Y().topLeft().offset( -1, -1 ), pu.Y().topLeft().offset( 0, -1 ), pu.Y().topLeft().offset( -1, 0 ) };
      for ( int i = 0; i < 3; i++ )
      {
        const Position pos = posLT[i];
        const PredictionUnit* puNeigh = cs.getPURestricted( pos, pu, pu.chType );

        if ( puNeigh && CU::isInter( *puNeigh->cu ) )
        {
          isAvailable[0] = true;
          mi[0] = puNeigh->getMotionInfo( pos );
#if JVET_N0481_BCW_CONSTRUCTED_AFFINE
          neighGbi[0] = puNeigh->cu->GBiIdx;
#endif
          break;
        }
      }

      // control point: RT B1->B0
      const Position posRT[2] = { pu.Y().topRight().offset( 0, -1 ), pu.Y().topRight().offset( 1, -1 ) };
      for ( int i = 0; i < 2; i++ )
      {
        const Position pos = posRT[i];
        const PredictionUnit* puNeigh = cs.getPURestricted( pos, pu, pu.chType );

        if ( puNeigh && CU::isInter( *puNeigh->cu ) )
        {
          isAvailable[1] = true;
          mi[1] = puNeigh->getMotionInfo( pos );
#if JVET_N0481_BCW_CONSTRUCTED_AFFINE
          neighGbi[1] = puNeigh->cu->GBiIdx;
#endif
          break;
        }
      }

      // control point: LB A1->A0
      const Position posLB[2] = { pu.Y().bottomLeft().offset( -1, 0 ), pu.Y().bottomLeft().offset( -1, 1 ) };
      for ( int i = 0; i < 2; i++ )
      {
        const Position pos = posLB[i];
        const PredictionUnit* puNeigh = cs.getPURestricted( pos, pu, pu.chType );

        if ( puNeigh && CU::isInter( *puNeigh->cu ) )
        {
          isAvailable[2] = true;
          mi[2] = puNeigh->getMotionInfo( pos );
#if JVET_N0481_BCW_CONSTRUCTED_AFFINE
          neighGbi[2] = puNeigh->cu->GBiIdx;
#endif
          break;
        }
      }

      // control point: RB
      if ( slice.getEnableTMVPFlag() )
      {
        //>> MTK colocated-RightBottom
        // offset the pos to be sure to "point" to the same position the uiAbsPartIdx would've pointed to
        Position posRB = pu.Y().bottomRight().offset( -3, -3 );

        const PreCalcValues& pcv = *cs.pcv;
        Position posC0;
        bool C0Avail = false;

        if ( ((posRB.x + pcv.minCUWidth) < pcv.lumaWidth) && ((posRB.y + pcv.minCUHeight) < pcv.lumaHeight) )
        {
          Position posInCtu( posRB.x & pcv.maxCUWidthMask, posRB.y & pcv.maxCUHeightMask );

          if ( (posInCtu.x + 4 < pcv.maxCUWidth) &&  // is not at the last column of CTU
            (posInCtu.y + 4 < pcv.maxCUHeight) )     // is not at the last row of CTU
          {
            posC0 = posRB.offset( 4, 4 );
            C0Avail = true;
          }
          else if ( posInCtu.x + 4 < pcv.maxCUWidth ) // is not at the last column of CTU But is last row of CTU
          {
            posC0 = posRB.offset( 4, 4 );
            // in the reference the CTU address is not set - thus probably resulting in no using this C0 possibility
          }
          else if ( posInCtu.y + 4 < pcv.maxCUHeight ) // is not at the last row of CTU But is last column of CTU
          {
            posC0 = posRB.offset( 4, 4 );
            C0Avail = true;
          }
          else //is the right bottom corner of CTU
          {
            posC0 = posRB.offset( 4, 4 );
            // same as for last column but not last row
          }
        }

        Mv        cColMv;
        int       refIdx = 0;
        bool      bExistMV = C0Avail && getColocatedMVP( pu, REF_PIC_LIST_0, posC0, cColMv, refIdx );
        if ( bExistMV )
        {
          mi[3].mv[0] = cColMv;
          mi[3].refIdx[0] = refIdx;
          mi[3].interDir = 1;
          isAvailable[3] = true;
        }

        if ( slice.isInterB() )
        {
          bExistMV = C0Avail && getColocatedMVP( pu, REF_PIC_LIST_1, posC0, cColMv, refIdx );
          if ( bExistMV )
          {
            mi[3].mv[1] = cColMv;
            mi[3].refIdx[1] = refIdx;
            mi[3].interDir |= 2;
            isAvailable[3] = true;
          }
        }
      }

      //------------------- insert model -------------------//
      int order[6] = { 0, 1, 2, 3, 4, 5 };
      int modelNum = 6;
      int model[6][4] = {
        { 0, 1, 2 },          // 0: LT, RT, LB
        { 0, 1, 3 },          // 1: LT, RT, RB
        { 0, 2, 3 },          // 2: LT, LB, RB
        { 1, 2, 3 },          // 3: RT, LB, RB
        { 0, 1 },             // 4: LT, RT
        { 0, 2 },             // 5: LT, LB
      };

      int verNum[6] = { 3, 3, 3, 3, 2, 2 };
      int startIdx = pu.cs->sps->getUseAffineType() ? 0 : 4;
      for ( int idx = startIdx; idx < modelNum; idx++ )
      {
        int modelIdx = order[idx];
#if JVET_N0481_BCW_CONSTRUCTED_AFFINE
        getAffineControlPointCand( pu, mi, neighGbi, isAvailable, model[modelIdx], modelIdx, verNum[modelIdx], affMrgCtx );
#else
        getAffineControlPointCand( pu, mi, isAvailable, model[modelIdx], modelIdx, verNum[modelIdx], affMrgCtx );
#endif
        if ( affMrgCtx.numValidMergeCand != 0 && affMrgCtx.numValidMergeCand - 1 == mrgCandIdx )
        {
          return;
        }

        // early termination
        if ( affMrgCtx.numValidMergeCand == maxNumAffineMergeCand )
        {
          return;
        }
      }
    }
    ///> End: Constructed affine candidates
  }

  ///> zero padding
  int cnt = affMrgCtx.numValidMergeCand;
  while ( cnt < maxNumAffineMergeCand )
  {
    for ( int mvNum = 0; mvNum < 3; mvNum++ )
    {
      affMrgCtx.mvFieldNeighbours[(cnt << 1) + 0][mvNum].setMvField( Mv( 0, 0 ), 0 );
    }
    affMrgCtx.interDirNeighbours[cnt] = 1;

    if ( slice.isInterB() )
    {
      for ( int mvNum = 0; mvNum < 3; mvNum++ )
      {
        affMrgCtx.mvFieldNeighbours[(cnt << 1) + 1][mvNum].setMvField( Mv( 0, 0 ), 0 );
      }
      affMrgCtx.interDirNeighbours[cnt] = 3;
    }
    affMrgCtx.affineType[cnt] = AFFINEMODEL_4PARAM;

    if ( cnt == mrgCandIdx )
    {
      return;
    }
    cnt++;
    affMrgCtx.numValidMergeCand++;
  }
}

If you are interested, please follow the WeChat official account "Video Coding".