This is the third article in my getting-started series.

“Mayiyahei” (蚂蚁牙黑) learning record & hands-on practice


Preparation

1. Environment configuration:

  • This case uses the PyTorch 1.0.0 framework

  • Specs: GPU: 1 × P100; CPU: 8 cores, 64 GiB memory; engine: Multi-Engine 1.0 (Python3, Recommended)

2. How to run the code:

  • Click the triangular Run button in the menu bar at the top of this page, or press Ctrl+Enter, to run the code in each cell

3. Instructions

  • This practice case follows teacher Hu Qi's document and article
  • This practice uses the OBS object storage service, which incurs a small fee. Until March 18 the blogger provides a free OBS data source for everyone to download and experiment with.
    • Sharing may stop without notice, once my package quota is used up.

4. Resource links

  • 1. Source code address

    # Choose one of the two addresses
    # address 1, provided by teacher Hu Qi
    !git clone https://codehub.devcloud.cn-north-4.huaweicloud.com/ai-pome-free00001/first-order-model.git
    # address 2, the blogger's Gitee link
    !git clone https://gitee.com/JiegangWu/first-order-model.git
  • 2. Model download

    • Download the data model from the AI marketplace

      # Download address
      https://marketplace.huaweicloud.com/markets/aihub/datasets/detail/?content_id=00bc20c3-2a00-4231-bdfd-dfa3eb62a46d
    • Download through OBS file sharing provided by the blogger

      import moxing as mox
      mox.file.copy_parallel('obs://lab-modelarts/lab01/first-order motion-model-20210226T075740Z-001.zip',
                             'first-order-motion-model.zip')
      mox.file.copy_parallel('obs://lab-modelarts/lab01/02.mp4', '02.mp4')

Steps

1. Set up the experimental environment

  • Use ModelArts to build the experimental environment
  • Create a development environment (Notebook); a quick sanity check of the new kernel is sketched below
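
Once the notebook is up, it is worth confirming that the kernel matches the spec above. A minimal sketch (the exact PyTorch build depends on the engine image you selected):

    import torch

    print(torch.__version__)          # expect a 1.0.x build on this engine
    print(torch.cuda.is_available())  # True on the GPU flavor
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # e.g. the P100 from the spec above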

2. Download the source code

  • Download the experimental source code (either of the two addresses from the resource list works; a runnable cell is shown below)
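
In a notebook cell the clone is a single command; these are the same two addresses from the resource list above (pick one):

    # address 1, provided by teacher Hu Qi
    !git clone https://codehub.devcloud.cn-north-4.huaweicloud.com/ai-pome-free00001/first-order-model.git
    # address 2, the blogger's Gitee link
    !git clone https://gitee.com/JiegangWu/first-order-model.git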

3. Download models and files

  • Download models and files (either way)

    import moxing as mox  # MoXing is the ModelArts SDK used to copy files from OBS
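    # The actual OBS copy, taken from the resource list above; note that the
    # blogger's bucket may stop being shared once the free quota runs out.
    mox.file.copy_parallel('obs://lab-modelarts/lab01/first-order motion-model-20210226T075740Z-001.zip',
                           'first-order-motion-model.zip')
    mox.file.copy_parallel('obs://lab-modelarts/lab01/02.mp4', '02.mp4')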
  • Unzip the files

    !unzip first-order-motion-model.zip
  • Move the template video into place

    !mv 02.mp4 first-order-motion-model/

4. Replace the variables and run the program

  • Parsing the source code

    import imageio
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    from skimage.transform import resize
    from IPython.display import HTML
    import warnings
    warnings.filterwarnings("ignore")

    # Replace this with your picture path; a 256*256 picture works best
    #source_image_path = '/home/ma-user/work/first-order-motion-model/02.png'
    #source_image_path = '/home/ma-user/work/first-order-motion-model/05.png'
    source_image_path = '/home/ma-user/work/05.png'
    source_image = imageio.imread(source_image_path)

    # Replace this with your video path
    #reader_path = '/home/ma-user/work/first-order-motion-model/02.mp4'
    reader_path = '/home/ma-user/work/02.mp4'
    reader = imageio.get_reader(reader_path)

    Adjust the image and video sizes to 256×256

    source_image = resize(source_image, (256, 256))[..., :3]
    
    fps = reader.get_meta_data()['fps']
    driving_video = []
    try:
      for im in reader:
        driving_video.append(im)
    except RuntimeError:
      pass
    reader.close()
    
    driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]
    
    def display(source, driving, generated=None):
        fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6))
    
        ims = []
        for i in range(len(driving)):
            cols = [source]
            cols.append(driving[i])
            if generated is not None:
                cols.append(generated[i])
            im = plt.imshow(np.concatenate(cols, axis=1), animated=True)
            plt.axis('off')
            ims.append([im])
    
        ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000)
        plt.close()
        return ani
    HTML(display(source_image, driving_video).to_html5_video())
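    # Optional: if the inline HTML5 preview does not render in the browser,
    # the same animation can be written to disk with matplotlib's standard
    # Animation.save API instead (a sketch; 'preview.mp4' is just an example
    # name, and ffmpeg must be available in the image).
    ani = display(source_image, driving_video)
    ani.save('/home/ma-user/work/preview.mp4', fps=fps)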
  • Replace file field variables

    source_image_path: path of the source picture.
    reader_path: path of the driving video.
    driving_video: the driving video; the facial movements of the person in it are the object to be transferred.
    source_image: the source picture; the expressions from the video are transferred onto the person in this picture.
    relative: whether the program uses relative or absolute coordinates for the key points of the video and picture. Relative coordinates are recommended; with absolute coordinates the character tends to distort after the transfer.
    adapt_scale: adapt the movement scale according to the convex hull of the key points.
    (How these switches are passed in is sketched after this list.)
  • Configure the model

    from demo import load_checkpoints

    generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
                                              checkpoint_path='/home/ma-user/work/first-order-motion-model/vox-cpk.pth.tar')
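
The relative and adapt_scale switches described above are keyword arguments of make_animation in the repo's demo.py. A minimal sketch, assuming the upstream signature where the second flag is named adapt_movement_scale:

    from demo import make_animation

    # relative=True: use relative key-point coordinates (recommended; absolute
    # coordinates tend to distort the face after the transfer).
    # adapt_movement_scale=True: scale the motion by the key-point convex hull.
    predictions = make_animation(source_image, driving_video, generator, kp_detector,
                                 relative=True, adapt_movement_scale=True)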

5. Generate result video 1 (no sound)

  • Video code block

    from demo import make_animation
    from skimage import img_as_ubyte

    predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=True)

    # Save the result video; it ends up under /home/ma-user/work/
    imageio.mimsave('../generated.mp4', [img_as_ubyte(frame) for frame in predictions], fps=fps)

    HTML(display(source_image, driving_video, predictions).to_html5_video())

6. Generate result video 2 (with sound)

  • Install third-party packages

    # Install moviepy, a handy video-editing library
    !pip install moviepy
  • Combine audio and video

    from moviepy.editor import *

    videoclip_1 = VideoFileClip(reader_path)
    videoclip_2 = VideoFileClip("../generated.mp4")

    # Extract the audio track from the driving video
    audio_1 = videoclip_1.audio
    # Attach it to the generated video and write the result
    videoclip_3 = videoclip_2.set_audio(audio_1)
    videoclip_3.write_videofile("../result.mp4", audio_codec="aac")
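
One caveat (my addition, not part of the original recipe): if the generated clip is slightly shorter than the driving video, moviepy may complain when the audio outlasts the video. Trimming the audio to the generated clip's duration before attaching it is a safe guard:

    audio_1 = videoclip_1.audio.set_duration(videoclip_2.duration)
    videoclip_3 = videoclip_2.set_audio(audio_1)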

7. Generate a video with watermark

    video = VideoFileClip("../result.mp4")
    logo = (ImageClip("/home/ma-user/work/first-order-motion-model/water.png")
            .set_duration(video.duration)       # watermark lasts for the whole video
            .resize(height=50)                  # watermark height
            .margin(right=0, top=0, opacity=1)  # watermark margin and opacity
            .set_pos(("left", "top")))          # watermark position

    final = CompositeVideoClip([video, logo])
    final.write_videofile("../result_water.mp4", audio_codec="aac")

    final_reader = imageio.get_reader("../result_water.mp4")
    fps = final_reader.get_meta_data()['fps']
    result_water_video = []
    try:
        for im in final_reader:
            result_water_video.append(im)
    except RuntimeError:
        pass
    final_reader.close()

    result_water_video = [resize(frame, (256, 256))[..., :3] for frame in result_water_video]
    HTML(display(source_image, driving_video, result_water_video).to_html5_video())