For more content, search for the public account Coder Space on WeChat or scan the QR code below.


Original article: https://mp.weixin.qq.com/s/l6QFzV9hpVEG2ZQxp0YFfA


I recently came across Prisma, an app for editing image styles. With it, you can apply different artistic styles to a target image to generate new images.

For example, you can take a photo of the Bund and apply a mosaic style to it to get a mosaic-style photo of the Bund. The results look pretty amazing.



In more technical terms, this kind of image-processing operation is called image style transfer. The style of an image is the counterpart of its content, and an image can be described by these two kinds of features:



The following picture, for example, has a distinctly Chinese-painting style, while its content is the mountains, rivers, and trees.



Therefore, image style transfer generally refers to transferring the style component of one image's features onto a target image. The inputs of the whole transfer process are a content image and a style image, and the output is the resulting image after style transfer. By applying image style transfer, images sharing the same style can be generated.
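For intuition, here is a minimal sketch of that input/output relationship, assuming TensorFlow, TensorFlow Hub, and the publicly available Magenta arbitrary-stylization model; the file names and preprocessing details are my assumptions, not values from the original article:

```python
# Sketch of the style-transfer pipeline: content image + style image -> stylized image.
# Assumes TensorFlow 2.x and tensorflow_hub; image file names are placeholders.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    """Read an image file, scale pixel values to [0, 1], and add a batch axis."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    img = tf.image.resize(img, (max_dim, max_dim), preserve_aspect_ratio=True)
    return img[tf.newaxis, ...]

content = load_image("bund.jpg")    # content image, e.g. a photo of the Bund
style = load_image("mosaic.jpg")    # style image, e.g. a mosaic painting

# The pretrained module takes (content, style) and returns the stylized result.
stylize = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = stylize(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img("stylized.jpg", stylized[0])
```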



Example results using different style images:



The style of an image is more ambiguous and harder to quantify than its content. So how can a program achieve this kind of image style transfer? One of the mainstream methods is to use a convolutional neural network (CNN). In 2015, Gatys et al. published A Neural Algorithm of Artistic Style, which can be regarded as the first work to apply CNNs to style transfer. Since then, related research has emerged one after another.



Why can a CNN realize image style transfer? The key point is that a CNN extracts features from an image at different levels. An image's two visual attributes, style and content, correspond to two levels of features: low-level features and high-level features. Texture and tone, for example, can be considered low-level features, i.e., features that describe the image's style; more abstract descriptions of what the image depicts, such as houses and rivers, are high-level features, i.e., what we usually call the image's content. Therefore, style transfer onto a new image can be achieved by selecting an appropriate CNN model and using the network layers that represent image style and image content during training.
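To make this concrete, below is a rough sketch of the loss used in the Gatys et al. approach, assuming TensorFlow/Keras with a pretrained VGG19. The specific layer names and loss weights are common choices from follow-up tutorials, not values given in this article; the generated image is then optimized by gradient descent on this loss.

```python
# Shallow VGG19 layers (via Gram matrices) describe style; a deep layer describes content.
import tensorflow as tf

STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1",
                "block4_conv1", "block5_conv1"]   # low-level: texture, tone
CONTENT_LAYER = "block5_conv2"                    # high-level: objects, layout

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False
feature_model = tf.keras.Model(
    vgg.input,
    [vgg.get_layer(n).output for n in STYLE_LAYERS + [CONTENT_LAYER]])

def gram_matrix(feat):
    """Correlations between feature channels; this is what encodes 'style'."""
    result = tf.einsum("bijc,bijd->bcd", feat, feat)
    num_locations = tf.cast(tf.shape(feat)[1] * tf.shape(feat)[2], tf.float32)
    return result / num_locations

def total_loss(generated, content_feat_target, style_gram_targets,
               content_weight=1e4, style_weight=1e-2):
    """generated is in [0, 1]; targets are precomputed from the content/style images."""
    feats = feature_model(
        tf.keras.applications.vgg19.preprocess_input(generated * 255.0))
    style_feats, content_feat = feats[:-1], feats[-1]
    style_loss = tf.add_n([tf.reduce_mean((gram_matrix(f) - g) ** 2)
                           for f, g in zip(style_feats, style_gram_targets)])
    content_loss = tf.reduce_mean((content_feat - content_feat_target) ** 2)
    return content_weight * content_loss + style_weight * style_loss
```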

Besides pictures, we can even apply this to videos to achieve video style transfer, giving us richer and more varied video effects, like the little fox rendered in an oil-painting style below.



There are also some extensions of standard style transfer that you can explore if you are interested. Here are a few examples.

  • Transferring multiple styles into a single image simultaneously



  • Transferring style to only a local region of an image



Generating pictures with image style transfer is also a way to obtain creative images and videos, and it can even give rise to new marketing models. With the development of on-device deep learning, trained models can be deployed and run directly on mobile devices, opening up more mobile application scenarios and room for imagination. The artistic-image conversion app mentioned above is basically implemented along these lines.
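As an illustration of on-device deployment, here is a hedged sketch of converting a trained style-transfer model to TensorFlow Lite so it can run inside a mobile app; the SavedModel directory name is hypothetical and not from the article:

```python
# Convert a trained style-transfer SavedModel to a .tflite file for mobile use.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_style_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization to shrink the model
tflite_model = converter.convert()

with open("style_transfer.tflite", "wb") as f:
    f.write(tflite_model)
```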


References:

https://github.com/lengstrom/fast-style-transfer

https://github.com/titu1994/Neural-Style-Transfer







Please contact the author for permission to reprint.