Tufu's address: https://tufu.xkboke.com

GitHub open-source address: https://github.com/gengchen528/imgSpider

Laziness drives the idea

Sometimes I see nice images and works on ZCOOL (站酷) or UI China that I want to save for study, but right-clicking "Save as" every time is tedious, and on some sites you have to enlarge the image before the original can be downloaded. As a (pseudo) full-stack developer, how could I put up with that? So I dug into the sites' source code and found that the original images are stored at URLs that follow a predictable pattern. That made me very happy, haha!

The birth of a prototype

Time to get moving! I quickly finished the first little script and successfully downloaded the images I needed, but it was only the simplest possible crawler. Later I optimized it so that each batch of downloaded images went into its own folder. Then I thought: rather than leave it as a script, why not turn it into a proper tool, a crawler compatible with multiple websites that batch-downloads the original images? The idea was quickly put into practice, and after constant revision the first version of my Tufu was finally born.

Iterative optimization

At the beginning it only supported ZCOOL and UI China, and compatibility with other websites was not great. That was fine: ship the first version first, then iterate slowly. Based on feedback I then added Graffiti Kingdom, Design Pi, Visual ME, and other sites. In recent days I also found some very beautiful pictures posted on Baidu Tieba, many of which would make great wallpapers, but it takes three or four steps to reach the original image and you can only download one at a time, so I added Tieba to Tufu as well, haha! Tufu is slowly growing! Here is a screenshot of Tufu:

Technology stack

In fact, the underlying principle is one everyone knows: a crawler. What I did was turn the crawler into a visual tool so it is convenient to use day to day. Express is the main framework; the request library issues the HTTP requests, compressing packs the image folders into an archive, and node-uuid generates the random hashes. Here is the directory structure:
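To give a feel for how these pieces fit together, below is a minimal sketch of an Express entry file wiring up the two handlers that index.js (shown in the next section) exports. The route paths, port, and request body shape are my assumptions for illustration only; the real wiring is in the GitHub repo.

// app.js — minimal wiring sketch (route paths, port, and body shape are assumptions)
const express = require('express');
const index = require('./index'); // exports getImg and tarTool, see below

const app = express();
app.use(express.json());

// Step 1: receive the page URL and site type, crawl the page, respond with a hash
app.post('/getImg', (req, res, next) => index.getImg(req.body, res, next));

// Step 2: tar the folder for that hash and send the archive back
app.post('/tarTool', (req, res, next) => index.tarTool(req.body, res, next));

app.listen(3000, () => console.log('Tufu listening on port 3000'));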

Part of the code

index.js is the main request file; for the other files, see the GitHub repo.

const path = require('path');
const fs = require('fs');
const request = require('request');
const analyze = require('./analyze');
const tarTool = require('./tarTool');
const uuid = require('node-uuid');

/**
 * Create a folder for this batch based on the hash
 * @param dir  the folder path
 */
function write(dir) {
  fs.exists(dir, function (exists) {
    if (!exists) {
      fs.mkdir(dir, function (err) {
        if (err) {
          console.log('Failed to create folder');
          return false;
        } else {
          console.log('Folder created successfully');
        }
      });
    }
  });
}

/**
 * Request the target page, hand the DOM to the analyze module, and return the hash
 */
function start(req, response, next) {
  const hash = uuid.v1().replace(/-/g, '');
  const imgDir = path.join(path.resolve(__dirname, '..'), 'output/img/' + hash);
  write(imgDir);
  // Initiate a request to get the DOM
  console.log('Request address', req.url);
  request(req.url, function (err, res, body) {
    if (!err && res) {
      console.log('start');
      // Pass the downLoad function as a callback to the findImg method of the analyze module
      analyze.findImg(body, req.type, imgDir, downLoad, req.url);
      response.json({ head: { code: 0, msg: 'ok' }, data: hash });
    } else {
      response.json({ head: { code: 1000, msg: err }, data: '' });
    }
  });
}

/**
 * findImg passes back the address of each image; request is used again to write the data to disk.
 * @param {*} imgUrl
 * @param {*} i
 * @param {*} imgDir
 */
function downLoad(imgUrl, i, imgDir) {
  console.log('imgUrl', imgUrl);
  let ext = imgUrl.split('.').pop();
  request(imgUrl).pipe(fs.createWriteStream(path.join(imgDir, i + '.' + ext), { encoding: 'utf8' }));
}

/**
 * After the images have been downloaded, compress the folder into a tar package.
 * @param {*} req
 * @param {*} response
 */
function tarFile(req, response, next) {
  console.log('receive', req);
  tarTool.tarTool(req.path, response);
}

module.exports = {
  getImg: start,
  tarTool: tarFile
};
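The analyze and tarTool modules live in the GitHub repo; to give a rough idea of what they do, here is a much-simplified sketch. The regex-based findImg below is my own illustration (the real module handles each supported site's URL pattern according to the type argument), and the tarTool sketch only assumes the compressing library's compressDir API.

// analyze.js (simplified sketch, not the real implementation):
// scan the page HTML for <img> tags and hand every URL to the downLoad callback.
function findImg(body, type, imgDir, downLoad, pageUrl) {
  const imgReg = /<img[^>]+src=["']([^"']+)["']/g;
  let match;
  let i = 0;
  while ((match = imgReg.exec(body)) !== null) {
    let imgUrl = match[1];
    if (imgUrl.startsWith('//')) imgUrl = 'https:' + imgUrl; // protocol-relative URLs
    downLoad(imgUrl, i++, imgDir);
  }
}

module.exports = { findImg };

And a sketch of the packaging side, using compressing to tar the folder and stream it back:

// tarTool.js (sketch): pack the image folder for the hash sent by the client
// and send the archive back as the download.
const path = require('path');
const compressing = require('compressing');

function tarTool(hash, response) {
  const imgDir = path.join(path.resolve(__dirname, '..'), 'output/img/' + hash);
  const tarFile = imgDir + '.tar';
  compressing.tar.compressDir(imgDir, tarFile)
    .then(() => response.download(tarFile))
    .catch(err => response.json({ head: { code: 1000, msg: err.message }, data: '' }));
}

module.exports = { tarTool };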

How to use it

Since it is a tool, it has to be simple. Just copy the URL of the download page, paste it into the input box, select the website type (to be honest, it doesn't matter much if you pick the wrong one, since I added a check), click Search, and then wait patiently... loading... (the server bandwidth is only 1 Mbps, so downloads are a bit slow; tips are welcome if you feel like it, haha). When it finishes, a Download button appears; just click it to download the packaged file.
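If you prefer to script it, the same two-step flow sits behind those buttons. Here is a minimal client sketch, assuming the endpoint paths and body fields from the wiring sketch above; they are illustrative assumptions, not a documented API.

// client-sketch.js — the assumed two-step flow behind the Search and Download buttons
const fs = require('fs');
const request = require('request');

const base = 'https://tufu.xkboke.com'; // or your own deployment

// Step 1: submit the page URL and site type; the response carries the folder hash
request.post({ url: base + '/getImg', json: { url: 'https://www.zcool.com.cn/work/XXXX', type: 'zcool' } },
  function (err, res, body) {
    if (err || !body || body.head.code !== 0) {
      return console.error('Crawl failed:', err || (body && body.head.msg));
    }
    // Step 2: ask the server to tar that folder and save the archive locally
    request.post({ url: base + '/tarTool', json: { path: body.data } })
      .pipe(fs.createWriteStream('images.tar'));
  });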

Supported websites

  • ZCOOL (站酷)
  • UI China
  • Graffiti Kingdom (涂鸦王国)
  • Design Pi
  • Visual ME
  • Baidu Tieba
  • ... (waiting for your suggestions)

Disclaimer

This tool is intended for technical exchange only and must not be used for any commercial purpose or for profit. This website does not store any images; all content is crawled by the tool from what already exists on the source pages. Downloading an image through this website does not give you any commercial rights or authorization over it; if you need authorization or commercial use, please contact the original website or author. Thank you for your understanding!

Finally, once more: the open-source GitHub address is https://github.com/gengchen528/imgSpider, and Tufu's website is https://tufu.xkboke.com

If you like it, please give it a star. If you have ideas, open an issue, or contact me on WeChat; I'm happy to chat. You can also leave a link in the comments to a website you'd like supported, and I'll add full support for it to Tufu from time to time.

Previous articles

Growth of the mpvue mini program "Alumni Footprint" (1)

Using a Node script to automatically delete Douban comments and posts (being updated recently; a visual web version will go online too, stay tuned)

Analysis of the collections and comments of hot Juejin articles, based on MongoDB + Express + Vue + Axios + Bootstrap

Personal blog: www.xkboke.com