“This is the 8th day of my participation in the August More Text Challenge. For details, see: August More Text Challenge.”

This is the third article in the series on writing shell scripts with JavaScript. The previous articles are:

  • Writing shell scripts with JavaScript (1): basic operations
  • Writing shell scripts with JavaScript (2): executing Linux commands

This article will focus on how to make a network request.

Network requests are made with curl, an open-source command-line tool that, much like Postman, builds HTTP requests and fetches and processes the response data.

If you have used curl before, feel free to skip the next section.

Using curl

Viewing the response

```shell
# View the source code of the web page
curl www.baidu.com
# View the response header and the source code of the web page
curl -i www.baidu.com
# View only the response header
curl -I www.baidu.com
```

Building an HTTP request

An HTTP request consists of:

  • The request line

    • HTTP verb (method): -X
  • The request headers: -H

    • Content-Type specifies the type of data being sent
  • The request body: -d
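Since this series is about driving these commands from Node, the mapping above can be sketched in JavaScript. The `buildCurl` helper below is made up for illustration (it is not from any library); it assembles a curl command string from the three parts of a request:

```javascript
// Sketch: assemble a curl command string from the three parts of an HTTP request.
// buildCurl and its option names are illustrative, not a real library API.
function buildCurl(url, { method = 'GET', headers = {}, body } = {}) {
    const parts = ['curl', `-X ${method}`]               // request line: HTTP verb
    for (const [name, value] of Object.entries(headers)) {
        parts.push(`-H '${name}: ${value}'`)             // request headers
    }
    if (body !== undefined) parts.push(`-d '${body}'`)   // request body
    parts.push(`'${url}'`)
    return parts.join(' ')
}

const cmd = buildCurl('https://example.com/user', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: '{"login": "emma"}',
})
console.log(cmd)
// curl -X POST -H 'Content-Type: application/json' -d '{"login": "emma"}' 'https://example.com/user'
```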

Specifying the HTTP method

```shell
# The default method is GET
curl -X POST www.baidu.com
curl -X DELETE www.baidu.com
```

Sending different types of data

The -H parameter adds an HTTP request header, and the -d parameter sends the data body of a POST request.

  • Content-Type: application/x-www-form-urlencoded
```shell
# With -d, the Content-Type defaults to application/x-www-form-urlencoded
curl -X POST -d 'userPhoneNumber=15064761673' example.com/user
# Read the request body from a file and send it to the server
curl -d '@data.txt' example.com/user
```
  • Content-Type: multipart/form-data
```shell
# The -F parameter uploads a binary file to the server as multipart/form-data
curl -F 'file=@photo.png' https://google.com/profile
```
  • Content-Type: application/json

    • -H adds the request header
```shell
curl -d '{"login": "emma", "pass": "123"}' -H 'Content-Type: application/json' https://google.com/login
```

Downloading files

  • -o saves the server response to a file, similar to the wget command
```shell
# Format: curl -o <filename> <url>
curl -o sina.html www.baidu.com
# For example, save only the response header to a file
curl -o sina_res_header.txt -I www.baidu.com
```
  • -O saves the server response to a file, using the last part of the URL as the file name
```shell
curl -O https://www.baidu.com/img/dong_66cae51456b9983a890610875e89183c.gif
```
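For the Node side of things, the file name that -O picks is simply the last path segment of the URL. A small sketch of that rule (`lastSegment` is a made-up helper, not part of curl or any library; `URL` is Node's built-in class):

```javascript
// Sketch: derive the output file name the way curl -O does,
// by taking the last path segment of the URL.
function lastSegment(url) {
    const { pathname } = new URL(url)
    return pathname.substring(pathname.lastIndexOf('/') + 1)
}

console.log(lastSegment('https://www.baidu.com/img/dong_66cae51456b9983a890610875e89183c.gif'))
// dong_66cae51456b9983a890610875e89183c.gif
```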

Using curl to make a network request

For example, suppose I want to fetch the data for the recommended articles on the Juejin home page. The steps are as follows:

  • Open the Network panel in the browser DevTools and see which interface the Juejin home page calls
  • Copy the interface information (in Chrome, right-click the request and choose "Copy as cURL")

Then write the code: wrap the curl command that fetches the list in a function, and if any parameters need to change, pass them in as arguments to the function.

```javascript
#!/usr/bin/env node

const shell = require('shelljs')

// Returns the curl command that fetches the recommended-article list
function fetchList() {
    return `curl 'https://api.juejin.cn/recommend_api/v1/article/recommend_all_feed' \
  -H 'authority: api.juejin.cn' \
  -H 'pragma: no-cache' \
  -H 'cache-control: no-cache' \
  -H 'sec-ch-ua: "Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"' \
  -H 'sec-ch-ua-mobile: ?0' \
  -H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36' \
  -H 'content-type: application/json' \
  -H 'accept: */*' \
  -H 'origin: https://juejin.cn' \
  -H 'sec-fetch-site: same-site' \
  -H 'sec-fetch-mode: cors' \
  -H 'sec-fetch-dest: empty' \
  -H 'referer: https://juejin.cn/' \
  -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7' \
  -H 'cookie: _ga=GA1.2.151728167.1605962485; n_mh=QqqO9vdPyoUgGdMUK7bmzGg_3PdkxHeXQjID5mYHilk; MONITOR_WEB_ID=f9b5a304-d583-464d-ac25-bd0fc1849132; passport_csrf_token_default=6ae74227ec74685add0271300bf1f275; passport_csrf_token=6ae74227ec74685add0271300bf1f275; sid_guard=5d56b314e811199a40ac8d4a1a03774b%7C1627264731%7C5184000%7CFri%2C+24-Sep-2021+01%3A58%3A51+GMT; uid_tt=df38a08a79a50cfc2bde3f330d5cef2c; uid_tt_ss=df38a08a79a50cfc2bde3f330d5cef2c; sid_tt=5d56b314e811199a40ac8d4a1a03774b; sessionid=5d56b314e811199a40ac8d4a1a03774b; sessionid_ss=5d56b314e811199a40ac8d4a1a03774b; _gid=GA1.2.1703676324.1627819267; _tea_utm_cache_2608={%22utm_source%22:%2220210803%22%2C%22utm_medium%22:%22Push%22%2C%22utm_campaign%22:%2231day%22}' \
  --data-raw '{"id_type":2,"client_type":2608,"sort_type":3,"cursor":"0","limit":20}' \
  --compressed`
}

shell.exec(fetchList(), { silent: true }, (code, stdout) => {
    if (code !== 0) throw new Error('Failed to call the interface')
    console.log(stdout)
})
```
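As noted above, the parts you want to vary, such as the cursor and the page size, can be passed in as function arguments and interpolated into the request body. A minimal sketch under that assumption (`fetchListCmd` is a hypothetical variant of the function above; the headers are trimmed for brevity):

```javascript
// Sketch: parameterize the request body so callers can page through results.
// Only the JSON payload changes; most headers are omitted here for brevity.
function fetchListCmd(cursor = '0', limit = 20) {
    const body = JSON.stringify({
        id_type: 2,
        client_type: 2608,
        sort_type: 3,
        cursor,
        limit,
    })
    return `curl 'https://api.juejin.cn/recommend_api/v1/article/recommend_all_feed' \
  -H 'content-type: application/json' \
  --data-raw '${body}' --compressed`
}

const command = fetchListCmd('20', 10)
console.log(command)
```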

Running the script prints the interface's response in the console.

The overall process is fairly simple. Once you have the data, you can process it further, for example by counting which keywords appear most often across all the articles.
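A rough word-frequency count over article titles takes only a few lines. A sketch with made-up titles (`topKeywords` is an illustrative helper; extracting the real titles from the API response is left out):

```javascript
// Sketch: count word frequency across article titles and return the top n words.
// The titles array stands in for whatever you extract from the API response.
function topKeywords(titles, n = 3) {
    const counts = new Map()
    for (const title of titles) {
        for (const word of title.toLowerCase().split(/\W+/).filter(Boolean)) {
            counts.set(word, (counts.get(word) || 0) + 1)
        }
    }
    return [...counts.entries()]
        .sort((a, b) => b[1] - a[1])   // most frequent first (stable sort)
        .slice(0, n)
        .map(([word]) => word)
}

console.log(topKeywords([
    'Writing shell scripts with JavaScript',
    'JavaScript and the shell',
    'Debugging shell scripts',
]))
// [ 'shell', 'scripts', 'javascript' ]
```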

Once a script can make network requests, we can do all sorts of things, such as:

  • Crawl wallpapers from some websites
  • Crawl some answers from Zhihu
  • Crawl some data and organize it into an Excel spreadsheet
  • Write auto check-in scripts

I will write about concrete examples of these in future articles.