Implementing a Simple Crawler Framework in Go (1): Project Introduction and Environment Preparation

Recently I have been learning the Go language and watched an in-depth Go course taught by a Google engineer. In this series I organize the crawler project from that course, which also serves as a summary of my own learning. I am still a beginner, so if you spot any problems, corrections are welcome.

I. Environment Preparation

1. Installing the Go language

The Go installation package can be downloaded from studygolang.com/dl.

Select the corresponding version to download

The msi installer is recommended for Windows users because it is easy to install and configures the environment variables automatically.

After installation completes, run the `go version` command to check the installed Go version.

2. Environment configuration

Next we need to set up the Go workspace by configuring the GOPATH directory (the project path for Go development).

On Windows, create an environment variable named GOPATH and set its value to your working directory, for example GOPATH=D:\Workspace.

By convention, the %GOPATH% directory contains three subdirectories:

src: source code files (e.g. .go, .c, .h, .s)

pkg: files generated by compilation (e.g. .a archive files)

bin: executable files generated by compilation

The bin and pkg directories are created automatically by the go tool (for example, by go install); you only need to create the src directory yourself.
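As a quick check of both the installation and the workspace layout, you can place a tiny program under %GOPATH%\src and build it with go install. The directory name "hello" below is just an example, not something required by the course.

```go
// Save as %GOPATH%\src\hello\main.go (the "hello" directory name is an
// arbitrary example). Running `go install hello` then produces
// %GOPATH%\bin\hello.exe (or hello on Linux/macOS) without you having
// to create the bin directory by hand.
package main

import "fmt"

// message is what the installed binary prints; it is factored out into
// a function so the behavior is easy to verify.
func message() string {
	return "hello from GOPATH/src/hello"
}

func main() {
	fmt.Println(message())
}
```

If `go install` succeeds and the binary in %GOPATH%\bin prints the greeting, the toolchain and GOPATH are configured correctly.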

3. GoLand installation and cracking

(1) Installation

GoLand website: www.jetbrains.com/go/

Select the corresponding version, download it, and install it.

(2) Cracking

There are many articles online about cracking GoLand; you can refer to this one: blog.csdn.net/dodod2012/a…

II. Project Introduction

This crawler scrapes user information from the Zhenai dating site, following these steps:

  • 1. Visit Zhenai's city list page and crawl the information for every city
  • 2. Visit each city's detail page and crawl the user URLs
  • 3. Visit each user's detail page and extract the required user information
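The first step above boils down to extracting city names and URLs from the city-list page's HTML. A minimal sketch using the standard regexp package is shown below; the link pattern and the sample markup are assumptions for illustration, since the real zhenai.com page structure may differ.

```go
package main

import (
	"fmt"
	"regexp"
)

// cityRe matches links of the assumed form
// <a href="http://www.zhenai.com/zhenghun/<city>" ...>CityName</a>.
// This pattern is illustrative; adjust it to the actual page markup.
var cityRe = regexp.MustCompile(
	`<a href="(http://www\.zhenai\.com/zhenghun/[a-z0-9]+)"[^>]*>([^<]+)</a>`)

// extractCities returns a map from city name to city URL found in page.
func extractCities(page string) map[string]string {
	cities := make(map[string]string)
	for _, m := range cityRe.FindAllStringSubmatch(page, -1) {
		cities[m[2]] = m[1] // m[1] = URL, m[2] = city name
	}
	return cities
}

func main() {
	// A tiny hand-written sample standing in for the real page contents.
	sample := `<a href="http://www.zhenai.com/zhenghun/aba" class="">阿坝</a>
<a href="http://www.zhenai.com/zhenghun/akesu" class="">阿克苏</a>`
	fmt.Println(extractCities(sample))
}
```

Fetching the real page would use net/http; parsing is kept separate here so it can be tested offline against canned HTML.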

The crawler algorithm is essentially a loop over a queue of requests: take a request from the queue, fetch the page, parse it, enqueue any new requests it produces, and collect the parsed items.

In the next post we will implement the single-machine version of this crawler. Stay tuned.