What skills do you need to become a back-end engineer who meets the hiring bar of big tech companies like BAT (Baidu, Alibaba, Tencent) and TMD (Toutiao, Meituan, Didi)? What does the back-end technology learning path look like?

Learning path

Without further ado, let’s go straight to the mind map framework of the back-end technology learning route we just drew:

Each node in the diagram can be expanded; I have subdivided them and will describe them one by one in the sections that follow.

Computer Fundamentals

Whether it's back-end or front-end development, all software is ultimately written on top of a computer, even though most of us, once we actually start writing code, rarely have to solve problems at the level of the machine itself.

Java, C++, Python, and Golang are the languages most often used in back-end work. They are called high-level programming languages because they are close to the natural language of daily communication and far from the underlying hardware. But every high-level language is eventually translated into assembly -> machine instructions -> control signals that drive the hardware. Learning how computers are structured and how operating systems work therefore deepens our understanding of high-level languages.

So what is this "computer foundation" we keep talking about? Computer Science (CS) is a discipline with its own theoretical system, like any other engineering field. If you majored in CS, I don't need to tell you what to learn: your four years of university are exactly this foundation, so don't look down on those courses that seem useless. For reference, here is the main undergraduate course structure of the computer science program at CUST.

You can compare it with what a computer science major at a first-class university studies: the curriculum spans the mathematical foundations of the discipline, computer architecture, software engineering methods, and more.

If you come from another major and want to switch into computing, don't be scared off: that is, after all, four years of someone else's study, and the goal of an undergraduate program is not only to produce software engineers but also to prepare students for graduate study. Note that the major is computer "science"; the back-end development at BAT-style companies discussed in this article leans toward the "engineering" side and serves engineering practice.

If your goal is back-end development and job interviews, or you are not a computer science major and don't have time for the full university curriculum, then let me narrow computer fundamentals down to four core courses: Computer Organization, Computer Networks, Operating Systems, and Data Structures.

Computer organization

This course gives you an understanding of what computers are made of and how they work. It includes:

  • Representation and operation of data in the computer (Lemon said: computers don't understand numbers, only high and low voltage levels, so all data inside a computer is represented in binary zeros and ones; a small example follows this list)

  • Storage system (Lemon said: data and program instructions have to be stored somewhere; learn the computer's storage hierarchy: main memory, external storage, cache, and virtual memory)

  • Instruction system (Lemon said: the code you write is ultimately translated into machine instructions; instructions come in various formats and addressing modes, and the controller drives their execution)

  • The central processing unit (CPU), the brain of the computer, consisting mainly of the arithmetic logic unit and the control unit

  • Bus (Lemon said: the blood vessels and arteries of a computer, connecting its functional components and carrying data, address, and control signals)

  • Input/Output system (Lemon said: also called the I/O system; connects and manages the various external devices such as the keyboard and monitor)
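
To make the "zeros and ones" point concrete, here is a tiny Python sketch (my own illustration, not from any course material) showing that the same 32-bit pattern can be read as different values depending on how it is interpreted:

```python
import struct

# Two's complement: the same 32-bit pattern is -1 when read as signed...
packed = struct.pack("<i", -1)        # little-endian signed 32-bit int
print(packed.hex())                   # ffffffff
# ...and 4294967295 when read as unsigned.
print(struct.unpack("<I", packed)[0])

# Floats are bit patterns too (IEEE 754): 1.0 is stored as 0x3f800000.
print(struct.pack("<f", 1.0).hex())   # 0000803f
```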

Computer network

The world's first general-purpose computer, ENIAC, was invented in 1946 and, as its name suggests, was used only for calculation. As computers multiplied, without a network each one would have been an isolated island, and there would have been no Internet boom. Today the learning path for "computer networks" is very clear: it revolves around how computers in different geographical locations are connected so they can exchange data efficiently and reliably, letting people sit at home yet know everything under the sun.

Computer networks are organized into layers. By the properties and responsibilities of each layer, they divide into:

  • Physical layer

  • Data link layer

  • Network layer

  • Transport layer

  • Application layer

Read top to bottom, this hierarchy is the path a received packet climbs, and sending reverses it. To exchange information, the two sides must agree on a common set of rules, just as when we talk with foreigners: either they learn Chinese, or we learn English, or we settle on a standard language. In computer networks that agreed standard is called a "communication protocol". Each layer above has its own protocols, so studying computer networks is essentially studying the protocols layer by layer.
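
As a tiny illustration of an application-layer protocol riding on a transport-layer one, here is a minimal sketch that speaks HTTP/1.1 (application layer) over a raw TCP socket (transport layer). It assumes outbound network access; example.com is just a placeholder host:

```python
import socket

# TCP gives us a reliable byte stream (transport layer);
# the HTTP request text is the application-layer protocol on top of it.
with socket.create_connection(("example.com", 80), timeout=5) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := s.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```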

The operating system

An operating system is itself a piece of software. You're familiar with Microsoft's Windows and the various Linux distributions; like other software, they are installed on computers.

What makes it different from the application software we usually touch is its special position: downward it manages the computer hardware (the hardware we met in computer organization), and upward it provides applications and users with a uniform interface to interact with. In plain words, the operating system plays middleman and housekeeper. It does these things for us:

  • Process management (Lemon said: the programs you write must run to do work; a running program is called a process, and the process is the smallest unit of resource allocation)

  • Memory management (Lemon said: memory is expensive and scarce, and often has to face high concurrency; there is real skill in memory management)

  • File management (Lemon said: the computer's data and information need to be saved and managed, which is the job of the file system)

  • Input and output management (Lemon said: how the various external devices are attached to the computer, and how they are managed once connected)

Data structures

Data structures are the part we are most familiar with: even someone without a computer background who wants to switch into the field meets them first, because the algorithm problems ground out for interviews are essentially exercises in applying various data structures. So even from a purely interview-driven, utilitarian point of view, data structures are a fundamental you must master. What to learn:

  • Linear lists (linked lists, arrays, circular linked lists)

  • Stacks and queues

  • Trees and the various binary trees (binary search trees, balanced binary trees, Huffman trees, B-trees, B+ trees, Trie trees)

  • Graph (storage structure of graph, BFS, DFS, shortest path, minimum spanning tree, topological sort, critical path)

  • Search algorithms (binary search, B-tree search, hash tables, KMP string pattern matching; a binary search sketch follows this list)

  • Sorting algorithms (insertion sort, bubble sort, merge sort, radix sort, heap sort)

  • Greedy algorithm

  • Backtracking

  • Divide and conquer algorithm

  • Dynamic programming
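
As a small taste of the list above, here is a minimal binary search sketch (the same idea behind Python's built-in bisect module):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent. O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1        # target must be in the right half
        else:
            hi = mid - 1        # target must be in the left half
    return -1

print(binary_search([1, 3, 5, 8, 13], 8))  # 3
```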

Good, that's a first pass over the four fundamental courses. This is, of course, my pragmatic advice for readers without a computer background; finishing these four courses only gets you through the computer science door, but that already puts you ahead of a great many people. If you want to truly understand the discipline, you can finish the four basics and then spend time on further courses from the training program above, becoming a back-end engineer with a complete knowledge system.

Linux

In the field of back-end development, I won't claim that 100% of the back-end services you will touch run on Linux, but at least 90% of them do, because Linux is open source, convenient, and powerful. You need to learn the following:

Using Linux

So if you want to take the back-end route, I suggest you start using Linux as early as possible. It can be a Linux virtual machine on your PC, or a dual-boot setup, which is what I did in college, back when cloud servers were not as common as they are now. Today I think the most convenient option is to buy a Linux cloud server; if you are a student, the educational discounts make it inexpensive.

What do you do with Linux? Use it as your go-to system and live in it from start to finish. Once it feels natural, you've got the basics of Linux.

Linux Advanced Programming

Linux "advanced programming" means going one level deeper than the basic Linux usage above.

Knowing how to use Linux does not make you a developer yet; using the system is merely the entry requirement. Operating Linux is just like operating Windows, only with a different learning cost: your girlfriend could master basic Linux usage with a little time.

To become a back-end developer, you need to know how to use the various system APIs (system call interfaces) that Linux provides. Programmers control the system with code, while ordinary users can only use the mouse. This stage requires learning:

  • Unix standards and implementations, and the basic system data types

  • File I/O and file-system system calls: open, read, write, close, dup, fcntl, ioctl, stat, chmod, access, chdir…

  • Basic features and advanced features of the system programming interface

  • Linux process environment, how to create processes, threads, program storage allocation, environment variables

  • Process groups, sessions, and task control, process priority, and scheduling

  • Dynamic and static libraries

  • Interprocess communication: pipes and FIFOs, message queues, semaphores, shared memory, memory mapping

  • Sockets and network programming

In short, this stage is about advanced programming skills in the Linux environment. Studying these topics will also give you a deeper understanding of how the Linux system works and runs, and truly take you through the door of Linux system programming.
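
Here is a minimal sketch of two topics from the list above, process creation and pipe-based IPC, using Python's thin wrappers over the fork(2) and pipe(2) system calls (Unix/Linux only):

```python
import os

# pipe(2): returns a (read_fd, write_fd) pair for one-way IPC.
read_fd, write_fd = os.pipe()

pid = os.fork()          # fork(2): clone the current process
if pid == 0:             # child: write a message into the pipe
    os.close(read_fd)
    os.write(write_fd, b"hello from child")
    os._exit(0)
else:                    # parent: read the child's message, then reap it
    os.close(write_fd)
    print(os.read(read_fd, 1024).decode())
    os.waitpid(pid, 0)   # waitpid(2): avoid leaving a zombie process
```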

Network programming

Network programming is communication through network sockets, so it is also a form of inter-process communication (IPC).

Today's back-end services are built on the server/client model and communicate over the network: when you order takeout on your phone at home, the request travels across the network to the platform's back-end servers (Meituan's, say). So back-end service development is, at the end of the day, network programming plus application-layer development on the data it carries.

What to learn in network programming:

  • What is a socket

  • Socket options

  • TCP/UDP socket programming

  • Unix Domain protocol and programming

  • Raw socket programming

  • IO multiplexing: select, poll, epoll, and kqueue (echo-server sketch below)

  • Serialization technique

  • Zero copy technology

  • Open source network libraries: Muduo, Libevent

After learning the above, you could probably write a small network chat tool along the lines of QQ.
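
As a concrete taste of IO multiplexing, here is a minimal single-threaded echo server sketch using Python's selectors module, which uses epoll on Linux and kqueue on BSD/macOS under the hood (port 9000 is an arbitrary choice; a real server would also watch for writability):

```python
import selectors
import socket

sel = selectors.DefaultSelector()        # epoll/kqueue under the hood

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    for key, _ in sel.select():          # block until some fd is ready
        if key.fileobj is server:        # new connection
            conn, _addr = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:                            # data from an existing client
            data = key.fileobj.recv(4096)
            if data:
                key.fileobj.sendall(data)    # echo it back
            else:                            # client closed the connection
                sel.unregister(key.fileobj)
                key.fileobj.close()
```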

In real jobs you will usually have a mature framework or network library: big companies like the "goose factory" (Tencent) mostly use self-developed communication frameworks, while smaller companies use open source projects. That spares most back-end developers from handling the low-level details of network communication, and apart from those working on infrastructure, most back-end engineers spend their time developing business systems.

However, understanding the underlying principles of network programming is a core ability for a back-end developer, and it is especially important for C/C++ back-end developers. It gives you a commanding view of the system; without the underlying principles you are programming against a black box and won't know where to start when something goes wrong.

After learning the above, you basically have the fundamental abilities of back-end development and can build a simple back-end server.

Databases

Unless it is a simple forwarding/routing service, the back-end program of a web server or background service is, generally speaking, one loop:

Receive a client packet -> parse the packet -> process the business logic -> persist the necessary data -> send the response back to the client

Each pass through the loop involves data processing: an e-commerce system handles order data and user data, a game back end handles character data and item data, and so on. That data needs a storage system, and it generally lives in a database.

I mainly studied two kinds of databases:

A relational database organizes data with the relational model; you can simply picture it as two-dimensional tables.

A non-relational database generally means a NoSQL database, which stores data as key-value pairs: a simple mapping from key to value.

Relational databases

  • MySQL Database Architecture

  • MySQL index usage and optimization

  • InnoDB storage engine

  • Query performance optimization

  • Clustered index, non-clustered index

  • Transaction isolation, ACID, MVCC (a transaction sketch follows this list)

  • Locking mechanism, optimistic locking, pessimistic locking, read locking, write locking, intent locking

  • Logging

  • Data backup and recovery
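
Here is a small sketch of transactions and parameterized queries. It uses Python's built-in sqlite3 so it runs anywhere, but the same DB-API pattern carries over to MySQL drivers such as PyMySQL; the accounts table is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

try:
    with conn:  # one transaction: commit on success, rollback on exception (the A in ACID)
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = ?", (1,))
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = ?", (2,))
except sqlite3.Error:
    print("transfer rolled back")

print(conn.execute("SELECT * FROM accounts").fetchall())  # [(1, 70), (2, 30)]
```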

Non-relational databases

  • Basic operation and use of Redis (example after this list)

  • Redis design and implementation principle

  • MongoDB

  • levelDB

  • memcache

  • HBase

  • CKV+, Tencent's self-developed key-value store
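
A minimal taste of Redis's key-value model through the redis-py client (this sketch assumes `pip install redis` and a Redis server on localhost:6379; the key names are made up):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("user:42:name", "lemon", ex=3600)   # string with a 1-hour TTL
print(r.get("user:42:name"))              # "lemon"

r.hset("user:42", mapping={"name": "lemon", "level": "7"})  # hash
print(r.hgetall("user:42"))

r.zadd("leaderboard", {"alice": 300, "bob": 250})           # sorted set
print(r.zrevrange("leaderboard", 0, 1, withscores=True))
```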

Back-end services also have to solve the "three highs": high concurrency, high availability, and high performance.

High concurrency

With what we have learned so far, the back-end server we can build is more than enough for small concurrent loads. But as Internet businesses grow, the number of requests hitting back-end servers rises rapidly, and with it comes the demand for high concurrency, that is, high TPS and high QPS:

  • TPS (Transactions Per Second): the number of transactions processed per second

  • QPS (Queries Per Second): the number of queries processed per second

For high-concurrency services, the traditional single-process model must be changed to handle such a large number of requests.

Multiple processes

For high-concurrency workloads, back-end services are usually IO-intensive applications: most of the CPU's time is spent waiting on network I/O. In CPU-intensive applications, by contrast, most of the time goes into computation.

Since most back-end programs are IO-intensive, the CPU wastes time waiting for network I/O, which means its potential is not fully used. So when one process reaches the ceiling of its processing capacity, we can simply create more processes: this is the multi-process model.
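
A minimal sketch of the multi-process model using Python's multiprocessing pool; handle_request is a stand-in for real per-request work:

```python
from multiprocessing import Pool

def handle_request(request_id: int) -> str:
    # Stand-in for real work: parse, compute, query a database, render...
    return f"request {request_id} handled in a worker process"

if __name__ == "__main__":           # required on platforms that spawn workers
    with Pool(processes=4) as pool:  # 4 worker processes
        for result in pool.map(handle_request, range(8)):
            print(result)
```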

Multithreading

Multithreading is similar to multi-processing. On Linux, threads are actually implemented as light-weight processes (LWP). A multithreaded service is lighter-weight than a multi-process one, because all its threads run inside the same process.

However, multithreading introduces new problems of its own: contention for shared global data, synchronization, locking, and deadlock avoidance.
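
A minimal sketch of the shared-data problem and its lock-based fix (Python's GIL happens to mask many races, but the pattern shown is the same in any language):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:          # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000, deterministically, thanks to the lock
```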

Coroutines

So what is a coroutine? Coroutines are "micro-threads", lighter than threads. Just as a process can contain multiple threads, a thread can contain multiple coroutines, hence the names micro-thread and fiber. You can roughly think of a coroutine as a subroutine call that can be suspended and resumed, with each call running in its own coroutine.
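
A minimal coroutine sketch with Python's asyncio: many coroutines run on one thread because each yields control while "waiting"; the sleep below is a stand-in for network I/O:

```python
import asyncio

async def fetch(i: int) -> str:
    await asyncio.sleep(0.1)        # stand-in for a network call; yields the thread
    return f"response {i}"

async def main() -> None:
    # 100 concurrent "requests" on a single thread: ~0.1s total instead of ~10s
    results = await asyncio.gather(*(fetch(i) for i in range(100)))
    print(len(results), "responses")

asyncio.run(main())
```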

Asynchronous callbacks

With asynchronous callbacks, the server thread that initiates an IO request does not wait for the network IO thread to finish; it continues executing the code that follows. Generally the requesting thread registers a callback function in advance, and when the IO completes, the network IO thread invokes that callback to notify the requester. The requesting thread never blocks waiting for a result, which improves processing performance.
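
A small callback sketch using Python's concurrent.futures: the submitting thread registers a callback and moves on instead of blocking on the result (slow_io is a stand-in for network IO):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_io(x: int) -> int:
    time.sleep(0.5)                 # stand-in for a slow network call
    return x * 2

def on_done(future) -> None:        # runs when the IO finishes
    print("callback got result:", future.result())

with ThreadPoolExecutor() as pool:
    fut = pool.submit(slow_io, 21)
    fut.add_done_callback(on_done)  # register the callback, don't wait
    print("request submitted, doing other work...")
```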

High performance

Beyond the service models above, which raise the service's own processing capacity, a high-performance back end usually combines several techniques and optimizes along multiple dimensions: a Content Delivery Network (CDN) stores content close to users and shortens response times; pooling avoids frequently allocating and releasing resources; service clusters scale capacity horizontally; and caching keeps hot data close at hand, reducing database load.

  • CDN content distribution technology

  • Pooling techniques: database connection pooling, thread pooling

  • Clustering

  • Caching technology
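
A tiny illustration of the caching idea from the list above, using functools.lru_cache as an in-process cache; a real service would more likely use Redis or Memcached, but the hit/miss behavior is the same (get_product is a stand-in for a slow database query):

```python
from functools import lru_cache
import time

@lru_cache(maxsize=1024)            # in-process cache keyed by the arguments
def get_product(product_id: int) -> dict:
    time.sleep(0.2)                 # stand-in for an expensive database query
    return {"id": product_id, "name": f"product-{product_id}"}

start = time.perf_counter()
get_product(42)                     # miss: hits the "database"
get_product(42)                     # hit: served from the cache
print(f"two lookups took {time.perf_counter() - start:.2f}s")  # ~0.20s, not 0.40s
```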

High availability

High availability keeps the service stable and guards against major incidents and outages. The common approaches are redundancy and load balancing. Redundancy means deploying multiple servers, so that when one fails another takes over. Load balancing dynamically distributes traffic so that no single machine is overwhelmed by an uneven share of requests; in software it can be implemented with DNS, Nginx, or LVS. The main topics to study here:

  • Load balancing, both hardware and software (toy sketch after this list)

  • Rate limiting, isolation, and degradation

  • Application-layer disaster recovery (DR) and resource isolation / circuit breaking

  • Multi-site active-active deployment (geo-redundancy)
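
A toy sketch of the load-balancing idea from the list above: plain round-robin over a set of back ends (the addresses are made up; production balancers add health checks and weighting):

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: hand out back ends in rotation."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self) -> str:
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
for _ in range(6):
    print(lb.pick())    # traffic spreads evenly across the three back ends
```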

Design patterns

Design patterns represent best practices in software development. Evolved over a long period, they capture the best-known solutions to problems that come up again and again in software design. Learning them helps inexperienced developers pick up software design simply and quickly, and following the appropriate patterns when designing large systems makes the code more robust and extensible.

The six principles of design patterns:

  • Open closed principle: Open for extension, closed for modification, use abstract classes and interfaces.

  • Liskov substitution principle: anywhere a base class appears, a subclass must be able to replace it; inherit from abstract classes rather than concrete ones.

  • The dependency inversion principle: Rely on the abstract, not the concrete, program for the interface, not the implementation.

  • Interface isolation principle: It is better to use multiple isolated interfaces than to use a single interface, and establish a minimum interface.

  • Law of Demeter: a software entity should interact with as few other entities as possible, connecting through intermediaries where needed.

  • Rule of composite reuse: use composition/aggregation rather than inheritance whenever possible.

Classification of common design patterns

  • Factory pattern

  • Singleton pattern (see the sketch after this list)

  • Builder pattern

  • Adapter pattern

  • Bridge pattern

  • Filter pattern

  • Decorator pattern

  • Facade pattern

  • Flyweight pattern

  • Proxy pattern

  • Chain of responsibility pattern

  • Interpreter pattern

  • Iterator pattern

  • Observer pattern

  • …
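
A minimal sketch of two patterns from the list, singleton and factory, in Python (the connection classes are stand-ins):

```python
class Config:
    """Singleton: only one instance is ever created."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

assert Config() is Config()     # the same object every time

def connection_factory(kind: str):
    """Factory: callers name what they want, not how to build it."""
    if kind == "mysql":
        return "MySQLConnection()"     # stand-ins for real connection objects
    if kind == "redis":
        return "RedisConnection()"
    raise ValueError(f"unknown backend: {kind}")

print(connection_factory("redis"))
```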

Distributed systems

Why distributed? As business volume grows, a single node can no longer keep up with the computation, storage, and task load; scaling the hardware up (more memory, more disk, a better CPU) yields less and less; and the application cannot be optimized much further. At that point we have to consider a distributed system.

A distributed system is a group of computer nodes that communicate over a network and coordinate their work to accomplish a common task. Distributed systems emerged to complete computation and storage tasks no single computer could, using cheap, ordinary machines; the aim is to use more machines to process more data.

A distributed system has to solve the same problems as a single-node system, but its multi-node topology and dependence on network communication introduce many problems a single machine never faces, and solving them requires additional mechanisms and protocols. Things to learn here include:

  • Distributed consistency algorithms: PAXOS, Raft, Zab

  • Distributed transactions: 2PC, 3PC, TCC

  • Distributed unique ID generation: Snowflake algorithm, UUID, Taobao TDDL SEQUENCE scheme, Meituan Leaf

  • Consistent hashing (sketched after this list)

  • Extensibility design, design extensible software architecture

  • Distributed file systems: HDFS and FastDFS

  • Microservice architecture design, service registration, service discovery, service routing
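
A compact consistent-hashing sketch, one of the topics above: keys are mapped onto a hash ring so that adding or removing a node only remaps the keys that lived near it. The node names are made up, and real implementations tune the virtual-node count:

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, replicas: int = 100):
        self._ring = []                      # sorted list of (hash, node)
        for node in nodes:
            for i in range(replicas):        # virtual nodes smooth the distribution
                self._ring.append((_hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("user:42"))   # the same key always lands on the same node
```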

Security

Back-end services run on the network and must interact with all kinds of network environments. They work fine under normal conditions, but the Internet is full of malicious attacks against back-end services, so network security is also something a back-end engineer needs to learn. It mainly includes:

  • Web security: CSRF, SQL injection, XSS

  • DDoS prevention

  • Encryption, decryption, and hashing: symmetric encryption, hash algorithms (salted-hash sketch after this list), asymmetric encryption

  • Network isolation: separating internal and external networks, jump servers (bastion hosts)

  • Authentication: OAuth 2.0, OIDC, 2FA, single sign-on (SSO)
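
A small sketch of the hashing item above: salted password hashing with PBKDF2 from Python's standard library (the iteration count is an illustrative choice; never store plain passwords):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for the given password."""
    salt = salt or os.urandom(16)                  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)   # constant-time comparison

salt, stored = hash_password("s3cret!")
print(verify("s3cret!", salt, stored))   # True
print(verify("guess", salt, stored))     # False
```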

Monitoring and Statistics

How do we know a back-end service's running status and health? For a small toy project, monitoring and statistics are unnecessary: local logs will do. But for a mature, large back-end system, monitoring, statistics, and tracing are essential. No monitoring, no operations.

Open-source monitoring software includes Prometheus, Zabbix, and Open-Falcon.

Tracing systems are equally important, especially now that everything is microservices: a single request passes through multiple different microservices, which poses new challenges for distributed tracing. Observability mainly covers three aspects:

  • Logging is used to record debugging information or error information of programs and monitor the running status of the system and services

  • Metrics: aggregatable measurements such as counters and latencies, collected to monitor the performance of systems and services (a small sketch follows this list)

  • Tracing: following in detail how a service request is processed across the distributed components (distributed tracing)
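
A minimal taste of the logging/metrics side: a decorator that times a handler and logs a latency metric. In production you would export such metrics to a system like Prometheus; all the names here are made up:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def timed(handler):
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return handler(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("metric handler=%s latency_ms=%.1f",
                         handler.__name__, elapsed_ms)
    return wrapper

@timed
def get_order(order_id: int) -> dict:
    time.sleep(0.05)                       # stand-in for real work
    return {"order_id": order_id}

get_order(7)   # logs: metric handler=get_order latency_ms=50.x
```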

The industry has mature open source monitoring and tracing software too: SkyWalking, Pinpoint, Zipkin, and CAT (open-sourced by Dianping). Large companies, however, generally run self-developed systems; Tencent, for example, has several in-house monitoring and call-chain tracing systems.

Search engines

We’re talking about full-text search engines. What are full-text search engines?

Full-text search is the mainstream search-engine technique in wide use today. It works like this: an indexing program scans every word in every document and builds an index recording each word's frequency and positions; when a user queries, the retrieval program searches the pre-built index and returns the matching results. The process is like looking up a word in a dictionary through its index table.

Data is divided into structured data and unstructured data.

Data such as database tables is structured; data with variable length and no fixed format, such as HTML, XML, and documents, is unstructured. Unstructured data is also called full-text data and can be searched by full-text retrieval.

The two major full-text search engines, Solr and Elasticsearch, are both built on Lucene. What to learn about search engines:

  • How search engines work: inverted indexes make efficient full-text retrieval possible (toy example after this list)

  • Lucene: Apache Lucene is an open-source full-text search engine toolkit.

  • Elasticsearch principle and usage

  • Principle and use of Solr
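
A toy inverted index to make the principle concrete: map each word to the set of document ids containing it, then answer a query by intersecting those sets:

```python
from collections import defaultdict

docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "the quick dog jumps",
}

# Build: word -> set of document ids containing it
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query: str) -> set:
    """AND-query: documents containing every word in the query."""
    words = query.split()
    return set.intersection(*(index[w] for w in words)) if words else set()

print(search("quick dog"))   # {3}
```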

Big data

Big data, also called massive data, is the term for data sets so large or complex that traditional data-processing applications cannot handle them. As back-end services accumulate users and data, they generate masses of data worth mining; analyzing and using it can feed back into online decisions, optimize operating strategies, and create value from the data.

Massive data can also be defined as a large amount of unstructured or structured data from a variety of sources.

The concept of big data in the field of software development began with the data warehouse in the 1990s, and the processing of big data also led to the development of various statistics and processing technologies for massive data.

The main technical points to learn are data storage, offline analysis, and streaming computation:

  • Big data storage: Hadoop framework, HDFS, HBase, YARN architecture, Apache Kudu

  • Offline analysis: Hive, MapReduce, and Spark (word-count sketch after this list)

  • Streaming computation: Flink, Storm, Kafka Streams, Spark Streaming
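
To make the offline-analysis model concrete, here is the classic MapReduce word count expressed in plain Python; a real job would run the map and reduce phases across many machines on Hadoop or Spark, but the idea is the same:

```python
from collections import Counter
from itertools import chain

lines = [
    "to be or not to be",
    "to code is to learn",
]

# Map phase: each line -> (word, 1) pairs
mapped = chain.from_iterable(((w, 1) for w in line.split()) for line in lines)

# Shuffle + reduce phase: sum the counts per word
counts = Counter()
for word, n in mapped:
    counts[word] += n

print(counts.most_common(3))   # [('to', 4), ('be', 2), ...]
```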

Virtualization

Virtualization refers to using virtualization technology to virtualize a computer into multiple logical computers.

Benefits of Virtualization

  • Flexibility: Running multiple operating systems simultaneously on the same hardware

  • Agility: an operating system can be moved between physical servers as easily as moving a file or an image.

  • Fault tolerance: When a physical server fails, the management software automatically migrates the instance to an available server, even without sensing the physical hardware failure.

  • Lower costs: You no longer need too many physical servers, and the cost of operation and maintenance is reduced.

Common virtualization technologies: KVM, Xen, OpenVZ, and Docker

Although many back-end services today are deployed in Docker containers, Docker shares the host system's kernel, and all containers share part of the runtime, so its isolation is weaker than that of virtualization technologies like KVM. KVM and Docker each have their own use cases and will coexist for a long time to come.

OpenStack is a tool for managing VMs (virtual machines); Kubernetes, K8s for short, is used to manage containers.

Middleware

You’ve probably heard the term “middleware” a lot in back-end development. What is middleware? Take a look at the Wiki definition:

Middleware abstracts functionality that application software commonly needs and provides it as shared services: procedure calls, distributed components, message queues, transactions, security, linking, business processes, network concurrency, HTTP servers, Web services, and so on, offered either combined or separately by different vendors' products.

Zhong Cuihao, a researcher at the Institute of Software of the Chinese Academy of Sciences, defined middleware as "platform + communication". This definition restricts middleware to software used in distributed systems, and it also distinguishes middleware from the application software of real-world applications.

Generally speaking, middleware is software that abstracts the generic capabilities of a distributed system and offers them as services. It shields the complexity of the underlying operating systems, provides a unified development environment, and reduces the complexity of building software. Because it sits between the operating system and application software, serving the latter, it is called middleware.

Common open source middleware includes the following types, which can be combined to build a complete distributed background service system:

  • Web Server middleware, Nginx, OpenResty, Tomcat…

  • Cache middleware, server cache including Redis, Memcached…

  • Message queue middleware, Kafka, RabbitMQ, ActiveMQ…

  • RPC framework, Tars, Dubbo, gRPC, Thrift

  • Database middleware, Sharding JDBC

  • Log system middleware: the ELK/B stack, a solution consisting of Elasticsearch, Logstash, Kibana, and Beats (the letters stand for these four products)

  • Configuration center middleware: Apollo, ZooKeeper for unified configuration management

  • API gateway, open source projects Tyk, Kong, Zuul, Orange…

Version control

Large software projects contain a huge amount of code, and organizing and managing source code and versions effectively is what version control systems were born for. A version control system, typically SVN or Git, tracks and maintains changes to source code, files, and configuration, and provides control over those changes.

Common version control systems fall into two broad categories: centralized and distributed. For a back-end developer, using a version control system is a basic skill that must be mastered, but these tools are easy to become familiar with: start with the common everyday operations.

  • Centralized version control: SVN

  • Distributed version control: Git

Tools

The tools recommended here for back-end (or any software) development are editors and IDEs.

Editors

Development on Linux is hard to imagine without Vim or Emacs, two popular editors whose fans form two camps. Vim in particular has a somewhat steep learning curve; with its plugins and configuration, some Vim fans use it as a full IDE. Once mastered it can greatly improve your working efficiency, and it is worth learning.

Besides Vim, I highly recommend learning Markdown syntax, a lightweight markup language that lets you write readable, well-formatted documents in plain text without fiddling with layout. Recommended Markdown editors: Typora, the Markdown editor in Youdao Cloud Notes, and VS Code's Markdown plugin.

IDE

Editors are fine for small projects, but back-end development usually means large software engineering projects that an editor alone cannot manage. That's when you need a professional integrated development environment.

An IDE (Integrated Development Environment) is an application that provides a program Development Environment and typically includes tools such as a code editor, compiler, debugger, and graphical user interface.

As the saying goes, sharpening the axe doesn't delay the chopping: pick your weapon before you go off to program the world. I recommend the JetBrains family and VS Code. JetBrains makes IDEs for many languages; Java's IntelliJ IDEA, in particular, enjoys an excellent reputation. Part of the product line:

  • CLion – a cross-platform C/C++ IDE that supports C++11, C++14, libc++, and Boost.

  • GoLand – an integrated development environment for the Go language.

  • IntelliJ IDEA – released in 2001; an intelligent Java IDE with a special focus on maximizing a programmer's writing efficiency.

  • PhpStorm – a PHP IDE.

  • PyCharm – a Python IDE that integrates the Django framework.

  • AppCode – a Swift and Objective-C IDE.

Visual Studio Code (VS Code for short) is a free code editor developed by Microsoft for Windows, Linux, and macOS. It supports testing, has built-in Git version control, and offers IDE-like features such as code completion (IntelliSense), snippets, and refactoring. Users can customize themes, keyboard shortcuts, and other settings, and an extension manager is built in.

In the 2019 Stack Overflow developer survey, VS Code was named the most popular development environment for developers.

Testing

Software engineers not only write code, but also do testing. Software testing and software development go hand in hand. Testing ensures that our code is more robust and maintainable.

TDD, short for Test-Driven Development, is a core practice and technique of agile development as well as a design methodology. The idea of TDD is to write the unit-test code before the functional code, and let the tests determine what product code needs to be written. You should know the following testing techniques and methods (a small TDD example follows the list):

  • Unit testing

  • Stress testing

  • Full-link (end-to-end) testing

  • A/B testing, grayscale publishing, blue-green deployment
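
A minimal TDD-flavored sketch with Python's built-in unittest: the tests state the desired behavior first, and the function exists to satisfy them (parse_amount is a made-up example):

```python
import unittest

def parse_amount(text: str) -> int:
    """Parse a yuan string like '12.50' into integer cents (fen)."""
    yuan, _, fen = text.partition(".")
    return int(yuan) * 100 + int((fen + "00")[:2])

class ParseAmountTest(unittest.TestCase):
    def test_whole_yuan(self):
        self.assertEqual(parse_amount("12"), 1200)

    def test_yuan_and_fen(self):
        self.assertEqual(parse_amount("12.50"), 1250)

if __name__ == "__main__":
    unittest.main()
```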

Learning order

To sum up, there is still a lot to learn about back-end technology, and it cannot be accomplished overnight.

Having read this far, if you're a beginner you may want to ask: where exactly do I start? If you don't know where to begin, here is a priority ordering of the learning route above, based on my own experience and on advice from senior engineers at big companies. Follow it in sequence. The sorting rule:

The more stars, the higher the ranking and the more important the topic: give it learning time first.

Computer fundamentals: 5 stars

Linux: 5 stars

Databases: 5 stars

Design patterns: 5 stars

Tools: 5 stars

Middleware: 4 stars

Distributed systems: 4 stars

High concurrency, high availability, high performance: 4 stars

Search engines: 4 stars

Testing: 3 stars

Monitoring and statistics: 3 stars

Virtualization: 3 stars

Security: 3 stars

Big data: 3 stars

Choosing a language

Careful readers will have noticed that the learning path discussed so far never names a specific programming language. That is not because languages are unimportant; a language is the prerequisite for many of these technologies. Metaphorically speaking, programming languages are the bricks and mortar from which big projects are built.

In fact, while studying the topics above you will naturally encounter middleware and open source projects written in a variety of languages. Whatever language you use for back-end development is fine, and there is no shortage of excellent open source frameworks to learn from; the key is to have a clear learning route. The major back-end languages are Java, C++, PHP, Python, and Go. Which one?

If you are still in school and have plenty of time, I suggest trying several: how do you know one doesn't suit you before you've tried it? Then pick one language for in-depth study based on your preference and your intended job direction.

If you're already working, adapt to the team and learn whichever language your product or business is developed in. Once you've mastered the common back-end technologies above, picking up a programming language is just a matter of course, and it goes very, very quickly.

If this article was useful to you, please share and follow to support it!