Preface:

With the recent "Golden September, Silver October" job-hopping season, many readers have been interviewing or are between jobs. Given the current state of the domestic job market, preparing actively before an interview and reviewing the entire Java knowledge system becomes very important. I can say quite responsibly: how well you review will directly affect your chances of landing the job.

But many of you don't have the right materials to review the entire Java body of knowledge, or don't know where to start.

I happened to come across this material in an online group. It is very solid technically, both in its coverage of the whole Java knowledge system and from the interview perspective. The article below walks you through it, so you can recognize your own weak spots.

1. JVM

The JVM is a virtual computer that runs Java bytecode. It includes a bytecode instruction set, a set of registers, a stack, a garbage-collected heap, and a method area. The JVM runs on top of the operating system and does not interact with the hardware directly.
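As a quick illustration, the managed heap mentioned above can be observed from inside a running program through the standard `java.lang.Runtime` API (a minimal sketch; the class name is my own):

```java
// Inspect the JVM-managed heap from inside a program via java.lang.Runtime.
public class JvmHeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Max heap (bytes):   " + rt.maxMemory());   // ceiling set by -Xmx
        System.out.println("Total heap (bytes): " + rt.totalMemory()); // currently reserved
        System.out.println("Free heap (bytes):  " + rt.freeMemory());  // unused within total
        System.out.println("Processors:         " + rt.availableProcessors());
    }
}
```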



2. Java collections

The collection classes live in the java.util package and fall into three main types: Set, List, and Map.

  1. Collection: the root interface of List, Set, and Queue.
  2. Iterator: iterates over the elements of any Collection.
  3. Map: the root interface for key-value mapping tables.
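A minimal sketch of the three families plus Iterator, using only the JDK (class and variable names are illustrative):

```java
import java.util.*;

// List keeps insertion order and duplicates; Set keeps unique elements;
// Map associates keys with values; Iterator walks any Collection uniformly.
public class CollectionsDemo {
    public static Map<String, Integer> countWords(List<String> words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) counts.merge(w, 1, Integer::sum); // Map as frequency table
        return counts;
    }

    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "a")); // duplicates kept
        Set<String> unique = new HashSet<>(list);                          // duplicates dropped
        System.out.println(unique.size());                                 // 2

        System.out.println(countWords(list).get("a"));                     // 2

        Iterator<String> it = list.iterator();                             // explicit iteration
        while (it.hasNext()) System.out.print(it.next());                  // prints "aba"
        System.out.println();
    }
}
```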



3. Java multi-threaded concurrency




4. Java basics

1. Java exception classification and handling

2. Java reflection

3. Java annotations

4. Java inner classes

5. Java generics

6. Java serialization (creating reusable Java objects)

7. Java copying (cloning)



5. Spring principle

Spring is a comprehensive, one-stop framework for enterprise application development, spanning the presentation, business, and persistence layers, and it can still be integrated seamlessly with other frameworks.

1. Spring features

2. Spring core components

3. Common Spring modules

4. Spring packages

5. Common Spring annotations

6. Spring third-party integration

7. Spring IoC principles

8. Spring AOP principles

9. Spring MVC principles

10. Spring Boot principles

11. JPA principles

12. MyBatis caching

13. Tomcat architecture



6. Microservices

1. Service registration and discovery

2. API gateway

3. Configuration center

4. Event scheduling (Kafka)

5. Service tracing (starter-sleuth)

6. Service circuit breaking (Hystrix)

7. API management



7. Netty and RPC

1. Netty principles

2. Netty high performance

3. Netty RPC implementation

4. RMI implementation

5. Protocol Buffers

6. Thrift



8. Networking

1. The seven-layer network architecture (OSI model)

2. TCP/IP principles

3. TCP three-way handshake / four-way teardown

4. HTTP principles

5. CDN principles



9. Logging

1. Slf4j

2. Log4j

3. Logback

4. ELK



10. Zookeeper

Zookeeper is a distributed coordination service that can be used for service discovery, distributed locking, distributed leader election, and configuration management. It exposes a tree structure similar to the Linux file system (think of it as a lightweight in-memory file system, suitable only for storing small amounts of information, not large numbers of files or large files) and provides a watch-and-notify mechanism on each node.

1. Zookeeper overview

2. How Zookeeper works (atomic broadcast)

3. The four types of Znodes (directory nodes)



11. Kafka

Kafka is a high-throughput, distributed, publish/subscribe messaging system, originally developed at LinkedIn and written in Scala; it is now an Apache open-source project.

1. Kafka data storage design

2. Producer design

3. Consumer design

12. RabbitMQ

RabbitMQ is an open source implementation of AMQP developed in the Erlang language.

1. The RabbitMQ architecture

2. Exchange types

13. HBase

HBase is a distributed, column-oriented open-source database (more precisely, column-family-oriented). HDFS provides HBase with reliable underlying data storage, MapReduce provides it with high-performance computing, and Zookeeper provides it with stable coordination services and a failover mechanism. HBase is a distributed database solution that uses large numbers of inexpensive machines to store and read massive amounts of data at high speed.

1. Column storage

2. HBase core concepts

3. HBase core architecture

4. HBase write logic

5. HBase vs Cassandra

14. MongoDB

MongoDB is an open-source database system based on distributed file storage, written in C++. Adding more nodes keeps server performance up under high load. MongoDB aims to provide a scalable, high-performance data-storage solution for web applications. It stores data as documents; the data structure consists of key => value pairs. MongoDB documents are similar to JSON objects: field values can contain other documents, arrays, and arrays of documents.

15. Cassandra

Apache Cassandra is a highly scalable, high-performance distributed NoSQL database. It is designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure: data is placed on different machines with multiple replicas.

1. Data model

2. Consistent hashing and virtual nodes

3. The Gossip protocol

4. Data replication

5. Data write request and coordinator

6. Data read request and background repair

7. Data storage (CommitLog, MemTable, SSTable)

8. Secondary index (generate RowKey for value summary to be indexed)

9. Data read and write

16. Design patterns

1. Design principles

2. Factory method pattern

3. Abstract factory pattern

4. Singleton pattern

5. Builder pattern

6. Prototype pattern

7. Adapter pattern

8. Decorator pattern

9. Proxy pattern

10. Facade pattern

11. Bridge pattern

12. Composite pattern

13. Flyweight pattern

14. Strategy pattern

15. Template method pattern

16. Observer pattern

17. Iterator pattern

18. Chain of responsibility pattern

19. Command pattern

20. Memento pattern

21. State pattern

22. Visitor pattern

23. Mediator pattern

24. Interpreter pattern
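Of the patterns above, the singleton comes up most often in interviews. A minimal thread-safe sketch (the class name is illustrative) using the initialization-on-demand holder idiom:

```java
// Initialization-on-demand holder idiom: lazy, thread-safe singleton
// without explicit locking -- the JVM guarantees that class initialization
// runs at most once, so Holder.INSTANCE is created on first access only.
public class ConfigRegistry {
    private ConfigRegistry() {}                      // forbid outside instantiation

    private static class Holder {
        static final ConfigRegistry INSTANCE = new ConfigRegistry();
    }

    public static ConfigRegistry getInstance() {
        return Holder.INSTANCE;                      // same instance for every caller
    }
}
```

An enum singleton is an even shorter alternative and additionally survives serialization.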

17. Load balancing

Building on the existing network structure, load balancing provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen data-processing capacity, and improve the flexibility and availability of the network.

1. Layer-4 vs Layer-7 load balancing

2. Load-balancing algorithms/policies

3. LVS

4. Keepalived

5. Nginx reverse-proxy load balancing

6. HAProxy
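As a taste of the balancing algorithms mentioned above, here is a minimal round-robin selector (class name and setup are my own; real balancers also add weights and health checks):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Round-robin selection: hand out servers in rotation, which spreads
// requests evenly when all servers have equal capacity.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicLong counter = new AtomicLong(); // thread-safe rotation

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    public String next() {
        // floorMod keeps the index non-negative even if the counter wraps around
        long idx = Math.floorMod(counter.getAndIncrement(), (long) servers.size());
        return servers.get((int) idx);
    }
}
```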

18. Database

1. Storage engines

2. Indexes

3. The three normal forms

4. Database transactions

5. Stored procedures (SQL statement sets for specific functions)

6. Triggers (programs that execute automatically)

7. Database concurrency strategies

8. Database locks

9. Redis-based distributed locks

10. Partitioned tables

11. Two-phase commit protocol

12. Three-phase commit protocol

13. Flexible transactions

14. CAP

19. Consistency algorithms

1.Paxos

2.Zab

3.Raft

4.NWR

5.Gossip

6. Consistent Hash
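Consistent hashing from the list above can be sketched with a TreeMap as the hash ring (the hash function, class, and names here are simplified illustrations, not a production implementation):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Consistent hashing sketch: nodes and keys share one hash ring; a key is
// served by the first node clockwise from its position, so adding or
// removing a node only remaps the keys that node owned. Virtual nodes
// smooth the distribution across the ring.
public class ConsistentHashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private final int virtualNodes;

    public ConsistentHashRing(int virtualNodes) { this.virtualNodes = virtualNodes; }

    private int hash(String s) {
        return s.hashCode() & 0x7fffffff; // non-negative, illustrative only
    }

    public void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++) ring.put(hash(node + "#" + i), node);
    }

    public void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++) ring.remove(hash(node + "#" + i));
    }

    public String nodeFor(String key) {
        // first virtual node at or clockwise after the key's position
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }
}
```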

20. Java algorithms

1. Binary search

2. Bubble sort

3. Insertion sort

4. Quicksort

5. Shell sort

6. Merge sort

7. Bucket sort

8. Radix sort

9. Backtracking

10. Shortest path algorithms

11. Maximum subarray

12. Longest common subsequence

13. Minimum spanning tree
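For example, binary search from the list above, in its standard iterative form:

```java
// Iterative binary search over a sorted array: returns the index of the
// target, or -1 when absent. Runs in O(log n) by halving the range.
public class BinarySearch {
    public static int search(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;          // avoids (lo + hi) overflow
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1; // target is in the right half
            else hi = mid - 1;                      // target is in the left half
        }
        return -1;
    }
}
```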

21. Data structures

1. Stack

2. Queue

3. Linked list

4. Hash table

5. Binary search tree

6. Red-black tree

7. B-tree

8. Graph

22. Encryption algorithm

1.AES

2.RSA

3.CRC

4.MD5
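For instance, the JDK computes MD5 (and SHA-256, etc.) through the `MessageDigest` API; the helper class below is illustrative. Note that MD5 remains acceptable for checksums but is cryptographically broken, so prefer SHA-256 where attackers are a concern:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// MD5 digest of a string via the JDK's MessageDigest, rendered as hex.
public class Md5Demo {
    public static String md5Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5"); // "SHA-256" also works here
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b)); // two hex digits per byte
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is guaranteed on every JDK
        }
    }
}
```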

23. Distributed caching

1. Cache avalanche

2. Cache penetration

3. Cache breakdown

4. Cache warming

5. Cache updates

6. Cache degradation
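One common defence against cache penetration is caching the "miss" itself, so repeated lookups of keys that do not exist in the database never reach it twice. The sketch below uses an in-memory map in place of Redis, and all names are illustrative (a real cache would also give the placeholder a short TTL):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside with null caching: a placeholder is stored for keys that are
// absent from the database, so they cannot be used to bypass the cache.
public class NullCachingCache {
    private static final String NULL_MARKER = "__NULL__"; // stands in for "key absent"
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> database;      // returns null when key is absent

    public NullCachingCache(Function<String, String> database) {
        this.database = database;
    }

    public Optional<String> get(String key) {
        String cached = cache.get(key);
        if (cached != null) {                              // cache hit, including cached misses
            return NULL_MARKER.equals(cached) ? Optional.empty() : Optional.of(cached);
        }
        String fromDb = database.apply(key);               // cache miss: hit the DB once
        cache.put(key, fromDb == null ? NULL_MARKER : fromDb);
        return Optional.ofNullable(fromDb);
    }
}
```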

24. Hadoop

Hadoop is a big-data solution that provides a distributed system infrastructure. Its core components are HDFS and MapReduce; YARN was introduced in Hadoop 2.0. HDFS provides data storage, while MapReduce handles data computation.
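The map → shuffle → reduce flow can be imitated in plain Java with streams, using word count, the canonical MapReduce example. This sketch has no Hadoop dependency and only illustrates the idea, not the Hadoop API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Word count: map each line to words, group (shuffle) equal words together,
// reduce each group by counting its members.
public class WordCountSketch {
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.split("\\s+")))  // map: line -> words
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(                      // shuffle: group by word
                        w -> w,
                        Collectors.counting()));                     // reduce: count per group
    }
}
```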

1. HDFS

2. MapReduce

3. Life cycle of Hadoop MapReduce jobs

25. Spark

Spark provides a comprehensive, unified framework for big-data processing across data sets and data sources of different natures: batch data or real-time streaming data, text data, graph data, and so on.

1. Concepts

2. Core architecture

3. Core components

4. Spark programming model

5. Spark computing model

6. Spark execution flow

7. Spark RDD flow

8. Spark RDD

26. Storm

1. Cluster architecture

2. Programming model (Spout -> Tuple -> Bolt)

3. Topology

4.Storm Streaming Grouping

27. YARN

YARN is a framework for resource management and task scheduling. It consists of ResourceManager (RM), NodeManager (NM), and ApplicationMaster (AM). ResourceManager monitors, allocates, and manages all resources. ApplicationMaster is responsible for scheduling and coordinating each specific application; NodeManager is responsible for the maintenance of each node. For all applications, RM has absolute control and the right to allocate resources. Each AM negotiates resources with the RM and communicates with the NodeManager to execute and monitor tasks.

1.ResourceManager

2.NodeManager

3. ApplicationMaster

4. YARN operation process

28. Machine learning

1. Decision tree

2. Random forest

3. Logistic regression

4. SVM

5. Naive Bayes

6. K-nearest neighbors

7. K-means

8. AdaBoost

9. Neural networks

10. Markov

29. Cloud computing

1. SaaS

2. PaaS

3. IaaS

4. OpenStack

Conclusion:

All of the core knowledge points above were organized by a Meituan architect, and every one of them comes with a detailed introduction and analysis. If you need the source file, you can follow the WeChat public account "Java Programmers Gathering Place" to get it.