In this article, we continue our analysis of the HashMap source code in the JDK (continued from the previous part).

Special note: Because the red-black tree structure used by HashMap is fairly complex, the red-black-tree-related operations will be covered in a separate blog post; a link to that article will be added at the end!

5. Member methods of the HashMap

5.1 The put(K key, V value) Method

The put method is quite complex; its implementation steps are as follows:

  1. Calculate the bucket to which the key is mapped using the hash value.

  2. If there is no collision at that bucket, insert the element into the bucket directly.

  3. If a collision occurs, the collision needs to be handled:

    A. If the bucket resolves conflicts with a red-black tree, insert the data using the red-black tree method;

    B. Otherwise, the traditional chaining method is used. If the chain length reaches the critical value, the chain is transformed into a red-black tree;

  4. If a duplicate key exists in the bucket, replace that key's value with the new value.

  5. If the size is greater than threshold, expand the capacity.
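The externally visible effect of steps 4 and 5 can be checked from the public API. As a quick illustration (the class and method names here are mine, not from the JDK), put returns the previous value when a key is overwritten:

```java
import java.util.HashMap;

public class PutSemanticsDemo {
    // Returns what put() gives back when the key already exists.
    static Integer overwrite() {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("k", 1);        // first insert: no previous mapping, returns null
        return map.put("k", 2); // same key: 2 replaces 1, the old value 1 is returned
    }

    public static void main(String[] args) {
        System.out.println(overwrite()); // prints 1
    }
}
```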

Specific methods are as follows:

public V put(K key, V value) {
    // Call hash(key) to compute the hash value of the key
    return putVal(hash(key), key, value, false, true);
}

Description:

  1. HashMap only exposes put for adding elements. putVal is an internal method called by put and is not offered to users, so let’s focus on the putVal method.
  2. Notice that putVal receives hash(key) for the key, so let’s first see how the hash method is implemented.
static final int hash(Object key) {
    int h;
    // If key is null, the hash value is 0.
    // Otherwise, call the key's hashCode() method, assign the result to h,
    // and xor it with h shifted right 16 bits (unsigned) to get the final hash.
    // This is done to make the computed hashes more dispersed.
    // Why more dispersed? Because the more dispersed the hashes are, the shorter
    // each bucket's list is, the fewer red-black trees are created later, and the
    // more efficient the map is.
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

As you can see, HashMap supports a null key (its hash is simply 0), whereas Hashtable calls hashCode() on the key directly, so a null key there would throw an exception.

Reading the hash method above:

The hash value of the key is computed as follows: the method first takes the key's hashCode and assigns it to h, then xors h with h shifted right 16 bits (unsigned), and the result is the final hash value.

The hash value calculated by the above hash function is used in putVal:

final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
    ...
    if ((p = tab[i = (n - 1) & hash]) == null) {
        // n is the array length (here 16). When the length is 2 to the nth power,
        // (length - 1) & hash is the bucket index.
        // The formula is equivalent to hash % length, i.e. the hash modulo the array length.
        // For example: hash % 32 == hash & (32 - 1).
        ...
    }
}
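The equivalence claimed in that comment can be verified directly. This small sketch (class and method names are mine) checks that, for a power-of-two length, the bitwise AND picks the same bucket as the modulo operation:

```java
public class IndexDemo {
    // For a power-of-two table length n, (n - 1) & hash selects the same
    // bucket as hash % n (for non-negative hash values).
    static boolean sameBucket(int hash, int n) {
        return (hash & (n - 1)) == (hash % n);
    }

    public static void main(String[] args) {
        System.out.println(sameBucket(12345, 16)); // true
        System.out.println(12345 & 15);            // bucket index 9
    }
}
```

Note that the equivalence only holds when n is a power of two, which is exactly why HashMap keeps its capacity a power of two.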

The calculation process is as follows:

Description:

  1. key.hashCode() returns the key's hashCode, say some arbitrarily generated value.
  2. n is the array length, initialized to 16.
  3. & (bitwise AND): for each binary digit, the result is 1 only if both bits are 1, otherwise 0.
  4. ^ (bitwise XOR): for each binary digit, the result is 0 if the two bits are the same and 1 if they differ.

The final result 0101 means the element lands in the bucket with index 5.

To put it simply:

The high 16 bits stay unchanged, and the low 16 bits are xored with the high 16 bits (treat the hashCode as 32-bit binary; the first 16 bits are the high half and the last 16 bits are the low half).

Question: Why do you do this?

If n, the array size, is small, say 16, then n - 1 is 1111 in binary. ANDing such a value directly with the hashCode uses only the last four bits of the hash. If the high bits of the hash vary a lot while the low bits vary little, hash conflicts become likely, so mixing the high bits into the low bits lets both take part in the computation and alleviates the problem.
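To see the effect concretely, here is a small sketch (the class name is mine) with two hash codes that differ only in their high 16 bits: without the xor they collide in a 16-slot table, with it they do not:

```java
public class SpreadDemo {
    // The same transformation used by HashMap's hash(): xor the high 16 bits
    // of the hashCode into the low 16 bits.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int a = 0x10000, b = 0x20000; // differ only in the high bits
        // Without spreading, only the last four bits are used: both map to bucket 0
        System.out.println((a & 15) + " " + (b & 15));
        // With spreading, the high bits influence the index: buckets 1 and 2
        System.out.println((spread(a) & 15) + " " + (spread(b) & 15));
    }
}
```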

Now, let’s take a look at the putVal method and see what it does.

Main parameters:

  • hash: the hash value of the key
  • key: the original key
  • value: the value to store
  • onlyIfAbsent: if true, do not change an existing value
  • evict: if false, the table is in creation mode

The putVal method source code is as follows:

public V put(K key, V value) {
	return putVal(hash(key), key, value, false, true);
}

static final int hash(Object key) {
    int h;
    // If the key is null the hash value is 0; otherwise the key's hashCode() is
    // called and its high 16 bits are xored into the low 16 bits, so that the
    // high bits also participate in the addressing even when the array is small.
    // The addressing formula is: (length - 1) & hash
    // This makes the computed results more scattered and less prone to hash collisions
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    /* 1) transient Node<K,V>[] table; is the array that stores the elements of the map.
       2) (tab = table) == null assigns table to tab and checks whether it is null.
       3) If the table is null or its length is 0, resize() initializes the array,
          and the initialized length is assigned to n.
       4) Right after initialization, every slot of tab is null. */
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    /* 1) i = (n - 1) & hash computes the array index assigned to i, i.e. which
          bucket the element is stored in.
       2) p = tab[i = (n - 1) & hash] reads the node at that position into p.
       3) If p is null, the bucket is empty: there is no hash collision, so a new
          node is created from the key-value pair and placed directly into the bucket. */
    if ((p = tab[i = (n - 1) & hash]) == null)
        // Create a new node and store it in the bucket
        tab[i] = newNode(hash, key, value, null);
    else {
        // Otherwise tab[i] is not null: this position already holds a value
        Node<K,V> e; K k;
        /* Compare the first element in the bucket (the node in the array) with the key.
           1) p.hash == hash: p.hash is the hash recorded in the existing node.
           2) (k = p.key) == key: assign the existing key to k and compare the two
              keys by reference.
           3) key != null && key.equals(k): reaching here means the references
              differ, so if the key being added is not null, call equals to compare
              the two keys by content. */
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            /* The two elements have equal hashes and equal keys; record the whole
               old element in e */
            e = p;
        // Hash or key differs; check whether p is a red-black tree node
        else if (p instanceof TreeNode)
            // Put it into the tree
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        // Otherwise it is a linked-list node
        else {
            /* 1) For a linked list, iterate to the last node and insert there.
               2) While iterating, also check whether the list contains a duplicate key. */
            for (int binCount = 0; ; ++binCount) {
                /* e = p.next gets the next element of p. If it is null, p has no
                   next element: we reached the tail without finding a duplicate
                   key, so the HashMap does not contain this key yet; append the
                   key-value pair to the list. */
                if ((e = p.next) == null) {
                    /* Create a new node and append it to the tail. The fourth
                       argument next is null because the node becomes the last
                       element of the list. This matches the linked-list structure:
                       new elements are always added at the back. */
                    p.next = newNode(hash, key, value, null);
                    /* After appending, check whether the node count has reached
                       the critical value TREEIFY_THRESHOLD (8); if so, convert
                       the list to a red-black tree. binCount starts at 0 for the
                       first list node, so binCount == 7 means the list has 8
                       nodes (9 elements counting the one in the array slot). */
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        // Convert to a red-black tree
                        treeifyBin(tab, hash);
                    // Break the loop
                    break;
                }
                /* e = p.next is not null, so this is not the last element yet.
                   Check whether this list node's key equals the key being inserted. */
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    /* Equal: break out of the loop instead of continuing the
                       comparison; the if (e != null) block below then replaces
                       the value. */
                    break;
                /* Not equal to the current node: move on to the next node; this
                   works together with e = p.next above to traverse the list. */
                p = e;
            }
        }
        // e != null means a node with the same key (same hash) was found
        if (e != null) {
            // Record the old value of e
            V oldValue = e.value;
            // Replace if onlyIfAbsent is false or the old value is null
            if (!onlyIfAbsent || oldValue == null)
                // Replace the old value with the new value
                // e.value is the old value; value is the new value
                e.value = value;
            // Callback after access
            afterNodeAccess(e);
            // Return the old value
            return oldValue;
        }
    }
    // Record the number of modifications
    ++modCount;
    // Check whether the actual size exceeds threshold; if so, resize
    if (++size > threshold)
        resize();
    // Callback after insertion
    afterNodeInsertion(evict);
    return null;
}

(1) Compute the hash value of the key.

(2) If the number of buckets (array) is 0, initialize the bucket;

(3) If there is no element in the bucket where key resides, insert it directly.

(4) If the key of the first element in the bucket equals the key to be inserted, the element has been found; proceed to step (9);

(5) If the first element is a tree node, putTreeVal() is called to find the element or insert the tree node;

(6) If it is not in the above three cases, the linked list corresponding to the bucket is traversed to find whether the key exists in the linked list;

(7) If the element corresponding to the key is found, proceed to the subsequent process (9);

(8) If no element corresponding to the key is found, insert a new node at the end of the list and check whether the list needs to be converted to a red-black tree;

(9) If an element corresponding to the key was found, decide whether the old value needs to be replaced, and return the old value directly;

(10) If an element is inserted, the number is increased by 1 and the need for expansion is determined;

5.2 The Expansion Method resize()

Capacity expansion mechanism:

  1. When does it need to be expanded?

    Array expansion occurs when the number of elements in a HashMap exceeds the array size (array length)*loadFactor(default value: 0.75).

  2. What is the HashMap expansion?

    Capacity expansion requires rehashing and traversing all elements of the hash table, which is time-consuming, so avoid triggering resize when writing programs.

    When a HashMap expands, it uses a very clever rehash: because the capacity doubles each time, the mask used in (n - 1) & hash gains only one more significant bit, so each node either stays at its original index or moves to the "original index + old capacity" position.

    For example, when we expand from 16 to 32, the specific changes are as follows:



So after the element positions are recomputed, since n has doubled, the mask n - 1 covers one more high bit, and the new index changes as follows.



Description:

Assume 5 is the originally computed index. This verifies the description above: after expansion, each node is either at its original index or at the "original index + old capacity" position.

Therefore, when expanding the HashMap, we do not need to recompute the hash; we only need to check whether the newly examined high bit of the hash value is 0 or 1: 0 means the index is unchanged, and 1 means the new index is "old index + old capacity". The resize from 16 to 32 is shown below:



Thanks to this clever rehash, the time to recompute hash values is saved, and since the newly examined bit can be regarded as randomly 0 or 1, the resize process evenly spreads the nodes that previously collided between the old and new positions. This guarantees that the number of nodes in each bucket after the rehash is at most the number in the original bucket, so no worse hash conflicts arise.
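The index rule described above can be sketched as follows (the class and helper names are mine); the check confirms that testing the single bit e.hash & oldCap yields the same bucket as re-addressing with the doubled mask:

```java
public class ResizeIndexDemo {
    // After doubling from oldCap to 2 * oldCap, a node's new index is either
    // its old index (the newly examined hash bit is 0) or old index + oldCap
    // (the bit is 1). oldCap must be a power of two.
    static int newIndex(int hash, int oldCap) {
        int oldIndex = hash & (oldCap - 1);
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16;
        // hash 0b0_0101 (bit 16 clear) stays at index 5;
        // hash 0b1_0101 (bit 16 set) moves to 5 + 16 = 21
        for (int hash : new int[]{0b0_0101, 0b1_0101}) {
            System.out.println(newIndex(hash, oldCap) == (hash & (2 * oldCap - 1)));
        }
    }
}
```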

Interpreting the resize method source code

Here is a concrete implementation of the code:

/**
 * Why does it need to be expanded?
 * To relieve the chained-lookup cost caused by hash conflicts; expansion alleviates this problem.
 */
final Node<K,V>[] resize() {
    // oldTab: indicates the hash table array before expansion
    Node<K,V>[] oldTab = table;
    // oldCap: indicates the length of the table array before capacity expansion
    // Return 0 if the current hash array is null, otherwise its length
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    // oldThr: indicates the threshold before capacity expansion. The default value is 12(16 x 0.75).
    int oldThr = threshold;
    // newCap: the size of the expanded table hash array
    // newThr: Conditions for expansion after expansion (new expansion threshold)
    int newCap, newThr = 0;
    
    // If oldCap > 0
    // If this condition is true, the hash table array in the hashMap has been initialized
    // Start to calculate the size after expansion
    if (oldCap > 0) {
        // If the size of the table array before capacity expansion reaches the upper threshold, the capacity will not be expanded
        // Set the capacity expansion condition to the maximum value of int
        if (oldCap >= MAXIMUM_CAPACITY) {
            // Change the threshold to the maximum value of int
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        
        // If the size of the table array before expansion does not exceed the maximum value, it is doubled
        // (newCap = oldCap << 1) < MAXIMUM_CAPACITY The capacity must be smaller than the maximum capacity after being doubled
        // oldCap >= DEFAULT_INITIAL_CAPACITY The original hash table array length must be greater than or equal to 16
        // If oldCap is less than the default initial capacity of 16, such as the default capacity passed in as 8, the following code is not executed
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            // The new capacity expansion threshold is doubled
            newThr = oldThr << 1; // double threshold
    }
    // If old hash table array length oldCap == 0
    // Indicates that the hash table in the hashMap has not been initialized, which is null
    // If oldThr is greater than 0, the value is directly assigned
    /* oldThr > 0 here means one of these constructors was used:
       1. new HashMap(initCap, loadFactor);
       2. new HashMap(initCap);
       3. new HashMap(map); // where the passed-in map already has data */
    else if (oldThr > 0) // The old threshold is assigned to the new array length
        newCap = oldThr;
    // If old hash table array length oldCap == 0
    // Indicates that the hash table in the hashMap has not been initialized, which is null
    // In this case, oldThr == 0
    else { // Use the default values
        newCap = DEFAULT_INITIAL_CAPACITY; // 16
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY); // 12
    }
    // If the new capacity expansion threshold newThr has not been assigned by this point, then
    // The new maximum resize limit needs to be calculated
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    // Assign the new threshold newThr to threshold
    threshold = newThr;
    
    @SuppressWarnings({"rawtypes","unchecked"})
    // Create a new hash table
    // newCap is the new array length --> 32
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    
    // Note: if the hashMap had data before the expansion, the old table is not null
    if (oldTab != null) {
        // Move the data of each bucket to the new hash table
        // Iterate over each bucket of the old hash table, recalculating the new positions of elements in the bucket
        for (int j = 0; j < oldCap; ++j) {
            // The current node
            Node<K,V> e;
            // Note: the bucket holds data, which may be
            // 1. a single node, 2. a linked list, or 3. a red-black tree
            if ((e = oldTab[j]) != null) {
                // The original data is assigned null to facilitate GC collection
                oldTab[j] = null;
                // Check whether the array has the next reference (if it is a single data)
                if (e.next == null)
                    // There is no next reference, it is not a linked list,
                    // There is only a single key-value pair on the bucket,
                    // You can put data directly into the new hash table
                    // the e.hash & (newCap - 1) addressing formula yields one of two index results:
                    // 1. The index position is the same as in the old hash table,
                    // 2. The index position in the old hash table is I + oldCap
                    newTab[e.hash & (newCap - 1)] = e;
                // The bucket has formed a red-black tree
                else if (e instanceof TreeNode)
                    // Call the corresponding method to split the tree
                    // I will write a separate blog for detailed analysis of the red-black tree
                    // Red-black tree correlation can be skipped first
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                // The bucket bit has formed a linked list
                else { // Use linked lists to handle conflicts
                    // Low list:
                    // The subscript position of the expanded array is used when the subscript position of the current array is the same
                    Node<K,V> loHead = null, loTail = null;
                    // High-order list: used when the new index equals
                    // the current index j plus the pre-expansion length oldCap
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    
                    // Use the principle explained above to calculate the new position of the node
                    do {
                        // Keep a reference to the next node before relinking e
                        next = e.next;
                        // If (e.hash & oldCap) == 0, this node keeps its index
                        // after the resize.
                        // Example:
                        //   hash1     -> ...0 1111
                        //   oldCap=16 -> ...1 0000
                        //   e.hash & oldCap == 0, so the index in the expanded
                        //   array is j, the same as in the current array:
                        //   use the low-order list
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        // Example:
                        //   hash2     -> ...1 1111
                        //   oldCap=16 -> ...1 0000
                        //   e.hash & oldCap != 0, so the index in the expanded
                        //   array is j + oldCap (the current index plus the
                        //   pre-expansion length): use the high-order list
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    
                    // Put the low list in the bucket
                    if (loTail != null) {
                        loTail.next = null;
                        // Index position = current array subscript position j
                        newTab[j] = loHead;
                    }
                    // Put the high list in the bucket
                    if (hiTail != null) {
                        hiTail.next = null;
                        // index position = current array subscript position j + array length oldCap before expansion
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    // Returns a new hash table
    return newTab;
}

(1) If the default constructor is used, the default value is initialized when the element is inserted for the first time, the capacity is 16, and the expansion threshold is 12;

(2) If a non-default constructor was used, the initial capacity on first insertion equals the expansion threshold, which the constructor sets to the smallest power of 2 not less than the passed-in capacity;

(3) If the old capacity is greater than 0, the new capacity is equal to twice the old capacity, but not more than the maximum capacity 2 ^ 30, and the threshold for new capacity expansion is twice the threshold of the old capacity expansion;

(4) Create a bucket with a new capacity;

(5) Move elements. The original linked list is divided into two linked lists. The low linked list is stored in the location of the original bucket, and the high linked list is moved to the location of the original bucket plus the location of the old capacity;

5.3 The Delete Method remove()

The delete method first finds the element's position. If the bucket holds a linked list, the list is traversed to find and unlink the element. If the bucket holds a red-black tree, the tree is searched and the node deleted; if the tree's node count then falls below 6, it is converted back into a linked list.
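From the caller's side, the two public overloads behave differently. This quick demo (the class name is mine) shows that remove(key, value) only deletes when both the key and the value match:

```java
import java.util.HashMap;

public class RemoveDemo {
    static boolean conditionalRemove() {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("k", 1);
        boolean miss = map.remove("k", 2); // value does not match: nothing removed
        boolean hit  = map.remove("k", 1); // key and value match: entry removed
        return !miss && hit && map.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(conditionalRemove()); // prints true
    }
}
```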

The remove() method:

Both overloads delegate to the removeNode method:
// Delete by key
public V remove(Object key) {
        Node<K,V> e;
        return (e = removeNode(hash(key), key, null, false, true)) == null ?
            null : e.value;
    }
// Delete according to key,value
@Override
public boolean remove(Object key, Object value) {
    return removeNode(hash(key), key, value, true, true) != null;
}

The removeNode() method:

final Node<K,V> removeNode(int hash, Object key, Object value,
                           boolean matchValue, boolean movable) {
    // Parameters:
    // matchValue: true when deleting by both key and value
    // movable: can be ignored for now
    
    // tab: references the current hashMap's hash table
    // p: the current node element
    // n: the current hash table array length
    // index: the addressing result
    Node<K,V>[] tab; Node<K,V> p; int n, index;
    // Find the position based on the hash;
    // proceed only if the bucket mapped to the current key is not empty
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (p = tab[index = (n - 1) & hash]) != null) {
        // The bucket contains data that needs to be searched and deleted
        // node: the element found for removal
        // e: the next element of the current node
        // k, v: key and value
        Node<K,V> node = null, e; K k; V v;
        
        // First case: the element at the bucket position is the one to delete;
        // point node at it
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            node = p;
        // The first element in the bucket is not the one we are looking for,
        // and e = p.next is not null: the bucket node has a next node
        else if ((e = p.next) != null) {
            // The bucket holds either a linked list or a red-black tree
            
            // Check whether a red-black tree has formed in the bucket
            if (p instanceof TreeNode)
                // Conflicts are handled by the red-black tree:
                // get the node to delete from the tree
                node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
            // The bucket holds a linked list
            else {
                // Traverse the list to find the node to delete
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        node = e;
                        break;
                    }
                    p = e;
                } while ((e = e.next) != null);
            }
        }
        // When matchValue is true, also compare the found node's value
        // with the value passed in
        if (node != null && (!matchValue || (v = node.value) == value ||
                             (value != null && value.equals(v)))) {
            // If the bucket holds a red-black tree, delete via the tree method
            if (node instanceof TreeNode)
                ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
            // If the node is the head of the list
            else if (node == p)
                // Unlink it from the bucket
                tab[index] = node.next;
            // Otherwise the node is in the middle or at the end of the list
            else
                // Point the previous element p past the node being deleted
                p.next = node.next;
            // Record the number of modifications
            ++modCount;
            // Decrease the element count
            --size;
            afterNodeRemoval(node);
            return node;
        }
    }
    return null;
}

5.4 The Lookup Method get()

A lookup method that finds the value by the element’s key.

The code is as follows:

public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

The get method mainly calls the getNode method, with the following code:

final Node<K,V> getNode(int hash, Object key) {
    // tab: references the current hashMap's hash table
    // first: the head element of the bucket
    // e: temporary node element
    // n: length of the table array
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    // If the hash table is not empty and the bucket corresponding to the key is not empty
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        // First case: the element at the bucket position is the data we want
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        // The first element is not the target and first.next is not null:
        // the bucket holds more than one element, either a linked list or a red-black tree
        if ((e = first.next) != null) {
            // Second case: the bucket has been upgraded to a red-black tree
            if (first instanceof TreeNode)
                // Call the tree-related method to get the target element
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            // Third case: the bucket holds a linked list;
            // loop through it to check whether the key exists
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    // Return null if not found
    return null;
}

Summary:

Get method implementation steps:

A. Obtain the bucket to which the key is mapped using the hash value.

B. If the key of the first node in the bucket equals the key being searched for, return its value directly.

C. Otherwise, examine the subsequent nodes in the bucket:

  • If the subsequent node is a red-black tree node, obtain the value based on the key by calling the red-black tree method
  • If the subsequent node is a linked list node, it iterates through the list to get a value based on the key
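The null-key case noted earlier in the hash method also shows up here: since hash(null) is 0, a null key is stored and found in bucket 0. A small sketch (the class name is mine):

```java
import java.util.HashMap;

public class GetDemo {
    static String lookups() {
        HashMap<String, String> map = new HashMap<>();
        map.put(null, "null-key"); // hash(null) == 0, stored in bucket 0
        map.put("a", "va");
        // get returns null both for a missing key and for a key mapped to null
        return map.get(null) + "," + map.get("a") + "," + map.get("missing");
    }

    public static void main(String[] args) {
        System.out.println(lookups()); // prints null-key,va,null
    }
}
```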

6. Several ways to traverse a HashMap

  1. Iterate over keys and values separately
for (String key : map.keySet()) {
	System.out.println(key);
}
for (Object value : map.values()) {
	System.out.println(value);
}
  2. Use an Iterator over the entry set
Iterator<Map.Entry<String, Object>> iterator = map.entrySet().iterator();
while (iterator.hasNext()) {
    Map.Entry<String, Object> mapEntry = iterator.next();
    System.out.println(mapEntry.getKey() + "-" + mapEntry.getValue());
}
  3. Using keySet with get (not recommended)
Set<String> keySet = map.keySet();
for (String str : keySet) {
	System.out.println(str + "-" + map.get(str));
}

Description:

According to the Alibaba Java development manual, this approach is not recommended because it iterates twice: keySet obtains an Iterator once, and each get call searches the map again, reducing performance.

  4. Since JDK 8, use the default forEach method of the Map interface:
default void forEach(BiConsumer<? super K,? super V> action) 
// Method in the BiConsumer interface:
	void accept(T t, U u) // Performs this operation on the given arguments: t - the first input argument, u - the second input argument

Traversal code:

HashMap<String, String> map = new HashMap<>();
map.put("001", "zhangsan");
map.put("002", "lisi");
map.forEach((key, value) -> {
    System.out.println(key + "-" + value);
});

7. To summarize

(1) HashMap is a hash table, which adopts the storage structure of array + linked list + red-black tree;

(2) The default initial capacity of HashMap is 16 (1 << 4), the default load factor is 0.75f, and the capacity is always a power of 2;

(3) When HashMap is expanded, its capacity is doubled each time;

(4) When the number of buckets is less than 64, a long list is not treeified; the table is expanded instead;

(5) When the number of buckets reaches 64 and the number of elements in a single bucket reaches 8, the list is converted into a red-black tree;

(6) When the number of elements in a single bucket drops to 6 or fewer, the red-black tree is converted back into a linked list;

(7) HashMap is a non-thread-safe container;

(8) The average time complexity of HashMap lookups and insertions is O(1);
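Point (2) above relies on rounding any requested capacity up to a power of two. Here is a sketch of that rounding, in the spirit of HashMap's internal tableSizeFor (simplified here, not copied from the JDK):

```java
public class CapacityDemo {
    // Round cap up to the next power of two by smearing the highest set bit
    // of (cap - 1) into all lower positions, then adding one.
    static int nextPowerOfTwo(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return n + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTwo(17)); // 32
        System.out.println(nextPowerOfTwo(16)); // 16
    }
}
```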

HashMap red-black tree operations, which will be updated soon…