钢格栅板理论重量是多少【怎么计算】
时间:16-10-15
钢格栅板不同规格型号的理论重量不同,一般按照理论重量计算公式计算。下面是钢格栅板不同规格型号的理论重量表。
以 G205/30/100(30mm中心距)为例,每平方米理论重量为28.85kg,计算步骤如下:①先按扁钢中心距折算每平方米扁钢的根数,小数四舍五入(此处为36根);②再按横杆间距折算每平方米麻花钢横杆的根数(此处为9根);③扁钢重量 = 每米扁钢理论重量 × 扁钢根数;④麻花钢重量 = 0.22 × 9 = 1.98kg;⑤每平方米理论重量 = 扁钢重量 + 麻花钢重量。不足一平方米的钢梯踏步板、沟盖板,也按同样的方法计算。
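按上面的步骤,可以把这套算法写成一小段Java代码便于核算(仅为示意:其中扁钢每米理论重量按常见的20×5扁钢理论值0.785kg/m代入,属于假设的示例参数,计算结果与厂家表中数值可能略有出入,实际请以厂家给出的公式和数据为准):

// 钢格栅板每平方米理论重量估算(示意代码,参数为假设值,非厂家标准)
public class GratingWeight {
    /**
     * @param flatBarKgPerM   扁钢每米理论重量(kg/m),示例取 0.785
     * @param flatBarCount    每平方米扁钢根数(按中心距折算并取整)
     * @param crossBarKgPerM  麻花钢每米理论重量(kg/m),文中取 0.22
     * @param crossBarCount   每平方米横杆根数,文中取 9
     */
    static double weightPerSquareMeter(double flatBarKgPerM, int flatBarCount,
                                       double crossBarKgPerM, int crossBarCount) {
        double flatBarWeight = flatBarKgPerM * flatBarCount;    // ③ 扁钢重量
        double crossBarWeight = crossBarKgPerM * crossBarCount; // ④ 麻花钢重量,0.22×9=1.98
        return flatBarWeight + crossBarWeight;                  // ⑤ 两者相加
    }

    public static void main(String[] args) {
        // 以文中的 G205/30/100 为例:36 根扁钢、9 根麻花钢
        System.out.println(weightPerSquareMeter(0.785, 36, 0.22, 9));
    }
}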
麻花钢钢格板理论重量表(原表按扁钢高度30/40mm、横杆间距100/50mm分为四组,列出G203~G655各规格对应的横杆规格(mm)与中心距重量(kg);插接钢格板另有一张重量表。各规格的具体重量数值请参照厂家提供的理论重量表。)
HashMap(JDK 1.8)源码解析

package java.util;

import sun.misc.SharedSecrets;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
/**
 * HashMap是常用的Java集合之一,是基于哈希表的Map接口的实现。与HashTable的主要区别为不支持同步,且允许null作为key和value。
 * HashMap非线程安全,即任一时刻可以有多个线程同时写HashMap,可能会导致数据的不一致。
 * 如果需要满足线程安全,可以用Collections的synchronizedMap方法使HashMap具有线程安全的能力,或者使用ConcurrentHashMap。
 * 在JDK1.6中,HashMap采用数组+链表实现,即使用链表处理冲突,同一hash值的元素都存储在一个链表里。
 * 但是当位于同一个桶中的元素较多,即hash值相等的元素较多时,通过key值依次查找的效率较低。
 * 而JDK1.8中,HashMap采用数组+链表+红黑树实现,当链表长度超过阈值8时,将链表转换为红黑树,这样大大减少了查找时间。
 * 原本Map.Entry接口的实现类Entry改名为了Node,转化为红黑树时改用另一种实现TreeNode。
 */
public class HashMap<K, V> extends AbstractMap<K, V>
        implements Map<K, V>, Cloneable, Serializable {

    private static final long serialVersionUID = 362498820763181265L;
    /**
     * 默认的初始容量(容量为HashMap中槽的数目)是16,且实际容量必须是2的整数次幂。
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * 最大容量(必须是2的幂且小于2的30次方,传入容量过大将被这个值替换)
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * 默认装填因子0.75,如果当前键值对个数 >= 当前容量*装填因子,进行扩容(resize)操作
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * JDK1.8 新加,Entry链表最大长度,当桶中节点数目大于该长度时,将链表转成红黑树存储;
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * JDK1.8 新加,当桶中节点数小于该长度,将红黑树转为链表存储;
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * 桶可能被转化为树形结构的最小容量。当哈希表的大小超过这个阈值,才会把链式结构转化成树型结构,否则仅采取扩容来尝试减少冲突。
     * 应该至少是4*TREEIFY_THRESHOLD,来避免扩容和树形结构化之间的冲突。
     */
    static final int MIN_TREEIFY_CAPACITY = 64;
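    /*
     * 示例(理解用,非JDK源码):在默认参数下,
     *   容量 capacity = 16,loadFactor = 0.75,
     *   扩容阈值 threshold = 16 * 0.75 = 12,
     * 即插入第 13 个键值对时触发 resize,容量翻倍为 32,新阈值变为 24。
     */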
    /**
     * JDK1.6用Entry描述键值对,JDK1.8中用Node代替Entry
     */
    static class Node<K, V> implements Map.Entry<K, V> {
        // hash存储key的hashCode
        final int hash;
        // final:一个键值对的key不可改变
        final K key;
        V value;
        // 指向下个节点的引用
        Node<K, V> next;

        // 构造函数
        Node(int hash, K key, V value, Node<K, V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey() { return key; }

        public final V getValue() { return value; }

        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }
    /* ---------------- Static utilities -------------- */

    /**
     * HashMap中键值对的存储形式为链表节点,扰动后落在同一个桶的节点用链表组织
     * hash方法分为两步:
     * 1.取key的hashCode
     * 2.将hashCode的高16位与低16位异或,得到最终的hash值
     * (真正确定桶下标的"取模"在后面以 (n - 1) & hash 的形式完成)
     */
    static final int hash(Object key) {
        int h;
        // 先计算key的hashCode,h = key.hashCode()
        // h >>> 16 表示对h无符号右移16位,高位补0,然后h与h >>> 16按位异或
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
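    /*
     * 示例(理解用,非JDK源码):假设 key.hashCode() = 0x12345678,
     *   h            = 0001 0010 0011 0100 0101 0110 0111 1000
     *   h >>> 16     = 0000 0000 0000 0000 0001 0010 0011 0100
     *   h ^ (h>>>16) = 0001 0010 0011 0100 0100 0100 0100 1100
     * 高16位的信息被混入低16位;由于桶下标只取 (n - 1) & hash 的低位,
     * 这样可以让高位也参与散列,减少只在低位相同的key之间的碰撞。
     */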
    /**
     * 如果参数x实现了Comparable接口,返回参数x的Class对象,否则返回null
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c;
            Type[] ts, as;
            Type t;
            ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (int i = 0; i < ts.length; ++i) {
                    if (((t = ts[i]) instanceof ParameterizedType) &&
                        ((p = (ParameterizedType) t).getRawType() ==
                         Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * 如果x的类型为kc,则返回k.compareTo(x),否则返回0
     */
    @SuppressWarnings({"rawtypes", "unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable) k).compareTo(x));
    }

    /**
     * 返回大于等于cap的最小的2的整数次幂
     */
    static final int tableSizeFor(int cap) {
        // 先移位再或运算,最终保证返回值是2的整数幂
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
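    /*
     * 示例(理解用,非JDK源码):tableSizeFor(13) 的计算过程:
     *   n = 13 - 1 = 12        -> 0000 1100
     *   n |= n >>> 1           -> 0000 1110
     *   n |= n >>> 2           -> 0000 1111
     *   n |= n >>> 4 / 8 / 16  -> 0000 1111(不再变化)
     *   返回 n + 1 = 16
     * 即把最高位1以下的所有位全部置1,再加1就得到大于等于cap的最小2的幂;
     * 先减1是为了让cap本身是2的幂时(如16)返回它自己而不是翻倍。
     */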
    /* ---------------- Fields -------------- */

    // 哈希桶数组,分配的时候,table的长度总是2的幂
    transient Node<K, V>[] table;

    // HashMap将数据转换成set的另一种存储形式,这个变量主要用于迭代功能
    transient Set<Map.Entry<K, V>> entrySet;

    // 实际存储的键值对数量,size()方法返回的就是这个值,isEmpty()也是判断该值是否为0
    transient int size;

    // hashMap结构被改变的次数,用于fail-fast机制
    transient int modCount;

    // HashMap的扩容阈值,存储的Node键值对超过这个数量时,容量自动扩为原来的二倍
    int threshold;

    // HashMap的负载因子,可计算出当前table长度下的扩容阈值:threshold = loadFactor * table.length
    final float loadFactor;
    /* ---------------- Public operations -------------- */

    /**
     * 使用指定的初始化容量initialCapacity和加载因子loadFactor构造一个空HashMap
     *
     * @param initialCapacity 初始化容量
     * @param loadFactor      加载因子
     * @throws IllegalArgumentException 如果指定的初始化容量为负数或者加载因子为非正数
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                    initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                    loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * 使用指定的初始化容量initialCapacity和默认加载因子DEFAULT_LOAD_FACTOR(0.75)构造一个空HashMap
     *
     * @param initialCapacity 初始化容量
     * @throws IllegalArgumentException 如果指定的初始化容量为负数
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * 使用默认的初始化容量(16)和默认加载因子DEFAULT_LOAD_FACTOR(0.75)构造一个空HashMap
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * 使用指定Map m构造新的HashMap,加载因子为默认的DEFAULT_LOAD_FACTOR(0.75)
     *
     * @param m 指定的map
     * @throws NullPointerException 如果指定的map是null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }
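    /*
     * 使用示例(理解用,非JDK源码):已知要放入约1000个键值对时,可以按
     *   initialCapacity = (int) (1000 / 0.75f) + 1 = 1334
     * 来构造map,tableSizeFor会把它规整为2048,从而避免放入过程中多次resize:
     *   Map<String, Integer> map = new HashMap<>(1334);
     * 注意构造器里只是把threshold暂存为tableSizeFor(initialCapacity),
     * 真正的table要等第一次put时在resize()中才会分配。
     */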
* Map.putAll and Map constructor的实现需要的方法
* 将m的键值对插入本map中
* @param m
* @param evict 初始化map时使用false,否则使用true
final void putMapEntries(Map&? extends K, ? extends V& m, boolean evict) {
int s = m.size();
//如果参数map不为空
if (s & 0) {
// 判断table是否已经初始化
if (table == null) { // pre-size
// 未初始化,s为m的实际元素个数
float ft = ((float) s / loadFactor) + 1.0F;
int t = ((ft & (float) MAXIMUM_CAPACITY) ?
(int) ft : MAXIMUM_CAPACITY);
// 计算得到的t大于阈值,则初始化阈值
if (t & threshold)
//根据容量初始化临界值
threshold = tableSizeFor(t);
// 已初始化,并且m元素个数大于阈值,进行扩容处理
} else if (s & threshold)
//扩容处理
// 将m中的所有元素添加至HashMap中
for (Map.Entry&? extends K, ? extends V& e : m.entrySet()) {
K key = e.getKey();
V value = e.getValue();
putVal(hash(key), key, value, false, evict);
* 返回map中键值对映射的个数
* @return map中键值对映射的个数
    public int size() {
        return size;
    }

    /**
     * 如果map中没有键值对映射,返回true
     *
     * @return 如果map中没有键值对映射,返回true
     */
    public boolean isEmpty() {
        return size == 0;
    }
* 返回指定的key映射的value,如果不存在该key的映射则返回null(注意value本身也可能为null,仅凭返回值无法区分这两种情况)
* get可以分为三个步骤:
* 1.通过hash(Object key)方法计算key的哈希值hash。
* 2.通过getNode( int hash, Object key)方法获取node。
* 3.如果node为null,返回null,否则返回node.value。
* @see #put(Object, Object)
    public V get(Object key) {
        Node<K, V> e;
        // 根据key及其hash值查询node节点,如果存在,则返回该节点的value值
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }
* 根据key的哈希值和key获取对应的节点
* getNode可分为以下几个步骤:
* 1.如果哈希表为空,或key对应的桶为空,返回null
* 2.如果桶中的第一个节点就和指定参数hash和key匹配上了,返回这个节点。
* 3.如果桶中的第一个节点没有匹配上,而且有后续节点
* 3.1如果当前的桶采用红黑树,则调用红黑树的get方法去获取节点
* 3.2如果当前的桶不采用红黑树,即桶中节点结构为链式结构,遍历链表,直到key匹配
* 4.找到节点则返回该节点,否则返回null。
* @param hash 指定参数key的哈希值
* @param key
指定参数key
* @return 返回node,如果没有则返回null
final Node&K, V& getNode(int hash, Object key) {
Node&K, V&[]
Node&K, V& first,
//如果哈希表不为空,而且key对应的桶上不为空
if ((tab = table) != null && (n = tab.length) & 0 &&
(first = tab[(n - 1) & hash]) != null) {
//如果桶中的第一个节点就和指定参数hash和key匹配上了
if (first.hash == hash && // always check first node
((k = first.key) == key || (key != null && key.equals(k))))
//返回桶中的第一个节点
//如果桶中的第一个节点没有匹配上,而且有后续节点
if ((e = first.next) != null) {
//如果当前的桶采用红黑树,则调用红黑树的get方法去获取节点
if (first instanceof TreeNode)
return ((TreeNode&K, V&) first).getTreeNode(hash, key);
//如果当前的桶不采用红黑树,即桶中节点结构为链式结构
//遍历链表,直到key匹配
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
} while ((e = e.next) != null);
//如果哈希表为空,或者没有找到节点,返回null
return null;
* 如果map中含有key为指定参数key的键值对,返回true
* @param key 指定参数key
* @return 如果map中含有key为指定参数key的键值对,返回true
public boolean containsKey(Object key) {
return getNode(hash(key), key) != null;
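    /*
     * 示例(理解用,非JDK源码):HashMap允许value为null,
     * 因此get(key) == null不能说明key不存在,需要用containsKey区分:
     *   HashMap<String, String> m = new HashMap<>();
     *   m.put("a", null);
     *   m.get("a");          // null
     *   m.get("b");          // 也是null
     *   m.containsKey("a");  // true
     *   m.containsKey("b");  // false
     */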
* 将指定参数key和指定参数value插入map中,如果key已经存在,那就替换key对应的value
* put(K key, V value)可以分为三个步骤:
* 1.通过hash(Object key)方法计算key的哈希值。
* 2.通过putVal(hash(key), key, value, false, true)方法实现功能。
* 3.返回putVal方法返回的结果。
* @param key
* @param value 指定value
* @return 如果value被替换,则返回旧的value,否则返回null。当然,可能key对应的value就是null
public V put(K key, V value) {
// 倒数第二个参数false:表示允许旧值替换
// 最后一个参数true:表示HashMap不处于创建模式
return putVal(hash(key), key, value, false, true);
* Map.put和其他相关方法的实现需要的方法
* putVal方法可以分为下面的几个步骤:
* 1.如果哈希表为空,调用resize()创建一个哈希表。
* 2.如果指定参数hash在表中没有对应的桶,即为没有碰撞,直接将键值对插入到哈希表中即可。
* 3.如果有碰撞,遍历桶,找到key映射的节点
* 3.1桶中的第一个节点就匹配了,将桶中的第一个节点记录起来。
* 3.2如果桶中的第一个节点没有匹配,且桶中结构为红黑树,则调用红黑树对应的方法插入键值对。
* 3.3如果不是红黑树,那么就肯定是链表。遍历链表,如果找到了key映射的节点,就记录这个节点,退出循环。如果没有找到,在链表尾部插入节点。插入后,如果链的长度大于TREEIFY_THRESHOLD这个临界值,则使用treeifyBin方法把链表转为红黑树。
* 4.如果找到了key映射的节点,且节点不为null
* 4.1记录节点的value。
* 4.2如果参数onlyIfAbsent为false,或者oldValue为null,替换value,否则不替换。
* 4.3返回记录下来的节点的value。
* 5.如果没有找到key映射的节点(2、3步中讲了,这种情况会插入到hashMap中),插入节点后size会加1,这时要检查size是否大于临界值threshold,如果大于会使用resize方法进行扩容。
* @param hash
指定参数key的哈希值
* @param key
指定参数key
* @param value
指定参数value
* @param onlyIfAbsent 如果为true,即使指定参数key在map中已经存在,也不会替换value
* @param evict
如果为false,数组table在创建模式中
* @return 如果value被替换,则返回旧的value,否则返回null。当然,可能key对应的value就是null。
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node&K, V&[]
Node&K, V&
//如果哈希表为空,调用resize()创建一个哈希表,并用变量n记录哈希表长度
if ((tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).
* 如果指定参数hash在表中没有对应的桶,即为没有碰撞
* Hash函数,(n - 1) & hash 计算key将被放置的槽位
* 当n为2的幂时,(n - 1) & hash 本质上就是hash % n,但位运算更快
if ((p = tab[i = (n - 1) & hash]) == null)
//直接将键值对插入到map中即可
tab[i] = newNode(hash, key, value, null);
else {// 桶中已经存在元素
Node&K, V&
// 比较桶中第一个元素(数组中的结点)的hash值相等,key相等
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
// 将第一个元素赋值给e,用e来记录
// 当前桶中无该键值对,且桶是红黑树结构,按照红黑树结构插入
else if (p instanceof TreeNode)
e = ((TreeNode&K, V&) p).putTreeVal(this, tab, hash, key, value);
// 当前桶中无该键值对,且桶是链表结构,按照链表结构插入到尾部
for (int binCount = 0; ; ++binCount) {
// 遍历到链表尾部
if ((e = p.next) == null) {
p.next = newNode(hash, key, value, null);
// 检查链表长度是否达到阈值,达到将该槽位节点组织形式转为红黑树
if (binCount &= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash);
// 链表节点的&key, value&与put操作&key, value&相同时,不做重复操作,跳出循环
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
// 找到或新建一个key和hashCode与插入元素相等的键值对,进行put操作
if (e != null) { // existing mapping for key
// 记录e的value
V oldValue = e.
* onlyIfAbsent为false或旧值为null时,允许替换旧值
* 否则无需替换
if (!onlyIfAbsent || oldValue == null)
// 访问后回调
afterNodeAccess(e);
// 返回旧值
return oldV
// 更新结构化修改信息
// 键值对数目超过阈值时,进行rehash
if (++size & threshold)
// 插入后回调
afterNodeInsertion(evict);
return null;
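    /*
     * 使用示例(理解用,非JDK源码):put的返回值是被替换掉的旧value:
     *   HashMap<String, Integer> m = new HashMap<>();
     *   m.put("k", 1);         // 返回null(之前没有映射)
     *   m.put("k", 2);         // 返回1(旧值被替换)
     *   m.putIfAbsent("k", 3); // 返回2,且不替换(走onlyIfAbsent为true的分支)
     */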
* 对table进行初始化或者扩容。
* 如果table为null,则对table进行初始化
* 如果对table扩容,因为每次扩容都是翻倍,与原来计算(n-1)&hash的结果相比,节点要么就在原来的位置,要么就被分配到"原位置+旧容量"这个位置
* resize的步骤总结为:
* 1.计算扩容后的容量,临界值。
* 2.将hashMap的临界值修改为扩容后的临界值
* 3.根据扩容后的容量新建数组,然后将hashMap的table的引用指向新数组。
* 4.将旧数组的元素复制到table中。
* @return the table
final Node&K, V&[] resize() {
//新建oldTab数组保存扩容前的数组table
Node&K, V&[] oldTab =
//获取原来数组的长度
int oldCap = (oldTab == null) ? 0 : oldTab.
//原来数组扩容的临界值
int oldThr =
int newCap, newThr = 0;
//如果扩容前的容量 & 0
if (oldCap & 0) {
//如果原来的数组长度大于最大值(2^30)
if (oldCap &= MAXIMUM_CAPACITY) {
//扩容临界值提高到正无穷
threshold = Integer.MAX_VALUE;
//无法进行扩容,返回原来的数组
return oldT
//如果现在容量的两倍小于MAXIMUM_CAPACITY且现在的容量大于DEFAULT_INITIAL_CAPACITY
} else if ((newCap = oldCap && 1) & MAXIMUM_CAPACITY &&
oldCap &= DEFAULT_INITIAL_CAPACITY)
//临界值变为原来的2倍
newThr = oldThr && 1;
} else if (oldThr & 0) //如果旧容量 &= 0,而且旧临界值 & 0
//数组的新容量设置为老数组扩容的临界值
newCap = oldT
else { //如果旧容量 &= 0,且旧临界值 &= 0,新容量扩充为默认初始化容量,新临界值为DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY
newCap = DEFAULT_INITIAL_CAPACITY;//新数组初始容量设置为默认值
newThr = (int) (DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);//计算默认容量下的阈值
// 计算新的resize上限
if (newThr == 0) {//在当上面的条件判断中,只有oldThr & 0成立时,newThr == 0
//ft为临时临界值,下面会确定这个临界值是否合法,如果合法,那就是真正的临界值
float ft = (float) newCap * loadF
//当新容量& MAXIMUM_CAPACITY且ft & (float)MAXIMUM_CAPACITY,新的临界值为ft,否则为Integer.MAX_VALUE
newThr = (newCap & MAXIMUM_CAPACITY && ft & (float) MAXIMUM_CAPACITY ?
(int) ft : Integer.MAX_VALUE);
//将扩容后hashMap的临界值设置为newThr
threshold = newT
//创建新的table,初始化容量为newCap
@SuppressWarnings({"rawtypes", "unchecked"})
Node&K, V&[] newTab = (Node&K, V&[]) new Node[newCap];
//修改hashMap的table为新建的newTab
table = newT
//如果旧table不为空,将旧table中的元素复制到新的table中
if (oldTab != null) {
//遍历旧哈希表的每个桶,将旧哈希表中的桶复制到新的哈希表中
for (int j = 0; j & oldC ++j) {
Node&K, V&
//如果旧桶不为null,使用e记录旧桶
if ((e = oldTab[j]) != null) {
//将旧桶置为null
oldTab[j] = null;
//如果旧桶中只有一个node
if (e.next == null)
//将e也就是oldTab[j]放入newTab中e.hash & (newCap - 1)的位置
newTab[e.hash & (newCap - 1)] =
//如果旧桶中的结构为红黑树
else if (e instanceof TreeNode)
//将树中的node分离
((TreeNode&K, V&) e).split(this, newTab, j, oldCap);
//如果旧桶中的结构为链表,链表重排,jdk1.8做的一系列优化
Node&K, V& loHead = null, loTail = null;
Node&K, V& hiHead = null, hiTail = null;
Node&K, V&
//遍历整个链表中的节点
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loTail.next =
} else {// 原索引+oldCap
if (hiTail == null)
hiTail.next =
} while ((e = next) != null);
// 原索引放到bucket里
if (loTail != null) {
loTail.next = null;
newTab[j] = loH
// 原索引+oldCap放到bucket里
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiH
return newTab;
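    /*
     * 示例(理解用,非JDK源码):oldCap = 16扩容到32时,
     * 同一个桶(下标j)中的节点按 (e.hash & oldCap) 是否为0分成两条链:
     *   hash = ...0 0101 (第4位为0) -> 仍在下标 j = hash & 15 的桶
     *   hash = ...1 0101 (第4位为1) -> 移到下标 j + 16 的桶
     * 因为新的下标 hash & 31 只比旧下标多看了oldCap对应的那一位,
     * 所以不需要重新计算完整的桶位置,整条链遍历一次即可拆分完成。
     */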
* 将链表转化为红黑树
final void treeifyBin(Node&K, V&[] tab, int hash) {
Node&K, V&
//如果桶数组table为空,或者桶数组table的长度小于MIN_TREEIFY_CAPACITY,不符合转化为红黑树的条件
if (tab == null || (n = tab.length) & MIN_TREEIFY_CAPACITY)
//如果符合转化为红黑树的条件,而且hash对应的桶不为null
else if ((e = tab[index = (n - 1) & hash]) != null) {
// 红黑树的头、尾节点
TreeNode&K, V& hd = null, tl = null;
//遍历链表
//替换链表node为树node,建立双向链表
TreeNode&K, V& p = replacementTreeNode(e, null);
// 确定树头节点
if (tl == null)
} while ((e = e.next) != null);
//遍历链表插入每个节点到红黑树
if ((tab[index] = hd) != null)
hd.treeify(tab);
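    /*
     * 示例(理解用,非JDK源码):树化需要同时满足两个条件:
     *   1) 某个桶中链表长度达到 TREEIFY_THRESHOLD = 8;
     *   2) table.length >= MIN_TREEIFY_CAPACITY = 64。
     * 例如默认容量16时,即使某个桶里已经挤了8个节点,
     * treeifyBin也只会调用resize()把容量扩到32,而不是建树;
     * 只有表长达到64及以上,才会把链表节点替换为TreeNode并调用treeify。
     */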
* 将参数map中的所有键值对映射插入到hashMap中,如果有碰撞,则覆盖value。
* @param m 参数map
* @throws NullPointerException 如果map为null
public void putAll(Map&? extends K, ? extends V& m) {
putMapEntries(m, true);
* 删除hashMap中key映射的node
* remove方法的实现可以分为三个步骤:
* 1.通过 hash(Object key)方法计算key的哈希值。
* 2.通过 removeNode 方法实现功能。
* 3.返回被删除的node的value。
* @param key 参数key
* @return 如果没有映射到node,返回null,否则返回对应的value
public V remove(Object key) {
Node&K, V&
//根据key来删除node。removeNode方法的具体实现在下面
return (e = removeNode(hash(key), key, null, false, true)) == null ?
* Map.remove和相关方法的实现需要的方法
* removeNode方法的步骤总结为:
* 1.如果数组table为空或key映射到的桶为空,返回null。
* 2.如果key映射到的桶上第一个node的就是要删除的node,记录下来。
* 3.如果桶内不止一个node,且桶内的结构为红黑树,记录key映射到的node。
* 4.桶内的结构不为红黑树,那么桶内的结构就肯定为链表,遍历链表,找到key映射到的node,记录下来。
* 5.如果被记录下来的node不为null,删除该node,size减1。
* 6.返回被删除的node。
* @param hash
key的哈希值
* @param key
指定参数key
* @param value
如果 matchValue 为true,则value也作为确定被删除的node的条件之一,否则忽略
* @param matchValue 如果为true,则value也作为确定被删除的node的条件之一
* @param movable
如果为false,删除node时不会删除其他node
* @return 返回被删除的node,如果没有node被删除,则返回null(针对红黑树的删除方法)
final Node&K, V& removeNode(int hash, Object key, Object value,
boolean matchValue, boolean movable) {
Node&K, V&[]
Node&K, V&
//如果数组table不为空且key映射到的桶不为空
if ((tab = table) != null && (n = tab.length) & 0 &&
(p = tab[index = (n - 1) & hash]) != null) {
Node&K, V& node = null,
//如果桶上第一个node的就是要删除的node
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
//记录桶上第一个node
else if ((e = p.next) != null) {//如果桶内不止一个node
//如果桶内的结构为红黑树
if (p instanceof TreeNode)
//记录key映射到的node
node = ((TreeNode&K, V&) p).getTreeNode(hash, key);
else {//如果桶内的结构为链表
do {//遍历链表,找到key映射到的node
if (e.hash == hash &&
((k = e.key) == key ||
(key != null && key.equals(k)))) {
//记录key映射到的node
} while ((e = e.next) != null);
//如果得到的node不为null且(matchValue为false||node.value和参数value匹配)
if (node != null && (!matchValue || (v = node.value) == value ||
(value != null && value.equals(v)))) {
//如果桶内的结构为红黑树
if (node instanceof TreeNode)
//使用红黑树的删除方法删除node
((TreeNode&K, V&) node).removeTreeNode(this, tab, movable);
else if (node == p)//如果桶的第一个node的就是要删除的node
//删除node
tab[index] = node.
else//如果桶内的结构为链表,使用链表删除元素的方式删除node
p.next = node.
++modC//结构性修改次数+1
--//哈希表大小-1
afterNodeRemoval(node);
return//返回被删除的node
return null;//如果数组table为空或key映射到的桶为空,返回null。
* 删除map中所有的键值对
public void clear() {
Node&K, V&[]
modCount++;
if ((tab = table) != null && size & 0) {
for (int i = 0; i & tab. ++i)
tab[i] = null;
* 如果hashMap中的键值对有一对或多对的value为参数value,返回true
* @param value 参数value
* @return 如果hashMap中的键值对有一对或多对的value为参数value,返回true
public boolean containsValue(Object value) {
Node&K, V&[]
if ((tab = table) != null && size & 0) {
//遍历数组table
for (int i = 0; i & tab. ++i) {
//遍历桶中的node
for (Node&K, V& e = tab[i]; e != null; e = e.next) {
if ((v = e.value) == value ||
(value != null && value.equals(v)))
return true;
return false;
* 返回hashMap中所有key的视图。
* 改变hashMap会影响到set,反之亦然。
* 如果当迭代器迭代set时,hashMap被修改(除非是迭代器自己的remove()方法),迭代器的结果是不确定的。
* set支持元素的删除,通过Iterator.remove、Set.remove、removeAll、retainAll、clear操作删除hashMap中对应的键值对。
* 不支持add和addAll方法。
* @return 返回hashMap中所有key的set视图
public Set&K& keySet() {
Set&K& ks = keyS
if (ks == null) {
ks = new KeySet();
* 内部类KeySet
final class KeySet extends AbstractSet&K& {
public final int size() {
public final void clear() {
HashMap.this.clear();
public final Iterator&K& iterator() {
return new KeyIterator();
public final boolean contains(Object o) {
return containsKey(o);
public final boolean remove(Object key) {
return removeNode(hash(key), key, null, false, true) != null;
public final Spliterator&K& spliterator() {
return new KeySpliterator&&(HashMap.this, 0, -1, 0, 0);
public final void forEach(Consumer&? super K& action) {
Node&K, V&[]
if (action == null)
throw new NullPointerException();
if (size & 0 && (tab = table) != null) {
int mc = modC
for (int i = 0; i & tab. ++i) {
for (Node&K, V& e = tab[i]; e != null; e = e.next)
action.accept(e.key);
if (modCount != mc)
throw new ConcurrentModificationException();
* 返回hashMap中所有value的collection视图
* 改变hashMap会改变collection,反之亦然。
* 如果当迭代器迭代collection时,hashMap被修改(除非是迭代器自己的remove()方法),迭代器的结果是不确定的。
* collection支持元素的删除,通过Iterator.remove、Collection.remove、removeAll、retainAll、clear操作删除hashMap中对应的键值对。
* 不支持add和addAll方法。
* @return 返回hashMap中所有key的collection视图
public Collection&V& values() {
Collection&V& vs =
if (vs == null) {
vs = new Values();
* 内部类Values
final class Values extends AbstractCollection&V& {
public final int size() {
public final void clear() {
HashMap.this.clear();
public final Iterator&V& iterator() {
return new ValueIterator();
public final boolean contains(Object o) {
return containsValue(o);
public final Spliterator&V& spliterator() {
return new ValueSpliterator&&(HashMap.this, 0, -1, 0, 0);
public final void forEach(Consumer&? super V& action) {
Node&K, V&[]
if (action == null)
throw new NullPointerException();
if (size & 0 && (tab = table) != null) {
int mc = modC
for (int i = 0; i & tab. ++i) {
for (Node&K, V& e = tab[i]; e != null; e = e.next)
action.accept(e.value);
if (modCount != mc)
throw new ConcurrentModificationException();
* 返回hashMap中所有键值对的set视图
* 改变hashMap会影响到set,反之亦然。
* 如果当迭代器迭代set时,hashMap被修改(除非是迭代器自己的remove()方法),迭代器的结果是不确定的。
* set支持元素的删除,通过Iterator.remove、Set.remove、removeAll、retainAll、clear操作删除hashMap中对应的键值对。
* 不支持add和addAll方法。
* @return 返回hashMap中所有键值对的set视图
public Set&Map.Entry&K, V&& entrySet() {
Set&Map.Entry&K, V&&
return (es = entrySet) == null ? (entrySet = new EntrySet()) :
* 内部类EntrySet
final class EntrySet extends AbstractSet&Map.Entry&K, V&& {
public final int size() {
public final void clear() {
HashMap.this.clear();
public final Iterator&Map.Entry&K, V&& iterator() {
return new EntryIterator();
public final boolean contains(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry&?, ?& e = (Map.Entry&?, ?&)
Object key = e.getKey();
Node&K, V& candidate = getNode(hash(key), key);
return candidate != null && candidate.equals(e);
public final boolean remove(Object o) {
if (o instanceof Map.Entry) {
Map.Entry&?, ?& e = (Map.Entry&?, ?&)
Object key = e.getKey();
Object value = e.getValue();
return removeNode(hash(key), key, value, true, true) != null;
return false;
public final Spliterator&Map.Entry&K, V&& spliterator() {
return new EntrySpliterator&&(HashMap.this, 0, -1, 0, 0);
public final void forEach(Consumer&? super Map.Entry&K, V&& action) {
Node&K, V&[]
if (action == null)
throw new NullPointerException();
if (size & 0 && (tab = table) != null) {
int mc = modC
for (int i = 0; i & tab. ++i) {
for (Node&K, V& e = tab[i]; e != null; e = e.next)
action.accept(e);
if (modCount != mc)
throw new ConcurrentModificationException();
// JDK8重写的方法
* 通过key映射到对应node,如果没映射到则返回默认值defaultValue
* @param key
* @param defaultValue
* @return key映射到对应的node,如果没映射到则返回默认值defaultValue
public V getOrDefault(Object key, V defaultValue) {
Node&K, V&
return (e = getNode(hash(key), key)) == null ? defaultValue : e.
* 在hashMap中插入参数key和value组成的键值对,如果key在hashMap中已经存在,不替换value
* @param key
* @param value
* @return 如果key在hashMap中不存在,返回旧value
public V putIfAbsent(K key, V value) {
return putVal(hash(key), key, value, true, true);
* 删除hashMap中key为参数key,value为参数value的键值对。如果桶中结构为树,则级联删除
* @param key
* @param value
* @return 删除成功,返回true
public boolean remove(Object key, Object value) {
return removeNode(hash(key), key, value, true, true) != null;
* 使用newValue替换key和oldValue映射到的键值对中的value
* @param key
* @param oldValue
* @param newValue
* @return 替换成功,返回true
public boolean replace(K key, V oldValue, V newValue) {
Node&K, V&
if ((e = getNode(hash(key), key)) != null &&
((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
e.value = newV
afterNodeAccess(e);
return true;
return false;
* 使用参数value替换key映射到的键值对中的value
* @param key
* @param value
* @return 替换成功,返回true
public V replace(K key, V value) {
Node&K, V&
if ((e = getNode(hash(key), key)) != null) {
V oldValue = e.
afterNodeAccess(e);
return oldV
return null;
public V computeIfAbsent(K key,
Function&? super K, ? extends V& mappingFunction) {
if (mappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node&K, V&[]
Node&K, V&
int binCount = 0;
TreeNode&K, V& t = null;
Node&K, V& old = null;
if (size & threshold || (tab = table) == null ||
(n = tab.length) == 0)
n = (tab = resize()).
if ((first = tab[i = (n - 1) & hash]) != null) {
if (first instanceof TreeNode)
old = (t = (TreeNode&K, V&) first).getTreeNode(hash, key);
Node&K, V& e =
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
} while ((e = e.next) != null);
if (old != null && (oldValue = old.value) != null) {
afterNodeAccess(old);
return oldV
V v = mappingFunction.apply(key);
if (v == null) {
return null;
} else if (old != null) {
old.value =
afterNodeAccess(old);
} else if (t != null)
t.putTreeVal(this, tab, hash, key, v);
tab[i] = newNode(hash, key, v, first);
if (binCount &= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
afterNodeInsertion(true);
public V computeIfPresent(K key,
BiFunction&? super K, ? super V, ? extends V& remappingFunction) {
if (remappingFunction == null)
throw new NullPointerException();
Node&K, V&
int hash = hash(key);
if ((e = getNode(hash, key)) != null &&
(oldValue = e.value) != null) {
V v = remappingFunction.apply(key, oldValue);
if (v != null) {
afterNodeAccess(e);
removeNode(hash, key, null, false, true);
return null;
public V compute(K key,
BiFunction&? super K, ? super V, ? extends V& remappingFunction) {
if (remappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node&K, V&[]
Node&K, V&
int binCount = 0;
TreeNode&K, V& t = null;
Node&K, V& old = null;
if (size & threshold || (tab = table) == null ||
(n = tab.length) == 0)
n = (tab = resize()).
if ((first = tab[i = (n - 1) & hash]) != null) {
if (first instanceof TreeNode)
old = (t = (TreeNode&K, V&) first).getTreeNode(hash, key);
Node&K, V& e =
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
} while ((e = e.next) != null);
V oldValue = (old == null) ? null : old.
V v = remappingFunction.apply(key, oldValue);
if (old != null) {
if (v != null) {
old.value =
afterNodeAccess(old);
removeNode(hash, key, null, false, true);
} else if (v != null) {
if (t != null)
t.putTreeVal(this, tab, hash, key, v);
tab[i] = newNode(hash, key, v, first);
if (binCount &= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
afterNodeInsertion(true);
public V merge(K key, V value,
BiFunction&? super V, ? super V, ? extends V& remappingFunction) {
if (value == null)
throw new NullPointerException();
if (remappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node&K, V&[]
Node&K, V&
int binCount = 0;
TreeNode&K, V& t = null;
Node&K, V& old = null;
if (size & threshold || (tab = table) == null ||
(n = tab.length) == 0)
n = (tab = resize()).
if ((first = tab[i = (n - 1) & hash]) != null) {
if (first instanceof TreeNode)
old = (t = (TreeNode&K, V&) first).getTreeNode(hash, key);
Node&K, V& e =
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
} while ((e = e.next) != null);
if (old != null) {
if (old.value != null)
v = remappingFunction.apply(old.value, value);
if (v != null) {
old.value =
afterNodeAccess(old);
removeNode(hash, key, null, false, true);
if (value != null) {
if (t != null)
t.putTreeVal(this, tab, hash, key, value);
tab[i] = newNode(hash, key, value, first);
if (binCount &= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
afterNodeInsertion(true);
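    /*
     * 使用示例(理解用,非JDK源码):merge常用于计数或累加,
     * key不存在时直接放入value,存在时用remappingFunction合并:
     *   Map<String, Integer> counts = new HashMap<>();
     *   for (String w : new String[]{"a", "b", "a"}) {
     *       counts.merge(w, 1, Integer::sum);
     *   }
     *   // counts = {a=2, b=1}
     * 若remappingFunction返回null,则对应的键值对会被删除。
     */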
public void forEach(BiConsumer&? super K, ? super V& action) {
Node&K, V&[]
if (action == null)
throw new NullPointerException();
if (size & 0 && (tab = table) != null) {
int mc = modC
for (int i = 0; i & tab. ++i) {
for (Node&K, V& e = tab[i]; e != null; e = e.next)
action.accept(e.key, e.value);
if (modCount != mc)
throw new ConcurrentModificationException();
public void replaceAll(BiFunction&? super K, ? super V, ? extends V& function) {
Node&K, V&[]
if (function == null)
throw new NullPointerException();
if (size & 0 && (tab = table) != null) {
int mc = modC
for (int i = 0; i & tab. ++i) {
for (Node&K, V& e = tab[i]; e != null; e = e.next) {
e.value = function.apply(e.key, e.value);
if (modCount != mc)
throw new ConcurrentModificationException();
/* ------------------------------------------------------------ */
// 克隆和序列化
* 浅拷贝。
* clone方法虽然生成了新的HashMap对象,新的HashMap中的table数组虽然也是新生成的,但是数组中的元素还是引用以前的HashMap中的元素。
* 这就导致在对HashMap中的元素进行修改的时候,即对数组中元素进行修改,会导致原对象和clone对象都发生改变,但进行新增或删除就不会影响对方,因为这相当于是对数组做出的改变,clone对象新生成了一个数组。
* @return hashMap的浅拷贝
@SuppressWarnings("unchecked")
    public Object clone() {
        HashMap<K, V> result;
        try {
            result = (HashMap<K, V>) super.clone();
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen, since we are Cloneable
            throw new InternalError(e);
        }
        result.reinitialize();
        result.putMapEntries(this, false);
        return result;
    }
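    /*
     * 示例(理解用,非JDK源码):clone是浅拷贝,key/value对象本身不会被复制:
     *   HashMap<String, List<String>> m1 = new HashMap<>();
     *   m1.put("k", new ArrayList<>());
     *   HashMap<String, List<String>> m2 = (HashMap<String, List<String>>) m1.clone();
     *   m2.get("k").add("x");            // m1.get("k")也能看到"x",因为引用相同
     *   m2.put("k2", new ArrayList<>()); // 只影响m2,因为桶数组本身是新建的
     */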
// These methods are also used when serializing HashSets
final float loadFactor() {
return loadF
final int capacity() {
return (table != null) ? table.length :
(threshold & 0) ? threshold :
DEFAULT_INITIAL_CAPACITY;
* 序列化hashMap到ObjectOutputStream中
* 将hashMap的总容量capacity、实际容量size、键值对映射写入到ObjectOutputStream中。键值对映射序列化时是无序的。
* @serialData The <i>capacity</i> of the HashMap (the length of the
*             bucket array) is emitted (int), followed by the
*             <i>size</i> (an int, the number of key-value
*             mappings), followed by the key (Object) and value (Object)
*             for each key-value mapping. The key-value mappings are
*             emitted in no particular order.
private void writeObject(java.io.ObjectOutputStream s)
throws IOException {
int buckets = capacity();
// Write out the threshold, loadfactor, and any hidden stuff
s.defaultWriteObject();
//写入总容量
s.writeInt(buckets);
//写入实际容量
s.writeInt(size);
//写入键值对
internalWriteEntries(s);
* 从ObjectInputStream中读取并重建hashMap
* 将hashMap的总容量capacity、实际容量size、键值对映射读取出来
private void readObject(java.io.ObjectInputStream s)
throws IOException, ClassNotFoundException {
// 将hashMap的总容量capacity、实际容量size、键值对映射读取出来
s.defaultReadObject();
//重置hashMap
reinitialize();
//如果加载因子不合法,抛出异常
if (loadFactor &= 0 || Float.isNaN(loadFactor))
throw new InvalidObjectException("Illegal load factor: " +
loadFactor);
s.readInt();
//读出桶的数量,忽略
int mappings = s.readInt(); //读出实际容量size
//如果读出的实际容量size小于0,抛出异常
if (mappings & 0)
throw new InvalidObjectException("Illegal mappings count: " +
mappings);
else if (mappings & 0) { // (if zero, use defaults)
// Size the table using given load factor only if within
// range of 0.25...4.0
//调整hashMap大小
float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
// 加载因子
float fc = (float) mappings / lf + 1.0f;
//初步得到的总容量,后续还会处理
//处理初步得到的容量,确认最终的总容量
int cap = ((fc & DEFAULT_INITIAL_CAPACITY) ?
DEFAULT_INITIAL_CAPACITY :
(fc &= MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY :
tableSizeFor((int) fc));
//计算临界值,得到初步的临界值
float ft = (float) cap *
//得到最终的临界值
threshold = ((cap & MAXIMUM_CAPACITY && ft & MAXIMUM_CAPACITY) ?
(int) ft : Integer.MAX_VALUE);
// Check Map.Entry[].class since it's the nearest public type to
// what we're actually creating.
SharedSecrets.getJavaOISAccess().checkArray(s, Map.Entry[].class, cap);
//新建桶数组table
@SuppressWarnings({"rawtypes", "unchecked"})
Node&K, V&[] tab = (Node&K, V&[]) new Node[cap];
// 读出key和value,并组成键值对插入hashMap中
for (int i = 0; i & i++) {
@SuppressWarnings("unchecked")
K key = (K) s.readObject();
@SuppressWarnings("unchecked")
V value = (V) s.readObject();
putVal(hash(key), key, value, false, false);
/* ------------------------------------------------------------ */
// iterators
abstract class HashIterator {
Node&K, V&
// next entry to return
Node&K, V&
// current entry
int expectedModC
// for fast-fail
// current slot
HashIterator() {
expectedModCount = modC
Node&K, V&[] t =
current = next = null;
index = 0;
if (t != null && size & 0) { // advance to first entry
} while (index & t.length && (next = t[index++]) == null);
public final boolean hasNext() {
return next != null;
final Node&K, V& nextNode() {
Node&K, V&[]
Node&K, V& e =
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
if (e == null)
throw new NoSuchElementException();
if ((next = (current = e).next) == null && (t = table) != null) {
} while (index & t.length && (next = t[index++]) == null);
public final void remove() {
Node&K, V& p =
if (p == null)
throw new IllegalStateException();
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
current = null;
K key = p.
removeNode(hash(key), key, null, false, false);
expectedModCount = modC
final class KeyIterator extends HashIterator
implements Iterator&K& {
public final K next() {
return nextNode().
final class ValueIterator extends HashIterator
implements Iterator&V& {
public final V next() {
return nextNode().
final class EntryIterator extends HashIterator
implements Iterator&Map.Entry&K, V&& {
public final Map.Entry&K, V& next() {
return nextNode();
/* ------------------------------------------------------------ */
// spliterators
static class HashMapSpliterator&K, V& {
final HashMap&K, V&
Node&K, V&
//记录当前的节点
//当前节点的下标
//估计大小
int expectedModC
// for comodification checks
HashMapSpliterator(HashMap&K, V& m, int origin,
int fence, int est,
int expectedModCount) {
this.map =
this.index =
this.fence =
this.est =
this.expectedModCount = expectedModC
final int getFence() { // initialize fence and size on first use
if ((hi = fence) & 0) {
HashMap&K, V& m =
expectedModCount = m.modC
Node&K, V&[] tab = m.
hi = fence = (tab == null) ? 0 : tab.
public final long estimateSize() {
getFence(); // force init
return (long)
static final class KeySpliterator&K, V&
extends HashMapSpliterator&K, V&
implements Spliterator&K& {
KeySpliterator(HashMap&K, V& m, int origin, int fence, int est,
int expectedModCount) {
super(m, origin, fence, est, expectedModCount);
public KeySpliterator&K, V& trySplit() {
int hi = getFence(), lo = index, mid = (lo + hi) &&& 1;
return (lo &= mid || current != null) ? null :
new KeySpliterator&&(map, lo, index = mid, est &&&= 1,
expectedModCount);
public void forEachRemaining(Consumer&? super K& action) {
int i, hi,
if (action == null)
throw new NullPointerException();
HashMap&K, V& m =
Node&K, V&[] tab = m.
if ((hi = fence) & 0) {
mc = expectedModCount = m.modC
hi = fence = (tab == null) ? 0 : tab.
mc = expectedModC
if (tab != null && tab.length &= hi &&
(i = index) &= 0 && (i & (index = hi) || current != null)) {
Node&K, V& p =
current = null;
if (p == null)
p = tab[i++];
action.accept(p.key);
} while (p != null || i & hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
public boolean tryAdvance(Consumer&? super K& action) {
if (action == null)
throw new NullPointerException();
Node&K, V&[] tab = map.
if (tab != null && tab.length &= (hi = getFence()) && index &= 0) {
while (current != null || index & hi) {
if (current == null)
current = tab[index++];
K k = current.
current = current.
action.accept(k);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
return false;
public int characteristics() {
return (fence & 0 || est == map.size ? Spliterator.SIZED : 0) |
Spliterator.DISTINCT;
static final class ValueSpliterator&K, V&
extends HashMapSpliterator&K, V&
implements Spliterator&V& {
ValueSpliterator(HashMap&K, V& m, int origin, int fence, int est,
int expectedModCount) {
super(m, origin, fence, est, expectedModCount);
public ValueSpliterator&K, V& trySplit() {
int hi = getFence(), lo = index, mid = (lo + hi) &&& 1;
return (lo &= mid || current != null) ? null :
new ValueSpliterator&&(map, lo, index = mid, est &&&= 1,
expectedModCount);
public void forEachRemaining(Consumer&? super V& action) {
int i, hi,
if (action == null)
throw new NullPointerException();
HashMap&K, V& m =
Node&K, V&[] tab = m.
if ((hi = fence) & 0) {
mc = expectedModCount = m.modC
hi = fence = (tab == null) ? 0 : tab.
mc = expectedModC
if (tab != null && tab.length &= hi &&
(i = index) &= 0 && (i & (index = hi) || current != null)) {
Node&K, V& p =
current = null;
if (p == null)
p = tab[i++];
action.accept(p.value);
} while (p != null || i & hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
public boolean tryAdvance(Consumer&? super V& action) {
if (action == null)
throw new NullPointerException();
Node&K, V&[] tab = map.
if (tab != null && tab.length &= (hi = getFence()) && index &= 0) {
while (current != null || index & hi) {
if (current == null)
current = tab[index++];
V v = current.
current = current.
action.accept(v);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
return false;
public int characteristics() {
return (fence & 0 || est == map.size ? Spliterator.SIZED : 0);
static final class EntrySpliterator&K, V&
extends HashMapSpliterator&K, V&
implements Spliterator&Map.Entry&K, V&& {
EntrySpliterator(HashMap&K, V& m, int origin, int fence, int est,
int expectedModCount) {
super(m, origin, fence, est, expectedModCount);
public EntrySpliterator&K, V& trySplit() {
int hi = getFence(), lo = index, mid = (lo + hi) &&& 1;
return (lo &= mid || current != null) ? null :
new EntrySpliterator&&(map, lo, index = mid, est &&&= 1,
expectedModCount);
public void forEachRemaining(Consumer&? super Map.Entry&K, V&& action) {
int i, hi,
if (action == null)
throw new NullPointerException();
HashMap&K, V& m =
Node&K, V&[] tab = m.
if ((hi = fence) & 0) {
mc = expectedModCount = m.modC
hi = fence = (tab == null) ? 0 : tab.
mc = expectedModC
if (tab != null && tab.length &= hi &&
(i = index) &= 0 && (i & (index = hi) || current != null)) {
Node&K, V& p =
current = null;
if (p == null)
p = tab[i++];
action.accept(p);
} while (p != null || i & hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
public boolean tryAdvance(Consumer&? super Map.Entry&K, V&& action) {
if (action == null)
throw new NullPointerException();
Node&K, V&[] tab = map.
if (tab != null && tab.length &= (hi = getFence()) && index &= 0) {
while (current != null || index & hi) {
if (current == null)
current = tab[index++];
Node&K, V& e =
current = current.
action.accept(e);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
return false;
public int characteristics() {
return (fence & 0 || est == map.size ? Spliterator.SIZED : 0) |
Spliterator.DISTINCT;
/* ------------------------------------------------------------ */
// LinkedHashMap support
* The following package-protected methods are designed to be
* overridden by LinkedHashMap, but not by any other subclass.
* Nearly all other internal methods are also package-protected
* but are declared final, so can be used by LinkedHashMap, view
* classes, and HashSet.
// 创建一个链表结点
Node&K, V& newNode(int hash, K key, V value, Node&K, V& next) {
return new Node&&(hash, key, value, next);
// 替换一个链表节点
Node&K, V& replacementNode(Node&K, V& p, Node&K, V& next) {
return new Node&&(p.hash, p.key, p.value, next);
// 创建一个红黑树节点
TreeNode&K, V& newTreeNode(int hash, K key, V value, Node&K, V& next) {
return new TreeNode&&(hash, key, value, next);
// 替换一个红黑树节点
TreeNode&K, V& replacementTreeNode(Node&K, V& p, Node&K, V& next) {
return new TreeNode&&(p.hash, p.key, p.value, next);
* Reset to initial default state.
Called by clone and readObject.
void reinitialize() {
table = null;
entrySet = null;
keySet = null;
values = null;
modCount = 0;
threshold = 0;
// Callbacks to allow LinkedHashMap post-actions
void afterNodeAccess(Node&K, V& p) {
void afterNodeInsertion(boolean evict) {
void afterNodeRemoval(Node&K, V& p) {
// 写入hashMap键值对到ObjectOutputStream中
void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
Node&K, V&[]
if (size & 0 && (tab = table) != null) {
for (int i = 0; i & tab. ++i) {
for (Node&K, V& e = tab[i]; e != null; e = e.next) {
s.writeObject(e.key);
s.writeObject(e.value);
/* ------------------------------------------------------------ */
// Tree bins
* JDK1.8新增,用来支持桶的红黑树结构实现
* 性质1. 节点是红色或黑色。
* 性质2. 根是黑色。
* 性质3. 所有叶子都是黑色(叶子是NIL节点)。
* 性质4. 每个红色节点必须有两个黑色的子节点。(从每个叶子到根的所有路径上不能有两个连续的红色节点。)
* 性质5. 从任一节点到其每个叶子的所有简单路径都包含相同数目的黑色节点。
static final class TreeNode&K, V& extends LinkedHashMap.Entry&K, V& {
TreeNode&K, V&
//节点的父亲
TreeNode&K, V&
//节点的左孩子
TreeNode&K, V&
//节点的右孩子
TreeNode&K, V&
//节点的前一个节点
//true表示红节点,false表示黑节点
TreeNode(int hash, K key, V val, Node&K, V& next) {
super(hash, key, val, next);
* 获取红黑树的根
final TreeNode&K, V& root() {
for (TreeNode&K, V& r = this, ; ) {
if ((p = r.parent) == null)
* 确保root是桶中的第一个元素,即把root移动到桶内链表的最前面
static &K, V& void moveRootToFront(Node&K, V&[] tab, TreeNode&K, V& root) {
if (root != null && tab != null && (n = tab.length) & 0) {
int index = (n - 1) & root.
TreeNode&K, V& first = (TreeNode&K, V&) tab[index];
if (root != first) {
Node&K, V&
tab[index] =
TreeNode&K, V& rp = root.
if ((rn = root.next) != null)
((TreeNode&K, V&) rn).prev =
if (rp != null)
if (first != null)
first.prev =
root.next =
root.prev = null;
assert checkInvariants(root);
* 查找hash为h,key为k的节点
final TreeNode&K, V& find(int h, Object k, Class&?& kc) {
TreeNode&K, V& p = this;
TreeNode&K, V& pl = p.left, pr = p.right,
if ((ph = p.hash) & h)
else if (ph & h)
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
else if (pl == null)
else if (pr == null)
else if ((kc != null ||
(kc = comparableClassFor(k)) != null) &&
(dir = compareComparables(kc, k, pk)) != 0)
p = (dir & 0) ? pl :
else if ((q = pr.find(h, k, kc)) != null)
} while (p != null);
return null;
* 获取树节点,通过根节点查找
final TreeNode&K, V& getTreeNode(int h, Object k) {
return ((parent != null) ? root() : this).find(h, k, null);
* 比较2个对象的大小
static int tieBreakOrder(Object a, Object b) {
if (a == null || b == null ||
(d = a.getClass().getName().
compareTo(b.getClass().getName())) == 0)
d = (System.identityHashCode(a) &= System.identityHashCode(b) ?
* 将链表转为红黑树
* @return root of tree
final void treeify(Node&K, V&[] tab) {
TreeNode&K, V& root = null;
for (TreeNode&K, V& x = this, x != null; x = next) {
next = (TreeNode&K, V&) x.
x.left = x.right = null;
if (root == null) {
x.parent = null;
x.red = false;
int h = x.
Class&?& kc = null;
for (TreeNode&K, V& p = ; ) {
if ((ph = p.hash) & h)
else if (ph & h)
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0)
dir = tieBreakOrder(k, pk);
TreeNode&K, V& xp =
if ((p = (dir &= 0) ? p.left : p.right) == null) {
x.parent =
if (dir &= 0)
xp.right =
root = balanceInsertion(root, x);
moveRootToFront(tab, root);
* 将红黑树退化为链表
final Node&K, V& untreeify(HashMap&K, V& map) {
Node&K, V& hd = null, tl = null;
for (Node&K, V& q = this; q != null; q = q.next) {
Node&K, V& p = map.replacementNode(q, null);
if (tl == null)
* 添加一个键值对
final TreeNode&K, V& putTreeVal(HashMap&K, V& map, Node&K, V&[] tab,
int h, K k, V v) {
Class&?& kc = null;
boolean searched = false;
TreeNode&K, V& root = (parent != null) ? root() : this;
for (TreeNode&K, V& p = ; ) {
if ((ph = p.hash) & h)
else if (ph & h)
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0) {
if (!searched) {
TreeNode&K, V& q,
searched = true;
if (((ch = p.left) != null &&
(q = ch.find(h, k, kc)) != null) ||
((ch = p.right) != null &&
(q = ch.find(h, k, kc)) != null))
dir = tieBreakOrder(k, pk);
TreeNode&K, V& xp =
if ((p = (dir &= 0) ? p.left : p.right) == null) {
Node&K, V& xpn = xp.
TreeNode&K, V& x = map.newTreeNode(h, k, v, xpn);
if (dir &= 0)
xp.right =
x.parent = x.prev =
if (xpn != null)
((TreeNode&K, V&) xpn).prev =
moveRootToFront(tab, balanceInsertion(root, x));
return null;
* Removes the given node, that must be present before this call.
* This is messier than typical red-black deletion code because we
* cannot swap the contents of an interior node with a leaf
* successor that is pinned by "next" pointers that are accessible
* independently during traversal. So instead we swap the tree
* linkages. If the current tree appears to have too few nodes,
* the bin is converted back to a plain bin. (The test triggers
* somewhere between 2 and 6 nodes, depending on tree structure).
final void removeTreeNode(HashMap&K, V& map, Node&K, V&[] tab,
boolean movable) {
if (tab == null || (n = tab.length) == 0)
int index = (n - 1) &
TreeNode&K, V& first = (TreeNode&K, V&) tab[index], root = first,
TreeNode&K, V& succ = (TreeNode&K, V&) next, pred =
if (pred == null)
tab[index] = first =
pred.next =
if (succ != null)
succ.prev =
if (first == null)
if (root.parent != null)
root = root.root();
if (root == null || root.right == null ||
(rl = root.left) == null || rl.left == null) {
tab[index] = first.untreeify(map);
// too small
TreeNode&K, V& p = this, pl = left, pr = right,
if (pl != null && pr != null) {
TreeNode&K, V& s = pr,
while ((sl = s.left) != null) // find successor
boolean c = s.
s.red = p.
p.red = // swap colors
TreeNode&K, V& sr = s.
TreeNode&K, V& pp = p.
if (s == pr) { // p was s's direct parent
p.parent =
TreeNode&K, V& sp = s.
if ((p.parent = sp) != null) {
if (s == sp.left)
sp.right =
if ((s.right = pr) != null)
pr.parent =
p.left = null;
if ((p.right = sr) != null)
sr.parent =
if ((s.left = pl) != null)
pl.parent =
if ((s.parent = pp) == null)
else if (p == pp.left)
pp.right =
if (sr != null)
replacement =
replacement =
} else if (pl != null)
replacement =
else if (pr != null)
replacement =
replacement =
if (replacement != p) {
TreeNode&K, V& pp = replacement.parent = p.
if (pp == null)
else if (p == pp.left)
pp.right =
p.left = p.right = p.parent = null;
TreeNode&K, V& r = p.red ? root : balanceDeletion(root, replacement);
if (replacement == p) {
TreeNode&K, V& pp = p.
p.parent = null;
if (pp != null) {
if (p == pp.left)
pp.left = null;
else if (p == pp.right)
pp.right = null;
if (movable)
moveRootToFront(tab, r);
* 将结点太多的桶分割
* @param map
* @param tab
the table for recording bin heads
* @param index the index of the table being split
* @param bit
the bit of hash to split on
final void split(HashMap&K, V& map, Node&K, V&[] tab, int index, int bit) {
TreeNode&K, V& b = this;
// Relink into lo and hi lists, preserving order
TreeNode&K, V& loHead = null, loTail = null;
TreeNode&K, V& hiHead = null, hiTail = null;
int lc = 0, hc = 0;
for (TreeNode&K, V& e = b, e != null; e = next) {
next = (TreeNode&K, V&) e.
e.next = null;
if ((e.hash & bit) == 0) {
if ((e.prev = loTail) == null)
loTail.next =
if ((e.prev = hiTail) == null)
hiTail.next =
if (loHead != null) {
if (lc &= UNTREEIFY_THRESHOLD)
tab[index] = loHead.untreeify(map);
tab[index] = loH
if (hiHead != null) // (else is already treeified)
loHead.treeify(tab);
if (hiHead != null) {
if (hc &= UNTREEIFY_THRESHOLD)
tab[index + bit] = hiHead.untreeify(map);
tab[index + bit] = hiH
if (loHead != null)
hiHead.treeify(tab);
/* ------------------------------------------------------------ */
// 红黑树的旋转与平衡方法,都是基于CLR(《算法导论》)中的红黑树算法改写的
* @param root
* @param p
* @param &K&
* @param &V&
static &K, V& TreeNode&K, V& rotateLeft(TreeNode&K, V& root,
TreeNode&K, V& p) {
TreeNode&K, V& r, pp,
if (p != null && (r = p.right) != null) {
if ((rl = p.right = r.left) != null)
rl.parent =
if ((pp = r.parent = p.parent) == null)
(root = r).red = false;
else if (pp.left == p)
pp.right =
p.parent =
* @param root
* @param p
* @param &K&
* @param &V&
static &K, V& TreeNode&K, V& rotateRight(TreeNode&K, V& root,
TreeNode&K, V& p) {
TreeNode&K, V& l, pp,
if (p != null && (l = p.left) != null) {
if ((lr = p.left = l.right) != null)
lr.parent =
if ((pp = l.parent = p.parent) == null)
(root = l).red = false;
else if (pp.right == p)
pp.right =
p.parent =
* 保证插入后平衡
* @param root
* @param x
* @param &K&
* @param &V&
static &K, V& TreeNode&K, V& balanceInsertion(TreeNode&K, V& root,
TreeNode&K, V& x) {
x.red = true;
for (TreeNode&K, V& xp, xpp, xppl, ; ) {
if ((xp = x.parent) == null) {
x.red = false;
} else if (!xp.red || (xpp = xp.parent) == null)
if (xp == (xppl = xpp.left)) {
if ((xppr = xpp.right) != null && xppr.red) {
xppr.red = false;
xp.red = false;
xpp.red = true;
if (x == xp.right) {
root = rotateLeft(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateRight(root, xpp);
if (xppl != null && xppl.red) {
xppl.red = false;
xp.red = false;
xpp.red = true;
if (x == xp.left) {
root = rotateRight(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateLeft(root, xpp);
* 删除后调整平衡
* @param root
* @param x
* @param &K&
* @param &V&
static &K, V& TreeNode&K, V& balanceDeletion(TreeNode&K, V& root,
TreeNode&K, V& x) {
for (TreeNode&K, V& xp, xpl, ; ) {
if (x == null || x == root)
else if ((xp = x.parent) == null) {
x.red = false;
} else if (x.red) {
x.red = false;
} else if ((xpl = xp.left) == x) {
if ((xpr = xp.right) != null && xpr.red) {
xpr.red = false;
xp.red = true;
root = rotateLeft(root, xp);
xpr = (xp = x.parent) == null ? null : xp.
if (xpr == null)
TreeNode&K, V& sl = xpr.left, sr = xpr.
if ((sr == null || !sr.red) &&
(sl == null || !sl.red)) {
xpr.red = true;
if (sr == null || !sr.red) {
if (sl != null)
sl.red = false;
xpr.red = true;
root = rotateRight(root, xpr);
xpr = (xp = x.parent) == null ?
null : xp.
if (xpr != null) {
xpr.red = (xp == null) ? false : xp.
if ((sr = xpr.right) != null)
sr.red = false;
if (xp != null) {
xp.red = false;
root = rotateLeft(root, xp);
} else { // symmetric
if (xpl != null && xpl.red) {
xpl.red = false;
xp.red = true;
root = rotateRight(root, xp);
xpl = (xp = x.parent) == null ? null : xp.
if (xpl == null)
TreeNode&K, V& sl = xpl.left, sr = xpl.
if ((sl == null || !sl.red) &&
(sr == null || !sr.red)) {
xpl.red = true;
if (sl == null || !sl.red) {
if (sr != null)
sr.red = false;
xpl.red = true;
root = rotateLeft(root, xpl);
xpl = (xp = x.parent) == null ?
null : xp.
if (xpl != null) {
xpl.red = (xp == null) ? false : xp.
if ((sl = xpl.left) != null)
sl.red = false;
if (xp != null) {
xp.red = false;
root = rotateRight(root, xp);
* 检测是否符合红黑树
static &K, V& boolean checkInvariants(TreeNode&K, V& t) {
TreeNode&K, V& tp = t.parent, tl = t.left, tr = t.right,
tb = t.prev, tn = (TreeNode&K, V&) t.
if (tb != null && tb.next != t)
return false;
if (tn != null && tn.prev != t)
return false;
if (tp != null && t != tp.left && t != tp.right)
return false;
if (tl != null && (tl.parent != t || tl.hash & t.hash))
return false;
if (tr != null && (tr.parent != t || tr.hash & t.hash))
return false;
if (t.red && tl != null && tl.red && tr != null && tr.red)
return false;
if (tl != null && !checkInvariants(tl))
return false;
if (tr != null && !checkInvariants(tr))
return false;
return true;