From OSDev Wiki
ELF (Executable and Linkable Format) was designed by Unix System Laboratories while working with Sun Microsystems on SVR4 (UNIX System V Release 4.0). Consequently, ELF first appeared in Solaris 2.0 (aka SunOS 5.0), which is based on SVR4. The format is specified in the System V ABI.
A very versatile file format, it was later picked up by many other operating systems for use both as executable files and as shared library files. It distinguishes between TEXT, DATA and BSS.
Today, ELF is considered the standard format on Unix-like systems. While it has some drawbacks (e.g., position-independent code uses up one of the scarce general-purpose registers of IA-32), it is well supported and documented.
ELF is a format for storing programs or fragments of programs on disk, created as a result of compiling and linking. An ELF file is divided into sections. For an executable program, these are the text section for the code, the data section for global variables and the rodata section that usually contains constant strings. The ELF file contains headers that describe how these sections should be stored in memory.
Note that depending on whether your file is a linkable or an executable file, the headers in the ELF file won't be the same:
process.o, the result of gcc -c process.c $SOME_FLAGS

process.o:     file format elf32-i386
architecture: i386, flags 0x:
HAS_RELOC, HAS_SYMS
start address 0x

Sections:
  .text         CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
  .data         CONTENTS, ALLOC, LOAD, DATA
                CONTENTS, READONLY
  .stab         CONTENTS, RELOC, READONLY, DEBUGGING
  .stabstr      CONTENTS, READONLY, DEBUGGING
  .rodata       CONTENTS, ALLOC, LOAD, READONLY, DATA
  .comment      CONTENTS, READONLY
The 'flags' tell you what's actually available in the ELF file. Here we have symbol tables and relocations: all we need to link this file against another, but virtually no information about how to load the file in memory (even if that could be guessed). We don't have the program entry point, for instance, and we have a section table rather than a program header.
.text
where the code stands, as said above. objdump -drS process.o will show it to you.
.data
where global tables, variables, etc. stand. objdump -s -j .data process.o will hexdump it.
.bss
don't look for bits of .bss in your file: there are none. That's where your uninitialized arrays and variables go, and the loader 'knows' they should be filled with zeroes... there's no point storing more zeroes on your disk than there already are, is there?
.rodata
that's where your strings go, usually the things you forgot when linking and that cause your kernel not to work. objdump -s -j .rodata process.o will hexdump it. Note that depending on the compiler, you may have more sections like this.
.comment & .note
just comments put there by the compiler/linker toolchain
.stab & .stabstr
debugging symbols & similar information.
/bin/bash, a real executable file
/bin/bash:     file format elf32-i386
architecture: i386, flags 0x:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x08056c40

Program Header:
    PHDR off    0x vaddr 0x paddr 0x align 2**2
         filesz 0x memsz 0x flags r-x
The program header itself... taking 224 bytes, and starting at offset 0x34 in the file
INTERP off
0x vaddr 0x paddr 0x align 2**0
filesz 0x memsz 0x flags r--
The program that should be used to 'execute' the binary. Here it reads '/lib/ld-linux.so.2', which means some dynamic library linking will be required before we run the program.
    LOAD off    0x vaddr 0x paddr 0x align 2**12
         filesz 0x0007411c memsz 0x0007411c flags r-x
Now we're requested to read 0x7411c bytes, starting at the file's start (?), which is virtually the whole file; they will be read-only but executable. They are to appear starting at virtual address 0x for the program to work properly.
    LOAD off    0x vaddr 0x080bd120 paddr 0x080bd120 align 2**12
         filesz 0x000022ac memsz 0x flags rw-
More bits to load, likely the .data section. Notice that filesz and memsz differ: the .bss section is actually allocated through this statement but left as zeroes, while the 'real' data occupy only the first 0x22ac bytes starting at virtual address 0x080bd120.
DYNAMIC off
0x00075f4c vaddr 0x080bef4c paddr 0x080bef4c align 2**2
filesz 0x memsz 0x flags rw-
The dynamic sections are used to store information used in the dynamic linking process, such as required libraries and relocation entries.
    NOTE off    0x vaddr 0x paddr 0x align 2**2
         filesz 0x memsz 0x flags r--
NOTE segments contain information left by either the programmer or the linker; for most programs linked using the GNU ld linker it just says 'GNU'.
EH_FRAME off
0x vaddr 0x080bc0f0 paddr 0x080bc0f0 align 2**2
filesz 0x0000002c memsz 0x0000002c flags r--
That's exception-handling frame information, in case we link against some C++ binaries at execution time (AFAIK).
/bin/bash, loaded (as in /proc/xxxx/maps)
bd000 r-xp :06 30574
080bd000-080c0000 rw-p :06 30574
080c0 rwxp :00 0
14000 r-xp :06 27304    /lib/ld-2.3.2.so
15000 rw-p :06 27304    /lib/ld-2.3.2.so
We can recognize our 'code bits' and 'data bits'. By stating that the second mapping should be loaded at 0x080bd120 and that it starts in the file at a matching page offset, we actually preserved the page-to-disk block mapping (e.g., if page 0x080bc000 is missing, just fetch the file blocks from 0x75000). That means, however, that a part of the file is mapped twice, with different permissions. I suggest you give them different physical pages too if you don't want to end up with modifiable code.
Executable image and ELF binary mapped onto each other
The ELF file format is described in the ELF Specification. The most relevant sections for this project are 1.1 to 1.4 and 2.1 to 2.7.
The steps involved in identifying the sections of the ELF file are:
Read the ELF Header. The ELF header will always be at the very beginning of an ELF file. The ELF header contains information about how the rest of the file is laid out. You are interested only in the program headers.
Find the Program Headers, which specify where in the file to find the text and data sections and where they should end up in the executable image.
There are a few simplifying assumptions you can make about the types and location of program headers. In the files you will be working with, there will always be one text header and one data header. The text header will be the first program header and the data header will be the second program header. This is not generally true of ELF files, but it will be true of the programs you will be responsible for.
The file geekos/include/geekos/elf.h provides data types for structures which match the format of the ELF and program headers.
This is a rough guideline for what Parse_ELF_Executable() has to do:
Check that exeFileData is non-null and exeFileLength is large enough to accommodate the ELF headers and phnum program headers.
Check that the file starts with the ELF magic number (4 bytes) as described in figure 1-4 (and subsequent table) on page 11 in the ELF specification.
Check that the ELF file has no more than EXE_MAX_SEGMENTS program headers (phnum field of the elfHeader).
Fill in numSegments and entryAddr fields of the exeFormat output variable.
For each program header k in turn, fill in the corresponding segmentList[k] array element of exeFormat with offsetInFile, lengthInFile, startAddress, sizeInMemory, protFlags with information from that program header k. See figure 2-1 on page 33 in the ELF specification.
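GeekOS itself is written in C, but the checks above are language-neutral; here is a rough sketch of the same flow in Java for a 32-bit little-endian image, using the ELF header field offsets tabulated later in this article. All names here (ElfParser, Segment, ExeFormat) and the EXE_MAX_SEGMENTS value of 16 are illustrative assumptions, not GeekOS's actual definitions.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the Parse_ELF_Executable() steps for a 32-bit,
// little-endian ELF image held in memory.
class ElfParser {
    static final int EXE_MAX_SEGMENTS = 16;  // assumed limit, for illustration

    public static class Segment {
        public long offsetInFile, lengthInFile, startAddress, sizeInMemory;
        public int protFlags;
    }

    public static class ExeFormat {
        public long entryAddr;
        public List<Segment> segmentList = new ArrayList<>();
    }

    public static ExeFormat parse(byte[] exeFileData) {
        // 1. Non-null and large enough for the 52-byte ELF32 header.
        if (exeFileData == null || exeFileData.length < 52)
            throw new IllegalArgumentException("file too small");
        ByteBuffer b = ByteBuffer.wrap(exeFileData).order(ByteOrder.LITTLE_ENDIAN);
        // 2. Magic number: 0x7F, then 'E', 'L', 'F'.
        if (b.get(0) != 0x7F || b.get(1) != 'E' || b.get(2) != 'L' || b.get(3) != 'F')
            throw new IllegalArgumentException("not an ELF file");
        int phoff     = b.getInt(28);             // program header table position
        int phentsize = b.getShort(42) & 0xFFFF;  // size of one program header
        int phnum     = b.getShort(44) & 0xFFFF;  // number of program headers
        // 3. Bounded number of program headers, and the table must fit in the file.
        if (phnum > EXE_MAX_SEGMENTS || phoff + (long) phnum * phentsize > exeFileData.length)
            throw new IllegalArgumentException("bad program header table");
        // 4. Fill in the entry address and one segment per program header.
        ExeFormat exe = new ExeFormat();
        exe.entryAddr = b.getInt(24) & 0xFFFFFFFFL;
        for (int k = 0; k < phnum; k++) {
            int p = phoff + k * phentsize;
            Segment s = new Segment();
            s.offsetInFile = b.getInt(p + 4)  & 0xFFFFFFFFL;  // p_offset
            s.startAddress = b.getInt(p + 8)  & 0xFFFFFFFFL;  // p_vaddr
            s.lengthInFile = b.getInt(p + 16) & 0xFFFFFFFFL;  // p_filesz
            s.sizeInMemory = b.getInt(p + 20) & 0xFFFFFFFFL;  // p_memsz
            s.protFlags    = b.getInt(p + 24);                // p_flags
            exe.segmentList.add(s);
        }
        return exe;
    }
}
```

The same offsets, applied with `getLong` at the 64-bit positions, would handle ELF64 files.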
Relocation comes in handy when you need to load, for example, modules or drivers. You can use the "-r" option to ld to link multiple object files into one big object file, which means easier coding and faster testing.
The basic outline of things you need to do for relocation:
Check the object file header (it has to be ELF, not PE, for example)
Get a load address (e.g., all drivers start at 0xA0000000; you need some method of keeping track of driver locations)
Allocate enough space for all program sections (SHT_PROGBITS)
Copy from the image in RAM to the allocated space
Go through all sections resolving external references against the kernel symbol table
If all succeeded, you can use the "e_entry" field of the header as the offset from the load address to call the entry point (if one was specified), or do a symbol lookup, or just return a success code.
Once you can relocate ELF objects you'll be able to have drivers loaded when needed instead of at startup - which is always a Good Thing (tm).
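The "resolve external references against the kernel symbol table" step above can be illustrated with a toy sketch. All names here are invented for the example, and real ELF relocation distinguishes relocation types (e.g. R_386_32 vs. R_386_PC32); this sketch handles only absolute 32-bit fixups.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.List;
import java.util.Map;

// Toy symbol resolution: given a loaded image, a list of unresolved
// reference sites, and a kernel symbol table, patch each site with the
// symbol's absolute address.
class Relocator {
    // One unresolved reference: where in the image to patch, and which symbol.
    public static class Reloc {
        final int offset;
        final String symbol;
        public Reloc(int offset, String symbol) {
            this.offset = offset;
            this.symbol = symbol;
        }
    }

    public static void resolve(byte[] image, List<Reloc> relocs,
                               Map<String, Integer> kernelSymbols) {
        ByteBuffer b = ByteBuffer.wrap(image).order(ByteOrder.LITTLE_ENDIAN);
        for (Reloc r : relocs) {
            Integer addr = kernelSymbols.get(r.symbol);
            if (addr == null)  // an unresolved symbol makes loading fail
                throw new IllegalStateException("unresolved symbol: " + r.symbol);
            // R_386_32-style fixup: symbol address plus the addend stored in place.
            b.putInt(r.offset, b.getInt(r.offset) + addr);
        }
    }
}
```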
The header is found at the start of the ELF file.
Position (32 bit)   Position (64 bit)   Meaning
0-3                 0-3                 Magic number - 0x7F, then 'ELF' in ASCII
4                   4                   1 = 32 bit, 2 = 64 bit
5                   5                   1 = little endian, 2 = big endian
6                   6                   ELF header version
7                   7                   OS ABI - usually 0 for System V
8-15                8-15                Unused/padding
16-17               16-17               1 = relocatable, 2 = executable, 3 = shared, 4 = core
18-19               18-19               Instruction set - see table below
20-23               20-23               ELF version
24-27               24-31               Program entry position
28-31               32-39               Program header table position
32-35               40-47               Section header table position
36-39               48-51               Flags - architecture dependent; see note below
40-41               52-53               Header size
42-43               54-55               Size of an entry in the program header table
44-45               56-57               Number of entries in the program header table
46-47               58-59               Size of an entry in the section header table
48-49               60-61               Number of entries in the section header table
50-51               62-63               Index in section header table with the section names
The flags entry can probably be ignored for x86 ELFs, as no flags are actually defined.
Instruction Set Architectures:

Value   Architecture
0x00    No specific instruction set
0x02    SPARC
0x03    x86
0x08    MIPS
0x14    PowerPC
0x28    ARM
0x2A    SuperH
0x32    IA-64
0x3E    x86-64
0xB7    AArch64

The most common architectures are x86 (0x03), x86-64 (0x3E), ARM (0x28) and AArch64 (0xB7).
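As a quick check of the identification fields above, a minimal decoder might look like this. It is a sketch only: the class and method names are made up, and only the e_ident bytes plus the type field at offset 16 are examined.

```java
// Decode the ELF identification bytes per the header table above.
class ElfIdent {
    public static String describe(byte[] h) {
        // Magic number: 0x7F 'E' 'L' 'F' at offsets 0-3.
        if (h.length < 20 || h[0] != 0x7F || h[1] != 'E' || h[2] != 'L' || h[3] != 'F')
            return "not ELF";
        String bits   = h[4] == 1 ? "32-bit" : h[4] == 2 ? "64-bit" : "unknown-class";
        String endian = h[5] == 1 ? "little-endian" : h[5] == 2 ? "big-endian" : "unknown-endian";
        // The type field lives at offset 16 (2 bytes, stored in the file's
        // own byte order; for a little-endian file the low byte comes first).
        int type = h[5] == 1 ? (h[16] & 0xFF) | ((h[17] & 0xFF) << 8)
                             : (h[17] & 0xFF) | ((h[16] & 0xFF) << 8);
        String[] kinds = { "none", "relocatable", "executable", "shared", "core" };
        String kind = (type >= 0 && type <= 4) ? kinds[type] : "other";
        return bits + " " + endian + " " + kind;
    }
}
```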
This is an array of N (given in the main header) entries in the following format. Make sure to use the correct version depending on whether the file is 32 bit or 64 bit as the tables are quite different.
32 bit version:

Position   Meaning
0-3        Type of segment (see below)
4-7        The offset in the file where the data for this segment can be found (p_offset)
8-11       Where you should start to put this segment in virtual memory (p_vaddr)
12-15      Undefined for the System V ABI (p_paddr)
16-19      Size of the segment in the file (p_filesz)
20-23      Size of the segment in memory (p_memsz)
24-27      Flags (see below)
28-31      The required alignment for this segment (must be a power of 2) (p_align)
64 bit version:

Position   Meaning
0-3        Type of segment (see below)
4-7        Flags (see below)
8-15       The offset in the file where the data for this segment can be found (p_offset)
16-23      Where you should start to put this segment in virtual memory (p_vaddr)
24-31      Undefined for the System V ABI (p_paddr)
32-39      Size of the segment in the file (p_filesz)
40-47      Size of the segment in memory (p_memsz)
48-55      The required alignment for this segment (must be a power of 2) (p_align)
Segment types:
0 = null - ignore the entry;
1 = load - clear p_memsz bytes at p_vaddr to 0, then copy p_filesz bytes from p_offset to p_vaddr;
2 = dynamic - requires dynamic linking;
3 = interp - contains a file path to an executable to use as an interpreter for the following segment;
4 = note section.
There are more values, but they mostly carry architecture/environment-specific information, which is probably not required for the majority of ELF files.
Flags: 1 = executable, 2 = writable, 4 = readable.
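The "load" rule for segment type 1 (zero p_memsz bytes at p_vaddr, then copy p_filesz bytes from p_offset) can be sketched with a plain byte array standing in for the virtual address space. The names and the base-subtraction convention are assumptions of this sketch, not part of the ELF specification.

```java
import java.util.Arrays;

// Sketch of the PT_LOAD rule: a byte[] stands in for memory, and `base`
// is subtracted from p_vaddr so the toy array can stay small.
class SegmentLoader {
    public static void load(byte[] memory, long base, byte[] file,
                            long pOffset, long pVaddr, long pFilesz, long pMemsz) {
        int dst = (int) (pVaddr - base);
        // First clear p_memsz bytes at p_vaddr...
        Arrays.fill(memory, dst, dst + (int) pMemsz, (byte) 0);
        // ...then copy p_filesz bytes from p_offset. The tail between
        // p_filesz and p_memsz stays zero: that is where .bss lives.
        System.arraycopy(file, (int) pOffset, memory, dst, (int) pFilesz);
    }
}
```

Note how p_memsz > p_filesz is exactly the case seen earlier in the /bin/bash program header, where the .bss area is allocated but not stored in the file.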
The logic that will allow an ELF program to run (which is quite simple once you have a scheduler) is this:
*IRQ fires* -> Scheduler -> ELF program on queue -> run the ELF program until exit() is called (usually in crt0) -> take the process off the queue
Dynamic linking is when the OS gives a program the shared libraries it needs at run time: the libraries are found on the system and bound to the program while it is running, versus static linking, which links the libraries into the program before it is run. The main advantages are that programs take up less memory and are smaller in file size; the main disadvantage is that the program becomes less portable, because it depends on many separate shared libraries.
In order to implement this, you need to have proper scheduling in place, a library, and a program to use that library.
You can create a library with GCC:
myos-gcc -c -fPIC -o oneobject.o oneobject.c
myos-gcc -c -fPIC -o anotherobject.o anotherobject.c
myos-gcc -shared -fPIC -Wl,-soname,nameofmylib oneobject.o anotherobject.o -o mylib.so
This library should be treated as a file that is loaded when the OS detects an attempt to use it. You will need to implement this "dynamic linker" somewhere in your code, for instance in your memory-management or task-management subsystem. When the ELF program is run, the system should attach the shared object's data to a malloc()'d region of memory and redirect the program's calls into the library to that region. Once the program is finished, the region can be given back to the OS with a call to free().
That should be a good starting point to writing a dynamic linker.
Detailed and up-to-date ELF information (including SPARC in depth) by Oracle.
See the (generic or platform-specific) 'Core' specifications for additional ELF information, which contain detailed ELF references.
ELF 64-Bit, General extension to ELF32.
Documented x86-64 specific extensions with ELF64.
Detailed guide on how to create ELF binaries from scratch.
Java Collections Source Analysis (7): Hashtable
Introduction: like HashMap, Hashtable is implemented on top of a hash table; each element is a key-value pair, collisions are resolved internally with singly linked lists, and the table grows automatically when the capacity is exceeded (past the threshold). Hashtable is a class introduced in JDK 1.0; it is thread-safe and can be used in multithreaded environments. It also implements the Serializable interface, so it supports serialization, and the Cloneable interface, so it can be cloned.

Hashtable source analysis: much of Hashtable's implementation is similar to HashMap's. The source follows (with fairly detailed comments added):

package java.util;
import java.io.*;
public class Hashtable<K,V>
    extends Dictionary<K,V>
    implements Map<K,V>, Cloneable, java.io.Serializable {

    // The array that holds the key-value pairs.
    // Hashtable resolves collisions with singly linked lists: each Entry
    // is, in essence, a node of a singly linked list.
    private transient Entry[] table;

    // The number of key-value pairs in the Hashtable.
    private transient int count;

    // Threshold used to decide whether the Hashtable must be resized
    // (threshold = capacity * load factor).
    private int threshold;

    // The load factor.
    private float loadFactor;

    // The number of times the Hashtable has been structurally modified;
    // used to implement the fail-fast mechanism.
    private transient int modCount = 0;

    // Serial version number.
    private static final long serialVersionUID = 1421746759512286392L;
    // Constructor that specifies the initial capacity and the load factor.
    public Hashtable(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal Capacity: "+
                                               initialCapacity);
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal Load: "+loadFactor);

        if (initialCapacity==0)
            initialCapacity = 1;
        this.loadFactor = loadFactor;
        table = new Entry[initialCapacity];
        threshold = (int)(initialCapacity * loadFactor);
    }

    // Constructor that specifies the initial capacity.
    public Hashtable(int initialCapacity) {
        this(initialCapacity, 0.75f);
    }

    // Default constructor: capacity 11, load factor 0.75.
    public Hashtable() {
        this(11, 0.75f);
    }

    // Constructor that copies another Map.
    public Hashtable(Map<? extends K, ? extends V> t) {
        this(Math.max(2*t.size(), 11), 0.75f);
        // Add every element of the given map to this Hashtable.
        putAll(t);
    }
    public synchronized int size() {
        return count;
    }

    public synchronized boolean isEmpty() {
        return count == 0;
    }

    // Returns an enumeration of all keys.
    public synchronized Enumeration<K> keys() {
        return this.<K>getEnumeration(KEYS);
    }

    // Returns an enumeration of all values.
    public synchronized Enumeration<V> elements() {
        return this.<V>getEnumeration(VALUES);
    }

    // Tests whether the Hashtable contains the given value.
    public synchronized boolean contains(Object value) {
        // Note: values in a Hashtable must not be null;
        // a null value makes this method throw.
        if (value == null) {
            throw new NullPointerException();
        }

        // Walk the table array from back to front and, for each Entry
        // (a singly linked list), compare every node's value with the
        // argument.
        Entry tab[] = table;
        for (int i = tab.length ; i-- > 0 ;) {
            for (Entry<K,V> e = tab[i] ; e != null ; e = e.next) {
                if (e.value.equals(value)) {
                    return true;
                }
            }
        }
        return false;
    }

    public boolean containsValue(Object value) {
        return contains(value);
    }

    // Tests whether the Hashtable contains the given key.
    public synchronized boolean containsKey(Object key) {
        Entry tab[] = table;
        // The hash is simply the key's hashCode().
        int hash = key.hashCode();
        // Compute the index into the array.
        int index = (hash & 0x7FFFFFFF) % tab.length;
        // Find the Entry list for the key, then look for a node whose
        // hash and key both match.
        for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
            if ((e.hash == hash) && e.key.equals(key)) {
                return true;
            }
        }
        return false;
    }

    // Returns the value mapped to the key, or null if there is none.
    public synchronized V get(Object key) {
        Entry tab[] = table;
        int hash = key.hashCode();
        // Compute the index.
        int index = (hash & 0x7FFFFFFF) % tab.length;
        // Find the Entry list for the key, then look for a node whose
        // hash and key both match.
        for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
            if ((e.hash == hash) && e.key.equals(key)) {
                return e.value;
            }
        }
        return null;
    }
    // Resizes the Hashtable to twice the old capacity plus one.
    protected void rehash() {
        int oldCapacity = table.length;
        Entry[] oldMap = table;

        // Create the new, larger Entry array.
        int newCapacity = oldCapacity * 2 + 1;
        Entry[] newMap = new Entry[newCapacity];

        modCount++;
        threshold = (int)(newCapacity * loadFactor);
        table = newMap;

        // Transfer the elements of the old table into the new one.
        for (int i = oldCapacity ; i-- > 0 ;) {
            for (Entry<K,V> old = oldMap[i] ; old != null ; ) {
                Entry<K,V> e = old;
                old = old.next;

                // Recompute the index.
                int index = (e.hash & 0x7FFFFFFF) % newCapacity;
                e.next = newMap[index];
                newMap[index] = e;
            }
        }
    }
    // Maps the given key to the given value in this Hashtable.
    public synchronized V put(K key, V value) {
        // A null value may not be inserted into a Hashtable!
        if (value == null) {
            throw new NullPointerException();
        }

        // If the Hashtable already contains a pair with this key,
        // replace the old value with the new one.
        Entry tab[] = table;
        int hash = key.hashCode();
        int index = (hash & 0x7FFFFFFF) % tab.length;
        for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
            if ((e.hash == hash) && e.key.equals(key)) {
                V old = e.value;
                e.value = value;
                return old;
            }
        }

        // No pair with this key exists yet: bump the modification count,
        modCount++;
        // and if the element count has reached the threshold
        // (threshold = capacity * load factor), resize the Hashtable.
        if (count >= threshold) {
            rehash();

            tab = table;
            index = (hash & 0x7FFFFFFF) % tab.length;
        }

        // Insert the new key-value pair at tab[index], i.e. at the head
        // of the list.
        Entry<K,V> e = tab[index];
        tab[index] = new Entry<K,V>(hash, key, value, e);
        count++;
        return null;
    }
    // Removes the mapping for the given key from this Hashtable.
    public synchronized V remove(Object key) {
        Entry tab[] = table;
        int hash = key.hashCode();
        int index = (hash & 0x7FFFFFFF) % tab.length;
        // Find the node to delete in the tab[index] list and unlink it.
        // Because the list is singly linked, we must keep a reference to
        // the node before the one being deleted.
        for (Entry<K,V> e = tab[index], prev = null ; e != null ;
             prev = e, e = e.next) {
            if ((e.hash == hash) && e.key.equals(key)) {
                modCount++;
                if (prev != null) {
                    prev.next = e.next;
                } else {
                    tab[index] = e.next;
                }
                count--;
                V oldValue = e.value;
                e.value = null;
                return oldValue;
            }
        }
        return null;
    }

    // Adds every element of the map t to this Hashtable.
    public synchronized void putAll(Map<? extends K, ? extends V> t) {
        for (Map.Entry<? extends K, ? extends V> e : t.entrySet())
            put(e.getKey(), e.getValue());
    }

    // Empties the Hashtable by setting every slot of the table array to null.
    public synchronized void clear() {
        Entry tab[] = table;
        modCount++;
        for (int index = tab.length; --index >= 0; )
            tab[index] = null;
        count = 0;
    }
    // Clones the Hashtable and returns the copy as an Object.
    public synchronized Object clone() {
        try {
            Hashtable<K,V> t = (Hashtable<K,V>) super.clone();
            t.table = new Entry[table.length];
            for (int i = table.length ; i-- > 0 ; ) {
                t.table[i] = (table[i] != null)
                    ? (Entry<K,V>) table[i].clone() : null;
            }
            t.keySet = null;
            t.entrySet = null;
            t.values = null;
            t.modCount = 0;
            return t;
        } catch (CloneNotSupportedException e) {
            // This shouldn't happen, since we are Cloneable.
            throw new InternalError();
        }
    }

    public synchronized String toString() {
        int max = size() - 1;
        if (max == -1)
            return "{}";

        StringBuilder sb = new StringBuilder();
        Iterator<Map.Entry<K,V>> it = entrySet().iterator();

        sb.append('{');
        for (int i = 0; ; i++) {
            Map.Entry<K,V> e = it.next();
            K key = e.getKey();
            V value = e.getValue();
            sb.append(key   == this ? "(this Map)" : key.toString());
            sb.append('=');
            sb.append(value == this ? "(this Map)" : value.toString());

            if (i == max)
                return sb.append('}').toString();
            sb.append(", ");
        }
    }
    // Returns an enumeration over this Hashtable.
    // If the Hashtable is empty, an "empty enumeration" object is
    // returned; otherwise a normal Enumerator is returned.
    private <T> Enumeration<T> getEnumeration(int type) {
        if (count == 0) {
            return (Enumeration<T>)emptyEnumerator;
        } else {
            return new Enumerator<T>(type, false);
        }
    }

    // Returns an iterator over this Hashtable.
    // If the Hashtable is empty, an "empty iterator" object is returned;
    // otherwise a normal Enumerator is returned (Enumerator implements
    // both the Iterator and the Enumeration interface).
    private <T> Iterator<T> getIterator(int type) {
        if (count == 0) {
            return (Iterator<T>) emptyIterator;
        } else {
            return new Enumerator<T>(type, true);
        }
    }

    // The set of the Hashtable's keys: a Set, so no duplicate elements.
    private transient volatile Set<K> keySet = null;
    // The set of the Hashtable's key-value pairs: a Set, so no duplicates.
    private transient volatile Set<Map.Entry<K,V>> entrySet = null;
    // The Hashtable's values: a Collection, so duplicates are allowed.
    private transient volatile Collection<V> values = null;
    // Returns the KeySet object wrapped by synchronizedSet; the wrapper
    // adds synchronized to every KeySet method for thread safety.
    public Set<K> keySet() {
        if (keySet == null)
            keySet = Collections.synchronizedSet(new KeySet(), this);
        return keySet;
    }

    // The Set view of the Hashtable's keys.
    // KeySet extends AbstractSet, so it contains no duplicate elements.
    private class KeySet extends AbstractSet<K> {
        public Iterator<K> iterator() {
            return getIterator(KEYS);
        }
        public int size() {
            return count;
        }
        public boolean contains(Object o) {
            return containsKey(o);
        }
        public boolean remove(Object o) {
            return Hashtable.this.remove(o) != null;
        }
        public void clear() {
            Hashtable.this.clear();
        }
    }
    // Returns the EntrySet object wrapped by synchronizedSet; the wrapper
    // adds synchronized to every EntrySet method for thread safety.
    public Set<Map.Entry<K,V>> entrySet() {
        if (entrySet==null)
            entrySet = Collections.synchronizedSet(new EntrySet(), this);
        return entrySet;
    }

    // The Set view of the Hashtable's entries.
    // EntrySet extends AbstractSet, so it contains no duplicate elements.
    private class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public Iterator<Map.Entry<K,V>> iterator() {
            return getIterator(ENTRIES);
        }

        public boolean add(Map.Entry<K,V> o) {
            return super.add(o);
        }

        // Tests whether the EntrySet contains the object o:
        // first find the Entry list for o's key in the table,
        // then search that list for o itself.
        public boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry entry = (Map.Entry)o;
            Object key = entry.getKey();
            Entry[] tab = table;
            int hash = key.hashCode();
            int index = (hash & 0x7FFFFFFF) % tab.length;

            for (Entry e = tab[index]; e != null; e = e.next)
                if (e.hash==hash && e.equals(entry))
                    return true;
            return false;
        }

        // Removes the element o:
        // first find the Entry list for o's key in the table,
        // then unlink o from that list.
        public boolean remove(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
            K key = entry.getKey();
            Entry[] tab = table;
            int hash = key.hashCode();
            int index = (hash & 0x7FFFFFFF) % tab.length;

            for (Entry<K,V> e = tab[index], prev = null; e != null;
                 prev = e, e = e.next) {
                if (e.hash==hash && e.equals(entry)) {
                    modCount++;
                    if (prev != null)
                        prev.next = e.next;
                    else
                        tab[index] = e.next;

                    count--;
                    e.value = null;
                    return true;
                }
            }
            return false;
        }

        public int size() {
            return count;
        }

        public void clear() {
            Hashtable.this.clear();
        }
    }
    // Returns the ValueCollection object wrapped by synchronizedCollection;
    // the wrapper adds synchronized to every method for thread safety.
    public Collection<V> values() {
        if (values==null)
            values = Collections.synchronizedCollection(new ValueCollection(),
                                                        this);
        return values;
    }

    // The Collection view of the Hashtable's values.
    // ValueCollection extends AbstractCollection, so its elements may repeat.
    private class ValueCollection extends AbstractCollection<V> {
        public Iterator<V> iterator() {
            return getIterator(VALUES);
        }
        public int size() {
            return count;
        }
        public boolean contains(Object o) {
            return containsValue(o);
        }
        public void clear() {
            Hashtable.this.clear();
        }
    }
    // Overrides equals(): two Hashtables are equal when all of their
    // key-value pairs are equal.
    public synchronized boolean equals(Object o) {
        if (o == this)
            return true;

        if (!(o instanceof Map))
            return false;
        Map<K,V> t = (Map<K,V>) o;
        if (t.size() != size())
            return false;

        try {
            // Iterate over this Hashtable's key-value pairs and check that
            // each one is also present in t; return false as soon as one
            // is missing, and true once the whole table has been checked.
            Iterator<Map.Entry<K,V>> i = entrySet().iterator();
            while (i.hasNext()) {
                Map.Entry<K,V> e = i.next();
                K key = e.getKey();
                V value = e.getValue();
                if (value == null) {
                    if (!(t.get(key)==null && t.containsKey(key)))
                        return false;
                } else {
                    if (!value.equals(t.get(key)))
                        return false;
                }
            }
        } catch (ClassCastException unused)   {
            return false;
        } catch (NullPointerException unused) {
            return false;
        }

        return true;
    }
    // Computes the hash code of the whole Hashtable.
    // Returns 0 if the table is empty or the load factor is negative;
    // otherwise returns the sum, over all entries, of the XOR of each
    // key's and value's hash codes. The sign of loadFactor is used as a
    // marker to cut off self-referential recursion.
    public synchronized int hashCode() {
        int h = 0;
        if (count == 0 || loadFactor < 0)
            return h;  // Returns zero

        loadFactor = -loadFactor;  // Mark hashCode computation in progress
        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            for (Entry e = tab[i]; e != null; e = e.next)
                h += e.key.hashCode() ^ e.value.hashCode();
        loadFactor = -loadFactor;  // Mark hashCode computation complete

        return h;
    }
    // java.io.Serializable write support: writes the total capacity, the
    // element count and every Entry to the stream.
    private synchronized void writeObject(java.io.ObjectOutputStream s)
        throws IOException
    {
        // Write out the length, threshold, loadfactor.
        s.defaultWriteObject();

        // Write out length, count of elements and then the key/value objects.
        s.writeInt(table.length);
        s.writeInt(count);
        for (int index = table.length-1; index >= 0; index--) {
            Entry entry = table[index];

            while (entry != null) {
                s.writeObject(entry.key);
                s.writeObject(entry.value);
                entry = entry.next;
            }
        }
    }

    // java.io.Serializable read support: reads back exactly what
    // writeObject wrote.
    private void readObject(java.io.ObjectInputStream s)
        throws IOException, ClassNotFoundException
    {
        // Read in the length, threshold, and loadfactor.
        s.defaultReadObject();

        // Read the original length of the array and number of elements.
        int origlength = s.readInt();
        int elements = s.readInt();

        // Compute new size with a bit of room (5%) to grow, but no larger
        // than the original size. Make the length odd if it's large
        // enough; this helps distribute the entries. Guard against the
        // length ending up zero, which is not valid.
        int length = (int)(elements * loadFactor) + (elements / 20) + 3;
        if (length > elements && (length & 1) == 0)
            length--;
        if (origlength > 0 && length > origlength)
            length = origlength;

        Entry[] table = new Entry[length];
        count = 0;

        // Read the number of elements and then all the key/value objects.
        for (; elements > 0; elements--) {
            K key = (K)s.readObject();
            V value = (V)s.readObject();
            // synch could be eliminated for performance
            reconstitutionPut(table, key, value);
        }
        this.table = table;
    }

    // Used by readObject: a put() variant that bypasses synchronization
    // and rejects duplicates.
    private void reconstitutionPut(Entry[] tab, K key, V value)
        throws StreamCorruptedException
    {
        if (value == null) {
            throw new java.io.StreamCorruptedException();
        }
        // Makes sure the key is not already in the hashtable.
        // This should not happen in a deserialized stream.
        int hash = key.hashCode();
        int index = (hash & 0x7FFFFFFF) % tab.length;
        for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
            if ((e.hash == hash) && e.key.equals(key)) {
                throw new java.io.StreamCorruptedException();
            }
        }
        // Creates the new entry.
        Entry<K,V> e = tab[index];
        tab[index] = new Entry<K,V>(hash, key, value, e);
        count++;
    }
    // The Entry node of a Hashtable; in essence, a node of a singly
    // linked list, which is what tells us that Hashtable is a hash table
    // implemented with separate chaining.
    private static class Entry<K,V> implements Map.Entry<K,V> {
        int hash;
        K key;
        V value;
        // The next Entry, i.e. the next node of the list.
        Entry<K,V> next;

        // Constructor.
        protected Entry(int hash, K key, V value, Entry<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        protected Object clone() {
            return new Entry<K,V>(hash, key, value,
                  (next==null ? null : (Entry<K,V>) next.clone()));
        }

        public K getKey() {
            return key;
        }

        public V getValue() {
            return value;
        }

        // Sets the value; throws if the new value is null.
        public V setValue(V value) {
            if (value == null)
                throw new NullPointerException();

            V oldValue = this.value;
            this.value = value;
            return oldValue;
        }

        // Overrides equals(): two Entries are equal when both their keys
        // and their values are equal.
        public boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry)o;

            return (key==null ? e.getKey()==null : key.equals(e.getKey())) &&
               (value==null ? e.getValue()==null : value.equals(e.getValue()));
        }

        public int hashCode() {
            return hash ^ (value==null ? 0 : value.hashCode());
        }

        public String toString() {
            return key.toString()+"="+value.toString();
        }
    }
    private static final int KEYS = 0;
    private static final int VALUES = 1;
    private static final int ENTRIES = 2;

    // Enumerator backs both "traverse the Hashtable via elements()" and
    // "traverse the Hashtable via entrySet()".
    private class Enumerator<T> implements Enumeration<T>, Iterator<T> {
        // A reference to the Hashtable's table.
        Entry[] table = Hashtable.this.table;
        // The total size of the table.
        int index = table.length;
        Entry<K,V> entry = null;
        Entry<K,V> lastReturned = null;
        int type;

        // Whether this Enumerator serves as an Iterator or an Enumeration:
        // true means Iterator, false means Enumeration.
        boolean iterator;

        // Used when the Enumerator acts as an Iterator, to implement the
        // fail-fast mechanism.
        protected int expectedModCount = modCount;

        Enumerator(int type, boolean iterator) {
            this.type = type;
            this.iterator = iterator;
        }

        // Walks the table array from back to front until a non-null Entry
        // is found.
        public boolean hasMoreElements() {
            Entry<K,V> e = entry;
            int i = index;
            Entry[] t = table;
            /* Use locals for faster loop iteration */
            while (e == null && i > 0) {
                e = t[--i];
            }
            entry = e;
            index = i;
            return e != null;
        }

        // Returns the next element.
        // Note: hasMoreElements() and nextElement() show how elements()
        // traverses a Hashtable: walk the table array from back to front,
        // and walk each bucket's singly linked Entry list front to back.
        public T nextElement() {
            Entry<K,V> et = entry;
            int i = index;
            Entry[] t = table;
            /* Use locals for faster loop iteration */
            while (et == null && i > 0) {
                et = t[--i];
            }
            entry = et;
            index = i;
            if (et != null) {
                Entry<K,V> e = lastReturned = entry;
                entry = e.next;
                return type == KEYS ? (T)e.key : (type == VALUES ? (T)e.value : (T)e);
            }
            throw new NoSuchElementException("Hashtable Enumerator");
        }

        // The Iterator's "is there a next element": simply delegates to
        // hasMoreElements().
        public boolean hasNext() {
            return hasMoreElements();
        }

        // The Iterator's "get the next element": delegates to
        // nextElement() after a fail-fast check.
        public T next() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            return nextElement();
        }

        // The Iterator's remove(): first find the Entry list holding the
        // last returned element in the table, then unlink that element
        // from the list.
        public void remove() {
            if (!iterator)
                throw new UnsupportedOperationException();
            if (lastReturned == null)
                throw new IllegalStateException("Hashtable Enumerator");
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();

            synchronized(Hashtable.this) {
                Entry[] tab = Hashtable.this.table;
                int index = (lastReturned.hash & 0x7FFFFFFF) % tab.length;

                for (Entry<K,V> e = tab[index], prev = null; e != null;
                     prev = e, e = e.next) {
                    if (e == lastReturned) {
                        modCount++;
                        expectedModCount++;
                        if (prev == null)
                            tab[index] = e.next;
                        else
                            prev.next = e.next;
                        count--;
                        lastReturned = null;
                        return;
                    }
                }
                throw new ConcurrentModificationException();
            }
        }
    }

    private static Enumeration emptyEnumerator = new EmptyEnumerator();
    private static Iterator emptyIterator = new EmptyIterator();

    // The empty enumeration: returned when an empty Hashtable is
    // traversed through an Enumeration.
    private static class EmptyEnumerator implements Enumeration<Object> {
        EmptyEnumerator() {
        }
        // hasMoreElements() of the empty enumeration always returns false.
        public boolean hasMoreElements() {
            return false;
        }
        // nextElement() of the empty enumeration always throws.
        public Object nextElement() {
            throw new NoSuchElementException("Hashtable Enumerator");
        }
    }

    // The empty iterator: returned when an empty Hashtable is traversed
    // through an Iterator.
    private static class EmptyIterator implements Iterator<Object> {
        EmptyIterator() {
        }
        public boolean hasNext() {
            return false;
        }
        public Object next() {
            throw new NoSuchElementException("Hashtable Iterator");
        }
        public void remove() {
            throw new IllegalStateException("Hashtable Iterator");
        }
    }
}

A few points in summary

As with HashMap, some important points about Hashtable are best made by comparing the two:

1. Both use the same storage structure and the same way of resolving collisions (separate chaining).

2. When no capacity is specified, Hashtable defaults to a capacity of 11 while HashMap defaults to 16; Hashtable does not require the underlying array's capacity to be a power of two, while HashMap does.

3. Hashtable allows neither null keys nor null values, while HashMap allows both (at most one key may be null, while any number of values may be null). A call such as put(null, null) on a Hashtable still compiles, because key and value are both reference types, but it throws a NullPointerException at run time, as the JDK specification requires. Let's look at the source of containsKey and containsValue:
    // Tests whether the Hashtable contains the given value.
    public synchronized boolean contains(Object value) {
        // Note: values in a Hashtable must not be null;
        // a null value makes this method throw.
        if (value == null) {
            throw new NullPointerException();
        }

        // Walk the table array from back to front and, for each Entry
        // list, compare every node's value with the argument.
        Entry tab[] = table;
        for (int i = tab.length ; i-- > 0 ;) {
            for (Entry<K,V> e = tab[i] ; e != null ; e = e.next) {
                if (e.value.equals(value)) {
                    return true;
                }
            }
        }
        return false;
    }

    public boolean containsValue(Object value) {
        return contains(value);
    }

    // Tests whether the Hashtable contains the given key.
    public synchronized boolean containsKey(Object key) {
        Entry tab[] = table;
        // The hash is simply the key's hashCode().
        int hash = key.hashCode();
        // Compute the index into the array.
        int index = (hash & 0x7FFFFFFF) % tab.length;
        // Find the Entry list for the key, then look for a node whose
        // hash and key both match.
        for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
            if ((e.hash == hash) && e.key.equals(key)) {
                return true;
            }
        }
        return false;
    }
Clearly, a null value makes these methods throw a NullPointerException right away, yet the source never explicitly checks whether the key is null. The reason no explicit check is needed is that NullPointerException is a RuntimeException, which the JVM raises on its own: calling hashCode() on a null key throws it anyway.

4. When resizing, Hashtable grows its capacity to twice the old capacity plus one, while HashMap doubles its capacity.

5. Hashtable uses the key's hashCode() directly as the hash, while HashMap recomputes a hash from it. Hashtable maps a hash to a position index with a modulo operation, while HashMap uses a bitwise AND. The hash is first masked with & 0x7FFFFFFF before taking the modulo of the length because hash codes can be negative: the mask clears only the sign bit, turning a negative hash into a non-negative one while leaving all the other bits unchanged.
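Two of these differences are easy to check directly. The small sketch below (illustrative code, not from the JDK) shows that the (hash & 0x7FFFFFFF) % length index computation stays in range even for a negative hashCode(), and that the capacity sequence grows as 2n+1:

```java
// Illustrative helpers mirroring two details of Hashtable's source:
// the bucket index computation and the rehash() growth rule.
class HashtableFacts {
    // Mask off the sign bit, then take the value modulo the table length,
    // exactly as containsKey()/get()/put() do.
    public static int index(int hash, int length) {
        return (hash & 0x7FFFFFFF) % length;
    }

    // rehash() computes newCapacity = oldCapacity * 2 + 1.
    public static int grow(int capacity) {
        return capacity * 2 + 1;
    }
}
```

Starting from the default capacity of 11, grow() yields 23, 47, 95, and so on, while HashMap instead doubles: 16, 32, 64.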