A summary from dirkmeister on storage systems

Original: http://dirkmeister.blogspot.com/2010/01/storage-systems-course-my-own-idea.html (blocked in mainland China; a proxy may be needed)



In this post I will collect and organize some of my own views and ideas for a storage systems course. I assume a 15-week course with one 1.5-hour lecture per week (as is common here in Germany):


Material: Ruemmler, Wilkes. An introduction to disk drive modeling

  1. Disk Scheduling / SSD
    Material: Iyer, Druschel. Anticipatory scheduling: A disk scheduling framework to overcome deceptive idleness in synchronous I/O, Agrawal et al. Design Tradeoffs for SSD Performance
  2. RAID
    Material: Patterson et al. Introduction to Redundant Arrays of Inexpensive Disks (RAID), Corbett. Row-Diagonal Parity for Double Disk Failure Correction
  3. Local File Systems
  4. Local File System Case Studies: ext3, btrfs
    Material: Valerie Aurora. A short history of btrfs, Card et al. Design and Implementation of the Second Extended Filesystem
  5. Local File Structures (Sequential, Hashing, B-Tree)
    Material: Comer. The Ubiquitous B-Tree
  6. SAN / NAS / Object-based Storage
    Material: Sacks. Demystifying DAS, SAN, NAS, NAS Gateways, Fibre Channel, and iSCSI
  7. Examples: NFS, Ceph, GoogleFS/Hadoop DFS
    Material: Weil et al. Ceph: A scalable, high-performance distributed file system, Ghemawat et al. The Google File System
  8. Snapshots and Log-based Storage Designs
    Material: Brinkmann, Effert. Snapshots and Continuous Data Replication in Cluster Storage Environments, Hitz et al. File System Design for an NFS File Server Appliance, Rosenblum, Ousterhout. The Design and Implementation of a Log-Structured File System
  9. Fault Tolerance, Journaling, and Soft Updates
    Material: Prabhakaran et al. Analysis and Evolution of Journaling File Systems, Seltzer et al. Journaling Versus Soft Updates: Asynchronous Meta-data Protection in File Systems
  10. Advanced Hashing: Consistent Hashing, Share, and Crush
    Material: Karger et al. Consistent hashing and random trees: distributed caching protocols for relieving hot spots on the World Wide Web, Weil et al. CRUSH: controlled, scalable, decentralized placement of replicated data
  11. Caching, Replication
    Material: Nelson et al. Caching in the Sprite network file system, Kistler et al. Disconnected operation in the Coda File System
  12. Consistency, Availability, and Partition Tolerance
    Material: DeCandia et al. Dynamo: Amazon’s Highly Available Key-value Store, Helland. Life beyond Distributed Transactions: An Apostate’s Opinion
  13. Data Deduplication
    Material: Muthitacharoen et al. A Low-bandwidth Network File System, Douglis, Iyengar. Application-specific Delta-encoding via Resemblance Detection
  14. Performance Analysis
    Material: Traeger et al. A nine year study of file system and storage benchmarking (at least parts of it)



  • To clearly separate classes of file systems from concrete examples. The best example is the class of network file systems vs. NFS. At the end there should be little question whether something is an inherent property of a class of file systems or of the concrete implementation.
  • To have enough time to handle the basic concepts independently of concrete usages, for example explaining B-Trees as an important file structure independently of their use in, e.g., btrfs.
  • The concepts are more important than the current technology or standards.

How Easy Is It to Write a Distributed Storage System?

I wrote a user-space distributed storage system (let's call it that) in Python that stores whole files. (Today I found on Google that two people have done the same work: PyGfs uses the pyro library to wrap some remote calls, and the other one simulates GFS functionality; neither codebase is long.)

My distributed file system consists of a Client side and a DataServer side. There is no NameServer, because the target DS server is derived by hashing the file name. The storage granularity is a whole file, and two operations are provided: write and fetch. The read/write path works like this: hash the path + file name, take it modulo the number of machines to get the target host's number and address, then send the data over a socket. The number of DS machines is configurable at deployment time and written in a configuration file.
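The placement rule described above can be sketched in a few lines (Python 3 here; the helper name `locate` is mine, but the hash matches the client code below: the last four hex digits of the MD5 of the full path, modulo the server count):

```python
import hashlib

def locate(path, num_servers):
    """Return the index of the DataServer responsible for this path."""
    # Last 4 hex digits of the MD5 digest, interpreted as an integer
    h = hashlib.md5(path.encode()).hexdigest()
    return int(h[-4:], 16) % num_servers
```

Every client computes the same mapping independently, which is what makes a NameServer unnecessary, at the cost of remapping almost everything if the machine count changes.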

TODO: file chunking, and a NameServer implementation (storing only the directory tree)

The configuration file uses the format ip,port:
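A hypothetical three-node machineconf.txt (the addresses are made up) and the one-entry-per-line parsing it implies, matching what the client's Configuration class does:

```python
# Hypothetical machineconf.txt contents: one "ip,port" pair per line
conf_text = """127.0.0.1,10000
127.0.0.1,10001
127.0.0.1,10002"""

# One server entry per non-empty line
servers = [line.strip() for line in conf_text.splitlines() if line.strip()]
count = len(servers)
ip, port = servers[0].split(',')
```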



On the DataServer side:

#!/usr/bin/env python
#encoding=utf-8
import sys
import socket

class DataServer:
    port=10000
    def __init__(self):
        pass
    def run(self):
        dsoc=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
        dsoc.bind(('localhost',self.port))
        dsoc.listen(1)
        print "listening at",self.port
        while True:
            conn,addr=dsoc.accept()
            print "connected",addr
            # fixed 13-byte request header: 8-char opcode, comma, 4-digit name length
            databuf=conn.recv(13)
            print "received",databuf
            if databuf.startswith('WRITFILE'):
                print "OPS == write file"
                fnamelen=int(databuf.split(',')[1])
                conn.send('OK')
                print "filenamelen is",fnamelen
                filename=conn.recv(fnamelen)
                # keep only the base name of the Windows-style path
                filename=filename[filename.rindex('\\')+1:]
                print "file is",filename
                fp=open(filename,'wb')
                while True:
                    data=conn.recv(1024)
                    if not data:
                        break
                    fp.write(data)
                fp.flush()
                fp.close()
                print "finished!",filename
            if databuf.startswith('FETCFILE'):
                print "OPS == fetch file"
                fnamelen=int(databuf.split(',')[1])
                conn.send('OK')
                print "filenamelen is",fnamelen
                filename=conn.recv(fnamelen)
                filename=filename[filename.rindex('\\')+1:]
                print "file is",filename
                fp=open(filename,'rb')
                while True:
                    data=fp.read(4096)
                    if not data:
                        break
                    # send() may transmit only part of the buffer; loop until done
                    while len(data)>0:
                        sent=conn.send(data)
                        data=data[sent:]
                fp.close()
                print "finished sending",filename
            conn.close()
    def setPort(self,port):
        self.port=int(port)

if __name__=="__main__":
    ds=DataServer()
    ds.setPort(sys.argv[1])
    ds.run()
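The fixed 13-byte request header the server reads can be sketched on its own. A small Python 3 helper pair (the function names are mine, but the format mirrors the "WRITFILE,%04d" string used by the code):

```python
def make_header(op, name):
    """Build the 13-byte header: 8-char opcode, comma, 4-digit name length."""
    assert op in ("WRITFILE", "FETCFILE")
    return "%s,%04d" % (op, len(name))

def parse_header(header):
    """Split a header back into (opcode, filename_length)."""
    op, length = header.split(',')
    return op, int(length)
```

A fixed-length header is what lets the server do a single `recv(13)` before deciding how many more bytes to read for the file name.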


On the Client side:

#!/usr/bin/env python
#encoding=utf-8
import socket
import hashlib

class Configuration:
    def __init__(self):
        # instance attributes: a class-level list would be shared by all instances
        self.count=0
        self.clist=[]
        self.readPath("machineconf.txt")
    def readPath(self,pStr):
        self.pathStr=pStr
        fp=open(self.pathStr,'r')
        for line in fp:
            line=line.rstrip()
            if line:
                self.clist.append(line)
                self.count=self.count+1
        fp.close()
    def getCount(self):
        return self.count
    def getList(self):
        return self.clist

class Client:
    def __init__(self):
        conf=Configuration()
        self.maList=conf.getList()
        self.macNum=conf.getCount()
    def write(self,src):
        srcHash=self.hash(src)
        print "File is",src
        locatedNum=srcHash%self.macNum
        print "Location machine number is",locatedNum
        toIp,toPort=self.maList[locatedNum].split(',')
        print "Send to",toIp,toPort
        self.send(src,toIp,toPort)
    def hash(self,src):
        # last 4 hex digits of the MD5 of the full path pick the target server
        md5=hashlib.md5()
        md5.update(src)
        return int(md5.hexdigest()[-4:],16)
    def send(self,src,ip,port):
        clsoc=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
        clsoc.connect((ip,int(port)))
        fp=open(src,'rb')
        formStr="WRITFILE,%04d"%len(src)
        print "formStr",formStr,len(formStr)
        clsoc.send(formStr)
        resdata=clsoc.recv(1024)
        if resdata.startswith('OK'):
            print "OK"
        print "sending....",src
        clsoc.send(src)
        print "sending data...."
        while True:
            data=fp.read(4096)
            if not data:
                break
            while len(data)>0:
                sent=clsoc.send(data)
                data=data[sent:]
        print "finished sending",src
        fp.close()
        # close the socket so the server's recv loop sees EOF
        clsoc.close()
    def fetch(self,src):
        srcHash=self.hash(src)
        locatedNum=srcHash%self.macNum
        toIp,toPort=self.maList[locatedNum].split(',')
        print "fetch from",toIp,toPort
        self.get(src,toIp,toPort)
    def get(self,src,ip,port):
        clsoc=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
        clsoc.connect((ip,int(port)))
        formStr="FETCFILE,%04d"%len(src)
        print "formStr",formStr
        clsoc.send(formStr)
        resdata=clsoc.recv(1024)
        if resdata.startswith('OK'):
            print "OK"
        print "sending....",src
        clsoc.send(src)
        print "fetching data...."
        ffile=src[src.rindex('\\')+1:]
        fp=open(ffile,'wb')
        while True:
            data=clsoc.recv(1024)
            if not data:
                break
            fp.write(data)
        fp.flush()
        fp.close()
        print "finished!",src

if __name__=="__main__":
    mac=Client()
    src="e:\\pics\\tumblr_m0pvvmcIv41r9mp65o1_500.gif"
    mac.write(src)
    mac.fetch(src)

Understanding data consistency (Consistency) through fsck

Last night while chatting with fumin about data consistency, he asked what data consistency actually is, and I could only mechanically recite the definition: the data you access is the data you expect, not some other data or garbage. This morning I re-read the file systems chapter of Operating Systems: Design and Implementation, which describes what fsck does. fsck is the consistency-checking tool on UNIX-like operating systems, and it performs the following work:

First, block usage: build two tables. For the first, traverse all inodes and record block usage, adding one at a block's position each time the block is used, so n means the block is in use n times and 0 means it is not in use. For the second, traverse the free-block list (or the block bitmap) and record all free blocks, where 1 means free and 0 means not free. The normal situation then looks like this:

0011001001001 blocks in use

1100110110110 free blocks


0011001001001 blocks in use

0100110110110 free blocks

Here the first block appears in neither table: a "missing block". It does no real harm but wastes space, and the fix is simply to add it back to the free list. Another anomaly:


0011001001001 blocks in use

2100110110110 free blocks

That is, the first block is recorded twice in the free-block list (this cannot happen with a block bitmap). This case is simple to handle: delete the redundant entry from the free list. More serious cases:

1011001001001 blocks in use (case A)

1100110110110 free blocks

2011001001001 blocks in use (case B)

0100110110110 free blocks

In case A the first data block is both in the free table and in the in-use table. If this free block were allocated to new data, the old data would be completely overwritten and unrecoverable; to avoid that, the block should be removed from the free list. Case B is a data block used by two files: copy the block's contents to a new block and update one of the inodes so that its block address points at the copy.
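The block check above can be condensed into a toy classifier (a sketch, not real fsck): given the two per-block counts, name the anomaly for each block.

```python
def check_blocks(in_use, in_free):
    """in_use[i]: times block i appears across inodes;
    in_free[i]: times block i appears in the free list.
    Returns {block index: anomaly name} for the cases described above."""
    issues = {}
    for i, (u, f) in enumerate(zip(in_use, in_free)):
        if u + f == 0:
            issues[i] = "missing"        # in neither table: add to free list
        elif u == 0 and f > 1:
            issues[i] = "dup-free"       # twice in free list: drop the extra entry
        elif u > 0 and f > 0:
            issues[i] = "used-and-free"  # case A: remove from free list
        elif u > 1:
            issues[i] = "dup-used"       # case B: copy block, repoint one inode
    return issues
```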

Next, after the block check, the links are checked. Each file's inode has a counter: it is set to 1 when the file is created and incremented by 1 for every additional hard link; normally the file can only be deleted when its counter drops to zero. During fsck, the number of hard links (directory entries) for each file is counted and compared with the file's counter. They can disagree in two ways: if the stored counter is higher than the actual number of directory entries, the inode will never be freed even after all links are removed, which only wastes space; if it is lower, the counter can reach zero while directory entries still point at the inode, so the inode may be freed while still in use, which is far more dangerous. In both cases the remedy is to set the counter to the actual number of links.
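The link check can be sketched the same way (a toy model: directory entries are (name, inode) pairs, and the remedy is always to set the stored counter to the real count):

```python
def check_link_counts(dir_entries, stored_counts):
    """dir_entries: list of (name, inode) pairs found by walking directories;
    stored_counts: {inode: link counter stored in the inode}.
    Returns {inode: corrected counter} for every mismatch."""
    actual = {}
    for _name, ino in dir_entries:
        actual[ino] = actual.get(ino, 0) + 1
    fixes = {}
    for ino, stored in stored_counts.items():
        real = actual.get(ino, 0)
        if stored != real:
            fixes[ino] = real  # fsck's remedy: set the counter to the real count
    return fixes
```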






An analysis of HDFS's Block http://blog.sina.com.cn/s/blog_698853500100rxwu.html

  • org.apache.hadoop.hdfs.protocol
    • Block.java implements the Block class; a Block's file name is "blk_"+BlockId.
    • BlockListAsLongs.java provides a long[]-typed interface for accessing a list of blocks, used for block reports: it returns long[] rather than Block[]. It shows that a Block consists of three longs: block_id, block length, and generation stamp (http://hi.baidu.com/jay23jack/blog/item/388d770bb4db20b72fddd40f.html discusses the roles of, and the difference between, the stamp and the BlockId).
    • BlockLocalPathInfo.java: a block in the local file system and the path to its local metadata.
    • LocatedBlock.java/LocatedBlocks.java: very important basic classes. A LocatedBlock contains the following:


the data block (of type Block, together with the offset of the first byte of the block within the file) and the locations holding it; a LocatedBlocks then contains:

 List<LocatedBlock> blocks (the list of blocks)

 underConstruction (whether the file is under a lease)
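As an illustration of the BlockListAsLongs idea (three longs per block, flattened into one array; this is a sketch of the layout, not Hadoop's exact wire format):

```python
def to_longs(blocks):
    """Flatten (block_id, num_bytes, generation_stamp) tuples into one flat list."""
    out = []
    for block_id, num_bytes, gen_stamp in blocks:
        out.extend([block_id, num_bytes, gen_stamp])
    return out

def block_at(longs, i):
    """Recover the i-th block's three longs from the flat list."""
    return tuple(longs[3*i:3*i+3])
```

Passing one flat array of primitives instead of an array of Block objects keeps block reports cheap to serialize.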

  • org.apache.hadoop.hdfs.server.namenode
    • BlocksMap.java maps a block to its block metadata (which INode the block belongs to and which DataNodes store it). BlockInfo is the key structure in BlocksMap and holds that mapping: it extends the Block class and contains an INodeFile reference and a triplets array in which, for replica i, (1) triplets[3*i] is the descriptor of the DataNode storing the block, (2) triplets[3*i+1] is the previous block on that DataNode, and (3) triplets[3*i+2] is the next block on that DataNode. Schematically:


BlockInfo (extends Block):

blockId (long)

numBytes (long)

triplets: DN1|prev|next <-> DN2|prev|next <-> DN3|prev|next
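The triplets layout can be modeled as a toy Python class (the method names are mine; in HDFS the prev/next slots link BlockInfo objects into each DataNode's list of blocks):

```python
class BlockInfo:
    """Toy model of BlocksMap's per-block triplets array."""
    def __init__(self, block_id, replicas):
        self.block_id = block_id
        # three slots per replica: [datanode, prev block, next block]
        self.triplets = [None] * (3 * replicas)

    def set_replica(self, i, datanode, prev_blk=None, next_blk=None):
        self.triplets[3*i] = datanode
        self.triplets[3*i + 1] = prev_blk
        self.triplets[3*i + 2] = next_blk

    def datanode(self, i):
        return self.triplets[3*i]
```

Packing the three references per replica into one flat array avoids allocating a separate node object per (block, DataNode) pair.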

  • org.apache.hadoop.hdfs.server.datanode
    • BlockAlreadyExistsException.java: thrown when a block already exists and cannot be written
    • BlockReceiver.java: receives a block and writes it to disk
    • BlockSender.java: reads a block locally and sends it to a receiver
    • DataBlockScanner: scans blocks without changing the metadata. The scan rate is throttled to between 1 MB/s and 8 MB/s, and the default scan period is three weeks; each block's record includes the scan type, time, and result
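The throttling can be sketched as a simple pacing computation (a sketch of the idea only, not DataBlockScanner's actual implementation; the 1-8 MB/s figures come from the text above):

```python
def sleep_needed(bytes_read, elapsed_s, max_rate):
    """Seconds to pause so that bytes_read / total_time stays under
    max_rate (bytes per second)."""
    min_time = bytes_read / float(max_rate)  # time the read *should* have taken
    return max(0.0, min_time - elapsed_s)
```

A scanner loop would call this after each chunk and `time.sleep()` for the returned duration, keeping verification I/O from starving real clients.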

Reference: http://blog.csdn.net/AE86_FC/article/details/5842020 — a detailed analysis of the NameNode startup process