Wednesday, January 4, 2012

HDFS - Write Anatomy

Creating and Writing an HDFS File

Creating and writing a file is more complicated than reading an HDFS file. Here too, the NameNode (NN) never writes any data directly to the DataNodes (DN). As per its role, it only manages the namespace and inodes; the client has to write directly to a datanode. However, each datanode has to acknowledge receipt of each block back to the client and the namenode. Also, each datanode passes the block on to the next datanode in the pipeline, which means the client has to transmit a block only to the first datanode, and the rest of the block movement is handled inside the cluster. Here is the flow of a file create and write on HDFS.

[Illustration: flow of an HDFS file create and write, steps 1-10]

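Before the finer points, here is what this flow looks like from the client side. This is a minimal sketch in Java, assuming a cluster reachable at the hypothetical address hdfs://namenode:8020; the path and file contents are illustrative too:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Pre-0.23 config key; the NN address here is a hypothetical example.
            conf.set("fs.default.name", "hdfs://namenode:8020");
            FileSystem fs = FileSystem.get(conf);

            // create() asks the NN to add the inode and grants the client a lease;
            // the returned stream is backed by DFSClient, which buffers data locally
            // and streams each block to the first datanode of its pipeline.
            FSDataOutputStream out = fs.create(new Path("/user/demo/sample.txt"));
            out.write("hello hdfs".getBytes("UTF-8"));
            out.close(); // flushes the last partial block and releases the lease
            fs.close();
        }
    }

All the pipeline work happens inside the stream returned by create(); the application code never talks to a datanode explicitly.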
Other facts about HDFS write:
  • Interestingly, the client and the NN do not wait till all the replicas of a block are acknowledged; they only ensure that at least one complete copy of the file is on the cluster.
  • Another fact is that the client does not start transmitting data till it has one full block with it, i.e. the DFSClient keeps buffering data locally till one block of data is accumulated or end-of-file is reached.
  • The above illustration assumes that the replication factor of the file is set to three. If the replication factor is more or less than three, steps 6-10 are repeated accordingly. BTW, the minimum replication factor of a file can be 1 and the maximum can be 512; the default value is three.
  • Also, this illustration holds for versions before Hadoop 0.23. I still have to look into the federated NN architecture.
  • While the lease is with a particular client, no other client can write to the file or delete it. However, other clients can still read it.
  • Writes can happen only at the end of the file, i.e. only append is allowed, and that too only on version 0.20.205 and beyond. The feature was there in earlier versions as well, but it was not well tested and was disabled by default. Both the replication and append knobs are shown in the sketch after this list.
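
To make the replication and append points concrete, here is a minimal sketch, assuming an already-running cluster and an existing file at a hypothetical path; the config keys are the 0.20-era names:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationAndAppend {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.setInt("dfs.replication", 3);           // default for new files (1..512)
            conf.setBoolean("dfs.support.append", true); // append switch on 0.20.205-era clusters
            FileSystem fs = FileSystem.get(conf);

            Path p = new Path("/user/demo/sample.txt");  // hypothetical existing file

            // Change the replication factor of an existing file; the NN schedules
            // the extra copies (or removals) in the background.
            fs.setReplication(p, (short) 2);

            // Append takes the lease on the file and continues its last block.
            FSDataOutputStream out = fs.append(p);
            out.write(" appended bytes".getBytes("UTF-8"));
            out.close(); // releases the lease so another client can write
            fs.close();
        }
    }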

There are some other concepts like block and replica management, which I will cover in my next post. 

Monday, January 2, 2012

HDFS - Read Anatomy

Following is the read anatomy of an HDFS file:

[Illustration: flow of an HDFS file read, steps 1-8]
1. Client requests the file.
2. NN checks the permissions and sends back the list of blocks and, for each block, the list of datanodes (including the port to talk to) that hold it.
3-6. The "DFSClient" class on the client side picks up the first block and requests it from the first datanode on the list. It tries twice; if there is no response, it adds that datanode to a "deadnodes" list and requests the block from the next datanode on the list.
7-8. After a successful read of all the blocks, "DFSClient" sends the deadnodes list back to the NN for it to take action.
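
For completeness, here is the same flow from the application's point of view; a minimal sketch in Java, again assuming the hypothetical NN address hdfs://namenode:8020 and an existing file path:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://namenode:8020"); // pre-0.23 key, address is illustrative
            FileSystem fs = FileSystem.get(conf);

            // open() triggers step 2: DFSClient asks the NN for the block list
            // and, for each block, an ordered list of datanodes to read from.
            FSDataInputStream in = fs.open(new Path("/user/demo/sample.txt"));
            BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
            String line;
            while ((line = reader.readLine()) != null) {
                // DFSClient fetches block after block from the datanodes underneath.
                System.out.println(line);
            }
            reader.close();
            fs.close();
        }
    }

Steps 3-8 (datanode selection, retries, and the deadnodes bookkeeping) all happen inside DFSClient beneath this stream.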

I will talk about write anatomy in the next post. Please keep reading...