Wednesday, 9 July 2014

Hadoop - A Framework for Data Intensive Computing Applications



What does it do?
  • Hadoop implements Google’s MapReduce, using HDFS
  • MapReduce divides applications into many small blocks of work. 
  • HDFS creates multiple replicas of data blocks for reliability, placing them on compute nodes around the cluster. 
  • MapReduce can then process the data where it is located. 
  • Hadoop’s target is to run on clusters on the order of 10,000 nodes.


Hadoop: Assumptions
  • It is written with large clusters of computers in mind and is built around the following assumptions:
  • Hardware will fail.
  • Processing will be run in batches. Thus there is an emphasis on high throughput as opposed to low latency.
  • Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. 
  • It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.
  • Applications need a write-once-read-many access model. 
  • Moving Computation is Cheaper than Moving Data. 
  • Portability across heterogeneous hardware and software platforms is important.

Example Applications and Organizations using Hadoop
  • A9.com – Amazon: To build Amazon's product search indices; process millions of sessions daily for analytics, using both the Java and streaming APIs; clusters vary from 1 to 100 nodes. 
  • Yahoo!: More than 100,000 CPUs in ~20,000 computers running Hadoop; biggest cluster: 2,000 nodes (2×4-CPU boxes with 4 TB of disk each); used to support research for Ad Systems and Web Search. 
  • AOL: Used for a variety of things ranging from statistics generation to running advanced algorithms for behavioral analysis and targeting; cluster size is 50 machines (Intel Xeon, dual processor, dual core, each with 16 GB RAM and an 800 GB hard disk), giving a total of 37 TB of HDFS capacity. 
  • Facebook: To store copies of internal log and dimension data sources and use them as a source for reporting/analytics and machine learning; 320-machine cluster with 2,560 cores and about 1.3 PB of raw storage. 
  • FOX Interactive Media: 3 × 20-machine clusters (8 cores/machine, 2 TB storage/machine); a 10-machine cluster (8 cores/machine, 1 TB storage/machine); used for log analysis, data mining, and machine learning. 
  • University of Nebraska-Lincoln: one medium-sized Hadoop cluster (200 TB) to store and serve physics data.

MapReduce Paradigm
  • Programming model developed at Google
  • Sort/merge based distributed computing
  • Initially intended for Google’s internal search/indexing application, but now used extensively by other organizations (e.g., Yahoo!, Amazon.com, IBM)
  • A functional-style programming model (in the spirit of LISP) that is naturally parallelizable across a large cluster of workstations or PCs
  • The underlying system takes care of partitioning the input data, scheduling the program’s execution across several machines, handling machine failures, and managing the required inter-machine communication. (This is key to Hadoop’s success.)


How does MapReduce work?
  • The runtime partitions the input and provides it to different Map instances.
  • Map(key, value) → (key’, value’)
  • The runtime collects the (key’, value’) pairs and distributes them to several Reduce functions so that each Reduce function gets all the pairs with the same key’.
  • Each Reduce produces a single output file (or none).
  • Map and Reduce are user-written functions (see the word-count sketch below).
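
The classic illustration of this model is word count: the Map function emits (word, 1) for every word in its input, and the Reduce function sums the counts for each word. The sketch below is a minimal version written against the standard Hadoop MapReduce Java API (the org.apache.hadoop.mapreduce classes, Hadoop 2.x style); the input and output paths are placeholders taken from the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map(key, value) -> (key', value'): emit (word, 1) for each word in the line.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce(key', [value']): sum the counts collected for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Input/output directories (typically HDFS paths) come from the command line.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Everything else (splitting the input, shuffling pairs by key, retrying failed tasks) is handled by the framework once the job is submitted with waitForCompletion().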

Hadoop Architecture




MapReduce: Fault Tolerance
  • Worker failure: The master pings every worker periodically. If no response is received from a worker in a certain amount of time, the master marks the worker as failed. Any map tasks completed by the worker are reset back to their initial idle state, and therefore become eligible for scheduling on other workers. Similarly, any map task or reduce task in progress on a failed worker is also reset to idle and becomes eligible for rescheduling.
  • Master failure: The master can write periodic checkpoints of its key data structures. If the master task dies, a new copy can be started from the last checkpointed state. However, in most cases, the user simply restarts the job.
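
The bookkeeping behind worker-failure handling is straightforward in principle. The sketch below is purely illustrative (it is not Hadoop's or Google's actual code, and all class and method names, as well as the 10-second deadline, are hypothetical): the master records the last time each worker answered a ping, periodically marks workers that miss the deadline as failed, and resets their map tasks to idle so they can be rescheduled.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch only; names and the heartbeat deadline are assumptions.
    public class MasterFailureDetector {
      enum TaskState { IDLE, IN_PROGRESS, COMPLETED }

      static final long TIMEOUT_MS = 10_000;                               // assumed deadline

      final Map<String, Long> lastReply = new ConcurrentHashMap<>();        // workerId -> last ping reply
      final Map<String, String> taskAssignment = new ConcurrentHashMap<>(); // taskId -> workerId
      final Map<String, TaskState> mapTaskState = new ConcurrentHashMap<>();// taskId -> state

      // Called whenever a worker answers the master's periodic ping.
      void onPingReply(String workerId) {
        lastReply.put(workerId, System.currentTimeMillis());
      }

      // Called periodically by the master; workers that missed the deadline are marked failed.
      void checkWorkers() {
        long now = System.currentTimeMillis();
        lastReply.forEach((workerId, last) -> {
          if (now - last > TIMEOUT_MS) {
            handleWorkerFailure(workerId);
          }
        });
      }

      // Map tasks on a failed worker (completed or in progress) go back to IDLE,
      // because their output lives on that worker's local disk and is now unreachable.
      void handleWorkerFailure(String failedWorker) {
        taskAssignment.forEach((taskId, workerId) -> {
          if (workerId.equals(failedWorker)) {
            mapTaskState.put(taskId, TaskState.IDLE);
          }
        });
        lastReply.remove(failedWorker);
      }
    }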

Mapping workers to Processors
  • The input data (on HDFS) is stored on the local disks of the machines in the cluster. HDFS divides each file into 64 MB blocks, and stores several copies of each block (typically 3 copies) on different machines. 
  • The MapReduce master takes the location information of the input files into account and attempts to schedule a map task on a machine that contains a replica of the corresponding input data. Failing that, it attempts to schedule a map task near a replica of that task's input data. When running large MapReduce operations on a significant fraction of the workers in a cluster, most input data is read locally and consumes no network bandwidth.
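
The block-to-host mapping that makes this locality-aware scheduling possible is also visible to ordinary HDFS clients. As a rough illustration, the sketch below asks HDFS (via the standard org.apache.hadoop.fs.FileSystem API) which hosts hold each block of a file; the file path is a placeholder supplied on the command line.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockLocations {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path(args[0]);                   // placeholder: an HDFS file path

        // Ask the NameNode which hosts hold each block of the file.
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
          System.out.printf("offset=%d length=%d hosts=%s%n",
              block.getOffset(), block.getLength(),
              String.join(",", block.getHosts()));
        }
        fs.close();
      }
    }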


Additional support functions
  • Partitioning function: The users of MapReduce specify the number of reduce tasks/output files that they desire (R). Data gets partitioned across these tasks using a partitioning function on the intermediate key. A default partitioning function is provided that uses hashing (e.g., hash(key) mod R). In some cases it may be useful to partition data by some other function of the key, so the user of the MapReduce library can provide a special partitioning function (see the sketch after this list). 
  • Combiner function: The user can specify a Combiner function that does partial merging of the intermediate data on local disk before it is sent over the network. The Combiner function is executed on each machine that performs a map task. Typically the same code is used to implement both the combiner and the reduce functions.
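
As a concrete illustration of both hooks, the class below reproduces the default hash(key) mod R behaviour as a custom Partitioner for the word-count job sketched earlier, and the trailing comments show how a partitioner and combiner would be wired into that job. This is only a sketch of the standard org.apache.hadoop.mapreduce.Partitioner API; in practice you would write one only to replace the default hashing with something else.

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Mirrors the default behaviour: hash(key) mod R, kept non-negative.
    public class WordPartitioner extends Partitioner<Text, IntWritable> {
      @Override
      public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
      }
    }

    // Wiring it into the word-count job from the earlier sketch:
    //   job.setNumReduceTasks(4);                        // R = 4 reduce tasks / output files
    //   job.setPartitionerClass(WordPartitioner.class);
    //   job.setCombinerClass(IntSumReducer.class);       // same code as the reducer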


Hadoop Distributed File System (HDFS)
  • The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. 
  • It is highly fault-tolerant and is designed to be deployed on low-cost hardware. 
  • It provides high-throughput access to application data and is suitable for applications that have large data sets. 
  • It relaxes a few POSIX requirements to enable streaming access to file system data. 
  • It is part of the Apache Hadoop Core project. The project URL is http://hadoop.apache.org/core/.
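
To make the write-once-read-many, streaming-access model concrete, here is a minimal sketch that writes and then reads back a file through the HDFS client API (org.apache.hadoop.fs.FileSystem); the path and the text written are placeholders, and the cluster address is assumed to come from the usual core-site.xml configuration.

    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsHelloWorld {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // reads fs.defaultFS from core-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/hello.txt");          // placeholder HDFS path

        // Write once...
        try (FSDataOutputStream out = fs.create(path, true)) {
          out.write("Hello, HDFS\n".getBytes(StandardCharsets.UTF_8));
        }

        // ...read many times (streaming access).
        try (FSDataInputStream in = fs.open(path)) {
          IOUtils.copyBytes(in, System.out, 4096, false);
        }
        fs.close();
      }
    }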

Hadoop Community
  • Hadoop Users
  1. Adobe
  2. Alibaba
  3. Amazon
  4. AOL
  5. Facebook
  6. Google
  7. IBM
  • Major Contributors
  1. Apache
  2. Cloudera
  3. Yahoo



