
Hadoop and Map Reduce for Simplified Processing of Big Data

Shobha Rani

Abstract


Big data refers to large and complex data sets made up of a variety of structured and unstructured data, which are too big, arrive too fast, or are too hard to be managed by traditional techniques. Hadoop is an open-source software platform for storing and processing big data on clusters of commodity hardware, making the data available for analysis. It is designed to scale up from a single server to thousands of machines, with a very high degree of fault tolerance. MapReduce is a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. This paper presents a survey of simplified big data processing with the Hadoop architecture and the MapReduce framework.
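To illustrate the MapReduce programming model summarized above, the following is a minimal sketch of the canonical word-count job written against the Hadoop Java MapReduce API. The class name, input and output paths, and job name are illustrative choices, not taken from the paper itself.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts for each word after the shuffle.
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged as a JAR, such a job would typically be submitted with a command along the lines of "hadoop jar wordcount.jar WordCount <input dir> <output dir>", with the input read from and the results written back to HDFS.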

Keywords: Big data, Hadoop, MapReduce, HDFS, Big data analytics

 

Cite this Article: Ch. Shobha Rani. Hadoop and Map Reduce for Simplified Processing of Big Data. Current Trends in Information Technology. 2016; 6(3): 1–6p.

