HADOOP- An Open Source Framework for Big Data


Abstract:

In this paper we discuss HADOOP (High Availability Distributed Object Oriented Platform), an open source framework for storing and processing huge amounts of data. HADOOP is written in Java. It follows a write-once-read-many, streaming access pattern: a file is written once and read as many times as needed, but its contents are not modified. A HADOOP cluster is built from heterogeneous commodity hardware and consists of two main components: HDFS (Hadoop Distributed File System) and MapReduce. HDFS is used for data storage and MapReduce for data processing. HDFS is suitable for storing data sets ranging from terabytes to petabytes across a cluster of commodity machines.
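
To make the storage/processing split described above concrete, the following minimal sketch shows the classic word-count job written against Hadoop's standard MapReduce Java API: HDFS holds the input and output directories, mappers emit (word, 1) pairs, and reducers sum the counts. The class names and the input/output paths (args[0], args[1]) are illustrative, not taken from the paper.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in a line read from HDFS.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts produced by the mappers for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // combiner cuts shuffle traffic
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The job would typically be packaged as a JAR and submitted with "hadoop jar", with both paths resident on HDFS so that map tasks run close to the data blocks they read.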
Date of Conference: 27-29 April 2022
Date Added to IEEE Xplore: 17 August 2022
Conference Location: London, United Kingdom
