Install Apache Hadoop on Debian 9 / Ubuntu 16.04 / CentOS 7 (Single Node Cluster)


Apache Hadoop is an open-source software framework written in Java for distributed storage and distributed processing. It handles very large data sets by distributing them across clusters of computers.

Rather than relying on hardware for high availability, the Hadoop modules are designed to detect and handle failures at the application layer, giving you a highly available service.

The Hadoop framework consists of the following modules:

  •  Hadoop Common – contains the common libraries and utilities that support the other Hadoop modules.
  •  Hadoop Distributed File System (HDFS) – a Java-based distributed file system that stores data, providing very high throughput to applications.
  •  Hadoop YARN – manages resources on compute clusters and uses them to schedule users’ applications.
  •  Hadoop MapReduce – a framework for large-scale data processing.

This guide will help you install Apache Hadoop on Debian 9 / Ubuntu 16.04 / CentOS 7. It should also work on Ubuntu 14.04.


Switch to the root user.

su -

### OR ###

sudo su -

Apache Hadoop requires Java version 8 or above, so you can choose to install either OpenJDK or Oracle JDK.

Here, for this demo, I will be installing OpenJDK 8.

### Debian 9 / Ubuntu 16.04 ###

apt-get -y install openjdk-8-jdk wget

### CentOS 7 / RHEL 7 ###

yum -y install java-1.8.0-openjdk wget

Create Hadoop user

It is recommended to create a regular user to configure and run Apache Hadoop. So, create a user named “hadoop” and set a password.

useradd -m -d /home/hadoop -s /bin/bash hadoop

passwd hadoop

Once the user is created, configure passwordless SSH to the local system. Create an SSH key using the following commands.

# su - hadoop

$ ssh-keygen

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

$ chmod 600 ~/.ssh/authorized_keys

Verify the passwordless communication to the local system. If you are doing ssh for the first time, type “yes” to add RSA keys to known hosts.

$ ssh localhost

Download Hadoop

You can visit Apache Hadoop page to download the latest Hadoop package, or you can just issue the following command in terminal to download Hadoop 2.8.1.

$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz

$ tar -zxvf hadoop-2.8.1.tar.gz

$ mv hadoop-2.8.1 hadoop

Install Apache Hadoop

Hadoop supports three cluster modes:

  1.     Local (Standalone) Mode – runs as a single Java process.
  2.     Pseudo-Distributed Mode – each Hadoop daemon runs as a separate process.
  3.     Fully Distributed Mode – an actual multi-node cluster, ranging from a few nodes to an extremely large cluster.

Set up environment variables

Here, we will configure Hadoop in Pseudo-Distributed mode. To start with, set the environment variables in the ~/.bashrc file.

$ vi ~/.bashrc

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk- # Change this to match your Java installation directory
export HADOOP_HOME=/home/hadoop/hadoop # Change this to match your Hadoop installation directory
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin # So the hdfs command and start-up scripts are on the PATH

Apply the environment variables to the current session.

$ source ~/.bashrc

Modify Configuration files

Edit the Hadoop environment file.

$ vi $HADOOP_HOME/etc/hadoop/hadoop-env.sh

Set the JAVA_HOME environment variable.

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-

Hadoop has many configuration files, and we need to edit them depending on the cluster mode we are setting up (Pseudo-Distributed).

$ cd $HADOOP_HOME/etc/hadoop

Edit core-site.xml
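For pseudo-distributed mode, core-site.xml needs at minimum the default file system URI. A sketch based on the Apache Hadoop single-node setup documentation (hdfs://localhost:9000 is the conventional value; adjust the host and port for your environment):

```xml
<configuration>
   <!-- URI of the default file system; HDFS running on this host -->
   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
   </property>
</configuration>
```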


Edit hdfs-site.xml
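For hdfs-site.xml, a single-node cluster typically sets the replication factor to 1 and points the NameNode and DataNode at local storage directories. A sketch (the /home/hadoop/hadoopdata paths are assumptions; pick directories that suit your layout, and make sure they match the storage directory you check when formatting the NameNode):

```xml
<configuration>
   <!-- Single node, so keep only one copy of each block -->
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>
   <!-- Assumed local directories for NameNode and DataNode data -->
   <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
   </property>
   <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
   </property>
</configuration>
```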




Edit mapred-site.xml

$ cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml
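In the copied mapred-site.xml, the standard pseudo-distributed setting tells MapReduce to run on top of YARN:

```xml
<configuration>
   <!-- Run MapReduce jobs via YARN rather than locally -->
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
</configuration>
```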

Edit yarn-site.xml
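For yarn-site.xml, the usual minimal setting enables the shuffle service that MapReduce jobs need on each NodeManager:

```xml
<configuration>
   <!-- Auxiliary service required for MapReduce shuffle -->
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
</configuration>
```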


Now format the NameNode using the following command. Do not forget to check the storage directory.

$ hdfs namenode -format


Allow Apache Hadoop through the firewall.

### CentOS 7 / RHEL 7 ###

firewall-cmd --permanent --add-port=50070/tcp
firewall-cmd --permanent --add-port=8088/tcp
firewall-cmd --reload

### Debian 9 / Ubuntu 16.04 ###

ufw allow 50070/tcp
ufw allow 8088/tcp
ufw reload

Start the NameNode daemon and DataNode daemon by using the scripts provided by Hadoop in the sbin directory.

$ cd $HADOOP_HOME/sbin/

$ ./start-dfs.sh

Open your web browser and browse the NameNode at http://<server-ip>:50070/.

Install Apache Hadoop on Debian 9 – Hadoop NameNode Information

Start the ResourceManager daemon and NodeManager daemon.

$ ./start-yarn.sh

Browse the web interface for the ResourceManager at http://<server-ip>:8088/.

Install Apache Hadoop on Debian 9 – Yarn

Testing the Hadoop single node cluster

Before carrying out the upload, let us create a directory in HDFS.

$ hdfs dfs -mkdir /raj

Let us upload a file into the HDFS directory called “raj”.

$ hdfs dfs -put ~/.bashrc /raj

Uploaded files can be viewed by visiting http://<server-ip>:50070/explorer.html or via Utilities –> Browse the file system in the NameNode web UI.

Install Apache Hadoop on Debian 9 – Hadoop FS

Copy the files from HDFS to your local file system.

$ hdfs dfs -get /raj /tmp/

You can delete the files and directories using the following commands.

hdfs dfs -rm /raj/.bashrc
hdfs dfs -rm -r -f /raj

That’s all. You have successfully configured a single node Hadoop cluster.
