Why does ResourceManager crash after some time, or while accessing HDFS, in Hadoop 2.8.1 on Ubuntu 16.04?


I have set up a Hadoop 2.8.1 cluster on Ubuntu 16.04 LTS, with one machine running the NameNode daemon and two machines running the DataNode daemon. I am using it for testing purposes for now, so I have allocated them 20 GB of space.

Whenever I start all the daemons, my ResourceManager crashes either within the first minute or when I try to access HDFS.
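This is how I check which daemons are up and inspect the ResourceManager log (a sketch; the glob assumes Hadoop's default yarn-<user>-resourcemanager-<host>.log naming under $HADOOP_HOME/logs):

# list the running Hadoop/YARN JVMs on this node
jps
# inspect the most recent ResourceManager log entries
tail -n 100 $HADOOP_HOME/logs/yarn-*-resourcemanager-*.log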

My configurations are as follows:

/etc/hosts:

192.168.15.20 slave1 slave1
192.168.15.21 master2 master2
192.168.15.22 slave3 slave3

hdfs-site.xml (master)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
       <name>dfs.replication</name>
       <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/usr1/hadoop/store/hdfs/namenode</value>
    </property>
</configuration>

hdfs-site.xml (slaves)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
       <name>dfs.replication</name>
       <value>3</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/usr1/hadoop/store/hdfs/datanode</value>
    </property>
</configuration>

core-site.xml (master & slaves)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
    <name>fs.default.name</name>
    <value>hdfs://master2:9000</value>
</property>
</configuration>
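Note: fs.default.name is deprecated in Hadoop 2.x in favour of fs.defaultFS. A minimal sketch of the same setting under the current key:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master2:9000</value>
</property>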

JAVA_HOME (hadoop-env.sh)

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

.bashrc

# -- HADOOP ENVIRONMENT VARIABLES START -- #
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/usr/lib/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/*:.
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.egd=file:/dev/../dev/urandom"

mapred-site.xml

<?xml version="1.0"?>
<!-- mapred-site.xml -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master2:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master2:19888</value>
    </property>
    <property>
        <name>mapred.child.java.opts</name>
        <value>-Djava.security.egd=file:/dev/../dev/urandom</value>
    </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<configuration>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master2:8025</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master2:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master2:8051</value>
</property>
</configuration>
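In Hadoop 2.x, these three explicit addresses can normally be collapsed into a single yarn.resourcemanager.hostname property, from which the individual service addresses are derived on their default ports. A sketch, assuming the custom ports above are not actually required:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master2</value>
</property>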

Looking at the ports gives the following results:
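For reference, this is the kind of command used to inspect the listening ports (assuming net-tools is installed; ss is the modern alternative):

# list TCP sockets in LISTEN state owned by the Hadoop JVMs
sudo netstat -tlnp | grep java
# or, on systems without net-tools:
sudo ss -ltnp | grep java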

Apr 15, 2018 in Big Data Hadoop by coldcode

1 answer to this question.

I was facing the same problem, and I later realized it was due to a lack of RAM. I doubled the RAM of the DataNodes and quadrupled the RAM of the NameNode, and everything started working fine.

In my experience, 8-10 GB of total RAM is a good fit for your case. Generally, the Java heap of the ResourceManager, NodeManager, and DataNode should each be at least about 0.6-0.7 GB, so each machine should be given a minimum of around 2 GB of RAM. The NameNode keeps the map of data blocks in memory, so it requires more; I would recommend allocating it 2-4 GB.
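If the NodeManagers still over-commit memory after the RAM upgrade, the YARN memory limits can also be capped explicitly in yarn-site.xml. A sketch sized for a roughly 2 GB worker; the values here are illustrative assumptions, not measured settings:

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1536</value>   <!-- total memory YARN may hand out on this node -->
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>    <!-- smallest container YARN will allocate -->
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>1536</value>   <!-- largest single container -->
</property>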
answered Apr 15, 2018 by Shubham
