Hadoop Datanode runs only once and then does not start again on Windows 10

0 votes
I was trying to install Hadoop on Windows 10 and run a simple sample program.

The Datanode started successfully only once; every time I try to start it again, I get this error:

```
2021-01-06 23:48:25,610 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/C:/hadoop/sbin/datanode
2021-01-06 23:48:25,666 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/C:/hadoop/sbin/datanode
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:608)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:823)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:737)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:705)
	at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
	at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
	at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
	at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
	at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2021-01-06 23:48:25,671 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
	at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:233)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2841)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2754)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2798)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2942)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2966)
2021-01-06 23:48:25,675 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
```
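For what it's worth, the first failure in the log is an `UnsatisfiedLinkError` from `NativeIO$POSIX.stat`, which on Windows usually means Hadoop's native binaries (`winutils.exe` and `hadoop.dll`) are missing from `%HADOOP_HOME%\bin` or do not match the installed Hadoop version. A minimal check, assuming a Git-Bash-style shell and that Hadoop is installed under `C:\hadoop` (the path is an assumption, adjust to your install):

```shell
# Sketch (assumed install path): the DataNode needs hadoop.dll on
# java.library.path and winutils.exe in %HADOOP_HOME%\bin.
HADOOP_HOME="${HADOOP_HOME:-C:/hadoop}"
for f in winutils.exe hadoop.dll; do
  if [ -e "$HADOOP_HOME/bin/$f" ]; then
    echo "found: $f"
  else
    echo "MISSING: $f"
  fi
done
```

If either file is reported missing, the native I/O calls fall back to the JNI stub and fail exactly as in the stack trace above.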

I have referred to many different articles, but to no avail. I have also tried another version of Hadoop, but the problem remains. As I am just starting out, I can't fully understand the problem, so I need help.

These are my configuration files:

For core-site.xml:

```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

For mapred-site.xml:

```
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

For yarn-site.xml:

```
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```

For hdfs-site.xml:

```
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>C:\hadoop\data\namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>datanode</value>
  </property>
</configuration>
```
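One detail that stands out when reading the log against these files: `dfs.datanode.data.dir` is set to the relative path `datanode`, and the log shows the DataNode resolving it against the directory the daemon was started from (`[DISK]file:/C:/hadoop/sbin/datanode`). For comparison, an absolute entry mirroring the namenode one would look like the fragment below (the exact path is an assumption, not taken from the original configs):

```
<property>
  <name>dfs.datanode.data.dir</name>
  <value>C:\hadoop\data\datanode</value>
</property>
```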
Jan 7, 2021 in Big Data Hadoop by Mueez

edited Mar 4
