I have a CDH Hadoop cluster running in pseudo-distributed mode. It was working fine. Then, while studying the configuration files, I came across the fs.default.name property in core-site.xml. Earlier its host was localhost, but I replaced it with hadoop.
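For reference, this is roughly what the edited property looks like — a sketch, assuming the default CDH NameNode port 8020 (my actual port may differ) and that hadoop is meant to resolve to this machine:

```xml
<!-- core-site.xml: fs.default.name changed from localhost to hadoop -->
<property>
  <name>fs.default.name</name>
  <!-- was: hdfs://localhost:8020 -->
  <value>hdfs://hadoop:8020</value>
</property>
```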
After restarting the Hadoop daemons, HDFS reports that it is in safe mode.
I got the following output:
$ hadoop dfsadmin -report
...
Safe mode is ON
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
I am not able to execute the -cat or -put commands; both fail saying the NameNode is in safe mode. Can anyone help me understand how I can keep the hostname as hadoop, so that external systems can connect to it, without my NameNode entering safe mode?