In this video blog, we will be discussing how to start your Hadoop daemons.
We can conclude that the Hadoop cluster is running by looking at the Hadoop daemons themselves.
A daemon is nothing but a process, so Hadoop daemons are simply Hadoop processes. Since Hadoop is built in Java, all the Hadoop daemons are Java processes.
We can check the list of Java processes running on our system by using the command jps.
If you are able to see the Hadoop daemons running after executing the jps command, we can safely assume that the Hadoop cluster is running.
Some of the basic Hadoop daemons are as follows: the NameNode, the DataNode, the ResourceManager, and the NodeManager.
We can find the scripts that manage these daemons in the sbin directory of Hadoop. After moving into the sbin directory, we can start all the Hadoop daemons with the command start-all.sh.
After executing the command, all the daemons start one by one. After all the daemons have started, we can check their presence by typing jps, which gives the list of all Java processes that are running.
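As a rough sketch of this verification step, we can pipe the jps listing through grep to confirm that each expected daemon appears. The helper function and the sample listing below (including the PIDs) are illustrative assumptions, not Hadoop commands; on a live cluster you would feed it the real output of jps:

```shell
# check_daemons: given a jps-style listing as its argument, report which
# of the expected Hadoop daemons are present in it.
check_daemons() {
    jps_output="$1"
    for daemon in NameNode DataNode ResourceManager NodeManager; do
        if printf '%s\n' "$jps_output" | grep -q "$daemon"; then
            echo "$daemon is running"
        else
            echo "$daemon is NOT running"
        fi
    done
}

# Sample jps output with made-up PIDs; on a real cluster use "$(jps)".
SAMPLE='2101 NameNode
2254 DataNode
2467 ResourceManager
2630 NodeManager
2799 Jps'

check_daemons "$SAMPLE"
```

On a healthy single-node cluster this reports all four daemons as running; a missing daemon would show up as "NOT running".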
We can also stop all the daemons using the command stop-all.sh. Each daemon can also be started or stopped individually.
Now, let’s look at the start and stop commands for each of the Hadoop daemons.
NameNode
Start: hadoop-daemon.sh start namenode
Stop: hadoop-daemon.sh stop namenode

DataNode
Start: hadoop-daemon.sh start datanode
Stop: hadoop-daemon.sh stop datanode

ResourceManager
Start: yarn-daemon.sh start resourcemanager
Stop: yarn-daemon.sh stop resourcemanager

NodeManager
Start: yarn-daemon.sh start nodemanager
Stop: yarn-daemon.sh stop nodemanager
We can see that the NameNode and DataNode are segregated as HDFS daemons, managed by hadoop-daemon.sh, while the ResourceManager and NodeManager are segregated as YARN daemons, managed by yarn-daemon.sh.
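This segregation can be made concrete with a small wrapper function that picks the right control script for each daemon. The wrapper itself is a hypothetical illustration, not part of Hadoop; for safety it only prints the command it would run rather than executing it:

```shell
# daemon_cmd: build the start/stop command line for a given daemon,
# choosing hadoop-daemon.sh for HDFS daemons and yarn-daemon.sh for
# YARN daemons. Prints the command instead of executing it.
daemon_cmd() {
    action="$1"   # start or stop
    daemon="$2"   # namenode, datanode, resourcemanager, nodemanager
    case "$daemon" in
        namenode|datanode)           script="hadoop-daemon.sh" ;;
        resourcemanager|nodemanager) script="yarn-daemon.sh" ;;
        *) echo "unknown daemon: $daemon" >&2; return 1 ;;
    esac
    echo "$script $action $daemon"
}

daemon_cmd start namenode         # -> hadoop-daemon.sh start namenode
daemon_cmd stop resourcemanager   # -> yarn-daemon.sh stop resourcemanager
```

Replacing the final echo with an actual invocation of "$script" would turn the sketch into a working control helper, assuming Hadoop's sbin directory is on the PATH.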