I've installed a standalone Spark cluster on two nodes, one running as the master and the other as a worker. The Spark shell runs a word count fine on the worker node, but when I try to run it on the master I get this warning:
WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Somehow no executor is being launched. The worker shows up as registered in the Spark master's UI, yet the job still fails with this warning.
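For reference, this is roughly how I'm launching the shell against the master. The host name, port, and resource values below are placeholders for my setup; the warning usually means the requested memory/cores exceed what the registered worker can offer, so I've tried pinning them explicitly:

```shell
# Connect spark-shell to the standalone master (default port 7077).
# master-host is a placeholder; the memory/core flags cap the request
# so it fits within what the single worker advertises.
spark-shell \
  --master spark://master-host:7077 \
  --executor-memory 512m \
  --total-executor-cores 1
```

Even with the request capped well below the worker's advertised resources, the warning persists.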