Installing the Web UI Dashboard (kubernetes-dashboard) on the main Ubuntu 16.04.6 LTS (Xenial Xerus) server

+4 votes

I am trying to follow the tutorial in the official Kubernetes documentation: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

I managed to install the following on Ubuntu 16.04.6 LTS (Xenial Xerus):

  1. sudo apt install docker.io -y
  2. sudo apt install -y apt-transport-https
  3. sudo apt install -y kubeadm kubelet kubectl

But when I run "kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml" to create the dashboard on the main server, it gives me the following error message.

Error message: unable to recognize "https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused


Apr 9, 2019 in Kubernetes by nmentityvibes
• 420 points
7,149 views
Hey!! Are you working on GKE?
No. I'm not.
So basically, you're trying to start the dashboard on a normal Linux system?
Also, have you created the cluster?
No, I have not created a cluster. I have also referred to my earlier query here: https://www.edureka.co/community/33152/dashboard-kubernetes-access-application-cluster-dashboard. The physical machine to which the nodes need to be connected has the following version installed.

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Does it need to be downgraded, as you answered in the earlier query?

Can I go ahead with this version and configure the cluster with the following command?

sudo kubeadm init --pod-network-cidr=192.168.0.0/24 --apiserver-advertise-address=192.168.31.137 --kubernetes-version "1.14.1"

192.168.31.137 is the main server to which the nodes will be connected. The virtual node server has an IP address of 192.168.122.127.

You need to create a Kubernetes cluster and then create the dashboard. 

Also, which command gives this error?

The connection to the server localhost:8080 was refused - did you specify the right host or port?
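kubectl only falls back to http://localhost:8080 when it has no kubeconfig to read, so a quick check like the sketch below (not from the thread; the echoed hints are mine) tells you whether the admin config was ever copied into place:

```shell
# kubectl defaults to localhost:8080 only when no kubeconfig exists,
# so "connection refused" on that port usually means this file is missing.
if [ -f "$HOME/.kube/config" ]; then
    echo "kubeconfig found: $HOME/.kube/config"
else
    echo "no kubeconfig - run kubeadm init, then copy /etc/kubernetes/admin.conf"
fi
```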
This was the end result of creating a cluster.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.137:6443 --token dq1vmq.8zqbo2atkuchyqb9 \
    --discovery-token-ca-cert-hash sha256:2a636f9bd143d6f7141bff785a672901bdfb6b77cc586a5cc18a2bb61f016727

The kubectl version command gave me this error message when checked on the main server: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"

Your cluster hasn't been created, as the error message indicates. Follow this blog to create the cluster.

Check all the pre-requisites and configurations before proceeding.

Make sure the versions are the same on the master and the worker node. Also, run kubeadm reset before you create a fresh cluster.
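A rough teardown sketch before re-initializing (assumptions mine, based on kubeadm v1.14-era behaviour): kubeadm reset does not flush leftover iptables rules or remove the old admin kubeconfig, so those steps are done explicitly here.

```shell
# Teardown sketch - run on the master before a fresh kubeadm init.
reset_kubeadm_env() {
    sudo kubeadm reset -f                                            # tear down the control plane
    sudo iptables -F && sudo iptables -t nat -F && sudo iptables -X  # flush leftover rules
    rm -rf "$HOME/.kube"                                             # drop the stale admin kubeconfig
}
```

Call reset_kubeadm_env on the master, then repeat kubeadm init and the kubeconfig copy steps.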
I followed the blog and have installed and configured the prerequisites on the nodes as well; the configurations remain the same as mentioned.

Okay, one last option. Run the command below to delete your kubeadm environment and restart from Step 1 as described in this blog: https://www.edureka.co/blog/install-kubernetes-on-ubuntu

sudo kubeadm reset

Do let me know if it works!

I am seeing something like this: kube-proxy is Evicted, and the Calico and CoreDNS pods are stuck in Pending. I am waiting for your help.

~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-75f6766678-zz52q   0/1     Pending   0          5m2s    <none>           <none>          <none>           <none>
kube-system   coredns-fb8b8dccf-7vppb                    0/1     Pending   0          22m     <none>           <none>          <none>           <none>
kube-system   coredns-fb8b8dccf-9ll8z                    0/1     Pending   0          22m     <none>           <none>          <none>           <none>
kube-system   etcd-nmentityvibes                         1/1     Running   0          21m     192.168.31.137   nmentityvibes   <none>           <none>
kube-system   kube-apiserver-nmentityvibes               1/1     Running   0          21m     192.168.31.137   nmentityvibes   <none>           <none>
kube-system   kube-controller-manager-nmentityvibes      1/1     Running   0          21m     192.168.31.137   nmentityvibes   <none>           <none>
kube-system   kube-proxy-gb7h8                           0/1     Evicted   0          8m34s   <none>           nmentityvibes   <none>           <none>
kube-system   kube-scheduler-nmentityvibes               1/1     Running   0          21m     192.168.31.137   nmentityvibes   <none>           <none>
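An "Evicted" status like the one above usually means the kubelet evicted the pod under node pressure, most often DiskPressure. A read-only diagnosis sketch (the pod and node names are arguments, to be taken from the listing above):

```shell
# Diagnose why a pod was Evicted; all commands are read-only.
diagnose_eviction() {
    pod="$1"; node="$2"
    # The Events section names the eviction reason,
    # e.g. "The node had condition: [DiskPressure]."
    kubectl -n kube-system describe pod "$pod" | sed -n '/Events:/,$p'
    # Node conditions show DiskPressure / MemoryPressure directly.
    kubectl describe node "$node" | grep -A8 'Conditions:'
    # Evictions frequently trace back to a nearly full root filesystem.
    df -h /
}
```

For example: diagnose_eviction kube-proxy-gb7h8 nmentityvibes (run on the master).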
Try using a different CNI. Try using Flannel instead of Calico.
The result is the same "Evicted".

~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-75f6766678-72t5n   0/1     Pending   0          7m20s   <none>           <none>          <none>           <none>
kube-system   coredns-fb8b8dccf-rf2qj                    0/1     Pending   0          11m     <none>           <none>          <none>           <none>
kube-system   coredns-fb8b8dccf-sb959                    0/1     Pending   0          11m     <none>           <none>          <none>           <none>
kube-system   etcd-nmentityvibes                         1/1     Running   0          10m     192.168.31.137   nmentityvibes   <none>           <none>
kube-system   kube-apiserver-nmentityvibes               1/1     Running   0          10m     192.168.31.137   nmentityvibes   <none>           <none>
kube-system   kube-controller-manager-nmentityvibes      1/1     Running   0          10m     192.168.31.137   nmentityvibes   <none>           <none>
kube-system   kube-flannel-ds-amd64-4chff                0/1     Evicted   0          20s     <none>           nmentityvibes   <none>           <none>
kube-system   kube-proxy-xk728                           0/1     Evicted   0          4m8s    <none>           nmentityvibes   <none>           <none>
kube-system   kube-scheduler-nmentityvibes               1/1     Running   0          10m     192.168.31.137   nmentityvibes   <none>           <none>
Did you join the worker nodes to the cluster?
Yes, I was able to join the worker node to the cluster.

~$ sudo kubeadm join 192.168.31.137:6443 --token 0g2ecj.ybg8svsrjmwxop66 --discovery-token-ca-cert-hash sha256:8f1fe6fd6497158006d5254af628cd71d200c695728ae68a03d7e71ca9573d21

[preflight] Running pre-flight checks

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Activating the kubelet service

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Umm.. A lot of people have been having a similar issue.. I'll try the same on my system and get back to you as soon as possible.
What exactly are you trying to do? Bring up the Kubernetes cluster and dashboard, and then what?

This is the result I am getting now. I think it started working, but I am unable to get to the Kubernetes dashboard login.

kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE     IP                NODE            NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-75f6766678-72t5n   1/1     Running            6          3h11m   192.168.122.127   worker01        <none>           <none>
kube-system   calico-node-s5ghx                          1/2     CrashLoopBackOff   14         63m     192.168.122.127   worker01        <none>           <none>
kube-system   coredns-fb8b8dccf-rf2qj                    0/1     CrashLoopBackOff   13         3h15m   192.168.1.2       worker01        <none>           <none>
kube-system   coredns-fb8b8dccf-sb959                    0/1     CrashLoopBackOff   12         3h15m   192.168.1.3       worker01        <none>           <none>
kube-system   etcd-nmentityvibes                         1/1     Running            0          3h14m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-apiserver-nmentityvibes               1/1     Running            0          3h14m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-controller-manager-nmentityvibes      1/1     Running            2          3h15m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-flannel-ds-amd64-9ncvb                1/1     Running            0          68m     192.168.122.127   worker01        <none>           <none>
kube-system   kube-flannel-ds-amd64-bkq9j                0/1     Evicted            0          21m     <none>            nmentityvibes   <none>           <none>
kube-system   kube-proxy-d75m7                           0/1     Evicted            0          21m     <none>            nmentityvibes   <none>           <none>
kube-system   kube-proxy-stjp8                           1/1     Running            0          68m     192.168.122.127   worker01        <none>           <none>
kube-system   kube-scheduler-nmentityvibes               1/1     Running            2          3h15m   192.168.31.137    nmentityvibes   <none>           <none>

sudo kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

The kubernetes-dashboard container has been stuck in "ContainerCreating" for the past 26 minutes:

~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE     IP                NODE            NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-75f6766678-72t5n   1/1     Running             6          3h49m   192.168.122.127   worker01        <none>           <none>
kube-system   calico-node-s5ghx                          1/2     CrashLoopBackOff    23         101m    192.168.122.127   worker01        <none>           <none>
kube-system   coredns-fb8b8dccf-rf2qj                    0/1     CrashLoopBackOff    19         3h54m   192.168.1.2       worker01        <none>           <none>
kube-system   coredns-fb8b8dccf-sb959                    0/1     CrashLoopBackOff    19         3h54m   192.168.1.3       worker01        <none>           <none>
kube-system   etcd-nmentityvibes                         1/1     Running             0          3h53m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-apiserver-nmentityvibes               1/1     Running             0          3h53m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-controller-manager-nmentityvibes      1/1     Running             2          3h53m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-flannel-ds-amd64-9ncvb                1/1     Running             0          106m    192.168.122.127   worker01        <none>           <none>
kube-system   kube-flannel-ds-amd64-w5nl2                0/1     Evicted             0          16m     <none>            nmentityvibes   <none>           <none>
kube-system   kube-proxy-2vgft                           0/1     Evicted             0          15m     <none>            nmentityvibes   <none>           <none>
kube-system   kube-proxy-stjp8                           1/1     Running             0          106m    192.168.122.127   worker01        <none>           <none>
kube-system   kube-scheduler-nmentityvibes               1/1     Running             2          3h53m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kubernetes-dashboard-5f7b999d65-drkpp      0/1     ContainerCreating   0          26m     <none>            worker01        <none>           <none>
This is as checked on the main server.

~$ kubectl get nodes -o wide
NAME            STATUS     ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
nmentityvibes   NotReady   master   4h43m   v1.14.1   192.168.31.137    <none>        Ubuntu 16.04.6 LTS   4.15.0-47-generic   docker://18.9.2
worker01        Ready      <none>   155m    v1.14.1   192.168.122.127   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://18.9.2
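A master stuck in NotReady is most often a CNI problem: the kubelet reports a condition like "cni config uninitialized" until a network add-on is actually running on that node. A read-only sketch for finding the reason (the node name is an argument):

```shell
# Show why a node is NotReady; both commands are read-only.
why_notready() {
    # The Conditions block carries the Ready condition's reason/message.
    kubectl describe node "$1" | grep -A8 'Conditions:'
    # The kubelet logs carry the underlying error in full.
    sudo journalctl -u kubelet --no-pager -n 20
}
```

For example: why_notready nmentityvibes (run on the master).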
Your cluster still hasn't been created properly! According to your kubectl get nodes output, your master isn't Ready yet. Also, did you run kubeadm reset and then try with Flannel? If yes, you shouldn't have any Calico pods.
Are you running this on a VM?
Yes, worker01 is a VM.
Yes, I did kubeadm reset and then tried with Flannel. Let me know if I need to do the reset again. I don't know why I am still seeing Calico pods.
Could you please try again? Run kubeadm reset and then start by initializing the Kubernetes master.

Technically, if you haven't installed the Calico CNI, the Calico pods shouldn't exist.

Why don't you try the same on AWS? It's much simpler to set up the cluster there. If you're creating a single-node cluster, you can even use Minikube.
What kubeadm init command have you used?
kubeadm init --apiserver-advertise-address=192.168.31.137 --pod-network-cidr=192.168.0.0/16
You're using the wrong init command. If you're using Flannel, you have to change the pod-network-cidr. I have posted an answer; follow those steps with the exact same commands.
I don't support AWS, and I have tried Minikube; even that does not work.
You don't know the solution and are simply redirecting me here and there. Your answer is wrong. You don't even know the basics of networking. Don't misguide people if you don't know the answer; just be quiet so that someone else can answer.

You can refer the Kubernetes Official Docs. It clearly mentions the usage of flannel.

For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see here.

Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
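The two Flannel prerequisites quoted above can be sketched as one helper. The sysctl drop-in path /etc/sysctl.d/k8s.conf is a conventional choice of mine, not something the docs mandate:

```shell
# Flannel prerequisite: bridged IPv4 pod traffic must traverse iptables chains.
enable_bridge_iptables() {
    # Apply the setting immediately.
    sudo sysctl net.bridge.bridge-nf-call-iptables=1
    # Persist it across reboots (drop-in file name is an assumption).
    echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
}
```

Run enable_bridge_iptables on every node before applying the kube-flannel.yml manifest.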

Ok, I will follow what you say. I just got this:

~$ sudo kubeadm init --apiserver-advertise-address=192.168.31.137 --pod-network-cidr=10.244.0.0/1
[sudo] password for ev:
podSubnet: Invalid value: "10.244.0.0/1": subnet is too small

Let me know what needs to be done. I want the Web UI (Dashboard) kubernetes-dashboard on the main Ubuntu 16.04.6 LTS (Xenial Xerus) server. Until then, I cannot mark your answer as correct.

The command you've used:

sudo kubeadm init --apiserver-advertise-address=192.168.31.137 --pod-network-cidr=10.244.0.0/1

You missed a digit at the end. It should be --pod-network-cidr=10.244.0.0/16.

Use the following command

$ kubeadm init --apiserver-advertise-address=<ip-address-of-kmaster-vm> --pod-network-cidr=10.244.0.0/16

I am still getting the same result after running sudo kubeadm init --apiserver-advertise-address=192.168.31.137 --pod-network-cidr=10.244.0.0/16. I don't see any flannel pods running.

~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE   IP                NODE            NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-6c6nr                 0/1     Pending   0          45m   <none>            <none>          <none>           <none>
kube-system   coredns-fb8b8dccf-96d9l                 0/1     Pending   0          45m   <none>            <none>          <none>           <none>
kube-system   etcd-nmentityvibes                      1/1     Running   0          44m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-apiserver-nmentityvibes            1/1     Running   0          44m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-controller-manager-nmentityvibes   1/1     Running   2          44m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kube-proxy-dlnzq                        1/1     Running   0          39m   192.168.122.127   worker01        <none>           <none>
kube-system   kube-proxy-mswdz                        0/1     Evicted   0          19m   <none>            nmentityvibes   <none>           <none>
kube-system   kube-scheduler-nmentityvibes            1/1     Running   1          44m   192.168.31.137    nmentityvibes   <none>           <none>
kube-system   kubernetes-dashboard-5f7b999d65-5fw4w   0/1     Pending   0          28m   <none>            <none>          <none>           <none>

1 answer to this question.

–2 votes

Follow these steps:

$ kubeadm reset

$ kubeadm init --apiserver-advertise-address=<ip-address-of-kmaster-vm> --pod-network-cidr=10.244.0.0/16

Open another terminal and execute the following commands as a non-root user:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

and then use Flannel as your CNI:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
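A quick post-install verification sketch (my addition, not part of the original answer): once Flannel is applied, every kube-system pod should reach Running and every node should turn Ready.

```shell
# Read-only checks after applying the CNI manifest.
verify_cluster() {
    # Failing here with localhost:8080 means the kubeconfig copy step was skipped.
    kubectl cluster-info
    # All nodes should report Ready once the CNI pods come up.
    kubectl get nodes -o wide
    # flannel, coredns, and kube-proxy should all be Running.
    kubectl get pods -n kube-system -o wide
}
```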


answered Apr 12, 2019 by Kalgi
• 52,350 points

reshown Apr 12, 2019 by Kalgi
I tried to follow this tutorial for installing Kubernetes: https://www.edureka.co/blog/install-kubernetes-on-ubuntu

But in Step 6, how can I launch the Kubernetes dashboard on an Ubuntu server that has no browser? Is there a command for this?

Thanks in advance
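On a headless server, one common approach (not covered in this thread) is to run kubectl proxy on the master and reach it through an SSH tunnel from a machine that does have a browser. The user placeholder below is hypothetical; 192.168.31.137 is this thread's master address, and the URL path matches the v1.x dashboard deployed here (kube-system namespace).

```shell
# On the master (serves the API on localhost:8001):
#   kubectl proxy
# On your desktop, tunnel that port over SSH:
#   ssh -L 8001:localhost:8001 <user>@192.168.31.137
# Then open the proxied dashboard URL in the desktop browser:
dashboard_url() {
    echo "http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
}
```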
