Running a heterogeneous Kubernetes cluster with nodes in different networks using a VPN

0 votes

I’m trying to run a heterogeneous Kubernetes cluster where

  • The master runs on an AWS node
  • One slave runs on my laptop
  • Another slave runs on a Raspberry Pi

When I run this cluster with all three nodes as different VMs on the same laptop, it works perfectly fine, but when I run it in the way described above, I get a lot of errors.

I’ve set up an OpenVPN server on AWS and connected my laptop and Raspberry Pi to it as clients. Since the master runs on AWS and the AWS node is connected to both slaves, I expected this to work.
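For context, a routed OpenVPN server config for this kind of setup would look roughly like the snippet below (the 10.8.0.0/24 subnet and key paths are illustrative, not necessarily what I used); client-to-client is the directive that lets the two slaves talk to each other through the tunnel instead of only to the server:

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0   # VPN subnet handed out to clients (illustrative)
client-to-client                # lets VPN clients reach each other, not just the server
keepalive 10 120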

I’m able to access the dashboard from the master but not from the slave nodes; there I get a timeout.

kubectl get pods --all-namespaces -o wide:

NAMESPACE     NAME                                     READY     STATUS              RESTARTS   AGE       IP              NODE
kube-system   etcd-ip-172-31-28-6                      1/1       Running             0          5m        172.31.28.6     ip-172-31-28-6
kube-system   kube-apiserver-ip-172-31-28-6            1/1       Running             0          5m        172.31.28.6     ip-172-31-28-6
kube-system   kube-controller-manager-ip-172-31-28-6   1/1       Running             0          5m        172.31.28.6     ip-172-31-28-6
kube-system   kube-dns-6f4fd4bdf-w6ctf                 0/3       ContainerCreating   0          15h       <none>          osboxes
kube-system   kube-proxy-2pl2f                         1/1       Running             0          15h       172.31.28.6     ip-172-31-28-6
kube-system   kube-proxy-7b89c                         0/1       CrashLoopBackOff    15         15h       192.168.2.106   edge-1
kube-system   kube-proxy-qg69g                         1/1       Running             1          15h       10.0.2.15       osboxes
kube-system   kube-scheduler-ip-172-31-28-6            1/1       Running             0          5m        172.31.28.6     ip-172-31-28-6
kube-system   weave-net-pqxfp                          1/2       CrashLoopBackOff    189        15h       172.31.28.6     ip-172-31-28-6
kube-system   weave-net-thhzr                          1/2       CrashLoopBackOff    12         36m       192.168.2.106   edge-1
kube-system   weave-net-v69hj                          2/2       Running             7          15h       10.0.2.15       osboxes

kubectl -n kube-system logs --v=7 kube-dns-6f4fd4bdf-w6ctf -c kubedns:

...
I0321 09:04:25.620580   23936 round_trippers.go:414] GET https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/pods/kube-dns-6f4fd4bdf-w6ctf/log?container=kubedns
I0321 09:04:25.620605   23936 round_trippers.go:421] Request Headers:
I0321 09:04:25.620611   23936 round_trippers.go:424]     Accept: application/json, */*
I0321 09:04:25.620616   23936 round_trippers.go:424]     User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15
I0321 09:04:25.713821   23936 round_trippers.go:439] Response Status: 400 Bad Request in 93 milliseconds
I0321 09:04:25.714106   23936 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "container \"kubedns\" in pod \"kube-dns-6f4fd4bdf-w6ctf\" is waiting to start: ContainerCreating",
  "reason": "BadRequest",
  "code": 400
}]
F0321 09:04:25.714134   23936 helpers.go:119] Error from server (BadRequest): container "kubedns" in pod "kube-dns-6f4fd4bdf-w6ctf" is waiting to start: ContainerCreating

kubectl -n kube-system logs --v=7 kube-proxy-7b89c:

...
I0321 09:06:51.803852   24289 round_trippers.go:414] GET https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/pods/kube-proxy-7b89c/log
I0321 09:06:51.803879   24289 round_trippers.go:421] Request Headers:
I0321 09:06:51.803891   24289 round_trippers.go:424]     User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15
I0321 09:06:51.803900   24289 round_trippers.go:424]     Accept: application/json, */*
I0321 09:08:59.110869   24289 round_trippers.go:439] Response Status: 500 Internal Server Error in 127306 milliseconds
I0321 09:08:59.111129   24289 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out",
  "code": 500
}]
F0321 09:08:59.111156   24289 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out

kubectl -n kube-system logs --v=7 weave-net-pqxfp -c weave:

...
I0321 09:12:08.047206   24847 round_trippers.go:414] GET https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/pods/weave-net-pqxfp/log?container=weave
I0321 09:12:08.047233   24847 round_trippers.go:421] Request Headers:
I0321 09:12:08.047335   24847 round_trippers.go:424]     Accept: application/json, */*
I0321 09:12:08.047347   24847 round_trippers.go:424]     User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15
I0321 09:12:08.062494   24847 round_trippers.go:439] Response Status: 200 OK in 15 milliseconds
DEBU: 2018/03/21 09:11:26.847013 [kube-peers] Checking peer "fa:10:a4:97:7e:7b" against list &{[{6e:fd:f4:ef:1e:f5 osboxes}]}
Peer not in list; removing persisted data
INFO: 2018/03/21 09:11:26.880946 Command line options: map[expect-npc:true ipalloc-init:consensus=3 db-prefix:/weavedb/weave-net http-addr:127.0.0.1:6784 ipalloc-range:10.32.0.0/12 nickname:ip-172-31-28-6 host-root:/host name:fa:10:a4:97:7e:7b no-dns:true status-addr:0.0.0.0:6782 datapath:datapath docker-api: port:6783 conn-limit:30]
INFO: 2018/03/21 09:11:26.880995 weave  2.2.1
FATA: 2018/03/21 09:11:26.881117 Inconsistent bridge state detected. Please do 'weave reset' and try again

kubectl -n kube-system logs --v=7 weave-net-thhzr -c weave:

...
I0321 09:15:13.787905   25113 round_trippers.go:414] GET https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/pods/weave-net-thhzr/log?container=weave
I0321 09:15:13.787932   25113 round_trippers.go:421] Request Headers:
I0321 09:15:13.787938   25113 round_trippers.go:424]     Accept: application/json, */*
I0321 09:15:13.787946   25113 round_trippers.go:424]     User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15
I0321 09:17:21.126863   25113 round_trippers.go:439] Response Status: 500 Internal Server Error in 127338 milliseconds
I0321 09:17:21.127140   25113 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "Get https://192.168.2.106:10250/containerLogs/kube-system/weave-net-thhzr/weave: dial tcp 192.168.2.106:10250: getsockopt: connection timed out",
  "code": 500
}]
F0321 09:17:21.127167   25113 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/weave-net-thhzr/weave: dial tcp 192.168.2.106:10250: getsockopt: connection timed out

sudo systemctl status kubelet.service (on AWS):

Mar 21 09:34:59 ip-172-31-28-6 kubelet[19676]: W0321 09:34:59.202058   19676 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 21 09:34:59 ip-172-31-28-6 kubelet[19676]: E0321 09:34:59.202452   19676 kubelet.go:2109] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.535541   19676 kuberuntime_manager.go:514] Container {Name:weave Image:weaveworks/weave-kube:2.2.1 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI}]} VolumeMounts:[{Name:weavedb ReadOnly:false MountPath:/weavedb SubPath: MountPropagation:<nil>} {Name:cni-bin ReadOnly:false MountPath:/host/opt SubPath: MountPropagation:<nil>} {Name:cni-bin2 ReadOnly:false MountPath:/host/home SubPath: MountPropagation:<nil>} {Name:cni-conf ReadOnly:false MountPath:/host/etc SubPath: MountPropagation:<nil>} {Name:dbus ReadOnly:false MountPath:/host/var/lib/dbus SubPath: MountPropagation:<nil>} {Name:lib-modules ReadOnly:false MountPath:/lib/modules SubPath: MountPropagation:<nil>} {Name:weave-net-token-vn8rh ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status,Port:6784,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.536504   19676 kuberuntime_manager.go:758] checking backoff for container "weave" in pod "weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)"
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.536636   19676 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=weave pod=weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: E0321 09:35:01.536664   19676 pod_workers.go:186] Error syncing pod c6450070-2c61-11e8-a50d-06a3d08e1972 ("weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "StartContainer" for "weave" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=weave pod=weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)"

sudo systemctl status kubelet.service (on the laptop):

Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.662670     715 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.663412     715 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.663869     715 kuberuntime_manager.go:647] createPodSandbox for pod "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.664295     715 pod_workers.go:186] Error syncing pod 11886465-2c61-11e8-a50d-06a3d08e1972 ("kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "CreatePodSandbox" for "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)\" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 21 05:47:20 osboxes kubelet[715]: W0321 05:47:20.536161     715 pod_container_deletor.go:77] Container "bbf490835face43b70c24dbcb67c3f75872e7831b5e2605dc8bb71210910e273" not found in pod's containers

sudo systemctl status kubelet.service (on the Raspberry Pi):

Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.188199     339 kuberuntime_manager.go:514] Container {Name:kube-proxy Image:gcr.io/google_containers/kube-proxy-amd64:v1.9.5 Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:kube-proxy ReadOnly:false MountPath:/var/lib/kube-proxy SubPath: MountPropagation:<nil>} {Name:xtables-lock ReadOnly:false MountPath:/run/xtables.lock SubPath: MountPropagation:<nil>} {Name:lib-modules ReadOnly:true MountPath:/lib/modules SubPath: MountPropagation:<nil>} {Name:kube-proxy-token-px7dt ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.189023     339 kuberuntime_manager.go:758] checking backoff for container "kube-proxy" in pod "kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)"
Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.190174     339 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)
Mar 21 09:29:01 edge-1 kubelet[339]: E0321 09:29:01.190518     339 pod_workers.go:186] Error syncing pod 5bebafa1-2c61-11e8-a50d-06a3d08e1972 ("kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed 

I know it’s a networking issue, but how do I solve it?

Sep 6, 2018 in Kubernetes by lina

1 answer to this question.

0 votes

There is indeed a networking issue between the master and the nodes. The weave log on the master shows the first problem:

FATA: 2018/03/21 09:11:26.881117 Inconsistent bridge state detected. Please do 'weave reset' and try again

Since it's slightly complicated to run the weave command directly on a Kubernetes node, the simplest fix is to reboot the node; the bridge will then be recreated from scratch.
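If you'd rather not reboot, the reset itself is roughly the following (a sketch, assuming Docker as the container runtime and outbound internet access on the node):

sudo curl -L git.io/weave -o /usr/local/bin/weave   # fetch the standalone weave CLI
sudo chmod a+x /usr/local/bin/weave
sudo systemctl stop kubelet     # keep kubelet from restarting the weave pod mid-reset
sudo weave reset                # removes the weave bridge and persisted peer data
sudo systemctl start kubelet    # the weave-net DaemonSet pod recreates the bridge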

Also, note that the master is timing out trying to reach the Raspberry Pi on 192.168.2.106:10250 — its home-LAN address, which is not routable from AWS. Make sure each node is registered with an IP address that the other nodes can actually reach (in your case, the VPN addresses), so that the master, slave1, and slave2 can all access each other; one way of doing that is sketched below.
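Assuming a kubeadm-based install, a minimal sketch of that change (the 10.8.0.6 address stands in for each node's real VPN IP):

# On every node, add the node's own VPN address to the kubelet drop-in,
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
#   Environment="KUBELET_EXTRA_ARGS=--node-ip=10.8.0.6"
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes -o wide       # the INTERNAL-IP column should now show the VPN addresses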

Try this and let me know how it goes.

answered Sep 6, 2018 by Kalgi
