I have a Kubernetes cluster with the following Service created with `type: LoadBalancer`:
(Source reference: https://github.com/kenzanlabs/kubernetes-ci-cd/blob/master/applications/hello-kenzan/k8s/manual-deployment.yaml)
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-kenzan
    tier: hello-kenzan
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello-kenzan
        tier: hello-kenzan
    spec:
      containers:
      - image: gopikrish81/hello-kenzan:latest
        name: hello-kenzan
        ports:
        - containerPort: 80
          name: hello-kenzan
```
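(As an aside, `extensions/v1beta1` for Deployments is deprecated; I believe the `apps/v1` equivalent would look roughly like the sketch below, with the `spec.selector` that `apps/v1` makes mandatory. This is only for reference, not what I actually deployed.)

```yaml
# Sketch only: the same Deployment on the apps/v1 API,
# which requires an explicit selector matching the pod template labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  selector:
    matchLabels:
      app: hello-kenzan
      tier: hello-kenzan
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello-kenzan
        tier: hello-kenzan
    spec:
      containers:
      - image: gopikrish81/hello-kenzan:latest
        name: hello-kenzan
        ports:
        - containerPort: 80
          name: hello-kenzan
```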
After I created the service with

```
kubectl apply -f k8s/manual-deployment.yaml
kubectl get svc
```

the EXTERNAL-IP is showing as `<pending>`.
But since I created a `LoadBalancer` type Service, why isn't it provisioning an IP?
FYI, I can access the app using `curl <master node>:<nodeport>`, and I can also access it through proxy forwarding.
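To illustrate what I am seeing (the cluster IP and node port below are placeholders; only the `<pending>` value is the actual symptom):

```
$ kubectl get svc hello-kenzan
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
hello-kenzan   LoadBalancer   10.96.0.123   <pending>     80:31234/TCP   5m

# NodePort access works:
$ curl http://<master node>:31234

# Proxy forwarding also works:
$ kubectl port-forward svc/hello-kenzan 8080:80
$ curl http://localhost:8080
```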
**UPDATE as of 29/1**
I followed the steps from the answer in this post: https://stackoverflow.com/questions/50668070/kube-controller-manager-dont-start-when-using-cloud-provider-aws-with-kubeadm
1) I modified the file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` by adding the line below under `[Service]`:

```
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"
```
And I created this `cloud-config.conf` as below:

```
[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
```
I am not sure exactly what this Tag and ID refer to, but when I run the command below I can see output mentioning `clusterName` as "kubernetes":

```
kubeadm config view
```
Then I executed:

```
systemctl daemon-reload
systemctl restart kubelet
```
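As a sanity check (just a sketch of the kind of command, nothing cluster-specific), the flag should show up on the running kubelet process:

```
# Confirm kubelet was restarted with the extra flags
ps -ef | grep kubelet | grep -- --cloud-provider=aws
```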
2) Then, as mentioned in that post, I added `--cloud-provider=aws` to both `kube-controller-manager.yaml` and `kube-apiserver.yaml`.
3) I also added the annotation below to the manual-deployment.yaml of my application (the same file linked above):

```
annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
```
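That is, the Service portion of the file now looks roughly like this (a sketch of my edit):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
  annotations:
    # the annotation goes under metadata.annotations of the Service
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-kenzan
    tier: hello-kenzan
  type: LoadBalancer
```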
Now, when I deploy using `kubectl apply -f k8s/manual-deployment.yaml`, the pod itself is not getting created, as seen with `kubectl get po --all-namespaces`.
So I reverted step 2 above and deployed again, and this time the pod was created successfully. But `kubectl get svc` still shows `<pending>` for EXTERNAL-IP.
I even renamed my master and worker nodes to match the EC2 instance private DNS names (ip-10-118-6-35.ec2.internal and ip-10-118-11-225.ec2.internal), as mentioned in the post below (under the section "Proper Node Names"), and reconfigured the cluster, but still no luck.
https://medium.com/jane-ai-engineering-blog/kubernetes-on-aws-6281e3a830fe
Also, my EC2 instances have an IAM role attached, and that role has 8 policies applied. One of those policies contains the statement below (there are many other Actions as well, which I am not posting here):
```json
{
    "Action": "elasticloadbalancing:*",
    "Resource": "*",
    "Effect": "Allow"
}
```
I am clueless as to what other settings I might be missing. Please suggest!
**UPDATE as of 30/1**
I did the additional steps below, as described in this blog: https://blog.scottlowe.org/2018/09/28/setting-up-the-kubernetes-aws-cloud-provider/
1) Added the AWS tag `kubernetes.io/cluster/kubernetes` to all of my EC2 instances (master and worker nodes) and also to my security group; a sketch of the tagging command follows.
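The resource IDs and the tag value here are placeholders for my actual resources (the blog uses the owned/shared convention for the value):

```
aws ec2 create-tags \
  --resources i-0123456789abcdef0 i-0fedcba9876543210 sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/kubernetes,Value=owned
```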
2) I haven't added `apiServerExtraArgs`, `controllerManagerExtraArgs` and `nodeRegistration` manually in the configuration file. What I did instead was reset the cluster entirely using `sudo kubeadm reset -f` and then add this to the kubeadm conf file on both master and worker nodes:

```
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"
```
cloud-config.conf:

```
[Global]
KubernetesClusterTag=kubernetes.io/cluster/kubernetes
KubernetesClusterID=kubernetes
```
Then I executed on both master and worker nodes:

```
systemctl daemon-reload
systemctl restart kubelet
```
3) Then I created the cluster using the command below on the master node:

```
sudo kubeadm init --pod-network-cidr=192.168.1.0/16 --apiserver-advertise-address=10.118.6.35
```
4) Then I was able to join the worker node to the cluster successfully and deployed the flannel CNI. After this, `kubectl get nodes` showed Ready status.
One important point to note is that there are `kube-apiserver.yaml` and `kube-controller-manager.yaml` files in the `/etc/kubernetes/manifests` path.
When I added `--cloud-provider=aws` to both of these yaml files, the static pods for kube-apiserver and kube-controller-manager were recreated successfully, but my application deployments stopped working: no application pod was getting created at all. So I removed the flag from `kube-apiserver.yaml` alone (keeping it in `kube-controller-manager.yaml`), and after that deployments and pods were created successfully again.
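To double-check which static pod manifests currently carry the flag, I can grep the manifests directory (a sketch):

```
grep cloud-provider /etc/kubernetes/manifests/*.yaml
```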
Also, I checked the logs with

```
kubectl logs kube-controller-manager-ip-10-118-6-35.ec2.internal -n kube-system
```

but I don't see any exceptions or abnormalities. I can see this in the last part:

```
I0130 19:14:17.444485       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-kenzan", UID:"c........", APIVersion:"apps/v1", ResourceVersion:"16212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-kenzan-56686879-ghrhj
```
I even tried adding the annotation below to manual-deployment.yaml, but it still shows the same `<pending>`:

```
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
```
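In case it is useful for whoever answers, this is how I can pull the Service events (where I would expect any cloud provider errors about creating the ELB to show up):

```
kubectl describe svc hello-kenzan
```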