Canary and Blue-Green Deployments Enabled by KubeStellar — Part 2 — Yeah — it works! Using external-dns from Bitnami and AWS Route53

Andy Anderson
7 min read · Mar 31, 2024


source: https://community.aws/content/2dxHaSe8R2Kpzqt1wvFMqM5PyLl/navigating-amazon-eks-automating-dns-records-for-microservices-using-externaldns?lang=en

Yes, you read about the failed approach in the previous post.

But I just could not let this rest, and I found another way, probably the way most operators actually use. Multi-cluster blue-green and canary deployments can be accomplished without an Application Load Balancer (ALB). In fact, in a multi-cluster setup I am not sure a traditional ALB is even useful. Could AWS make changes to support blue-green with it? Sure. But Route53 does the job, and does it more simply.

So how do you get a blue-green or canary deployment to work cross-cluster, cross-region, and cross-VPC using KubeStellar on EKS clusters? The external-dns project from Bitnami combined with AWS's Route53 is a good solution here. Here we go. My source of inspiration and information is https://ddulic.dev/external-dns-migrate-services-between-k8s-clusters. Thank you, Damir.

Steps to Implement Blue-Green Deployment with External-DNS and Route53

The setup

  1. I had limited success with a simple ‘eksctl create cluster’ for creating EKS Kubernetes clusters that could communicate with public internet endpoints. I am sure there is a way to do it, but I stuck with what I know works: I deployed three EKS clusters with the “VPC and more” option (https://clubanderson.medium.com/how-to-install-kubestellar-on-a-collection-of-aws-eks-managed-clusters-aa1615e671a0)
  2. Remember to spin up nodes in node groups for each of the three clusters and add the EBS CSI add-on (after connecting OIDC and creating policies for all three clusters)
  3. Get all the kubeconfigs
eksctl utils write-kubeconfig --cluster=bg-wec1 --kubeconfig=eks.kubeconfig --region us-east-1
eksctl utils write-kubeconfig --cluster=bg-wec2 --kubeconfig=eks.kubeconfig --region us-east-1
eksctl utils write-kubeconfig --cluster=bg-core --kubeconfig=eks.kubeconfig --region us-east-1

NOTE: write the hub to the kubeconfig last so that your context is defaulted to the hub to start this exercise
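
A quick sanity check that all three contexts landed in the kubeconfig and that the hub is the current one (eksctl may write longer context names such as admin@bg-core.us-east-1.eksctl.io rather than the short names used later in this post):

kubectl --kubeconfig eks.kubeconfig config get-contexts
kubectl --kubeconfig eks.kubeconfig config current-context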

4. Install ingress-nginx (which fronts the cluster with a Network Load Balancer) FIRST to support KubeStellar (https://clubanderson.medium.com/how-to-install-kubestellar-on-a-collection-of-aws-eks-managed-clusters-aa1615e671a0)

kubectl --context bg-core apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/aws/deploy.yaml
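
Before moving on, it is worth making sure the controller actually came up. A simple check, using the wait command the ingress-nginx docs suggest (adjust the timeout to taste):

kubectl --context bg-core get pods -n ingress-nginx
kubectl --context bg-core wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s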

4.a. add ssl passthrough:

kubectl --context bg-core edit deployment.apps/ingress-nginx-controller -n ingress-nginx

4.b. add ‘--enable-ssl-passthrough’
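
For reference, the flag goes into the args of the controller container. The surrounding args below are abbreviated and may differ between ingress-nginx releases; only the last line is the addition:

      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-nginx-leader
            # ...existing args from the upstream deploy.yaml...
            - --enable-ssl-passthrough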

add more ports to the ingress-nginx-controller service for KubeStellar:

kubectl --context bg-core edit service ingress-nginx-controller -n ingress-nginx
  - name: proxied-tcp-9443
    nodePort: 31345
    port: 9443
    protocol: TCP
    targetPort: 443
  - name: proxied-tcp-9080
    nodePort: 31226
    port: 9080
    protocol: TCP
    targetPort: 80

4.c. check for a new network load balancer in your AWS console at https://console.aws.amazon.com/ec2/home?#LoadBalancers
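
If you prefer the CLI to the console, something like this should show the freshly provisioned NLB (assuming us-east-1, as in the rest of this walkthrough):

aws elbv2 describe-load-balancers --region us-east-1 \
  --query 'LoadBalancers[?Type==`network`].[LoadBalancerName,DNSName,State.Code]' \
  --output table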

5. Now that ingress is established, you can install KubeStellar (WDS0 hosted control plane, WDS1 pointing at bg-wec1, and WDS2 pointing at bg-wec2). Use the instructions at https://github.com/kubestellar/kubeflex/blob/main/docs/users.md#use-a-different-dns-service. Remember to use the ‘--domain’ flag for KubeFlex. The domain you give it is one you define in AWS Route53. The domain plays an important role in ingress for the KubeStellar IMBS/ITS and WDSes. Without a domain, neither the failed approach from the previous blog nor the working approach described here will work!

6. Test that your KubeStellar endpoints are working in conjunction with your Route53 domain. A ‘403’ is the successful response in this case, because we are not presenting the proper certificates to access the control planes of IMBS1, WDS1, and WDS2. The fact that we get a well-formed ‘Forbidden’ response at all indicates the control planes are alive and responding. That's all we need to confirm here.

curl -k https://imbs1.kubestellar.org:9443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

curl -k https://wds1.kubestellar.org:9443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

curl -k https://wds2.kubestellar.org:9443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

Where are we? Check-in #1

Ok, so let's take inventory of what we have done so far. We have three EKS clusters (bg-core, bg-wec1, and bg-wec2). We have ingress-nginx set up and working on bg-core. We have two KubeStellar WDSes set up to stage workloads for delivery to bg-wec1 and bg-wec2 respectively. The IMBS1, WDS1, and WDS2 control planes are responding over ingress using their domain.

I am going to install a different version of a single app on WDS1 (which will sync to the remote bg-wec1 cluster using KubeStellar) and on WDS2 (which will sync to the remote bg-wec2 cluster using KubeStellar).

Let's get started. We are going to deploy two different versions of a single app on two different KubeStellar WDSes. First we need to configure the AWS DNS policy.

  1. Create the AWS IAM policy that lets external-dns update Route53
aws iam create-policy --policy-name "AllowExternalDNSUpdates" --policy-document file://policy.json
POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AllowExternalDNSUpdates`].Arn' --output text)
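
The policy.json itself is not shown in this post. The minimal version from the external-dns AWS tutorial, which is what I would expect here, looks like this:

cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}
EOF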

2. Create an IAM service account on each of the three EKS clusters with eksctl (this can probably be done from within the cluster using kubectl, but I did not experiment)

eksctl create iamserviceaccount \
--cluster bg-core \
--name "external-dns" \
--namespace "default" \
--attach-policy-arn $POLICY_ARN \
--approve

eksctl create iamserviceaccount \
--cluster bg-wec1 \
--name "external-dns" \
--namespace "default" \
--attach-policy-arn $POLICY_ARN \
--approve

eksctl create iamserviceaccount \
--cluster bg-wec2 \
--name "external-dns" \
--namespace "default" \
--attach-policy-arn $POLICY_ARN \
--approve

The eksctl command creates the IAM pieces in AWS and a service account in your cluster. This is the glue between your cluster and AWS that the external-dns controller needs in order to manage the Route53 entries for your domain.
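
You can see that glue directly: eksctl annotates the service account with the ARN of the IAM role it created, which is what lets the external-dns pod assume that role:

kubectl --context bg-wec1 get serviceaccount external-dns -n default \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}{"\n"}'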

3. Update Domain Filter in External-DNS Deployment Definition

Update the domain filter in the external-dns deployment definitions in step2-external-dns/v1/bg-wec1-external-dns-with-rbac.yml and step2-external-dns/v2/bg-wec2-external-dns-with-rbac.yml to match your domain name in Route53.

NOTE: this is not the same domain name from the previous blog — Create a new Route53 domain. I used “mcc.research.com”.

    - --domain-filter=your.domain.here
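
For context, that flag sits in the args of the external-dns container. The exact flags in the step2-external-dns YAMLs may differ from the sketch below, but the txt-owner-id in particular should be unique per cluster so the two controllers do not fight over record ownership:

        args:
          - --source=service
          - --domain-filter=your.domain.here
          - --provider=aws
          - --policy=upsert-only
          - --aws-zone-type=public
          - --registry=txt
          - --txt-owner-id=bg-wec1   # bg-wec2 in the v2 copy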

4. Deploy external-dns with KubeStellar

# create the external-dns controller (bindingpolicy, clusterrole, clusterrolebinding, and deployment) on both KubeStellar WDS
kubectl --context wds1 apply -f step2-external-dns/v1/
kubectl --context wds2 apply -f step2-external-dns/v2/
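
I have not reproduced the repo's exact BindingPolicy here, but a KubeStellar BindingPolicy for this step looks roughly like the sketch below. The cluster and object labels are illustrative and need to match however your WECs and the external-dns objects are actually labeled:

apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: external-dns-bindingpolicy
spec:
  clusterSelectors:
    - matchLabels:
        name: bg-wec1   # selects the WEC as registered in the ITS/IMBS
  downsync:
    - objectSelectors:
        - matchLabels:
            app.kubernetes.io/name: external-dns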

5. Check that external-dns was applied to the remote clusters

# we are just checking that the external-dns controller got deployed via KubeStellar
kubectl --context bg-wec1 --namespace=kube-system get pods -l "app.kubernetes.io/name=external-dns,app.kubernetes.io/instance=externaldns-release"
kubectl --context bg-wec1 get all -n default | grep external-dns

kubectl --context bg-wec2 --namespace=kube-system get pods -l "app.kubernetes.io/name=external-dns,app.kubernetes.io/instance=externaldns-release"
kubectl --context bg-wec2 get all -n default | grep external-dns

6. Update the external-dns hostname annotation in the service definition in step3-hello-kubernetes/v1/bg-wec1-hello-kubernetes.yml and step3-hello-kubernetes/v2/bg-wec2-hello-kubernetes.yml

  external-dns.alpha.kubernetes.io/hostname: hello-kube.your.domain.here
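
In context, the metadata of the two Service definitions ends up looking roughly like the sketch below (names and ports are illustrative). The set-identifier and aws-weight annotations are what make Route53 treat the two clusters' records as a weighted pair; the weights get adjusted in step 12:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  labels:
    app.kubernetes.io/part-of: v1   # v2 in the bg-wec2 copy
  annotations:
    external-dns.alpha.kubernetes.io/hostname: hello-kube.your.domain.here
    external-dns.alpha.kubernetes.io/set-identifier: bg-wec1   # must differ per cluster
    external-dns.alpha.kubernetes.io/aws-weight: "50"          # starting 50/50 split
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: hello-kubernetes
  ports:
    - port: 80
      targetPort: 8080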

7. Deploy the “hello-kubernetes” application with KubeStellar. The project comes in Helm chart format, but I pulled out the service, deployment, and serviceaccount for the purposes of this demo. It's easier to demonstrate this way. I might put it back together as a kustomize package in the future to help with variable replacement. For “hello-kubernetes” to deploy across KubeStellar to bg-wec1 and bg-wec2, I applied a BindingPolicy in WDS1 that selects objects labeled ‘app.kubernetes.io/part-of=v1’ and one in WDS2 that selects objects labeled ‘app.kubernetes.io/part-of=v2’

  kubectl --context wds1 apply -f step3-hello-kubernetes/v1/  
kubectl --context wds2 apply -f step3-hello-kubernetes/v2/

8. Check that hello-kubernetes was applied to the remote clusters

kubectl --context bg-wec1 get all -n hello-kubernetes
kubectl --context bg-wec2 get all -n hello-kubernetes

9. Check Route53 for DNS updates
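
If you prefer the CLI to the console, look for weighted records against the hello-kube name (the zone id placeholder is yours to fill in from the first command's output):

aws route53 list-hosted-zones-by-name --dns-name your.domain.here \
  --query 'HostedZones[0].Id' --output text
aws route53 list-resource-record-sets --hosted-zone-id <zone-id-from-above> \
  --query "ResourceRecordSets[?contains(Name, 'hello-kube')]"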

10. If you do not see updates, check the external-dns logs:

kubectl --context bg-wec1 logs deployment/external-dns
kubectl --context bg-wec2 logs deployment/external-dns

11. Try the URL you specified in the external-dns hostname annotation in the hello-kubernetes service definition (be sure to query a nameserver from your domain's NS record)

for i in {1..500}; do domain=$(dig hello-kube.your.domain.here. TXT @your.domains.ns.record. +short); echo -e  "$domain" >> RecursiveResolver_results.txt; done

sort RecursiveResolver_results.txt | uniq -c

You should see roughly 50% of the responses resolving to each version/service.

source: https://repost.aws/knowledge-center/route-53-fix-dns-weighted-routing-issue#

12. Update the external-dns aws-weight annotation (to shift more traffic to v2 of hello-kubernetes) in the service definitions:

edit step3-hello-kubernetes/v1/bg-wec1-hello-kubernetes.yml

      external-dns.alpha.kubernetes.io/aws-weight: "25"

edit step3-hello-kubernetes/v2/bg-wec2-hello-kubernetes.yml

      external-dns.alpha.kubernetes.io/aws-weight: "75"

13. Re-deploy the hello-kubernetes application with KubeStellar

kubectl --context wds1 apply -f step3-hello-kubernetes/v1/  
kubectl --context wds2 apply -f step3-hello-kubernetes/v2/

14. Again, try the URL you specified in the external-dns hostname annotation in the hello-kubernetes service definition (be sure to query a nameserver from your domain's NS record, and clear out RecursiveResolver_results.txt from the previous run so the counts are not skewed)

for i in {1..500}; do domain=$(dig hello-kube.your.domain.here. TXT @your.domains.ns.record. +short); echo -e  "$domain" >> RecursiveResolver_results.txt; done

sort RecursiveResolver_results.txt | uniq -c

You should see roughly 25% of the responses going to the v1 version of the hello-kubernetes service and 75% going to the v2 version.

You can fiddle around with the aws-weight values to see how you can accomplish canary and blue-green deployments with this approach. This was a ton easier, not to mention a more accepted practice/pattern, than the ALB approach. I hope this was helpful and you learned something.

Thanks for stopping by!


Andy Anderson

IBM Research - KubeStellar, DevOps, Technology Adoption, and Kubernetes. Views are my own.