This blog demonstrates how to use a TSM traffic management policy to do a "canary" release, i.e., to incrementally roll out a new version of a deployment. We start with the bookinfo application already deployed:
dzhang@david-ubuntu01:~$ k get pod
NAME READY STATUS RESTARTS AGE
details-v1-75b6944bb9-pz5fj 2/2 Running 0 150m
productpage-v1-9fff55d4-bkhgr 2/2 Running 0 150m
ratings-v1-56d454b5b6-28f25 2/2 Running 0 150m
reviews-v1-5c76cff476-q7tjp 2/2 Running 0 150m
reviews-v2-7f86c797f6-mkfwh 2/2 Running 0 150m
reviews-v3-64ddc56888-nf5jg 2/2 Running 0 150m
dzhang@david-ubuntu01:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.96.87.108 <none> 9080/TCP 150m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d18h
productpage ClusterIP 10.96.199.199 <none> 9080/TCP 150m
ratings ClusterIP 10.96.133.121 <none> 9080/TCP 150m
reviews ClusterIP 10.96.49.244 <none> 9080/TCP 150m
supervisor ClusterIP None <none> 6443/TCP 6d18h
Our reviews service is currently serving traffic from v2, and we are going to upgrade it to v3. The plan is to keep sending 90% of the traffic to v2 while sending 10% to v3. Once we are happy with v3, we can gradually shift more and more application traffic to it.
Using the TSM API, I created a traffic management policy to achieve this:
PUT https://prod-2.nsxservicemesh.vmware.com/tsm/v1alpha2/project/default/global-namespaces/{{gns}}/traffic-routing-policies/{{tm-policy}}
{
  "description": "policy to split 90% of traffic to version 2 and 10% of traffic to version 3",
  "service": "reviews",
  "traffic_policy": {
    "http": [
      {
        "targets": [
          {
            "service_version": "v2",
            "weight": 90
          },
          {
            "service_version": "v3",
            "weight": 10
          }
        ]
      }
    ]
  },
  "id": "tm-policy1"
}
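For reference, here is a minimal sketch of how that call could be issued with curl. The details around authentication are an assumption on my part (TSM expects a CSP access token, commonly passed in a csp-auth-token header; check the TSM API docs for your environment), and the policy body above is assumed to be saved locally as tm-policy1.json:

# Illustrative sketch only; header name and token handling may differ in your environment.
curl -X PUT \
  "https://prod-2.nsxservicemesh.vmware.com/tsm/v1alpha2/project/default/global-namespaces/bookinfo/traffic-routing-policies/tm-policy1" \
  -H "csp-auth-token: ${CSP_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @tm-policy1.json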
We can see that the corresponding Istio VirtualService was updated:
dzhang@david-ubuntu01:~/bookinfo$ k get vs nsxsm.gns.bookinfo.reviews -o yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  creationTimestamp: "2022-05-20T03:54:17Z"
  generation: 3
  labels:
    createdby: allspark
    gns: bookinfo
    routing-policy-id: tm-policy1
  name: nsxsm.gns.bookinfo.reviews
  namespace: default
  resourceVersion: "2980900"
  uid: 4d532c4c-3f1c-4e7c-96b2-d86efc0e50c3
spec:
  exportTo:
  - .
  hosts:
  - reviews.intranet.vmconaws.net
  - reviews.default.svc.cluster.local
  - 251.11.204.141
  http:
  - headers:
      request:
        add:
          x-allspark-request-header: vmc-tanzu-c001
    route:
    - destination:
        host: v2.reviews.intranet.vmconaws.net
      weight: 90
    - destination:
        host: v3.reviews.intranet.vmconaws.net
      weight: 10
# Please note that we don't use subsets here; the routes point at per-version hostnames instead.
# nsxsm.gns.bookinfo.reviews ["reviews.intranet.vmconaws.net","reviews.default.svc.cluster.local","251.11.204.141"]
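For comparison, a plain Istio canary is usually modelled with a DestinationRule that defines labeled subsets plus a VirtualService that splits traffic across those subsets. The sketch below is only to illustrate that pattern; it is not what TSM generates here:

# Illustrative only: the subset-based Istio pattern that TSM does NOT use in this setup.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews.default.svc.cluster.local
  subsets:
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v2
      weight: 90
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v3
      weight: 10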
Let us send some traffic to test the traffic management policy. You can see that ~90% of the traffic went to the v2 reviews service and ~10% went to the new v3 version.
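One rough way to generate and tally that traffic is sketched below. The throwaway client pod and the way the serving version is detected are my own assumptions (it relies on the bookinfo reviews response embedding the serving pod's name), so adapt it to however you normally drive traffic, e.g., through the productpage UI:

# Illustrative only: run a throwaway curl client in the bookinfo namespace.
# It needs an Istio sidecar for the VirtualService weights to apply, so this
# assumes sidecar injection is enabled for the namespace.
kubectl run curl-client --image=curlimages/curl --restart=Never --command -- sleep 3600

# Send 100 requests to the reviews service and count which version answered.
# This assumes the reviews response contains the serving pod's name
# (reviews-v2-... / reviews-v3-...); with the 90/10 policy we expect roughly
# 90 hits on v2 and 10 on v3.
kubectl exec curl-client -c curl-client -- sh -c \
  'for i in $(seq 1 100); do curl -s http://reviews:9080/reviews/0; echo; done' \
  | grep -o "reviews-v[23]" | sort | uniq -c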
Assuming that we are quite happy with v3, we can change the traffic management policy to send 90% of the traffic to v3 and only 10% to v2, using the same PUT call with an updated body:
{
  "description": "policy to split 10% of traffic to version 2 and 90% of traffic to version 3",
  "service": "reviews",
  "traffic_policy": {
    "http": [
      {
        "targets": [
          {
            "service_version": "v2",
            "weight": 10
          },
          {
            "service_version": "v3",
            "weight": 90
          }
        ]
      }
    ]
  },
  "id": "tm-policy1"
}
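After pushing the updated policy, a quick sanity check is to read the route weights back off the same VirtualService (assuming the same cluster context as earlier):

# Expected per the updated policy: v2 at weight 10, v3 at weight 90.
kubectl get vs nsxsm.gns.bookinfo.reviews \
  -o jsonpath='{range .spec.http[0].route[*]}{.destination.host}{" "}{.weight}{"\n"}{end}'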