True LoadBalancing in Kubernetes?
What is a Load Balancer?
Load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives.
NodePort is not a load balancer. (I know that kube-proxy load balances the traffic among the pods once the traffic is inside the cluster.) What I mean is: the end user hits a URL like http://NODEIP:30111 to access the application. Even though the traffic is load balanced among the pods, users still hit a single node, i.e. the "Node", which is a K8s minion and not a real Load Balancer, right?
The same applies here: imagine the ingress-controller is deployed, and the ingress-service too. The sub-domain that we specify in the ingress-service has to point to "a" node in the K8s cluster, and then the ingress-controller load balances the traffic among the pods. Here too the end users are hitting a single node, which is a K8s minion and not a real Load Balancer, right?
I'm also unsure how a cloud provider's LB does the load balancing. Does it really distribute the traffic to the appropriate nodes where the pods are deployed, or does it just forward the traffic to a master node or a minion?
If the above is true, where does the true load balancing of traffic among the pods/appropriate nodes happen?
Can I implement true load balancing in K8s? I asked a related question here.
1 Answer
NodePort is not a load balancer.
You're right about this in one way, yes it's not designed to be a load balancer.
users still hit a single node, i.e. the "Node", which is a K8s minion and not a real Load Balancer, right?
With NodePort, you have to hit a single node at any one time, but you have to remember that kube-proxy is running on ALL nodes. So you can hit the NodePort on any node in the cluster (even a node the workload isn't running on) and you'll still hit the endpoint you want to hit. This becomes important later.
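To make that concrete, here's a minimal sketch of a NodePort Service. The name, label and ports are placeholder assumptions chosen to match the http://NODEIP:30111 example from the question, not something from a real cluster:

# Hypothetical NodePort Service: kube-proxy opens port 30111 on every node
# and load balances connections across the matching pods cluster-wide.
apiVersion: v1
kind: Service
metadata:
  name: my-app                # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app               # assumes the pods are labelled app=my-app
  ports:
    - port: 80                # the Service's cluster-internal port
      targetPort: 8080        # the container port (assumption)
      nodePort: 30111         # reachable as http://<ANY-NODE-IP>:30111

Because every node programs the same nodePort, a client (or an external load balancer) can be pointed at any node's IP and still reach the backing pods.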
The sub-domain that we specify in the ingress-service has to point to "a" node in the K8s cluster
No, this isn't how it works.
Your ingress controller still needs to be exposed externally. If you're using a cloud provider, a commonly used pattern is to expose your ingress controller with a Service of Type=LoadBalancer. The load balancing still happens with Services, but Ingress allows you to use that Service in a more user-friendly way. Don't confuse Ingress with load balancing.
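As a rough sketch of that pattern (the selector label and targetPorts here are assumptions; only the name and namespace match the Service shown further down), the ingress controller would be exposed with something like:

# Hypothetical Service of type LoadBalancer in front of an ingress controller.
# The cloud provider sees type: LoadBalancer and provisions an external LB
# (an ELB on AWS) whose registered targets are the cluster's nodes.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller   # assumes the controller pods carry this label
  ports:
    - name: http
      port: 80                      # LB listener port
      targetPort: 80                # ingress controller pod port (assumption)
    - name: https
      port: 443
      targetPort: 443

Kubernetes still allocates NodePorts for this Service behind the scenes; the cloud LB simply uses them as its backend ports, which is exactly what the ELB output below shows.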
I'm having a doubt how a cloud provider's LB does the load balancing. Does it really distribute the traffic to the appropriate nodes where the pods are deployed, or does it just forward the traffic to a master node or a minion?
If you look at a provisioned service in Kubernetes, you'll see why it makes sense.
Here's a Service of Type LoadBalancer:
kubectl get svc nginx-ingress-controller -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer <redacted> internal-a4c8... 80:32394/TCP,443:31281/TCP 147d
You can see I've deployed an ingress controller with type LoadBalancer. This has created an AWS ELB, but also notice that, like NodePort, it's mapped port 80 on the ingress controller pod to port 32394.
So, let's look at the actual LoadBalancer in AWS:
aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125
{
    "LoadBalancerDescriptions": [
        {
            "LoadBalancerName": "a4c80f4eb1d7c11e886d80652b702125",
            "DNSName": "internal-a4c8<redacted>",
            "CanonicalHostedZoneNameID": "<redacted>",
            "ListenerDescriptions": [
                {
                    "Listener": {
                        "Protocol": "TCP",
                        "LoadBalancerPort": 443,
                        "InstanceProtocol": "TCP",
                        "InstancePort": 31281
                    },
                    "PolicyNames": []
                },
                {
                    "Listener": {
                        "Protocol": "HTTP",
                        "LoadBalancerPort": 80,
                        "InstanceProtocol": "HTTP",
                        "InstancePort": 32394
                    },
                    "PolicyNames": []
                }
            ],
            "Policies": {
                "AppCookieStickinessPolicies": [],
                "LBCookieStickinessPolicies": [],
                "OtherPolicies": []
            },
            "BackendServerDescriptions": [],
            "AvailabilityZones": [
                "us-west-2a",
                "us-west-2b",
                "us-west-2c"
            ],
            "Subnets": [
                "<redacted>",
                "<redacted>",
                "<redacted>"
            ],
            "VPCId": "<redacted>",
            "Instances": [
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                }
            ],
            "HealthCheck": {
                "Target": "TCP:32394",
                "Interval": 10,
                "Timeout": 5,
                "UnhealthyThreshold": 6,
                "HealthyThreshold": 2
            },
            "SourceSecurityGroup": {
                "OwnerAlias": "337287630927",
                "GroupName": "k8s-elb-a4c80f4eb1d7c11e886d80652b702125"
            },
            "SecurityGroups": [
                "sg-8e0749f1"
            ],
            "CreatedTime": "2018-03-01T18:13:53.990Z",
            "Scheme": "internal"
        }
    ]
}
The most important things to note here are:
The LoadBalancer is mapping port 80 in ELB to the NodePort:
{
    "Listener": {
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 32394
    },
    "PolicyNames": []
}
You'll also see that there are multiple target Instances, not just one:
aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125 | jq '.LoadBalancerDescriptions[0].Instances | length'
8
And finally, if you look at the number of nodes in my cluster, you'll see it's actually all the nodes that have been added to the LoadBalancer:
kubectl get nodes -l "node-role.kubernetes.io/node=" --no-headers=true | wc -l
8
So, in summary: Kubernetes does implement true load balancing with Services (whether that be the NodePort or LoadBalancer type), and the Ingress just makes that Service more accessible to the outside world.
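For completeness, here's a minimal Ingress sketch (the host and backend Service names are placeholders, using the current networking.k8s.io/v1 API). It shows how an Ingress only describes routing; the Service machinery above is what actually spreads traffic across nodes and pods:

# Hypothetical Ingress: routes requests for app.example.com through the
# ingress controller to the backend Service. The load balancing itself is
# done by the controller plus the Service/kube-proxy layer described above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: app.example.com          # placeholder sub-domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app         # placeholder backend Service
                port:
                  number: 80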
Thanks for the detailed answer. Regarding Ingress: if the ingress-controller is deployed on bare-metal K8s, is the scenario I mentioned true? i.e. does the specified sub-domain have to point to a single node? – Veerendra, 3 mins ago