What is load balancing?

[Figure: Load Balancing Structure]


Load Balancing


Load balancing refers to efficiently distributing incoming network traffic across a group of back-end servers, also known as a server farm or server pool.


Modern high-traffic websites must serve hundreds of thousands, if not millions, of concurrent requests from users or clients and return the correct text, images, video, or application data, all in a fast and reliable manner. To cost-effectively scale to meet these high volumes, modern computing best practice generally requires adding more servers.

A load balancer acts as a “traffic cop” sitting in front of your servers, routing client requests across all servers capable of fulfilling those requests in a way that maximizes speed and capacity utilization and ensures that no single server is overworked, which could degrade performance. If a server goes down, the load balancer redirects traffic to the remaining online servers.

In this manner, a load balancer performs the following functions (a rough code sketch follows the list):
1) Distributes client requests or network load efficiently across multiple servers
2) Ensures high availability and reliability by sending requests only to servers that are online
3) Provides the flexibility to add or subtract servers as demand dictates
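
To illustrate those three functions, here is a minimal sketch (not production code) of a load balancer front end that only forwards requests to backends it considers healthy and lets you add or remove backends at runtime. The backend addresses and method names are made-up examples, not any real product's API.

```python
class SimpleLoadBalancer:
    """Toy front end: keeps a mutable pool of backends and skips unhealthy ones."""

    def __init__(self, backends):
        self.backends = list(backends)      # servers can be added or removed on demand
        self.healthy = set(self.backends)   # servers currently passing health checks
        self._cursor = 0

    def add_backend(self, server):
        self.backends.append(server)
        self.healthy.add(server)

    def remove_backend(self, server):
        self.backends.remove(server)
        self.healthy.discard(server)

    def mark_down(self, server):
        # A failed health check takes the server out of rotation without deleting it.
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def route(self, request):
        # Walk the pool in order, skipping servers that are currently offline.
        for _ in range(len(self.backends)):
            server = self.backends[self._cursor % len(self.backends)]
            self._cursor += 1
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy backends available")


# Traffic keeps flowing when one server goes down.
lb = SimpleLoadBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
lb.mark_down("10.0.0.2")
print([lb.route(f"req-{i}") for i in range(4)])  # only 10.0.0.1 and 10.0.0.3 are used
```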

Load Balancing Algorithms


Different load balancing algorithms provide different benefits; the choice of load balancing method depends on your needs (each method is sketched in code after the list):

1) Round Robin – Round Robin means servers will be selected sequentially. The load balancer will select the first server on its list for the first request, then move down the list in order, starting over at the top when it reaches the end.

2) Least Connections – Least Connections means the load balancer will select the server with the least connections and is recommended when traffic results in longer sessions. 

3) IP Hash/Source – With the Source algorithm, the load balancer will select which server to use based on a hash of the source IP of the request, such as the visitor's IP address. This method ensures that a particular user will consistently connect to the same server.
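
To make the three methods concrete, here is a minimal Python sketch of how each one might pick a backend. The server names are placeholders, and real load balancers such as NGINX implement these methods with many more refinements (weights, retries, health checks).

```python
import hashlib

servers = ["app-1", "app-2", "app-3"]  # placeholder backend names


# 1) Round Robin: walk the list in order, wrapping around at the end.
def round_robin(counter):
    return servers[counter % len(servers)]


# 2) Least Connections: pick the server currently handling the fewest connections.
def least_connections(active_connections):
    # active_connections maps server -> number of open connections
    return min(servers, key=lambda s: active_connections.get(s, 0))


# 3) IP Hash / Source: hash the client's IP so the same client lands on the same server.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]


print(round_robin(0), round_robin(1), round_robin(2), round_robin(3))  # app-1 app-2 app-3 app-1
print(least_connections({"app-1": 12, "app-2": 3, "app-3": 7}))        # app-2
print(ip_hash("203.0.113.42"), ip_hash("203.0.113.42"))                # same server both times
```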

Session Persistence/Sticky Session


Information about a user’s session is often stored locally in the browser. For example, in a shopping cart application the items in a user’s cart might be stored at the browser level until the user is ready to purchase them. Changing which server receives requests from that client in the middle of the shopping session can cause performance issues or outright transaction failure. In such cases, it is essential that all requests from a client are sent to the same server for the duration of the session. This is known as session persistence.
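
One common way to get session persistence is a cookie-based “sticky session”: the load balancer records which backend first served a client and keeps sending that client's requests there. The sketch below is a simplified illustration of that idea under assumed names (the cookie name and server list are made up), not how any particular product implements it.

```python
servers = ["cart-1", "cart-2", "cart-3"]  # placeholder backends
next_server = 0                           # simple round-robin cursor for new sessions


def route_with_sticky_session(cookies):
    """Return (chosen_server, cookies) so repeat requests stick to one backend."""
    global next_server
    server = cookies.get("lb_backend")    # hypothetical persistence cookie
    if server not in servers:             # new session, or its server was removed
        server = servers[next_server % len(servers)]
        next_server += 1
        cookies = {**cookies, "lb_backend": server}  # pin the session to this backend
    return server, cookies


# A shopper's first request gets assigned a backend; later requests reuse it.
server, cookies = route_with_sticky_session({})
print(server)                                  # e.g. cart-1
print(route_with_sticky_session(cookies)[0])   # the same server again
```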

Resource: https://www.nginx.com/resources/glossary/load-balancing/