  • Essay / Network Load Balancing

    Load balancing is a method of distributing workloads across multiple computing resources, such as a cluster of servers, central processing units, network links, or disk drives, in order to optimize resource use, maximize throughput, minimize response time, and avoid overloading any single resource. Using multiple load-balanced components instead of a single component also increases reliability and speed through redundancy. Load balancing is accomplished through dedicated software or hardware, such as a multilayer switch or a Domain Name System (DNS) server process. Server farms are just one of many uses that benefit from load balancing, which also allows for a significantly higher level of fault tolerance.

    When a router learns multiple routes to a specific network through different routing protocols, it installs the route with the lowest administrative distance into the routing table. Sometimes the router must select among multiple paths learned through the same routing protocol, all with the same administrative distance. In this case, the router chooses the path with the lowest metric to the destination. Each routing protocol calculates its metric differently, and the paths may need to be manipulated in order to achieve the desired load balancing behavior.

    Network Load Balancing

    Network load balancing distributes IP traffic to multiple instances of a TCP/IP service, such as a web server, each running on a host within the cluster. It transparently divides client requests among the hosts and allows clients to access the cluster using one or more "virtual" IP addresses. From the client's perspective, the cluster appears to be a single server responding to these requests. As company traffic increases ...... y in terms of processing speed and memory, a ratio or weighted method may be the best option. The default method is called round robin.
In this method, each connection request is forwarded to the next server in line, ultimately distributing requests evenly across the cluster. This works well in most configurations. The ratio method distributes requests according to ratios defined by the administrator for a given set of servers, allowing a distribution tailored to the speed and memory of each server. There are related methods called Dynamic Ratio Node and Dynamic Ratio Member, which are similar to Ratio except that the ratios are system-driven rather than static. Weighted methods work best when server capabilities differ and a weighted distribution of connection requests is required (University of Tennessee, 2014).
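As a rough illustration of the round-robin and ratio methods described above, here is a minimal sketch in Python. The server names and ratio values are made up for the example; a real load balancer tracks live connections and health rather than cycling a static list.

```python
import itertools

def round_robin(servers):
    """Round robin: forward each connection to the next server in line."""
    return itertools.cycle(servers)

def ratio(servers_with_ratios):
    """Ratio: repeat each server according to its administrator-defined
    ratio, so faster servers receive proportionally more requests."""
    pool = [server for server, r in servers_with_ratios for _ in range(r)]
    return itertools.cycle(pool)

# Round robin spreads requests evenly across the cluster.
rr = round_robin(["web1", "web2", "web3"])
picks = [next(rr) for _ in range(4)]  # web1, web2, web3, web1

# With a 3:1 ratio, "big" handles three of every four requests.
ra = ratio([("big", 3), ("small", 1)])
```

A dynamic-ratio scheme would recompute the ratio values at runtime from server load instead of taking them as static configuration.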
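The route-selection rule described earlier (lowest administrative distance first, then lowest metric as the tie-breaker) can be sketched as follows. The administrative-distance values follow common router defaults (e.g. OSPF 110, RIP 120); the routes themselves are hypothetical.

```python
# Typical default administrative distances, as on many routers.
ADMIN_DISTANCE = {"connected": 0, "static": 1, "ospf": 110, "rip": 120}

def best_route(candidates):
    """Pick the route with the lowest administrative distance;
    among equal-AD routes, pick the one with the lowest metric.
    candidates: list of (protocol, metric, next_hop) tuples."""
    return min(candidates, key=lambda r: (ADMIN_DISTANCE[r[0]], r[1]))

routes = [
    ("rip", 3, "10.0.0.1"),    # RIP, hop count 3 (AD 120)
    ("ospf", 20, "10.0.0.2"),  # OSPF, cost 20 (AD 110)
    ("ospf", 10, "10.0.0.3"),  # OSPF, cost 10 (AD 110)
]
# OSPF (AD 110) beats RIP (AD 120); among the OSPF routes, cost 10 wins.
print(best_route(routes))  # ('ospf', 10, '10.0.0.3')
```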