Load balancers:
A load balancer, also referred to as an IP sprayer, enables horizontal scalability by dispatching TCP/IP traffic among several identically configured servers. Depending on the product used for load balancing, different protocols are supported.
The Load Balancer is implemented using the Load Balancer Edge component provided with the Network Deployment package, which provides load-balancing capabilities for HTTP, FTP, SSL, SMTP, NNTP, IMAP, POP3, Telnet, SIP, and any other TCP-based application.
Horizontal scaling topology with an IP sprayer
Load balancing products can be used to distribute HTTP requests among Web servers running on multiple physical machines. The Load Balancer component of Network Dispatcher, for example, is an IP sprayer (LB) that performs intelligent load balancing among Web servers based on server availability and workload.
The figure below illustrates a horizontal scaling configuration that uses an IP sprayer to redistribute requests between Web servers on multiple machines.
Simple IP sprayer horizontally scaled topology:
The active Load Balancer hosts the highly available TCP/IP address (the cluster address of your service) and sprays requests to the Web servers. At the same time, the Load Balancer keeps track of the Web servers' health and routes requests around Web servers that are not available. To avoid having the Load Balancer become a single point of failure, it is set up in a hot-standby cluster. The primary Load Balancer communicates its state and routing table to the secondary Load Balancer. The secondary Load Balancer monitors the primary through a heartbeat and takes over when it detects a problem with the primary Load Balancer. Only one Load Balancer is active at a time.
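The core behavior described above, spraying requests only to servers that pass health checks and routing around failed ones, can be sketched minimally in Python. This is an illustration only; the class and method names are hypothetical, and a real product such as Load Balancer uses heartbeat and advisor probes rather than explicit mark_down/mark_up calls:

```python
class LoadBalancer:
    """Minimal sketch of an IP sprayer (illustrative, not a real product API)."""

    def __init__(self, backends):
        self.backends = list(backends)   # identically configured servers
        self.healthy = set(self.backends)
        self.next_index = 0

    def mark_down(self, backend):
        # A real load balancer would do this when a heartbeat/probe fails.
        self.healthy.discard(backend)

    def mark_up(self, backend):
        self.healthy.add(backend)

    def pick(self):
        """Round-robin over the currently healthy backends only."""
        candidates = [b for b in self.backends if b in self.healthy]
        if not candidates:
            raise RuntimeError("no healthy backends")
        backend = candidates[self.next_index % len(candidates)]
        self.next_index += 1
        return backend


lb = LoadBalancer([("web1", 80), ("web2", 80), ("web3", 80)])
lb.mark_down(("web2", 80))               # web2 failed its health check
print([lb.pick() for _ in range(4)])     # cycles over web1 and web3 only
```

The hot-standby pairing of two such balancers (state replication plus heartbeat-triggered takeover) sits one layer above this routing logic and is omitted here.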
=====================================================
Source:
https://www.citrix.com/content/dam/citrix/en_us/documents/partner-documents/configuring-citrix-netscaler-for-ibm-websphere-application-services-en.pdf
Understanding IBM HTTP Server plug-in Load Balancing in a clustered environment:
Problem (Abstract)
After setting up the HTTP plug-in for load balancing in a clustered IBM WebSphere environment, the request load is not evenly distributed among back-end WebSphere Application Servers.
Cause
In most cases, the preceding behavior is observed because of a misunderstanding of how the HTTP plug-in load balancing algorithms work, or it might be due to an improper configuration. Also, the type of Web server being used (multi-threaded versus single-threaded) can affect this behavior.
Resolving the problem
The following document is designed to help you understand how HTTP plug-in load balancing works, and it provides some tuning parameters and suggestions to improve the ability of the HTTP plug-in to distribute load evenly.
Note: The following information is written specifically for the IBM HTTP Server; however, it is generally applicable to other Web servers which currently support the HTTP plug-in (for example: IIS, SunOne, Domino, and so on).
Also, the WebSphere plug-in versions 6.1 and later offer the property "IgnoreAffinityRequests" to address the limitation outlined in this technote. In addition, WebSphere versions 6.1 and later offer better facilities for updating the configuration through the administrative panels without manual editing.
For additional information regarding this plug-in property, see IgnoreAffinityRequests.
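As a rough illustration, this property is set as an attribute on the ServerCluster element in plugin-cfg.xml. The cluster and server names and the weights below are placeholders; generate and inspect your own plugin-cfg.xml rather than copying this fragment:

```xml
<!-- Illustrative fragment only; names and weights are placeholders -->
<ServerCluster Name="MyCluster" LoadBalance="Round Robin"
               IgnoreAffinityRequests="true">
   <Server Name="member1" LoadBalanceWeight="8"/>
   <Server Name="member2" LoadBalanceWeight="6"/>
</ServerCluster>
```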
Load Balancing
Background

In clustered Application Server environments, IBM HTTP Servers spray Web requests to the cluster members for balancing the work load among the relevant application servers. The strategy for load balancing and the necessary parameters can be specified in the plugin-cfg.xml file. The default and most commonly used strategy for workload balancing is 'Weighted Round Robin'. For details, refer to the IBM Redbooks technote.

Most commercial Web applications use HTTP sessions for holding some kind of state information while using the stateless HTTP protocol. The IBM HTTP Server attempts to ensure that all the Web requests associated with an HTTP session are directed to the application server that is the primary owner of the session. These requests are called session-ed requests, session-affinity requests, and so on. In this document, the term 'sticky requests' or 'sticky routing' will be used to refer to Web requests associated with HTTP sessions and their routing to a cluster member.

The round robin algorithm used by the HTTP plug-in in releases V5.0, V5.1, and V6.0 can be roughly described as follows: while setting up its internal routing table, the HTTP plug-in component eliminates the non-trivial greatest common divisor (GCD) from the set of cluster member weights specified in the plugin-cfg.xml file. For example, if we have three cluster members with specified static weights of 8, 6, and 18, the internal routing table will have 4, 3, and 9 as the starting dynamic weights of the cluster members after factoring out 2 = GCD(8, 6, 18).
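The weight reduction and the resulting dispatch pattern can be illustrated with a short Python sketch. This is an illustration of the algorithm as described above, not the plug-in's actual code; the real plug-in decrements weights as requests arrive and routes sticky requests separately:

```python
from functools import reduce
from math import gcd


def starting_weights(static_weights):
    # Factor the non-trivial GCD out of the static weights, as the
    # plug-in does when building its internal routing table.
    g = reduce(gcd, static_weights)
    return [w // g for w in static_weights]


def round_robin_cycle(members, weights):
    # One full cycle of a simple weighted round robin: each member is
    # selected until its reduced weight is exhausted, then skipped.
    remaining = list(weights)
    order = []
    while any(remaining):
        for i, member in enumerate(members):
            if remaining[i] > 0:
                order.append(member)
                remaining[i] -= 1
    return order


weights = starting_weights([8, 6, 18])
print(weights)                                  # [4, 3, 9]
cycle = round_robin_cycle(["s1", "s2", "s3"], weights)
print(len(cycle), cycle.count("s3"))            # 16 9
```

Over one full cycle of 4 + 3 + 9 = 16 requests, each member receives a share proportional to its static weight, which is why unequal weights alone can make the distribution look "uneven" per server while still being correct overall.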