Tuesday 19 December 2017

A brief history of garbage collection


The benefits of garbage collection are indisputable -- increased reliability, decoupling of memory management from class interface design, and less developer time spent chasing memory management errors. The well-known problems of dangling pointers and memory leaks simply do not occur in Java programs. (Java programs can exhibit a form of memory leak, more accurately called unintentional object retention, but this is a different problem.) However, garbage collection is not without its costs -- among them performance impact, pauses, configuration complexity, and nondeterministic finalization.
An ideal garbage collection implementation would be totally invisible -- there would be no garbage collection pauses, no CPU time would be lost to garbage collection, the garbage collector wouldn't interact negatively with virtual memory or the cache, and the heap wouldn't need to be any larger than the residency (heap occupancy) of the application. Of course, there are no perfect garbage collectors, but garbage collectors have improved significantly over the past ten years.

How does garbage collection work?

There are several basic strategies for garbage collection: reference counting, mark-sweep, mark-compact, and copying. In addition, some algorithms can do their job incrementally (the entire heap need not be collected at once, resulting in shorter collection pauses), and some can run while the user program runs (concurrent collectors). Others must perform an entire collection at once while the user program is suspended (so-called stop-the-world collectors). Finally, there are hybrid collectors, such as the generational collector employed by the 1.2 and later JDKs, which use different collection algorithms on different areas of the heap.
When evaluating a garbage collection algorithm, we might consider any or all of the following criteria:
  • Pause time. Does the collector stop the world to perform collection? For how long? Can pauses be bounded in time?
  • Pause predictability. Can garbage collection pauses be scheduled at times that are convenient for the user program, rather than for the garbage collector?
  • CPU usage. What percentage of the total available CPU time is spent in garbage collection?
  • Memory footprint. Many garbage collection algorithms require dividing the heap into separate memory spaces, some of which may be inaccessible to the user program at certain times. This means that the actual size of the heap may be several times bigger than the maximum heap residency of the user program.
  • Virtual memory interaction. On systems with limited physical memory, a full garbage collection may fault nonresident pages into memory to examine them during the collection process. Because the cost of a page fault is high, it is desirable that a garbage collector properly manage locality of reference.
  • Cache interaction. Even on systems where the entire heap can fit into main memory, which is true of virtually all Java applications, garbage collection will often have the effect of flushing data used by the user program out of the cache, imposing a performance cost on the user program.
  • Effects on program locality. While some believe that the job of the garbage collector is simply to reclaim unreachable memory, others believe that the garbage collector should also attempt to improve the reference locality of the user program. Compacting and copying collectors relocate objects during collection, which has the potential to improve locality.
  • Compiler and runtime impact. Some garbage collection algorithms require significant cooperation from the compiler or runtime environment, such as updating reference counts whenever a pointer assignment is performed. This creates both work for the compiler, which must generate these bookkeeping instructions, and overhead for the runtime environment, which must execute these additional instructions. What is the performance impact of these requirements? Does it interfere with compile-time optimizations?
Regardless of the algorithm chosen, trends in hardware and software have made garbage collection far more practical. Empirical studies from the 1970s and 1980s show garbage collection consuming between 25 percent and 40 percent of the runtime in large Lisp programs. While garbage collection may not yet be totally invisible, it sure has come a long way.

The basic algorithms

The problem faced by all garbage collection algorithms is the same -- identify blocks of memory that have been dispensed by the allocator, but are unreachable by the user program. What do we mean by unreachable? Memory blocks can be reached in one of two ways -- if the user program holds a reference to that block in a root, or if there is a reference to that block held in another reachable block. In a Java program, a root is a reference to an object held in a static variable or in a local variable on an active stack frame. The set of reachable objects is the transitive closure of the root set under the points-to relation.

Reference counting

The most straightforward garbage collection strategy is reference counting. Reference counting is simple, but requires significant assistance from the compiler and imposes overhead on the mutator (the term for the user program, from the perspective of the garbage collector). Each object has an associated reference count -- the number of active references to that object. If an object's reference count is zero, it is garbage (unreachable from the user program) and can be recycled. Every time a pointer reference is modified, such as through an assignment statement, or when a reference goes out of scope, the compiler must generate code to update the referenced object's reference count. If an object's reference count goes to zero, the runtime can reclaim the block immediately (and decrement the reference counts of any blocks that the reclaimed block references), or place it on a queue for deferred collection.
Many ANSI C++ library classes, such as string, employ reference counting to provide the appearance of garbage collection. By overloading the assignment operator and exploiting the deterministic finalization provided by C++ scoping, C++ programs can use the string class as if it were garbage collected. Reference counting is simple, lends itself well to incremental collection, and the collection process tends to have good locality of reference, but it is rarely used in production garbage collectors for a number of reasons, such as its inability to reclaim unreachable cyclic structures (objects that reference each other directly or indirectly, like a circularly linked list or a tree that contains back-pointers to the parent node).
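Reference counting can be sketched in a few lines of Java. The class and method names here (RefCounted, retain, release) are hypothetical illustrations, not a real runtime API; the second half of the example shows why cycles defeat the scheme:

```java
// Minimal reference-counting sketch (hypothetical API; Java's real collectors
// do not work this way). Each object tracks how many references point to it
// and is "reclaimed" when the count drops to zero.
final class RefCounted {
    private int refCount = 0;
    private boolean reclaimed = false;
    RefCounted other;               // an outgoing reference, for the cycle demo

    void retain() { refCount++; }   // a new reference to this object was created

    void release() {                // a reference to this object went away
        if (--refCount == 0) {
            reclaimed = true;       // a real runtime would free the block here
            if (other != null) other.release();  // cascade to referenced blocks
        }
    }

    boolean isReclaimed() { return reclaimed; }

    public static void main(String[] args) {
        RefCounted a = new RefCounted();
        a.retain();                 // one root reference
        a.release();                // root dropped -> count 0 -> reclaimed
        System.out.println("a reclaimed: " + a.isReclaimed());  // true

        // The classic failure case: a two-object cycle. Each object keeps the
        // other's count at 1, so neither is ever reclaimed even though both
        // are unreachable once the root references are dropped.
        RefCounted x = new RefCounted(), y = new RefCounted();
        x.retain(); y.retain();     // root references
        x.other = y; y.retain();    // x -> y
        y.other = x; x.retain();    // y -> x
        x.release(); y.release();   // drop both roots; both counts stay at 1
        System.out.println("cycle reclaimed: " + x.isReclaimed());  // false
    }
}
```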

Tracing collectors

None of the standard garbage collectors in the JDK uses reference counting; instead, they all use some form of tracing collector. A tracing collector stops the world (although not necessarily for the entire duration of the collection) and starts tracing objects, starting at the root set and following references until all reachable objects have been examined. Roots can be found in program registers, in local (stack-based) variables in each thread's stack, and in static variables.

Mark-sweep collectors

The most basic form of tracing collector, first proposed by Lisp inventor John McCarthy in 1960, is the mark-sweep collector, in which the world is stopped and the collector visits each live node, starting from the roots, and marks each node it visits. When there are no more references to follow, collection is complete, and then the heap is swept (that is, every object in the heap is examined), and any object not marked is reclaimed as garbage and returned to the free list. Figure 1 illustrates a heap prior to garbage collection; the shaded blocks are garbage because they are unreachable by the user program:


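The mark and sweep phases can be sketched as follows. The MarkSweep and Node names are hypothetical; a real collector operates on raw heap blocks rather than Java objects, and the mark bit usually lives in the object header:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal mark-sweep sketch. The "heap" is a list of nodes; the collector
// marks everything reachable from the roots, then sweeps the heap and
// discards every unmarked node.
final class MarkSweep {
    static final class Node {
        boolean marked;
        final List<Node> refs = new ArrayList<>();  // outgoing references
    }

    final List<Node> heap = new ArrayList<>();      // every allocated node
    final List<Node> roots = new ArrayList<>();     // stack/static references

    Node allocate() {
        Node n = new Node();
        heap.add(n);
        return n;
    }

    void collect() {
        for (Node n : heap) n.marked = false;       // clear marks from last cycle
        for (Node r : roots) mark(r);               // mark phase: trace from roots
        heap.removeIf(n -> !n.marked);              // sweep phase: reclaim garbage
    }

    private void mark(Node n) {
        if (n.marked) return;                       // already visited
        n.marked = true;
        for (Node ref : n.refs) mark(ref);          // follow references transitively
    }

    public static void main(String[] args) {
        MarkSweep gc = new MarkSweep();
        Node a = gc.allocate(), b = gc.allocate(), c = gc.allocate();
        gc.roots.add(a);
        a.refs.add(b);       // a -> b: reachable through the root
        // c has no incoming references, so it is garbage
        gc.collect();
        System.out.println(gc.heap.size());         // 2: a and b survive, c is swept
    }
}
```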
Copying collectors

In a copying collector, another form of tracing collector, the heap is divided into two equally sized semi-spaces, one of which contains active data while the other is unused. When the active space fills up, the world is stopped and live objects are copied from the active space into the inactive space. The roles of the spaces are then flipped, with the old inactive space becoming the new active space.
Copying collection has the advantage of only visiting live objects, which means garbage objects will not be examined, nor will they need to be paged into memory or brought into the cache. The duration of collection cycles in a copying collector is driven by the number of live objects. However, copying collectors have the added cost of copying the data from one space to another, adjusting all references to point to the new copy. In particular, long-lived objects will be copied back and forth on every collection.
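A minimal sketch of semi-space copying, assuming hypothetical CopyingGC and Obj names. A real collector stores forwarding addresses in the evacuated objects themselves; this sketch keeps them in a side table for brevity:

```java
import java.util.ArrayList;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

// Minimal semi-space copying sketch. Live objects are traced from the roots
// and copied from the active space into the inactive space; garbage is never
// visited. The spaces are then flipped.
final class CopyingGC {
    static final class Obj {
        Obj ref;                                    // one outgoing reference
    }

    List<Obj> fromSpace = new ArrayList<>();        // active semi-space
    List<Obj> toSpace = new ArrayList<>();          // inactive semi-space
    final List<Obj> roots = new ArrayList<>();

    void collect() {
        Map<Obj, Obj> forwarded = new IdentityHashMap<>();  // old -> new copy
        for (int i = 0; i < roots.size(); i++)
            roots.set(i, copy(roots.get(i), forwarded));    // update root pointers
        List<Obj> tmp = fromSpace;                  // flip the spaces
        fromSpace = toSpace;
        toSpace = tmp;
        toSpace.clear();                            // old active space is now free
    }

    private Obj copy(Obj o, Map<Obj, Obj> forwarded) {
        if (o == null) return null;
        Obj copy = forwarded.get(o);
        if (copy == null) {                         // not yet evacuated
            copy = new Obj();
            forwarded.put(o, copy);                 // record forwarding address first,
            toSpace.add(copy);                      // so cycles terminate
            copy.ref = copy(o.ref, forwarded);      // evacuate what it points to
        }
        return copy;
    }

    public static void main(String[] args) {
        CopyingGC gc = new CopyingGC();
        Obj a = new Obj(), b = new Obj(), garbage = new Obj();
        a.ref = b;
        gc.fromSpace.add(a); gc.fromSpace.add(b); gc.fromSpace.add(garbage);
        gc.roots.add(a);
        gc.collect();
        System.out.println(gc.fromSpace.size());    // 2: only a and b were copied
    }
}
```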

Heap compaction

Copying collectors have another benefit, which is that the set of live objects is compacted into the bottom of the heap. This not only improves locality of reference of the user program and eliminates heap fragmentation, but also greatly reduces the cost of object allocation -- object allocation becomes a simple pointer addition on the top-of-heap pointer. There is no need to maintain free lists or look-aside lists, or perform best-fit or first-fit algorithms -- allocating N bytes is as simple as adding N to the top-of-heap pointer and returning its previous value, as suggested in Listing 1:
Listing 1. Inexpensive memory allocation in a copying collector
void *malloc(int n) {
    if (heapTop - heapStart < n)   /* not enough room left in the active space */
        doGarbageCollection();     /* copy live objects, then reset the pointers */

    void *wasStart = heapStart;    /* allocation is just a pointer bump */
    heapStart += n;                /* advance the top-of-heap pointer by n bytes */
    return wasStart;               /* hand back the previous top-of-heap */
}
Developers who have implemented sophisticated memory management schemes for non-garbage-collected languages may be surprised at how inexpensive allocation is -- a simple pointer addition -- in a copying collector. This may be one of the reasons for the pervasive belief that object allocation is expensive -- earlier JVM implementations did not use copying collectors, and developers are still implicitly assuming allocation cost is similar to other languages, like C, when in fact it may be significantly cheaper in the Java runtime. Not only is the cost of allocation smaller, but for objects that become garbage before the next collection cycle, the deallocation cost is zero, as the garbage object will be neither visited nor copied.

Mark-compact collectors

The copying algorithm has excellent performance characteristics, but it has the drawback of requiring twice as much memory as a mark-sweep collector. The mark-compact algorithm combines mark-sweep and copying in a way that avoids this problem, at the cost of some increased collection complexity. Like mark-sweep, mark-compact is a two-phase process, where each live object is visited and marked in the marking phase. Then, marked objects are copied such that all the live objects are compacted at the bottom of the heap. If a complete compaction is performed at every collection, the resulting heap is similar to the result of a copying collector -- there is a clear demarcation between the active portion of the heap and the free area, so allocation costs are comparable to those of a copying collector. Long-lived objects tend to accumulate at the bottom of the heap, so they are not copied repeatedly as they are in a copying collector.
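The compaction step can be sketched as a stable slide of live cells toward the bottom of an array heap. The names below are hypothetical, and a real collector must also fix up every reference to a moved object, which is omitted here:

```java
// Minimal compaction sketch: after a mark phase (dead slots are null here),
// live cells are slid to the bottom of a fixed-size heap array, preserving
// their order, so free space becomes one contiguous region at the top and
// future allocation is a cheap pointer bump.
final class MarkCompact {
    static int compact(Object[] heap, int top) {
        int free = 0;                        // next slot to fill at the bottom
        for (int i = 0; i < top; i++) {
            if (heap[i] != null) {           // live object (marked earlier)
                heap[free++] = heap[i];      // slide it down, preserving order
            }
        }
        for (int i = free; i < top; i++) heap[i] = null;  // clear the freed area
        return free;                         // new top-of-heap pointer
    }

    public static void main(String[] args) {
        Object[] heap = { "a", null, "b", null, "c", null, null, null };
        int top = compact(heap, 5);          // cells 0..4 were in use
        System.out.println(top);             // 3: "a", "b", "c" now at the bottom
    }
}
```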




Sunday 10 December 2017

Importance of the Java heap size in WebSphere Application Server

Introduction

The Java heap is the area of memory used by the Java virtual machine (JVM) for storing Java objects. The optimal Java heap size is application and usage dependent. Setting the JVM heap size is directly related to the number of server instances that need to be started on a specific node and the total RAM available on the machine. The maximum heap should be sized so that it does not exceed 50% of overall physical memory. Java heap memory is used by the applications that are deployed and the components running in WebSphere Application Server. It is extremely important to monitor Java heap usage, which can be done by enabling verbose garbage collection. Every WebSphere Application Server instance runs in its own JVM. The default JVM settings for the initial heap (50 MB) and the maximum heap (256 MB) are usually good enough for very small volume applications, but they are not adequate for a live production environment.

The following list describes Java Heap Size issues that I have seen in numerous common Java Virtual Machine problems throughout my technical support years. Most of these issues can easily be prevented by taking simple precautionary steps. So, if you are a WebSphere Application Server administrator, this is your must-read!


Why does the JVM heap size setting need to be tuned?

JVM heap size settings will likely need to be tuned to support a combination of the following scenarios:

A very large application is deployed.
A large number of applications are deployed.
A high volume of transactions and large requests need to be handled concurrently.

What is the importance of setting the Java heap size to a larger value?

Allows more objects to be created.
Takes longer to fill.
Allows the application to run longer between Garbage Collection events.


What issues occur when you set the Java heap size to a smaller value?

Holds fewer objects.
Fills more quickly.
Garbage collected more frequently.
May lead to an out-of-memory error.

What are the two main areas to watch for when it comes to JVM heap size?

How quickly does the heap size grow?
How long does it take to perform Garbage Collection?


What are the common Java heap size issues in WebSphere Application Server?

A common issue with a low heap size is an out-of-memory error. If you are deploying a very large application using the administrative console, the deployment can fail with an out-of-memory error. In this case, you need to increase the maximum Java heap size value of the Deployment Manager and the Node Agent.
DRS (the Data Replication Service) uses the High Availability Manager to transfer data from one server to another. An out-of-memory error can happen on a server that cannot handle large objects. The solution is to increase the Java heap size on the failing server.
Installing a large application using a wsadmin script can throw an out-of-memory error. The solution is to increase the Java heap size in the wsadmin script. See the following blog for information on how to edit the wsadmin script.
https://www.ibm.com/developerworks/community/blogs/timdp/entry/avoiding_java_lang_outofmemoryerror_when_installing_application_with_wsadmin35


What needs to be done if you see an out-of-memory error with the default Java Heap Size?

If you are seeing a Java heap out-of-memory error and you are using the default or a small heap size, the first step is to increase the heap size. Sometimes this resolves the problem, because the application simply needed more memory than was configured. Other times, you will still see the out-of-memory error with the larger heap size, but more leaking objects will have accumulated before the error occurs, which makes them easier to find in a heap dump. With a small heap, there will not be many leaking objects, which makes them hard to find.


How do you determine that the maximum heap size is too large or too small for your application?

If garbage collection takes a long time to clean up objects in a large heap, you can reduce the maximum heap size. If garbage collection runs too frequently, the heap might be too small for the application; in that case, you might increase the maximum heap size.


How do you change the Java heap size settings?

The JVM heap size settings can be changed from the administrative console using these steps:
Expand Servers > Server Types > WebSphere application servers and click your server name.
Click Java and process management > Process definition > Java virtual machine.
The JVM heap size can be adjusted by using the Initial Java Heap Size (-Xms) and Maximum Java Heap Size (-Xmx) parameters.
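Equivalently, when launching a standalone JVM from the command line, the same limits are passed as standard JVM options (-Xms, -Xmx; -verbose:gc enables the verbose garbage collection monitoring mentioned earlier). The jar name below is a placeholder:

```shell
# 256 MB initial heap, 1 GB maximum heap, verbose GC logging.
# app.jar is a placeholder for your application.
java -Xms256m -Xmx1024m -verbose:gc -jar app.jar
```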

"Unveiling the java.lang.Out OfMemoryError"



What Is a java.lang.OutOfMemoryError?
 
A java.lang.OutOfMemoryError is a subclass of java.lang.VirtualMachineError that is thrown when the Java Virtual Machine is broken or has run out of resources that are necessary to continue the operation of the Java Virtual Machine. Obviously, memory is the exhausted resource for a java.lang.OutOfMemoryError, which is thrown when the Java Virtual Machine cannot allocate an object due to memory constraints. Unfortunately, the Java specification of java.lang.OutOfMemoryError does not elaborate further on what kind of memory it's talking about.
There are six different types of runtime data areas, or memory areas, in the Java Virtual Machine:


1. Program Counter Register
2. Java Virtual Machine Stack
3. Heap
4. Method Area
5. Runtime Constant Pool
6. Native Method Stack
 
The Program Counter Register, also known as the pc register, stores the address of the Java bytecode instruction that is currently being executed (just like the program counter register in the central processing unit of the device from which you are reading this article). You will not see a java.lang.OutOfMemoryError from the pc register, since a program counter is not conventionally considered memory.
Java Virtual Machine Stacks contain frames where data, return values, and partial execution results are stored. Java Virtual Machine Stacks can be expanded during runtime. If there's not enough memory for the expansion of an existing Java Virtual Machine stack, or for the creation of a new Java Virtual Machine stack for a new thread, the Java Virtual Machine will throw a java.lang.OutOfMemoryError.
The Heap is where instances of Java classes and arrays are allocated. A java.lang.OutOfMemoryError will be thrown when there is not enough memory available for instances of Java classes or arrays.
 
The Method Area stores per-class information such as the runtime constant pool, field and method data, and the code for methods and constructors. If there's not enough memory in the method area, you will encounter a java.lang.OutOfMemoryError.
The Runtime Constant Pool contains constants such as field references and literals (Java literals are syntactic representations of boolean, character, numeric, or string data). A java.lang.OutOfMemoryError will be thrown when not enough memory is available for the construction of the runtime constant pool area.
Native Method Stacks store conventional stacks, also known as C stacks, to support native methods that are written in a non-Java language such as C/C++. Native method stacks can be expanded during runtime. If there's not enough memory for the expansion of an existing native method stack, or for the creation of a new native method stack for a new thread, you will see a java.lang.OutOfMemoryError.
You may have seen a java.lang.StackOverflowError, which is completely different from a java.lang.OutOfMemoryError. A java.lang.StackOverflowError is thrown when native method stacks or Java Virtual Machine stacks need more memory than is configured. In most IBM Java Virtual Machine implementations, the -Xmso command-line option controls the stack size for operating system threads or native threads, and the -Xss command-line option controls the stack size for Java threads. In some implementations, such as Sun Microsystems' HotSpot Java Virtual Machine, Java methods share stack frames with C/C++ native code. The maximum stack size for a thread can be configured with the -Xss Java command-line option. The default sizes of these options vary by platform and implementation, but are usually between 256 KB and 1024 KB. Please refer to the documentation of your Java virtual machine for more specific information.
Now that we understand which memory areas could cause a java.lang.OutOfMemoryError, let's take a look at actual error messages. What does a java.lang.OutOfMemoryError look like and how can I address each symptom? Have you ever seen a java.lang.OutOfMemoryError similar to the following?
 
java.lang.OutOfMemoryError: Requested array size exceeds VM limit:
This error message indicates that there is a memory request for an array that is larger than a predefined limit of the virtual machine. What do we do if we encounter this kind of java.lang.OutOfMemoryError? We need to check the source code to make sure that there's no huge array created dynamically or statically. Fortunately, recent virtual machines usually do not have this limit.
java.lang.OutOfMemoryError: PermGen space:
 
You will see an OutOfMemoryError when the Permanent Generation area of the Java heap is full, like the above message. 
On some Java Virtual Machines, such as Sun Microsystems' HotSpot Java Virtual Machine, a dedicated memory area called permanent generation (or permanent region) stores objects that describe classes and methods. We can visualize the usage of a permanent generation with the IBM Pattern Modeling and Analysis Tool for the Java Garbage Collector.

Important question and answers for Websphere Application server

1Q). How does the nodeagent monitor the application server, and how does it know the previous state of the application server?

When the nodeagent monitors the application server (with the monitoring policy created as mentioned in question 1) it saves the server state information in the monitoring.state file. It will maintain the previous server state and the application server PID. In case of an application server crash or hang, the nodeagent will get the previous state of the server from the monitoring.state file and then try to start the application server automatically.

Note: If you notice a StringIndexOutOfBoundsException or any other exception in the NodeAgent.loadNodeState stack (nodeagent SystemOut.log file), it means the monitoring.state file is corrupted. You must stop all servers, delete the file, and then start the nodeagent again. For example:

Caused by: java.lang.StringIndexOutOfBoundsException
at java.lang.String.substring(String.java:1115)
at com.ibm.ws.management.nodeagent.NodeAgent.loadNodeState(NodeAgent.java:3210)

2Q). My application servers were monitored by the nodeagent. When the server was hung, why didn't the nodeagent restart the server?

The nodeagent PidWaiter sends a signal every ping timeout interval to get the status of the application server. If the PidWaiter does not get a response back from the application server, the server is considered hung. Once the application server is identified as unresponsive, the nodeagent PidWaiter sends a SIGTERM to the process, which does not guarantee that the process stops immediately; it sends the signal and waits for the process to shut down normally. If the server doesn't respond to any request, the server just stays hung forever.

If you want the server to be killed when it's hung or doesn't respond to the nodeagent ping, then you need to set "com.ibm.server.allow.sigkill" property to true in the nodeagent custom property. Please review section "Java virtual machine settings" in the product documentation for more information.

3Q). How can we start the application servers in parallel? In other words, can I start all application servers at the same time (not in sequence)?

Yes, you can do it using the com.ibm.websphere.management.nodeagent.bootstrap.maxthreadpool custom property.

Set the property under System Administration > Node agent > nodeagent_name > Java and process management > Process definition > Java virtual machine > Custom properties.

Use this property to control the number of threads that can be included in a newly created thread pool. A dedicated thread is created to start each application server Java virtual machine (JVM). The JVMs with dedicated threads in this thread pool are the JVMs that are started in parallel whenever the node agent starts.

You can specify an integer from 0 - 5 as the value for this property. If the value you specify is greater than 0, a thread pool is created with that value as the maximum number of threads that can be included in this newly created thread pool. The following table lists the supported values for this custom property and their effect.

Property value set to 0 or not specified - The node agent starts up to five JVMs in parallel.
Property value set to 1 - The node agent starts the JVMs serially.
Property value between 2 and 5 - The node agent starts a number of JVMs equal to the specified value in parallel.

Note: With this property you can only start a maximum of 5 servers at a time.

Load Balancing



Load balancers:


A load balancer, also referred to as an IP sprayer, enables horizontal scalability by dispatching TCP/IP traffic among several identically configured servers. Depending on the product used for load balancing, different protocols are supported.
The load balancer is implemented using the Load Balancer Edge component provided with the Network Deployment package, which provides load balancing capabilities for HTTP, FTP, SSL, SMTP, NNTP, IMAP, POP3, Telnet, SIP, and any other TCP-based application.
Horizontal scaling topology with an IP sprayer
Load balancing products can be used to distribute HTTP requests among Web servers running on multiple physical machines. The Load Balancer component of Network Dispatcher, for example, is an IP sprayer (LB) that performs intelligent load balancing among Web servers based on server availability and workload.
 

The figure below illustrates a horizontal scaling configuration that uses an IP sprayer to redistribute requests between Web servers on multiple machines.
Simple IP sprayer horizontally scaled topology:


The active Load Balancer hosts the highly available TCP/IP address (the cluster address of your service) and sprays requests to the Web servers. At the same time, the Load Balancer keeps track of the Web servers' health and routes requests around Web servers that are not available. To avoid having the Load Balancer be a single point of failure, the Load Balancer is set up in a hot-standby cluster. The primary Load Balancer communicates its state and routing table to the secondary Load Balancer. The secondary Load Balancer monitors the primary Load Balancer through heartbeats and takes over when it detects a problem with the primary Load Balancer. Only one Load Balancer is active at a time.

=====================================================
Source:
https://www.citrix.com/content/dam/citrix/en_us/documents/partner-documents/configuring-citrix-netscaler-for-ibm-websphere-application-services-en.pdf 
Understanding IBM HTTP Server plug-in load balancing in a clustered environment:
Problem (Abstract)
After setting up the HTTP plug-in for load balancing in a clustered IBM WebSphere environment, the request load is not evenly distributed among back-end WebSphere Application Servers.

Cause

In most cases, the preceding behavior is observed because of a misunderstanding of how HTTP plug-in load balancing algorithms work, or might be due to an improper configuration. Also, the type of Web server (multi-threaded versus single-threaded) being used can affect this behavior.

Resolving the problem

The following document is designed to assist you in understanding how HTTP plug-in load balancing works, along with providing you some helpful tuning parameters and suggestions to better maximize the ability of the HTTP plug-in to distribute load evenly.

Note: The following information is written specifically for the IBM HTTP Server; however, it is generally applicable to other Web servers which currently support the HTTP plug-in (for example: IIS, SunOne, Domino, and so on).
Also, the WebSphere plug-in versions 6.1 and later offer the property "IgnoreAffinityRequests" to address the limitation outlined in this technote. In addition, WebSphere versions 6.1 and later offer better facilities for updating the configuration through the administrative panels without manual editing.

For additional information regarding this plug-in property, visit
IgnoreAffinityRequests


Load Balancing
  • Background
    In clustered Application Server environments, IBM HTTP Servers spray Web requests to the cluster members for balancing the work load among relevant application servers. The strategy for load balancing and the necessary parameters can be specified in the plugin-cfg.xml file. The default and the most commonly used strategy for workload balancing is ‘Weighted Round Robin’. For details refer to the IBM Redbooks technote.
    Most commercial Web applications use HTTP sessions for holding some kind of state information while using the stateless HTTP protocol. The IBM HTTP Server attempts to ensure that all the Web requests associated with an HTTP session are directed to the application server that is the primary owner of the session. These requests are called session-ed requests, session-affinity-requests, and so on. In this document the terms ‘sticky requests’ or ‘sticky routing’ will be used to refer to Web requests associated with HTTP sessions and their routing to a cluster member.
  • The round robin algorithm used by the HTTP plug-in in releases V5.0, V5.1, and V6.0 can be roughly described as follows:
    While setting up its internal routing table, the HTTP plug-in component eliminates the non-trivial greatest common divisor (GCD) from the set of cluster member weights specified in the plugin-cfg.xml file. For example, if we have three cluster members with specified static weights of 8, 6, and 18, the internal routing table will have 4, 3, and 9 as the starting dynamic weights of the cluster members after factoring out 2 = GCD(8, 6, 18). The following plugin-cfg.xml fragment specifies those static weights:
    <ServerCluster CloneSeparatorChange="false" LoadBalance="Round Robin"
    Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" 
    RemoveSpecialHeaders="true" RetryInterval="60">

    <Server CloneID="10k66djk2" ConnectTimeout="0" 
    ExtendedHandshake="false" LoadBalanceWeight="8"
    MaxConnections="0" Name="Server1_WebSphere_Appserver" 
    WaitForContinue="false">
    <Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
    </Server>

    <Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
    LoadBalanceWeight="6" MaxConnections="0" Name="Server2_WebSphere_Appserver"
     WaitForContinue="false">
    <Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
    </Server>

    <Server CloneID="10k68xtw10" ConnectTimeout="0" ExtendedHandshake="false" 
    LoadBalanceWeight="18" MaxConnections="0" 
    Name="Server3_WebSphere_Appserver" WaitForContinue="false">
    <Transport Hostname="server3.domain.com" Port="9091" Protocol="http"/>
    </Server>

    <PrimaryServers>
    <Server Name="Server1_WebSphere_Appserver"/>
    <Server Name="Server2_WebSphere_Appserver"/>
    <Server Name="Server3_WebSphere_Appserver"/>
    </PrimaryServers>
    </ServerCluster>
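The GCD-based weight normalization described above can be sketched as follows (PluginWeights and normalize are hypothetical names; weights are assumed to be positive integers):

```java
// Sketch of the plug-in's weight normalization: the greatest common divisor
// of the configured static weights is factored out to produce the starting
// dynamic weights in the internal routing table.
final class PluginWeights {
    static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

    static int[] normalize(int[] weights) {
        int g = 0;
        for (int w : weights) g = gcd(g, w);     // GCD of all the weights
        int[] out = new int[weights.length];
        for (int i = 0; i < weights.length; i++) out[i] = weights[i] / g;
        return out;
    }

    public static void main(String[] args) {
        int[] dynamic = normalize(new int[] { 8, 6, 18 });
        // 8, 6, and 18 share a GCD of 2, so the routing table starts at 4, 3, 9
        System.out.println(dynamic[0] + " " + dynamic[1] + " " + dynamic[2]);
    }
}
```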

<ServerCluster CloneSeparatorChange="false" LoadBalance="Round Robin"
Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" 
RemoveSpecialHeaders="true" RetryInterval="60">

<Server CloneID="10k66djk2" ConnectTimeout="0" 
ExtendedHandshake="false" LoadBalanceWeight="1" 
MaxConnections="0" Name="Server1_WebSphere_Appserver"
 WaitForContinue="false">
<Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
</Server>

<Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
LoadBalanceWeight="1" MaxConnections="0" 
Name="Server2_WebSphere_Appserver" WaitForContinue="false">
<Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
</Server>

<PrimaryServers>
<Server Name="Server1_WebSphere_Appserver"/>
<Server Name="Server2_WebSphere_Appserver"/>
</PrimaryServers>
</ServerCluster>



<ServerCluster CloneSeparatorChange="false" LoadBalance="Random"
Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" 
RemoveSpecialHeaders="true" RetryInterval="60">

<Server CloneID="10k66djk2" ConnectTimeout="0" ExtendedHandshake="false"
 LoadBalanceWeight="2" MaxConnections="0" Name="Server1_WebSphere_Appserver"
 WaitForContinue="false">
<Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
</Server>

<Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
LoadBalanceWeight="2" MaxConnections="0" 
Name="Server2_WebSphere_Appserver" WaitForContinue="false">
<Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
</Server>

<PrimaryServers>
<Server Name="Server1_WebSphere_Appserver"/>
<Server Name="Server2_WebSphere_Appserver"/>
</PrimaryServers>
</ServerCluster>

UNIX:
<IfModule worker.c>
ThreadLimit 250
ServerLimit 2
StartServers 2
MaxClients 500
MinSpareThreads 2
MaxSpareThreads 325

ThreadsPerChild 250
MaxRequestsPerChild 10000
</IfModule>
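For the worker MPM, the key constraints are that MaxClients must not exceed ServerLimit × ThreadsPerChild (here 500 = 2 × 250), MaxClients should be a multiple of ThreadsPerChild, and ThreadsPerChild must not exceed ThreadLimit. A small Python sketch (values copied from the block above; this is a checking aid, not part of Apache) makes the arithmetic explicit:

```python
# Worker MPM settings from the <IfModule worker.c> block above.
settings = {
    "ThreadLimit": 250,
    "ServerLimit": 2,
    "StartServers": 2,
    "MaxClients": 500,
    "MinSpareThreads": 2,
    "MaxSpareThreads": 325,
    "ThreadsPerChild": 250,
}

def check_worker_mpm(s):
    """Return a list of problems with a worker-MPM configuration."""
    problems = []
    if s["ThreadsPerChild"] > s["ThreadLimit"]:
        problems.append("ThreadsPerChild exceeds ThreadLimit")
    if s["MaxClients"] > s["ServerLimit"] * s["ThreadsPerChild"]:
        problems.append("MaxClients exceeds ServerLimit * ThreadsPerChild")
    if s["MaxClients"] % s["ThreadsPerChild"] != 0:
        problems.append("MaxClients is not a multiple of ThreadsPerChild")
    return problems

print(check_worker_mpm(settings) or "settings are consistent")
# prints: settings are consistent
```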

Fix pack installation on WebSphere using the command line (imcl)

Hi friends, today here is one more important topic in WAS. To install fix packs and ifixes from the command line, we use the imcl command.

* When installing products and fixes with IBM Installation Manager, Installation Manager searches open repositories where the packages and fixes exist. But how can you verify from the command line that the applicable packages, updates, and features are in a target repository?

Installation Manager provides a command line tool, imcl, to manage installations. The imcl command is found in the <IM_ROOT>/eclipse/tools subdirectory.

**Following are the most commonly used commands, with a description of each:

1] encryptString stringToEncrypt:

   Encrypt the entered string. Use the encryptString command with the -passwordKey option to increase encryption security.

2] exportInstallData outputFileName:

     Export the installation data to the specified file in a compressed file format where outputFileName is the name of the generated file that contains the exported data.

3] input response_file:

Specify a response file for silent installation with the input command.

Use the input command with these options:

    -installationDirectory
    -keyring: This option is deprecated.
    -masterPasswordFile: Use with the -secureStorageFile storage_file -masterPasswordFile master_password_file option.
    -password: This option is deprecated.
    -prompt
    -secureStorageFile
    -variables

4] install packageID[_version][,featureID]:

    Install the specified packages and features.

    Use the install command with these options:

    -acceptLicense
    -connectPassportAdvantage
    -eclipseLocation
    -installationDirectory
    -installFixes
    -keyring: This option is deprecated.
    -masterPasswordFile: Use with the -secureStorageFile option.
    -password: This option is deprecated.
    -preferences
    -prompt
    -properties
    -repositories
    -secureStorageFile
    -sharedResourcesDirectory
    -useServiceRepository

    Do not use the install command with these commands:

    import
    input
    modify
    rollback
    uninstall
    uninstallAll
    updateAll

5] listAvailableFixes packageID_version:

    Print information to the console about the available fixes for the specified package.

    Use the listAvailableFixes command with these options:

    -connectPassportAdvantage
    -keyring: This option is deprecated.
    -long
    -masterPasswordFile: Use with the -secureStorageFile option.
    -password: This option is deprecated.
    -prompt
    -preferences
    -repositories
    -secureStorageFile
    -showPlatforms
    -useServiceRepository

6] listAvailablePackages:

    Print information to the console about the available packages.

7] listInstallationDirectories:

    Print information to the console about the installation directory, the shared resources directory, the name of the package group, and installed translations.

8] listInstalledPackages:

    Print information to the console about the installed packages.

**There are more commands, but the ones above are the most commonly used.

**Below are some of the commands with examples:


1] How to verify the installed packages

Use the command "imcl listInstalledPackages" to view a list of packages that are already installed by Installation Manager.

Windows: imcl.exe listInstalledPackages
AIX, HP-UX, Linux, Solaris: imcl listInstalledPackages

Example:
 ./imcl listInstalledPackages
com.ibm.websphere.ND.v80_8.0.3.20120320_0536
com.ibm.websphere.WCT.v80_8.0.5.20121022_1902
com.ibm.websphere.PLG.v80_8.0.6.20130328_1645
com.ibm.websphere.IHS.v80_8.0.5.20121022_1902


If you would like to see more details such as version, features, installed fixes, and rollback versions, run the command with the "-verbose" option.

Example:
./imcl listInstalledPackages -verbose
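In the listInstalledPackages output above, each package ID encodes an offering ID and a version, separated by the first underscore. This is an observation from the sample output, not a documented contract, but a hypothetical Python helper shows how to split the IDs apart for reporting:

```python
def split_package_id(package_id):
    """Split an imcl package ID into (offering ID, version).

    The first underscore separates the offering ID from the version string,
    e.g. com.ibm.websphere.ND.v80_8.0.3.20120320_0536.
    """
    offering, _, version = package_id.partition("_")
    return offering, version

# Sample output from "./imcl listInstalledPackages" (copied from above).
listing = """\
com.ibm.websphere.ND.v80_8.0.3.20120320_0536
com.ibm.websphere.WCT.v80_8.0.5.20121022_1902
com.ibm.websphere.PLG.v80_8.0.6.20130328_1645
com.ibm.websphere.IHS.v80_8.0.5.20121022_1902
"""

for line in listing.splitlines():
    offering, version = split_package_id(line)
    print(f"{offering}  ->  {version}")
```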

2] How to verify the installable packages

Use the command "imcl listAvailablePackages" to list the packages that can be installed on top of the existing products.

Windows: imcl.exe listAvailablePackages -repositories [source_repository]
AIX, HP-UX, Linux, Solaris: imcl listAvailablePackages -repositories [source_repository]

Example:
 ./imcl listAvailablePackages -repositories /usr/IBMWASREPO
com.ibm.websphere.BASE.v85_8.5.5001.20131018_2242
com.ibm.websphere.BASE.v85_8.5.5002.20140408_1947

**Uninstalling packages by using imcl

Uninstall packages from the tools directory by using the Installation Manager command-line (imcl) uninstall command.

Before you begin

• To identify the package_id_version,feature_id values, run the listAvailablePackages command.

Procedure
To uninstall a package by using imcl, run the uninstall command with this syntax:

    imcl uninstall package_id_version,feature_id -installationDirectory installation_directory

1] Navigate to the tools directory.

2] List the installed packages to verify the packages you want to uninstall:

Example:
[root@wasnode tools]# ./imcl listInstalledPackages

3] List the directories where the WebSphere packages are installed:

Example:
[root@wasnode tools]# ./imcl listInstallationDirectories
/opt/IBM/WebSphere/AppServer

4]  Launch the uninstall option for imcl command line:

Example:
[root@wasnode tools]# ./imcl uninstall com.ibm.websphere.ND.v85_8.5.5003.20140730_1249 com.ibm.websphere.IBMJAVA.v70_7.0.4001.20130510_2103 -installationDirectory /opt/IBM/WebSphere/AppServer

5] Validate it using "imcl listInstalledPackages" to ensure the WAS packages are removed:

Example:

[root@wasnode tools]# ./imcl listInstalledPackages
com.ibm.cic.agent_1.6.2000.20130301_2248

**Installing WebSphere ND 8.5 using the imcl command line

Command: ./imcl install
To install the product, you use the "install" option of the imcl command.

1] Prepare the environment:
a) Ensure all the prerequisites, such as space and permissions, are satisfied.
b) Extract the WAS binaries that you downloaded on the server.
c) Execute "imcl listAvailablePackages" against the repository to validate the packages.

example:
[root@wasnode tools]# cd /opt/IBM/InstallationManager/eclipse/tools
[root@wasnode tools]# ./imcl listAvailablePackages -repositories /IBMSoftware/was8.5_IHS_8.5/was8.5.5/WASND
com.ibm.websphere.ND.v85_8.5.5000.20130514_1044

2] Execute "imcl install" to install the WAS ND package "com.ibm.websphere.ND.v85_8.5.5000.20130514_1044":

Example:
[root@wasnode tools]#./imcl install com.ibm.websphere.ND.v85_8.5.5000.20130514_1044 -repositories /IBMSoftware/was8.5_IHS_8.5/was8.5.5/WASND -installationDirectory /opt/IBM/WebSphere/AppServer -acceptLicense -sP

3] Similarly, you can install the SDK 7 package:

Example:

[root@wasnode tools]# ./imcl listAvailablePackages -repositories /IBMSoftware/was8.5_IHS_8.5/was8.5.5/SDK
com.ibm.websphere.IBMJAVA.v70_7.0.4001.20130510_2103
com.ibm.websphere.liberty.IBMJAVA.v70_7.0.4001.20130510_2103
[root@wasnode tools]# ./imcl install com.ibm.websphere.IBMJAVA.v70_7.0.4001.20130510_2103 -repositories /IBMSoftware/was8.5_IHS_8.5/was8.5.5/SDK -installationDirectory /opt/IBM/WebSphere/AppServer -acceptLicense -sP

4] List the installed packages to verify the packages you have installed:

Example:
[root@wasnode tools]# ./imcl listInstalledPackages
com.ibm.cic.agent_1.6.2000.20130301_2248
com.ibm.websphere.IBMJAVA.v70_7.0.4001.20130510_2103
com.ibm.websphere.ND.v85_8.5.5003.20140730_1249

5] List the directories where the WebSphere packages are installed:

Example:
[root@wasnode tools]# ./imcl listInstallationDirectories
/opt/IBM/WebSphere/AppServer

Finally, after installation you can run ./versioninfo.sh from /opt/IBM/WebSphere/AppServer/bin/ to confirm what was installed.

Useful commands for WebSphere Application Server:

Finding what versions are running:
  
While version information is available from the admin console, it is also available for most IBM products in the file product.xml. Beginning with WebSphere Application Server Version 4.0.x, this file also includes information on eFixes that have been installed. Access this file for WebSphere Application Server using these commands: