Tuesday, 16 January 2018

WAS Certification (Architecture)

Apply design considerations of IBM WebSphere Application Server V7.0 when installing it in an enterprise environment (e.g., LDAP, database servers, WebSphere messaging, etc.)
 
The following topics are important to consider when designing a WebSphere Application Server deployment, because they will most likely impact your design significantly:
Scalability - Scalability, as used here, means the ability of the environment to grow as the infrastructure load grows. Scalability depends on many different factors such as hardware, operating system, middleware, and so on.
Caching - WebSphere Application Server provides many different caching features at different locations in the architecture. WebSphere Application Server Network Deployment provides caching features at each possible layer of the infrastructure:
Infrastructure edge:
• Caching Proxy provided by the Edge Components
• WebSphere Proxy Server

HTTP server layer:
• Edge Side Include (ESI) fragment caching capabilities provided by the WebSphere plug-in
• Caching capabilities of the HTTP server itself (like the Fast Response Cache Accelerator (FRCA) as provided by most implementations of IBM HTTP Server)

Application server layer:
• Dynamic caching service inside the application server's JVM
• WebSphere Proxy Server
 
High availability - Designing an infrastructure for high availability means designing it so that your environment can survive the failure of one or multiple components. High availability implies redundancy by avoiding any single point of failure on any layer (network, hardware, processes, and so forth). The number of failing components your environment has to survive without losing service depends on the requirements for the specific environment.

Load-balancing and fail-over - As a result of the design considerations for high availability, you will most likely identify a number of components that need redundancy. Having redundant systems in an architecture requires you to think about how to implement this redundancy so that you get the most benefit from the systems during normal operations, and how you will manage a seamless fail-over if a component fails. In a typical WebSphere Application Server environment, a variety of components need to be considered when implementing load-balancing and fail-over capabilities:

Caching proxy servers
HTTP servers
Containers (such as the Web, SIP, and Portlet containers)
Enterprise JavaBeans (EJB) containers
Messaging engines
Back-end servers (database, enterprise information systems, and so forth)
User registries

Disaster recovery - Disaster recovery concentrates on the actions, processes, and preparations needed to recover from a disaster that has struck your infrastructure. Important points when planning for disaster recovery are as follows:

Recovery Time Objective (RTO) - How much time can pass before the failed component must be up and running?
Recovery Point Objective (RPO) - How much data loss is affordable? The RPO sets the interval for which no data backup can be provided. If no data loss can be afforded, the RPO is zero. For example, an RPO of one hour means you must be able to restore data to a state no more than one hour old.
http://www.ibm.com/developerworks/websphere/techjournal/0707_col_alcott/0707_col_alcott.html
Security - Consider how security will affect your infrastructure:

• Understand the security policy and requirements for your future environment.
• Work with a security subject matter expert to develop a security infrastructure that adheres to the requirements and integrates with the existing infrastructure.
• Make sure that sufficient physical security is in place.
• Make sure the application developers understand the security requirements and code the application accordingly.
• Consider the user registry (or registries) you plan to use. WebSphere Application Server V7.0 supports multiple user registries and multiple security domains.
• Make sure that the user registries do not break the high availability requirements. Even if the user registries you are using are out of scope of the WebSphere Application Server project, considerations for high availability need to be taken and requested. For example, make sure that your LDAP user registries are made highly available and are not a single point of failure.
• Define the trust domain for your environment. All computers in the same WebSphere security domain trust each other. This trust domain can be extended and, when using SPNEGO / Kerberos, can even reach out to the Windows desktops of the users in your enterprise.
• Assess your current implementation design and ensure that every possible access to your systems is secured.
• Consider the level of auditing required and how to implement it.
• Consider how you will secure stored data. Think of operating system security and encryption of stored data.
• Define a password policy, including considerations for handling password expirations for individual users.
• Consider encryption requirements for network traffic. Encryption introduces overhead and increased resource costs, so use encryption only where appropriate.
• Define the encryption (SSL) endpoints in your communications.
• Plan for certificates and their expiration:
  • Decide which traffic requires certificates from a trusted certificate authority and for which traffic a self-signed certificate is sufficient. Secured connections to the outside world usually use a trusted certificate, but for connections inside the enterprise, self-signed certificates are usually enough.
  • Develop a management strategy for the certificates. Many trusted certificate authorities provide online tools that support certificate management, but what about self-signed certificates?
  • How are you going to back up your certificates? Always keep in mind that your certificates are the key to your data; your encrypted data is useless if you lose your certificates.
  • Plan how you will secure your certificates. Certificates are the key to your data, so make sure that they are secured and backed up properly.

http://www.ibm.com/developerworks/websphere/techjournal//0512_botzum/0512_botzum1.html

Relate the various components of the IBM WebSphere Application Server Network
Deployment V7.0 runtime architecture.

WebSphere Application Server Network Deployment (ND) provides the capabilities to build more advanced server infrastructures. It extends the WebSphere Application Server base package and includes the following features:

• Clustering capabilities
• Edge components
• Dynamic scalability
• High availability
• Advanced management features for distributed configurations
WebSphere Application Server Network Deployment Edge Components provide high performance and high availability features. For example, the Load Balancer (a software load balancer) provides horizontal scalability by dispatching HTTP requests among several Web server or application server nodes, supporting various dispatching options and algorithms to assure high availability in high volume environments. Using the Edge Components Load Balancer can reduce Web server congestion, increase content availability, and provide scaling ability for the Web servers.

Illustrate workload management and failover strategies using IBM WebSphere Application Server Network Deployment V7.0.


Two IBM HTTP Server Web servers configured in a cluster
Incoming requests for static content are served by the Web server. Requests for dynamic content are forwarded to the appropriate application server by the Web server plug-in.

A Caching Proxy that keeps a local cache of recently accessed pages
Caching Proxy is included in WebSphere Application Server Edge Components. Cacheable content includes static Web pages and JSPs with dynamically generated but infrequently changed fragments. The Caching Proxy can satisfy subsequent requests for the same content by delivering it directly from the local cache, which is much quicker than retrieving it again from the content host.

A backup server is configured for high availability.

A Load Balancer to direct incoming requests to the Caching Proxy and a second Load Balancer to manage the workload across the HTTP servers
Load Balancer is included in WebSphere Application Server Edge Components. The Load
Balancers distribute incoming client requests across servers, balancing workload and providing high availability by routing around unavailable servers.

A backup server is configured for each primary Load Balancer to provide high availability.

A dedicated server to host the deployment manager
The deployment manager is required for administration but is not critical to the runtime execution of applications. It has a master copy of the configuration that should be backed up on a regular basis.
Two clusters consisting of three application servers
Each cluster spans two machines. In this topology, one cluster contains application servers that provide the Web container functionality of the applications (servlets, JSPs), and the second cluster contains the EJB container functionality. Whether you choose to do this or not is a matter of careful consideration. Although it provides failover and workload management capabilities for both Web and EJB containers, it can also affect performance.
A dedicated database server running IBM DB2 V9.

When to use a high availability manager
 
A high availability manager consumes valuable system resources, such as CPU cycles, heap
memory, and sockets. These resources are consumed both by the high availability manager and by product components that use the services that the high availability manager provides. The amount of resources that both the high availability manager and these product components consume increases nonlinearly as the size of a core group increases.
For large core groups, the amount of resources that the high availability manager consumes can become significant. Disabling the high availability manager frees these resources. However, before you disable the high availability manager, you should thoroughly investigate the current and future needs of your system to ensure that disabling the high availability manager does not also disable other functions that you use that require the high availability manager. For example, both memory-to-memory session replication and remote request dispatcher (RRD) require the high availability manager to be enabled.
The capability to disable the high availability manager is most useful for topologies where none of the high availability manager provided services are used. In certain topologies, only some of the processes use the services that the high availability manager provides. In these topologies, you can disable the high availability manager on a per-process basis, which optimizes the amount of resources that the high availability manager uses.

Do not disable the high availability manager on administrative processes, such as node agents and the deployment manager, unless the high availability manager is disabled on all application server processes in that core group.
Some of the services that the high availability manager provides are cluster based. Therefore, because cluster members must be homogeneous, if you disable the high availability manager on one member of a cluster, you must disable it on all of the other members of that cluster.

When determining if you must leave the high availability manager enabled on a given application server process, consider if the process requires any of the following high availability manager services:
* Memory-to-memory replication
* Singleton failover
* Workload management routing
* On-demand configuration routing

Memory-to-memory replication
Memory-to-memory replication is a cluster-based service that you configure or enable at the
application server level. If memory-to-memory replication is enabled on any cluster member,
then the high availability manager must be enabled on all of the members of that cluster.
Memory-to-memory replication is automatically enabled if:
* Memory-to-memory replication is enabled for Web container HTTP sessions.
* Cache replication is enabled for the dynamic cache service.
* EJB stateful session bean failover is enabled for an application server.
Singleton failover
Singleton failover is a cluster-based service. The high availability manager must be enabled on all members of a cluster if:
* The cluster is configured to use the high availability manager to manage the recovery
of transaction logs.
* One or more instances of the default messaging provider are configured to run in the
cluster. The default messaging provider that is provided with the product is also referred
to as the service integration bus.

Workload management routing
Workload management (WLM) propagates the following classes or types of routing information:
* Routing information for enterprise bean IIOP traffic.
* Routing information for the default messaging engine, which is also referred to as the
service integration bus.
* Routing HTTP requests through the IBM® WebSphere® Application Server proxy
server.
* Routing Web Services Addressing requests through the IBM WebSphere Application
Server proxy server.
* Routing SIP (Session Initiation Protocol) requests.

WLM uses the high availability manager to both propagate the routing information and make it highly available. Although WLM routing information typically applies to clustered resources, it can also apply to non-clustered resources, such as standalone messaging engines. Under typical circumstances, you must leave the high availability manager enabled on any application server that produces or consumes either IIOP or messaging engine routing information.
For example, if:

* The routing information producer is an enterprise bean application that resides in
cluster 1.
* The routing information consumer is a servlet that resides in cluster 2.
When the servlet in cluster 2 calls the enterprise bean application in cluster 1, the high
availability manager must be enabled on all servers in both clusters.
Workload management provides an option to statically build and export route tables to the file system. Use this option to eliminate the dependency on the high availability manager.
On-demand configuration routing
 
In a Network Deployment system, the on-demand configuration is used for IBM WebSphere
Application Server proxy server routing. If you want to use on-demand configuration routing in
conjunction with your Web services, you must verify that the high availability manager is enabled on the proxy server and on all of the servers to which the proxy server routes work.
Resource adapter management
 
New feature: In a high availability environment, you can configure your resource adapters for high availability. After a resource adapter is configured for high availability, the high availability manager assigns the resource adapter to a high availability group, the name for which is derived from the resource adapter key. The resource adapter key is the cell-scoped configuration ID of the resource adapter, and must be identical for all of the servers in a cluster that use the same resource adapter. The high availability manager then controls when each resource adapter is started. When you configure a resource adapter for high availability, select one of the following types of failover:

* Message endpoint (MEP) failover. When a resource adapter is configured for MEP
failover, that resource adapter can be active within any member of a high availability
group, but only supports inbound communication within one high availability group
member.
* Resource adapter (RA) instance failover. When a resource adapter is configured for
RA instance failover, that resource adapter can support either outbound or inbound
communication within one high availability group member at a time.
When a resource adapter that is configured for high availability starts, the Java EE Connector Architecture (JCA) container joins the resource adapter into the high availability group. This high availability group is configured to run under the one-of-N policy with quorum disabled. When MEP failover is selected, the container starts the adapter with outbound communication enabled, but disables inbound communication because the high availability manager controls inbound communication for that resource adapter. When RA instance failover is selected, the container starts the adapter and disables both outbound and inbound communication because the high availability manager controls both inbound and outbound communication for that resource adapter.
When the run time stops a resource adapter that is configured for high availability, the JCA
container removes the resource adapter from the high availability group to which it was
assigned.

Disabling or enabling a high availability manager
A unique HAManagerService configuration object exists for every core group member. The enable attribute in this configuration object determines if the high availability manager is enabled or disabled for the corresponding process. When the enable attribute is set to true, the high availability manager is enabled. When the enable attribute is set to false, the high availability manager is disabled. By default, the high availability manager is enabled. If the setting for the enable attribute is changed, the corresponding process must be restarted before the change goes into effect. You can change this setting from the administrative console (as described in the procedure below) or with the wsadmin tool.
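As a rough wsadmin (Jython) sketch only, not taken verbatim from the product documentation: the HAManagerService object and its enable attribute are exactly those named above, but the lookup of the right object for your process will vary by cell, so treat this as an outline:

# Pick an HAManagerService object (the first one is used here purely for
# illustration; select the one belonging to the process you want to change).
ham = AdminConfig.list('HAManagerService').splitlines()[0]
# Set enable to false to disable the high availability manager.
AdminConfig.modify(ham, [['enable', 'false']])
AdminConfig.save()
# Restart the process for the change to take effect.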

Determine if you need to use a high availability manager to manage members of a core group.
About this task
You might want to disable a high availability manager if you are trying to reduce the amount of resources, such as CPU and memory, that the product uses and have determined that the high availability manager is not required on some or all of the processes in a core group.
You might need to enable a high availability manager that you previously disabled because you are installing applications on core group members that must be highly available.
Complete the following steps if you need to disable a high availability manager or to enable a
high availability manager that you previously disabled.
Procedure

1. In the administrative console, navigate to the Core group service page for the
process.
* For a deployment manager, click System Administration > Deployment manager >
Core group service.
* For a node agent, click System Administration > Node agent > node_agent > Core
group service.
* For an application server, click Servers > Server Types > WebSphere application
servers > server_name > Core group service.
2. If you want to disable the high availability manager for this process, deselect the
Enable service at server startup option.
3. If you want to enable the high availability manager for this process, select the Enable
service at server startup option.
4. Click OK and then click Review.
5. Select Synchronize changes with nodes, and then click Save.
6. Restart all of the processes for which you changed the Enable service at server
startup property setting.


Clusters and workload management
 
Clusters are sets of servers that are managed together and participate in workload management. Clusters enable enterprise applications to scale beyond the throughput that a single application server can achieve. Clusters also enable enterprise applications to be highly available because requests are automatically routed to the running servers in the event of a failure.
The servers that are members of a cluster can be on different host machines. In contrast, servers that are part of the same node must be located on the same host machine. A cell can include no clusters, one cluster, or multiple clusters.
 
Servers that belong to a cluster are members of that cluster set and must all have identical
application components deployed on them. Other than the applications configured to run on
them, cluster members do not have to share any other configuration data. One cluster member might be running on a huge multi-processor enterprise server system, while another member of that same cluster might be running on a smaller system. The server configuration settings for each of these two cluster members are very different, except in the area of application components assigned to them. In that area of configuration, they are identical. This allows client work to be distributed across all the members of a cluster instead of all workload being handled by a single application server.

When you create a cluster, you make copies of an existing application server template. The
template is most likely an application server that you have previously configured. You are offered the option of making that server a member of the cluster. However, it is recommended that you keep the server available only as a template, because the only way to remove a cluster member is to delete the application server. When you delete a cluster, you also delete any application servers that were members of that cluster. There is no way to preserve any member of a cluster.

Keeping the original template intact allows you to reuse the template if you need to rebuild the configuration.

A vertical cluster has cluster members on the same node, or physical machine. A horizontal
cluster has cluster members on multiple nodes across many machines in a cell. You can
configure either type of cluster, or have a combination of vertical and horizontal clusters.
[AIX Solaris HP-UX Linux Windows] [iSeries] Clustering application servers that host Web
containers automatically enables plug-in workload management for the application servers and the servlets they host. The routing of servlet requests occurs between the Web server plug-in and clustered application servers using HTTP transports, or HTTP transport channels.


Tuesday, 19 December 2017

A brief history of garbage collection

The benefits of garbage collection are indisputable -- increased reliability, decoupling of memory management from class interface design, and less developer time spent chasing memory management errors. The well-known problems of dangling pointers and memory leaks simply do not occur in Java programs. (Java programs can exhibit a form of memory leak, more accurately called unintentional object retention, but this is a different problem.) However, garbage collection is not without its costs -- among them performance impact, pauses, configuration complexity, and nondeterministic finalization.
An ideal garbage collection implementation would be totally invisible -- there would be no garbage collection pauses, no CPU time would be lost to garbage collection, the garbage collector wouldn't interact negatively with virtual memory or the cache, and the heap wouldn't need to be any larger than the residency (heap occupancy) of the application. Of course, there are no perfect garbage collectors, but garbage collectors have improved significantly over the past ten years.

How does garbage collection work?

There are several basic strategies for garbage collection: reference counting, mark-sweep, mark-compact, and copying. In addition, some algorithms can do their job incrementally (the entire heap need not be collected at once, resulting in shorter collection pauses), and some can run while the user program runs (concurrent collectors). Others must perform an entire collection at once while the user program is suspended (so-called stop-the-world collectors). Finally, there are hybrid collectors, such as the generational collector employed by the 1.2 and later JDKs, which use different collection algorithms on different areas of the heap.
When evaluating a garbage collection algorithm, we might consider any or all of the following criteria:
  • Pause time. Does the collector stop the world to perform collection? For how long? Can pauses be bounded in time?
  • Pause predictability. Can garbage collection pauses be scheduled at times that are convenient for the user program, rather than for the garbage collector?
  • CPU usage. What percentage of the total available CPU time is spent in garbage collection?
  • Memory footprint. Many garbage collection algorithms require dividing the heap into separate memory spaces, some of which may be inaccessible to the user program at certain times. This means that the actual size of the heap may be several times bigger than the maximum heap residency of the user program.
  • Virtual memory interaction. On systems with limited physical memory, a full garbage collection may fault nonresident pages into memory to examine them during the collection process. Because the cost of a page fault is high, it is desirable that a garbage collector properly manage locality of reference.
  • Cache interaction. Even on systems where the entire heap can fit into main memory, which is true of virtually all Java applications, garbage collection will often have the effect of flushing data used by the user program out of the cache, imposing a performance cost on the user program.
  • Effects on program locality. While some believe that the job of the garbage collector is simply to reclaim unreachable memory, others believe that the garbage collector should also attempt to improve the reference locality of the user program. Compacting and copying collectors relocate objects during collection, which has the potential to improve locality.
  • Compiler and runtime impact. Some garbage collection algorithms require significant cooperation from the compiler or runtime environment, such as updating reference counts whenever a pointer assignment is performed. This creates both work for the compiler, which must generate these bookkeeping instructions, and overhead for the runtime environment, which must execute these additional instructions. What is the performance impact of these requirements? Does it interfere with compile-time optimizations?
Regardless of the algorithm chosen, trends in hardware and software have made garbage collection far more practical. Empirical studies from the 1970s and 1980s show garbage collection consuming between 25 percent and 40 percent of the runtime in large Lisp programs. While garbage collection may not yet be totally invisible, it sure has come a long way.

The basic algorithms

The problem faced by all garbage collection algorithms is the same -- identify blocks of memory that have been dispensed by the allocator, but are unreachable by the user program. What do we mean by unreachable? Memory blocks can be reached in one of two ways -- if the user program holds a reference to that block in a root, or if there is a reference to that block held in another reachable block. In a Java program, a root is a reference to an object held in a static variable or in a local variable on an active stack frame. The set of reachable objects is the transitive closure of the root set under the points-to relation.

Reference counting

The most straightforward garbage collection strategy is reference counting. Reference counting is simple, but requires significant assistance from the compiler and imposes overhead on the mutator (the term for the user program, from the perspective of the garbage collector). Each object has an associated reference count -- the number of active references to that object. If an object's reference count is zero, it is garbage (unreachable from the user program) and can be recycled. Every time a pointer reference is modified, such as through an assignment statement, or when a reference goes out of scope, the compiler must generate code to update the referenced object's reference count. If an object's reference count goes to zero, the runtime can reclaim the block immediately (and decrement the reference counts of any blocks that the reclaimed block references), or place it on a queue for deferred collection.
Many ANSI C++ library classes, such as string, employ reference counting to provide the appearance of garbage collection. By overloading the assignment operator and exploiting the deterministic finalization provided by C++ scoping, C++ programs can use the string class as if it were garbage collected. Reference counting is simple, lends itself well to incremental collection, and the collection process tends to have good locality of reference, but it is rarely used in production garbage collectors for a number of reasons, such as its inability to reclaim unreachable cyclic structures (objects that reference each other directly or indirectly, like a circularly linked list or a tree that contains back-pointers to the parent node).
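To make the cycle problem concrete, here is a minimal, hypothetical Java sketch of the kind of structure a pure reference-counting collector could never reclaim:

class Node {
    Node next;
}

public class CycleDemo {
    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.next = b;   // b is now referenced by the root b and by a.next
        b.next = a;   // a is now referenced by the root a and by b.next

        a = null;     // a's count would drop to 1 (b.next still points to it)
        b = null;     // b's count would drop to 1 (a.next still points to it)

        // Both nodes are now unreachable from any root, but under pure
        // reference counting their counts never reach zero, so they would
        // leak. A tracing collector, as used by the JDK, reclaims them on
        // the next collection because it traces from the roots.
    }
}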

Tracing collectors

None of the standard garbage collectors in the JDK uses reference counting; instead, they all use some form of tracing collector. A tracing collector stops the world (although not necessarily for the entire duration of the collection) and starts tracing objects, starting at the root set and following references until all reachable objects have been examined. Roots can be found in program registers, in local (stack-based) variables in each thread's stack, and in static variables.

Mark-sweep collectors

The most basic form of tracing collector, first proposed by Lisp inventor John McCarthy in 1960, is the mark-sweep collector, in which the world is stopped and the collector visits each live node, starting from the roots, and marks each node it visits. When there are no more references to follow, collection is complete, and then the heap is swept (that is, every object in the heap is examined), and any object not marked is reclaimed as garbage and returned to the free list. Figure 1 illustrates a heap prior to garbage collection; the shaded blocks are garbage because they are unreachable by the user program:


Copying collectors

In a copying collector, another form of tracing collector, the heap is divided into two equally sized semi-spaces, one of which contains active data and the other is unused. When the active space fills up, the world is stopped and live objects are copied from the active space into the inactive space. The roles of the spaces are then flipped, with the old inactive space becoming the new active space.
Copying collection has the advantage of only visiting live objects, which means garbage objects will not be examined, nor will they need to be paged into memory or brought into the cache. The duration of collection cycles in a copying collector is driven by the number of live objects. However, copying collectors have the added cost of copying the data from one space to another, adjusting all references to point to the new copy. In particular, long-lived objects will be copied back and forth on every collection.

Heap compaction

Copying collectors have another benefit, which is that the set of live objects are compacted into the bottom of the heap. This not only improves locality of reference of the user program and eliminates heap fragmentation, but also greatly reduces the cost of object allocation -- object allocation becomes a simple pointer addition on the top-of-heap pointer. There is no need to maintain free lists or look-aside lists, or perform best-fit or first-fit algorithms -- allocating N bytes is as simple as adding N to the top-of-heap pointer and returning its previous value, as suggested in Listing 1:
Listing 1. Inexpensive memory allocation in a copying collector
/* Pointer-bump allocation: the free area of the heap is contiguous,
   so allocating n bytes is just advancing the top-of-heap pointer. */
void *malloc(int n) {
    if (heapTop - heapStart < n)   /* not enough contiguous space left */
        doGarbageCollection();     /* copy live objects, reset the pointers */

    void *wasStart = heapStart;    /* address handed back to the caller */
    heapStart += n;                /* bump the pointer past the new block */
    return wasStart;
}
Developers who have implemented sophisticated memory management schemes for non-garbage-collected languages may be surprised at how inexpensive allocation is -- a simple pointer addition -- in a copying collector. This may be one of the reasons for the pervasive belief that object allocation is expensive -- earlier JVM implementations did not use copying collectors, and developers are still implicitly assuming allocation cost is similar to other languages, like C, when in fact it may be significantly cheaper in the Java runtime. Not only is the cost of allocation smaller, but for objects that become garbage before the next collection cycle, the deallocation cost is zero, as the garbage object will be neither visited nor copied.

Mark-compact collectors

The copying algorithm has excellent performance characteristics, but it has the drawback of requiring twice as much memory as a mark-sweep collector. The mark-compact algorithm combines mark-sweep and copying in a way that avoids this problem, at the cost of some increased collection complexity. Like mark-sweep, mark-compact is a two-phase process, where each live object is visited and marked in the marking phase. Then, marked objects are copied such that all the live objects are compacted at the bottom of the heap. If a complete compaction is performed at every collection, the resulting heap is similar to the result of a copying collector -- there is a clear demarcation between the active portion of the heap and the free area, so that allocation costs are comparable to a copying collector. Long-lived objects tend to accumulate at the bottom of the heap, so they are not copied repeatedly as they are in a copying collector.




Sunday, 10 December 2017

Importance of the Java heap size in WebSphere Application Server

Introduction

Java heap is the area of memory that is used by the Java virtual machine (JVM) for storing Java objects. The optimal Java heap size is application and use dependent. Setting the JVM heap size is directly related to the number of server instances that need to be started on a specific node and the total RAM available on the machine. As a rule of thumb, the maximum heap size should not exceed 50% of the overall physical memory. The Java heap memory is used by the applications that are deployed and the components running in WebSphere Application Server. It is extremely important to monitor Java heap usage, which can be done by enabling verbose garbage collection (the -verbose:gc JVM argument). Every WebSphere Application Server instance runs in its own JVM. The default JVM settings for the initial heap (50 MB) and the maximum heap (256 MB) are usually good enough for very small volume applications, but they are not good for a live production environment.

The following list describes Java Heap Size issues that I have seen in numerous common Java Virtual Machine problems throughout my technical support years. Most of these issues can easily be prevented by taking simple precautionary steps. So, if you are a WebSphere Application Server administrator, this is your must-read!


Why does the JVM heap size setting need to be tuned?

JVM heap size settings will likely need to be tuned to support a combination of the following scenarios:

A very large application is deployed.
A large number of applications are deployed.
A high volume of transactions must be handled concurrently, or request sizes are large.

What is the importance of setting the Java heap size to a larger value?

Allows more objects to be created.
Takes longer to fill.
Allows the application to run longer between Garbage Collection events.


What issues occur when you set the Java heap size to a smaller value?

Holds fewer objects.
Fills more quickly.
Garbage collected more frequently.
May lead to an out-of-memory error.

What are the two main areas to watch for when it comes to JVM heap size?

How quickly does the heap size grow?
How long does it take to perform Garbage Collection?


What are the common Java heap size issues in WebSphere Application Server?

A common issue with a low heap size is an out-of-memory error. If you are deploying a very large application using the administrative console, the deployment can fail with an out-of-memory error. In this case, you need to increase the maximum Java heap size value of the Deployment Manager and the Node Agent.
DRS uses the High Availability Manager to transfer data from one server to the other server. An out-of-memory error can happen on a server that cannot handle large objects. The solution is to increase the Java heap size on the failing server.
Installing a large application using a wsadmin script can throw an out-of-memory error. The solution is to increase the Java heap size used by the wsadmin script. See the following blog for information on how to edit the wsadmin script:
https://www.ibm.com/developerworks/community/blogs/timdp/entry/avoiding_java_lang_outofmemoryerror_when_installing_application_with_wsadmin35


What needs to be done if you see an out-of-memory error with the default Java Heap Size?

If you are seeing a Java heap out-of-memory error and you are using the default or a small heap size, the first step is to increase the heap size. Sometimes this approach resolves the problem, as the application simply needed more memory than was configured. Other times, you will still see the out-of-memory error with the larger heap size. A larger heap gives leaking objects more room to accumulate, which makes them easier to find in a heap dump; with a small heap, there will not be many leaked objects, which makes them hard to find.


How do you determine that the maximum heap size is too large or too small for your application?

If garbage collection takes a long time to clean up objects with a large heap, you can reduce the maximum heap size. If the garbage collection frequency is too high, the heap might be too small for the application, forcing garbage collection to run frequently; in that case, you might increase the maximum heap size.


How do you change the Java heap size settings?

The JVM heap size settings can be changed from the administrative console using these steps:
Expand Servers > Server Types > WebSphere application servers and click your server name.
Click Java and process management > Process definition > Java virtual machine.
Adjust the Initial heap size (-Xms) and Maximum heap size (-Xmx) values.
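As a hypothetical example only (the right values depend entirely on your application and machine), a server sized for a 1 GB maximum heap with verbose garbage collection enabled would end up running with generic JVM arguments like:

-Xms512m -Xmx1024m -verbose:gc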

"Unveiling the java.lang.Out OfMemoryError"



What Is a java.lang.OutOfMemoryError?
 
A java.lang.OutOfMemoryError is a subclass of java.lang.VirtualMachineError that is thrown when the Java Virtual Machine is broken or has run out of resources that are necessary to continue the operation of the Java Virtual Machine. Obviously, memory is the exhausted resource for a java.lang.OutOfMemoryError, which is thrown when the Java Virtual Machine cannot allocate an object due to memory constraints. Unfortunately, the Java specification of java.lang.OutOfMemoryError does not elaborate further on what kind of memory it's talking about.
There are six different types of runtime data areas, or memory areas, in the Java Virtual Machine:


1. Program Counter Register
2. Java Virtual Machine Stack
3. Heap
4. Method Area
5. Runtime Constant Pool
6. Native Method Stack
 
The Program Counter Register, also known as the pc register, stores the address of the Java byte code instruction that is currently being executed (much like the processor register in the central processing unit of the device on which you are reading this article). You will not see a java.lang.OutOfMemoryError from the pc register, since a program counter is not conventionally considered memory.
Java Virtual Machine Stacks contain frames where data, return values, and partial execution results are stored. Java Virtual Machine Stacks can be expanded during runtime. If there's not enough memory for the expansion of an existing Java Virtual Machine stack, or for the creation of a new Java Virtual Machine stack for a new thread, the Java Virtual Machine will throw a java.lang.OutOfMemoryError.
The Heap is where instances of Java classes and arrays are allocated. A java.lang.OutOfMemoryError will be thrown when there is not enough memory available for instances of Java classes or arrays.
 
The Method Area stores per-class information such as the runtime constant pool, the code for methods and constructors, and field and method data. If there's not enough memory in the method area, you will encounter a java.lang.OutOfMemoryError.
The Runtime Constant Pool contains constants such as field references and literals (Java literals are syntactic representations of boolean, character, numeric, or string data). A java.lang.OutOfMemoryError will be thrown when not enough memory is available for the construction of the runtime constant pool area.
Native Method Stacks store conventional stacks, also known as C stacks, to support native methods that are written in a non-Java language such as C/C++. Native method stacks can be expanded during runtime. If there's not enough memory for the expansion of an existing native method stack, or for the creation of a new native method stack for a new thread, you would see a java.lang.OutOfMemoryError.
You may have seen a java.lang.StackOverflowError, which is completely different from a java.lang.OutOfMemoryError. A java.lang.StackOverflowError is thrown when native method stacks or Java Virtual Machine stacks need more memory than is configured. In most IBM Java Virtual Machine implementations, the -Xmso command-line option controls the stack size for operating system (native) threads, and the -Xss command-line option controls the stack size for Java threads. In some implementations, such as Sun Microsystems' HotSpot Java Virtual Machine, Java methods share stack frames with C/C++ native code, and the maximum stack size for a thread can be configured with the -Xss Java command-line option. The default sizes of these options vary by platform and implementation, but are usually between 256 KB and 1024 KB. Please refer to the documentation of your Java virtual machine for more specific information.
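As a small, hypothetical illustration: unbounded recursion exhausts a Java Virtual Machine stack and raises java.lang.StackOverflowError rather than java.lang.OutOfMemoryError:

public class StackDemo {
    static long depth = 0;

    static void recurse() {
        depth++;
        recurse();   // each call adds a stack frame until the stack is exhausted
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Stack overflowed at depth " + depth);
        }
    }
}

Running this with a smaller -Xss value makes the overflow happen at a shallower depth.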
Now that we understand which memory areas could cause a java.lang.OutOfMemoryError, let's take a look at actual error messages. What does a java.lang.OutOfMemoryError look like and how can I address each symptom? Have you ever seen a java.lang.OutOfMemoryError similar to the following?
 
java.lang.OutOfMemoryError: Requested array size exceeds VM limit:
This error message indicates a memory request for an array that is larger than a predefined limit of the virtual machine. What do we do if we encounter this kind of java.lang.OutOfMemoryError? We need to check the source code to make sure that no huge array is created, dynamically or statically. Fortunately, recent virtual machines usually do not have this limit.
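A minimal, hypothetical example that provokes this class of error (the exact message depends on the JVM; some report a plain Java heap space error instead):

public class HugeArrayDemo {
    public static void main(String[] args) {
        // Roughly a 16 GB request: 2^31 - 1 elements of 8 bytes each.
        long[] huge = new long[Integer.MAX_VALUE];
        System.out.println(huge.length);
    }
}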
java.lang.OutOfMemoryError: PermGen space:
 
You will see an OutOfMemoryError like the message above when the Permanent Generation area of the Java heap is full.
On some Java Virtual Machines, such as Sun Microsystems' HotSpot Java Virtual Machine, a dedicated memory area called the permanent generation (or permanent region) stores objects that describe classes and methods. We can visualize the usage of the permanent generation with the IBM Pattern Modeling and Analysis Tool for the Java Garbage Collector.

Important questions and answers for WebSphere Application Server

1Q). How does the nodeagent monitor the application server, and how does it know the previous state of the application server?

When the nodeagent monitors the application server (with a monitoring policy configured), it saves the server state information in the monitored.state file. It maintains the previous server state and the application server PID. In case of an application server crash or hang, the nodeagent gets the previous state of the server from the monitored.state file and then tries to start the application server automatically.

Note: If you notice StringIndexOutOfBoundsException or any other exception in the NodeAgent.loadNodeState stack (in the nodeagent SystemOut.log file), it means the monitored.state file is corrupted. You must stop all servers, delete the file, and then start the nodeagent again. For example:

Caused by: java.lang.StringIndexOutOfBoundsException
at java.lang.String.substring(String.java:1115)
at com.ibm.ws.management.nodeagent.NodeAgent.loadNodeState(NodeAgent.java:3210)

2Q). My application servers were monitored by the nodeagent. When a server was hung, why didn't the nodeagent restart it?

The nodeagent PidWaiter pings the application server at every ping timeout interval to get the status of the application server. If the PidWaiter does not get a response back from the application server, the server is considered hung. Once the application server is identified as unresponsive, the nodeagent sends a SIGTERM to the process, which does not guarantee the process is immediately stopped: it sends the signal and waits for the process to shut down normally. If the server does not respond to the signal, it just stays hung forever.

If you want the server to be killed when it is hung or does not respond to the nodeagent ping, set the "com.ibm.server.allow.sigkill" custom property to true on the nodeagent. Please review the section "Java virtual machine settings" in the product documentation for more information.

3Q). How can we start the application servers in parallel? In other words, can I start all application servers at the same time (not in sequence)?

Yes, you can do it using the com.ibm.websphere.management.nodeagent.bootstrap.maxthreadpool custom property.

Set the property under System Administration > Node agent > nodeagent_name > Java and process management > Process definition > Java virtual machine > Custom properties.

Use this property to control the number of threads that can be included in a newly created thread pool. A dedicated thread is created to start each application server Java virtual machine (JVM). The JVMs with dedicated threads in this thread pool are the JVMs that are started in parallel whenever the node agent starts.

You can specify an integer from 0 - 5 as the value for this property. If the value you specify is greater than 0, a thread pool is created with that value as the maximum number of threads that can be included in this newly created thread pool. The supported values for this custom property and their effects are:

Property set to 0 or not specified - The node agent starts up to five JVMs in parallel.
Property set to 1 - The node agent starts the JVMs serially.
Property set to a value between 2 and 5 - The node agent starts a number of JVMs equal to the specified value in parallel.

Note: With this property you can only start a maximum of 5 servers at a time.

Load Balancing



Load balancers :


A load balancer, also referred to as an IP sprayer, enables horizontal scalability by dispatching TCP/IP traffic among several identically configured servers. Depending on the product used for load balancing, different protocols are supported.
Load balancing is implemented using the Load Balancer Edge component provided with the Network Deployment package, which provides load balancing capabilities for HTTP, FTP, SSL, SMTP, NNTP, IMAP, POP3, Telnet, SIP, and any other TCP-based application.
Horizontal scaling topology with an IP sprayer
Load balancing products can be used to distribute HTTP requests among Web servers running on multiple physical machines. The Load Balancer component of Network Dispatcher, for example, is an IP sprayer (LB) that performs intelligent load balancing among Web servers based on server availability and workload.
 

The figure below illustrates a horizontal scaling configuration that uses an IP sprayer to redistribute requests between Web servers on multiple machines.
Simple IP sprayer horizontally scaled topology:


The active Load Balancer hosts the highly available TCP/IP address, the cluster address of your service, and sprays requests to the Web servers. At the same time, the Load Balancer keeps track of the Web servers' health and routes requests around Web servers that are not available. To avoid having the Load Balancer be a single point of failure, the Load Balancer is set up in a hot-standby cluster. The primary Load Balancer communicates its state and routing table to the secondary Load Balancer. The secondary Load Balancer monitors the primary Load Balancer through heartbeats and takes over when it detects a problem with the primary Load Balancer. Only one Load Balancer is active at a time.

=====================================================
Source:
https://www.citrix.com/content/dam/citrix/en_us/documents/partner-documents/configuring-citrix-netscaler-for-ibm-websphere-application-services-en.pdf 
Understanding IBM HTTP Server plug-in Load Balancing in a clustered environment:

Problem (Abstract)

After setting up the HTTP plug-in for load balancing in a clustered IBM WebSphere environment, the request load is not evenly distributed among back-end WebSphere Application Servers.

Cause

In most cases, the preceding behavior is observed because of a misunderstanding of how HTTP plug-in load balancing algorithms work, or might be due to an improper configuration. Also, the type of Web server (multi-threaded versus single-threaded) being used can affect this behavior.

Resolving the problem

The following document is designed to assist you in understanding how HTTP plug-in load balancing works, along with providing you some helpful tuning parameters and suggestions to better maximize the ability of the HTTP plug-in to distribute load evenly.

Note: The following information is written specifically for the IBM HTTP Server; however, it is in general applicable to other Web servers which currently support the HTTP plug-in (for example: IIS, SunOne, Domino, and so on).
Also, the WebSphere plug-in versions 6.1 and later offer the property "IgnoreAffinityRequests" to address the limitation outlined in this technote. In addition, WebSphere versions 6.1 and later offer better facilities for updating the configuration through the administrative panels without manual editing.

For additional information regarding this plug-in property, visit IgnoreAffinityRequests.


Load Balancing

Background

In clustered Application Server environments, IBM HTTP Servers spray Web requests to the cluster members for balancing the work load among relevant application servers. The strategy for load balancing and the necessary parameters can be specified in the plugin-cfg.xml file. The default and most commonly used strategy for workload balancing is 'Weighted Round Robin'. For details, refer to the IBM Redbooks technote.

Most commercial Web applications use HTTP sessions for holding some kind of state information while using the stateless HTTP protocol. The IBM HTTP Server attempts to ensure that all the Web requests associated with an HTTP session are directed to the application server that is the primary owner of the session. These requests are called session-ed requests, session-affinity requests, and so on. In this document the term 'sticky requests' or 'sticky routing' will be used to refer to Web requests associated with HTTP sessions and their routing to a cluster member.

The round robin algorithm used by the HTTP plug-in in releases V5.0, V5.1, and V6.0 can be roughly described as follows: while setting up its internal routing table, the HTTP plug-in component eliminates the non-trivial greatest common divisor (GCD) from the set of cluster member weights specified in the plugin-cfg.xml file. For example, if we have three cluster members with specified static weights of 8, 6, and 18, the internal routing table will have 4, 3, and 9 as the starting dynamic weights of the cluster members after factoring out 2 = GCD(8, 6, 18). The plugin-cfg.xml fragment for this example is shown below; a small sketch of the weight reduction follows the fragment.
  • <ServerCluster CloneSeparatorChange="false" LoadBalance="Round Robin"
    Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" 
    RemoveSpecialHeaders="true" RetryInterval="60">

    <Server CloneID="10k66djk2" ConnectTimeout="0" 
    ExtendedHandshake="false" LoadBalanceWeight="8"
    MaxConnections="0" Name="Server1_WebSphere_Appserver" 
    WaitForContinue="false">
    <Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
    </Server>

    <Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
    LoadBalanceWeight="6" MaxConnections="0" Name="Server2_WebSphere_Appserver"
     WaitForContinue="false">
    <Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
    </Server>

    <Server CloneID="10k68xtw10" ConnectTimeout="0" ExtendedHandshake="false" 
    LoadBalanceWeight="18" MaxConnections="0" 
    Name="Server3_WebSphere_Appserver" WaitForContinue="false">
    <Transport Hostname="server3.domain.com" Port="9091" Protocol="http"/>
    </Server>

    <PrimaryServers>
    <Server Name="Server1_WebSphere_Appserver"/>
    <Server Name="Server2_WebSphere_Appserver"/>
    <Server Name="Server3_WebSphere_Appserver"/>
    </PrimaryServers>
    </ServerCluster>






<ServerCluster CloneSeparatorChange="false" LoadBalance="Round Robin"
Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" 
RemoveSpecialHeaders="true" RetryInterval="60">

<Server CloneID="10k66djk2" ConnectTimeout="0" 
ExtendedHandshake="false" LoadBalanceWeight="1" 
MaxConnections="0" Name="Server1_WebSphere_Appserver"
 WaitForContinue="false">
<Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
</Server>

<Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
LoadBalanceWeight="1" MaxConnections="0" 
Name="Server2_WebSphere_Appserver" WaitForContinue="false">
<Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
</Server>

<PrimaryServers>
<Server Name="Server1_WebSphere_Appserver"/>
<Server Name="Server2_WebSphere_Appserver"/>
</PrimaryServers>
</ServerCluster>



<ServerCluster CloneSeparatorChange="false" LoadBalance="Random"
Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" 
RemoveSpecialHeaders="true" RetryInterval="60">

<Server CloneID="10k66djk2" ConnectTimeout="0" ExtendedHandshake="false"
 LoadBalanceWeight="2" MaxConnections="0" Name="Server1_WebSphere_Appserver"
 WaitForContinue="false">
<Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
</Server>

<Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
LoadBalanceWeight="2" MaxConnections="0" 
Name="Server2_WebSphere_Appserver" WaitForContinue="false">
<Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
</Server>

<PrimaryServers>
<Server Name="Server1_WebSphere_Appserver"/>
<Server Name="Server2_WebSphere_Appserver"/>
</PrimaryServers>
</ServerCluster>

The type of Web server matters here as well: the UNIX implementations of IBM HTTP Server are multi-threaded, typically running a small number of processes with many threads each, as in this worker MPM configuration:

UNIX:
<IfModule worker.c>
ThreadLimit 250
ServerLimit 2
StartServers 2
MaxClients 500
MinSpareThreads 2
MaxSpareThreads 325

ThreadsPerChild 250
MaxRequestsPerChild 10000
</IfModule>

fix pack installation on websphere using command line (imcl)

Hi friends, today here is one more important topic in WAS. For installing fix packs and iFixes from the command line, we use the imcl command.

* When installing products and fixes with IBM Installation Manager, Installation Manager searches open repositories where packages and fixes exist. But how can you verify that the applicable packages, updates, and features are in a target repository from the command line?

Installation Manager provides the command line tool, imcl, to manage installation. The imcl command can be found in the <IM_ROOT>/eclipse/tools subdirectory.

**Following are the commands and their descriptions:

1] encryptString stringToEncrypt:

   Encrypt the entered string. Use the encryptString command with the -passwordKey option to increase encryption security.

2] exportInstallData outputFileName:

     Export the installation data to the specified file in a compressed file format where outputFileName is the name of the generated file that contains the exported data.

3] input response_file:

Specify a response file for silent installation with the input command.

Use the input command with these options:

    -installationDirectory
    -keyring: This option is deprecated.
    -masterPasswordFile: Use with the -secureStorageFile storage_file -masterPasswordFile master_password_file option.
    -password: This option is deprecated.
    -prompt
    -secureStorageFile
    -variables

4] install packageID[_version][,featureID]:

Use the install command with these options:

    -acceptLicense
    -connectPassportAdvantage
    -eclipseLocation
    -installationDirectory
    -installFixes
    -keyring: This option is deprecated.
    -masterPasswordFile: Use with the -secureStorageFile option.
    -password: This option is deprecated.
    -preferences
    -prompt
    -properties
    -repositories
    -secureStorageFile
    -sharedResourcesDirectory
    -useServiceRepository

Do not use the install command with these commands:

    import
    input
    modify
    rollback
    uninstall
    uninstallAll
    updateAll

5] listAvailableFixes packageID_version:

    Print information to the console about the available fixes for the specified package.
  
    Use the listAvailableFixes command with these options:

    -connectPassportAdvantage
    -keyring: This option is deprecated.
    -long
    -masterPasswordFile: Use with the -secureStorageFile option.
    -password: This option is deprecated.
    -prompt
    -preferences
    -repositories
    -secureStorageFile
    -showPlatforms
    -useServiceRepository
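
    Example (the repository path is illustrative; the package ID is the WAS ND 8.5.5 package used later in this post):
    ./imcl listAvailableFixes com.ibm.websphere.ND.v85_8.5.5000.20130514_1044 -repositories /path/to/fix_repository -long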

6] listAvailablePackages:

    Print information to the console about the available packages.

7] listInstallationDirectories:

    Print information to the console about the installation directory, the shared resources directory, the name of the package group, and installed translations.

8] listInstalledPackages:

    Print information to the console about the installed packages.

**There are more commands, but the ones above are the most commonly used.

**Below are some of the commands with examples:

1] How to verify the installed packages

Use the command "imcl listInstalledPackages" to view a list of packages that are already installed by Installation Manager.

Windows: imcl.exe listInstalledPackages
AIX, HP-UX, Linux, Solaris: imcl listInstalledPackages

Example:
 ./imcl listInstalledPackages
com.ibm.websphere.ND.v80_8.0.3.20120320_0536
com.ibm.websphere.WCT.v80_8.0.5.20121022_1902
com.ibm.websphere.PLG.v80_8.0.6.20130328_1645
com.ibm.websphere.IHS.v80_8.0.5.20121022_1902


If you would like to see more details, such as version, features, installed fixes, and rollback versions, run the command with the "-verbose" option.

Example:
./imcl listInstalledPackages -verbose

2] How to verify the installable packages

Use the command "imcl listAvailablePackages" to list the packages in a repository that can be installed.

Windows: imcl.exe listAvailablePackages -repositories [source_repository]
AIX, HP-UX, Linux, Solaris: imcl listAvailablePackages -repositories [source_repository]

Example:
 ./imcl listAvailablePackages -repositories /usr/IBMWASREPO
com.ibm.websphere.BASE.v85_8.5.5001.20131018_2242
com.ibm.websphere.BASE.v85_8.5.5002.20140408_1947

**Uninstalling packages by using imcl

Uninstall packages by running the Installation Manager command line (imcl) uninstall command from the tools directory.

Before you begin

• To identify the package_id_version,feature_id, run the listInstalledPackages command.

Procedure
To uninstall a package by using imcl:
1] Navigate to the tools directory.
2] List the installed packages to verify the packages you want to uninstall, then run the uninstall command, which has the following syntax:

    imcl uninstall package_id_version,feature_id -installationDirectory installation_directory
3] List the directories where the WebSphere packages are installed:

Example:
[root@wasnode tools]# ./imcl listInstallationDirectories
/opt/IBM/WebSphere/AppServer

4] Run the imcl uninstall command:

Example:
[root@wasnode tools]# ./imcl uninstall com.ibm.websphere.ND.v85_8.5.5003.20140730_1249 com.ibm.websphere.IBMJAVA.v70_7.0.4001.20130510_2103 -installationDirectory /opt/IBM/WebSphere/AppServer

5] Validate it using “imcl listInstalledPackages” to ensure the WAS packages are removed:

  example:

[root@wasnode tools]# ./imcl listInstalledPackages
com.ibm.cic.agent_1.6.2000.20130301_2248

**Installing WebSphere ND 8.5 using the imcl command line

Command: ./imcl install
To install the product with imcl, use the "install" option of imcl.
a) Ensure all the prerequisites are satisfied, such as disk space and permissions.
b) Extract the WAS binaries that you downloaded onto the server.

1] Execute "imcl listAvailablePackages" against the repository to validate the packages:

example:
[root@wasnode tools]# cd /opt/IBM/InstallationManager/eclipse/tools
[root@wasnode tools]# ./imcl listAvailablePackages -repositories /IBMSoftware/was8.5_IHS_8.5/was8.5.5/WASND
com.ibm.websphere.ND.v85_8.5.5000.20130514_1044

2] Execute "imcl install" to install the WAS ND package "com.ibm.websphere.ND.v85_8.5.5000.20130514_1044":

Example:
[root@wasnode tools]# ./imcl install com.ibm.websphere.ND.v85_8.5.5000.20130514_1044 -repositories /IBMSoftware/was8.5_IHS_8.5/was8.5.5/WASND -installationDirectory /opt/IBM/WebSphere/AppServer -acceptLicense -sP
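
Applying a fix pack works the same way: point -repositories at the directory where the fix pack repository was extracted and install the fix pack's package ID (the repository path below is illustrative):

[root@wasnode tools]# ./imcl install com.ibm.websphere.ND.v85_8.5.5003.20140730_1249 -repositories /IBMSoftware/was855_fp3_repo -installationDirectory /opt/IBM/WebSphere/AppServer -acceptLicense -sP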

3] Similarly, you can install the SDK 7 package:

Example:

[root@wasnode tools]# ./imcl listAvailablePackages -repositories /IBMSoftware/was8.5_IHS_8.5/was8.5.5/SDK
com.ibm.websphere.IBMJAVA.v70_7.0.4001.20130510_2103
com.ibm.websphere.liberty.IBMJAVA.v70_7.0.4001.20130510_2103
[root@wasnode tools]# ./imcl install com.ibm.websphere.IBMJAVA.v70_7.0.4001.20130510_2103 -repositories /IBMSoftware/was8.5_IHS_8.5/was8.5.5/SDK -installationDirectory /opt/IBM/WebSphere/AppServer -acceptLicense -sP

4] List the installed packages to verify the packages you have installed:

Example:
[root@wasnode tools]# ./imcl listInstalledPackages
com.ibm.cic.agent_1.6.2000.20130301_2248
com.ibm.websphere.IBMJAVA.v70_7.0.4001.20130510_2103
com.ibm.websphere.ND.v85_8.5.5003.20140730_1249

5] List the directories where the Websphere Packages are installed:

Example:
[root@wasnode tools]# ./imcl listInstallationDirectories
/opt/IBM/WebSphere/AppServer

Finally, after installation you can verify the installed version by running ./versionInfo.sh from /opt/IBM/WebSphere/AppServer/bin/.

Useful commands for WebSphere Application Server:

Finding what versions are running:

While version information is available from the admin console, it is also available for most IBM products in the file product.xml. Beginning with WebSphere Application Server Version 4.0.x, this file also includes information on the eFixes that have been installed. Access this file for WebSphere Application Server using commands such as the following:
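
For example, on a default Linux installation (the install root shown is an assumption; adjust it for your environment):

find /opt/IBM/WebSphere/AppServer -name product.xml -print -exec cat {} \;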

Sunday, 23 April 2017

Liberty Profile

 Liberty profile overview
The Liberty profile is a simplified, lightweight development and application runtime environment that has the following characteristics:
- Simple to configure. Configuration is read from an XML file with text-editor-friendly syntax.
- Dynamic and flexible. The run time loads only what your application needs and recomposes the run time in response to configuration changes.
- Fast. The server starts in under 5 seconds with a basic web application.
- Extensible. The Liberty profile provides support for user and product extensions, which can use System Programming Interfaces (SPIs) to extend the run time.

 - Liberty supports a subset of the full WebSphere® Application Server programming model, including:
  • Web applications
  • OSGi applications
  • Enterprise JavaBeans (EJB) applications
"Liberty Profile" - IBM WebSphere Application Server V8.5

- It is a flexible and dynamic profile of WAS that enables the server to deploy only the required features instead of all the JEE components.

- WAS Liberty Profile architecture:


The runtime environment is an OSGi framework that contains a kernel, a JVM, and any number of Liberty features.
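
For instance, features are enabled declaratively in server.xml and the kernel loads only what is listed; a minimal configuration might look like the following (servlet-3.0 is one example from the 8.5.5 feature set):

<server description="minimal Liberty server">
    <featureManager>
        <feature>servlet-3.0</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint" host="localhost" httpPort="9080" httpsPort="9443"/>
</server>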

WebSphere Application Server 8.5.5.1 Liberty Profile Installation Reference Guide (4.3)

Installation Notes

There are multiple methods to install and set up WebSphere using Liberty Profile. The following notes are meant as general tips to consider and may or may not apply in every scenario.

Where Can I Find the Installer?

Instructions

  1. Download wlp-developers-runtime-8.5.5.1.jar.
  2. Run the following command to extract the contents of the Liberty archive:
    java -jar wlp-developers-runtime-8.5.5.1.jar
  3. Press x to skip, or Enter to read the license agreement.
  4. Press 1 if you agree to the license terms and are ready to proceed.
  5. Provide the installation path for the Liberty profile, for example: /nhin/app.
  6. Press Enter.
The server is installed into a /wlp directory within the installation path that is specified in step 5 (for example, /nhin/app/wlp).

Creating a Server

The Liberty profile runtime environment does not come with a server defined. To create a server, you must run the following command from the Liberty profile bin directory (for example, /nhin/app/wlp/bin):
server create <server name>
This creates a server with the provided name in the usr/servers directory (for example, /nhin/app/wlp/usr/servers/server1 if "server1" was specified as the server name).
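For example, to create a server named server1 (the name is illustrative):
/nhin/app/wlp/bin$ ./server create server1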

Creating jvm.options

The jvm.options file can be used to specify JVM-specific startup options (for example, -X arguments like "-Xmx1024M"). The options are applied when you start, run, or debug the server. When you install the Liberty profile, this configuration file does not exist, so you must create it: create an etc/ directory under wlp/, and within it, create the jvm.options file (for example, /nhin/app/wlp/etc/jvm.options).
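
A minimal example of the file's contents (the values are illustrative, not tuning recommendations):

# /nhin/app/wlp/etc/jvm.options
-Xms256m
-Xmx1024m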

Configure the httpEndPoint

By default, the WebSphere Liberty Profile server listens on port 9080. The CONNECT team typically tests on ports 8080/8181 (HTTP/HTTPS respectively), so we change the ports in server.xml as shown below:
<httpEndpoint id="defaultHttpEndpoint" host="localhost" httpPort="8080" httpsPort="8181" />

Starting and Stopping the Server

To start the server, run the server start <server name> command under /wlp/bin:
/nhin/app/wlp/bin$ ./server start server1
To stop the server, run the server stop <server name> command under /wlp/bin:
/nhin/app/wlp/bin$ ./server stop server1

Set Up Environment Variables

In some cases, it may be necessary to set environment variables such as:
  • JAVA_HOME
  • JAVA_OPTIONS
  • ANT_HOME
  • ANT_OPTS
  • MAVEN_HOME
  • MAVEN_OPTS
  • PATH

Sample Environment

# Set up environment for CONNECT + WebSphere 8.5.5.1/Liberty Profile on Linux
 
export JAVA_HOME='/path/to/jdk1.7.0_09'
export MAVEN_HOME='/path/to/apache-maven-3.0.4'
export MAVEN_OPTS='-Xmx5000m -XX:MaxPermSize=1024m'
export ANT_HOME='/path/to/apache-ant-1.7.1'
export ANT_OPTS='-Xmx1200m -XX:MaxPermSize=128m -Dcom.sun.aas.instanceName=server'
export MYSQL_HOME='/path/to/mysql-5.1.42-linux-x86_64-glibc23'
 
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$ANT_HOME/bin:$MYSQL_HOME/bin:$PATH



References:
https://connectopensource.atlassian.net/wiki/pages/viewpage.action?pageId=12681451

https://developer.ibm.com/wasdev/blog/2013/03/29/introducing_the_liberty_profile/

https://developer.ibm.com/wasdev/downloads/download-latest-stable-websphere-liberty-runtime/