Tuesday 16 January 2018

WAS Certification (Architecture)

Apply design considerations of IBM WebSphere Application Server, V7.0 when installing an enterprise environment (e.g., LDAP, database servers, WebSphere messaging, etc.)
 
The following topics are important to consider when designing a WebSphere Application Server deployment. They will most likely impact your design significantly:
Scalability - Scalability, as used in this book, means the ability of the environment to grow as the infrastructure load grows. Scalability depends on many different factors such as hardware, operating system, middleware, and so on.
Caching - WebSphere Application Server provides many different caching features at different locations in the architecture. WebSphere Application Server Network Deployment provides caching features at each possible layer of the infrastructure:
Infrastructure edge:
 
Caching Proxy provided by the Edge Components
WebSphere Proxy Server
HTTP server layer:
 
Edge Side Include (ESI) fragment caching capabilities provided by the WebSphere plug-in
Caching capabilities of the HTTP server itself (such as the Fast Response Cache Accelerator (FRCA) provided by most implementations of IBM HTTP Server)
 
Application server layer:
 
Dynamic caching service inside the application server's JVM
 
WebSphere Proxy Server
 
High availability - Designing an infrastructure for high availability means designing it so that your environment can survive the failure of one or multiple components. High availability implies redundancy by avoiding any single point of failure on any layer (network, hardware, processes, and so forth). The number of failing components your environment has to survive without losing service depends on the requirements for the specific environment.

Load balancing and failover - As a result of the design considerations for high availability, you will most likely identify a number of components that need redundancy. Having redundant systems in an architecture requires you to think about how to implement this redundancy so that you get the most benefit from the systems during normal operations, and about how you will manage a seamless failover in case a component fails. In a typical WebSphere Application Server environment, a variety of components need to be considered when implementing load balancing and failover capabilities:

Caching proxy servers
HTTP servers
Containers (such as the Web, SIP, and Portlet containers)
Enterprise JavaBeans (EJB) containers
Messaging engines
Back-end servers (database, enterprise information systems, and so forth)
User registries

Disaster recovery - Disaster recovery concentrates on the actions, processes, and preparations needed to recover from a disaster that has struck your infrastructure. Important points when planning for disaster recovery are as follows:

Recovery Time Objective (RTO) - How much time can pass before the failed component
must be up and running?
Recovery Point Objective (RPO) - How much data loss is acceptable? The RPO determines the maximum interval between data backups; if no data loss is acceptable, the RPO is zero.
http://www.ibm.com/developerworks/websphere/techjournal/0707_col_alcott/0707_col_alcott.html
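To make the two objectives concrete, here is a small illustrative check in Python. The figures (a four-hour RTO, a 30-minute RPO, and the measured restore time) are invented for the example, not taken from any IBM guidance:

```python
from datetime import timedelta

# Hypothetical targets for illustration only.
rto = timedelta(hours=4)                  # failed component must be back within 4 hours
rpo = timedelta(minutes=30)               # at most 30 minutes of data may be lost

backup_interval = timedelta(minutes=15)   # how often data backups are taken
estimated_restore = timedelta(hours=3)    # measured time to restore the component

# Worst-case data loss equals the backup interval, because a failure can
# strike just before the next backup would have run.
meets_rpo = backup_interval <= rpo
meets_rto = estimated_restore <= rto
print(meets_rpo, meets_rto)               # prints: True True
```

Backing up every 15 minutes satisfies a 30-minute RPO with margin; a zero RPO would instead require synchronous replication rather than periodic backups.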
Security - Consider how security will affect your infrastructure:

• Understand the security policy and requirements for your future environment.
• Work with a security subject matter expert to develop a security infrastructure that adheres to the requirements and integrates into the existing infrastructure.
• Make sure that sufficient physical security is in place.
• Make sure the application developers understand the security requirements and code the application accordingly.
• Consider the user registry (or registries) you plan to use. WebSphere Application Server V7.0 supports multiple user registries and multiple security domains.
• Make sure that the user registries do not break the high availability requirements. Even if the user registries you are using are out of scope of the WebSphere Application Server project, considerations for high availability need to be taken and requested. For example, make sure that your LDAP user registries are made highly available and are not a single point of failure.
• Define the trust domain for your environment. All computers in the same WebSphere security domain trust each other. This trust domain can be extended, and when using SPNEGO / Kerberos, even out to the Windows desktops of the users in your enterprise.
• Assess your current implementation design and ensure that every possible access to your systems is secured.
• Consider the level of auditing required and how to implement it.
• Consider how you will secure stored data. Think of operating system security and encryption of stored data.
• Define a password policy, including considerations for handling password expirations for individual users.
• Consider encryption requirements for network traffic. Encryption introduces overhead and increased resource costs, so use encryption only where appropriate.
• Define the encryption (SSL) endpoints in your communications.
• Plan for certificates and their expiration:
• Decide which traffic requires certificates from a trusted certificate authority and for which traffic a self-signed certificate is sufficient. Secured connections to the outside world usually use a trusted certificate, but for connections inside the enterprise, self-signed certificates are usually enough.
• Develop a management strategy for the certificates. Many trusted certificate authorities provide online tools that support certificate management. But what about self-signed certificates?
• How are you going to back up your certificates? Always keep in mind that your certificates are the key to your data. Your encrypted data is useless if you lose your certificates.
• Plan how you will secure your certificates. Certificates are the key to your data, therefore make sure that they are secured and backed up properly.
• http://www.ibm.com/developerworks/websphere/techjournal//0512_botzum/0512_botzum1.html

Relate the various components of the IBM WebSphere Application Server Network Deployment V7.0 runtime architecture.

WebSphere Application Server Network Deployment (ND) provides the capabilities to build more advanced server infrastructures. It extends the WebSphere Application Server base package and includes the following features:

• Clustering capabilities
• Edge components
• Dynamic scalability
• High availability
• Advanced management features for distributed configurations
WebSphere Application Server Network Deployment Edge Components provide high performance and high availability features. For example, the Load Balancer (a software load balancer) provides horizontal scalability by dispatching HTTP requests among several Web server or application server nodes, supporting various dispatching options and algorithms to assure high availability in high volume environments. Using the Edge Components Load Balancer can reduce Web server congestion, increase content availability, and provide scalability for the Web server tier.

Illustrate workload management and failover strategies using IBM WebSphere Application Server Network Deployment V7.0.


Two IBM HTTP Server Web servers configured in a cluster
Incoming requests for static content are served by the Web server. Requests for dynamic content are forwarded to the appropriate application server by the Web server plug-in.

A Caching Proxy that keeps a local cache of recently accessed pages
Caching Proxy is included in WebSphere Application Server Edge Components. Cacheable content includes static Web pages and JSPs with dynamically generated but infrequently changed fragments. The Caching Proxy can satisfy subsequent requests for the same content by delivering it directly from the local cache, which is much quicker than retrieving it again from the content host.

A backup server is configured for high availability.

A Load Balancer to direct incoming requests to the Caching Proxy and a second Load Balancer to manage the workload across the HTTP servers
Load Balancer is included in WebSphere Application Server Edge Components. The Load
Balancers distribute incoming client requests across servers, balancing workload and providing high availability by routing around unavailable servers.

A backup server is configured for each primary Load Balancer to provide high availability.

A dedicated server to host the deployment manager
The deployment manager is required for administration but is not critical to the runtime execution of applications. It has a master copy of the configuration that should be backed up on a regular basis.
Two clusters consisting of three application servers
Each cluster spans two machines. In this topology, one cluster contains application servers that provide the Web container functionality of the applications (servlets, JSPs), and the second cluster contains the EJB container functionality. Whether you choose to do this or not is a matter of careful consideration. Although it provides failover and workload management capabilities for both Web and EJB containers, it can also affect performance.
A dedicated database server running IBM DB2 V9.

When to use a high availability manager
 
A high availability manager consumes valuable system resources, such as CPU cycles, heap
memory, and sockets. These resources are consumed both by the high availability manager and by product components that use the services that the high availability manager provides. The amount of resources that both the high availability manager and these product components consume increases nonlinearly as the size of a core group increases.
For large core groups, the amount of resources that the high availability manager consumes can become significant. Disabling the high availability manager frees these resources. However, before you disable the high availability manager, you should thoroughly investigate the current and future needs of your system to ensure that disabling the high availability manager does not also disable other functions that you use that require it. For example, both memory-to-memory session replication and the remote request dispatcher (RRD) require the high availability manager to be enabled.
The capability to disable the high availability manager is most useful for topologies where none of the high availability manager provided services are used. In certain topologies, only some of the processes use the services that the high availability manager provides. In these topologies, you can disable the high availability manager on a per-process basis, which optimizes the amount of resources that the high availability manager uses.

Do not disable the high availability manager on administrative processes, such as node agents and the deployment manager, unless the high availability manager is disabled on all application server processes in that core group.
Some of the services that the high availability manager provides are cluster based. Therefore, because cluster members must be homogeneous, if you disable the high availability manager on one member of a cluster, you must disable it on all of the other members of that cluster.

When determining if you must leave the high availability manager enabled on a given application server process, consider if the process requires any of the following high availability manager services:
* Memory-to-memory replication
* Singleton failover
* Workload management routing
* On-demand configuration routing

Memory-to-memory replication
Memory-to-memory replication is a cluster-based service that you configure or enable at the
application server level. If memory-to-memory replication is enabled on any cluster member,
then the high availability manager must be enabled on all of the members of that cluster.
Memory-to-memory replication is automatically enabled if:
* Memory-to-memory replication is enabled for Web container HTTP sessions.
* Cache replication is enabled for the dynamic cache service.
* EJB stateful session bean failover is enabled for an application server.
Singleton failover
Singleton failover is a cluster-based service. The high availability manager must be enabled on all members of a cluster if:
* The cluster is configured to use the high availability manager to manage the recovery
of transaction logs.
* One or more instances of the default messaging provider are configured to run in the
cluster. The default messaging provider that is provided with the product is also referred
to as the service integration bus.

Workload management routing
Workload management (WLM) propagates the following classes or types of routing information:
* Routing information for enterprise bean IIOP traffic.
* Routing information for the default messaging engine, which is also referred to as the
service integration bus.
* Routing HTTP requests through the IBM® WebSphere® Application Server proxy
server.
* Routing Web Services Addressing requests through the IBM WebSphere Application
Server proxy server.
* Routing SIP (Session Initiation Protocol) requests.

WLM uses the high availability manager to both propagate the routing information and make it highly available. Although WLM routing information typically applies to clustered resources, it can also apply to non-clustered resources, such as standalone messaging engines. Under typical circumstances, you must leave the high availability manager enabled on any application server that produces or consumes either IIOP or messaging engine routing information.
For example, if:

* The routing information producer is an enterprise bean application that resides in
cluster 1.
* The routing information consumer is a servlet that resides in cluster 2.
When the servlet in cluster 2 calls the enterprise bean application in cluster 1, the high
availability manager must be enabled on all servers in both clusters.
Workload management provides an option to statically build and export route tables to the file system. Use this option to eliminate the dependency on the high availability manager.
On-demand configuration routing
 
In a Network Deployment system, the on-demand configuration is used for IBM WebSphere
Application Server proxy server routing. If you want to use on-demand configuration routing in
conjunction with your Web services, you must verify that the high availability manager is enabled on the proxy server and on all of the servers to which the proxy server routes work.
Resource adapter management
 
New feature: In a high availability environment, you can configure your resource adapters for high availability. After a resource adapter is configured for high availability, the high availability manager assigns the resource adapter to a high availability group, the name for which is derived from the resource adapter key. The resource adapter key is the cell-scoped configuration ID of the resource adapter, and must be identical for all of the servers in a cluster that use the same resource adapter. The high availability manager then controls when each resource adapter is started. When you configure a resource adapter for high availability, select one of the following types of failover:

* Message endpoint (MEP) failover. When a resource adapter is configured for MEP
failover, that resource adapter can be active within any member of a high availability
group, but only supports inbound communication within one high availability group
member.
* Resource adapter (RA) instance failover. When a resource adapter is configured for
RA instance failover, that resource adapter can support either outbound or inbound
communication within one high availability group member at a time.
When a resource adapter that is configured for high availability starts, the Java EE Connector Architecture (JCA) container joins the resource adapter into the high availability group. This high availability group is configured to run under the One of N policy with quorum disabled. When MEP failover is selected, the container starts the adapter with outbound communication enabled, but disables inbound communication because the high availability manager controls inbound communication for that resource adapter. When RA instance failover is selected, the container starts the adapter and disables both outbound and inbound communication because the high availability manager controls both inbound and outbound communication for that resource adapter.
When the run time stops a resource adapter that is configured for high availability, the JCA
container removes the resource adapter from the high availability group to which it was
assigned.

Disabling or enabling a high availability manager
A unique HAManagerService configuration object exists for every core group member. The enable attribute in this configuration object determines if the high availability manager is enabled or disabled for the corresponding process. When the enable attribute is set to true, the high availability manager is enabled. When the enable attribute is set to false, the high availability manager is disabled. By default, the high availability manager is enabled. If the setting for the enable attribute is changed, the corresponding process must be restarted before the change goes into effect. You must use the wsadmin tool to disable or enable a high availability manager.
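A minimal wsadmin (Jython) sketch of the change just described. The HAManagerService type and enable attribute come from the text above; the configuration ID shown in the comment is hypothetical, so verify the IDs printed for your own cell before modifying anything:

```jython
# Run inside wsadmin, for example: wsadmin -lang jython -f listHAM.py
# List the HAManagerService configuration object of every core group member
# along with its current enable setting.
for ham in AdminConfig.list("HAManagerService").splitlines():
    print ham, AdminConfig.showAttribute(ham, "enable")

# To disable the high availability manager for one process, pick the ID
# printed above for that server, then uncomment and adapt:
# ham = "(cells/myCell/nodes/myNode/servers/server1|hamanagerservice.xml#HAManagerService_1)"  # hypothetical ID
# AdminConfig.modify(ham, [["enable", "false"]])
# AdminConfig.save()
# Restart the process for the change to take effect.
```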

Determine if you need to use a high availability manager to manage members of a core group.
About this task
You might want to disable a high availability manager if you are trying to reduce the amount of resources, such as CPU and memory, that the product uses and have determined that the high availability manager is not required on some or all of the processes in a core group.
You might need to enable a high availability manager that you previously disabled because you are installing applications on core group members that must be highly available.
Complete the following steps if you need to disable a high availability manager or to enable a
high availability manager that you previously disabled.
Procedure

1. In the administrative console, navigate to the Core group service page for the
process.
* For a deployment manager, click System Administration > Deployment manager >
Core group service.
* For a node agent, click System Administration > Node agent > node_agent > Core
group service.
* For an application server, click Servers > Server Types > WebSphere application
servers > server_name > Core group service.
2. If you want to disable the high availability manager for this process, deselect the
Enable service at server startup option.
3. If you want to enable the high availability manager for this process, select the Enable
service at server startup option.
4. Click OK and then click Review.
5. Select Synchronize changes with nodes, and then click Save.
6. Restart all of the processes for which you changed the Enable service at server
startup property setting.


Clusters and workload management
 
Clusters are sets of servers that are managed together and participate in workload management. Clusters enable enterprise applications to scale beyond the throughput achievable with a single application server. Clusters also enable enterprise applications to be highly available, because requests are automatically routed to the running servers in the event of a failure.
The servers that are members of a cluster can be on different host machines. In contrast, servers that are part of the same node must be located on the same host machine. A cell can include no clusters, one cluster, or multiple clusters.
 
Servers that belong to a cluster are members of that cluster set and must all have identical
application components deployed on them. Other than the applications configured to run on
them, cluster members do not have to share any other configuration data. One cluster member might be running on a huge multi-processor enterprise server system, while another member of that same cluster might be running on a smaller system. The server configuration settings for each of these two cluster members are very different, except in the area of application components assigned to them. In that area of configuration, they are identical. This allows client work to be distributed across all the members of a cluster instead of all workload being handled by a single application server.

When you create a cluster, you make copies of an existing application server template. The
template is most likely an application server that you have previously configured. You are offered the option of making that server a member of the cluster. However, it is recommended that you keep the server available only as a template, because the only way to remove a cluster member is to delete the application server. When you delete a cluster, you also delete any application servers that were members of that cluster. There is no way to preserve any member of a cluster.

Keeping the original template intact allows you to reuse the template if you need to rebuild the configuration.

A vertical cluster has cluster members on the same node, or physical machine. A horizontal
cluster has cluster members on multiple nodes across many machines in a cell. You can
configure either type of cluster, or have a combination of vertical and horizontal clusters.
Clustering application servers that host Web containers automatically enables plug-in workload management for the application servers and the servlets they host. The routing of servlet requests occurs between the Web server plug-in and clustered application servers using HTTP transports, or HTTP transport channels.



This routing is based on weights associated with the cluster members. If all cluster members
have identical weights, the plug-in sends equal requests to all members of the cluster, assuming there are no strong affinity configurations. If the weights are scaled in the range from zero to twenty, the plug-in usually routes requests to those cluster members with the higher weight values.
You can use the administrative console to specify a weight for a cluster member. The weight you assign to a cluster member should be based on its approximate, proportional ability to do work.
The weight value specified for a specific member is only meaningful in the context of the weights you specify for the other members within a cluster. The weight values do not indicate absolute capability. If a cluster member is unavailable, the Web server plug-in temporarily routes requests around that cluster member.


For example, if you have a cluster that consists of two members, assigning weights of 1 and 2 causes the first member to get approximately 1/3 of the workload and the second member to get approximately 2/3 of the workload. However, if you add a third member to the cluster, and assign the new member a weight of 1, approximately 1/4 of the workload now goes to the first member, approximately 1/2 of the workload goes to the second member, and approximately 1/4 of the workload goes to the third member. If the first cluster member becomes unavailable, the second member gets approximately 2/3 of the workload and third member gets approximately 1/3 of the workload.
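The proportions in the example above follow directly from the ratio of each member's weight to the sum of all weights. A small Python sketch of that arithmetic (a simplification for illustration: the real plug-in decrements weights per request and honors session affinity rather than computing static shares):

```python
def routing_shares(weights):
    """Approximate share of requests each cluster member receives,
    assuming no session affinity and all members available."""
    total = sum(weights.values())
    return {member: w / total for member, w in weights.items()}

# Two members weighted 1 and 2: roughly 1/3 and 2/3 of the workload.
print(routing_shares({"member1": 1, "member2": 2}))

# Adding a third member with weight 1: roughly 1/4, 1/2, and 1/4.
print(routing_shares({"member1": 1, "member2": 2, "member3": 1}))

# If member1 becomes unavailable, the plug-in routes around it,
# so the remaining members split the workload 2/3 and 1/3.
print(routing_shares({"member2": 2, "member3": 1}))
```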



Describe WebSphere dynamic caching features.
 
Dynamic cache works within an application server Java virtual machine (JVM), intercepting
calls to cacheable objects. For example, it intercepts calls through a servlet service method, or a command execute method, and either stores the output of the object to the cache or serves the content of the object from the dynamic cache.
Key concepts pertaining to the dynamic cache service
Explore the key concepts pertaining to the dynamic cache service, which improves
performance by caching the output of servlets, commands, Web services, and JavaServer
Pages (JSP) files.
Cache instances
An application uses a cache instance to store, retrieve, and share data objects within the dynamic cache.
Using the dynamic cache service to improve performance
Caching the output of servlets, commands, and JavaServer Pages (JSP) improves application performance. WebSphere Application Server consolidates several caching activities, including servlets, Web services, and WebSphere commands, into one service called the dynamic cache. These caching activities work together to improve application performance and share many configuration parameters that are set in the dynamic cache service of an application server.
Configuring dynamic cache to use the WebSphere eXtreme Scale dynamic cache provider [Fix Pack 5 or later]
Configuring the dynamic cache service to use WebSphere eXtreme Scale lets you leverage transactional support, improved scalability, high availability, and other WebSphere eXtreme Scale features without changing your existing dynamic cache caching code.
Configuring servlet caching
After a servlet is invoked and completes generating the output to cache, a cache entry is created containing the output and the side effects of the servlet. These side effects can include calls to other servlets or JavaServer Pages (JSP) files, or metadata about the entry, including timeout and entry priority information.
Configuring portlet fragment caching
After a portlet is invoked and completes generating the output to cache, a cache entry is created containing the output and the side effects of the portlet. These side effects can include calls to other portlets or metadata about the entry, including timeout and entry priority information.

Eviction policies using the disk cache garbage collector
The disk cache garbage collector is responsible for evicting objects out of the disk cache,
based on a specified eviction policy.
Configuring the JAX-RPC Web services client cache
The Web services client cache is a part of the dynamic cache service that is used to increase
the performance of Web services clients by caching responses from remote Web services.
Cache monitor
Cache monitor is an installable Web application that provides a real-time view of the current state of the dynamic cache. You use it to help verify that the dynamic cache is operating as expected. The only way to manipulate the data in the cache manually is through the cache monitor, which provides a graphical interface for changing cached data.
Invalidation listeners
The invalidation listener mechanism uses Java events to alert applications when contents are removed from the cache.
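As an illustration of the servlet caching configuration described above, here is a minimal cachespec.xml sketch. The servlet URI and request parameter are hypothetical, and servlet caching must also be enabled on the Web container for this policy to take effect:

```xml
<?xml version="1.0" ?>
<!DOCTYPE cache SYSTEM "cachespec.dtd">
<cache>
    <cache-entry>
        <class>servlet</class>
        <!-- Hypothetical servlet URI; replace with your own mapping. -->
        <name>/ProductDetail</name>
        <cache-id>
            <!-- Build the cache key from a request parameter, so each
                 product gets its own cache entry. -->
            <component id="productId" type="parameter">
                <required>true</required>
            </component>
            <timeout>300</timeout>   <!-- entry expires after 5 minutes -->
            <priority>1</priority>
        </cache-id>
    </cache-entry>
</cache>
```

The timeout and priority here correspond to the "timeout and entry priority information" metadata mentioned in the servlet caching description above.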

Compare the Network Deployment (ND) cell model with the flexible management model.

 


ND Model Limitations
• The model expects a tightly coupled, highly synchronous environment.
• Management is at the individual server level - it does not support management at the node level.
• Scalability - problems managing a very large number of base servers.
• The synchronous model has problems with high-latency remote branch servers.
 
