
The IBM DB2 pureScale Clustered Database Solution: Part 2

Application configuration
This article was coauthored by Yvonne Chan and John Hehir.
Part 1 of this article gave an overview of DB2 pureScale architecture and technology. 

When considering the benefits of a clustered database and the proven components DB2 LUW has integrated into its pureScale feature, it is fairly easy to be impressed with the technology from a conceptual standpoint (for more details, see Part 1 of this article). What can become a challenge, however, is envisioning the best way to put the pureScale feature to work for your organization’s applications. To provide some insight into this challenge, we will explore two different methods of connecting to a pureScale cluster: workload balancing and client affinities.


Workload balancing in DB2 pureScale

Workload balancing (WLB) is enabled via the client connection to a DB2 pureScale cluster. When WLB is enabled, each member in the cluster maintains a server priority list. Members communicate with each other at regular intervals, interrogating the other members about their system load and refreshing their own server lists. Each member on each list is assigned a priority. The server lists are piggybacked on the connections made to each member and returned to the DB2 client. Priorities are evaluated and work is routed to members with higher priority so that each member in the cluster approaches processing parity.

Since only a single database catalog entry or connection object is needed to connect to the DB2 pureScale cluster, members can be added to or removed from a pureScale system as needed without any changes to the database catalog or application code. With a single command, a member or an additional caching facility can be added to or removed from the cluster. The ability to easily manage capacity is useful not only for a growing business application, but also for businesses with peak processing periods. Organizations no longer need to “oversize” systems for peak periods of business activity, leaving that capacity idle the rest of the time.
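For example, adding capacity is a single db2iupdt invocation run against the instance. The host names, netnames, and instance name below are hypothetical, and the exact syntax can vary by release, so treat this as a sketch and consult the db2iupdt command reference:

# Add a new member on host member3 (cluster interconnect netname member3-ib0)
# to the hypothetical instance db2sdin1
db2iupdt -add -m member3:member3-ib0 db2sdin1

# Add a second caching facility on host cf2
db2iupdt -add -cf cf2:cf2-ib0 db2sdin1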

WLB works well with OLTP workloads, where transactions are short-lived and can be moved between members easily. When a client application requires values to persist beyond the scope of a transaction (for instance, when using sequences, declaring a cursor with the WITH HOLD option, or using created temporary tables defined with the PRESERVE ROWS attribute), the database server will restrict the client from using WLB, as the statements below illustrate. For a complete list of these restrictions, see the DB2 Information Center under Client Considerations for the DB2 pureScale feature.
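For instance, either of the following statements (the cursor and table names are hypothetical) would cause the server to disable WLB for that connection:

-- A WITH HOLD cursor keeps open-cursor state on a specific member across commits
DECLARE c1 CURSOR WITH HOLD FOR SELECT empno FROM employee;

-- A created temporary table that preserves rows across commits
CREATE GLOBAL TEMPORARY TABLE work_queue (id INT) ON COMMIT PRESERVE ROWS;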

Enabling WLB is easy: only one property must be set. For example, if you are using a Java-based connection, simply set the enableSysplexWLB connection property to "true". For non-Java applications, WLB is similarly configured with the enableWLB parameter in the db2dsdriver.cfg file (a configuration sketch follows the Java examples below).

When a WLB connection is defined, automatic client reroute (ACR) is activated by default. With ACR activated, if a member goes offline (either intentionally or unexpectedly), requests are automatically rerouted to the remaining online members of the cluster. When a member is brought back online, connections and transactions will resume on the previously unavailable member.

The WLB component of pureScale helps to eliminate potential bottlenecks by distributing transactions to members that are evaluated as having the most resources available to process a new request. This helps to ensure that all system resources are utilized equally.

These examples show how to enable WLB by setting the enableSysplexWLB JDBC property, first directly in a connection URL and then programmatically through a Properties object. This URL connects to the database SAMPLE on DB2 pureScale member 172.16.42.102 through port 60200:

jdbc:db2://172.16.42.102:60200/SAMPLE:enableSysplexWLB=true;


This example shows how to set the enableSysplexWLB property within application code using a Properties object:

String url = "jdbc:db2://172.16.42.102:60200/SAMPLE";
Properties properties = new Properties();
properties.put("user", "yourID");
properties.put("password", "yourPassword");
properties.put("enableSysplexWLB", "true");

Connection con = DriverManager.getConnection(url, properties);
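For non-Java clients, the equivalent setting goes in the db2dsdriver.cfg file. Here is a minimal sketch, assuming the same host, port, and database as above (the maxTransports value is an arbitrary illustration of an optional tuning knob):

<configuration>
   <databases>
      <database name="SAMPLE" host="172.16.42.102" port="60200">
         <wlb>
            <!-- Route work to the least-loaded members -->
            <parameter name="enableWLB" value="true"/>
            <!-- Optional: cap the transport (physical connection) pool -->
            <parameter name="maxTransports" value="50"/>
         </wlb>
      </database>
   </databases>
</configuration>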


Client affinities

The second method of connecting to a pureScale cluster is referred to as client affinity connections. This method might be considered when consolidating your database resources or if you have multiple applications running on a single cluster.

When connecting through client affinities, each application is designated to a single member in the pureScale cluster. All connections and requests for the application are handled through this designated member. Other members in the pureScale cluster can be leveraged during a planned or unplanned outage.

Whereas WLB works well with OLTP applications, client affinity connections can work well for batch processing, where it is advantageous to keep your database’s memory heaps on a dedicated member. Just like WLB, client affinity connectivity is configured through JDBC properties or in the db2dsdriver.cfg file.

If you want transactions to be routed to another member in the case of planned or unplanned downtime, you can specify alternate members for failover. Connections can also be configured to fail back to the original host automatically once the member is brought back online. Failover hosts and options related to failing back are defined through JDBC connection properties for Java applications or parameters in the db2dsdriver.cfg file for non-Java applications.
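For Java applications, the same behavior is configured through IBM Data Server Driver for JDBC connection properties. The following is a minimal sketch rather than a definitive configuration: the hosts, ports, and retry values simply mirror the db2dsdriver.cfg example that follows, and the affinityFailbackInterval property requires a driver level that supports automatic failback.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Sketch: a client affinity connection for a Java application.
Properties props = new Properties();
props.put("user", "yourID");
props.put("password", "yourPassword");
// Enable the affinity list; alternate servers are tried in the order listed
props.put("enableClientAffinitiesList", "1");
props.put("clientRerouteAlternateServerName", "172.162.141.45,172.162.141.46");
props.put("clientRerouteAlternatePortNumber", "50000,50000");
// Retry behavior during reroute, and automatic failback to the primary member
props.put("maxRetriesForClientReroute", "2");
props.put("retryIntervalForClientReroute", "3");
props.put("affinityFailbackInterval", "5");

// The primary member is the one named in the connection URL
Connection con = DriverManager.getConnection(
        "jdbc:db2://172.162.141.44:50000/SAMPLE", props);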

This example shows how to enable client affinities with automatic failover in the db2dsdriver.cfg file:

<configuration>
   <dsncollection>
      <dsn alias="SAMPLE" name="SAMPLE" host="172.162.141.44" port="50000">
         <parameter name="Authentication" value="Client"/>
      </dsn>
   </dsncollection>
   <databases>
      <database name="SAMPLE" host="172.162.141.44" port="50000">
         <parameter name="keepAliveTimeout" value="20"/>
         <parameter name="connectionLevelLoadBalancing" value="true"/>
         <acr>
            <parameter name="enableAcr" value="true"/>
            <parameter name="enableSeamlessACR" value="true"/>
            <parameter name="maxAcrRetries" value="2"/>
            <parameter name="acrRetryInterval" value="3"/>
            <parameter name="affinityFailbackInterval" value="5"/>
            <parameter name="enableAlternateServerListFirstConnect" value="false"/>
            <alternateserverlist>
               <server name="server1" hostname="172.162.141.44" port="50000"/>
               <server name="server2" hostname="172.162.141.45" port="50000"/>
               <server name="server3" hostname="172.162.141.46" port="50000"/>
            </alternateserverlist>
            <affinitylist>
               <list name="list1" serverorder="server1,server2,server3"/>
               <list name="list2" serverorder="server2,server3,server1"/>
            </affinitylist>
            <clientaffinitydefined>
               <client name="WASApp1" hostname="172.65.185.132" listname="list1"/>
               <client name="WASApp2" hostname="172.65.185.199" listname="list2"/>
            </clientaffinitydefined>
         </acr>
      </database>
   </databases>
</configuration>


The db2dsdriver.cfg file is an XML file that supplies configuration keywords (such as the WLB, ACR, and client affinity settings shown above) to applications connecting to a supported database through ODBC, CLI, OLE DB, PHP, or Ruby. The keywords can be associated globally with all database connections, or with a specific data source name or database connection; a short sketch at the end of this section illustrates the difference in scope.

  1. The DSN entry must match the database catalog information exactly.
  2. The enableAcr parameter must be set to "true" when configuring client affinities.
  3. The primary server must be included in the alternateserverlist.
  4. The serverorder value of each list element in the affinitylist defines the server to connect to initially and the sequence of servers to fail over to during downtime.
  5. Each client that connects to the database can be linked to any server list defined in the affinitylist by referencing the list's name attribute. Initial connections are attempted against the first server specified in serverorder. During automatic client reroute, the subsequent servers are tried in the order specified in the list.

In this example, the client named WASApp2 uses list2. Because the first server in list2 is server2, all connections go through that member. If server2 is brought down, connections are rerouted to server3. If server3 is then brought down as well, connections are automatically rerouted to server1 (this happens whether automatic failback is off, or on while server2 remains down). If automatic failback is on and server2 becomes available again, connections fail back to server2.

The hostname attribute of each client element refers to the hostname or IP address of the application server.
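To illustrate keyword scope (the parameter choices here are arbitrary), keywords in the global <parameters> section apply to every connection made through the driver, while keywords inside a <dsn> or <database> element apply only to that entry:

<configuration>
   <dsncollection>
      <dsn alias="SAMPLE" name="SAMPLE" host="172.162.141.44" port="50000"/>
   </dsncollection>
   <databases>
      <database name="SAMPLE" host="172.162.141.44" port="50000">
         <!-- Applies only to connections to this database -->
         <parameter name="keepAliveTimeout" value="20"/>
      </database>
   </databases>
   <!-- Applies globally to all connections made through this driver -->
   <parameters>
      <parameter name="Authentication" value="Client"/>
   </parameters>
</configuration>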


Conclusion

DB2 pureScale can be configured to suit the needs and requirements of any organization's critical applications. WLB is ideal for large applications that are primarily transactional and need to scale efficiently and easily. Client affinity connections work well for batch processing applications, or when multiple applications share a single cluster and databases are being consolidated to reduce hardware and general infrastructure costs. In both cases, the clustered architecture and failover capabilities of the DB2 pureScale feature help to meet the availability requirements commonly found in service-level agreements. DB2 pureScale is a strong example of a solution built on the principle of paying only for what you use, and using what you have paid for efficiently and effectively.


For more information

To learn more about DB2 pureScale features, visit this site.

DB2 pureScale is available as part of several DB2 product editions.



Madhu Flanagan

Madhu Flanagan is a DB2-certified and PMP-certified migration consultant on the Information Management Technology Ecosystem (IMTE) team. She helps customers implement DB2 LUW and coordinates DB2 proof-of-concept projects, including DB2 pureScale. Prior to joining IMTE, she worked with various relational databases and helped develop systems solutions for more than 15 years.