One day, without warning, I was informed that a new data warehouse was being created. The data warehouse was to reside on a platform running something other than IBM® z/OS®, and I was to lead this effort. The distributed platform was to be using IBM DB2® for Linux, UNIX and Windows (LUW) on Solaris. I was excited to be given this opportunity, and I looked forward to starting this project. One of the first things I did was gather requirements and find tools to help me migrate data/objects from the mainframe to the distributed LUW platform.
The tool had to be easy to understand and use. It had to migrate data and objects to the LUW platform from z/OS, transporting large volumes of data in a reasonable amount of time.
After spending weeks researching, I found a tool that met my requirements: the IBM Data Movement Tool. I found it to be a great tool for copying extremely large volumes of data.
The Data Movement Tool is easy to install and flexible to operate. It can be run from the command line or a GUI. Importantly, the Data Movement Tool allows the selection of objects that you want to copy. You can copy individual objects, an entire database, or objects selected by authID or schema.
Let’s walk through the steps of using the Data Movement Tool. I’ll use my project—in which I migrated data from the z/OS platform to a distributed LUW platform—as an example.
After downloading and configuring the tool, follow the basic prompts to migrate data. I chose the command-line method. After making a Telnet connection to the target server, issue the following command: ./IBMDataMovementTool.sh.
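As a sketch, launching the tool might look like the following. The host name and install path here are hypothetical examples; the tool must already be downloaded and extracted on the target server:

```shell
# Connect to the target LUW server (host name is a hypothetical example)
telnet luwserver01

# Change to the directory where the tool was extracted (path is a hypothetical example)
cd /opt/IBMDataMovementTool

# Start the tool in interactive command-line mode
./IBMDataMovementTool.sh
```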
You will then be guided through the data and object migration to the LUW server by answering a series of wizard-like prompts, such as:
Is Target DB2 Local = Yes

You can then choose the source database. In my case, I selected option “7” from a list of possibilities, which includes sources other than DB2:

MS SQL Server = 2
Sybase = 3
MS Access = 4
PostgreSQL = 6
DB2 iSeries = 8
Next, enter the additional information requested for the source database, including IP address, port number, database name, userid, and password. After you enter the information, the tool connects to the source database, verifying the userid and password.
Then select a schema name for the objects to copy (for example, “Test.DB”). Provide an output directory on the new platform where the data will be unloaded (for example, c:/luw/data/unload).
Enter the target database information, including the IP address, port number, database name, userid, and password. The tool then connects to the target database, verifying the userid and password.
Once the connection to the LUW server is made, change to the output directory you entered earlier (c:/luw/data/unload). With connections established to both platforms, you are ready to enter the following commands:
To conduct the data migration, first execute the ./unload command, which extracts the data from z/OS. Then run the generated db2ddl.sql script to create the objects. (This step is optional if the objects already exist on LUW.) Next, run the db2load.sql script to load the data into the LUW objects. After the data is loaded, run db2runstats.sql to collect statistics on the objects.
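Assuming the tool generates scripts named db2ddl.sql, db2load.sql, and db2runstats.sql into the output directory, and that they are run through the DB2 command-line processor, the sequence might look like this sketch (the directory is the example used earlier):

```shell
# Move to the output directory chosen during the wizard
cd c:/luw/data/unload

# Extract the data from the z/OS source
./unload

# Create the target objects (skip if they already exist on LUW)
db2 -tvf db2ddl.sql

# Load the extracted data into the LUW objects
db2 -tvf db2load.sql

# Collect statistics so the optimizer has current information
db2 -tvf db2runstats.sql
```

The db2 -tvf invocation tells the command-line processor to use statement terminators (-t), echo each command (-v), and read from the named file (-f).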
This is essentially the task of migrating data and objects from z/OS to LUW. The process is repeatable.
I hope this article helps anyone who may be faced with a similar challenge of copying data from a mainframe platform to a distributed environment. This tool made my job much easier and took away some of the fear I had of working on a distributed environment.
For more instructions on downloading, configuring, and optimizing the tool for high performance, please visit ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/index.html.
Any questions? What migration projects do you have on the horizon? Whatever they may be, this tool can assist you with the task.