350 Solving Operational Business Intelligence with InfoSphere Warehouse Advanced Edition
4. Drop the table.
The data partition has been detached from the partitioned table and the
contents of the table have been archived to a file on disk. At this point, the
file can be backed up to Tivoli Storage Manager (TSM) or another storage
manager.
The table can now be dropped using the standard DROP TABLE statement as
shown in Example 9-7.
Example 9-7 Drop archived table
DROP TABLE cstinsight.archive_part_2009q4
5. Drop the data partition table space.
The dedicated table space for the data partition can now be dropped as
shown in Example 9-8.
Example 9-8 Drop archived table space
DROP TABLESPACE tsfactdata_2009q4
9.4.2 Migrate data from production to an alternate database
Optim High Performance Unload uses named pipes and the DB2 LOAD utility to
migrate data from one database to another without the need to stage the data to
disk. This feature is particularly useful for the following tasks:
- Offloading data to data marts
- Generating data subsets for direct population of downstream systems
- Populating development, test, or other non-production systems
- Repartitioning data
- Migrating cold or dormant data to a federated database
The global block and unload block, which are used in the control file to unload
data to output files or named pipes, are introduced in Chapter 8, “Building a
corporate backup and recovery strategy” on page 289. In this chapter, we
introduce the migrate block, which automatically migrates data from one
database to another without requiring you to manually create the named pipes
or issue the LOAD command.
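The no-staging pattern described above can be sketched in a few lines. This is an illustrative Python sketch, not Optim High Performance Unload itself: one thread plays the unload side writing rows into a named pipe (FIFO), while the reader plays the role of the LOAD utility consuming them, so no staging file ever holds the data. The function and file names are hypothetical.

```python
# Sketch of streaming data through a named pipe so it is never staged
# on disk (the pattern db2hpu uses between the unload and LOAD sides).
# All names here are illustrative, not part of any product API.
import os
import tempfile
import threading


def stream_through_pipe(rows):
    """Write rows into a FIFO in one thread while the caller reads them."""
    workdir = tempfile.mkdtemp()
    pipe_path = os.path.join(workdir, "migrate.pipe")
    os.mkfifo(pipe_path)  # named pipe: data flows through, never staged

    def writer():
        # Plays the role of the unload side feeding the pipe.
        with open(pipe_path, "w") as pipe:
            for row in rows:
                pipe.write(row + "\n")

    t = threading.Thread(target=writer)
    t.start()
    # Plays the role of the LOAD utility reading from the pipe.
    with open(pipe_path) as pipe:
        loaded = [line.rstrip("\n") for line in pipe]
    t.join()
    os.unlink(pipe_path)
    os.rmdir(workdir)
    return loaded


print(stream_through_pipe(["row1", "row2", "row3"]))
```

Because a FIFO blocks the writer until a reader attaches, the two sides stay naturally synchronized, which is what allows the transfer to proceed without intermediate files.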
Migrate block
The migrate block provides the db2hpu utility with the details of the table to be
migrated and the source and target environments involved.
In Example 9-9, we migrate data for the data partition PART_2009Q4, part of the
table CSTINSIGHT.CUSTOMER_TXN, from a source to the target database
environment. The source database has 16 database partitions on two data
nodes. The target database in this example has just four database partitions on a
single data node, similar to an IBM Smart Analytics System 5710.
Example 9-9 Issuing db2hpu from the command line with control file
GLOBAL CONNECT TO <target_dbname>;
--Migrate Block
MIGRATE TABLESPACE
PART(ALL)
DB2 NO LOCK YES QUIESCE YES
TARGET ENVIRONMENT(INSTANCE "target_instance" on "target host" IN "target_dbname")
WORKING IN("/work1")
SELECT * FROM CSTINSIGHT.CUSTOMER_TXN
DATAPARTITION_NAME (part_2009q4)
TARGET KEYS(current parts(1:4))
FORMAT MIGRATION INTO schema.customer_txn MODIFIED BY IDENTITYOVERRIDE
;
Where:
- /work1 is available on all data nodes.
- The target database table exists.
- The number of source database partitions is 16.
- The number of target database partitions is 4.
- MIGRATION INTO specifies the name of the target table when it differs
  from the source table name.
- IDENTITYOVERRIDE indicates that the contents of the identity column on
  the source database table are to remain intact during the transfer.
In this example, data is repartitioned from 16 database partitions to four
database partitions. Optim High Performance Unload achieves this by
partitioning the data on the source system using the default round-robin
partition map method. Data is then streamed to four named pipes on the target
system, and the LOAD utility is initiated.
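To make the repartitioning step concrete, the following is a simplified Python sketch of a round-robin partition map: a hash of the distribution key indexes into a map whose entries name the owning database partition. The map size and hash function here are illustrative assumptions, not DB2's internal algorithm.

```python
# Illustrative sketch of routing rows with a round-robin partition map.
# MAP_SIZE and the hashing below are simplifications for illustration,
# not DB2's actual internals.

MAP_SIZE = 4096  # a partition map is a fixed-size array of partition numbers


def build_partition_map(partitions):
    """Fill the map by assigning entries to partitions in round-robin order."""
    return [partitions[i % len(partitions)] for i in range(MAP_SIZE)]


def route(key, pmap):
    """Route a distribution-key value to its owning database partition."""
    return pmap[hash(key) % MAP_SIZE]


source_map = build_partition_map(list(range(16)))  # 16 source partitions
target_map = build_partition_map([1, 2, 3, 4])     # 4 target partitions

# The same distribution key generally maps to different partitions in the
# two environments, which is why the data must be repartitioned in flight.
key = ("CUST0001", "2009Q4")
print(route(key, source_map), route(key, target_map))
```

Because the source and target maps disagree for most keys, each unloaded row must be re-routed through the target map before being fed to one of the four named pipes.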
The contents of the identity column, often used to generate a surrogate key,
are retained, which allows fact and dimensional data to be transferred with
informational foreign key constraints intact.