To optimize the performance of a Tivoli Storage Manager (TSM) backup environment, use the following checklist to make changes to the TSM server settings. These suggestions may not work for every backup environment, and you might be working on an environment
where the optimal solution might differ from what is stated here. The
advice this document offers represents a basic starting point from which
additional configuration and tuning options can be further explored.
IBM Tivoli Storage Manager Performance Tuning tips
Use the following performance checklist for optimizing your Tivoli Storage Manager configuration. Change one setting at a time, monitor the performance, and proceed based on your observations. A single setting may not improve both backup and restore performance, so decide your priority initially before changing any settings.
- Optimizing the server database
- Optimizing server device I/O
- Using collocation
- Increasing transaction size
- Maximizing your tuning network options
- Using LAN-Free
- Using incremental backup
- Using multiple client sessions
- Using image backup
- Optimizing schedules
Optimizing the TSM server database
Tivoli Storage Manager server database performance is critical to the
performance of many Tivoli Storage Manager operations. The following
topics highlight several issues that you should consider:
Tivoli Storage Manager database and log volumes
Using multiple Tivoli Storage Manager server database volumes provides
greater database I/O capacity that can improve overall server
performance. Especially during backup, restore, and inventory
expiration, the Tivoli Storage Manager server process might be able to
spread its I/O activities over several volumes in parallel, which can
reduce I/O contention and increase performance. The recommended number
of Tivoli Storage Manager server database volumes is between 4 and 16,
though some environments might require a smaller or larger number than
this.
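As a minimal sketch, assuming a 5.x server and hypothetical volume paths (each path on its own disk or LUN), the database could be spread over several volumes like this:

```
/* define four database volumes, one per LUN (FORMATSIZE is in MB) */
define dbvolume /tsmdb1/db01.dsm formatsize=4096
define dbvolume /tsmdb2/db02.dsm formatsize=4096
define dbvolume /tsmdb3/db03.dsm formatsize=4096
define dbvolume /tsmdb4/db04.dsm formatsize=4096

/* make the newly formatted space usable by the database */
extend db 16384
```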
Tivoli Storage Manager performance improves when you use a separate
disk (a LUN) for each database volume. If more than one Tivoli Storage
Manager server database volume exists on a single physical disk or
array, there might be no additional performance gain due to I/O
contention. Check the platform performance monitor to determine whether
the Tivoli Storage Manager server database disk-busy rate is higher than
80%, which indicates a bottleneck. In general, avoid placing a Tivoli
Storage Manager server database volume on a disk that has high
sequential I/O activity such as might occur with a Tivoli Storage
Manager disk storage-pool volume.
For installations that use large disk-storage subsystems such as IBM
Total Storage Enterprise Storage Server (ESS) or DS family, consider
the location of the Tivoli Storage Manager server database and recovery
log volumes with respect to the Redundant Array of Independent Disks
(RAID) ranks within the storage subsystem. You must review the
performance information available on a RAID rank basis to determine the
optimal placement. Large Tivoli Storage Manager server environments
might want to dedicate a RAID rank to a single Tivoli Storage Manager
server database or recovery log volume to boost performance.
To take advantage of hardware availability, most Tivoli Storage
Manager server database and recovery log volumes are placed on
high-availability arrays that use some form of hardware RAID 1, 5, or
10, or use Tivoli Storage Manager mirroring.
Write cache
You might want to enable write cache of the adapter or subsystem for the disks of Tivoli Storage Manager server database, Tivoli Storage Manager recovery log, and the Tivoli Storage Manager storage pools. Write cache provides superior performance in writing random database I/Os to cache. However, this is only feasible if the disk adapter or subsystem can guarantee recovery; for example, the disk-adapter or subsystem hardware must be battery-powered in case of failure.
For best results, use write cache for all RAID 5 arrays because there
can be a large RAID 5 write penalty if the parity block is not available
in the cache for the blocks that are being written.
Tivoli Storage Manager mirroring
The Tivoli Storage Manager server provides database and recovery
volume mirroring that increases the availability of the Tivoli Storage
Manager server, in case of a disk failure. Use the Tivoli Storage
Manager server database page shadow to provide further availability.
Enable the page shadow using the dbpageshadow yes server option.
Also Read: How to increase or decrease TSM DB, active log and archive log size ?
The database page shadow protects against partial page writes to the
Tivoli Storage Manager server database that can occur if the Tivoli
Storage Manager server is stopped, due to a hardware or software
failure. This is possible because disk subsystems use 512 byte sectors,
but Tivoli Storage Manager servers use 4 KB pages. The page shadow
allows the Tivoli Storage Manager server to recover from these
situations, which might otherwise require restoring the Tivoli Storage
Manager server database from a recent backup.
You might want to always use the Tivoli Storage Manager server
database page shadow. There is a slight performance impact if you use
this feature. You can place the page shadow file in the server
installation directory.
If the Tivoli Storage Manager server is using Tivoli Storage Manager
mirroring for the database volumes and using the database page shadow,
it is safe to enable parallel I/Os to each of the Tivoli Storage Manager
server database volume copies. Doing so will improve overall Tivoli
Storage Manager server database performance by reducing I/O response
time.
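The following sketch combines these settings; the volume paths and shadow-file name are examples, and dbpageshadowfile is optional (it defaults to dbpgshdw.bdt in the server directory):

```
* dsmserv.opt
DBPAGESHADOW YES
DBPAGESHADOWFILE dbpgshdw.bdt
MIRRORWRITE DB PARALLEL
```

```
/* define a Tivoli Storage Manager mirror copy of a database volume */
define dbcopy /tsmdb1/db01.dsm /tsmmirror1/db01copy.dsm
```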
Tivoli Storage Manager server database buffer
In general, the performance of the Tivoli Storage Manager server
should be better if you increase the size of the database buffer pool.
The bufpoolsize server option is specified in KB. Beginning
with Tivoli Storage Manager 5.3, this option default is set to 32 768
(32 MB). The buffer pool size can initially be set to 25% of server real
memory or the process virtual memory limit (whichever is lower). For
example, if a 32-bit server has 2 GB RAM, the Tivoli Storage Manager
server database buffer pool size may be set as bufpoolsize 524288. The performance benefit may diminish when the buffer size goes beyond 1 GB.
You can issue the QUERY DB F=D server command to display the database
cache hit ratio. As a general rule, the Tivoli Storage Manager server
database cache hit ratio should be greater than 98%.
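For example, on a 32-bit server with 2 GB of RAM, a starting point following the 25% rule would be:

```
* dsmserv.opt (value in KB): 512 MB buffer pool
BUFPOOLSIZE 524288
```

You can then issue QUERY DB F=D and confirm that the cache hit percentage stays above 98% before experimenting with larger values.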
Tivoli Storage Manager server database size
The way that you use the size or space of the Tivoli Storage Manager server database impacts Tivoli Storage Manager scalability. The consideration for multiple Tivoli Storage Manager server instances is often motivated by the size of the Tivoli Storage Manager server database.
To estimate the space usage of a Tivoli Storage Manager server
database, assume that the database requires roughly 3-5% of the size of
the total file-system data it protects. Within that space, figure about
600 bytes per object for the primary copy of an
object. For hierarchical storage management (HSM) objects with backups,
those are really two different objects to the Tivoli Storage Manager
server, so multiply the number of bytes required per object by 2. If you
keep a copy of an object (in a Tivoli Storage Manager copy pool), that
is an additional 200 bytes for each copy of each object. Additional
versions of backup objects also count as new objects - 600 bytes per
object. These numbers could go up or down depending on the directory
depth and length of the path and file names of the data that is stored.
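As an illustrative calculation using these figures: storing 10 million objects requires roughly 10,000,000 × 600 bytes ≈ 6 GB for the primary copies, plus about 10,000,000 × 200 bytes ≈ 2 GB for one copy-pool copy of each object, and each additional backup version kept adds another 600 bytes per object.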
A common practice is to keep the Tivoli Storage Manager server database
at around 40-60 GB, although it can be larger (some customers report
200+ GB). A larger database needs more time for periodic maintenance and
optimization; for example, running the Tivoli Storage Manager server
auditDB process can take about one hour for each GB in the database, and
during that maintenance the Tivoli Storage Manager server does not
perform any normal operations.
Optimizing server device I/O
The Tivoli Storage Manager server performance depends upon the system
I/O throughput capacity. Any contention of device usage when data flow
moves through device I/O impacts Tivoli Storage Manager throughput
performance. There are several performance strategies that might help in
managing the devices.
You might want to study the system documentation to learn which slots
use which particular PCI bus and put the fastest adapters on the fastest
busses. For best LAN backup-to-disk performance, you might want to put
network adapters on a different bus than the disk adapters. For best
disk-to-tape storage pool migration performance, put disk adapters on a
different bus than the tape adapters.
Because parallelism allows multiple concurrent operations, it is reasonable to use multiple busses, adapters, LANs and SANs, disk subsystems, disks, tape drives, and Tivoli Storage Manager client sessions.
When you use a DISK storage pool (random access), it is better to
define multiple disk volumes and one volume per disk (LUN). If using a
FILE storage pool (sequential access), it is helpful to use multiple
directories in the device class and one directory per disk (LUN).
Consider configuring enough disk storage for at least one day's
backups and configuring disk subsystem LUNs with regard for performance.
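A hedged sketch of this layout, with hypothetical pool, path, and device-class names:

```
/* random-access DISK pool: one volume per disk/LUN (FORMATSIZE in MB) */
define volume backuppool /tsmdisk1/bp01.dsm formatsize=32768
define volume backuppool /tsmdisk2/bp02.dsm formatsize=32768

/* sequential-access FILE device class: one directory per disk/LUN */
define devclass fileclass devtype=file mountlimit=8 maxcapacity=4096m directory=/tsmfile1,/tsmfile2
```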
Using collocation
Restore performance for large file servers is poor if the backup data
is not collocated, usually because the backup data is spread over a
large number of tape volumes and thus requires many tape mounts.
Tivoli Storage Manager collocation is designed to maintain
optimal data organization for faster restore, backupset generation,
export, and other processes because of less mount and locate time for
tape and lower volume contention during multiple client restores. More
storage volumes might be required when using collocation.
The following storage pool parameters (for sequential-access only) specify the types of Tivoli Storage Manager collocation:
| Parameter | Behavior |
|---|---|
| No | Data is written to any available volume |
| Node | Data is written to any volume with data for this node |
| Filespace | Data is written to any volume with data for this filespace |
| Group | Data is written to any volume with data for this group of nodes |
Collocation by node
To avoid volume contention, the nodes that have a low chance of restore at the same time can be collocated together by specifying collocation=node in the DEFINE STGPOOL (sequential access) or UPDATE STGPOOL (sequential access) command.
When storage pool migration has to run during the backup window, the
nodes that back up to disk at the same time should be collocated
together. This reduces volume mounts, because all data that is
collocated can be moved at one time.
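For example, assuming an existing sequential-access pool named tapepool:

```
/* write each node's backup data to its own set of volumes */
update stgpool tapepool collocate=node
```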
Also Read: How TSM Server determines the eligibility of files during different types of backup ?
Faster tape volume reclamation can occur when database data is
separated from file server data, either by using collocation by node or
by keeping data in separate storage pools. This is because database
backups are typically large enough to fill the tape volumes and whole
tape volumes can expire at one time. This leads to tape volumes being
reclaimed with no data movement. If file server data is mixed in with
database data in a single non-collocated storage pool, volumes can
contain small amounts of file server data that might not expire at the
same time as the database data, thus leading to the volumes having to be
mounted and read during reclamation processing.
Collocation by node is best at restore time, but not optimized for
multi-session restore operations. Also, collocation by node requires the
highest volume usage and the highest number of mounts for migration and
reclamation.
Collocation by filespace
Restoring a large file server with multiple large file systems requires multiple restore commands. Backup data for multiple file systems might be collocated by node and contained on the same sequential media volumes. If this is the case, poor performance can result because of contention between restore processes for access to these volumes.
File servers with two or more large file systems should use a
Tivoli Storage Manager storage pool that is collocated by filespace by
specifying collocation=filespace. Collocation by filespace honors the "virtual" file systems that are defined with the Tivoli Storage Manager client option virtualmountpoint.
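For example, again assuming a pool named tapepool:

```
/* keep each file space on its own set of volumes */
update stgpool tapepool collocate=filespace
```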
Collocation by group
You might want to use collocation by group (collocation=group) for copy storage pools whose volumes are taken offsite, if the groups defined are sufficiently large that only a small number of volumes must be updated on a daily basis. You might also want to use collocation by group for primary storage pools on tape. Nodes that are not grouped are collocated by node.
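A sketch with hypothetical group, node, and pool names:

```
/* group nodes that can share volumes, then collocate the pool by group */
define collocgroup payrollgrp
define collocmember payrollgrp node1
define collocmember payrollgrp node2
update stgpool offsitepool collocate=group
```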
Collocation group size
The optimal collocation group size depends on the tape volume size and
the number of mount points (tape drives) available for restore of a
single node.
Also Read: Full vs Differential vs Incremental vs Progressive Incremental Backups
The volume size depends on the tape drive technology, tape media
technology, and the data characteristics (such as compressibility). The
PERL script fullvolcapacity can be used to determine the average full
volume capacity for each sequential-access storage pool on the server.
The Administration Center wizard determines the volume size
automatically by using the first sequential-access storage pool found in
the default management class destination.
The volume size should be multiplied by the number of mount points or
tape drives available for restore of these nodes. Consider the
total number of tape drives that the Tivoli Storage Manager server has
available and the number that might be in use for other processes
(database backup, storage pool migration, node backups, other restores)
that are concurrent with a restore. The Administration Center wizard
uses a value of 4.
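For example, with full volumes averaging 400 GB and four tape drives available for a restore, a reasonable collocation group size would hold roughly 4 × 400 GB = 1.6 TB of stored data (these numbers are purely illustrative).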
Automatic collocation
If you have no knowledge about the relationships between nodes, no
knowledge about the node data growth rates, or no knowledge about how
existing data is collocated, you might want to use automatic methods to
collocate the groups. Use automatic methods first, then manually
fine-tune the types of collocation.
Active-data collocation
Beginning with Tivoli Storage Manager 5.4, a new storage pool called
active-data pool is available to collocate the active data and thus
improve Tivoli Storage Manager restore performance significantly. By
using this feature during the restore process, data can be directly
restored from an active-data pool that stores only active data.
Therefore, you avoid the overhead of distinguishing active versus
inactive data in a primary storage pool.
You can use the following two methods to collocate active data into an active-data pool:
- For the legacy data in a primary storage pool, issue the COPY ACTIVEDATA Tivoli Storage Manager server command so that the active data in that primary storage pool can be extracted and migrated to the active-data pool.
- For the new data coming from Tivoli Storage Manager backup sessions, configure the active-data pool and the primary storage pool appropriately so that the active data can be simultaneously backed up to the active-data pool and the primary storage pool.
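A minimal sketch of both methods from the list above, assuming Tivoli Storage Manager 5.4 or later and hypothetical pool, domain, and device-class names:

```
/* create the active-data pool and let nodes in the domain use it */
define stgpool activepool ltoclass pooltype=activedata maxscratch=50
update domain standard activedestination=activepool

/* method 1: extract legacy active data from a primary pool */
copy activedata tapepool activepool

/* method 2: simultaneously write new backups to both pools */
update stgpool backuppool activedatapools=activepool
```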
Increasing transaction size
Generally, by increasing Tivoli Storage Manager transaction size you
can increase the throughput of client backup and server data movement.
The following topics offer some helpful recommendations.
Client session transaction size
The following options are designed to control the transaction size for
a Tivoli Storage Manager client/server session for the backup-archive
client:
txngroupmax
The txngroupmax option specifies the maximum number of
objects (files and directories) that are included in a client session
transaction. For Tivoli Storage Manager 5.2 and later, this option is
set globally for the Tivoli Storage Manager server and can be set
individually for each node by issuing the UPDATE NODE command.
txnbytelimit
The txnbytelimit option specifies the maximum object size in
KB that is included in a client session transaction. A single file
exceeding this size is always processed as a single transaction.
Also Read: Take TSM DB backup immediately if you notice these messages
Tivoli Storage Manager for Space Management (HSM), Tivoli Storage
Manager for Databases, Tivoli Storage Manager for Mail, Tivoli Storage
Manager for Enterprise Resource Planning, and other Tivoli Storage
Manager applications will typically send a single object (file or
database) in a transaction.
Server process transaction size
The following options control the transaction size for Tivoli Storage
Manager server data movement, which can influence the performance of
storage pool migration, storage pool backup, storage pool restore,
reclamation, and move data operations:
movebatchsize
The movebatchsize option specifies the maximum number of
objects (server physical bitfiles) that are included in a server data
movement transaction.
movesizethresh
The movesizethresh option specifies the maximum size of objects (in MB) that is included in a server data-movement transaction.
Recommended options
The following server option settings are recommended (these are the Tivoli Storage Manager 5.3 defaults):
- txngroupmax 256
- movebatchsize 1000
- movesizethresh 2048
For the backup-archive client, set the txnbytelimit option to 25600.
For the nodes that back up small files directly to tape, you can update the node definition with the UPDATE NODE nodename txngroupmax=4096 command. Also, the client option can be set as txnbytelimit 2097152.
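Putting these recommendations together, with a hypothetical node name filesrv1:

```
* dsmserv.opt (Tivoli Storage Manager 5.3 defaults)
TXNGROUPMAX 256
MOVEBATCHSIZE 1000
MOVESIZETHRESH 2048
```

```
/* larger transactions for a node that backs up small files direct to tape */
update node filesrv1 txngroupmax=4096
```

```
* dsm.opt (client)
TXNBYTELIMIT 25600
```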
Increasing the transaction size can improve performance when moving
data to tape, including backing up a storage pool from disk to tape. It
might not be beneficial when moving data to disk, and larger
transactions consume more Tivoli Storage Manager server recovery log space.
Also Read: Taking TSM server DB backup when the server is down
You might want to increase the Tivoli Storage Manager server
recovery-log size. 4 GB is sufficient for many installations with 13.5
GB being the maximum.
Interaction
To optimize performance, configure the system for minimal interaction
and information display when backing up or restoring a large number of
files. The following options are designed to control the amount of
interaction that is required and information that is displayed to the
user during a client backup or restore session:
tapeprompt
The tapeprompt option specifies whether you want Tivoli
Storage Manager to wait for a tape to be mounted when one is required
for a backup, archive, restore, or retrieve process, or to prompt you
for a choice. Specifying no means that no prompt is displayed and the
process waits for the appropriate tape to be mounted. Tape prompting
does not occur during a scheduled operation regardless of the setting of
the tapeprompt option.
quiet
The quiet option limits the number of messages that display
during processing. This option also affects the amount of information
that is reported in the Tivoli Storage Manager client schedule log and
the Windows client event log. If the quiet option is not specified, the default option, verbose,
is used. By default, information about each file that is backed up or
restored is displayed. Errors and session statistics are displayed
regardless of whether the quiet option or the verbose option is specified.
Retry
A retry occurs when processing is interrupted. Because a retry resends
data, increasing the Tivoli Storage Manager transaction size can
degrade throughput when data must frequently be re-sent.
Retry statistics can be found by checking client session messages or the
schedule log file (when the verbose option is set).
To avoid retry operations, use the client options compressalways yes and tapeprompt no, or quiet. To reduce the number of retries, you can tune the client option changingretries.
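For example, the client options file (dsm.opt or dsm.sys) could combine the retry-related settings described above:

```
* dsm.opt
COMPRESSALWAYS YES
TAPEPROMPT NO
QUIET
CHANGINGRETRIES 2
```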
You can schedule backup/archive processes when the files are not in
use or use exclude options to exclude files likely to be open. On a
Windows client, you can use Open File Support. To change how open files
are managed, use the copy group serialization parameter.
Tuning network options
Tivoli Storage Manager performance is heavily impacted by network
configuration because Tivoli Storage Manager moves data through a
network. Some general recommendations about tuning the network options
for Tivoli Storage Manager performance are listed here.
Also Read: Starting TSM server in maintenance mode
When using Fast Ethernet (100 Mb/sec), make sure that the speed and
duplex settings are set correctly. Do not rely on the auto-detect
feature, because it has been found to be a frequent cause of poor
network performance. If the client adapter, network switches or hubs,
and server adapter all support 100 Mb full duplex, choose this setting
for all of them.
When using Gb Ethernet, if the client adapter, network switches, and
server adapter all support jumbo frames (9000 Byte MTU), then enable
this feature to provide improved throughput with lower host CPU usage.
The TcpNoDelay, TcpWindowSize, Tcpbufsize, and Diskbuffsize options are designed to control Tivoli Storage Manager data movement through a network.
TcpNoDelay
The Tivoli Storage Manager tcpnodelay option specifies
whether TCP/IP buffers successive small (less than the network MSS)
outgoing packets before sending them. This option should always be set to yes.
TcpWindowSize
The Transmission Control Protocol (TCP) window size is a parameter provided in the TCP/IP standard that specifies the amount of data that can be buffered at any one time in a session. If one session partner reaches this limit, then it cannot send more data until an acknowledgement is received from the other session partner. Each acknowledgement includes a window size update from the sending partner. A larger TCP window size can allow the sender to continue sending data and thus improve throughput. A larger TCP window size is particularly useful on reliable (no lost packets), long distance, or high latency networks.
Also Read: Use these 2 TSM server options to prevent network issues during backup schedules
The Tivoli Storage Manager tcpwindowsize option is available
on both client and server, and is specified in KB. Many platforms
require that you tune the operating system to use a TCP window size
larger than 65 535 bytes (64 KB - 1). If you do not tune the system and
the Tivoli Storage Manager tcpwindowsize option is set to 64 or
larger, there might be a performance impact caused when the operating
system chooses to use a smaller value than that requested by Tivoli
Storage Manager. This is why using tcpwindowsize 63 is recommended. If the client backup operations are over a network that might benefit from a large TCP window size, use tcpwindowsize 63 first, and then experiment with larger values.
For a large TCP window size to be used between Tivoli Storage Manager
client and Tivoli Storage Manager server, both must support this
feature, which is also referred to as TCP window scaling, or RFC 1323
support. For support information about TCP window scaling, see the
Tivoli Storage Manager Performance Tuning Guide in the Tivoli Storage Manager V5.3, V5.4, and V5.5 Information Center at http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp.
Tcpbufsize
The Tivoli Storage Manager tcpbufsize option is designed to
specify the size of the buffer that is used for TCP/IP SEND requests.
During a restore operation, client data moves from the Tivoli Storage
Manager session component to a TCP communication driver. The tcpbufsize
option is used to determine if the server sends the data directly from
the session buffer or copies the data to the TCP buffer. Although it
uses more memory, a larger buffer can improve communication performance.
Diskbuffsize
The Tivoli Storage Manager diskbuffsize option is designed to
specify the maximum disk I/O buffer size in KB that the Tivoli Storage
Manager client can use when reading files. For Tivoli Storage Manager
backup-archive or HSM migration client, you can achieve optimal
performance if the value for this option is equal to or smaller than the
amount of file read-ahead provided by the client file system. A larger
buffer requires more memory and might not improve performance or might
even degrade performance.
In Tivoli Storage Manager 5.3, the previous largecommbuffers option is replaced by the diskbuffsize option.
Recommended options
The following Tivoli Storage Manager server option settings are recommended (these are the defaults):
- tcpwindowsize 63
- tcpnodelay yes
- tcpbufsize 32 (not used for Windows)
The following Tivoli Storage Manager client options and defaults are recommended:
- tcpwindowsize 63
- tcpnodelay yes
- tcpbuffsize 32
- diskbuffsize 32, 256 (for AIX, if you set the enablelanfree option to no), 1023 (for HSM, if you set the enablelanfree option to no)
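A consolidated sketch of these recommendations (the diskbuffsize value shown is the AIX non-LAN-free case from the list above):

```
* dsmserv.opt (server)
TCPWINDOWSIZE 63
TCPNODELAY YES
TCPBUFSIZE 32
```

```
* dsm.opt (client)
TCPWINDOWSIZE 63
TCPNODELAY YES
TCPBUFFSIZE 32
DISKBUFFSIZE 256
```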
Compression
The Tivoli Storage Manager client compression option instructs
the Tivoli Storage Manager client to compress files before
sending them to the Tivoli Storage Manager server.
Compressing files reduces the file data storage space and can improve
throughput over slow networks with a powerful client. The throughput,
however, can be degraded when running on a slow client system using a
fast network, because software compression uses significant client
processor resources and costs additional elapsed time.
Also Read: Steps to do after successful TSM DB restore
If the option is set as compressalways yes, compression
continues even if the file size increases. To stop compression if the
file size grows and to resend the file uncompressed, set the option to compressalways no.
If the option is set as compression yes, the compression processing can be controlled in the following ways:
- Use the include.compression option to include files within a broad group of excluded files for compression processing.
- Use the exclude.compression option to exclude specific files or groups of files from compression processing, especially for the objects that are already compressed or encrypted, such as gif, jpg, zip, mp3, and so on.
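For example, in the client options file (UNIX-style patterns; the paths are examples to adapt):

```
* dsm.opt
COMPRESSION YES
exclude.compression /.../*.zip
exclude.compression /.../*.jpg
exclude.compression /.../*.mp3
include.compression /home/data/.../*
```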
The preferred setup options for the client are as follows:
- For fast network AND fast server, set compression no
- For LAN-Free with tape, set compression no
- For slow network or slow server, set compression yes
- Typically, set compressalways yes
Compression statistics can be monitored for each client and the options can be adjusted as needed.
If the Tivoli Storage Manager client compression and encryption is
used for the same file during backup, the file is first compressed and
then encrypted, because this results in a smaller file. When the file is
restored, the file is decrypted first, and then decompressed.
Server storage can be saved using tape hardware compaction rather than Tivoli Storage Manager client compression.
Using LAN-free data movement
Tivoli Storage Manager LAN-free data movement offloads LAN traffic and improves Tivoli Storage Manager server scalability because of reduced server I/O requirements. Therefore, by using LAN-free data movement, higher backup or restore throughput is possible in some circumstances where large objects are moved through the SAN.
In LAN-free data movement, the metadata moves through a LAN between a
storage agent (client) and a server. Thus, LAN-free data movement is not
designed for effectively moving small files. For instance, when a very
small file of x bytes moves through the SAN, the file's metadata of xxx
bytes must still go through the LAN. It is therefore inefficient to use
Tivoli Storage Manager LAN-free data movement to move a large number of
small files. In general, use Tivoli Storage Manager LAN-free data
movement where the SAN bandwidth is significantly greater than that of
the LAN; if you move many small files LAN-free, expect slower
performance.
Using LAN-free data movement is best with Tivoli Storage Manager Data
Protection products and API clients that back up and restore large
objects, such as Tivoli Storage Manager for Mail, Tivoli Storage Manager
for Databases, and Tivoli Storage Manager for Enterprise Resource
Planning.
Tivoli Storage Manager 5.3 provides the following LAN-free performance improvements:
- Processor usage is reduced, especially for API clients.
- The lanfreecommmethod sharedmem option is used for all Tivoli Storage Manager 5.3 LAN-free data-movement clients, including AIX, HP-UX, HP-UX on Itanium2, Linux, Solaris, and Windows.
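A sketch of the client side of a LAN-free setup, assuming a storage agent is already installed and configured (1510 is shown as an example shared-memory port):

```
* dsm.sys (client system-options file)
ENABLELANFREE YES
LANFREECOMMMETHOD SHAREDMEM
LANFREESHMPORT 1510
```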
Using incremental backup
Because of Tivoli Storage Manager's progressive incremental backup
mechanism, it is not necessary to do weekly full backups for file
servers. Tivoli Storage Manager incremental backup compares the client
file system with Tivoli Storage Manager server inventory to determine
the files and directories that are new or changed. Then Tivoli Storage
Manager incrementally backs up the new or changed files and directories.
Tivoli Storage Manager incremental backup also expires those deleted
files and directories. Therefore, no unnecessary data is backed up,
there is no need to restore the same file multiple times, less network
and server bandwidth is needed, and less server storage is consumed.
Also Read: TSM Storage Pool Concepts (V7 Revised)
For effective incremental backup, use the journal-based service that
is available on both Windows and AIX platforms. Journal-based backup
uses a journal service that runs continuously on the Tivoli Storage
Manager client system to detect files or directories that are changed
and maintain that information in a journal for use during a later
backup. When you use the journal, during an incremental backup the
client file system is not scanned and the file and directory attributes
are not compared to determine which files are to be backed up.
Eliminating these operations means that journal-based incremental backup
can perform much faster than standard full incremental backup and use
significantly less virtual memory.
Journal-based backup is multi-threaded for both change notification
and backup operations. Beginning in Tivoli Storage Manager 5.3, the
journal service supports multiple incremental-backup sessions
concurrently.
Journal-based backup is installed using the client GUI setup wizard
and is supported on all 32-bit Windows clients, as well as in clustered
configurations. The journal options are specified in the tsmjbbd.ini
file in the Tivoli Storage Manager client installation directory. The
default configuration parameters are usually optimal. Just add the
appropriate file systems to be monitored and the appropriate entries in
the journal exclude list.
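A minimal tsmjbbd.ini sketch for a Windows client, assuming the C: and D: drives are to be journaled (section names follow the sample file shipped with the client; the exclude pattern is an example):

```
[JournalSettings]
Errorlog=jbberror.log

[JournaledFileSystemSettings]
JournaledFileSystems=c: d:

[JournalExcludeList]
*.tmp
```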
Beginning in Tivoli Storage Manager 5.4, a method using the client option memoryefficientbackup diskcachemethod
is available to improve performance of the Tivoli Storage Manager
backup-archive client for executing progressive incremental backup. In
comparison to using one server query per directory (with the memoryefficientbackup yes
option), this method uses disk caching for inventory data, reducing the
amount of virtual memory used by the Tivoli Storage Manager
backup-archive client process. This method also makes it possible to
back up much larger file systems as defined by the number of files and
directories in that file system. Using disk caching does not increase
the workload on the Tivoli Storage Manager server.
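For example, in the client options file (the cache directory is an assumed path):

```
* dsm.opt
MEMORYEFFICIENTBACKUP DISKCACHEMETHOD
DISKCACHELOCATION /tsmdiskcache
```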
Using multiple client sessions
The Tivoli Storage Manager backup-archive client can run multiple
parallel sessions that can improve backup and restore performance.
Multi-session backup-archive client
You can enable multiple sessions with the server by setting the Tivoli Storage Manager client option resourceutilization
during backup or restore. Substantial throughput improvements can be
achieved in some cases, but multiple sessions are not likely to improve
incremental backup of a single large file system with a small percentage
of changed data.
If backup is direct to tape, the client-node maximum-mount-points-allowed parameter, maxnummp,
must also be updated at the server by issuing the UPDATE NODE command.
This specifies the maximum number of tape volumes that can be mounted
for this node.
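For instance, with a hypothetical node filesrv1 that backs up directly to tape:

```
* dsm.opt (client): enable multiple parallel sessions
RESOURCEUTILIZATION 5
```

```
/* server: allow the node to mount up to two tape volumes at once */
update node filesrv1 maxnummp=2
```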
For a restore operation, only one session is used to restore all data
in DISK storage pools. For data in sequential storage pools (including
FILE storage pools), the number of sessions is the same as the number of
sequential volumes with data to be restored. However, the number of
sessions can be limited by the node maxnummp parameter and the number of mount points or tape drives in the device class.
Virtual mount points
The client option virtualmountpoint dir_name can be used to inform the Tivoli Storage Manager server that the dir_name
directory and all data in it should be considered as a separate
"virtual" file system for backup purposes. This allows better
granularity of the backup processing, such as reducing backup-process
virtual-memory usage and enabling multiple parallel-backup processes.
Create virtual mount points carefully so that each "virtual" file
system does not contain more than several million files. This option is
only available on UNIX platforms and has no impact on the applications
or users of the file system.
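A sketch for a UNIX client with two large directory trees under a single physical file system (the paths are examples):

```
* dsm.sys
VIRTUALMOUNTPOINT /home/project1
VIRTUALMOUNTPOINT /home/project2
```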
Multiple concurrent backup and restore operations
Scheduling incremental backups for multiple file systems concurrently
on one Tivoli Storage Manager client system is feasible with any of the
following methods:
- Using one Tivoli Storage Manager node name, running one Tivoli Storage Manager client scheduler, and specifying the Tivoli Storage Manager client option resourceutilization with a value of 5 or higher with multiple file systems included in the schedule or domain specification.
- Using one Tivoli Storage Manager node name, running one Tivoli Storage Manager client scheduler, and scheduling a command that runs a script on the client system that includes multiple Tivoli Storage Manager command-line client statements (for example, using multiple dsmc commands).
- Using multiple Tivoli Storage Manager node names and running one Tivoli Storage Manager client scheduler for each node name, in which each scheduler uses a unique client options file.
You might want to configure multiple sessions for Tivoli Storage
Manager Data Protection products because each product has its own
configuration options. You might want to configure multiple sessions for
the Tivoli Storage Manager backup-archive client by taking one of the
following actions:
- Specifying the client option resourceutilization with a value of 5 or higher.
- Running multiple client instances in parallel, each on a separate file system.
Use a multi-session restore operation only if the restore data is
stored on multiple sequential storage pool volumes and the restore
command option contains an unqualified wildcard character. The following
example uses an unqualified wildcard character:
e:\users\*
Using image backup
Tivoli Storage Manager image backup can be used to optimize large
file-system backup and restore performance using sequential block I/O
and avoiding file system overheads, such as file open() and close().
Therefore, Tivoli Storage Manager image backup can approach the
speed of the underlying hardware device.
You can use the Tivoli Storage Manager BACKUP IMAGE command to create
an image backup of one or more volumes on the client system. The Tivoli
Storage Manager API must be installed before you can use the BACKUP
IMAGE command.
A Tivoli Storage Manager image backup operation can restore the
file system to a point-in-time image backup, to the most recent image
backup, or to the most recent image backup plus the changes from the
last incremental backup.
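For example, from the backup-archive command line (the file-system name is an example):

```
# back up a volume as an image
dsmc backup image /fs1

# restore the image, then apply changes from later incremental backups
dsmc restore image /fs1 -incremental -deletefiles
```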
Offline and online image backup
An offline image backup prevents read or write access to the volume by
other system applications during the operation. Offline image backup is
available for AIX, HP-UX, and Solaris clients.
For Linux86 and Linux IA64 clients, Tivoli Storage Manager performs an
online image backup of the file systems residing on a logical volume
created by Linux Logical Volume Manager. During the backup, the volume
is available to other system applications.
For Windows clients, if Logical Volume Snapshot Agent (LVSA) is
installed and configured, Tivoli Storage Manager performs an online
image backup during which the volume is available to other system
applications. To specify the LVSA snapshot cache location, you can use
the snapshotcachelocation and include.image options.
Multiple concurrent backup and restore operations
Scheduling image backups for multiple file systems concurrently on one
Tivoli Storage Manager client system is feasible with one of the
following methods:
- Using multiple Tivoli Storage Manager node names and running one Tivoli Storage Manager client scheduler for each node name, in which each scheduler uses a unique client options file.
- Using one Tivoli Storage Manager node name, running one Tivoli Storage Manager client scheduler, and scheduling a command that runs a script on the client system that includes multiple Tivoli Storage Manager command line client statements (example: using multiple dsmc commands).
For the best image-backup performance, use LAN-free data movement with
tape, and use parallel-image backup and restore sessions for the
clients with multiple file systems.
Optimizing schedules
To reduce resource contention and improve the performance of Tivoli
Storage Manager operations, a common strategy is to create schedules
so that Tivoli Storage Manager operations overlap as little as possible.
The Tivoli Storage Manager operations that should be scheduled in
different time periods include client backup, storage pool backup,
storage pool migration, database backup, inventory expiration, and
reclamation.
You can stagger Tivoli Storage Manager client session start times across the schedule window by using the SET RANDOMIZE percent command. You can disable automatic expiration with the server option expinterval 0; it is better to define an administrative schedule that runs expiration at a set time. Scheduling improvements in Tivoli Storage Manager 5.3 include calendar-type administrative and client schedules and new commands such as MIGRATE STGPOOL and RECLAIM STGPOOL.
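A sketch of such a schedule layout, with illustrative times and pool names:

```
/* spread client start times across 25% of each startup window */
set randomize 25

/* run expiration at a fixed time (with expinterval 0 in dsmserv.opt) */
define schedule expire_adm type=administrative cmd="expire inventory" active=yes starttime=06:00 period=1 perunits=days

/* give migration and reclamation their own windows (5.3 commands) */
define schedule migrate_adm type=administrative cmd="migrate stgpool backuppool lowmig=0" active=yes starttime=09:00 period=1 perunits=days
define schedule reclaim_adm type=administrative cmd="reclaim stgpool tapepool threshold=60" active=yes starttime=12:00 period=1 perunits=days
```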