Article Contents
- Overview
- Option 1: Creating a tar or zip Archive
- Option 2: Using the mysqldump Tool
- Option 3: MySQL Replication
- Option 4: Oracle Data Pump
- Option 5: SQL Server Backup
- Other Files to Back Up
Overview
Performing regular backups of your up.time DataStore is a highly recommended practice. The DataStore is the up.time backbone and holds all configuration information and historical performance data. This article outlines five common methods for backing up your DataStore.
Option 1: Creating a tar or zip Archive
One backup option is to regularly create tar or zip archives of your /datastore directory. This is the most straightforward method, but it requires up.time to be stopped for the duration of the backup.
To use this method, simply include the /datastore directory in the tar/zip archive. If you need to recover your DataStore from a tar/zip archive, ensure that all up.time services are stopped and that you delete the existing /datastore files before extracting your archive.
- Stop the up.time services (see Starting and Stopping up.time).
- Archive the datastore directory.
On a Windows system:
- Locate the up.time install directory (default C:\Program Files\uptime software\uptime).
- Archive the \datastore directory using an archiving tool such as WinZip.
- Move the zip archive to another system or drive.
On a Unix system, enter the following commands:
# cd /usr/local/uptime (or cd /opt/uptime, depending on the OS)
# tar -cvf uptime_backup.tar datastore
# gzip uptime_backup.tar
- Start the up.time services (see Starting and Stopping up.time).
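The Unix archive steps can be wrapped in a small script. The sketch below is an illustrative example, not a supported up.time tool: it archives a datastore directory into a date-stamped, gzip-compressed tar file. Run it only while the up.time services are stopped.

```shell
# backup_datastore: archive the up.time datastore directory into a
# date-stamped, gzip-compressed tar file.
#   $1 - up.time install directory (contains "datastore")
#   $2 - destination directory for the archive
backup_datastore() {
    uptime_dir=$1
    backup_dir=$2
    stamp=$(date +%Y%m%d)
    mkdir -p "$backup_dir" || return 1
    # -C changes into the install dir so the archive stores a relative
    # "datastore" path, matching the manual steps above.
    tar -czf "$backup_dir/uptime_backup_$stamp.tar.gz" -C "$uptime_dir" datastore
}

# Example (only while the up.time services are stopped):
# backup_datastore /usr/local/uptime /var/backups/uptime
```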
Option 2: Using the mysqldump Tool
mysqldump is a free utility included with the standard up.time MySQL database. This tool exports the DataStore contents to a human-readable .sql file that can later be used to recreate your DataStore. The commands below detail how to export both configuration and performance data using mysqldump.
Note
The standard format for the mysqldump command is as follows:
mysqldump --single-transaction -u[username] -p[password] -P[port #] --protocol=tcp [dbname]
By adding > mybackup.sql to the commands below, all mysqldump output is redirected to the mybackup.sql file. Change mybackup.sql to a date-stamped file name for easy reference.
Exporting Your Entire DataStore
mysql/bin/mysqldump --single-transaction -uuptime -puptime -P3308 --protocol=tcp uptime > mybackup.sql
Note
The [dbname] value may be uptime_v4 if your database was created in up.time 4.
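To produce the date-stamped file name the note above recommends, the redirect target can be generated with date. A minimal sketch (the dump_name helper is hypothetical, not part of up.time):

```shell
# dump_name: print a date-stamped .sql file name,
# for example uptime_backup_20240101.sql
dump_name() {
    echo "uptime_backup_$(date +%Y%m%d).sql"
}

# Example (assumes the bundled MySQL under the up.time install directory):
# mysql/bin/mysqldump --single-transaction -uuptime -puptime -P3308 \
#     --protocol=tcp uptime > "$(dump_name)"
```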
Exporting Only Your Configuration Information
mysql/bin/mysqldump -uuptime -puptime -P3308 --protocol=tcp \
--ignore-table=uptime.erdc_decimal_data \
--ignore-table=uptime.erdc_int_data \
--ignore-table=uptime.erdc_string_data \
--ignore-table=uptime.ranged_object_value \
--ignore-table=uptime.performance_aggregate \
--ignore-table=uptime.performance_cpu \
--ignore-table=uptime.performance_disk \
--ignore-table=uptime.performance_esx3_workload \
--ignore-table=uptime.performance_fscap \
--ignore-table=uptime.performance_lpar_workload \
--ignore-table=uptime.performance_network \
--ignore-table=uptime.performance_nrm \
--ignore-table=uptime.performance_psinfo \
--ignore-table=uptime.performance_sample \
--ignore-table=uptime.performance_vxvol \
--ignore-table=uptime.performance_who \
--ignore-table=uptime.archive_delenda \
uptime > mybackup.sql
Exporting Only Your Historical Performance Data
mysql/bin/mysqldump -uuptime -puptime -P3308 --protocol=tcp uptime \
performance_aggregate \
performance_cpu \
performance_disk \
performance_esx3_workload \
performance_fscap \
performance_lpar_workload \
performance_network \
performance_nrm \
performance_psinfo \
performance_sample \
performance_vxvol \
performance_who \
erdc_decimal_data \
erdc_int_data \
erdc_string_data > mybackup.sql
Importing Your Backup Data
NOTE: Before importing data, you must stop the up.time services (see steps outlined in the Creating a tar or zip Archive section).
To import your backup data, run the following command:
mysql/bin/mysql -q -f -u uptime -puptime -P3308 --protocol=tcp uptime < mybackup.sql
This process will attempt to insert any non-duplicate data found in your mybackup.sql file. If you need to rebuild your database from scratch, run the resetdb utility before importing your backup file. This utility erases ALL data in your existing DataStore; be absolutely sure that a full backup recovery is your best option before running this command.
resetdb really --nodata
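A guarded wrapper around the import command can prevent piping a missing or empty dump file into the database. This is an illustrative sketch (restore_datastore is a hypothetical helper, not an up.time utility; DRY_RUN=1 only prints the command instead of running it):

```shell
# restore_datastore: import a mysqldump file into the up.time database.
# Refuses to run if the dump file is missing or empty.
# Set DRY_RUN=1 to print the command instead of executing it.
restore_datastore() {
    dump_file=$1
    if [ ! -s "$dump_file" ]; then
        echo "error: dump file '$dump_file' is missing or empty" >&2
        return 1
    fi
    cmd="mysql/bin/mysql -q -f -u uptime -puptime -P3308 --protocol=tcp uptime"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd < $dump_file"
    else
        $cmd < "$dump_file"
    fi
}

# Example (with the up.time services stopped):
# restore_datastore mybackup.sql
```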
Option 3: MySQL Replication
MySQL replication is the most complex of these backup methods, but it is also the most powerful for quick recovery. MySQL's built-in replication feature maintains a completely up-to-date copy of your DataStore on another database instance (on the local system or a secondary server). This copy can be quickly promoted to act as the primary DataStore in the event of a failure, or can be easily copied from the replication server back to the primary server after an outage.
Information on starting replication can be found at:
- MySQL Online Documentation: How to Set Up Replication.
- up.time KB article: Configuring a Reporting Instance.
Option 4: Oracle Data Pump
If your DataStore is running on Oracle, you can use the Data Pump utility to export data from an Oracle database. Refer to the Oracle database utilities page for more information.
Option 5: SQL Server Backup
If your DataStore is running on Microsoft SQL Server, you can use the SQL Backup tool to export data. Refer to the Microsoft Developer Network for more information.
Other Files to Back Up
The following files are not part of the DataStore but should also be backed up on a regular basis, especially if they have been modified or tuned.
- <uptime_dir>/uptime.conf
- <uptime_dir>/license.dat
- <uptime_dir>/wrapper.conf (Windows OS)
- <uptime_dir>/uptime.lax (Unix OS)
- <uptime_dir>/apache/conf/httpd.conf
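These files can be collected with a short loop. A sketch under the assumption that <uptime_dir> is the install root; files that do not exist on a given platform (for example, wrapper.conf on Unix) are simply skipped:

```shell
# backup_conf_files: copy the auxiliary up.time files listed above
# into a backup directory, skipping any that are absent.
#   $1 - up.time install directory
#   $2 - destination directory
backup_conf_files() {
    uptime_dir=$1
    dest=$2
    mkdir -p "$dest" || return 1
    for f in uptime.conf license.dat wrapper.conf uptime.lax \
             apache/conf/httpd.conf; do
        if [ -f "$uptime_dir/$f" ]; then
            cp "$uptime_dir/$f" "$dest/"
        fi
    done
}

# Example:
# backup_conf_files /usr/local/uptime /var/backups/uptime-conf
```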