up.time includes user-definable parameters that control some aspects of its behavior, including the following:
From a configuration perspective, there are two types of parameters:
Configuration parameters that are not directly tied to the up.time Core service (and thus do not require a restart of it) can be modified directly in the up.time GUI (shown below):
Only the variables whose default values have been modified appear in up.time Configuration.
Configuration parameters that are directly tied to the up.time Core service are found in the uptime.conf file. uptime.conf is a text file that you can modify in any text editor, and can be found in the root up.time installation directory.
In addition to the up.time database, uptime.conf parameters affect a variety of up.time behaviors.
Not all of the settings listed in this section will necessarily be found in your particular uptime.conf file.
In addition to the Web interface, the up.time Monitoring Station consists of the following services:
These services run in the background and start automatically after the operating system on the server hosting up.time starts. However, system administrators may need to stop the up.time services - for example, before making configuration changes to the uptime.conf file, performing an upgrade, or archiving the DataStore.
To stop the up.time services in Windows, do the following:
To stop the up.time services on Solaris or Linux, do the following:
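On Solaris or Linux, the services are typically controlled through init scripts. The script names below (uptime_core, uptime_httpd, uptime_datastore) are assumptions based on a common default installation and may differ in your version; a sketch:

```
# Stop the up.time services; script names are assumed and may vary
/etc/init.d/uptime_core stop
/etc/init.d/uptime_httpd stop
/etc/init.d/uptime_datastore stop
```

Restarting follows the same pattern with the start argument, in the reverse order.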
To restart the up.time services in Windows, do the following:
To restart the up.time services on Solaris or Linux, do the following:
Some of the Monitoring Station’s features require integration with other elements that make up your infrastructure. In some cases configuration is mandatory (e.g., an SMTP server will need to have been set at the time of installation), while in others it is required only when particular up.time features are used (e.g., using the Web Application Transaction monitor requires you to provide up.time with your proxy server settings). The following sections outline how to configure up.time to communicate with servers and databases.
The database settings are used to determine how up.time communicates with the DataStore, and how it will perform a database health check. The following are the database-related parameters in the uptime.conf file.
Parameter | Description
---|---|
Connection |
dbType | The type of database that is being used to store data from up.time. By default, up.time uses a JDBC (Java Database Connectivity) driver, and the driver used to connect to the DataStore corresponds to the database selected.
dbHostname | The name of the system on which the database is running. The default is localhost.
dbPort | The port on which the database is listening. The default is 3308.
dbName | The name of the database. The default is uptime.
dbUsername | The name of the default database user. The default is uptime.
dbPassword | The password for the default database user. The default is uptime.
dbJdbcProperty | Optional property-and-value pairs to append to the JDBC database URL. Note that only MySQL and Microsoft SQL Server support URL properties, so this setting does nothing if you are using Oracle.
Health |
datastoreHealthCheck.checkInterval | When this parameter is enabled with a non-zero value, up.time performs a database health check. The value provided is the frequency of the check, in seconds. The default is 5.
datastoreHealthCheck.timeLimit | When the health check time limit has been reached (the value unit is seconds, and the default is 300), up.time considers the database down. The Data Collector service is stopped, and administrators who are members of the SysAdmin user group are sent an email.
Performance |
connectionPoolMaximum | The maximum number of connections that are allowed to the DataStore. Setting this option to a lower number will help increase the performance of up.time.
connectionPoolMaxIdleTime | (c3p0 library) Sets the amount of time a connection can be idle before it is closed. This parameter should only be modified with the assistance of uptime software Customer Support.
connectionPoolNumHelperThreads | (c3p0 library) Sets the number of helper threads that can improve the performance of slow JDBC operations. This parameter should only be modified with the assistance of uptime software Customer Support.
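Taken together, the connection and health parameters above appear in uptime.conf as plain key-value pairs. The following sketch uses placeholder credentials, and the dbType value shown is an assumption (it should match your supported database type):

```
# DataStore connection settings (placeholder values)
dbType=mysql
dbHostname=localhost
dbPort=3308
dbName=uptime
dbUsername=uptime
dbPassword=uptime

# Health check: test every 5 seconds; consider the database
# down after 300 seconds
datastoreHealthCheck.checkInterval=5
datastoreHealthCheck.timeLimit=300
```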
The up.time DataStore is first linked to a database during the installation process, and contains the important historical performance data collected since then. Linking the DataStore to a new database will result in lost data unless you properly migrate your data to the new database. As such, changing the DataStore’s database should be done only after careful consideration and planning.
In cases where you would like to migrate the database (e.g., from the default up.time MySQL implementation to Oracle) or move the DataStore to a different system from the Monitoring Station, you will modify the aforementioned database values in the uptime.conf file. Note that the modification of these values is one of a series of steps. Refer to the uptime software Knowledge Base for more information on migrating your DataStore.
Monitoring Stations include a Web server component that drives the user interface. Any Monitoring Station that is accessed by users or administrators requires a URL. The Web address used to access the Monitoring Station is configured through the following uptime.conf parameter:
httpContext = http://<hostname>:<port>
If the up.time interface is being accessed via SSL, the value for this parameter should be stated as https instead of http .
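For example, for a Monitoring Station reachable at a hypothetical host uptime.example.com on port 9999 (both placeholders), the parameter would read:

```
# Standard HTTP access
httpContext = http://uptime.example.com:9999

# If the interface is accessed via SSL
httpContext = https://uptime.example.com:9999
```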
up.time uses a mail server to send alerts and reports to its users. After installing up.time for the first time, the administrator was asked to enter SMTP server information. These initial values can be modified in the Mail Servers configuration panel.
Modifying the SMTP Server Used by up.time
To configure up.time’s mail server, do the following:
A Windows-based Element can retrieve metric data either through the up.time Agent, or via WMI (see Agentless WMI Systems for more information). You can configure details for either method at a global level, in the form of agent connection information or WMI access credentials. Having global details defined simplifies individual Element configuration, and also allows you to switch the data collection method for multiple Windows Elements, at once, as a group.
When a system is part of the up.time inventory, its data collection method is configured to be either agent-based or WMI agentless. This configuration option is set when the system is first added as an Element. If agent and WMI details have been globally defined, when adding the Element, you will be able to select Use the up.time Agent Global Configuration, or use WMI Global Credentials, to skip configuration steps.
Once configured, this data collection method can later be switched from agent-based to agentless, or vice versa. Although this change can be made on a per-Element basis, multiple Elements can also be switched in a single batch; in the latter case, the data collection method must be globally defined.
To configure data collection methods globally, you can provide information for either the up.time Agent, or your organization’s WMI credentials, or both. Note that batches of Elements can only be converted to a particular data collection source when that method has been globally configured in the Global Element Settings panel.
To provide WMI credentials that can be used to switch Windows Elements from agent-based data collection, do the following:
To provide up.time Agent information that can be used to switch Windows Elements from agentless, WMI-based data collection, do the following:
Enter the Agent Port Number, indicating the port the up.time Agents use to communicate with the up.time Monitoring Station.
The port number entered reflects what the up.time Agents are configured to use; this setting does not modify the agent-side configuration.
When you add a network device to up.time , as part of the configuration process, you must provide details about how SNMP has been configured to communicate with and manage other devices on the network. These details describe, among other things, the SNMP protocol being used, and encryption methods.
By default, SNMP-specific settings are entered for each network-type device as it is added to up.time. To facilitate this process, your network’s SNMP settings can be defined globally in the Global Element Settings panel.
The following SNMP settings are used to configure network-related Elements, and can be defined globally.
Setting | Description
---|---|
SNMP Version | The SNMP version the network device and your network are using.
SNMP Port | The port on which network devices have been configured to listen for SNMP messages.
Read Community | A string that acts like a user ID or password, giving you access to the network device instance. Common read communities are “public”, enabling you to retrieve read-only information from the device, and “private”, enabling you to access all information on the device.
Username | The name that is required to connect to the network device.
Authentication Password | The password that is required to connect to the network device.
Authentication Method | This option determines how encrypted information travelling between the network device and up.time will be authenticated.
Privacy Password | The password that will be used to encrypt information travelling between the network device and up.time.
Privacy Type | From the list, select an option that will determine how information travelling between the network device and up.time will be encrypted.
Pingable Node | This specifies whether up.time can contact the network device using the ping utility. There are scenarios in which you might not want the network device to be pingable (e.g., you have a firewall in place). Before enabling this option, you should try to contact the device using the ping utility. If you cannot ping it, ensure this check box is left cleared, then change the default host check for the network device. See Changing Host Checks for more information.
To globally define the SNMP version 2 settings used to communicate with network devices on your network, do the following:
To globally define the SNMP version 3 settings used to communicate with network devices on your network, do the following:
Indicate the Privacy Type used for encryption.
If no password is provided, the authentication method is ignored.
You can set both the authentication and privacy types, only one of them, or neither.
up.time displays a list of recent knowledge base articles in the My Portal panel. This list is fed to the My Portal panel via RSS (Really Simple Syndication, a method for delivering summaries of and links to Web content). Clicking the title of an article opens it in your Web browser.
By default, RSS feeds are drawn directly from the uptime software Support Portal without the use of proxy server information. If your Monitoring Station accesses the Internet through a proxy server, feeds will most likely not be available, and the following message will appear in the My Portal panel:
You can manually configure the settings for RSS feeds through the following parameters (default values, if applicable, are shown):
The URL of the RSS feed.
The host name of the proxy server that the Monitoring Station uses to access the Internet.
The port through which the Monitoring Station communicates with the proxy server.
The user name required to use the proxy server.
The password required to use the proxy server.
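The parameter names for these settings did not survive in the list above; the sketch below assumes the httpProxy* naming referred to later in this document, and all values are placeholders:

```
# Proxy settings used for RSS feeds (parameter names assumed from
# the httpProxy* family; all values are placeholders)
httpProxyHost=proxy.example.com
httpProxyPort=8080
httpProxyUsername=rssuser
httpProxyPassword=secret
```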
Administrators can configure Action Profiles to automatically carry out tasks in the event of an up.time alert. One such task is the initiation of contact with VMware vCenter Orchestrator, and the execution of a workflow. To have access to this functionality, up.time needs to know how to communicate with Orchestrator.
For information about Action Profiles and VMware vCenter Orchestrator, see Action Profiles.
To configure up.time integration with Orchestrator to execute workflows, do the following:
On the up.time tool bar, click Config .
In the Tree panel, click VMware vCenter Orchestrator .
When the Web Application Transaction monitor is recording a user session on an external site, it intercepts URLs by acting as your browser’s proxy. For the monitor to do this, you must replace your organization’s proxy server information with the Web Application Transaction monitor in your browser settings. In order for the monitor itself to access the Internet, you must provide your proxy settings in up.time.
This monitor-specific proxy information is used during transaction recording; during session playback, the proxy normally used by up.time (defined by the httpProxy* settings) is used.
For more information about the Web Application Transaction monitor, see Web Application Transactions.
You can change up.time’s proxy server configuration by manually inputting settings in the up.time Configuration panel, as outlined in Configuring and Managing up.time.
You can configure the proxy server settings used by up.time when running the Web Application Transaction monitor with the following parameters:
The host name of the proxy server that the Web Application Transaction monitor uses to access the Internet during transaction recording.
The port through which the Web Application Transaction monitor communicates with the proxy server during transaction recording.
If you are using a reporting instance (an up.time instance that only generates and serves reports), the remote reporting settings enable you to specify the location of the reporting instance, and the port on which it is listening.
To configure the remote reporting instance used by up.time, do the following:
Note that the modification of these values is one of a series of steps performed to correctly set up a remote reporting instance. See Remote Reporting Instances for more information.
A UI instance is an up.time installation that does not perform any data collection tasks, and is primarily used for real-time monitoring and report generation. When there are many up.time users who do not need to perform full administrative tasks, UI instances can divert traffic from a core Monitoring Station implementation, improving data-collection performance and UI responsiveness.
You can manually configure UI instance settings with the following uptime.conf parameters:
Parameter | Description |
---|---|
uiOnlyInstance | enables the Monitoring Station as a user interface instance |
uiOnlyInstance.monitoringStationHost | the host name or IP address of the up.time Monitoring Station that is performing data collection, and to which this UI instance will connect |
uiOnlyInstance.monitoringStationCommandPort | the port through which the UI instance can communicate with the core data-collecting Monitoring Station; in most cases, this port should be 9996, otherwise the UI instance will not communicate properly with the core Monitoring Station |
To create a UI instance, do the following:

Set the following uptime.conf parameters:

uiOnlyInstance=true
uiOnlyInstance.monitoringStationHost=<hostname>
uiOnlyInstance.monitoringStationCommandPort=9996

Where <hostname> is the hostname or IP address of the core, data-collecting Monitoring Station with which this UI instance will communicate, and the command port is the port through which the UI instance can communicate with the core Monitoring Station.

Unless your core Monitoring Station has been customized, it is configured to use port 9996 to communicate with a UI instance. If you wish to use a different port, you must ensure the port settings match on both the core Monitoring Station and the UI instance.

Make the <installDirectory>/gadgets directory on the Monitoring Station accessible by the UI instance system. How you make the /gadgets directory accessible depends on the Monitoring Station platform; on Windows, for example, use the mklink command to create a symbolic link on the UI instance that points to the /gadgets directory on the core Monitoring Station, as in the following example:

mklink /D "C:\Program Files\uptime software\uptime\gadgets" "\\host\gadgets"
Scrutinizer is a NetFlow analyzer that can be installed to monitor network traffic managed by compatible switches and routers. Scrutinizer can be integrated with up.time as a NetFlow dashboard, and network devices monitored by Scrutinizer can be linked directly to their NetFlow data from each Element's Graphing tab. In order to access Scrutinizer, up.time needs to be pointed at your installation.
You can configure Scrutinizer’s integration with up.time through the following parameters:
Parameter | Description |
---|---|
netflow.enabled | determines whether Scrutinizer is integrated with the Monitoring Station |
netflow.hostname | the host name or IP address of your Scrutinizer installation |
netflow.port | the HTTP port through which Scrutinizer sends and receives communication |
netflow.username | the username required to log in to Scrutinizer |
netflow.password | the password required to log in to Scrutinizer |
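In uptime.conf, a working integration might look like the following; the hostname, port, and credentials are placeholders:

```
# Scrutinizer NetFlow dashboard integration (placeholder values)
netflow.enabled=true
netflow.hostname=scrutinizer.example.com
netflow.port=80
netflow.username=admin
netflow.password=secret
```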
Splunk is a third-party search engine that indexes log files and data from the devices, servers, and applications in your network. Using Splunk, you can quickly analyze your logs to pinpoint problems on a server or in a network, or ensure that you are in compliance with a regulatory mandate or Service Level Agreements. You install Splunk on a server in your datacenter.
When values are provided for the Splunk settings listed below, the Splunk icon will appear in the My Portal panel beside the names of services that are in WARN or CRIT states. When you click the Splunk icon, you will be automatically logged in to your Splunk search page.
You can change your up.time-Splunk integration by manually inputting settings in the up.time Configuration panel, as outlined in Configuring and Managing up.time.
You can enable automatic login to the Splunk search page, or modify an existing configuration through the following parameters:
Parameter | Description |
---|---|
splunk.url | the URL of the server on which your Splunk search page is hosted (e.g., http://webportal:8000 ) |
splunk.username | the username required to log in to your Splunk search page |
splunk.password | the password required to log in to your Splunk search page |
splunk.soapurl | the URL that points to the SOAP management port that Splunk uses to communicate with the splunk daemon (e.g., https://webportal:8089). In the URL, you must include the port on which the Splunk server listens for requests. See the Splunk Admin Manual for more information. |
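A complete Splunk configuration in uptime.conf might therefore look like the following; the host name and credentials are placeholders, and the ports shown are the Splunk examples given above:

```
# Splunk integration (placeholder values)
splunk.url=http://webportal:8000
splunk.username=admin
splunk.password=secret
# SOAP management port used to communicate with the splunk daemon
splunk.soapurl=https://webportal:8089
```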
Depending on the amount of disk space available for the continuously growing DataStore, administrators can set an archive policy that determines how many months' worth of data is retained. Old performance data is automatically archived and removed from the DataStore. This archiving procedure works with all databases that are compatible with up.time.
The existing archive policy can be viewed and modified on the Archive Policy subpanel, which is accessed from the main Config panel. Here, the main archive categories are shown, along with the number of months for which collected data is retained in the DataStore.
Every month, up.time checks the DataStore’s entries; data that is older than the limit set in the archive policy is written to XML files. The XML archives use the following format:
<table_name>_<date>.xml.gz
The archives created reflect the database table structure used to store performance data, as well as the date that the stored data represents:
performance_cpu_2006-09-13.xml.gz
The DataStore is trimmed and the XML files are compressed and stored in the /archives directory.
For example, if you installed up.time in the default location, the path to the archived data will be:
/usr/local/uptime/archives
C:\Program Files\uptime software\uptime\archives
Once backed up, archives can be stored offline. If required, they can be temporarily imported into the DataStore.
The following table lists the statistical categories whose archiving can be configured, along with the corresponding DataStore database table:
Archive Policy Category | Database Table
---|---|
Overall CPU/Memory |
Multi-CPU |
Detailed Process |
Disk Performance |
File System Capacity |
Network |
User Information |
Volume Manager |
Retained Data |
vSphere Performance Data |
vSphere Inventory Updates |
Network Device Performance Data |
To set an archive policy, do the following:
If you need to generate graphs or reports on older data that has already been archived and is no longer in the DataStore, you can import specific archives using the restorearchive command-line utility. The command’s parameters allow you to import archives in the following manner:
To import archived data into the DataStore, do the following:
At the command line, navigate to the up.time /scripts folder. For example, if you installed the Monitoring Station in the default location on a Windows system, navigate to the following folder:

C:\Program Files\uptime software\uptime\scripts\
Run the restorearchive command with one or more of the following options:

-f <filename>: imports the specified archive file
-d <date>: imports the data archived on the given date (in YYYY-MM-DD format)
-D <directory>: the directory in which the archived data is located; used with the -d option
-c <directory>: the directory that contains the uptime.conf file

For example, the following command would import all of the data archived on September 18, 2006, which is located in the default directory for archived data:

restorearchive -d 2006-09-18 -D /usr/local/uptime/archives/ -c /usr/local/uptime
If you have deployed up.time UI instances, ensure you always run command-line scripts such as restorearchive on the core, data-collecting Monitoring Station.
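To import a single archive file rather than a full day of data, the -f option can be used instead of -d; the file name below is illustrative, and this usage assumes -f accepts a path to a specific archive:

```
restorearchive -f /usr/local/uptime/archives/performance_cpu_2006-09-13.xml.gz -c /usr/local/uptime
```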
In cases where you need to perform a wholesale backup of the existing DataStore (e.g., migrating your DataStore to another database), up.time includes two command-line utilities:
fulldatabasedump: creates a compressed XML file of the contents of your DataStore
fulldatabaseimport: imports the archived data back into your DataStore

To archive your DataStore, do the following:
At the command line, navigate to the up.time /scripts folder. For example, if you installed the Monitoring Station in the default location on a Windows system, navigate to the following folder:

C:\Program Files\uptime software\uptime\scripts\
Run the following command:
fulldatabasedump
If you have deployed up.time UI instances, ensure you always run command-line scripts such as fulldatabasedump on the core, data-collecting Monitoring Station.
Depending on the size of your DataStore, this process can take anywhere from several minutes to several hours.
The utility creates the file uptimedump_YYYY-MM-DD.xml.gz (e.g., uptimedump_2007-01-02.xml.gz). This file is saved in up.time's root installation directory.
To restore your DataStore, do the following:
Run the resetdb utility with the really and nodata options to delete, then recreate, the database structure used by up.time, by running one of the following commands:

/usr/local/uptime/resetdb --nodata really
C:\Program Files\uptime software\uptime\resetdb --nodata really
If you have set up UI instances of up.time, ensure you always run command-line scripts such as resetdb on the core, data-collecting Monitoring Station.
Run the following command:
fulldatabaseimport path/<fileToImport>.xml.gz
Where path/<fileToImport>.xml.gz is the path to, and file name of, the archived contents of your DataStore. For example, to import an archive that is located in up.time’s root installation directory, you would enter the following:
fulldatabaseimport uptimedump_2007-01-02.xml.gz
up.time's logs can assist you with diagnostic steps that you may need to perform should you encounter problems. Problem reports can be generated for up.time Customer Support if further analysis is required.
All up.time logs are written to the /logs directory, and problem reports to the /GUI directory, both of which are found in the up.time installation directory:

/usr/local/uptime/
C:\Program Files\uptime software\uptime
The following logs are available for troubleshooting. Depending on the type of investigation, output from multiple logs can be correlated.
Log Name | Description and Uses | uptime.conf parameter and values |
---|---|---|---|
uptime.log | This is the base up.time log. System events are automatically recorded to these weekly logs. You can determine the type of system information up.time writes to the log (ranging from verbose, to informational, to critical errors) by setting the logging level. | |
uptime_diagnostics.log | This log is similar to the base uptime.log, but is used for more detailed diagnostic output. | |
uptime_exceptions.log | All DEBUG-level Java runtime exceptions triggered by up.time actions. Full stack traces are channeled to this log to lighten and accompany the core uptime.log and uptime_diagnostics.log files. Use the context marker in the core log to find the exception in this log. | N/A |
uptime_console.log | All Java-related command-line feedback based on up.time activity is routed to this log, providing extra information that may not be captured in the standard up.time log. | N/A | |
audit.log | up.time can record changes to the application’s configuration in an audit log, which is essentially a record of which user performed which action, and when. There are many uses for the audit log. For example, you can use it to track changes to your up.time environment for compliance with your security or local policies. You can also use the audit log to debug problems that may have been introduced into your up.time installation by a specific configuration change; the audit log enables you to determine who made the change and when it took effect. | |
uptime_access.log | A summary of which up.time access-related actions, mainly database queries, were initiated by which service or user, and the execution time. This database-focused log can be used in conjunction with the more user-focused audit log. | N/A |
thirdparty.log | An aggregation of warnings and errors logged by third-party libraries that up.time uses, such as the iReasoning library for SNMP monitoring. Correlating these with the other logs may help with investigation. | N/A |
| When SQL logging has been enabled with the assistance of uptime software Customer Support, these logs show all SQL queries, with and without execution time, respectively. Queries in uptime_sql.log are listed before execution, which can be compared with the second log to determine conflicts and deadlocks. | contact Customer Support |
When you encounter a problem with up.time, Customer Support needs a specific set of information to diagnose and fix the problem. up.time can automatically collect this information and compress it in an archive which you can send to Customer Support.
The archive contains the following:

hs_err_pid error files

The archive is saved to the GUI/problemreports directory on the Monitoring Station and has a file name with the following format:

prYYYYMMDD-HHMMSS.zip
Where YYYYMMDD is the date on which the report was generated (e.g., 20101224), and HHMMSS is the time at which the report was generated (e.g., 202306).

To generate a problem report, do the following:
As part of report generation, up.time can run the dbchecker script with the default values on your DataStore. This integrity test allows you to ensure there are no database issues that are part of, or are at the root of, the problem. Disable this check box to improve generation performance by skipping the database check.

In some cases, you can make measurement adjustments to up.time's default values. Changes can be made to the following:
By default, the number of Java threads allocated to service and performance monitors is 100. This can be modified with the following uptime.conf parameter:

serviceThreads=100
By default, the JVM's heap memory is set to a maximum of 1 GB. If your monitoring deployment has many service monitors running or reports to generate, you can increase the amount of Java heap memory (e.g., to 1.5 GB) to improve performance.
When increasing the Java heap size, ensure your Monitoring Station resources can support the new setting. If the OS does not have the desired amount of memory available exclusively for up.time, the up.time Core service may become unstable and crash, despite starting up successfully.
The amount of memory allocated to the JVM can be adjusted by modifying one of the following parameters, depending on your Monitoring Station platform:
On Linux, edit the <uptimeInstallDir>/uptime.jcnf file and modify the following:

-Xmx1G

On Windows, edit the <uptimeInstallDir>\UptimeDataCollector.ini file and modify the following, which relates to the Java -Xmx option:

vm.heapsize.preferred=1024m
Note that the default heap size is measured in gigabytes in the Linux configuration file, and megabytes in the Windows configuration file.
The Global Scan threshold settings determine when a cell on the Global Scan dashboard changes state to reflect a host’s status change: green represents normal status, yellow represents Warning status, and red represents Critical status.
The Resource Scan threshold settings determine the size of the gauge ranges on the Resource Scan view: green represents normal status, yellow represents Warning status, and red represents Critical status.
You can change the thresholds used to determine status by manually inputting settings in the up.time Configuration panel, as outlined in Configuring and Managing up.time.
You can modify the Global Scan threshold settings through the following parameters (default values are shown):
globalscan.cpu.warn=70 | Warning-level status is reported when CPU usage is at 70% or greater |
globalscan.cpu.crit=90 | Critical-level status is reported when CPU usage is at 90% or greater |
globalscan.diskbusy.warn=70 | Warning-level status is reported when a disk on the host is busy for 70% or more of a five-minute time frame |
globalscan.diskbusy.crit=90 | Critical-level status is reported when a disk on the host is busy for 90% or more of a five-minute time frame |
globalscan.diskfull.warn=70 | Warning-level status is reported when 70% or more of the disk space on the host is used |
globalscan.diskfull.crit=90 | Critical-level status is reported when 90% or more of the disk space on the host is used |
globalscan.swap.warn=70 | Warning-level status is reported when 70% or more of the swap space on a disk is in use |
globalscan.swap.crit=90 | Critical-level status is reported when 90% or more of the swap space on a disk is in use |
Changes to Global Scan thresholds are not retroactively applied to all Elements; only Elements added after threshold changes will reflect those changes.
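As an illustration, the following uptime.conf fragment tightens the CPU thresholds from their defaults; the values are arbitrary examples:

```
# Report Warning at 60% CPU usage and Critical at 80%
globalscan.cpu.warn=60
globalscan.cpu.crit=80
```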
You can modify the Resource Scan threshold settings through the following parameters (default values are shown):
resourcescan.cpu.warn=70 | the Warning-level range in the CPU Usage gauge begins at this value (70%), and ends at the Critical-level range |
resourcescan.cpu.crit=90 | the Critical-level range in the CPU Usage gauge is between this value (90%) and 100% |
resourcescan.memory.warn=70 | the Warning-level range in the Memory Usage gauge begins at this value (70%), and ends at the Critical-level range |
resourcescan.memory.crit=90 | the Critical-level range in the Memory Usage gauge is between this value (90%) and 100% |
resourcescan.diskbusy.warn=70 | the Warning-level range in the Disk Busy gauge begins at this value (70%), and ends at the Critical-level range |
| the Critical-level range in the Disk Busy gauge is between this value (90%) and 100% |
| the Warning-level range in the Disk Capacity gauge begins at this value (70%), and ends at the Critical-level range |
| the Critical-level range in the Disk Capacity gauge is between this value (90%) and 100% |
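For example, to widen the Warning range on the CPU and Memory gauges, you could set the following in uptime.conf; the values are arbitrary examples:

```
# Warning ranges start earlier; Critical ranges unchanged
resourcescan.cpu.warn=60
resourcescan.cpu.crit=90
resourcescan.memory.warn=60
resourcescan.memory.crit=90
```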
The Platform Performance Gatherer is a core performance monitor that resides on all agent-based Elements.
By default, the Platform Performance Gatherer checks the host Elements’ performance levels every 300 seconds. You can change the interval by manually inputting settings in the up.time Configuration panel, as outlined in Configuring and Managing up.time.
You can modify the Platform Performance Gatherer check interval through the following up.time Configuration parameter (the default value is shown):
performanceCheckInterval=300
A change to the Platform Performance Gatherer check interval is not retroactively applied to all Elements; only Elements added after an interval change will reflect that change. |
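For example, to have the Platform Performance Gatherer check host performance every 15 minutes instead of the default 5, you would set:

performanceCheckInterval=900

The value is expressed in seconds; as noted above, only Elements added after the change will use the new interval.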
When an up.time user generates a report, that report is stored in the /GUI/reportcache directory; when a scheduled report is automatically generated and published, it is stored in the /GUI/published directory. Both of these directories are found in the up.time installation directory:
Solaris and Linux: /usr/local/uptime/
Windows: C:\Program Files\uptime software\uptime
By default, generated reports are cached on the Monitoring Station for 30 days, and published reports are likewise stored on the local Monitoring Station file system. Both options can be modified. In the latter case, automatically publishing reports to a publicly accessible directory on the network is an ideal way to make them available to non-IT staff. See Saving Reports to the File System for more information.
You can change a report’s expiry time limit by manually inputting settings in the up.time Configuration panel, as outlined in Configuring and Managing up.time.
Change the expiry limit through the following parameter (the default value is shown):
reportCacheExpiryDays=30
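For example, to keep generated reports in the cache for 90 days instead of 30, you would set:

reportCacheExpiryDays=90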
The published report location can be modified with the following uptime.conf parameter:
publishedReportRoot=<location>
If the intended published report directory is on a system other than the Monitoring Station, the provided location should be a full network path to the system in addition to the directory path on that system.
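For example, published reports could be redirected either to another local directory or to a network share; both paths below are hypothetical and should be replaced with locations valid in your environment:

publishedReportRoot=/opt/reports/published
publishedReportRoot=\\fileserver\shared\uptime-reports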
The following options can be used to expedite report-generation time (at the expense of system resources).
Note that the default number is optimal in most cases; increasing it may improve performance, but the law of diminishing returns applies, as too many concurrent threads can tax the PDF-generation process overall.
You can manually change the number of concurrently generated images in the up.time Configuration panel with the following parameter:
reporting.prefetch.images.threads=10
up.time limits the number of reports that can be generated at the same time (the default is 2). This is controlled by the following uptime.conf parameter:
maximumPdfReports=2
The report-generation process uses a significant amount of memory. Note that increasing this parameter's value beyond what the Monitoring Station or reporting instance can handle may result in out-of-memory errors.
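For example, to allow three reports to be generated concurrently, you would set the following (advisable only if the Monitoring Station has memory to spare):

maximumPdfReports=3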
Some configuration options affect the Monitoring Station interface. These can be modified by manually inputting settings in the up.time Configuration panel, as outlined in Configuring and Managing up.time.
When services reach a warning or critical state, administrators can flag an alert as “acknowledged,” which prevents subsequent alerts from being broadcast, giving them time to investigate the issue. See Acknowledging Alerts for more information.
Service status alert acknowledgements can be reported in the status tables on the Global Scan dashboard. By default, status alert acknowledgement counts are not shown; if enabled, a new column (labelled ACK) appears in the Service Status section of Global Scan. When the current status of a monitor is acknowledged, it appears in the ACK column instead of in the WARN or CRIT column.
You can enable or disable status acknowledgement (i.e., add or remove the ACK column from the status tables) through the following parameter (the default value is shown):
acknowledgedSeparate=false
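For example, to display the ACK column in the Global Scan status tables, you would set:

acknowledgedSeparate=true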
When performance and availability graphs are generated, the Graph Editor is used to manipulate the appearance of graphed data (see Using the Graph Editor). Transformations from a three-dimensional perspective are possible if the user account permits it (see Adding Users), and the user is connecting to the Monitoring Station using Internet Explorer.
This 3D presentation option can be disabled outright. You can determine whether ActiveX graphs are displayed in 3D for users with Internet Explorer through the following parameter (the default value is shown):
default3DGraphs=true
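For example, to disable the 3D presentation option for all Internet Explorer users, you would set:

default3DGraphs=false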
Custom dashboards can be added to My Portal to display custom content that is relevant to the particular user who is currently logged in. Up to 50 dashboards can be added, each of which is accessed through, and viewed in, its own tab at the top of My Portal.
A custom dashboard tab is configured by pointing up.time to a custom Web page, and indicating which User Group will be able to view it. You can enable and configure the first dashboard through the following parameters:
myportal.custom.tab1.enabled=true myportal.custom.tab1.name=<DashboardNameOnTab> myportal.custom.tab1.URL=<URLtoCustomPage> myportal.custom.tab1.usergroups=<UserGroupName> |
Values for the first three parameters are required. If no name is specified for the User Group parameter (or, if no User Groups have been defined), the custom dashboard will be visible to all up.time users. Thus, a User Group parameter is only required if you want to restrict or refine user access to a particular custom dashboard.
To create additional tabs, add the same set of parameters, but increment the tab count:
myportal.custom.tab2.enabled=true myportal.custom.tab2.name=<DashboardNameOnTab> myportal.custom.tab2.URL=<URLtoCustomPage> |
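As a filled-in illustration, a second dashboard tab named "NOC View" that points to a hypothetical intranet page and is restricted to a hypothetical NOC User Group might look like this (all values below are examples, not defaults):

myportal.custom.tab2.enabled=true
myportal.custom.tab2.name=NOC View
myportal.custom.tab2.URL=http://intranet.example.com/noc.html
myportal.custom.tab2.usergroups=NOC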
If your up.time package did not come with a license key, then either contact your sales representative to request a key or send an email to [email protected]. You will need the host ID for the system so that a permanent license key can be generated. The host ID is displayed in the License Information subpanel, and is similar to the following:
001110bf101d
You do not need the host ID if you are evaluating up.time. The demo licenses expire after predetermined amounts of time and can run on any system. |
To install or update a license, do the following:
In the License Notification section of the License Information page, you can select the Notification Group that receives alerts should there be any licensing errors related to syncing with VMware vSphere.
For more information, see Managing vSync.