up.time includes user-definable parameters that can control some aspects of its behavior including the following:
From a configuration perspective, there are two types of parameters:
Configuration parameters that are not directly tied to the up.time Core service (and therefore do not require a restart of it) can be modified directly in the up.time GUI:
Note - Only the variables whose default values have been modified appear in up.time Configuration.
Configuration parameters that are directly tied to the up.time Core service are found in the uptime.conf file. uptime.conf is a text file that you can modify in any text editor, and can be found in the root up.time installation directory.
In addition to the up.time database, uptime.conf parameters affect various aspects of up.time's behavior.
Note - Not all of the settings listed in this section will necessarily be found in your particular uptime.conf file.
In addition to the Web interface, the up.time Monitoring Station consists of the following services:
These services run in the background and start automatically after the operating system on the server hosting up.time starts. However, system administrators may need to stop the up.time services - for example, before making configuration changes to the uptime.conf file, performing an upgrade, or archiving the DataStore.
To stop the up.time services in Windows, do the following:
To stop the up.time services on Solaris or Linux, do the following:
To restart the up.time services in Windows, do the following:
To restart the up.time services on Solaris or Linux, do the following:
Some of the Monitoring Station’s features require integration with other elements that make up your infrastructure. In some cases configuration is mandatory (e.g., an SMTP server will need to have been set at the time of installation), while in others it is required only when particular up.time features are used (e.g., using the Web Application Transaction monitor requires you to provide up.time with your proxy server settings). The following sections outline how to configure up.time to communicate with servers and databases.
The database settings determine how up.time communicates with the DataStore. The following are the database settings in the uptime.conf file.
The type of database that is being used to store data from up.time. The default value is mysql. You can also specify mssql or oracle to use SQL Server or Oracle, respectively.
By default, up.time uses a JDBC (Java Database Connectivity) driver, and the driver used to connect to the DataStore corresponds to the database selected:
- com.mysql.jdbc.Driver for MySQL
- net.sourceforge.jtds.jdbc.Driver for Microsoft SQL Server
- oracle.jdbc.OracleDriver for Oracle

The name of the system on which the database is running. The default is localhost.
The port on which the database is listening. The default is 3308.
The name of the database. The default is uptime.
The name of the default database user, which is uptime.
The password for the default database user, which is uptime.
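Taken together, a minimal DataStore configuration block in uptime.conf might look like the following sketch. The parameter names shown (dbType, dbHostname, and so on) are assumptions based on up.time's db* naming convention; verify them against your own uptime.conf file:

```
dbType=mysql
dbHostname=localhost
dbPort=3308
dbName=uptime
dbUsername=uptime
dbPassword=uptime
```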
Optional property-and-value pairs to append to the JDBC database URL. Note that only MySQL and Microsoft SQL Server support URL properties, so this setting does nothing if you are using Oracle. The value of the dbJdbcProperty parameter in uptime.conf should be exactly the string that would be manually appended to the URL. The exact format depends on the database type. Consider the following examples:
- dbJdbcProperty=instance=sqlserver;ssl=request for Microsoft SQL Server
- dbJdbcProperty=instance=mysql&useSSL=true for MySQL

The maximum number of connections that are allowed to the DataStore. Setting this option to a lower number can help increase the performance of up.time.
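How such a property string ends up in the connection URL can be sketched in a few lines of Python. The base URL formats below are illustrative assumptions, not necessarily the exact URLs up.time generates:

```python
def build_jdbc_url(db_type, host, port, name, jdbc_property=""):
    """Sketch of how a dbJdbcProperty value is appended to a JDBC URL.

    The base URL formats are illustrative; up.time's actual URLs may differ.
    """
    if db_type == "mysql":
        base = f"jdbc:mysql://{host}:{port}/{name}"
        # MySQL URL properties are appended as a ?-prefixed query string.
        return f"{base}?{jdbc_property}" if jdbc_property else base
    if db_type == "mssql":
        base = f"jdbc:jtds:sqlserver://{host}:{port}/{name}"
        # SQL Server (jTDS) properties are appended as ;-separated pairs.
        return f"{base};{jdbc_property}" if jdbc_property else base
    # Oracle does not support URL properties, so any value is ignored.
    return f"jdbc:oracle:thin:@{host}:{port}:{name}"
```

For example, with the MySQL defaults above, `build_jdbc_url("mysql", "localhost", 3308, "uptime", "useSSL=true")` produces `jdbc:mysql://localhost:3308/uptime?useSSL=true`.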
(c3p0 library) Sets the amount of time a connection can be idle before it is closed. This parameter should only be modified with the assistance of uptime software Customer Support.
(c3p0 library) Sets the number of helper threads that can improve the performance of slow JDBC operations. This parameter should only be modified with the assistance of uptime software Customer Support.
The up.time DataStore is first linked to a database during the installation process, and contains the important historical performance data that has been collected since then. Linking the DataStore to a new database will result in lost data unless you properly migrate your data to the new database. As such, changing the DataStore's database should be done only after some consideration and planning.
In cases where you would like to migrate the database (e.g., from the default up.time MySQL implementation to Oracle) or move the DataStore to a different system from the Monitoring Station, you will modify the aforementioned database values in the uptime.conf file. Note that the modification of these values is one of a series of steps. Refer to the Knowledge Base for more information on migrating your DataStore.
Monitoring Stations include a Web server component that drives the user interface. Any Monitoring Station that is accessed by users or administrators requires a URL. The Web address used to access the Monitoring Station is configured through the following uptime.conf parameter:
httpContext = http://<hostname>:<port>
If the up.time interface is being accessed via SSL, the value for this parameter should be stated as https instead of http .
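For example, a Monitoring Station accessed over SSL might use the following setting (the hostname and port are placeholders for illustration):

```
httpContext = https://uptime.example.com:9999
```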
up.time uses a mail server to send alerts and reports to its users. When up.time is installed for the first time, the administrator is asked to enter SMTP server information. These initial values can be modified in the Mail Servers configuration panel.
Modifying the SMTP Server Used by up.time
To configure up.time ’s mail server, do the following:
A Windows-based Element can retrieve metric data either through the up.time Agent, or via WMI (see Agentless WMI Systems for more information). You can configure details for either method at a global level, in the form of agent connection information or WMI access credentials. Having global details defined simplifies individual Element configuration, and also allows you to switch the data collection method for multiple Windows Elements, at once, as a group.
When a system is part of the up.time inventory, its data collection method is configured to be either Agent-based or WMI Agentless. This configuration option is set when the system is first added as an Element. If agent and WMI details have been globally defined, when adding the Element, you can select Use the up.time Agent Global Configuration or Use WMI Global Credentials to skip configuration steps.
Once configured, this data collection method can later be switched from an agent-based, to agentless method, or vice versa. Although this change can be made on a per-Element basis, multiple Elements can also be switched in a single batch. In the latter case, the data collection method must be globally defined.
To configure data collection methods globally, you can provide information for either the up.time Agent, or your organization’s WMI credentials, or both. Note that batches of Elements can only be converted to a particular data collection source when that method has been globally configured in the Global Element Settings panel.
To provide WMI credentials that can be used to switch Windows Elements from agent-based data collection:
To provide up.time Agent information that can be used to switch Windows Elements from agentless, WMI-based data collection, do the following:
When you add a network device to up.time , as part of the configuration process, you must provide details about how SNMP has been configured to communicate with and manage other devices on the network. These details describe, among other things, the SNMP protocol being used, and encryption methods.
By default, SNMP-specific settings are entered for each network-type device as it is added to up.time. To streamline this process, your network's SNMP settings can be defined globally in the Global Element Settings panel.
The following SNMP settings are used to configure network-related Elements, and can be defined globally.
SNMP Version | The SNMP version the network device and your network are using. | |
SNMP Port | The port on which network devices have been configured to listen for SNMP messages. | |
Read Community | A string that acts like a user ID or password, giving you access to the network device instance. Common read communities are “public”, enabling you to retrieve read-only information from the device, and “private”, enabling you to access all information on the device. | |
Username | The name that is required to connect to the network device. | |
Authentication Password | The password that is required to connect to the network device. | |
Authentication Method | This option determines how encrypted information travelling between the network device and up.time will be authenticated (e.g., MD5 or SHA). | |
Privacy Password | The password that will be used to encrypt information travelling between the network device and up.time . | |
Privacy Type | From the list, select the option that determines how information travelling between the network device and up.time will be encrypted (e.g., DES or AES). | |
Pingable Node | This specifies whether up.time can contact the node using the ping utility. There are scenarios in which you might not want the node to be pingable (e.g., you have a firewall in place). Before enabling this option, you should try to contact the node using the ping utility. If you cannot ping the node, ensure the check box is left cleared. Then, change the default host check for the node. See Changing Host Checks for more information. | |
Exports Data to Scrutinizer | If Scrutinizer has been integrated with up.time , and is also receiving NetFlow data from the node, use this option. You will then be able to call a Scrutinizer instance directly from the node’s Graphing tab in up.time . |
To globally define the SNMP version 2 settings used to communicate with network devices on your network, do the following:
To globally define the SNMP version 3 settings used to communicate with network devices on your network, do the following:
up.time displays a list of recent knowledge base articles in the My Portal panel. This list is fed to the My Portal panel via RSS (Really Simple Syndication, a method for delivering summaries of and links to Web content). Clicking the title of an article opens it in your Web browser.
By default, RSS feeds are drawn directly from the uptime software Support Portal without the use of proxy server information. If your Monitoring Station accesses the Internet through a proxy server, feeds will most likely not be available, and the following message will appear in the My Portal panel:
You can manually configure the settings for RSS feeds through the following parameters (default values, if applicable, are shown):
The URL of the RSS feed.
The host name of the proxy server that the Monitoring Station uses to access the Internet.
The port through which the Monitoring Station communicates with the proxy server.
The user name required to use the proxy server.
The password required to use the proxy server.
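A sketch of these settings as they might appear in uptime.conf. The httpProxy* names follow the naming referenced later in this document, while the feed URL parameter name is a placeholder assumption; check your own configuration for the exact names:

```
rssFeedUrl=<RSS feed URL>
httpProxyHost=proxy.example.com
httpProxyPort=8080
httpProxyUsername=<proxy user>
httpProxyPassword=<proxy password>
```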
Administrators can configure Action Profiles to automatically carry out tasks in the event of an up.time alert. One such task is the initiation of contact with VMware vCenter Orchestrator, and the execution of a workflow. To have access to this functionality, up.time needs to know how to communicate with Orchestrator.
For information about Action Profiles and VMware vCenter Orchestrator, see Action Profiles.
To configure up.time integration with Orchestrator to execute workflows, do the following:
On the up.time tool bar, click Config .
In the Tree panel, click VMware vCenter Orchestrator .
When the Web Application Transaction monitor is recording a user session on an external site, it is intercepting URLs by acting as your browser’s proxy. For the monitor to do this, you must replace your organization’s proxy server information with the Web Application Transaction monitor in your browser settings. In order for the monitor to access the Internet, you must provide your proxy settings in up.time .
This monitor-specific proxy information is used during transaction recording; during session playback, the proxy normally used by up.time (defined by the httpProxy* settings) is used.
For more information about the Web Application Transaction monitor, see Web Application Transactions.
You can change up.time's proxy server configuration by manually inputting settings in the up.time Configuration panel, as outlined in Modifying up.time Config Panel Settings.
You can configure the proxy server settings used by up.time when running the Web Application Transaction monitor with the following parameters:
The host name of the proxy server that the Web Application Transaction monitor uses to access the Internet during transaction recording.
The port through which the Web Application Transaction monitor communicates with the proxy server during transaction recording.
If you are using a reporting instance (an up.time instance that only generates and serves reports), the remote reporting settings enable you to specify the location of the reporting instance, and the port on which it is listening.
To configure the remote reporting instance used by up.time , do the following:
Note that the modification of these values is one of a series of steps performed to correctly set up a remote reporting instance. Refer to the Knowledge Base article entitled “Setting up a reporting instance” for more information.
A UI instance is an up.time installation that does not perform any data collection tasks, and is primarily used for real-time monitoring and report generation. UI instances can divert traffic from a standard Monitoring Station implementation, and are helpful when there are many up.time users who do not need to perform full administrative tasks.
You can manually configure UI instance settings with the following uptime.conf parameters:
Determines whether the Monitoring Station functions only as a user interface instance.
The host name or IP address of the up.time Monitoring Station that is performing data collection, and to which this UI instance will connect.
The port through which the UI instance can communicate with the data-collecting Monitoring Station.
A Monitoring Station that is acting as a UI instance must have the same database settings as the data-collecting Monitoring Station. See Database Settings for more information.
Scrutinizer is a NetFlow analyzer that can be installed to monitor network traffic managed by compatible switches and routers. Scrutinizer can be integrated with Global Scan , as well as up.time ’s graph generation for node-type Elements, and other hosts that are also monitored with Scrutinizer.
In order to access Scrutinizer, up.time needs to be pointed to your installation.
You can configure Scrutinizer’s integration with up.time through the following parameters:
Determines whether Scrutinizer is integrated with the Monitoring Station.
The host name or IP address of your Scrutinizer installation.
The HTTP port through which Scrutinizer sends and receives communication.
The user name required to log in to Scrutinizer.
The password required to log in to Scrutinizer.
Splunk is a third-party search engine that indexes log files and data from the devices, servers, and applications in your network. Using Splunk, you can quickly analyze your logs to pinpoint problems on a server or in a network, or ensure that you are in compliance with a regulatory mandate or Service Level Agreements. You install Splunk on a server in your datacenter.
When values are provided for the Splunk settings listed below, the Splunk icon will appear in the My Portal panel beside the names of services that are in WARN or CRIT states. When you click the Splunk icon, you will be automatically logged in to your Splunk search page.
You can change your up.time -Splunk integration by manually inputting settings in the up.time Configuration panel, as outlined in Modifying up.time Config Panel Settings.
You can enable automatic login to the Splunk search page, or modify an existing configuration through the following parameters:
The URL of the server on which your Splunk search page is hosted (e.g., http://webportal:8000 ).
The user name required to log in to your Splunk search page.
The password required to log in to your Splunk search page.
The URL that points to the SOAP management port that Splunk uses to communicate with the splunk daemon (e.g., https://webportal:8089 ).
In the URL, you must include the port on which the Splunk server listens for requests. See the Splunk Admin Manual for more information.
Depending on the amount of disk space available for the continuously growing DataStore, administrators can set an archive policy that determines how many months' worth of data is retained. Older performance data is automatically archived and removed from the DataStore. This archiving procedure works with all databases that are compatible with up.time.
The existing archive policy can be viewed and modified on the Archive Policy subpanel, which is accessed from the main Config panel. Here, the main archive categories are shown, along with the number of months for which collected data is retained in the DataStore.
Every month, up.time checks the DataStore's entries; data that is older than the limit set in the archive policy is written to XML files. The XML archives use the following format:
<table_name>_<date>.xml.gz
The archives created reflect the database table structure used to store performance data, as well as the date that the stored data represents:
performance_cpu_2006-09-13.xml.gz
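The naming scheme can be sketched as follows (illustrative only):

```python
from datetime import date

def archive_filename(table_name, archive_date):
    # Archive names follow <table_name>_<date>.xml.gz, where the date
    # is the day the stored data represents, in YYYY-MM-DD form.
    return f"{table_name}_{archive_date.isoformat()}.xml.gz"
```

For example, `archive_filename("performance_cpu", date(2006, 9, 13))` yields the name shown above.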
The DataStore is trimmed and the XML files are compressed and stored in the /archives directory.
For example, if you installed up.time in the default location on Linux, the path to the archived data is /usr/local/uptime/archives.
Note - Windows Vista users can find the DataStore archive in the Virtual Store instead of the default up.time location (i.e., C:\Users\uptime\AppData\Local\VirtualStore\Program Files\<uptime-install-directory>).
Once backed up, archives can be stored offline. If required, they can be temporarily imported into the DataStore.
The following table lists the statistical categories whose archiving can be configured, along with the corresponding DataStore database table:
Archive Policy Category | Database Table |
---|---
Overall CPU/Memory | performance_cpu |
Multi-CPU | performance_aggregate |
Detailed Process | performance_psinfo |
Disk Performance | performance_disk |
File System Capacity | performance_fscap |
Network | performance_network |
User Information | performance_who |
Volume Manager | performance_vxvol |
Retained Data | erdc_int_data erdc_decimal_data erdc_string_data |
vSphere Performance Data | vmware_perf_aggregate vmware_perf_cluster vmware_perf_datastore_usage vmware_perf_datastore_vm_usage vmware_perf_disk_rate vmware_perf_entitlement vmware_perf_host_cpu vmware_perf_host_disk_io vmware_perf_host_disk_io_adv vmware_perf_host_network vmware_perf_host_power_state vmware_perf_mem vmware_perf_mem_advanced vmware_perf_network_rate vmware_perf_vm_cpu vmware_perf_vm_disk_io vmware_perf_vm_network vmware_perf_vm_power_state vmware_perf_vm_storage_usage vmware_perf_vm_vcpu vmware_perf_watts vsync_update |
vSphere Inventory Updates | virtual_inventory_update vmware_event |
Network Device Performance Data | net_device_perf_ping net_device_perf_port |
To set an archive policy, do the following:
If you need to generate graphs or reports on older data that has already been archived, and is no longer in the DataStore, you can import specific archives using the restorearchive command line utility. The command’s parameters allow you to import archives in the following manner:
To import archived data into the DataStore, do the following:
restorearchive -d 2006-09-18 -D /usr/local/uptime/archives/ -c /usr/local/uptime
In cases where you need to perform a wholesale backup of the existing DataStore (e.g., migrating your DataStore to another database), up.time includes two command line utilities:
Creates a compressed XML file of the contents of your DataStore.
Imports the archived data back into your DataStore.
Both utilities work with all of the databases that up.time supports.
To archive your DataStore, do the following:
fulldatabasedump
To restore your DataStore, do the following:
The following options assist you with diagnostic steps that you may need to perform should you encounter problems with up.time . You have access to two types of logs: system logs and audit logs that track user actions. Additionally, you can generate a problem report for up.time Customer Support if further analysis is required.
System and audit logs are written to the /logs directory, and problem reports are found in the /GUI directory, both of which are found in the up.time installation directory:
Note - Windows Vista users can find the audit log in the Virtual Store instead of the default up.time location (i.e., C:\Users\uptime\AppData\Local\VirtualStore\Program Files\<uptime-install-directory>).
up.time automatically logs system events to the /logs directory. These weekly logs follow the uptime.log.<year>-<week>.log naming format. You can determine the type of system information up.time writes to the log by using one of the following values:
The default setting, DEBUG , essentially logs all system event types. To reduce the number of log entries, you can limit logging to events with a higher level of severity, from INFO to FATAL . Note that each severity level is a subset of higher levels (e.g., setting loggingLevel to WARN means any WARN -, ERROR - or FATAL -level events are written to the log).
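The subset behavior of the severity levels can be sketched as a simple ordering check (Python is used here purely for illustration):

```python
# Severity order used by the loggingLevel parameter; each configured
# level admits itself and every level to its right in this list.
LEVELS = ["DEBUG", "INFO", "WARN", "ERROR", "FATAL"]

def is_logged(event_level, configured_level="DEBUG"):
    # An event is written to the log when its severity is at or above
    # the configured loggingLevel.
    return LEVELS.index(event_level) >= LEVELS.index(configured_level)
```

With loggingLevel set to WARN, for instance, WARN-, ERROR-, and FATAL-level events are logged, while DEBUG- and INFO-level events are dropped.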
Logging is configured through the following uptime.conf parameter:
loggingLevel = DEBUG
up.time can record changes to the application’s configuration in an audit log. The details of the configuration changes are saved in the audit.log file, which is found in the /logs directory.
There are many uses for the audit log. For example, you can use the audit log to track changes to your up.time environment for compliance with your security or local policies. You can also use the audit log to debug problems that may have been introduced into your up.time installation by a specific configuration change; the audit log enables you to determine who made the change and when it took effect.
The following is an example of an audit log entry:
2006-02-23 12:28:20,082 - kdawg: ADDSYSTEM [cfgcheck=true, port=9998, number=1, use-ssl=false, systemType=1, hostname=10.1.1.241, displayName=MailMain, systemSystemGroup=1, serviceGroup=, description=, systemSubtype=1]
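If you need to process audit entries programmatically, the format above can be parsed with a short sketch like the following (a simplification; real entries may contain values this pattern does not anticipate):

```python
import re

def parse_audit_entry(line):
    """Split an audit.log entry of the form
    '<timestamp> - <user>: <ACTION> [key=value, ...]' into its parts."""
    m = re.match(
        r"(?P<ts>[\d\- :,]+) - (?P<user>\w+): (?P<action>\w+) \[(?P<details>.*)\]",
        line,
    )
    if not m:
        return None
    # Turn the comma-separated key=value pairs into a dictionary.
    details = dict(
        pair.split("=", 1) for pair in m.group("details").split(", ") if "=" in pair
    )
    return m.group("ts").strip(), m.group("user"), m.group("action"), details
```

Applied to the example entry above, this returns the timestamp, the user kdawg, the action ADDSYSTEM, and a dictionary of the bracketed configuration values.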
Audit Logging is enabled or disabled, with “ yes ” or “ no ” values, respectively, through the following uptime.conf parameter:
auditEnabled = yes
When you encounter a problem with up.time , Client Care needs specific information to diagnose and fix the problem. up.time can automatically collect this information and compress it in an archive which you can send to Client Care.
The archive contains the following:
The archive is saved to the GUI/problemreports directory on the Monitoring Station and has a file name with the following format:
prYYYYMMDD-HHMMSS.zip
To generate a problem report, do the following:
Problem report created : pr20061017-094927.zip
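The report file name encodes the generation timestamp, which can be sketched as:

```python
from datetime import datetime

def problem_report_name(ts):
    # Problem report archives are named prYYYYMMDD-HHMMSS.zip, using
    # the date and time the report was generated.
    return ts.strftime("pr%Y%m%d-%H%M%S.zip")
```

For example, a report generated on October 17, 2006 at 9:49:27 produces the file name shown above.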
In some cases, you can adjust up.time's default values. Changes can be made to the following:
By default, the number of Java threads allocated to service and performance monitors is 100. This can be modified with the following uptime.conf parameter:

serviceThreads=100
By default, the JVM's heap memory is 1 GB. If your monitoring deployment has many service monitors running or reports to generate, you can increase the amount of Java heap memory (e.g., to 1.5 GB) to improve performance.
When increasing the Java heap size, ensure your Monitoring Station resources can support the new setting. If the OS does not have the desired amount of memory available exclusively for up.time, the up.time Core service may become unstable and crash, despite starting up successfully.
The amount of memory allocated to the JVM can be adjusted by modifying one of the following parameters, depending on your Monitoring Station platform:
On Linux, edit the <uptimeInstallDir>/uptime.lax file and modify the following:

lax.nl.java.option.java.heap.size.max=1000m

On Windows, edit the <uptimeInstallDir>\UptimeDataCollector.ini file and modify the following:

vm.heapsize.preferred=1024m
Note that heap size is measured in megabytes for both parameters.
The Global Scan threshold settings determine when a cell in the Global Scan panel changes state to reflect a host’s status change: green represents normal status, yellow represents Warning status, and red represents Critical.
The Resource Scan threshold settings determine the size of the gauge ranges on the Resource Scan view: green represents normal status, yellow represents Warning status, and red represents Critical status.
You can change the thresholds used to determine status by manually inputting settings in the up.time Configuration panel, as outlined in Modifying up.time Config Panel Settings.
You can modify the Global Scan threshold settings through the following parameters (default values are shown):
globalscan.cpu.warn=70 | Warning-level status is reported when CPU usage is at 70% or greater |
globalscan.cpu.crit=90 | Critical-level status is reported when CPU usage is at 90% or greater |
globalscan.diskbusy.warn=70 | Warning-level status is reported when a disk on the host is busy for 70% or more of a five-minute time frame |
globalscan.diskbusy.crit=90 | Critical-level status is reported when a disk on the host is busy for 90% or more of a five-minute time frame |
globalscan.diskfull.warn=70 | Warning-level status is reported when 70% or more of the disk space on the host is used |
globalscan.diskfull.crit=90 | Critical-level status is reported when 90% or more of the disk space on the host is used |
globalscan.swap.warn=70 | Warning-level status is reported when 70% or more of the swap space on a disk is in use |
globalscan.swap.crit=90 | Critical-level status is reported when 90% or more of the swap space on a disk is in use |
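The threshold logic these parameters drive can be sketched as a simple comparison (illustrative only; the actual Global Scan implementation is not shown in this document):

```python
def globalscan_status(value, warn=70, crit=90):
    # Map a usage percentage to a Global Scan cell status using the
    # globalscan.*.warn / globalscan.*.crit thresholds. Critical takes
    # precedence over Warning when both thresholds are exceeded.
    if value >= crit:
        return "CRIT"
    if value >= warn:
        return "WARN"
    return "OK"
```

With the defaults shown, a host at 95% CPU usage reports Critical, one at 75% reports Warning, and one at 50% reports normal status.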
Changes to Global Scan thresholds are not retroactively applied to all Elements; only Elements added after threshold changes will reflect those changes.
You can modify the Resource Scan threshold settings through the following parameters (default values are shown):
resourcescan.cpu.warn=70 | the Warning-level range in the CPU Usage gauge begins at this value (70%), and ends at the Critical-level range |
resourcescan.cpu.crit=90 | the Critical-level range in the CPU Usage gauge is between this value (90%) and 100% |
resourcescan.memory.warn=70 | the Warning-level range in the Memory Usage gauge begins at this value (70%), and ends at the Critical-level range |
resourcescan.memory.crit=90 | the Critical-level range in the Memory Usage gauge is between this value (90%) and 100% |
resourcescan.diskbusy.warn=70 | the Warning-level range in the Disk Busy gauge begins at this value (70%), and ends at the Critical-level range |
resourcescan.diskbusy.crit=90 | the Critical-level range in the Disk Busy gauge is between this value (90%) and 100% |
| the Warning-level range in the Disk Capacity gauge begins at this value (70%), and ends at the Critical-level range |
| the Critical-level range in the Disk Capacity gauge is between this value (90%) and 100% |
The Platform Performance Gatherer is a core performance monitor that resides on all agent-based Elements.
By default, the Platform Performance Gatherer checks the host Elements’ performance levels every 300 seconds. You can change the interval by manually inputting settings in the up.time Configuration panel, as outlined in Modifying up.time Config Panel Settings.
You can modify the Platform Performance Gatherer check interval through the following up.time Configuration parameter (the default value is shown):
performanceCheckInterval=300
A change to the Platform Performance Gatherer check interval is not retroactively applied to all Elements; only Elements added after an interval change will reflect that change.
When an up.time user generates a report, that report is stored in the /GUI/reportcache directory; when a scheduled report is automatically generated and published, it is stored in the /GUI/published directory. Both of these directory paths are found in the up.time installation directory:
Note - Windows Vista users can find these report directories in the Virtual Store instead of the default up.time location (i.e., C:\Users\uptime\AppData\Local\VirtualStore\Program Files\<uptime-install-directory>).
By default, generated reports are cached on the Monitoring Station for 30 days; the default location for published reports is also on the local Monitoring Station file system. Both options can be modified. In the latter case, automatically publishing reports to a publicly accessible directory on the network is an ideal way to let non-IT staff view them. See Saving Reports to the File System for more information.
You can change a report’s expiry time limit by manually inputting settings in the up.time Configuration panel, as outlined in Modifying up.time Config Panel Settings.
Change the expiry limit through the following parameter (the default value is shown):
reportCacheExpiryDays=30
The published report location can be modified with the following uptime.conf parameter:
publishedReportRoot=<location>
If the intended published report directory is on a system other than the Monitoring Station, the provided location should be a full network path to the system in addition to the directory path on that system.
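For example, a published report location might be configured in either of the following ways (both paths are hypothetical):

```
# Local directory on the Monitoring Station
publishedReportRoot=/var/reports/uptime

# Full network path to a directory on another system
publishedReportRoot=\\fileserver\shared\uptime-reports
```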
Due to the large number of options available for the Resource Usage report, generating an extensive report for a large group of Elements can take several minutes. If exhaustive report generation is necessary but taking too long, you can increase the number of report images (the default being 6) that up.time concurrently generates for this type of report.
Note that the default number is optimal in most cases; increasing the amount may improve performance, but the law of diminishing returns applies, as too many concurrent threads can tax the PDF generation process overall.
The number of concurrently generated report images is configured through the following uptime.conf parameter:
reporting.prefetch.images.threads = 6
Some configuration options affect the Monitoring Station interface. These can be modified by manually inputting settings in the up.time Configuration panel, as outlined in Modifying up.time Config Panel Settings.
When services reach a warning or critical state, administrators can flag an alert as “acknowledged,” which prevents subsequent alerts from being broadcast, giving them time to investigate the issue. See Acknowledging Alerts for more information.
Service status alert acknowledgements can be reported in the status tables on the Global Scan panel. By default, status alert acknowledgement counts are not shown; if enabled, a new column (labelled ACK) appears in the Service Status section of Global Scan. When the current status of a monitor is acknowledged, it appears in the ACK column instead of in the WARN or CRIT column.
You can enable or disable status acknowledgement (i.e., add or remove the ACK column from the status tables) through the following parameter (the default value is shown):
acknowledgedSeparate=false
When performance and availability graphs are generated, the Graph Editor is used to manipulate the appearance of graphed data (see Using the Graph Editor). Transformations from a three-dimensional perspective are possible if the user account permits it (see Adding Users), and the user is connecting to the Monitoring Station using Internet Explorer.
This 3D presentation option can be disabled outright. You can determine whether ActiveX graphs are displayed in 3D for users with Internet Explorer through the following parameter (the default value is shown):
default3DGraphs=true
Custom dashboards can be added to My Portal to display custom content that is relevant to the particular user who is currently logged in. Up to 50 dashboards can be added, each of which is accessed through, and viewed in, its own tab at the top of My Portal .
A custom dashboard tab is configured by pointing up.time to a custom Web page, and indicating which User Group will be able to view it. You can enable and configure the first dashboard through the following parameters:
myportal.custom.tab1.enabled=true
myportal.custom.tab1.name=<DashboardNameOnTab>
myportal.custom.tab1.URL=<URLtoCustomPage>
myportal.custom.tab1.usergroups=<UserGroupName>
Values for the first three parameters are required. If no name is specified for the User Group parameter (or, if no User Groups have been defined), the custom dashboard will be visible to all up.time users. Thus, a User Group parameter is only required if you want to restrict or refine user access to a particular custom dashboard.
To create additional tabs, add the same set of parameters, but increment the tab count:
myportal.custom.tab2.enabled=true
myportal.custom.tab2.name=<DashboardNameOnTab>
myportal.custom.tab2.URL=<URLtoCustomPage>
If your up.time package did not come with a license key, then either contact your sales representative to request a key or send an email to [email protected]. You will need the host ID for the system so that a permanent license key can be generated. The host ID is displayed in the License Information subpanel, and is similar to the following:
001110bf101d
Note - You do not need the host ID if you are evaluating up.time. The demo licenses expire after predetermined amounts of time and can run on any system.
To install or update a license, do the following:
In the License Notification section of the License Information page, you can select the Notification Group that receives alerts should there be any licensing errors related to syncing with VMware vSphere.
For more information, see Managing vSync.