In some cases, you can make measurement adjustments to Uptime Infrastructure Monitor's default values. Changes can be made to the following:
By default, the number of Java threads allocated to service and performance monitors is 100. This can be modified with the following uptime.conf parameter:

serviceThreads=100
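A minimal way to apply this change from a shell might look like the following. The example runs against a scratch copy of the file, since the real path (<uptimeInstallDir>/uptime.conf) varies by install, and the value 150 is only an illustration:

```shell
# Sketch only: the real file lives at <uptimeInstallDir>/uptime.conf.
# We demo against a scratch copy so the commands can be run safely.
UPTIME_CONF="$(mktemp)"
echo 'serviceThreads=100' > "$UPTIME_CONF"   # simulate the default

# Raise the monitor thread pool from 100 to 150 (example value):
sed -i 's/^serviceThreads=.*/serviceThreads=150/' "$UPTIME_CONF"
grep '^serviceThreads=' "$UPTIME_CONF"        # -> serviceThreads=150
```

Restart the Uptime services after editing uptime.conf so the new value takes effect.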
By default, the JVM's heap memory is set to a maximum of 1 GB. If your monitoring deployment has a lot of service monitors running or reports to generate, you can increase the amount of Java heap memory (for example, to 1.5 GB) to improve performance.
When increasing the Java heap size, ensure your Monitoring Station resources can support the new setting. If the OS does not have the desired amount of memory available exclusively for Uptime Infrastructure Monitor, the Uptime Core service may become unstable and crash, despite starting up successfully.
The amount of memory allocated to the JVM can be adjusted by modifying one of the following parameters, depending on your Monitoring Station platform:

On Linux, edit the <uptimeInstallDir>/uptime.jcnf file and modify the following:

-Xmx1G

On Windows, edit the <uptimeInstallDir>\UptimeDataCollector.ini file and modify the following, which relates to the Java -Xmx option:

vm.heapsize.preferred=1024m
Note that the default heap size is measured in gigabytes in the Linux configuration file, and megabytes in the Windows configuration file.
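Because the two files use different units, the same target heap is written differently on each platform. A quick conversion sketch for the 1.5 GB example mentioned above:

```shell
# The same 1.5 GB heap is written with a -Xmx unit suffix on Linux
# (uptime.jcnf) and in megabytes on Windows (UptimeDataCollector.ini).
HEAP_GB=1.5
HEAP_MB=$(awk -v gb="$HEAP_GB" 'BEGIN { printf "%d", gb * 1024 }')
echo "Linux:   -Xmx${HEAP_MB}m"                     # -Xmx1536m
echo "Windows: vm.heapsize.preferred=${HEAP_MB}m"   # 1536m
```

Keeping the two representations in sync avoids accidentally configuring a different heap when migrating a Monitoring Station between platforms.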
The Global Scan threshold settings determine when a cell on the Global Scan dashboard changes state to reflect a host's status change: green represents normal status, yellow represents Warning status, and red represents Critical status.
The Resource Scan threshold settings determine the size of the gauge ranges on the Resource Scan view: green represents normal status, yellow represents Warning status, and red represents Critical status.
You can change the thresholds used to determine status by manually inputting settings in the Uptime Configuration panel, as outlined in Modifying Uptime Config Panel Settings.
You can modify the Global Scan threshold settings through the following parameters (default values are shown):
globalscan.cpu.warn=70      | Warning-level status is reported when CPU usage is at 70% or greater
globalscan.cpu.crit=90      | Critical-level status is reported when CPU usage is at 90% or greater
globalscan.diskbusy.warn=70 | Warning-level status is reported when a disk on the host is busy for 70% or more of a five-minute time frame
globalscan.diskbusy.crit=90 | Critical-level status is reported when a disk on the host is busy for 90% or more of a five-minute time frame
globalscan.diskfull.warn=70 | Warning-level status is reported when 70% or more of the disk space on the host is used
globalscan.diskfull.crit=90 | Critical-level status is reported when 90% or more of the disk space on the host is used
globalscan.swap.warn=70     | Warning-level status is reported when 70% or more of the swap space on a disk is in use
globalscan.swap.crit=90     | Critical-level status is reported when 90% or more of the swap space on a disk is in use
Changes to Global Scan thresholds are not retroactively applied to all Elements; only Elements added after threshold changes will reflect those changes.
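Each warn/crit pair maps a measured percentage to a dashboard state. The following is a sketch of that mapping logic (an illustration, not Uptime's actual code): a value at or above the crit threshold is Critical, at or above warn is Warning, and anything lower is normal.

```shell
# Illustrative warn/crit classification, using the documented defaults.
status() {
    # usage: status <value> <warn> <crit>
    if   [ "$1" -ge "$3" ]; then echo "Critical"
    elif [ "$1" -ge "$2" ]; then echo "Warning"
    else                         echo "OK"
    fi
}
status 65 70 90   # prints OK       (below both thresholds)
status 75 70 90   # prints Warning  (at or above warn, below crit)
status 95 70 90   # prints Critical (at or above crit)
```

Note that with the defaults a host sitting exactly at 70% CPU already shows yellow, since the thresholds are "at or greater" comparisons.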
You can modify the Resource Scan threshold settings through the following parameters (default values are shown):
resourcescan.cpu.warn=70          | the Warning-level range in the CPU Usage gauge begins at this value (70%) and ends at the Critical-level range
resourcescan.cpu.crit=90          | the Critical-level range in the CPU Usage gauge is between this value (90%) and 100%
resourcescan.memory.warn=70       | the Warning-level range in the Memory Usage gauge begins at this value (70%) and ends at the Critical-level range
resourcescan.memory.crit=90       | the Critical-level range in the Memory Usage gauge is between this value (90%) and 100%
resourcescan.diskbusy.warn=70     | the Warning-level range in the Disk Busy gauge begins at this value (70%) and ends at the Critical-level range
resourcescan.diskbusy.crit=90     | the Critical-level range in the Disk Busy gauge is between this value (90%) and 100%
resourcescan.diskcapacity.warn=70 | the Warning-level range in the Disk Capacity gauge begins at this value (70%) and ends at the Critical-level range
resourcescan.diskcapacity.crit=90 | the Critical-level range in the Disk Capacity gauge is between this value (90%) and 100%
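The two parameters per gauge carve it into three bands. With the documented defaults the bands look like this (a sketch of the layout, not live settings):

```shell
# Gauge bands implied by a warn/crit pair, using the default values.
warn=70; crit=90
printf 'normal:   0%%-%s%%\n'   "$warn"    # normal:   0%-70%
printf 'Warning:  %s%%-%s%%\n'  "$warn" "$crit"
printf 'Critical: %s%%-100%%\n' "$crit"
```

If you raise warn, keep it below crit, or the Warning band collapses to nothing on the gauge.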
The Platform Performance Gatherer is a core performance monitor that resides on all agent-based Elements.
By default, the Platform Performance Gatherer checks the host Elements' performance levels every 300 seconds. You can change the interval by manually inputting settings in the Uptime Configuration panel, as outlined in Modifying Uptime Config Panel Settings.
You can modify the Platform Performance Gatherer check interval through the following Uptime Configuration parameter (the default value is shown):

performanceCheckInterval=300
A change to the Platform Performance Gatherer check interval is not retroactively applied to all Elements; only Elements added after an interval change will reflect that change.
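When tuning the interval, it can help to translate it into sampling frequency, since a shorter interval means fresher data but proportionally more agent traffic. A quick back-of-the-envelope check:

```shell
# At the default performanceCheckInterval of 300 seconds, each
# agent-based Element is sampled 12 times per hour; halving the
# interval doubles the number of checks (and the agent load).
interval=300
per_hour=$(( 3600 / interval ))
echo "$per_hour checks per hour at ${interval}s"
```

Weigh this against the number of monitored Elements before lowering the interval across a large deployment.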