Alert Events Aggregation & Trend Calculation


This document provides an overview of the alert aggregation process that powers the “Global View” and “Active Alerts” pages. It details the business and technical logic behind the “GET /v1/company/alert-events/aggregate” endpoint, explaining how raw alert data is collected, filtered, and transformed into meaningful insights. It also covers the calculation of key performance indicators, including alert counts by severity and source, comparisons over time, and the statistical trend analysis used to determine whether alert volumes are rising, improving, or stable.

Dashboards Overview

Here you can find a quick overview of the Alerting Dashboards, their respective data restrictions and other dev notes.

Recent Alerts (24h)

Pages: Active Alerts, Global View.

Image 1 – Recent Alerts (24h).


The value in each dashboard section (254, 652, 1120; Image 1) is the count of all Active Alerts with the respective severity over the last 24 hours (Image 2). The trend value in each section (-35, +25, +25; Image 1) is the difference between that 24-hour count and the count for the preceding 24-hour window, i.e. the period from 48 to 24 hours ago (Image 3).
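The windowing logic above can be sketched as follows. This is a minimal illustration that assumes alerts arrive as (timestamp, severity) pairs; the function and variable names are chosen for this example only and are not part of the actual implementation.

```python
from datetime import datetime, timedelta, timezone

def recent_alerts_trend(alerts, severity, now=None):
    """Return (current 24 h count, trend) for one severity level.

    The trend is the current 24 h count minus the count for the
    preceding 24 h window (48 to 24 hours ago).
    `alerts` is an iterable of (timestamp, severity) pairs.
    """
    now = now or datetime.now(timezone.utc)
    day = timedelta(hours=24)
    current = sum(1 for ts, sev in alerts
                  if sev == severity and now - day <= ts < now)
    previous = sum(1 for ts, sev in alerts
                   if sev == severity and now - 2 * day <= ts < now - day)
    return current, current - previous
```

With 3 Critical alerts in the last 24 hours and 2 in the preceding window, this would yield a value of 3 and a trend of +1.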


Image 2 – Recent Alerts date range


Image 3 – Recent Alerts difference (trend) calculation


Top tenants with New Alerts in 24 Hrs

Pages: Global View

Image 4 – Top tenants with New Alerts in 24 Hrs dashboard


The data for this dashboard (Image 4) is collected in the same way as for the Recent Alerts (24h) dashboard, i.e. over the last 24-hour timespan (Image 2).

Last 7 Days Timeline

Pages:

  • Active Alerts (“7d Alert Timeline” dashboard (Image 5));
  • Global View (“Last 7 Days” column).

Image 5 – “7d Alert Timeline” dashboard


The data for this dashboard (Image 5) includes the current date, even though the current day is not over and the number of alerts may still be changing. For example, a Review alerts timeline is shown in Image 6. The current date on the example timeline is the 6th of November, so the timeline starts on 31.10 and ends on 06.11, with both dates included.
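The inclusive 7-day window can be sketched as a simple date calculation; this is an illustrative helper, not the actual implementation:

```python
from datetime import date, timedelta

def last_7_days(today):
    """Return the 7 calendar dates shown on the timeline, oldest first,
    ending with the current date (inclusive), even though the current
    day is still in progress."""
    return [today - timedelta(days=i) for i in range(6, -1, -1)]
```

For the example in Image 6, `last_7_days(date(2025, 11, 6))` produces the dates from 31.10 through 06.11.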

Image 6 – Review alerts timeline


Rising/Improving/Stable trend

Pages:

  • Active Alerts (“Total Active Alerts” dashboard);
  • Global View (Critical, Attention, Review columns).

The trend values (Image 7, red underlines) are based on the statistics for the last 7 days (Image 7, purple rectangle). This may seem obvious when you look at the trend values on the Global View page, but the same logic is also applied on the Active Alerts page.

For more information on how the trend values are calculated, read the “Trend calculation process” section of this document.

Image 7 – Total Alerts table



Endpoint Overview

Data Aggregation

The system first gathers alert data and then groups it based on the user's selected view on the dashboard, which is controlled by the groupBy query parameter.

  • Group by Severity: When grouped by severity, the dashboard values represent the total count of alerts for each severity level (Critical, Attention, Review). This gives a high-level overview of the current alert landscape.
  • Group by Source: When grouped by source (e.g., Office 365 Tenant), the values show the total number of alerts per source. This view also provides a breakdown of alert counts by severity within each tenant, helping to identify which tenants are the most problematic.
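The two grouping modes can be sketched as follows. The record shape (dicts with `severity` and `source` keys) and the output shape are illustrative assumptions, not the endpoint's real schema:

```python
from collections import Counter, defaultdict

def aggregate(alerts, group_by):
    """Group alert records the way the `groupBy` query parameter does.

    - "severity": total count per severity level.
    - "source": total per source plus a per-severity breakdown,
      which helps identify the most problematic tenants.
    """
    if group_by == "severity":
        return Counter(a["severity"] for a in alerts)
    if group_by == "source":
        per_source = defaultdict(Counter)
        for a in alerts:
            per_source[a["source"]][a["severity"]] += 1
        return {src: {"total": sum(c.values()), "bySeverity": dict(c)}
                for src, c in per_source.items()}
    raise ValueError("unsupported groupBy value")
```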

Comparison Over Time

To provide context, the system can compare the alert count of the current period against a previous period. This feature is supported only for “groupBy=severity”. To enable this feature, specify both comparisonRangeStart and comparisonRangeEnd.

The business logic for this is straightforward subtraction. The system calculates the difference between the total alerts in the main date range and the total alerts in the comparison range.
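A request enabling the comparison might be built like this. The `groupBy`, `comparisonRangeStart`, and `comparisonRangeEnd` parameter names come from this document; the `rangeStart`/`rangeEnd` values and the ISO-8601 timestamp format are assumptions made for the example:

```python
from urllib.parse import urlencode

# Hypothetical query for "last 24 h vs. the preceding 24 h".
params = {
    "groupBy": "severity",
    "rangeStart": "2025-11-05T00:00:00Z",            # assumed format
    "rangeEnd": "2025-11-06T00:00:00Z",
    "comparisonRangeStart": "2025-11-04T00:00:00Z",  # both comparison
    "comparisonRangeEnd": "2025-11-05T00:00:00Z",    # params required
}
url = "/v1/company/alert-events/aggregate?" + urlencode(params)

# The reported delta is then a straightforward subtraction:
# total alerts in the main range minus total alerts in the comparison range.
def comparison_delta(main_total, comparison_total):
    return main_total - comparison_total
```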

Last N Days Statistic

The nDaysStat parameter is a crucial component for enabling trend analysis on the alerting dashboard. When this parameter is included in a request, it instructs the backend to fetch a historical record of alert counts for the specified number of days leading up to the selected end date (rangeEnd). This daily data serves two primary purposes:

  • First, it provides the raw data points needed to render time-series graphs that visualise alert volume over a period.
  • Second, it serves as the direct input for the calculation behind the nDaysTrend query param (Spearman's Rank Correlation algorithm [1][2]), which analyses the data to determine an overall trend (Rising, Improving, Stable).

In other words, the trend value of the “Critical” (1), “Attention” (2), and “Review” (3) columns is based on the data from the “Last 7 Days” (4) column of the “Total Alerts” table of the “Global View” page (Image 8).

Image 8 – “Total Alerts” table

Trend Analysis

This is the most sophisticated piece of business logic. The system doesn't just show data points on a graph; it analyses them to determine a trend, which is categorised as Rising, Improving, or Stable. This is accomplished using a statistical method called Spearman's Rank Correlation [1][2].

This method is used to measure the relationship between two variables—in this case, time and the number of alerts. It is particularly well-suited for this task because it can detect a consistent upward or downward trend even if the trend is not linear (i.e., it doesn't have to be a straight line on a graph). Furthermore, it is resilient against outliers that can “break” most linear algorithms.

Trend calculation process

  1. Data gathering: the system collects all alert events that occurred in the last N days (as specified with the nDaysStat query param).
  2. Data Integrity Check: Before performing any calculation, the system first checks if the alert data has enough variation. If the number of alerts is almost the same every day (e.g., `[1, 1, 1, 2, 1, 1, 1]`), the trend is automatically classified as Stable. This prevents the system from reporting a Rising or Improving trend based on insignificant fluctuations.
  3. Correlation Calculation: If the data is sufficiently diverse, the system computes a correlation coefficient (a value between -1 and 1) using Spearman's Rank Correlation method.
  4. Interpreting the Result: The coefficient is then interpreted based on a predefined threshold:
    1. A positive coefficient (e.g., `+0.7`) means that as time progresses, the number of alerts tends to increase. If the coefficient is greater than the predefined threshold, the trend is reported as Rising.
    2. A negative coefficient (e.g., `-0.8`) means that as time progresses, the number of alerts tends to decrease. If the coefficient is less than the predefined threshold, the trend is reported as Improving.
    3. If the coefficient's absolute value is below the threshold, it signifies that there is no meaningful correlation between time and the alert count, so the trend is reported as Stable.
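The four steps above can be sketched in plain Python. The threshold (±0.5) and the minimum-variation rule (at least three distinct daily values) are illustrative assumptions, not the production configuration:

```python
def _ranks(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def classify_trend(daily_counts, threshold=0.5, min_distinct=3):
    """Classify daily alert counts as Rising / Improving / Stable using
    Spearman's rank correlation between day index and count."""
    # Step 2: integrity check -- near-constant data is Stable by definition.
    if len(set(daily_counts)) < min_distinct:
        return "Stable"
    # Step 3: Spearman's rho = Pearson correlation of the rank sequences.
    n = len(daily_counts)
    x = _ranks(list(range(n)))  # time ranks: simply 1..n (no ties)
    y = _ranks(daily_counts)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    rho = cov / (sx * sy) if sx and sy else 0.0
    # Step 4: interpret against the threshold.
    if rho > threshold:
        return "Rising"
    if rho < -threshold:
        return "Improving"
    return "Stable"
```

For example, the near-constant series `[1, 1, 1, 2, 1, 1, 1]` from step 2 is classified as Stable before any correlation is computed, while a steadily growing series is classified as Rising.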

Sources

  1. Spearman's Rank-Order Correlation. Available at: https://statistics.laerd.com/statistical-guides/spearmans-rank-order-correlation-statistical-guide.php (accessed 12.11.2025).
  2. Spearman's Rank Correlation. Available at: https://www.geeksforgeeks.org/data-science/spearmans-rank-correlation/ (accessed 12.11.2025).