diff --git a/docs/alerts/monitors/overview.md b/docs/alerts/monitors/overview.md
index 2a9ec2c804..a68af47157 100644
--- a/docs/alerts/monitors/overview.md
+++ b/docs/alerts/monitors/overview.md
@@ -2,32 +2,111 @@
id: overview
title: Monitors Overview
sidebar_label: Overview
-description: Sumo Logic monitors continuously query your logs or metrics and sends notifications when specific events occur, such as critical, warning, and missing data.
+description: Learn how Sumo Logic monitors continuously query your logs or metrics and send notifications when specific events occur, such as critical, warning, and missing data.
+keywords:
+ - monitors
+ - log-monitoring
+ - metric-monitoring
+ - alert-notification
+ - threshold-alert
+ - anomaly-detection
+ - missing-data-alert
+ - monitor-limits
+head:
+ - tagName: script
+ attributes:
+ type: application/ld+json
+ innerHTML: |
+ {
+ "@context": "https://schema.org",
+ "@type": "FAQPage",
+ "mainEntity": [
+ {
+ "@type": "Question",
+ "name": "What is a Sumo Logic monitor?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "A Sumo Logic monitor continuously queries logs or metrics data and sends a notification when a defined condition is met — such as an error count exceeding a threshold, a metric spiking above a baseline, or log data stopping entirely."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What is the difference between a monitor and a scheduled search in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "A monitor evaluates data continuously — from every few seconds to every few minutes — and fires in real time when a condition is breached. A scheduled search runs at a fixed interval such as hourly or daily and sends a report of results. Use monitors for real-time alerting and scheduled searches for periodic reporting."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How many monitors can a Sumo Logic account have?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Enterprise and Trial accounts can have up to 1,000 log monitors and 1,500 metric monitors. Essentials and Professional accounts can have up to 300 log monitors and 500 metric monitors. Free Trial accounts can have up to 50 of each."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What permissions are needed to create a Sumo Logic monitor?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "The Manage Monitors role capability is required to create or edit monitors. The View Monitors capability is required to view them. Permissions can also be set at the folder level."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "When does a Sumo Logic monitor auto-resolve?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "A monitor resolves automatically when the recovery condition is met for the entire duration of the detection window. For example, if a monitor triggered at 1:00 PM with a 15-minute detection window, the earliest it can resolve is 1:15 PM. After one day without new data, the incident is automatically expired and marked resolved."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What are the limitations of Sumo Logic monitors?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Monitors do not support Receipt Time, LogReduce, LogCompare, Save to Index, Save to Lookup, or Search Templates. An aggregate metric monitor can evaluate up to 15,000 time series and a non-aggregate metric monitor up to 3,000. A log monitor query can be up to 15,000 characters. Email notifications support up to 100 recipients."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What happens when a monitor is muted in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "A muted monitor continues to evaluate data and generate alerts, but notifications are suppressed for the duration of the mute. Use muting schedules to silence notifications during planned maintenance without disabling the monitor."
+ }
+ }
+ ]
+ }
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import useBaseUrl from '@docusaurus/useBaseUrl';
-Monitors track your metrics and logs data in real time and send notifications when noteworthy changes happen in your production applications.
+A Sumo Logic monitor continuously queries logs or metrics and sends notifications when noteworthy changes happen in your production applications.
:::note
-Learn how [monitors differ from Scheduled Searches](/docs/alerts/difference-from-scheduled-searches).
+To understand when to use a monitor versus a scheduled search, refer to [Monitors vs. Scheduled Searches](/docs/alerts/difference-from-scheduled-searches/).
:::
-## Prerequisites
+## What permissions are required to use monitors?
-To manage and/or view monitors, you'll need the **Manage** and **View Monitors** [role capabilities](/docs/manage/users-roles/roles/role-capabilities). [Learn more](/docs/alerts/monitors/settings/#monitor-folder-permissions) about controlling permissions at the monitor or folder level.
+The **Manage Monitors** role capability is required to create, edit, or delete monitors. The **View Monitors** capability is required to view them. [Learn more](/docs/alerts/monitors/settings/#monitor-folder-permissions) about controlling permissions at the monitor or folder level.
-## Rules
+## How often does a monitor evaluate data?
-The frequency at which a monitor executes depends on various factors, such as the underlying query, the operators used, and the detection window. This frequency can range from a few seconds to several minutes.
+Evaluation frequency depends on the underlying query, the operators used, and the detection window. This frequency can range from a few seconds to several minutes.
-For example, if the detection window of your alert is 24 hours, it will be evaluated every few minutes. Conversely, if the detection window of the monitor is 15 minutes, it will be evaluated every few seconds.
+The shorter the detection window, the more frequently the monitor runs:
-See [Trigger Type (Logs)](/docs/alerts/monitors/create-monitor/#trigger-type-logs) and [Trigger Type (Metrics)](/docs/alerts/monitors/create-monitor/#trigger-type-metrics) for more information.
+- A **15-minute** detection window evaluates every few seconds.
+- A **24-hour** detection window evaluates every few minutes.
-### Log monitors
+See [Trigger Type (Logs)](/docs/alerts/monitors/create-monitor/#trigger-type-logs) and [Trigger Type (Metrics)](/docs/alerts/monitors/create-monitor/#trigger-type-metrics) for the full evaluation schedule by window size.
+
+## What are the rules specific to log monitors?
* Log monitors use the [role search filter](/docs/manage/users-roles/roles/construct-search-filter-for-role) of their creator.
* Log monitors delay execution by two minutes. This means it won't evaluate data from the current time, but evaluate data from two minutes ago. This ensures that any delays in ingestion are factored in and won't generate false positive or false negative alerts.
@@ -35,7 +114,7 @@ See [Trigger Type (Logs)](/docs/alerts/monitors/create-monitor/#trigger-type-log
* Essentials and Professional plan customers can have up to 300 log monitors.
* Free Trial customers can have up to 50 log monitors.
-#### Auto-resolving notifications
+### How do log monitors auto-resolve?
Log monitors in a triggered state can auto-resolve.
@@ -44,25 +123,25 @@ Log monitors in a triggered state can auto-resolve.
- Non-grouped monitors will trigger again after auto-resolving if there is still no data.
- Grouped monitors will be removed and no longer considered after being auto-resolved, unless data for this group is seen again.
-### Metrics monitors
+## What are the rules specific to metric monitors?
* Metrics monitors delay execution by one minute.
* Enterprise and Trial plan customers can have up to 1,500 Metrics monitors.
* Essentials and Professional plan customers can have up to 500 Metrics monitors.
* Free Trial customers can have up to 50 Metrics monitors.
-## Notifications
+## How do monitor notifications work?
-Notifications are optional and available as an **alert** and **recovery** for each trigger condition you specify, **critical**, **warning**, and **missing**.
+Notifications are optional and available for both **alert** and **recovery** states for each trigger condition you specify: **critical**, **warning**, and **missing**.
-### Alerts
+### How do alerts behave when multiple trigger types fire?
* Monitor evaluation for each trigger type (Critical, Warning or Missing Data) happens independently. Each trigger type's lifecycle is managed separately and doesn't have any impact on other trigger types. So it is possible for a monitor to be in Critical and Warning state at the same time. Monitor goes back to normal when it is not in either of Critical, Warning and Missing Data states.
* When both Critical and Warning conditions are met, two separate alerts and notifications are generated - one for the Critical condition and one for the Warning condition. Auto-resolution, if set up, will work according to the resolution condition for each case.
* Metric monitors have the option to group notifications. When configured, the Monitor will not trigger new notifications until the first one is resolved. The Monitor will only update if the notification type supports auto-resolution. Grouped notifications will resolve when all the time series return to normal.
* Log monitors always group notifications.
-### Recovery
+### How do alert recovery and auto-resolution work?
* Recovery is based on the detection window, which is either the time range or the number of data points of the trigger condition. An alert is recovered (resolved) when the recovery condition is met for the entire duration of the detection window.
* For example, if an alert is triggered at 1:00 PM and the detection window is 15 minutes, the earliest the alert would recover is after 1:15 PM since the entire detection window must pass. This is to ensure there isn't an alert between the triggered and resolved state, especially for metrics that are volatile.
@@ -72,73 +151,92 @@ Notifications are optional and available as an **alert** and **recovery** for ea
* The recovery notification is sent to the same channel where the corresponding Alert notifications were sent. In other words, you cannot have different channels where you receive alert and recovery notifications for a given trigger condition.
* After one day without new data to an incident, the system automatically expires it. The incident is marked as resolved with the resolution set to **Expired**.
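
The recovery timing described above reduces to simple arithmetic. A minimal sketch (the trigger time and window size are illustrative):

```python
from datetime import datetime, timedelta

def earliest_resolution(triggered_at: datetime, detection_window: timedelta) -> datetime:
    """The recovery condition must hold for the entire detection window,
    so the earliest possible resolution is the trigger time plus the
    window length."""
    return triggered_at + detection_window

# A monitor triggered at 1:00 PM with a 15-minute detection window
# cannot resolve before 1:15 PM.
t = earliest_resolution(datetime(2024, 1, 1, 13, 0), timedelta(minutes=15))
print(t.strftime("%I:%M %p"))  # 01:15 PM
```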
-## Tools
+## What are the monitor status values?
+
+| Status | Meaning |
+|:--|:--|
+| **Normal** | No trigger conditions are met; data is actively monitored. |
+| **Critical** | The critical threshold condition is met. |
+| **Warning** | The warning threshold condition is met. |
+| **Missing Data** | No data was returned within the detection window. |
+
+A monitor returns to Normal when none of the Critical, Warning, or Missing Data conditions are met.
+
+## Where can monitors be managed programmatically?
-* [Monitor resource in Terraform](https://registry.terraform.io/providers/SumoLogic/sumologic/latest/docs/resources/monitor)
-* [Monitor Management API](/docs/api/monitors-management)
+- **Terraform**. Use the [`sumologic_monitor`](https://registry.terraform.io/providers/SumoLogic/sumologic/latest/docs/resources/monitor) and [`sumologic_monitor_folder`](https://registry.terraform.io/providers/SumoLogic/sumologic/latest/docs/resources/monitor_folder) resources.
+- **API**. Use the [Monitor Management API](/docs/api/monitors-management/).
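
As an illustration of the API route, a monitor definition can be assembled as JSON before sending it to the Monitor Management API. This is a minimal sketch: the field names follow the public API schema, but treat the exact payload shape, and any additional fields your monitor needs, as assumptions to verify against the API reference.

```python
import json

def log_monitor_payload(name: str, query: str, threshold: float) -> dict:
    # Minimal monitor definition sketch; verify field names against
    # the Monitor Management API reference before use.
    return {
        "name": name,
        "type": "MonitorsLibraryMonitor",
        "monitorType": "Logs",
        "queries": [{"rowId": "A", "query": query}],
        "triggers": [{
            "triggerType": "Critical",
            "threshold": threshold,
            "thresholdType": "GreaterThanOrEqual",
            "timeRange": "-15m",
        }],
    }

payload = log_monitor_payload(
    "Error spike", "_sourceCategory=prod error | count", 10
)
print(json.dumps(payload, indent=2))
```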
+## What does muting a monitor do?
-## Terminology
+Muting a monitor suppresses notifications for the duration of the mute schedule, but the monitor continues to evaluate data and generate alerts internally. Use
+[Muting Schedules](/docs/alerts/monitors/muting-schedules/) to silence notifications during planned maintenance windows without disabling the monitor entirely.
-Here are the technical terms used in monitors.
+## What are the key terms used in monitors?
-### Detection method
-This can be _Static_, _Dynamic_, _Anomaly_, or _Outlier_.
+| Term | Definition |
+|:--|:--|
+| **Detection method** | _Static_, _Dynamic_, _Anomaly_, or _Outlier_ defines how the monitor identifies a trigger condition. |
+| **Disable** | A disabled monitor is not processed by the backend; only its definition is persisted in the database. |
+| **Incident** | Created when a trigger condition is met. |
+| **Monitor** | The object you configure within Sumo Logic that checks for specific events of interest against a data source, based on your specified conditions, and notifies you about those events based on your preferences. A _Monitor_ creates an _Alert_. |
+| **Monitor type** | The underlying data stream, either logs or metrics, on which the monitor is created. |
+| **Mute** | When a monitor is in a mute state, it continues to process your data stream as expected where alerts are still generated. However, notifications are suppressed based on your mute condition. See also: [Muting Schedules](/docs/alerts/monitors/muting-schedules). |
+| **Resolve** | The process of closing an incident. |
+| **Status** | The state of the monitor can be one of the following: Normal, Critical, Warning, or Missing Data.|
+| **Template** | The section that describes the actual connection attributes. |
+| **Threshold** | The static condition that, when met, causes the monitor to trigger an incident. |
+| **Trigger (state)** | The state when an alert condition has been met, and an incident has been created as a result. |
+| **Trigger type** | The type of trigger condition: Critical, Warning, or Missing Data. |
+| **Alert variables** | Custom variables used inside the Action Payload. |
-### Disable
-The monitor is in a disabled state when monitors are not processed by the backend, only their definition is persisted in the database.
+## What are the limitations of monitors?
-### Incident
-When a specific alerting condition is met, as defined on the monitor, an incident is triggered.
+The following features and operators are **not supported** in monitors:
-### Monitor
+- [Receipt Time](/docs/search/get-started-with-search/build-search/use-receipt-time/)
+- [LogReduce](/docs/search/behavior-insights/logreduce/logreduce-operator/) and [LogCompare](/docs/search/behavior-insights/logcompare/)
+- [Save to Index](/docs/alerts/scheduled-searches/save-to-index/) and [Save to Lookup](/docs/alerts/scheduled-searches/save-to-lookup/)
+- [Search templates](/docs/search/get-started-with-search/build-search/search-templates/)
+- [`timeshift` metrics operator](/docs/metrics/metrics-operators/timeshift/)
+- [Hidden Metrics queries](/docs/metrics/metrics-queries/metrics-explorer/) do not persist across edit sessions.
+- **Numeric limits:**
+ | Limit | Value |
+ |:--|:--|
+ | Log monitor query length | 15,000 characters |
+ | Metric monitor queries | Up to 6 per monitor |
+ | Aggregate metric monitor time series | 15,000 |
+ | Non-aggregate metric monitor time series | 3,000 |
+ | Email notification recipients | 100 |
+  | Time range precision | Last millisecond excluded; a range of 6:15 to 6:30 PM runs as 6:15:00.000–6:29:59.999 |
+- Monitors only support the [Continuous data tier](/docs/manage/partitions/data-tiers/).
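
The time-range precision rule above can be shown directly (times are illustrative):

```python
from datetime import datetime, timedelta

def effective_range(start: datetime, end: datetime):
    """The last millisecond of the defined range is excluded, so the
    search effectively ends one millisecond before `end`."""
    return start, end - timedelta(milliseconds=1)

# A 6:15 to 6:30 PM range runs as 6:15:00.000 to 6:29:59.999.
start, end = effective_range(
    datetime(2024, 1, 1, 18, 15), datetime(2024, 1, 1, 18, 30)
)
print(end.strftime("%H:%M:%S.%f")[:-3])  # 18:29:59.999
```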
-A _Monitor_ creates an _Alert_. Using the options below, you're subscribing to an _Alert's Monitor_.
+## FAQs
-The monitor is the object that you configure within Sumo Logic that:
- * Checks for specific events of interest against a data source, based on your specified conditions. Events of interest are used in a general sense to denote an event that may be of interest to you.
- * Notifies you about the event-of-interest based on your preferences.
+### What is a Sumo Logic monitor?
-### Monitor type
-The underlying data stream, either logs or metrics, on which the monitor is created.
+A monitor continuously queries logs or metrics data and sends a notification when a defined condition is met, such as an error count exceeding a threshold, a metric spiking above a baseline, or log data stopping entirely.
-### Mute
-When a monitor is in a mute state, it continues to process your data stream as expected where alerts are still generated. However, notifications are suppressed based on your mute condition. See also: [Muting Schedules](/docs/alerts/monitors/muting-schedules).
+### What is the difference between a monitor and a scheduled search?
-### Resolve
-The process of closing an incident.
+A monitor evaluates data continuously, from every few seconds to every few minutes, and fires in real time when a condition is breached. A scheduled search runs at a fixed interval such as hourly or daily and sends a report of results. Use monitors for real-time alerting and scheduled searches for periodic reporting.
-### Status
-The state of the monitor can be one of the following: Normal, Critical, Warning, or Missing Data.
+### How many monitors can a Sumo Logic account have?
-### Template
-The section that describes the actual connection attributes.
+Enterprise and Trial accounts support up to 1,000 log monitors and 1,500 metric monitors. Essentials and Professional accounts support up to 300 log monitors and 500 metric monitors. Free Trial accounts support up to 50 of each type.
-### Threshold
-The static condition which when met an incident is triggered by a monitor.
+### What permissions are needed to create a monitor?
-### Trigger (state)
-The state when an alert condition has been met, and an incident has been created as a result.
+The **Manage Monitors** role capability is required to create or edit monitors. The **View Monitors** capability is required to view them. Permissions can also be set at the folder level.
-### Trigger type
-Type of alert/trigger condition defined Critical/Warning/Missing Data.
+### When does a Sumo Logic monitor auto-resolve?
-### Alert variables
-Custom variables used inside the Action Payload.
+A monitor resolves automatically when the recovery condition is met for the entire duration of the detection window. For example, a monitor that triggered at 1:00 PM with a 15-minute window can resolve no earlier than 1:15 PM. Incidents without new data for 24 hours are automatically expired and marked resolved.
+### What are the limitations of Sumo Logic monitors?
-## Limitations
+Monitors do not support Receipt Time, LogReduce, LogCompare, Save to Index, Save to Lookup, or Search Templates. An aggregate metric monitor evaluates up to 15,000 time series; a non-aggregate metric monitor evaluates up to 3,000. Log monitor queries are limited to 15,000 characters. Email notifications support up to 100 recipients.
-### General
+### What happens when a monitor is muted?
-* [Receipt Time](../../search/get-started-with-search/build-search/use-receipt-time.md) is not supported.
-* [LogReduce](/docs/search/behavior-insights/logreduce/logreduce-operator) / [LogCompare](/docs/search/behavior-insights/logcompare) operators are not supported in monitors. If your query contains these operators, you will not be able to create the monitor.
-* Monitors only support the [Continuous data tier](/docs/manage/partitions/data-tiers).
-* An aggregate Metric Monitor can evaluate up to 15,000 time series. A non-aggregate Metric Monitor can evaluate up to 3,000 time series.
-* [Save to Index](../scheduled-searches/save-to-index.md) and [Save to Lookup](../scheduled-searches/save-to-lookup.md) are not supported.
-* [Search templates](../../search/get-started-with-search/build-search/search-templates.md) are not supported.
-* A Log Monitor can have one query up to 15,000 characters long. Metric monitors can specify up to six queries.
-* Email notifications can have up to 100 recipients.
-* The [`timeshift metrics` operator](/docs/metrics/metrics-operators/timeshift) is not supported in a Metric Monitor.
-* [Hidden Metrics queries](../../metrics/metrics-queries/metrics-explorer.md) do not persist across edit sessions.
-* The last millisecond of the defined time range is not searched. For example, a time range of 6:15 to 6.30 pm will run as 6:15:00:000 to 6:29:59:999.
+A muted monitor continues to evaluate data and generate alerts internally, but notifications are suppressed for the duration of the mute. Use muting schedules to silence notifications during planned maintenance without disabling the monitor.
\ No newline at end of file
diff --git a/docs/dashboards/about.md b/docs/dashboards/about.md
index b76df17f3c..02ee595121 100644
--- a/docs/dashboards/about.md
+++ b/docs/dashboards/about.md
@@ -1,25 +1,110 @@
---
id: about
-title: About Dashboard
+title: Dashboards Overview
sidebar_label: About Dashboard
-description: Learn the benefits of Dashboard and how it seamlessly integrates log, metric, and trace data.
+description: Sumo Logic dashboards let you visualize log and metric data together in real time with template variable filters, auto-refresh, dark mode, drill-down, and scheduled email reports.
+keywords:
+ - dashboards
+ - log-dashboard
+ - metric-dashboard
+ - real-time-dashboard
+ - template-variables
+ - dashboard-auto-refresh
+ - dashboard-dark-mode
+ - operational-dashboard
+ - build-a-dashboard
+head:
+ - tagName: script
+ attributes:
+ type: application/ld+json
+ innerHTML: |
+ {
+ "@context": "https://schema.org",
+ "@type": "FAQPage",
+ "mainEntity": [
+ {
+ "@type": "Question",
+ "name": "What is a Sumo Logic dashboard?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+          "text": "A Sumo Logic dashboard is a real-time visualization surface that displays log and metric data together in a single view. Panels support charts, tables, maps, and single-value displays. Dashboards can be filtered with template variables, set to auto-refresh, shared with teammates, and exported as PDF, PNG, or JSON."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to build a real-time operational dashboard from logs in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Create a new dashboard, add log or metric panels directly from the dashboard editor, write queries for each panel, set a time range and optional auto-refresh interval, and use template variables to make filters dynamic. See the Create a Dashboard page for step-by-step instructions."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "Can Sumo Logic dashboards display logs and metrics together?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Yes. Sumo Logic dashboards support both log and metric queries in the same panel and across panels on the same dashboard, giving a unified view of application and infrastructure data."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to share a Sumo Logic dashboard?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+          "text": "Use the Share option in the dashboard menu to share with teammates inside the organization, preserving template variables and time range. Dashboards can also be shared publicly outside the organization using a public URL."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to set up auto-refresh on a Sumo Logic dashboard?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Click the dropdown arrow next to the refresh icon on the dashboard and select a refresh interval. Auto-refresh applies to the entire dashboard and cannot be set per panel. If the requested interval is not achievable due to query complexity or time range, an error message indicates the actual refresh rate."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to send a Sumo Logic dashboard as a scheduled email report?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Use the Scheduled Report feature to send a dashboard snapshot by email on a defined schedule. See the Scheduled Report page for setup steps."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What are the limitations of Sumo Logic dashboards?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "A dashboard can have up to 100 queries. Each panel supports up to 6 log queries and 6 metric queries. Dashboard queries cannot return more than 1,440 data points. Joining log queries across panels is not supported. The operators Details, LogReduce, LogCompare, Save, and Transaction cannot be used in dashboard panels."
+ }
+ }
+ ]
+ }
---
import useBaseUrl from '@docusaurus/useBaseUrl';
-Dashboard allows you to analyze metric and log data on the same dashboard, in a streamlined user experience. This is exactly what you need to effectively monitor and manage a Kubernetes environment.
+A Sumo Logic dashboard displays log and metric data together in a single real-time view. Panels support a range of chart types, template variable filters make dashboards dynamic, and auto-refresh keeps data current without manual reloads. This is exactly what you need to effectively monitor and manage a Kubernetes environment.
Dashboards are a critical tool for monitoring and troubleshooting modern applications, allowing you to quickly navigate through your data without having to learn a query language. Graphs and data mappings provide visual representations of data that enable you to quickly identify and resolve key issues.
-## What's great about Dashboard
+## What can Sumo Logic dashboards display?
-Dashboard provides the unique ability to display metrics metadata and logs data on the same dashboard in an integrated seamless view. This gives you control over the visual display of metric data as well as log data. Dashboard streamlines dashboard configuration and on-the-fly analytic visualizations with its new templating features.
+Dashboards support both log and metric queries in the same panel and across panels on the same dashboard. This gives a unified view of application logs and infrastructure metrics without switching between tools.
-[Template variables](filter-template-variables.md) allow you to filter dashboard data dynamically to generate new visualizations for intuitive chart creation and data scoping.
+Supported panel types include: Area, Bar, Box Plot, Bubble, Cluster Map, Column, Combo, Connection Map, Funnel, Geo Heat Map, Heat Map, Honeycomb, Line, Pie, Sankey Diagram, Scatter, Single Value, Table, and Text panels.
-### Features
+See [Panels](/docs/dashboards/panels/) for details on each chart type.
+
+## How do template variables work in dashboards?
+
+[Template variables](/docs/dashboards/filter-template-variables/) let you filter dashboard data dynamically without editing individual panel queries. A variable can be applied across both log and metric panels simultaneously, and the dashboard updates all panels when the variable value changes.
+
+Template variables support full replacement control over the inserted values.
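
Conceptually, variable substitution behaves like token replacement in the panel query, using the `{{variable}}` syntax. The helper below is a hypothetical sketch for illustration, not a Sumo Logic API:

```python
import re

def apply_template_variables(query: str, variables: dict) -> str:
    """Replace each {{name}} token with the selected variable value;
    tokens with no matching variable are left untouched."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        query,
    )

query = apply_template_variables(
    "_sourceCategory={{env}}/app error | count by _sourceHost",
    {"env": "prod"},
)
print(query)  # _sourceCategory=prod/app error | count by _sourceHost
```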
+
+## What features does Sumo Logic Dashboard support?
:::tip
See [Migrate to Dashboards](/docs/dashboards/dashboards-migration).
@@ -55,30 +140,37 @@ The following table shows the availability of features for Dashboard.
| Locate Deviations in a Time Series |[Supported](/docs/dashboards/locate-deviations-time-series/) |
| Longer Time Range Queries | [Supported](/docs/dashboards/set-custom-time-ranges/) |
-## Restricted Operators in Dashboard
+## What operators cannot be used in dashboard panels?
-The following operators cannot be used with Dashboard:
+The following operators are not supported in dashboard panels:
-* Details
-* LogReduce
-* LogCompare
-* Save
-* Transaction
+- `Details`
+- `LogReduce`
+- `LogCompare`
+- `Save`
+- `Transaction`
:::note
-Live mode restrictions do not apply to Dashboard.
+Live mode restrictions do not apply to dashboards.
:::
-## Limitations
+See [Restricted Operators in Dashboards](/docs/dashboards/restricted-operators-dashboards/) for full details.
+
+## What are the limits for Sumo Logic dashboards?
+
+| Limit | Value |
+|:--|:--|
+| Queries per dashboard | 100 |
+| Log queries per panel | 6 |
+| Metric queries per panel | 6 |
+| Data points per query | 1,440 |
+| Joining log queries across panels | Not supported |
+
+Chart properties set in a panel are not retained when the chart is viewed from the Search page, nor when a chart is added to a dashboard from the Search page.
-* A panel can have up to 6 logs and 6 metrics queries.
-* Joining log queries in a separate query is not supported. See how to [join metric queries](/docs/metrics/metrics-queries/metrics-explorer) for details on how this works.
-* A Dashboard can have up to 100 queries.
-* Dashboard chart properties are not retained when viewed from the Search page.
-* Chart properties are not retained when a chart is added to a Dashboard from the Search page.
-* Dashboard queries cannot return more than 1440 data points.
+## How does auto-refresh work?
-## Rules
+Dashboards can automatically refresh all panels at a configured interval. To set the interval, click the dropdown arrow next to the refresh icon and select a rate.
* Auto Refresh applies to the whole dashboard, you cannot configure it by panel.
* If there are two or more queries in a panel, the refresh interval for the panel is set to the maximum supported interval.
@@ -87,24 +179,19 @@ Live mode restrictions do not apply to Dashboard.
* An operator is not supported at this refresh interval.
* The number of grouped elements is too large for the requested interval.
-## Auto Refresh
+See [Restricted Operators in Dashboards](/docs/dashboards/restricted-operators-dashboards/) for a full list of operators that affect refresh behavior.
-Your dashboard can automatically refresh its panels to the latest information. You have the ability to configure the refresh interval rate by clicking the dropdown arrow next to the refresh icon.
-
-There are some restrictions when using operators with dashboards. To learn more, see [Restricted Operators in Dashboards](/docs/dashboards/restricted-operators-dashboards).
-
A list of the refresh interval rates is provided for you to select from.
-
-## Dark Theme
+## How to switch to dark mode?
Dashboards have two themes available: Light mode (which is the default) and Dark mode. You can toggle between the two themes within the dashboard by clicking the three-dot kebab icon. The following image shows the option to **Switch to Dark Theme**.
-## Clickable Legend
+## How does the clickable legend work?
If you want to focus on one item in your chart you can simply click on the item in the legend. If you want to toggle just one legend item, just hold the **shift** key and then click the item.
-## Dashboard Information
+## How do I view dashboard scan cost information?
-The dashboard information popup provides insights into the scan costs associated with log-based queries that run within dashboards.
+The dashboard information dialog provides insight into the scan costs associated with log-based queries that run within dashboards.
To view the dashboard information, follow the steps below:
1. Open the dashboard for which you need to view the information.
@@ -117,4 +204,35 @@ To view the dashboard information, follow the steps below:
- **End**. The current end time based on the selected time range.
- **Time Zone**. The time zone for the set time range.
- **Scanned Bytes**. The total amount of data scanned in bytes.
- - **Dashboard ID**. A unique identification ID for the dashboard. Copy and use the dashboard ID within the APIs to identify the dashboard when making requests.
\ No newline at end of file
+ - **Dashboard ID**. A unique identification ID for the dashboard. Copy and use the dashboard ID within the APIs to identify the dashboard when making requests.
+
+
+## FAQs
+
+### What is a Sumo Logic dashboard?
+
+A Sumo Logic dashboard is a real-time visualization surface that displays log and metric data together in a single view. Panels support charts, tables, maps, and single-value displays. Dashboards can be filtered with template variables, set to auto-refresh, shared with teammates, and exported as PDF, PNG, or JSON.
+
+### How to build a real-time operational dashboard from logs?
+
+Create a new dashboard, add log or metric panels directly from the dashboard editor, write queries for each panel, set a time range and optional auto-refresh interval, and use template variables to make filters dynamic. See [Create a Dashboard](/docs/dashboards/create-dashboard-new/) for step-by-step instructions.
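+As an illustration (the source category and filter are hypothetical), a single log panel query for an error-rate chart might look like:
+
+```
+_sourceCategory=prod/web error
+| timeslice 1m
+| count by _timeslice
+```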
+
+### Can Sumo Logic dashboards display logs and metrics together?
+
+Yes. Both log and metric queries are supported in the same panel and across panels on the same dashboard, giving a unified view of application and infrastructure data.
+
+### How to share a Sumo Logic dashboard?
+
+Use the **Share** option in the dashboard menu to share with teammates inside the organization, preserving template variables and time range. Dashboards can also be shared publicly outside the organization. See [Share a Dashboard](/docs/dashboards/share-dashboard-new/) and [Share a Dashboard Outside Your Organization](/docs/dashboards/share-dashboard-outside-org/).
+
+### How to set up auto-refresh on a dashboard?
+
+Click the dropdown arrow next to the refresh icon and select an interval. Auto-refresh applies to the entire dashboard. If the requested interval is not achievable, an error message explains the reason — usually a time range that is too long or an unsupported operator.
+
+### How to send a dashboard as a scheduled email report?
+
+Use the [Scheduled Report](/docs/dashboards/scheduled-report/) feature to send a dashboard snapshot by email on a defined schedule.
+
+### What are the limitations of Sumo Logic dashboards?
+
+A dashboard supports up to 100 queries total. Each panel supports up to 6 log and 6 metric queries. Queries cannot return more than 1,440 data points. Joining log queries across panels is not supported. The operators Details, LogReduce, LogCompare, Save, and Transaction cannot be used in dashboard panels.
\ No newline at end of file
diff --git a/docs/manage/field-extractions/create-field-extraction-rule.md b/docs/manage/field-extractions/create-field-extraction-rule.md
index 0006c598af..7aafb78fd4 100644
--- a/docs/manage/field-extractions/create-field-extraction-rule.md
+++ b/docs/manage/field-extractions/create-field-extraction-rule.md
@@ -1,16 +1,99 @@
---
id: create-field-extraction-rule
-title: Create a Field Extraction Rule
-description: Field Extraction Rules (FER) tell Sumo Logic which fields to parse out automatically.
+title: How to Create a Field Extraction Rule in Sumo Logic
+sidebar_label: Create a Field Extraction Rule
+description: Create a Field Extraction Rule (FER) in Sumo Logic to automatically parse fields from log messages at ingest time, making fields available for searches, alerts, and dashboards without query-level parsing.
+keywords:
+ - Sumo Logic
+ - create field extraction rule
+ - FER
+ - parse log fields at ingest
+ - extract fields from logs
+ - automatic log parsing
+ - ingest time field extraction
+ - run time field extraction
+ - parse regex logs
+ - log field extraction rule
+head:
+ - tagName: script
+ attributes:
+ type: application/ld+json
+ innerHTML: |
+ {
+ "@context": "https://schema.org",
+ "@type": "FAQPage",
+ "mainEntity": [
+ {
+ "@type": "Question",
+ "name": "How to create a field extraction rule in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Go to Data Management > Logs > Field Extraction Rules, click + Add, select the rule type (Ingest Time or Run Time), define the scope to target the relevant log sources, write a parse expression to extract the fields, and click Save."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to extract a value from a log message using regex in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Create an Ingest Time field extraction rule with a parse expression using the parse regex operator. For example: parse regex \"user=(?<user>\\S+)\" extracts the user field from every matching log message at ingestion time, making it available in all searches and dashboards without repeating the regex in queries."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What is the difference between Ingest Time and Run Time field extraction in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Ingest Time rules parse any log format using manually written expressions and apply to data ingested after the rule is created, providing better search performance. Run Time rules parse JSON data automatically during a search using Dynamic Parsing and have no rule limit. Run Time rules are more flexible but add overhead at query time."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What operators can be used in a field extraction rule parse expression?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Ingest Time field extraction rules support the following operators in the parse expression: parse regex, parse anchor, parse nodrop, csv, fields, json, keyvalue, and num. The multi and auto options are not supported."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to parse multiple fields from a log message in a single field extraction rule?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Use a single parse expression with multiple named capture groups or wildcards. For example: parse \"[hostId=*] [module=*] [localUserName=*]\" as hostId, module, localUserName extracts three fields from each matching log message in one rule."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What are the best practices for designing field extraction rules?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Lock down the scope as tightly as possible to target only the logs that need parsing. Create multiple specific rules rather than one complex rule. Extract only the fields that are actually needed. Test the scope as a search before saving the rule. Avoid using the same field name in multiple rules that target the same messages."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "Can field extraction rules be managed with Terraform?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Yes. Use the sumologic_field_extraction_rule resource in the Sumo Logic Terraform provider to create and manage field extraction rules as code."
+ }
+ }
+ ]
+ }
---
import useBaseUrl from '@docusaurus/useBaseUrl';
import Iframe from 'react-iframe';
import FerLimit from '../../reuse/fer-limitations.md';
+A Field Extraction Rule (FER) automatically parses fields from log messages at ingestion time, making those fields available in searches, alerts, scheduled searches, and dashboards without writing parse expressions in every query.
+
You can create a field extraction rule of your own from scratch by following the instructions below. We also provide [data-source-specific templates](/docs/manage/field-extractions/fer-templates/index.md) for AWS, Apache, and more.
-You need the **Manage field extraction rules** [role capability](../users-roles/roles/role-capabilities.md) to create a field extraction rule.
+:::info
+The **Manage field extraction rules** [role capability](/docs/manage/users-roles/roles/role-capabilities/) is required to create a field extraction rule.
+:::
:::note
Fields specified in field extraction rules are automatically added and enabled in your [Fields](/docs/manage/fields) table schema.
@@ -25,9 +108,6 @@ You can use Terraform to provide a field extraction rule with the [`sumologic_fi
:::
:::training Micro Lesson
-
-Learn how to create a FER through our video, "Creating a Field Extraction Rule".
-
-
:::
-## Creating a new Field Extraction Rule
+## How to create a field extraction rule?
To create a Field Extraction Rule:
@@ -52,103 +131,120 @@ To create a Field Extraction Rule:
1. Enter the following options:
* **Rule Name**. Type a name that makes it easy to identify the rule.
* **Applied At**. There are two types available: Ingest Time and Run Time. They differ mainly in that Run Time supports only JSON data, and in when Sumo Logic parses the fields. The following is an overview of the differences:
- * Ingest Time
- * Parsing support - any data format, requires manually written parser expressions.
- * Rule limit - There is a limit of 50 Field Extraction Rules and 200 fields. This includes the default fields defined by Sumo Logic (about 16). The 200-field limit is per account, and deleting rules does not create more space.
- * Time - At the time of ingestion, only applies to data moving forward. If you want to parse data ingested before the creation of your FER, you can either parse your data in your query, or create Scheduled Views to extract fields for your historical data.
- * Run Time
- * Parsing support - JSON, automatically
- * Rule limit - none
- * Time - During a search when using **Auto Parse Mode** from [Dynamic Parsing](../../search/get-started-with-search/build-search/dynamic-parsing.md).
+ * **Ingest Time**
+ * Parsing support: any data format; requires manually written parse expressions.
+ * Rule limit: There is a limit of 50 Field Extraction Rules and 200 fields. This includes the default fields defined by Sumo Logic (about 16). The 200-field limit is per account, and deleting rules does not create more space.
+ * Time: At the time of ingestion, only applies to data moving forward. If you want to parse data ingested before the creation of your FER, you can either parse your data in your query, or create Scheduled Views to extract fields for your historical data.
+ * **Run Time**
+ * Parsing support: JSON, parsed automatically.
+ * Rule limit: none.
+ * Time: During a search when using **Auto Parse Mode** from [Dynamic Parsing](../../search/get-started-with-search/build-search/dynamic-parsing.md).
* **Scope**. Select either **All Data** or **Specific Data**. When specifying data the options for the scope differ depending on when the rule is applied.
* For an **Ingest Time** rule, type a [keyword search expression](/docs/search/get-started-with-search/build-search/keyword-search-expressions.md) that points to the subset of logs you'd like to parse. Think of the scope as the first portion of an ad hoc search, before the first pipe (`|`). You'll use the scope to run a search against the rule. Custom metadata fields are not supported here, they have not been indexed to your data yet at this point in collection.
* For a **Run Time** rule, define the scope of your JSON data. You can define your JSON data source as a [partition](/docs/manage/partitions) Name(index), sourceCategory, Host Name, Collector Name, or any other [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) that describes your JSON data. Think of the scope as the first portion of an ad hoc search, before the first pipe (`|`). You'll use the scope to run a search against the rule. You cannot use keywords like “info” or “error” in your scope.
-
:::note
Always set up JSON auto extraction (Run Time field extraction) on a specific partition name (recommended) or a particular Source. Failing to do so might cause the auto parsing logic to run on data sources where it is not applicable and will add additional overhead that might deteriorate the performance of your queries.
:::
-
:::sumo Best Practices
If you are not using partitions we recommend using [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) fields like `_sourceCategory`, `_sourceHost` or `_collector` to define the scope.
-
We recommend creating a separate partition for your JSON dataset and using that partition as the scope for run time field extraction. For example, let's say you have AWS CloudTrail logs, and they are stored in the `_view=cloudtrail` partition in Sumo. You can create a Run Time FER with the scope `_view=cloudtrail`. Creating a separate partition and using it as the scope for run time field extraction ensures that the auto parsing logic applies only to the necessary partitions.
:::
-
* **Parsed template** (Optional for Ingest Time rules).
* Click the dropdown under **Parsed template** to see the available templates.
* Choose a template and click **Use Template**. The template is applied to the Parse Expression.
* **Parse Expression**. (Applicable to Ingest Time rules)
* Type a valid parse expression with supported parse and search operators. Because fields are associated with the Rule Name, you can parse one particular field into as many rules as you'd like. For example, to parse a single field, you could use a definition similar to this: `parse "message count = *," as msg_count`. To parse multiple fields, you could use a definition similar to this: `parse "[hostId=*] [module=*] [localUserName=*] [logger=*] [thread=*]" as hostId, module, localUserName, logger, thread`.
-
1. **Extracted Fields** (applicable to Ingest Time rules) shows the field names the rule will parse. Any fields that do not exist in the Field table schema are shown with the text **New** highlighted in green. New fields are automatically created in the table schema when you save the rule. You can view and manage the field table schema on the [Fields](/docs/manage/fields) page.
1. Click **Save** to create the rule.
-## Example Template
+## What does a field extraction rule look like in practice?
+
+- **Rule Name:** Fake Log Parse
+- **Log Type:** Fake Log
+- **Rule Description:** Parse the email, sessionID and action type from a fake log message.
+- **Sample Log:**
+ ```
+ 12-12-2012 12:00:00.123 user="test@demo.com" action="delete" sessionID="145623"
+ ```
+- **Extraction Rule:**
+ ```
+ parse "user=\"*\" action=\"*\" sessionID=\"*\"" as user, action, sessionId
+ ```
+- **Resulting Fields:**
+ | Field Name | Description | Example |
+ |:--|:--|:--|
+ | user | User email address | `test@demo.com` |
+ | action | Action performed by the user | `delete` |
+ | sessionId | Session ID for the user action | `145623` |
+
+## What are the best practices for designing field extraction rules?
+
+- **Include the most accurate keywords to identify the subset of data from which you want to extract data.** Lock down the scope as tightly as possible to make sure it's extracting just the data you want, nothing more. Using a broader scope means that Sumo Logic will inspect more data for the fields you'd like to parse, which may mean that fields are extracted when you do not actually need them.
-**Rule Name:** Fake Log Parse
+- **Create multiple, specific rules.** Instead of constructing complicated rules, create multiple rules with basic scope, then search on more than one (rules are additive). The OR and AND commands are supported, just as in any search. For example, you could use one rule to parse Apache log response codes, and then use another rule to parse response time. When used together, you can get all of the information you may need.
-**Log Type:** Fake Log
+- **Don't extract fields you do not need.** Extract the minimum number of fields that should all be present in logs. Every field you include in the scope shows up in every search, so including extra fields means you'll see more results than you may need. It's better to create more rules that extract the fields that are most commonly used. First, look at common data sources and see what's most frequently extracted. Then, think about what you most frequently parse from those sources, then create rules to automatically extract those fields.
-**Rule Description:** Parse the email, sessionID and action type from a fake log message.
+- **Create multiple parse nodrop statements in an FER for a field name to match distinct log patterns.** The different parse statements effectively function as an OR statement, since only one will match the log message and return the field value.
-**Sample Log:**
+- **Test the scope before creating the rule.** Make sure that you can extract fields from all messages you need to be returned in search results. Test them by running a potential rule as a search.
-```
-12-12-2012 12:00:00.123 user="test@demo.com" action="delete" sessionID="145623"
-```
+- **Make sure all fields appear in the scope you define.** When Field Extraction is applied to data, all fields must be present to have any fields indexed; even if one field isn't found in a message, that message is dropped from the results. In other words, it's all or nothing. For multiple sets of fields that are somewhat independent, make two rules.
-**Extraction Rule:**
+- **Reuse field names in multiple FERs when their scopes are distinct and do not match the same messages.** To save space and allow for more FERs within your 200-field limit, you can reuse field names as long as they are used in non-overlapping FERs.
-```
-parse "user=\"*\" action=\"*\" sessionId=\"*\"" as user, action, sessionid
-```
+- **Avoid targeting the same field name in the same message with multiple FERs.** When more than one FER targets the same message with the same field name, one of the rules will NOT apply. The rule applied to the specific field name is randomly selected. Don't use the same field names in multiple FERs that target the same messages.
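+As a sketch of the parse nodrop practice above (the log patterns are illustrative), two piped statements can extract the same field from two distinct formats:
+
+```
+parse "user=*," as user nodrop
+| parse "username: *;" as user nodrop
+```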
-**Resulting Fields:**
+## What operators can be used in a parse expression?
-| Field Name | Description | Example |
-|:--|:--|:--|
-| user | User Email Address | `test@email.com` |
-| action | Action performed by the user | Delete |
-| sessionId | Session ID for user action | 145623 |
+The following operators can be used as part of the **Parse Expression** in an Ingest Time Field Extraction Rule.
-## Best practices for designing rules
+* `parse regex`
+* `parse anchor`
+* `parse nodrop`
+* `csv`
+* `fields`
+* `json`
+* `keyvalue`
+* `num`
-**Include the most accurate keywords to identify the subset of data from which you want to extract data.** Lock down the scope as tightly as possible to make sure it's extracting just the data you want, nothing more. Using a broader scope means that Sumo Logic will inspect more data for the fields you'd like to parse, which may mean that fields are extracted when you do not actually need them.
+:::note
+The **multi** and **auto** options are not supported in field extraction rules.
+:::
-**Create multiple, specific rules.** Instead of constructing complicated rules, create multiple rules with basic scope, then search on more than one (rules are additive). The OR and AND commands are supported, just as in any search. For example, you could use one rule to parse Apache log response codes, and then use another rule to parse response time. When used together, you can get all of the information you may need.
+## What are the limits for field extraction rules?
-**Don't extract fields you do not need.** Extract the minimum number of fields that should all be present in logs. Every field you include in the scope shows up in every search, so including extra fields means you'll see more results than you may need. It's better to create more rules that extract the fields that are most commonly used. First, look at common data sources and see what's most frequently extracted. Then, think about what you most frequently parse from those sources, then create rules to automatically extract those fields.
+The `parse multi` operator is not supported in FERs.
-**Create multiple parse nodrop statements in an FER for a field name to match distinct log patterns**. The different parse statements will effectively function like an OR statement since only one will match the log message and return the field value.
+
-**Test the scope before creating the rule.** Make sure that you can extract fields from all messages you need to be returned in search results. Test them by running a potential rule as a search.
+## FAQs
-**Make sure all fields appear in the scope you define.** When Field Extraction is applied to data, all fields must be present to have any fields indexed; even if one field isn't found in a message, that message is dropped from the results. In other words, it's all or nothing. For multiple sets of fields that are somewhat independent, make two rules.
+### How to create a field extraction rule in Sumo Logic?
-**Reuse field names in multiple FERs if scope is distinct and separate and not matching same messages.** To save space and allow for more FERs within your 200 field limit, you can reuse the field names as long as they are used in non-overlapping FERs.
+Navigate to **Data Management > Logs > Field Extraction Rules**, click **+ Add**, select the rule type (Ingest Time or Run Time), define the scope to target the relevant log sources, write a parse expression to extract the fields, and click **Save**.
-**Avoid targeting the same field name in the same message with multiple FERs.** When more than one FER targets the same message with the same field name, one of the rules will NOT apply. The rule applied to the specific field name is randomly selected. Don't use the same field names in multiple FERs that target the same messages.
+### How to extract a value from a log message using regex in Sumo Logic?
-## Supported parsing and search operators
+Create an Ingest Time field extraction rule with a parse expression using `parse regex`. For example, `parse regex "user=(?<user>\S+)"` extracts the `user` field from every matching log message at ingestion time, making it available in all searches and dashboards without repeating the regex in every query.
-The following operators can be used as part of the **Parse Expression** in an Ingest Time Field Extraction Rule.
+### What is the difference between Ingest Time and Run Time field extraction?
-* parse regex
-* parse anchor
-* parse nodrop
-* csv
-* fields
-* json
-* keyvalue
-* num
+Ingest Time rules parse any log format using manually written expressions and apply to data ingested after the rule is created, providing better search performance. Run Time rules parse JSON data automatically during a search using Dynamic Parsing and have no rule limit. Run Time rules are more flexible but add overhead at query time.
-:::note
-The **multi** and **auto** options are not supported in FERs.
-:::
+### What operators can be used in a field extraction rule parse expression?
+Ingest Time field extraction rules support: `parse regex`, `parse anchor`, `parse nodrop`, `csv`, `fields`, `json`, `keyvalue`, and `num`. The `multi` and `auto` options are not supported.
-## Limitations
+### How to parse multiple fields from a log message in a single rule?
-The `parse multi` operator is not supported in FERs.
+Use a single parse expression with multiple wildcards or named capture groups. For example: `parse "[hostId=*] [module=*] [localUserName=*]" as hostId, module, localUserName` extracts three fields from each matching log message in one rule.
-
+### What are the best practices for designing field extraction rules?
+
+Lock down the scope to target only the logs that need parsing. Create multiple specific rules rather than one complex rule. Extract only fields that are actually needed. Test the scope as a search before saving the rule. Avoid reusing the same field name in multiple rules that target the same messages.
+
+### Can field extraction rules be managed with Terraform?
+
+Yes. Use the `sumologic_field_extraction_rule` resource in the Sumo Logic Terraform provider to create and manage field extraction rules as code. See [Use Terraform with Sumo Logic](/docs/api/about-apis/terraform-with-sumo-logic/).
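+A minimal sketch of the resource (the scope and parse expression values are illustrative):
+
+```hcl
+resource "sumologic_field_extraction_rule" "user_parse" {
+  name             = "User Parse"
+  scope            = "_sourceCategory=prod/app"
+  parse_expression = "parse \"user=*\" as user"
+  enabled          = true
+}
+```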
diff --git a/docs/manage/field-extractions/index.md b/docs/manage/field-extractions/index.md
index 8f47434486..7791f94cd8 100644
--- a/docs/manage/field-extractions/index.md
+++ b/docs/manage/field-extractions/index.md
@@ -1,38 +1,132 @@
---
slug: /manage/field-extractions
-title: Field Extractions
-description: Use Field Extraction Rules (FERs) to parse fields from log messages at ingestion time, improving search performance for alerts, dashboards, and ad hoc queries.
+title: Field Extraction Rules Overview
+sidebar_label: Field Extractions
+description: Field Extraction Rules in Sumo Logic parse fields from log messages at ingest time, eliminating the need to parse fields in every query and improving search performance.
+keywords:
+ - field-extraction-rules
+ - FER
+ - parse log fields
+ - log field parsing
+ - ingest time parsing
+ - extract fields from logs
+ - automatic log parsing
+ - log search performance
+head:
+ - tagName: script
+ attributes:
+ type: application/ld+json
+ innerHTML: |
+ {
+ "@context": "https://schema.org",
+ "@type": "FAQPage",
+ "mainEntity": [
+ {
+ "@type": "Question",
+ "name": "What is a Field Extraction Rule in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "A Field Extraction Rule (FER) parses fields from log messages at the time they are ingested into Sumo Logic. Once a rule is in place, the pre-parsed fields are available for searches, alerts, scheduled searches, and dashboards without needing to parse fields in every query."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to parse fields from logs automatically in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Create a Field Extraction Rule under Data Management > Logs > Field Extraction Rules. Define a scope to target the relevant log sources and a parse expression to extract the fields. The rule applies to all data ingested after it is created."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What is the difference between ingest time and run time field extraction in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Ingest Time rules parse fields when log data arrives, making those fields available immediately in searches, alerts, and dashboards without any query-level parsing. Run Time rules parse fields during a search query. Ingest Time rules improve search performance but only apply to data ingested after the rule is created."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How many field extraction rules can a Sumo Logic account have?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Accounts can have up to 50 Ingest Time field extraction rules and up to 200 fields total. Enterprise and Enterprise Suite accounts support up to 400 fields. Fields created from log metadata and Ingest Time rules share the same quota."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "Do field extraction rules apply to historical log data?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "No. Ingest Time field extraction rules only apply to data ingested after the rule is created. To parse historical data, use parse operators in a query or create Scheduled Views to extract fields from data ingested before the rule existed."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What happens when a field extraction rule is deleted in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Deleting a Field Extraction Rule does not delete the fields it was parsing. Any unwanted fields must be deleted separately from the Fields page."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What permissions are needed to create a field extraction rule in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "The Manage field extraction rules role capability is required to create, edit, or delete field extraction rules."
+ }
+ }
+ ]
+ }
---
+
import useBaseUrl from '@docusaurus/useBaseUrl';
import Iframe from 'react-iframe';
Field extractions allow you to parse [fields](/docs/manage/fields) from your log messages at the time the messages are ingested, which eliminates the need to parse fields at the query level. With Field Extraction Rules (FERs) in place, users can use the pre-parsed fields for ad hoc searches, scheduled searches, real-time alerts, and dashboards. In addition, field extraction rules help standardize field names and searches, simplify the search syntax and scope definition, and improve search performance.
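+For example (the source category and parse pattern are illustrative), a query that parses a status code at search time:
+
+```
+_sourceCategory=prod/apache
+| parse "HTTP/1.1\" * " as status_code
+| count by status_code
+```
+
+can, with an FER extracting `status_code` at ingest time, be reduced to:
+
+```
+_sourceCategory=prod/apache
+| count by status_code
+```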
-Fields are extracted from the time you create your FER moving forward. Therefore, set your FERs early on to take advantage of this automatic parsing mechanism.
+:::note
+The **Manage field extraction rules** [role capability](/docs/manage/users-roles/roles/role-capabilities/) is required to create, edit, or delete a field extraction rule.
+:::
+
+:::info
+Fields are extracted from the time you create your FER moving forward. Therefore, set your FERs early on to take advantage of this automatic parsing mechanism. For best practices on naming your fields, see [Field Naming Convention](field-naming-convention.md).
+:::
+
+:::training Micro Lesson
+
+
+
+:::
-For best practices on naming your fields, see [Field Naming Convention](field-naming-convention.md).
+## How do I access the Field Extraction Rules page?
[**New UI**](/docs/get-started/sumo-logic-ui/). To access the Field Extraction Rules page, in the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). To access the Field Extraction Rules page, in the main Sumo Logic menu select **Manage Data > Logs > Field Extraction Rules**.
+
+
To refine the table results, use the **Add a filter** section located above the table. *AND* logic is applied when filtering between different sections, while *OR* logic is applied when filtering within the same section.
:::note
You can see the suggestions only if there are two or more responses for the same column or section.
:::
-:::important
-You need the **Manage field extraction rules** [role capability](../users-roles/roles/role-capabilities.md) to create a field extraction rule.
-:::
-
-
-
The Field Extraction Rules page displays the following information:
-When hovering over a row in the table there are icons that appear on the far right for editing, disabling and deleting the rule.
-
* **Status** shows a checkmark in a green circle to indicate if the rule is actively being applied, or an exclamation mark in a red circle to indicate if the rule is disabled.
* **Rule Name**
* **Applied At** indicates when the field extraction process occurs, either at Ingest or Run time.
@@ -41,41 +135,17 @@ When hovering over a row in the table there are icons that appear on the far ri
* **Last Modified** date and time by user
* **Fields Capacity** (bottom of table) shows how many fields your account is using, out of the total available for use.
+:::info
You can view the fields created in your account and what features are referencing them on the [Fields](/docs/manage/fields) page.
+:::
-On the Field Extraction Rules page you can:
-
-* Click **+ Add** to [create a Field Extraction Rule](create-field-extraction-rule.md).
-* Search Field Extraction Rules by name and scope.
-* [**Edit** a Field Extraction Rule](edit-field-extraction-rules.md).
-* **Disable** a Field Extraction Rule.
-* **Delete** a Field Extraction Rule.
-
-## Limitations
+## What are the limits for field extraction rules?
import FerLimit from '../../reuse/fer-limitations.md';
-## Micro lesson: Field extraction rules basics
-
-:::training Micro Lesson
-
-
-
-:::
-
-## Edit a Field Extraction Rule
+## How do I edit a field extraction rule?
Changes to Field Extraction Rules are implemented immediately.
@@ -83,7 +153,7 @@ Changes to Field Extraction Rules are implemented immediately.
1. Find the rule in the table and click it. A window appears on the right of the table, click the **Edit** button.
1. Make changes as needed and click **Save** when done.
-## Delete a Field Extraction Rule
+## How do I delete a field extraction rule?
Deleting a Field Extraction Rule doesn't delete the fields it was parsing. You can delete any unwanted fields on the [Fields](/docs/manage/fields) page.
@@ -132,3 +202,34 @@ In this section, we'll introduce the following concepts:
+
+
+## FAQs
+
+### What is a Field Extraction Rule in Sumo Logic?
+
+A Field Extraction Rule (FER) parses fields from log messages at ingestion time. Once in place, the pre-parsed fields are available for searches, alerts, scheduled searches, and dashboards without needing to parse fields in every query.
+
+### How to parse fields from logs automatically in Sumo Logic?
+
+Create a Field Extraction Rule under **Data Management > Logs > Field Extraction Rules**. Define a scope to target the relevant log sources and a parse expression to extract the fields. The rule applies to all data ingested after it is created.
+
+### What is the difference between ingest time and run time field extraction?
+
+Ingest Time rules parse fields when log data arrives, making those fields immediately available in searches and alerts without query-level parsing. Run Time rules parse fields during a search. Ingest Time rules improve performance but only apply to data ingested after the rule is created.
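The distinction can be sketched in plain Python. This is a conceptual illustration only, not Sumo Logic internals; the `user=<name>` field and the function names are hypothetical:

```python
import re

# Hypothetical field to extract from a log line, e.g. "login user=alice ok".
PATTERN = re.compile(r"user=(\w+)")

def ingest(message, fer_enabled):
    """Ingest-time sketch: the field is extracted once, as the log arrives."""
    fields = {}
    m = PATTERN.search(message)
    if fer_enabled and m:
        fields["user"] = m.group(1)
    return {"raw": message, "fields": fields}

def search_user(stored):
    """Run-time sketch: fall back to parsing the raw message in the query."""
    if "user" in stored["fields"]:
        return stored["fields"]["user"]   # pre-parsed at ingest: no per-query work
    m = PATTERN.search(stored["raw"])     # parsed again on every search
    return m.group(1) if m else None

print(search_user(ingest("login user=alice ok", fer_enabled=True)))   # alice
print(search_user(ingest("login user=alice ok", fer_enabled=False)))  # alice
```

Both searches return the same value; the difference is where the parsing cost is paid, which is why ingest-time rules improve search performance.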
+
+### How many field extraction rules can an account have?
+
+Accounts support up to 50 Ingest Time rules and 200 fields total. Enterprise and Enterprise Suite accounts support up to 400 fields. Fields from log metadata and Ingest Time rules share the same quota.
+
+### Do field extraction rules apply to historical log data?
+
+No. Ingest Time FERs only apply to data ingested after the rule is created. To parse historical data, use parse operators in a query or create Scheduled Views to extract fields from data ingested before the rule existed.
+
+### What happens when a field extraction rule is deleted?
+
+Deleting a rule does not delete the fields it was parsing. Delete any unwanted fields separately from the [Fields](/docs/manage/fields) page.
+
+### What permissions are needed to create a field extraction rule?
+
+The **Manage field extraction rules** role capability is required to create, edit, or delete field extraction rules.
diff --git a/docs/manage/ingestion-volume/ingest-budgets/daily-volume.md b/docs/manage/ingestion-volume/ingest-budgets/daily-volume.md
index 2dec714702..11e4ee42f3 100644
--- a/docs/manage/ingestion-volume/ingest-budgets/daily-volume.md
+++ b/docs/manage/ingestion-volume/ingest-budgets/daily-volume.md
@@ -1,13 +1,89 @@
---
id: daily-volume
-title: Daily Volume
-description: Control the capacity of daily log ingestion volume sent to Sumo Logic from collectors.
+title: How to Set a Daily Log Ingestion Limit in Sumo Logic
+sidebar_label: Daily Volume
+description: Learn how to control the capacity of daily log ingestion volume sent to Sumo Logic from collectors.
+keywords:
+ - ingest-budget
+ - log-ingestion-limit
+ - reduce-log-ingestion-costs
+ - daily-ingestion-limit
+ - log-cost-control
+ - per-team-ingestion-limit
+ - ingestion-budget-scope
+head:
+ - tagName: script
+ attributes:
+ type: application/ld+json
+ innerHTML: |
+ {
+ "@context": "https://schema.org",
+ "@type": "FAQPage",
+ "mainEntity": [
+ {
+ "@type": "Question",
+ "name": "How to set a daily log ingestion limit in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Create an ingest budget under Data Management > Data Collection > Ingest Budget. Set a scope using a metadata field such as _sourceCategory or a custom field like team=payments, define the daily capacity limit, and choose whether to stop collecting or keep collecting when the limit is reached."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to set a per-team log ingestion limit in Sumo Logic?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Assign a custom field such as team=payments to collectors or sources, then create an ingest budget with the scope team=payments and a daily capacity limit. Each team can have its own budget, and log data matching the scope counts against that team's limit."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "What happens when a Sumo Logic ingest budget capacity is reached?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "When the capacity is reached, Sumo Logic either stops collecting data or keeps collecting depending on the action configured. Stop Collecting halts ingestion immediately. Keep Collecting continues ingestion and logs the overage in the Audit Index."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How does Sumo Logic ingest budget scope work?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "The scope defines which log data the budget applies to. It uses a metadata field key-value pair such as _sourceCategory=prod/payments or a custom field like team=platform. A single wildcard is supported, for example _sourceCategory=prod*. Log data matching the scope counts against the budget's daily capacity."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "When does a Sumo Logic ingest budget reset?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Ingest budgets reset automatically every 24 hours at the time and time zone configured when the budget was created. A budget can also be reset manually at any time from the Ingest Budgets page, without affecting the next scheduled reset."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "How to get alerted when an ingest budget is close to its limit?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Set an Audit Threshold percentage when creating the budget. When usage reaches that threshold, an event is logged in the Audit Index. Schedule a search against the Audit Index to trigger an alert when the threshold is approaching or exceeded."
+ }
+ },
+ {
+ "@type": "Question",
+ "name": "Can a log message count against multiple ingest budgets?",
+ "acceptedAnswer": {
+ "@type": "Answer",
+ "text": "Yes. If two budgets have overlapping scopes, a log message matching both scopes consumes capacity from both budgets. This can be used to create sub-budgets — for example, a combined 1 TB limit across all services with separate per-service limits underneath."
+ }
+ }
+ ]
+ }
---
import useBaseUrl from '@docusaurus/useBaseUrl';
:::note
-If you want to use APIs to manage ingest budgeting, you must use [Ingest Budget Management V2 APIs](/docs/api/ingest-budget-v2/). Ingest Budget Management V1 APIs have been removed and are no longer supported.
+To manage ingest budgets via API, use the [Ingest Budget Management V2 API](/docs/api/ingest-budget-v2/). Ingest Budget Management V1 APIs have been removed and are no longer supported.
:::
import TerraformLink from '../../../reuse/terraform-link.md';
@@ -24,16 +100,16 @@ Ingest budgets automatically reset their capacity utilization tracking every 24
An ingest budget's capacity usage is logged in the Audit Index when the audit threshold is reached and continues to be logged until the budget is reset. To track and schedule alerts on ingest budget capacity-usage and resets see [audit ingest budgets](#audit-ingest-budgets).
-## Availability
+## Which account types can use ingest budgets?
| Account Type | Account Level |
|:--------------|:--------------------------------------|
| CloudFlex | Enterprise |
| Credits | Trial, Enterprise Operations, Enterprise Security, Enterprise Suite |
-## Rules
+## What are the requirements and limits for ingest budgets?
-* There is a limit of 100 ingest budgets.
+* A maximum of **100 ingest budgets** per account.
* Bytes are calculated in base 2 (binary format, 1024 based).
* Ingest Budgets do not affect [throttling](/docs/manage/ingestion-volume/log-ingestion/#log-throttling).
* [Traces](/docs/apm/traces) are not calculated and are not supported.
@@ -43,7 +119,7 @@ An ingest budget's capacity usage is logged in the Audit Index when the audit th
* Data is not automatically recovered or ingested later once the capacity tracking is reset.
* In the scope, do not wrap values in quotes, unless the value explicitly has quotes. For example, if you want to assign the scope with `_collector` and the name of the collector is `CloudTrail`, you would assign the scope as `_collector=CloudTrail` instead of `_collector="CloudTrail"`.
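Because bytes are counted in base 2, a capacity entered as "100 GB" means 100 × 1024³ bytes, not 100 × 10⁹. A quick sketch of the conversion:

```python
# Illustrative only: convert a base-2 ("binary", 1024-based) capacity to bytes,
# matching how ingest budgets count bytes.
UNITS = {"KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def capacity_bytes(value, unit):
    """Return the byte count for a base-2 capacity such as 100 GB."""
    return int(value * UNITS[unit])

print(capacity_bytes(100, "GB"))  # 107374182400
```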
-## Budget assignment
+## How does ingest budget scope work?
The scope feature allows you to assign ingest budgets to your log data using one of the following options:
@@ -54,20 +130,17 @@ The value supports a single wildcard, such as `_sourceCategory=prod*payment`. F
[V2 ingest budgets](/docs/api/ingest-budget-v2/) provide you the ability to assign budgets to your log data by either [fields](/docs/manage/fields) or the following [built in metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) fields, `_collector`, `_source`, `_sourceCategory`, `_sourceHost`, and `_sourceName`.
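The single supported wildcard behaves like a glob, with `*` matching any run of characters and everything else matched literally. A hypothetical sketch of that matching, not Sumo Logic's actual implementation:

```python
import re

def scope_matches(pattern, value):
    """Sketch of wildcard scope matching: '*' matches any run of
    characters; all other characters are matched literally."""
    regex = "^" + re.escape(pattern).replace(r"\*", ".*") + "$"
    return re.match(regex, value) is not None

print(scope_matches("prod*payment", "prod/emea/payment"))  # True
print(scope_matches("prod*payment", "staging/payment"))    # False
```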
-## Source type behavior
+## How do certain source types behave when ingestion is stopped?
-A few sources on Hosted Collectors behave differently when instructed to stop collecting data.
+Some hosted collector sources behave differently when a **Stop Collecting** action is triggered:
-* HTTP sources will drop data requests, yet still return a `200 OK` response.
-* AWS S3 based sources will skip objects.
-* Cloud Syslog sources will keep the connection open yet drop incoming syslog messages.
+| Source type | Behavior when stopped |
+|:--|:--|
+| HTTP sources | Drops data requests but still returns a `200 OK` response. |
+| AWS S3-based sources | Skips objects. |
+| Cloud Syslog sources | Keeps the connection open but drops incoming syslog messages. |
-## Tools
-
-* [Ingest Budget Management API V2](/docs/api/ingest-budget-v2.md)
-* Terraform provider: [sumologic_ingest_budget_v2](https://registry.terraform.io/providers/SumoLogic/sumologic/latest/docs/resources/ingest_budget_v2)
-
-## Manage ingest budgets
+## How to manage ingest budgets?
Use the **Ingest Budgets** page to manage your ingest budgets.
@@ -101,7 +174,7 @@ budget:
When hovering over a row in the Ingest Budgets table there are icons that appear on the far right for editing and deleting the ingest budget.
-#### Create ingest budget
+## How to create an ingest budget?
1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Ingest Budget**. You can also click the **Go To...** menu at the top of the screen and select **Ingest Budget**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Ingest Budgets**.
1. Click the **+ Add Budget** button on the top right of the table. A panel named **Create Ingest Budget** appears to the right of the Ingest Budgets table.
@@ -121,7 +194,7 @@ When hovering over a row in the Ingest Budgets table there are icons that appear
* **Audit Threshold**. The threshold, as a percentage, of when an ingest budget's capacity usage is logged in the Audit Index.
1. When you're finished configuring the ingest budget click **Add**.
-#### Reset ingest budget
+## How to reset an ingest budget manually?
You can manually reset a budget at any time to set its capacity utilization tracking to zero. This won't affect the next scheduled reset time and can be done as many times as needed.
@@ -129,27 +202,25 @@ You can manually reset a budget at any time to set its capacity utilization tra
1. In the table find the ingest budget you want to reset and click the row to open its details pane.
1. Click the **Reset** button.
-#### Edit ingest budget
+## How to edit an ingest budget?
1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Ingest Budget**. You can also click the **Go To...** menu at the top of the screen and select **Ingest Budget**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Ingest Budgets**.
1. In the table find the ingest budget you want to edit and click the edit icon
on the right of the row or click the row and then click the edit icon in the details panel.
1. Make your changes and click **Update**.
-#### Delete ingest budget
+## How to delete an ingest budget?
1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Ingest Budget**. You can also click the **Go To...** menu at the top of the screen and select **Ingest Budget**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Ingest Budgets**.
1. In the table find the ingest budget you want to delete and click the delete icon
on the right of the row or click the row and then click the delete icon in the details panel.
1. You will get a confirmation prompt; verify that you are deleting the desired ingest budget, then click **Delete**.
-### Budget assignment examples
-
-#### Control ingest by team or service
+## How to control ingest by team or service?
You can assign collectors and sources with [fields](/docs/manage/fields) based on teams and services. For example, a field could be `team=` or `service=`. With these fields assigned, you can create a budget with the scope `team=` to achieve team-based budgets. You can leverage source fields for finer control over the scope of the budget. You can map a model of your deployment or organization to metadata fields and then create ingest budgets with a scope referencing them.
-#### Match against multiple budgets
+## Can a log message match against multiple budgets?
-Log messages can match against multiple budgets if two or more budgets have overlapping scopes. For example, see the following two budgets:
+Yes. Log messages can match against multiple budgets if two or more budgets have overlapping scopes. For example, see the following two budgets:
* Budget #1
@@ -194,38 +265,34 @@ To ensure the combined daily ingestion for the infrastructure components ALB, Ka
* Capacity= 100 GB
* Action = “stop collecting”
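The accounting behind overlapping scopes can be sketched as: every budget whose scope matches a message is charged, not just the first. A simplified illustration (field-equality scopes only, hypothetical budget names, not Sumo Logic's implementation):

```python
# An empty scope matches everything; a non-empty scope requires every
# listed field to match. Each matching budget is charged the message size.
budgets = {
    "all-infra":  {"scope": {}, "used": 0},                    # matches all messages
    "kafka-only": {"scope": {"service": "kafka"}, "used": 0},
}

def charge(budgets, message_fields, size_bytes):
    for budget in budgets.values():
        if all(message_fields.get(k) == v for k, v in budget["scope"].items()):
            budget["used"] += size_bytes

charge(budgets, {"service": "kafka"}, 500)
charge(budgets, {"service": "alb"}, 300)
print(budgets["all-infra"]["used"])   # 800
print(budgets["kafka-only"]["used"])  # 500
```

A Kafka message here consumes capacity from both the combined budget and the per-service budget, which is exactly the sub-budget pattern described above.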
-### Audit ingest budgets
+## How do I get alerted when an ingest budget is approaching its limit?
The [Audit Index](/docs/manage/security/audit-indexes/audit-index) logs events when an ingest budget has reached its configured Audit Threshold percent. There are two different log formats.
1. Approaching or exceeding capacity
-1. Resets
+ * `budget_name` is the name of the ingest budget.
+ * `budget_scope` is the ingest budget's scope.
+   * `Usage status` is `Approaching` (≥ 85% of the set capacity limit) or `Exceeded` (≥ 100%).
+
+ ```
+ Budget budget_name with scope budget_scope consumed 6330.00% of capacity since last reset at 2020-09-17T13:38:53.663 -0700.
+ Capacity: 200 bytes
+ Usage: 12660 bytes
+ Usage status: Exceeded
+ Action: drop_data
+ Next reset: 2020-09-18T13:35:00.000 -0700
+ ```
-**Approaching or exceeding capacity example**, where:
-
-* `budget_name` is the name of the ingest budget.
-* `budget_scope` is the ingest budget's scope.
-* `Usage status` is either `Approaching` (≥ 85%) or `Exceeded` (≥ 100%) its set capacity limit.
-
-```
-Budget budget_name with scope budget_scope consumed 6330.00% of capacity since last reset at 2020-09-17T13:38:53.663 -0700.
-Capacity: 200 bytes
-Usage: 12660 bytes
-Usage status: Exceeded
-Action: drop_data
-Next reset: 2020-09-18T13:35:00.000 -0700
-```
-
-**Reset example**, where `budget_name` is the name of the ingest budget and `budget_scope` is the ingest budget's scope:
+1. Resets
-```
-Budget budget_name with scope budget_scope consumed 0.00% of capacity and is reset at 2020-09-18T00:03:34.574 -0700.
-Capacity: 1000 bytes
-Usage: 0 bytes
-Next reset: 2020-09-19T00:00:00.000 -0700
-```
+   In a reset event, `budget_name` is the name of the ingest budget and `budget_scope` is its scope:
-#### Audit Index queries
+ ```
+ Budget budget_name with scope budget_scope consumed 0.00% of capacity and is reset at 2020-09-18T00:03:34.574 -0700.
+ Capacity: 1000 bytes
+ Usage: 0 bytes
+ Next reset: 2020-09-19T00:00:00.000 -0700
+ ```
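For alerting logic beyond keyword matching, the `Key: value` lines in either audit format can be broken out into fields. A hypothetical sketch in plain Python (not a Sumo Logic feature):

```python
import re

# Sample audit entry in the "approaching or exceeding capacity" format above.
LINE = """Budget budget_name with scope budget_scope consumed 6330.00% of capacity since last reset at 2020-09-17T13:38:53.663 -0700.
Capacity: 200 bytes
Usage: 12660 bytes
Usage status: Exceeded
Action: drop_data
Next reset: 2020-09-18T13:35:00.000 -0700"""

# Pull out each "Key: value" line; keys are letters and spaces only.
fields = dict(re.findall(r"^([A-Za-z ]+): (.+)$", LINE, re.MULTILINE))
print(fields["Usage status"])  # Exceeded
print(fields["Usage"])         # 12660 bytes
```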
You can schedule the following searches to get alerts when needed, see [Create a Scheduled Search](/docs/alerts/scheduled-searches/schedule-search/) for details.
@@ -253,9 +320,9 @@ Search for only capacity usage logs:
_index=sumologic_audit _sourceName=VOLUME_QUOTA _sourceCategory=account_management "Budget" "last reset"
```
-### Health events
+## How do I monitor the health of ingest budgets?
-Health events allow you to keep track of the health of your collectors, sources, and Ingest Budgets. You can use them to find and investigate common errors and warnings that are known to cause collection issues. See [Health Events](/docs/manage/health-events.md) for details.
+Health events allow you to keep track of the health of your collectors, sources, and ingest budgets. You can use them to find and investigate common errors and warnings that are known to cause collection issues. See [Health Events](/docs/manage/health-events.md) for details.
Ingest budgets that have exceeded their capacity are placed in an error health state. The following are two common queries used to investigate the health of ingest budgets.
@@ -275,3 +342,38 @@ _index=sumologic_system_events "IngestBudget"
| where eventType = "Health-Change" AND resourceType = "IngestBudget" and severity="Warning"
```
+## Where can ingest budgets be managed programmatically?
+
+- **API**. [Ingest Budget Management V2 API](/docs/api/ingest-budget-v2/)
+- **Terraform**. [`sumologic_ingest_budget_v2`](https://registry.terraform.io/providers/SumoLogic/sumologic/latest/docs/resources/ingest_budget_v2)
+
+
+## FAQs
+
+### How to set a daily log ingestion limit in Sumo Logic?
+
+Create an ingest budget under **Data Management > Data Collection > Ingest Budget**. Set a scope using a metadata field such as `_sourceCategory` or a custom field like `team=payments`, define the daily capacity limit, and choose whether to stop collecting or keep collecting when the limit is reached.
+
+### How to set a per-team log ingestion limit?
+
+Assign a custom field such as `team=payments` to the relevant collectors or sources, then create an ingest budget with the scope `team=payments` and a daily capacity. Each team can have its own budget with an independent daily limit.
+
+### What happens when an ingest budget capacity is reached?
+
+If the action is set to **Stop Collecting**, ingestion halts immediately for all sources matching the scope. If set to **Keep Collecting**, ingestion continues and the overage is logged in the Audit Index for monitoring and alerting.
+
+### How does ingest budget scope work?
+
+The scope is a metadata key-value pair that defines which log data counts against the budget. Use built-in fields like `_sourceCategory=prod/payments` or custom fields like `team=platform`. A single wildcard is supported, for example `_sourceCategory=prod*`.
+
+### When does an ingest budget reset?
+
+Ingest budgets reset automatically every 24 hours at the configured time and time zone. A budget can also be reset manually at any time from the Ingest Budgets page without affecting the next scheduled reset.
+
+### How to get alerted when an ingest budget is close to its limit?
+
+Set an **Audit Threshold** when creating the budget. When usage reaches that percentage, an event is logged in the Audit Index. Schedule a search against the Audit Index to send an alert notification when the threshold is approaching or exceeded.
+
+### Can a log message count against more than one ingest budget?
+
+Yes. If two budgets have overlapping scopes, a matching log message consumes capacity from both. This is useful for creating sub-budgets, for example a total combined limit with separate per-service or per-team limits underneath.
\ No newline at end of file