From 8a639a5772d3821bc139d8e4490c8f39bcaddb1b Mon Sep 17 00:00:00 2001 From: Alekh Nema Date: Fri, 24 Apr 2026 12:39:00 +0530 Subject: [PATCH 01/12] Doc for Hashing functionality in local file source template. --- .../block-blob/steps-multi-storage-account.md | 80 +++++++++++++++ .../processing-rules/hash-rules.md | 99 +++++++++++++++++++ .../processing-rules/index.md | 6 ++ .../processing-rules/overview.md | 3 +- 4 files changed, 187 insertions(+), 1 deletion(-) create mode 100644 docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md create mode 100644 docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md diff --git a/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md b/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md new file mode 100644 index 0000000000..fbaa700209 --- /dev/null +++ b/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md @@ -0,0 +1,80 @@ +## Ingesting from Multiple Storage Accounts (Optional) + +If you want to ingest data into Sumo Logic from multiple storage accounts, perform following tasks for each storage account separately. + +:::note +The following steps assume you have noted down the resource group name, storage account name, and container name where the blobs will be ingested from. +::: + +* Authorize App Service read from storage account +* Create an Event Grid Subscription - Subscribes all blob creation events to the Event Hub created by ARM template + +### Step 1: Authorize App Service to read from storage account + +This section provides instructions on authorizing the App Service to list the Storage Account key. This enables the Azure function to read from the storage account. + +To authorize the App Service to list the Storage Account key, do the following: + +1. 
Go to **Storage Account** and click **Access Control(IAM)**. + + +1. Click the **Add** **+** at the top of the page. + + +1. Select **Add role assignment** from dropdown. +1. In the **Add role assignment** window, go to **Role** tab and choose **Storage Blob Data Reader**. Click **Next**. +1. In **Members** tab, select **Managed Identity**. +1. In the **Select Managed identities** window, + + * **Subscription**: Choose Pay as you Go. + * **Managed Identity**: Choose Function App. + * **Select**: **Select SUMOBRDLQProcessor\** and **SUMOBRTaskConsumer\** app services which are created by the ARM template. Click **Select**. +1. Click **Review + assign** +1. Click **Save**. + +### Step 2: Create an Event Grid Subscription + +This section provides instructions for creating an event grid subscription, that subscribes all blob creation events to the Event Hub created by ARM template + +To create an event grid subscription, do the following: + +1. Go to the storage account which needs to be monitored additionally. Go under Events blade in left pane. + +1. At the top of the **Event subscriptions** tab, click **+Event Subscription** to create new event subscription. + + +1. Specify the following values for **Event Subscription Details**: + + * **Name:** Fill the event subscription name. + * **Event Schema:** Select **Event Grid Schema**. + +1. Specify the following values for **Topic Details**: + + * **System Topic Name**. Provide the topic name, if the system topic already exists then it will automatically select the existing topic. + +1. Specify the following details for Event Types: + + * Select **Blob Created** from the **Filter to Event Types** dropdown. + +1. Specify the following details for Endpoint Types: + + * **Endpoint Type**. Select **Event Hubs** from the dropdown. + * **Endpoint.** Click on **Configure an endpoint.** + + The Select Event Hub dialog appears. + + +1. 
Specify the following Select Event Hub parameters, then click **Confirm Selection.** + + * **Resource Group**. Select the resource group you created by ARM template. + * **Event Hub Namespace**. Select **SUMOBREventHubNamespace\<*unique string*\\>**. + * **Event Hub**. Select **blobreadereventhub** from the dropdown. + +1. Specify the following Filters tab options(Optional): + + * Check Enable subject filtering. + * To filter events by container name, enter the following in the **Subject Begins With** field, replacing `` with the name of the container from where you want to export logs. `/blobServices/default/containers//` + +1. Click **Create**. + +1. Verify the deployment was successful by checking **Notifications** in the top right corner of the Azure Portal. \ No newline at end of file diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md new file mode 100644 index 0000000000..e46e5d0970 --- /dev/null +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md @@ -0,0 +1,99 @@ +--- +id: hash-rules +title: OpenTelemetry Remote Management Hash Rules +sidebar_label: Hash Rules +description: Create an OpenTelemetry collector remote management hash rule to replace an expression with a hash code. +--- + +A hash rule is a processing rule that allows you to replace an expression with a hash code generated for that value. Hashed data is completely hidden (obfuscated) before being sent to Sumo Logic. This can be very useful in situations where some type of data must not leave your premises, such as credit cards and social security numbers. Each unique value will have a unique hash code. + +The hash algorithm used is **SHA-256**. + +Ingestion volume is calculated after applying the hash filter. If the hash reduces the size of the log, the smaller size will be measured against ingestion limits. 
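Because a hex-encoded SHA-256 digest is always 64 characters, the replacement can make a log line larger or smaller than the original, which is why the post-hash size is what gets metered. A quick illustration (not part of the collector; it just demonstrates the fixed digest length):

```python
import hashlib

# The hex digest length is fixed at 64 characters regardless of input size.
digest = hashlib.sha256(b"Welcome123").hexdigest()
print(len(digest))  # 64

# So hashing a 10-character password grows the line, while hashing a
# 500-character matched value shrinks it to the same 64 characters.
long_value_digest = hashlib.sha256(b"x" * 500).hexdigest()
print(len(long_value_digest))  # 64
```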
+ +:::note +Currently available for Local File ST only. +::: + +## How it works + +When you add a hash rule action to your processing rules, you need to provide two inputs: + +1. **Expression**: A regular expression that must contain exactly **one capture group** `( )`. The string value matched through this capture group is what will be hashed using SHA-256. If there are multiple parts of the string which needs to be hashed, add additional hashing processing rules for it. + +2. **Replacement Format**: The formatted replacement string that will replace the matching string in the log. Use `%s` to refer to the hashed value from the SHA-256 function. The `%s` reference is mandatory and can only be used once. + +## Examples + +### Hash a password + +For example, to hash the password `Welcome123` from this log: + +``` +user=sumo password=Welcome123 +``` + +You could use the following configuration: + +**Expression:** +``` +password=([A-Za-z0-9]+) +``` + +**Replacement Format:** +``` +password=%s +``` + +**Result:** +- **Matching string**: `password=Welcome123` +- **Capture group**: `Welcome123` (this value is hashed) +- **Output log**: `user=sumo password=` + +Where `` is the SHA-256 hash of `Welcome123`. + +### Hash member IDs + +To hash member IDs from this log: + +``` +2012-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [memberid=dan@demo.com] [remote_ip=98.248.40.103] [web_session=19zefhqy...] 
[session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages] +``` + +You could use the following configuration: + +**Expression:** +``` +memberid=([^\]]+) +``` + +**Replacement Format:** +``` +memberid=%s +``` + +**Resulting hashed log:** + +``` +2012-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [memberid=906e9cc124c8e1085b10e1cec4cc6526f3637558be361d3b4bb54bb537e49a49] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages] +``` + +:::important +Any hashing expression should be tested and verified with a sample source file before applying it to your production logs. +::: + +## Rules and limitations + +* The regular expression must contain exactly **one capture group** enclosed in `( )`. Values inside this capture group will be hashed. If there are multiple parts of the string which needs to be hashed, add additional hashing processing rules for it. + +* You can use an anchor to detect specific values in your logs. Only the value within the capture group will be hashed. + +* The hash algorithm is **SHA-256** (MD5 is not supported for OpenTelemetry collectors). + +* Make sure you do not specify a regular expression that matches a full log line. Doing so will result in the entire log line being hashed. + +* The replacement format must include `%s` exactly once to reference the hashed value. + +* Do not unnecessarily match on more of the log than needed. Use precise regular expressions to ensure that only the intended sensitive information is hashed, not surrounding context. + +* Each unique value will produce a unique hash code. The same input value will always produce the same hash output, allowing you to correlate occurrences while keeping the actual value hidden. 
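The transformation described above can be sketched in Python. This is a minimal illustration of the rule semantics, not the collector's actual implementation; the helper name `apply_hash_rule` is invented for this example:

```python
import hashlib
import re

def apply_hash_rule(line, expression, replacement_format):
    """Hash the value in the expression's single capture group with
    SHA-256 and splice the hex digest into the replacement format."""
    def substitute(match):
        digest = hashlib.sha256(match.group(1).encode("utf-8")).hexdigest()
        # %s stands in for the hashed capture-group value.
        return replacement_format.replace("%s", digest)
    return re.sub(expression, substitute, line)

masked = apply_hash_rule(
    "user=sumo password=Welcome123",
    r"password=([A-Za-z0-9]+)",
    "password=%s",
)
# masked is "user=sumo password=<64-character SHA-256 hex digest>"
```

Note that only the capture group's value is hashed; the `password=` anchor text survives via the replacement format.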
\ No newline at end of file diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md index 294762f31b..ae4f8adbf9 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md @@ -37,4 +37,10 @@ In this section, we'll introduce the following concepts:

Create an OTRM Windows source template mask rule to replace an expression with a mask string.
+OTRM Hash Rules
+Create an OTRM hash rule to replace an expression with a hash code. Currently available for Local File ST only.

diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md index 66171f4454..ea70cdee54 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md @@ -15,6 +15,7 @@ Processing rules for logs collection support the following rule types: * [Exclude messages that match](include-and-exclude-rules.md). Remove messages that you do not want to send to Sumo Logic at all ("denylist" filter). These messages are skipped by OpenTelemetry Collector and are not uploaded to Sumo Logic. * [Include messages that match](include-and-exclude-rules.md). Send only the data you'd like in your Sumo Logic account (an "allowlist" filter). This type of rule can be useful, for example, if you only want to include messages coming from a firewall. * [Mask messages that match](mask-rules.md). Replace an expression with a mask string that you can customize. This is another way to your protect data, such as passwords, that you do not normally track. +* [Hash messages that match](hash-rules.md). Replace an expression with a hash code generated for that value. This completely hides sensitive data such as credit cards and social security numbers before being sent to Sumo Logic. ## Metrics collection @@ -27,7 +28,7 @@ Processing rules for metrics collection support the following rule types: You can create one or more processing rules for a source template, combining the different types of filters to generate the exact data set you want sent to Sumo Logic. -When a Source has multiple rules they are processed in the following order: includes, excludes, masks.  +When a Source has multiple rules they are processed in the following order: includes, excludes followed by the order of occurence of hashing or masking rule. 
Exclude rules take priority over include rules. Include rules are processed first, however, if an exclude rule matches data that matched the include rule filter, the data is excluded. From 8c84955daf23ea59ca911c30164881cd2ec147be Mon Sep 17 00:00:00 2001 From: Alekh Nema Date: Fri, 24 Apr 2026 12:42:18 +0530 Subject: [PATCH 02/12] Correcting a typo --- .../remote-management/processing-rules/overview.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md index ea70cdee54..1cb2ad4fd5 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md @@ -28,7 +28,7 @@ Processing rules for metrics collection support the following rule types: You can create one or more processing rules for a source template, combining the different types of filters to generate the exact data set you want sent to Sumo Logic. -When a Source has multiple rules they are processed in the following order: includes, excludes followed by the order of occurence of hashing or masking rule. +When a Source has multiple rules they are processed in the following order: includes, excludes followed by the order of occurrence of hashing or masking rule. Exclude rules take priority over include rules. Include rules are processed first, however, if an exclude rule matches data that matched the include rule filter, the data is excluded. 
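The precedence described above can be sketched as a small pipeline. This is a hypothetical helper, not the collector's code: include rules gate first (allowlist), exclude rules then drop anything they match (denylist wins), and the remaining hash/mask transforms run in their order of occurrence:

```python
import re

def process(line, includes, excludes, transforms):
    # Include rules first: when include rules exist, a line must match
    # at least one of them to survive.
    if includes and not any(re.search(p, line) for p in includes):
        return None
    # Exclude rules next: they take priority over includes.
    if any(re.search(p, line) for p in excludes):
        return None
    # Hash/mask rules last, applied in the order they occur.
    for transform in transforms:
        line = transform(line)
    return line

redact = lambda line: re.sub(r"password=\S+", "password=***", line)
print(process("fw: password=abc", [r"fw:"], [r"debug"], [redact]))
# → "fw: password=***"
print(process("debug fw: noise", [r"fw:"], [r"debug"], [redact]))
# → None (the exclude rule wins even though the include rule matched)
```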
From 8f427110b003228773328f65f7119ea25ec25963 Mon Sep 17 00:00:00 2001 From: Amee Lepcha Date: Fri, 24 Apr 2026 13:44:23 +0530 Subject: [PATCH 03/12] Update index.md arranged in alphabetic order --- .../remote-management/processing-rules/index.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md index ae4f8adbf9..b7b6950364 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md @@ -19,6 +19,12 @@ To configure processing rules, navigate to the remote management section in the In this section, we'll introduce the following concepts:
+OTRM Hash Rules
+Create an OTRM hash rule to replace an expression with a hash code. Currently available for Local File ST only.
OTRM Include and Exclude Rules
@@ -37,10 +43,4 @@ In this section, we'll introduce the following concepts:

Create an OTRM Windows source template mask rule to replace an expression with a mask string.
-OTRM Hash Rules
-Create an OTRM hash rule to replace an expression with a hash code. Currently available for Local File ST only.
From a1c9abfdc814a3e829a19d6b6c21ed10a9ae8eab Mon Sep 17 00:00:00 2001 From: Amee Lepcha Date: Fri, 24 Apr 2026 13:47:15 +0530 Subject: [PATCH 04/12] Update overview.md minor changes --- .../processing-rules/overview.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md index 1cb2ad4fd5..94ff96a772 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md @@ -10,12 +10,12 @@ Processing rules affect only the data sent to Sumo Logic; logs and metrics on y ## Logs collection -Processing rules for logs collection support the following rule types: +Processing rules for log collection support the following rule types: -* [Exclude messages that match](include-and-exclude-rules.md). Remove messages that you do not want to send to Sumo Logic at all ("denylist" filter). These messages are skipped by OpenTelemetry Collector and are not uploaded to Sumo Logic. +* [Exclude messages that match](include-and-exclude-rules.md). Remove messages that you do not want to send to Sumo Logic at all ("denylist" filter). These messages are skipped by the OpenTelemetry Collector and are not uploaded to Sumo Logic. * [Include messages that match](include-and-exclude-rules.md). Send only the data you'd like in your Sumo Logic account (an "allowlist" filter). This type of rule can be useful, for example, if you only want to include messages coming from a firewall. -* [Mask messages that match](mask-rules.md). Replace an expression with a mask string that you can customize. This is another way to your protect data, such as passwords, that you do not normally track. -* [Hash messages that match](hash-rules.md). 
Replace an expression with a hash code generated for that value. This completely hides sensitive data such as credit cards and social security numbers before being sent to Sumo Logic. +* [Mask messages that match](mask-rules.md). Replace an expression with a customizable mask string. This is another way to protect data you do not normally track, such as passwords. +* [Hash messages that match](hash-rules.md). Replace an expression with a hash code generated for that value. This completely obscures sensitive data, such as credit card numbers and Social Security numbers, before they are sent to Sumo Logic. ## Metrics collection @@ -26,13 +26,13 @@ Processing rules for metrics collection support the following rule types: ## How do processing rules work together? -You can create one or more processing rules for a source template, combining the different types of filters to generate the exact data set you want sent to Sumo Logic. +You can create one or more processing rules for a source template, combining different filter types to generate the exact dataset you want sent to Sumo Logic. -When a Source has multiple rules they are processed in the following order: includes, excludes followed by the order of occurrence of hashing or masking rule. +When a Source has multiple rules, they are processed in the following order: includes, excludes, followed by the order of occurrence of hashing or masking rules. -Exclude rules take priority over include rules. Include rules are processed first, however, if an exclude rule matches data that matched the include rule filter, the data is excluded. +Exclude rules take priority over include rules. Include rules are processed first. However, if an exclude rule matches data that matched the include rule filter, the data is excluded. ## Limitations * Regular expressions must be [RE2 compliant](https://github.com/google/re2/wiki/Syntax). -* Processing rules are tested with maximum of 20 rules. 
+* Processing rules are tested with a maximum of 20 rules. From e04658d049af7c27f3a0588c2813619f59a8f237 Mon Sep 17 00:00:00 2001 From: Amee Lepcha Date: Fri, 24 Apr 2026 14:16:36 +0530 Subject: [PATCH 05/12] Update steps-multi-storage-account.md --- .../block-blob/steps-multi-storage-account.md | 93 +++++++------------ 1 file changed, 34 insertions(+), 59 deletions(-) diff --git a/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md b/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md index fbaa700209..cf947b89fb 100644 --- a/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md +++ b/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md @@ -1,13 +1,13 @@ ## Ingesting from Multiple Storage Accounts (Optional) -If you want to ingest data into Sumo Logic from multiple storage accounts, perform following tasks for each storage account separately. +If you want to ingest data into Sumo Logic from multiple storage accounts, perform the following tasks for each storage account separately. :::note -The following steps assume you have noted down the resource group name, storage account name, and container name where the blobs will be ingested from. +The following steps assume you have noted down the resource group name, storage account name, and container name from which the blobs will be ingested. ::: -* Authorize App Service read from storage account -* Create an Event Grid Subscription - Subscribes all blob creation events to the Event Hub created by ARM template +* Authorize App Service to read from the storage account. +* Create an Event Grid Subscription - Subscribes all blob creation events to the Event Hub created by the ARM template. 
### Step 1: Authorize App Service to read from storage account @@ -15,66 +15,41 @@ This section provides instructions on authorizing the App Service to list the St To authorize the App Service to list the Storage Account key, do the following: -1. Go to **Storage Account** and click **Access Control(IAM)**. - - +1. Navigate to the **Storage Account** and click **Access Control(IAM)**. 1. Click the **Add** **+** at the top of the page. - - -1. Select **Add role assignment** from dropdown. -1. In the **Add role assignment** window, go to **Role** tab and choose **Storage Blob Data Reader**. Click **Next**. -1. In **Members** tab, select **Managed Identity**. -1. In the **Select Managed identities** window, - - * **Subscription**: Choose Pay as you Go. - * **Managed Identity**: Choose Function App. - * **Select**: **Select SUMOBRDLQProcessor\** and **SUMOBRTaskConsumer\** app services which are created by the ARM template. Click **Select**. -1. Click **Review + assign** -1. Click **Save**. +1. Select **Add role assignment** from the dropdown. +1. In the **Add role assignment** window, select **Role > Storage Blob Data Reader > Next**. +1. In the **Members** tab, select **Managed Identity**. +1. In the **Select Managed identities** window: + * **Subscription**. Select **Pay as you Go**. + * **Managed Identity**. Choose **Function App**. + * **Select**. Select **SUMOBRDLQProcessor\** and **SUMOBRTaskConsumer\** app services created by the ARM template, then click **Select**. +1. Click **Review + assign**. +1. Click **Save** to complete the role assignment. ### Step 2: Create an Event Grid Subscription -This section provides instructions for creating an event grid subscription, that subscribes all blob creation events to the Event Hub created by ARM template +This section provides instructions for creating an event grid subscription that subscribes all blob creation events to the Event Hub created by the ARM template. 
To create an event grid subscription, do the following: -1. Go to the storage account which needs to be monitored additionally. Go under Events blade in left pane. - -1. At the top of the **Event subscriptions** tab, click **+Event Subscription** to create new event subscription. - - -1. Specify the following values for **Event Subscription Details**: - - * **Name:** Fill the event subscription name. - * **Event Schema:** Select **Event Grid Schema**. - -1. Specify the following values for **Topic Details**: - - * **System Topic Name**. Provide the topic name, if the system topic already exists then it will automatically select the existing topic. - -1. Specify the following details for Event Types: - - * Select **Blob Created** from the **Filter to Event Types** dropdown. - -1. Specify the following details for Endpoint Types: - - * **Endpoint Type**. Select **Event Hubs** from the dropdown. - * **Endpoint.** Click on **Configure an endpoint.** - - The Select Event Hub dialog appears. - - -1. Specify the following Select Event Hub parameters, then click **Confirm Selection.** - - * **Resource Group**. Select the resource group you created by ARM template. +1. Navigate to the storage account you want to monitor and open the **Events** blade from the left pane. +2. In the **Event subscriptions** tab, click **+Event Subscription** to create a new subscription. +3. Under **Event Subscription Details**, provide: + * **Name**. Enter a name for the subscription. + * **Event Schema**. Select **Event Grid Schema**. +4. Under **Topic Details**, enter a **System Topic Name**. If a topic already exists, it will be selected automatically. +5. Under **Event Types**, choose **Blob Created** from the **Filter to Event Types** dropdown. +6. Under **Endpoint Details**, + * Select **Event Hubs** as the **Endpoint Type** from the dropdown. + * Click **Configure an endpoint**, then proceed in the dialog. +7. In the **Select Event Hub** dialog, configure: + * **Resource Group**. 
Select the resource group you created via the ARM template. * **Event Hub Namespace**. Select **SUMOBREventHubNamespace\<*unique string*\\>**. - * **Event Hub**. Select **blobreadereventhub** from the dropdown. - -1. Specify the following Filters tab options(Optional): - - * Check Enable subject filtering. - * To filter events by container name, enter the following in the **Subject Begins With** field, replacing `` with the name of the container from where you want to export logs. `/blobServices/default/containers//` - -1. Click **Create**. - -1. Verify the deployment was successful by checking **Notifications** in the top right corner of the Azure Portal. \ No newline at end of file + * **Event Hub**. Choose **blobreadereventhub** from the dropdown. + * Click **Confirm Selection**. +8. (Optional) Under the **Filters** tab: + * Enable **Subject filtering**. + * To filter events by container name, set the **Subject Begins With** field to `/blobServices/default/containers//`, replacing `` with the name of the container you want to export logs from. +9. Click **Create** to finalize the subscription. +10. Verify the deployment by checking **Notifications** in the top-right corner of the Azure portal. 
From c68f655f27a11bfc42e5b6ee7174a39e17b00bc2 Mon Sep 17 00:00:00 2001 From: Amee Lepcha Date: Fri, 24 Apr 2026 14:18:08 +0530 Subject: [PATCH 06/12] Update hash-rules.md --- .../processing-rules/hash-rules.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md index e46e5d0970..e488a85b94 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md @@ -5,11 +5,11 @@ sidebar_label: Hash Rules description: Create an OpenTelemetry collector remote management hash rule to replace an expression with a hash code. --- -A hash rule is a processing rule that allows you to replace an expression with a hash code generated for that value. Hashed data is completely hidden (obfuscated) before being sent to Sumo Logic. This can be very useful in situations where some type of data must not leave your premises, such as credit cards and social security numbers. Each unique value will have a unique hash code. +A hash rule is a processing rule that allows you to replace an expression with a hash code generated for that value. Hashed data is completely hidden (obfuscated) before being sent to Sumo Logic. This can be very useful in situations where certain types of data must not leave your premises, such as credit card numbers and Social Security numbers. Each unique value will have a unique hash code. The hash algorithm used is **SHA-256**. -Ingestion volume is calculated after applying the hash filter. If the hash reduces the size of the log, the smaller size will be measured against ingestion limits. +Ingestion volume is calculated after the hash filter is applied. 
If the hash reduces the log size, the smaller size will be measured against ingestion limits. :::note Currently available for Local File ST only. @@ -19,7 +19,7 @@ Currently available for Local File ST only. When you add a hash rule action to your processing rules, you need to provide two inputs: -1. **Expression**: A regular expression that must contain exactly **one capture group** `( )`. The string value matched through this capture group is what will be hashed using SHA-256. If there are multiple parts of the string which needs to be hashed, add additional hashing processing rules for it. +1. **Expression**: A regular expression that must contain exactly **one capture group** `( )`. The string value matched by this capture group will be hashed using SHA-256. If multiple parts of the string need to be hashed, add additional hashing rules for them. 2. **Replacement Format**: The formatted replacement string that will replace the matching string in the log. Use `%s` to refer to the hashed value from the SHA-256 function. The `%s` reference is mandatory and can only be used once. @@ -79,21 +79,21 @@ memberid=%s ``` :::important -Any hashing expression should be tested and verified with a sample source file before applying it to your production logs. +Any hashing expression should be tested and verified on a sample source file before being applied to your production logs. ::: ## Rules and limitations -* The regular expression must contain exactly **one capture group** enclosed in `( )`. Values inside this capture group will be hashed. If there are multiple parts of the string which needs to be hashed, add additional hashing processing rules for it. +* The regular expression must contain exactly **one capture group** enclosed in `( )`. Values inside this capture group will be hashed. If multiple parts of the string need to be hashed, add additional hashing rules for them. * You can use an anchor to detect specific values in your logs. 
Only the value within the capture group will be hashed. * The hash algorithm is **SHA-256** (MD5 is not supported for OpenTelemetry collectors). -* Make sure you do not specify a regular expression that matches a full log line. Doing so will result in the entire log line being hashed. +* Make sure you do not specify a regular expression that matches a full log line. Doing so will hash the entire log line. * The replacement format must include `%s` exactly once to reference the hashed value. -* Do not unnecessarily match on more of the log than needed. Use precise regular expressions to ensure that only the intended sensitive information is hashed, not surrounding context. +* Do not unnecessarily match on more of the log than needed. Use precise regular expressions to ensure that only the intended sensitive information is hashed, not the surrounding context. -* Each unique value will produce a unique hash code. The same input value will always produce the same hash output, allowing you to correlate occurrences while keeping the actual value hidden. \ No newline at end of file +* Each unique value will produce a unique hash code. The same input value will always produce the same hash output, allowing you to correlate occurrences while keeping the actual value hidden. 
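The determinism noted in the last point is what makes correlation possible. A quick sketch, illustrative only, reusing the member-ID expression from the examples above:

```python
import hashlib
import re

EXPRESSION = r"memberid=([^\]]+)"

def hash_member_id(line):
    # Replace the captured member ID with its SHA-256 hex digest.
    return re.sub(
        EXPRESSION,
        lambda m: "memberid=" + hashlib.sha256(m.group(1).encode("utf-8")).hexdigest(),
        line,
    )

login = hash_member_id("[memberid=dan@demo.com] [call=login]")
fetch = hash_member_id("[memberid=dan@demo.com] [call=getMessages]")
# The same member ID yields the same digest in both lines, so the two
# events can still be joined on the hashed field without revealing it.
```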
From b64a71c11ec578378f8b4cde4d230b305a82c154 Mon Sep 17 00:00:00 2001 From: Alekh Nema Date: Mon, 27 Apr 2026 21:00:17 +0530 Subject: [PATCH 07/12] Reverting files which were commited mistakenly --- .../block-blob/steps-multi-storage-account.md | 55 ------------------- 1 file changed, 55 deletions(-) delete mode 100644 docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md diff --git a/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md b/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md deleted file mode 100644 index cf947b89fb..0000000000 --- a/docs/send-data/collect-from-other-data-sources/azure-blob-storage/block-blob/steps-multi-storage-account.md +++ /dev/null @@ -1,55 +0,0 @@ -## Ingesting from Multiple Storage Accounts (Optional) - -If you want to ingest data into Sumo Logic from multiple storage accounts, perform the following tasks for each storage account separately. - -:::note -The following steps assume you have noted down the resource group name, storage account name, and container name from which the blobs will be ingested. -::: - -* Authorize App Service to read from the storage account. -* Create an Event Grid Subscription - Subscribes all blob creation events to the Event Hub created by the ARM template. - -### Step 1: Authorize App Service to read from storage account - -This section provides instructions on authorizing the App Service to list the Storage Account key. This enables the Azure function to read from the storage account. - -To authorize the App Service to list the Storage Account key, do the following: - -1. Navigate to the **Storage Account** and click **Access Control(IAM)**. -1. Click the **Add** **+** at the top of the page. -1. Select **Add role assignment** from the dropdown. -1. 
In the **Add role assignment** window, select **Role > Storage Blob Data Reader > Next**. -1. In the **Members** tab, select **Managed Identity**. -1. In the **Select Managed identities** window: - * **Subscription**. Select **Pay as you Go**. - * **Managed Identity**. Choose **Function App**. - * **Select**. Select **SUMOBRDLQProcessor\** and **SUMOBRTaskConsumer\** app services created by the ARM template, then click **Select**. -1. Click **Review + assign**. -1. Click **Save** to complete the role assignment. - -### Step 2: Create an Event Grid Subscription - -This section provides instructions for creating an event grid subscription that subscribes all blob creation events to the Event Hub created by the ARM template. - -To create an event grid subscription, do the following: - -1. Navigate to the storage account you want to monitor and open the **Events** blade from the left pane. -2. In the **Event subscriptions** tab, click **+Event Subscription** to create a new subscription. -3. Under **Event Subscription Details**, provide: - * **Name**. Enter a name for the subscription. - * **Event Schema**. Select **Event Grid Schema**. -4. Under **Topic Details**, enter a **System Topic Name**. If a topic already exists, it will be selected automatically. -5. Under **Event Types**, choose **Blob Created** from the **Filter to Event Types** dropdown. -6. Under **Endpoint Details**, - * Select **Event Hubs** as the **Endpoint Type** from the dropdown. - * Click **Configure an endpoint**, then proceed in the dialog. -7. In the **Select Event Hub** dialog, configure: - * **Resource Group**. Select the resource group you created via the ARM template. - * **Event Hub Namespace**. Select **SUMOBREventHubNamespace\<*unique string*\\>**. - * **Event Hub**. Choose **blobreadereventhub** from the dropdown. - * Click **Confirm Selection**. -8. (Optional) Under the **Filters** tab: - * Enable **Subject filtering**. 
- * To filter events by container name, set the **Subject Begins With** field to `/blobServices/default/containers//`, replacing `` with the name of the container you want to export logs from. -9. Click **Create** to finalize the subscription. -10. Verify the deployment by checking **Notifications** in the top-right corner of the Azure portal. From ba968c5b9f815a63b235c49360d8a5fec701ce3f Mon Sep 17 00:00:00 2001 From: Amee Lepcha Date: Wed, 29 Apr 2026 10:36:07 +0530 Subject: [PATCH 08/12] Update sidebars.ts --- sidebars.ts | 1 + 1 file changed, 1 insertion(+) diff --git a/sidebars.ts b/sidebars.ts index 8e536ea758..9c774a4605 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -304,6 +304,7 @@ module.exports = { collapsed: true, link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/processing-rules/index'}, items:[ + 'send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules', 'send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules', 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules', 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows', From 5f5f3c7f54a54e36015f9e3b053eeb5fa140e4d5 Mon Sep 17 00:00:00 2001 From: Amee Lepcha Date: Wed, 29 Apr 2026 14:47:44 +0530 Subject: [PATCH 09/12] merging hash and mask rules docs into mask rules doc --- .../processing-rules/hash-rules.md | 99 ---------------- .../processing-rules/index.md | 8 +- .../processing-rules/mask-rules.md | 109 +++++++++++++++++- .../processing-rules/overview.md | 2 +- sidebars.ts | 1 - 5 files changed, 105 insertions(+), 114 deletions(-) delete mode 100644 docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md deleted file mode 100644 
index e488a85b94..0000000000 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -id: hash-rules -title: OpenTelemetry Remote Management Hash Rules -sidebar_label: Hash Rules -description: Create an OpenTelemetry collector remote management hash rule to replace an expression with a hash code. ---- - -A hash rule is a processing rule that allows you to replace an expression with a hash code generated for that value. Hashed data is completely hidden (obfuscated) before being sent to Sumo Logic. This can be very useful in situations where certain types of data must not leave your premises, such as credit card numbers and Social Security numbers. Each unique value will have a unique hash code. - -The hash algorithm used is **SHA-256**. - -Ingestion volume is calculated after the hash filter is applied. If the hash reduces the log size, the smaller size will be measured against ingestion limits. - -:::note -Currently available for Local File ST only. -::: - -## How it works - -When you add a hash rule action to your processing rules, you need to provide two inputs: - -1. **Expression**: A regular expression that must contain exactly **one capture group** `( )`. The string value matched by this capture group will be hashed using SHA-256. If multiple parts of the string need to be hashed, add additional hashing rules for them. - -2. **Replacement Format**: The formatted replacement string that will replace the matching string in the log. Use `%s` to refer to the hashed value from the SHA-256 function. The `%s` reference is mandatory and can only be used once. 
- -## Examples - -### Hash a password - -For example, to hash the password `Welcome123` from this log: - -``` -user=sumo password=Welcome123 -``` - -You could use the following configuration: - -**Expression:** -``` -password=([A-Za-z0-9]+) -``` - -**Replacement Format:** -``` -password=%s -``` - -**Result:** -- **Matching string**: `password=Welcome123` -- **Capture group**: `Welcome123` (this value is hashed) -- **Output log**: `user=sumo password=` - -Where `` is the SHA-256 hash of `Welcome123`. - -### Hash member IDs - -To hash member IDs from this log: - -``` -2012-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [memberid=dan@demo.com] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages] -``` - -You could use the following configuration: - -**Expression:** -``` -memberid=([^\]]+) -``` - -**Replacement Format:** -``` -memberid=%s -``` - -**Resulting hashed log:** - -``` -2012-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [memberid=906e9cc124c8e1085b10e1cec4cc6526f3637558be361d3b4bb54bb537e49a49] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages] -``` - -:::important -Any hashing expression should be tested and verified on a sample source file before being applied to your production logs. -::: - -## Rules and limitations - -* The regular expression must contain exactly **one capture group** enclosed in `( )`. Values inside this capture group will be hashed. If multiple parts of the string need to be hashed, add additional hashing rules for them. - -* You can use an anchor to detect specific values in your logs. Only the value within the capture group will be hashed. 
- -* The hash algorithm is **SHA-256** (MD5 is not supported for OpenTelemetry collectors). - -* Make sure you do not specify a regular expression that matches a full log line. Doing so will hash the entire log line. - -* The replacement format must include `%s` exactly once to reference the hashed value. - -* Do not unnecessarily match on more of the log than needed. Use precise regular expressions to ensure that only the intended sensitive information is hashed, not the surrounding context. - -* Each unique value will produce a unique hash code. The same input value will always produce the same hash output, allowing you to correlate occurrences while keeping the actual value hidden. diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md index b7b6950364..d685c3ebfe 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md @@ -19,12 +19,6 @@ To configure processing rules, navigate to the remote management section in the In this section, we'll introduce the following concepts:
-
-
- icon

OTRM Hash Rules

-

Create an OTRM hash rule to replace an expression with a hash code. Currently available for Local File ST only.

-
-
Rules icon

OTRM Include and Exclude Rules

@@ -33,7 +27,7 @@ In this section, we'll introduce the following concepts:
- Rules icon

OTRM Mask Rules

+ Rules icon

OTRM Hash and Mask Rules

Create an OTRM mask rule to replace an expression with a mask string.

diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md index c4362748c6..0166a4d789 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md @@ -1,10 +1,107 @@ --- id: mask-rules -title: OpenTelemetry Remote Management Mask Rules -sidebar_label: Mask Rules +title: OpenTelemetry Remote Management Hash and Mask Rules +sidebar_label: Hash and Mask Rules description: Create an OpenTelemetry collector remote management mask rule to replace an expression with a mask string. --- +## OpenTelemetry Remote Management Hash Rules + +A hash rule is a processing rule that allows you to replace an expression with a hash code generated for that value. Hashed data is completely hidden (obfuscated) before being sent to Sumo Logic. This can be very useful in situations where certain types of data must not leave your premises, such as credit card numbers and Social Security numbers. Each unique value will have a unique hash code. + +The hash algorithm used is **SHA-256**. + +Ingestion volume is calculated after the hash filter is applied. If the hash reduces the log size, the smaller size will be measured against ingestion limits. + +:::note +Currently available for Local File ST only. +::: + +### How it works + +When you add a hash rule action to your processing rules, you need to provide two inputs: + +1. **Expression**: A regular expression that must contain exactly **one capture group** `( )`. The string value matched by this capture group will be hashed using SHA-256. If multiple parts of the string need to be hashed, add additional hashing rules for them. + +2. **Replacement Format**: The formatted replacement string that will replace the matching string in the log. 
Use `%s` to refer to the hashed value from the SHA-256 function. The `%s` reference is mandatory and can only be used once. + +### Examples + +#### Hash a password + +For example, to hash the password `Welcome123` from this log: + +``` +user=sumo password=Welcome123 +``` + +You could use the following configuration: + +**Expression:** +``` +password=([A-Za-z0-9]+) +``` + +**Replacement Format:** +``` +password=%s +``` + +**Result:** +- **Matching string**: `password=Welcome123` +- **Capture group**: `Welcome123` (this value is hashed) +- **Output log**: `user=sumo password=` + +Where `` is the SHA-256 hash of `Welcome123`. + +#### Hash member IDs + +To hash member IDs from this log: + +``` +2012-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [memberid=dan@demo.com] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages] +``` + +You could use the following configuration: + +**Expression:** +``` +memberid=([^\]]+) +``` + +**Replacement Format:** +``` +memberid=%s +``` + +**Resulting hashed log:** + +``` +2012-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [memberid=906e9cc124c8e1085b10e1cec4cc6526f3637558be361d3b4bb54bb537e49a49] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages] +``` + +:::important +Any hashing expression should be tested and verified on a sample source file before being applied to your production logs. +::: + +### Rules and limitations + +* The regular expression must contain exactly **one capture group** enclosed in `( )`. Values inside this capture group will be hashed. If multiple parts of the string need to be hashed, add additional hashing rules for them. + +* You can use an anchor to detect specific values in your logs. 
Only the value within the capture group will be hashed. + +* The hash algorithm is **SHA-256** (MD5 is not supported for OpenTelemetry collectors). + +* Make sure you do not specify a regular expression that matches a full log line. Doing so will hash the entire log line. + +* The replacement format must include `%s` exactly once to reference the hashed value. + +* Do not unnecessarily match on more of the log than needed. Use precise regular expressions to ensure that only the intended sensitive information is hashed, not the surrounding context. + +* Each unique value will produce a unique hash code. The same input value will always produce the same hash output, allowing you to correlate occurrences while keeping the actual value hidden. + +## OpenTelemetry Remote Management Mask Rules + :::note This document does not cover masking logs for Windows source templates. For details on masking logs for Windows, refer to [Mask Rules for the Windows Source Template](mask-rules-windows.md). ::: @@ -13,9 +110,9 @@ A mask rule is a processing rule that hides irrelevant or sensitive information Ingestion volume is calculated after applying the mask filter. If the mask reduces the size of the log, the smaller size will be measured against ingestion limits. Masking is an effective method to reduce overall ingestion volume. -## Examples +### Examples -### Mask an email address +#### Mask an email address For example, to mask the email address `dan@demo.com` from this log: @@ -36,7 +133,7 @@ Using the masking string `auth=User:AAA` would produce the following result: Any masking expression should be tested and verified with a sample source file before applying it to your production logs. ::: -### Mask credit card numbers +#### Mask credit card numbers You can mask credit card numbers from log messages using a regular expression within a mask rule. 
Once masked with a known string, you can then perform a search for that string within your logs to detect if credit card numbers may be leaking into your log files. @@ -58,7 +155,7 @@ Samples include: * **Discover**. 6011-0009-9013-9424  \|  6500000000000002  \|  6011 0009 9013 9424 -## Rules and limitations +### Rules and limitations * Expressions that you want masked must be selected by the regular expression you provide. The masking string provided will mask the whole of the string selected by the regular expression. diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md index 94ff96a772..c8c7a065c9 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md @@ -15,7 +15,7 @@ Processing rules for log collection support the following rule types: * [Exclude messages that match](include-and-exclude-rules.md). Remove messages that you do not want to send to Sumo Logic at all ("denylist" filter). These messages are skipped by the OpenTelemetry Collector and are not uploaded to Sumo Logic. * [Include messages that match](include-and-exclude-rules.md). Send only the data you'd like in your Sumo Logic account (an "allowlist" filter). This type of rule can be useful, for example, if you only want to include messages coming from a firewall. * [Mask messages that match](mask-rules.md). Replace an expression with a customizable mask string. This is another way to protect data you do not normally track, such as passwords. -* [Hash messages that match](hash-rules.md). Replace an expression with a hash code generated for that value. This completely obscures sensitive data, such as credit card numbers and Social Security numbers, before they are sent to Sumo Logic. +* [Hash messages that match](mask-rules.md). 
Replace an expression with a hash code generated for that value. This completely obscures sensitive data, such as credit card numbers and Social Security numbers, before they are sent to Sumo Logic. ## Metrics collection diff --git a/sidebars.ts b/sidebars.ts index 9c774a4605..8e536ea758 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -304,7 +304,6 @@ module.exports = { collapsed: true, link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/processing-rules/index'}, items:[ - 'send-data/opentelemetry-collector/remote-management/processing-rules/hash-rules', 'send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules', 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules', 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows', From 2b9d751b079b9b15c150f36ad925c814e663b5e1 Mon Sep 17 00:00:00 2001 From: Amee Lepcha Date: Wed, 29 Apr 2026 17:27:42 +0530 Subject: [PATCH 10/12] Update mask-rules.md --- .../remote-management/processing-rules/mask-rules.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md index 0166a4d789..174d6200f7 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md @@ -2,7 +2,7 @@ id: mask-rules title: OpenTelemetry Remote Management Hash and Mask Rules sidebar_label: Hash and Mask Rules -description: Create an OpenTelemetry collector remote management mask rule to replace an expression with a mask string. +description: Use hash and mask processing rules to replace an expression with the respective hash and mask strings. 
--- ## OpenTelemetry Remote Management Hash Rules From 9d035a0fc4854642e0857496c595e5db44be419b Mon Sep 17 00:00:00 2001 From: Amee Lepcha Date: Wed, 29 Apr 2026 17:28:26 +0530 Subject: [PATCH 11/12] Update index.md --- .../remote-management/processing-rules/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md index d685c3ebfe..2d6468eb3d 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md @@ -28,7 +28,7 @@ In this section, we'll introduce the following concepts:
Rules icon

OTRM Hash and Mask Rules

-

Create an OTRM mask rule to replace an expression with a mask string.

+

Create OTRM hash and mask rules to replace an expression with a hash code or a mask string.

From 9e78ec625b5f52aabbde97b6cdd3e8cfe247d045 Mon Sep 17 00:00:00 2001 From: John Pipkin Date: Wed, 29 Apr 2026 12:32:33 -0500 Subject: [PATCH 12/12] Updates from review --- .../remote-management/processing-rules/index.md | 6 ++++++ .../remote-management/processing-rules/mask-rules.md | 6 ++---- .../remote-management/processing-rules/overview.md | 1 + sidebars.ts | 1 + 4 files changed, 10 insertions(+), 4 deletions(-) diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md index 2d6468eb3d..2d502894d0 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md @@ -19,6 +19,12 @@ To configure processing rules, navigate to the remote management section in the In this section, we'll introduce the following concepts:
+
+
+ Rules icon

OTRM Overview

+

Get an overview of how to use processing rules to specify what kind of data is sent to Sumo Logic using OpenTelemetry remote management.

+
+
Rules icon

OTRM Include and Exclude Rules

diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md index 174d6200f7..14ab6bd494 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md @@ -103,7 +103,7 @@ Any hashing expression should be tested and verified on a sample source file bef ## OpenTelemetry Remote Management Mask Rules :::note -This document does not cover masking logs for Windows source templates. For details on masking logs for Windows, refer to [Mask Rules for the Windows Source Template](mask-rules-windows.md). +This document does not cover masking logs for Windows source templates. For details on masking logs for Windows, refer to [OpenTelemetry Remote Management Windows Source Template Mask Rules](/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows/). ::: A mask rule is a processing rule that hides irrelevant or sensitive information from logs before they are ingested. When you create a mask rule, the selected expression will be replaced with a mask string before the data is sent to Sumo Logic. You can either specify a custom mask string or use the default `"#####"`. @@ -137,9 +137,7 @@ Any masking expression should be tested and verified with a sample source file b You can mask credit card numbers from log messages using a regular expression within a mask rule. Once masked with a known string, you can then perform a search for that string within your logs to detect if credit card numbers may be leaking into your log files. 
-To mask credit card numbers in logs, you can use a masking filter with the following regular expression: - -The following regular expression can be used within a masking filter to mask American Express, Visa (16 digit only), Mastercard, and Discover credit card numbers: +To mask credit card numbers in logs, you can use a masking filter with the following regular expression, which masks American Express, Visa (16 digit only), Mastercard, and Discover credit card numbers: ``` ((?:(?:4\d{3})|(?:5[1-5]\d{2})|6(?:011|5[0-9]{2}))(?:-?|\040?)(?:\d{4}(?:-?|\040?)){3}|(?:3[4,7]\d{2})(?:-?|\040?)\d{6}(?:-?|\040?)\d{5}) diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md index c8c7a065c9..7409dd8eda 100644 --- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md +++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md @@ -2,6 +2,7 @@ id: overview title: OpenTelemetry Remote Management Processing Rules sidebar_label: Overview +description: Get an overview of how to use processing rules to specify what kind of data is sent to Sumo Logic using OpenTelemetry remote management. 
--- import useBaseUrl from '@docusaurus/useBaseUrl'; diff --git a/sidebars.ts b/sidebars.ts index a531616215..310812ab11 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -304,6 +304,7 @@ module.exports = { collapsed: true, link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/processing-rules/index'}, items:[ + 'send-data/opentelemetry-collector/remote-management/processing-rules/overview', 'send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules', 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules', 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows',
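The credit-card masking regular expression carried through these patches can be exercised end to end with a short Python sketch. This is illustrative only; `mask_cards` and the sample log line are hypothetical, while the pattern and the default `"#####"` mask string are taken from the mask-rules doc:

```python
import re

# Credit-card pattern from mask-rules.md (American Express, Visa 16-digit,
# Mastercard, and Discover), copied verbatim.
CARD_RE = re.compile(
    r"((?:(?:4\d{3})|(?:5[1-5]\d{2})|6(?:011|5[0-9]{2}))(?:-?|\040?)(?:\d{4}(?:-?|\040?)){3}|(?:3[4,7]\d{2})(?:-?|\040?)\d{6}(?:-?|\040?)\d{5})"
)

def mask_cards(log_line: str, mask: str = "#####") -> str:
    """Replace every match of the documented card pattern with the mask
    string, mirroring what a mask rule does before logs are ingested."""
    return CARD_RE.sub(mask, log_line)

# A hypothetical log line containing a 16-digit Visa test number.
print(mask_cards("charge card=4111-1111-1111-1111 amount=12.00"))
```

Searching your ingested logs for the mask string (here `#####`) then reveals whether card numbers were leaking into log files, as the mask-rules doc suggests.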