[ti_flashpoint][alert] Add Flashpoint Alert data stream #16714
akshraj-crest wants to merge 2 commits into elastic:feature/ti_flashpoint-0.1.0 from
Conversation
| @@ -0,0 +1,3 @@ | |||
| dependencies: | |||
| ecs: | |||
| reference: git@v9.2.0 | |||
9.3.0 is out. Can you update to this?
| /packages/ti_domaintools @elastic/security-service-integrations | ||
| /packages/ti_eclecticiq @elastic/security-service-integrations | ||
| /packages/ti_eset @elastic/security-service-integrations | ||
| /packages/ti_flashpoint @elastic/security-service-integrations |
Is the idea to add threat indicators to this integration?
All the ti_* integrations ingest threat indicator feeds, and detection rules rely on ti_* data to run detections.
Flashpoint ingests indicators and vulnerabilities, which is why we’ve aligned this as a ti_* package.
| Integrating Flashpoint with Elastic SIEM provides centralized visibility into threat intelligence alerts. Kibana dashboards present key metrics such as `Total Alerts`, along with visualizations showing `Alerts by Data Type`, `Source`, and `Origin`. | ||
| `Alert Trends over Time`, `Top Authors`, `MIME Types`, `Alert Sources`, and `Geographic Distribution` of related resources help analysts quickly monitor activity and investigate alerts. These insights support efficient threat monitoring and analysis workflows. |
I don't think naming all visualisations is a good idea, as it can become a maintenance burden.
Agreed, we can list only the most relevant visualizations to avoid maintenance overhead.
| max([ | ||
| state.cursor.max_created_at.parse_time("2006-01-02T15:04:05"), | ||
| body.items.map(e, timestamp(e.created_at)).max() | ||
| ]).format("2006-01-02T15:04:05") |
max also works for a pair, without having to instantiate a list
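As a minimal sketch of that suggestion — assuming the CEL environment here accepts `max` with two arguments, as the comment describes, and reusing only names from the diff above:

```
max(
  state.cursor.max_created_at.parse_time("2006-01-02T15:04:05"),
  body.items.map(e, timestamp(e.created_at)).max()
).format("2006-01-02T15:04:05")
```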
| ((has(body.items) && size(body.items) > 0) ? | ||
| (has(state.?cursor.max_created_at) ? | ||
| optional.of( | ||
| max([ | ||
| state.cursor.max_created_at.parse_time("2006-01-02T15:04:05"), | ||
| body.items.map(e, timestamp(e.created_at)).max(), | ||
| ]).format("2006-01-02T15:04:05") | ||
| ) | ||
| : | ||
| optional.of(body.items.map(e, timestamp(e.created_at)).max().format("2006-01-02T15:04:05")) | ||
| ) | ||
| : | ||
| state.?cursor.max_created_at) |
This is identical to L60-71.
I think you could use just one of max_created_at or last_timestamp and keep updating it for all pages (no pagination condition), and it might still work?
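A hedged sketch of that simplification — a single cursor value evaluated on every page rather than gated on a pagination condition, reusing only the constructs and field names that already appear in the diff:

```
max_created_at: >-
  (has(body.items) && size(body.items) > 0) ?
    optional.of(
      (has(state.?cursor.max_created_at) ?
        max(
          state.cursor.max_created_at.parse_time("2006-01-02T15:04:05"),
          body.items.map(e, timestamp(e.created_at)).max()
        ) :
        body.items.map(e, timestamp(e.created_at)).max()
      ).format("2006-01-02T15:04:05")
    ) :
    state.?cursor.max_created_at
```

Whether this holds the correct high-water mark across partially fetched pages would still need to be verified against the API's ordering guarantees.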
| # rename fields to snake_case (hyphen to underscore) | ||
| - script: | ||
| description: Convert field names from hyphen to underscore. |
It also populates ctx.ti_flashpoint.alert and removes ctx.json. It's important to call these out.
Alternatively, use rename and remove processors after this script to handle the assignment of ctx.ti_flashpoint.alert and the removal of ctx.json respectively.
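A sketch of that alternative using standard ingest processors; the field names are taken from the comment, and which of the two processors applies depends on what the script itself still does:

```yaml
# If the script no longer moves the document, rename handles both
# the assignment to ti_flashpoint.alert and the removal of json.
- rename:
    field: json
    target_field: ti_flashpoint.alert
    ignore_missing: true
# If the script already assigns ti_flashpoint.alert, drop the source instead.
- remove:
    field: json
    ignore_missing: true
```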
| - UNIX | ||
| - ISO8601 | ||
| - yyyy-MM-dd' 'HH:mm:ss | ||
| - yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ | ||
| - yyyy-MM-dd'T'HH:mm:ssXXXXX | ||
| - yyyy-MM-dd' 'HH:mm:ssXXXXX |
Since there are a lot of patterns, can you confirm that they are ordered from most to least likely to appear, to avoid performance bottlenecks?
We can reorder them based on priority and likelihood of occurrence.
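For illustration, a date processor with the format list reordered so the cheaper, more common patterns are tried first. The field name below is hypothetical, and the actual ordering should be driven by what the live data most frequently contains:

```yaml
- date:
    # field name is an assumption for illustration only
    field: ti_flashpoint.alert.created_at
    target_field: '@timestamp'
    formats:
      - ISO8601
      - UNIX
      - yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ
      - yyyy-MM-dd'T'HH:mm:ssXXXXX
      - yyyy-MM-dd' 'HH:mm:ss
      - yyyy-MM-dd' 'HH:mm:ssXXXXX
```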
| field: file.hash.sha256 | ||
| value: '{{{_ingest._value.phash256}}}' |
sha256 and phash are different algorithms, so I suggest not making this assignment.
| multi: false | ||
| required: true | ||
| show_user: false | ||
| default: 5000 |
Is this the maximum allowed by the API?
Yes, it is the maximum allowed by the API.
| @@ -0,0 +1,118 @@ | |||
| { | |||
Suggest generating a sample event that populates as many ECS fields as possible. For example, this one doesn't have related fields or user fields.
Proposed commit message
The initial release includes alert data stream and associated dashboard.
Flashpoint fields are mapped to their corresponding ECS fields where possible.
Test samples were derived from live data samples, which were subsequently
sanitized.
Checklist
changelog.yml file.

How to test this PR locally
To test the flashpoint package:
Screenshots
Go Code for Ingest Pipeline Generation
The alert data stream pipeline is generated using Go code built on top of the Dispear library.
Below is the code used for generating the pipeline logic: