I'm trying to write a Sigma correlation rule that matches if an event happened N times in the last M minutes, while deduplicating events generated within the same second.
Let's use a real example: I want to generate an alert whenever a Windows domain user fails to log in more than 5 times in the last 5 minutes. I can write the following Sigma rule:
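(The original rule didn't survive here; below is a minimal sketch of what such a pair of rules could look like. The `EventID: 4625` selection and the `TargetUserName` grouping field are assumptions on my part — adjust them to your actual log source and field names.)

```yaml
title: Failed Logon
name: login_failed          # referenced by the correlation rule below
logsource:
    product: windows
    service: security
detection:
    selection:
        EventID: 4625       # assumed: Windows Security "failed logon" event
    condition: selection
---
title: Multiple Failed Logons
correlation:
    type: event_count
    rules:
        - login_failed
    group-by:
        - TargetUserName    # assumed field name; count failures per user
    timespan: 5m
    condition:
        gte: 5
```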
This could have worked well... however, Windows tends to retry a login attempt multiple times using different mechanisms (Kerberos, NTLM...). That means a single login attempt can generate up to 5 duplicate events at the same time (but not always... otherwise I could have just checked for 25 failed logins in 5 minutes instead of 5).
So I'm trying to handle this properly by filtering duplicate events in the Sigma rule. Basically, what I want to express is: "take at most one `login_failed` per second, and alert if there are more than 5 matches in the last 5 minutes".
I've tried using an intermediate correlation rule matching at least 1 event per 1 second, but with this change my rule no longer matches anything:
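(That intermediate rule was also lost; here is a hedged sketch of the chained approach described — a 1-second `event_count` correlation used as input to the 5-minute one. Rule names and the `TargetUserName` field are assumptions, not the author's actual rules.)

```yaml
title: Failed Logon (deduplicated per second)
name: login_failed_dedup    # intermediate rule: collapses same-second duplicates
correlation:
    type: event_count
    rules:
        - login_failed
    group-by:
        - TargetUserName
    timespan: 1s
    condition:
        gte: 1              # any number of events in the same second counts once
---
title: Multiple Failed Logons (deduplicated)
correlation:
    type: event_count
    rules:
        - login_failed_dedup   # chains on the intermediate correlation above
    group-by:
        - TargetUserName
    timespan: 5m
    condition:
        gte: 5
```

Note that whether a backend actually supports chaining one correlation rule as input to another may vary by implementation and spec version.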
I've tried to look at the Sigma spec, but I don't see anything obvious that can help me.
Any idea? Or is this a scenario that isn't supported by Sigma rules?