Forwarding CloudTrail Logs to Huntress SIEM via HTTP Event Collector (HEC) Updated February 2026
- Architecture Overview
- Prerequisites
- Part 1: Create the HEC Source in Huntress
- Part 2: Configure AWS CloudTrail
- Part 3: Create the S3 Bucket for CloudTrail Logs
- Part 4: Create the Lambda Forwarding Function
- Part 5: Configure the S3 Event Trigger
- Part 6: Verify End-to-End Flow
- Testing and Troubleshooting
- Huntress SIEM Query Examples
- Additional Resources
This guide walks through setting up a serverless pipeline that automatically forwards AWS CloudTrail logs to Huntress SIEM using the HTTP Event Collector (HEC) endpoint. The recommended architecture uses an S3-triggered Lambda function with an optional dead-letter log, which is the most cost-effective and reliable approach.
| Step | Component | Description |
|---|---|---|
| 1 | CloudTrail | Records API activity across your AWS account(s) |
| 2 | S3 Bucket | CloudTrail delivers compressed JSON log files (.json.gz) to S3 |
| 3 | S3 Event Notification | New object creation triggers the Lambda function |
| 4 | Lambda Function | Downloads the log file, decompresses it, extracts individual events, and wraps each in HEC-compatible JSON |
| 5 | Huntress HEC | Receives events via HTTPS POST to https://hec.huntress.io/services/collector on port 443 |
- Near real-time delivery: Lambda fires within seconds of CloudTrail writing a new file to S3.
- Serverless and cost-effective: No servers to manage. At typical CloudTrail volumes, Lambda costs are minimal (often under $2/month).
- Scalable: Lambda automatically handles bursts of CloudTrail activity.
- Reliable: Failed deliveries can be retried, and a dead-letter S3 bucket can capture failures for reprocessing.
- Active Huntress Managed SIEM subscription. If you're looking to get started with Huntress, contact Cosmistack, an Authorized Huntress Reseller, for purchasing options.
- Account Administrator role in the Huntress portal
- Access to SIEM > Source Management in the Huntress dashboard
- AWS account with IAM permissions to create/manage: CloudTrail trails, S3 buckets, Lambda functions, and IAM roles
- AWS CLI configured (optional, but helpful for testing)
- Region decision: Deploy Lambda in the same region as your CloudTrail S3 bucket to avoid cross-region data transfer costs
| Parameter | Value |
|---|---|
| Huntress HEC Endpoint | https://hec.huntress.io/services/collector |
| Port | 443 (HTTPS) |
| Auth Header Format | Authorization: Splunk <YOUR_HEC_TOKEN> |
| Required Payload Format | JSON with "event" as the top-level object |
| Source Type | aws:cloudtrail (recommended) |
NOTE: Huntress HEC uses the Splunk HEC protocol. The Authorization header must use the prefix "Splunk" (not "Bearer") followed by your token.
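To make the envelope format concrete, here is a minimal Python sketch that builds the headers and body for a single-event POST. The function name is illustrative and the token is a placeholder; only the header prefix and the top-level "event" key come from the table above.

```python
import json

# Endpoint from the parameter table above
HEC_URL = "https://hec.huntress.io/services/collector"

def build_hec_request(record, token, sourcetype="aws:cloudtrail"):
    """Wrap one CloudTrail record in the Splunk-style HEC envelope."""
    headers = {
        # The "Splunk" prefix is required; "Bearer" is rejected.
        "Authorization": f"Splunk {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"event": record, "sourcetype": sourcetype})
    return headers, body

# Example: wrap a minimal CloudTrail-style record
headers, body = build_hec_request({"eventName": "GetCallerIdentity"}, "YOUR_HEC_TOKEN")
```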
Start on the Huntress side so you have the HEC token ready before configuring AWS.
- Log in to the Huntress portal at https://huntress.io as an Account Administrator.
- Navigate to SIEM on the left navigation menu, then click Source Management.
- Click Add Source. Select the Generic HEC source type.
- If you have an MSP account with Huntress, select the Organization from the dropdown that should receive the CloudTrail data.
- Enter a descriptive name for the source, such as "AWS CloudTrail - Production" or "AWS CloudTrail - ".
- Record the HEC Token that Huntress generates. Copy this value and store it securely. You will need it for the Lambda function configuration.
- Note the Collector URL. It should be:
https://hec.huntress.io/services/collector
WARNING: Treat the HEC token like a password. Anyone with this token can send data to your Huntress SIEM instance. Store it in AWS Secrets Manager or Systems Manager Parameter Store (SecureString) rather than hardcoding it in your Lambda function.
NOTE: Each configured HEC source that sends data in the last 30 days counts as one Managed SIEM Data Source and is billed accordingly. You can send logs from multiple AWS accounts through a single HEC source and token, but you may wish to separate accounts into individual sources based on your organization's requirements and use case.
If you already have a CloudTrail trail delivering to S3, skip to Part 3. Otherwise, create one now.
- Open the AWS Management Console and navigate to CloudTrail.
- Click Create trail (or "Trails" in the sidebar, then "Create trail").
- Trail name: Enter a descriptive name, e.g., "management-events-trail".
- Storage location: Choose "Create new S3 bucket" or select an existing bucket. Note the bucket name for later steps. We recommend using a dedicated bucket for management logs for separation of concerns.
- Log file SSE-KMS encryption: Optional but recommended. If enabled, your Lambda IAM role will need `kms:Decrypt` permission on this key, and you'll need to provide a KMS key alias.
- Log file validation: Enable this for integrity verification (recommended).
- Event type: Select Management events at a minimum. Add Data events and/or Insights events based on your security requirements.
- Management events: Select Read and Write for comprehensive visibility.
- Click Create trail.
NOTE: CloudTrail typically delivers log files to S3 within 5-15 minutes of API activity occurring. Logs are compressed as .json.gz files organized by account ID, region, and date.
s3://<bucket>/AWSLogs/<account-id>/CloudTrail/<region>/YYYY/MM/DD/<filename>.json.gz
Understanding this path structure is important because the Lambda function will use it to parse the log files.
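The path components can be recovered with a small helper. This sketch assumes the single-account layout shown above; organization trails insert an extra org-ID segment and would need an adjusted index.

```python
def parse_cloudtrail_key(key):
    """Split a CloudTrail S3 object key into its components.

    Expected layout: AWSLogs/<account-id>/CloudTrail/<region>/YYYY/MM/DD/<file>.json.gz
    """
    parts = key.split("/")
    if len(parts) < 8 or parts[0] != "AWSLogs" or parts[2] != "CloudTrail":
        raise ValueError(f"not a CloudTrail log key: {key}")
    return {
        "account_id": parts[1],
        "region": parts[3],
        "date": "-".join(parts[4:7]),  # YYYY-MM-DD
        "filename": parts[-1],
    }

info = parse_cloudtrail_key(
    "AWSLogs/123456789012/CloudTrail/us-east-1/2026/02/10/file.json.gz")
```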
If CloudTrail created a new bucket in Part 2, you can skip bucket creation. However, you should still verify the bucket policy.
The bucket must allow CloudTrail to write logs. If you created the trail through the console, AWS applies this automatically. Verify the bucket policy includes the cloudtrail.amazonaws.com service principal with at least s3:PutObject and s3:GetBucketAcl permissions.
For production deployments, create a secondary S3 bucket to capture any CloudTrail log files that the Lambda function fails to forward to Huntress. This allows you to reprocess/audit failed deliveries.
- Create a new S3 bucket named something like `<your-trail-bucket>-dlq`.
- Apply a lifecycle policy to automatically expire objects after 30-90 days (these objects exist only for reprocessing and auditing failed deliveries).
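The lifecycle policy can also be applied programmatically. Below is a minimal boto3 sketch with a hypothetical bucket name; the `apply_dlq_lifecycle` helper is illustrative and is not invoked here, since it requires AWS credentials.

```python
# Hypothetical bucket name -- substitute your actual DLQ bucket.
DLQ_BUCKET = "your-trail-bucket-dlq"

lifecycle = {
    "Rules": [
        {
            "ID": "expire-failed-deliveries",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},    # apply to every object in the bucket
            "Expiration": {"Days": 90},  # pick 30-90 days per your retention needs
        }
    ]
}

def apply_dlq_lifecycle(bucket, lifecycle_config):
    """Apply the lifecycle config (requires AWS credentials; not run here)."""
    import boto3
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=lifecycle_config
    )
```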
This is the core of the integration. The Lambda function is triggered by new objects in the CloudTrail S3 bucket, downloads and decompresses the log file, and forwards each event to the Huntress HEC endpoint.
- Go to IAM > Roles > Create role.
- Select "AWS Service" and choose Lambda as the use case.
- Attach the `AWSLambdaBasicExecutionRole` managed policy (for CloudWatch Logs).
- Proceed to create the role, then return to the role summary page and attach an inline policy using the template in `CloudTrailToHuntressHEC-LamdaRole-InlinePolicy.json` in this repository.
NOTE: If your CloudTrail logs are encrypted with KMS, add `kms:Decrypt` permission for the KMS key ARN to this policy.
- Name the role something descriptive like "CloudTrailToHuntressHEC-LambdaRole".
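For orientation, the permissions the inline policy needs can be sketched as a Python dict. The bucket name and secret ARN are placeholders, and this is an illustrative minimum; the JSON template in this repository remains the authoritative version.

```python
import json

# Hypothetical placeholders -- substitute your real bucket name and secret ARN.
TRAIL_BUCKET = "YOUR-CLOUDTRAIL-BUCKET"
SECRET_ARN = "arn:aws:secretsmanager:REGION:ACCOUNT:secret:huntress/hec-token-XXXXXX"

inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read CloudTrail log objects delivered to S3
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{TRAIL_BUCKET}/AWSLogs/*",
        },
        {   # Fetch the HEC token at runtime
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": SECRET_ARN,
        },
        # If you enabled SSE-KMS on the trail, add a kms:Decrypt statement here.
        # If you use a DLQ bucket, add s3:PutObject on that bucket as well.
    ],
}
print(json.dumps(inline_policy, indent=2))
```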
- Go to AWS Secrets Manager > Store a new secret.
- Select "Other type of secret".
- Key: `hec_token`; Value: the HEC token you recorded in Part 1.
- Name the secret something like "huntress/hec-token".
- Complete the wizard and note the secret ARN.
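The token retrieval pattern the Lambda function uses can be sketched as below, assuming the `hec_token` key created above. The function names are hypothetical; the boto3 call is deferred so it only runs inside Lambda with valid credentials.

```python
import json

_cached = {}

def extract_token(secret_string):
    """Pull the hec_token key out of the secret's JSON payload."""
    return json.loads(secret_string)["hec_token"]

def get_hec_token(secret_name="huntress/hec-token"):
    """Fetch the token once per Lambda container and cache it.

    Requires AWS credentials; subsequent invocations in a warm
    container reuse the cached value instead of calling the API.
    """
    if secret_name not in _cached:
        import boto3
        resp = boto3.client("secretsmanager").get_secret_value(SecretId=secret_name)
        _cached[secret_name] = extract_token(resp["SecretString"])
    return _cached[secret_name]
```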
- Go to Lambda > Create function.
- Function name: CloudTrailToHuntressHEC
- Runtime: Python 3.12 (or latest available)
- Architecture: arm64 (Graviton, lower cost) or x86_64
- Execution role: Use the role created in Step 4a.
- Timeout: Set to 120 seconds (under Configuration > General configuration). CloudTrail log files can be large.
- Memory: 256 MB is sufficient for most workloads. Increase to 512 MB if you have very large CloudTrail files.
Under Configuration > Environment variables, add:
| Variable | Value |
|---|---|
| HEC_URL | https://hec.huntress.io/services/collector |
| SECRET_NAME | huntress/hec-token |
| SOURCE_TYPE | aws:cloudtrail |
| DLQ_BUCKET | (optional) Name of the dead-letter S3 bucket from Part 3 |
Paste the entire contents of huntress-siem-lambda-forwarder.py from this repository into the Lambda code editor. You should not need to make any changes to this code. This function:
- Retrieves the HEC token from Secrets Manager (with caching)
- Downloads and decompresses the CloudTrail log file from S3
- Extracts individual events and wraps them in HEC-compatible JSON
- Sends events to the Huntress HEC endpoint in batches of 100 events for efficiency. If any batch fails, it logs the error and optionally writes failed events to the DLQ S3 bucket for later reprocessing (if DLQ configured).
NOTE: This function uses only the Python standard library plus boto3 (which is included in the Lambda runtime). No additional layers or packages are required.
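For orientation, the core flow can be sketched as below. This is a simplified illustration, not the repository code: function names are hypothetical, and error handling, S3 download, and DLQ writes are omitted.

```python
import gzip
import json
import os
import urllib.request

HEC_URL = os.environ.get("HEC_URL", "https://hec.huntress.io/services/collector")
BATCH_SIZE = 100

def extract_records(gz_bytes):
    """Decompress a CloudTrail .json.gz payload and return its Records list."""
    return json.loads(gzip.decompress(gz_bytes)).get("Records", [])

def batch_payloads(records, sourcetype="aws:cloudtrail", batch_size=BATCH_SIZE):
    """Yield newline-delimited HEC bodies, at most batch_size events per POST."""
    for i in range(0, len(records), batch_size):
        yield "\n".join(
            json.dumps({"event": r, "sourcetype": sourcetype})
            for r in records[i : i + batch_size]
        )

def send_batch(body, token):
    """POST one batch to the HEC endpoint; returns the HTTP status code."""
    req = urllib.request.Request(
        HEC_URL,
        data=body.encode("utf-8"),
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status
```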
Now connect the S3 bucket to the Lambda function so new CloudTrail log files automatically trigger forwarding.
- Open your Lambda function in the AWS console.
- Click Add trigger.
- Select S3 as the trigger source.
- Select your CloudTrail S3 bucket.
- Event type: All object create events (`s3:ObjectCreated:*`)
- Prefix: `AWSLogs/` (filters to only CloudTrail log paths)
- Suffix: `.json.gz` (filters to only compressed JSON log files)
- Check the acknowledgment checkbox and click Add.
- Open the S3 bucket in the AWS console.
- Go to Properties > Event notifications > Create event notification.
- Event name: "CloudTrailToHuntressHEC"
- Prefix: `AWSLogs/`
- Suffix: `.json.gz`
- Event types: Check All object create events.
- Destination: Lambda function > Select your function.
- Save changes.
WARNING: If your S3 bucket receives other file types (e.g., Config logs, access logs), the prefix and suffix filters are critical to avoid triggering the Lambda on non-CloudTrail files. Be precise with the prefix if your bucket is shared. We recommend using a dedicated bucket for CloudTrail logs for separation of concerns and to avoid these issues.
After completing the setup, verify that data is flowing correctly through the entire pipeline.
NOTE: It may take up to 30 minutes for a new source to first appear in the Huntress SIEM dashboard.
- Generate AWS API activity. Perform a few actions in the AWS console (e.g., list S3 buckets, describe EC2 instances) to ensure CloudTrail has events to record.
- Wait 5-15 minutes for CloudTrail to deliver a new log file to S3.
- Check CloudWatch Logs. Go to CloudWatch > Log groups > /aws/lambda/CloudTrailToHuntressHEC. Look for log entries showing "Processing: s3://..." and "sent X events, HTTP 200".
- Verify in Huntress. Log in to the Huntress portal. Navigate to SIEM. Switch to the correct Account scope, if needed. Look for your CloudTrail source in Source Management, or run the query:
from logs | where event.provider == "GenericHEC" | keep message
Before deploying any AWS infrastructure, verify that your HEC token works by sending a test event directly from your local machine:
```shell
curl -v https://hec.huntress.io/services/collector \
  -H "Authorization: Splunk YOUR_TOKEN_HERE" \
  -d '{"event": "CloudTrail integration test from curl"}'
```
Expected result: an HTTP 200 response. Within 30 minutes you should see a GenericHEC source with your test message in Huntress SIEM.
If you get a non-200 response: Double-check your token for typos or extra whitespace. Ensure you are using port 443 and HTTPS.
Create a test event in the Lambda console that simulates an S3 notification. This lets you test the function without waiting for CloudTrail to deliver a real file.
- First, identify a real CloudTrail log file in your S3 bucket. Note its bucket name and object key.
- In the Lambda console, click Test > Create new event. Use the "S3 Put" template and modify the bucket and key values:
```json
{
  "Records": [
    {
      "s3": {
        "bucket": {
          "name": "YOUR-CLOUDTRAIL-BUCKET"
        },
        "object": {
          "key": "AWSLogs/123456789012/CloudTrail/us-east-1/2026/02/10/file.json.gz"
        }
      }
    }
  ]
}
```
- Click Test and check the execution results and CloudWatch Logs output.
To test the full pipeline without waiting for real CloudTrail activity, upload a synthetic log file to S3:
```shell
# Create a test CloudTrail log file
echo '{"Records":[{"eventVersion":"1.08",
  "eventTime":"2026-02-10T12:00:00Z",
  "eventSource":"sts.amazonaws.com",
  "eventName":"GetCallerIdentity",
  "awsRegion":"us-east-1",
  "sourceIPAddress":"1.2.3.4",
  "userAgent":"aws-cli/2.0",
  "userIdentity":{"type":"IAMUser",
  "arn":"arn:aws:iam::123456789012:user/testuser"}}]}' \
  | gzip > test-cloudtrail.json.gz

# Upload to the CloudTrail bucket path
aws s3 cp test-cloudtrail.json.gz \
  s3://YOUR-BUCKET/AWSLogs/123456789012/CloudTrail/us-east-1/2026/02/10/test-cloudtrail.json.gz
```
This should trigger the Lambda function and deliver the event to Huntress.
| Symptom | Likely Cause | Fix |
|---|---|---|
| Lambda not triggering | S3 event notification misconfigured or prefix/suffix filter too restrictive | Verify notification in S3 > Properties > Event notifications. Check prefix matches AWSLogs/ and suffix matches .json.gz |
| Lambda Access Denied on S3 GetObject | IAM role missing s3:GetObject on the correct bucket/key | Update the IAM inline policy. If bucket is KMS encrypted, add kms:Decrypt |
| HTTP 401 from Huntress HEC | Invalid or expired HEC token | Regenerate the token in Huntress Source Management. Update Secrets Manager. |
| HTTP 400 from Huntress HEC | Malformed payload: "event" key missing or wrong Content-Type | Ensure each event is wrapped as {"event": ...}. Use Content-Type: application/json |
| Lambda timeout | Very large CloudTrail files or slow network | Increase Lambda timeout to 300s and memory to 512 MB. Reduce BATCH_SIZE. |
| Data appears in Huntress but events are delayed | Normal CloudTrail delivery latency (5-15 min) plus SIEM indexing | This is expected behavior. For near-real-time, consider CloudTrail Lake or CloudWatch-based delivery. |
| No source visible in Huntress SIEM | Viewing from wrong Org Scope | Click the Huntress logo at top right to switch to the correct Org scope. Check Source Management. Note that it may take up to 30 minutes for a source to appear in the Huntress dashboard after receiving its first event. |
| Events not parsed correctly | Sending the entire log file as one event instead of individual records | Ensure Lambda iterates over data["Records"] and sends each event individually. |
Run these queries in CloudWatch Logs Insights against the /aws/lambda/CloudTrailToHuntressHEC log group:
Find all errors in the last 24 hours:
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 50
Check delivery success rate:
fields @timestamp, @message
| filter @message like /sent.*events.*HTTP/
| parse @message "sent * events, HTTP *" as eventCount, httpStatus
| stats count() as batches, sum(eventCount) as totalEvents by httpStatus
Find Lambda invocations by S3 key:
fields @timestamp, @message
| filter @message like /Processing: s3/
| sort @timestamp desc
| limit 20
- Create a CloudWatch Alarm on the Lambda function's Errors metric. Alert if errors exceed 0 in a 5-minute period.
- Create a CloudWatch Alarm on the Lambda function's Duration metric. Alert if invocations approach the timeout threshold.
- In Huntress SIEM, set up a non-reporting source escalation for your CloudTrail source so you are alerted if data stops flowing.
- Periodically check the DLQ bucket for failed deliveries. If files accumulate, investigate the root cause and reprocess them.
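The Errors alarm from the first bullet can be created with boto3. This sketch assumes the function name used in this guide; the SNS topic for notifications is a hypothetical placeholder, and `create_alarm` is not invoked here since it requires AWS credentials.

```python
FUNCTION_NAME = "CloudTrailToHuntressHEC"

# Alarm on any Lambda error within a 5-minute window, per the recommendation above.
alarm_params = {
    "AlarmName": f"{FUNCTION_NAME}-errors",
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    "Statistic": "Sum",
    "Period": 300,
    "EvaluationPeriods": 1,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "notBreaching",
    # "AlarmActions": ["arn:aws:sns:REGION:ACCOUNT:your-topic"],  # hypothetical SNS topic
}

def create_alarm(params):
    """Create the alarm (requires AWS credentials; not run here)."""
    import boto3
    boto3.client("cloudwatch").put_metric_alarm(**params)
```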
Once CloudTrail data is flowing into Huntress SIEM, use these queries to search and analyze events.
from logs | where event.provider == "GenericHEC"
from logs
| where event.provider == "GenericHEC"
| where message contains "ConsoleLogin"
from logs
| where event.provider == "GenericHEC"
| where message contains "ConsoleLogin"
| where message contains "Failure"
from logs
| where event.provider == "GenericHEC"
| where message contains "\"type\":\"Root\""
NOTE: Generic HEC sources do not have ECS normalized fields. You will be searching against the raw message content. If Huntress adds a native AWS CloudTrail source type in the future, it will provide parsed fields and categories.
- Cosmistack Huntress Reseller Page
- Huntress HEC Documentation
- Huntress SIEM Troubleshooting
- AWS CloudTrail User Guide
- AWS Lambda Developer Guide
This project is licensed under the MIT License. All trademarks are the property of their respective owners. Huntress and the Huntress logo are trademarks of Huntress Labs, Inc. in the United States and other countries. AWS is a trademark of Amazon Web Services, Inc. or its affiliates in the United States and/or other countries. All other trademarks are the property of their respective owners. This guide is provided for informational purposes only and does not constitute an endorsement or partnership between Cosmistack, Inc. and Huntress Labs, Inc. or Amazon Web Services, Inc. Use of this guide is at your own risk.
Copyright (c) 2026 Cosmistack, Inc. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.