
AWS Security Hub + Config in 2026: A Lean Misconfiguration Detection Pipeline for Small Teams

Why this still matters in 2026

Most cloud incidents I see in small and midsize environments still come from configuration drift, not zero-days: a snapshot left public, an IAM policy widened during a late-night change window, or logging disabled “temporarily” and never restored. AWS gives you enough native controls to catch a lot of this early, but teams often wire them together in a heavyweight way and then abandon the setup because it is noisy and expensive.

This guide is a practical middle path: use AWS Config for state tracking, Security Hub for normalized findings, and EventBridge for targeted escalation. No big SIEM assumptions, no 40-service architecture diagram, just enough to detect what actually hurts.

What this pipeline does (and what it does not)

The goal is simple: detect high-impact misconfigurations quickly and route only actionable findings to humans. This is not a full threat detection stack. It will not replace runtime EDR, workload IDS, or deep forensic tooling. It is a control-plane hygiene pipeline.

Best fit:

  • 1-3 cloud engineers covering security part-time
  • Single account or a modest AWS Organization
  • Need measurable security signal in days, not quarters

Step 1: Start with a focused Config rule set

A common mistake is enabling every managed rule on day one. You get alert fatigue, then you mute everything. Start with a curated set tied to real blast-radius issues:

  • S3 buckets with public read/write access
  • CloudTrail disabled or not multi-region
  • Root account MFA absent
  • Security groups exposing management ports to 0.0.0.0/0
  • RDS and EBS encryption not enforced

Example CLI flow for recorder and delivery channel (adjust names/roles):

aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/awsconfig-role \
  --recording-group allSupported=true,includeGlobalResourceTypes=true

aws configservice put-delivery-channel \
  --delivery-channel name=default,s3BucketName=my-config-logs-bucket

aws configservice start-configuration-recorder \
  --configuration-recorder-name default
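
With the recorder running, register rules. A minimal sketch for the first item in the list above, using AWS's managed rule identifier (the ConfigRuleName is just a label you choose):

aws configservice put-config-rule \
  --config-rule '{
    "ConfigRuleName": "s3-public-read-prohibited",
    "Source": {
      "Owner": "AWS",
      "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
    }
  }'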

Use infrastructure-as-code for rule definitions once your baseline stabilizes. Manual console setup is fine for a pilot, but drift starts there too.

Step 2: Enable Security Hub and map severity expectations

Security Hub aggregates findings from Config and other integrated services into a single schema, the AWS Security Finding Format (ASFF). That normalization is the real value: you can build one triage workflow instead of parsing ten data models.
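
For orientation, a finding trimmed to the fields this pipeline actually uses might look like this (all values are illustrative placeholders):

{
  "SchemaVersion": "2018-10-08",
  "Id": "arn:aws:securityhub:us-east-1:123456789012:finding/example",
  "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/config",
  "Title": "S3 bucket allows public read access",
  "Severity": { "Label": "HIGH" },
  "Resources": [{ "Type": "AwsS3Bucket", "Id": "arn:aws:s3:::example-bucket" }],
  "Workflow": { "Status": "NEW" },
  "RecordState": "ACTIVE"
}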

Before integrating, define an internal severity mapping. AWS's “HIGH” label is useful, but your business-impact context is what decides paging. For example, a public S3 bucket in a sandbox account may warrant a ticket only, while the same finding in production with customer data should trigger immediate response.
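
One way to make that mapping explicit is a small lookup table your routing automation can consume; the tiers and actions here are purely illustrative:

{
  "prod":    { "CRITICAL": "page",   "HIGH": "page",   "MEDIUM": "ticket" },
  "staging": { "CRITICAL": "page",   "HIGH": "ticket", "MEDIUM": "ticket" },
  "sandbox": { "CRITICAL": "ticket", "HIGH": "ticket", "MEDIUM": "ignore" }
}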

aws securityhub enable-security-hub

aws securityhub batch-enable-standards \
  --standards-subscription-requests StandardsArn=arn:aws:securityhub:us-east-1::standards/aws-foundational-security-best-practices/v/1.0.0

The standards ARN is regional, so replace us-east-1 with the region where you enabled Security Hub.

Step 3: Route only high-value findings with EventBridge

Do not send everything to Slack/Teams/email. Filter aggressively at the bus. A clean strategy is:

  • Auto-ticket MEDIUM findings for business-hour review
  • Notify on-call only for HIGH/CRITICAL with exploitable exposure
  • Suppress known accepted-risk resources using tags and rule exceptions (a suppression sketch follows this list)
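
For the suppression piece, batch-update-findings does the bookkeeping; the finding identifier below is a placeholder:

aws securityhub batch-update-findings \
  --finding-identifiers Id=arn:aws:securityhub:us-east-1:123456789012:finding/example,ProductArn=arn:aws:securityhub:us-east-1::product/aws/securityhub \
  --workflow Status=SUPPRESSED \
  --note 'Text=Accepted risk: sandbox test bucket,UpdatedBy=cloud-team'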

Event pattern concept:

{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "Severity": { "Label": ["HIGH", "CRITICAL"] },
      "RecordState": ["ACTIVE"],
      "Workflow": { "Status": ["NEW", "NOTIFIED"] }
    }
  }
}

Then target a Lambda function or your incident webhook, embedding resource metadata and a runbook URL in the payload.
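
Wiring that up with the CLI might look like the following; the rule name, pattern file, and function ARN are placeholders, and the Lambda needs a resource policy allowing EventBridge to invoke it:

aws events put-rule \
  --name sechub-high-critical \
  --event-pattern file://pattern.json

aws events put-targets \
  --rule sechub-high-critical \
  --targets Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:sechub-escalator

aws lambda add-permission \
  --function-name sechub-escalator \
  --statement-id allow-eventbridge \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/sechub-high-critical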

Step 4: Build a weekly “drift debt” review

Real teams grant emergency exceptions; the risk comes when exceptions become permanent. Once a week, review the following (a query sketch for the suppression item follows the list):

  • Top recurring misconfiguration types
  • Findings left in SUPPRESSED state for more than 30 days
  • Accounts/OUs with rising finding velocity
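
Pulling the suppressed set is one CLI call; the JMESPath query below just trims the output, and age-checking the Updated column is left to your wrapper script:

aws securityhub get-findings \
  --filters '{"WorkflowStatus":[{"Value":"SUPPRESSED","Comparison":"EQUALS"}]}' \
  --query 'Findings[].{Id:Id,Title:Title,Updated:UpdatedAt}' \
  --output table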

If a finding category keeps coming back, fix the provisioning path (Terraform module, Service Catalog constraint, SCP), not the symptom.
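
For example, if the “CloudTrail disabled” finding keeps recurring, a small SCP removes the failure mode at the source instead of detecting it after the fact (a minimal illustration; attach to the relevant OU):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCloudTrailTampering",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    }
  ]
}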

Trade-offs and caveats

Config bills per configuration item recorded, so it can get expensive in high-churn environments; scope recording for ephemeral resources if needed (see the sketch below). Security Hub can also feel noisy if every integrated source is enabled without policy. Start narrow, prove value, then expand.
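
If recording costs become a problem, one lever is to record only the resource types your rules actually evaluate. A sketch, reusing the recorder from Step 1 (the type list is illustrative, not a recommendation):

aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/awsconfig-role \
  --recording-group '{
    "allSupported": false,
    "includeGlobalResourceTypes": false,
    "resourceTypes": [
      "AWS::S3::Bucket",
      "AWS::EC2::SecurityGroup",
      "AWS::CloudTrail::Trail",
      "AWS::RDS::DBInstance",
      "AWS::EC2::Volume"
    ]
  }'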

Also, remember that this stack is mostly preventive/detective for configuration posture. It does not see everything happening inside instances or containers. Pair it with runtime controls where the data sensitivity justifies it.

A practical success metric

Don’t measure success by “number of findings.” Measure by mean time from risky change to human awareness, and by repeat-rate reduction for the same control failure. If your public exposure findings drop quarter over quarter and your on-call is not drowning, the pipeline is doing its job.