From Observability to Action: Using Falco for Kubernetes Threat Detection

Learn how to integrate Falco into your cluster for real-time alerts, custom rules, and enhanced runtime security.

Photo by Erik van Dijk on Unsplash

Modern cloud-native environments built on Kubernetes offer unprecedented flexibility and scalability, but with this power comes a new set of security challenges. Containers are ephemeral, workloads are dynamic, and threats can appear and disappear in seconds. In such an environment, traditional security approaches often fall short, especially when it comes to detecting and responding to runtime threats.

While observability tools like Prometheus, Loki, and OpenTelemetry are essential for monitoring the health and performance of applications, they are not designed to detect suspicious behavior. Metrics can tell you if CPU usage has spiked, and logs can show you what an application decided to print, but neither tells you if someone has just spawned a shell inside a running container or accessed sensitive files unexpectedly.

This is where Falco comes in.

Falco is a CNCF-graduated open source runtime security tool that watches your Kubernetes workloads like a security camera, monitoring system calls at the kernel level. It detects unexpected behavior in real time, such as a container trying to read /etc/shadow, spawn a shell, or open a network socket it shouldn’t, and generates alerts the moment it happens.

In this blog post, we’ll dive into how Falco works, how to deploy it in your Kubernetes cluster, how to write and customize detection rules, and how it fits into a broader cloud-native security strategy. Whether you’re new to Kubernetes security or looking to enhance your existing observability stack with actionable threat detection, this guide will help you get started.

What Is Falco?

Falco is an open-source runtime security tool designed to detect unexpected behavior in your containers and Kubernetes environments. Originally created by Sysdig, Falco was donated to the Cloud Native Computing Foundation (CNCF), where it has since graduated as a mature and production-ready project. This milestone reflects not only its reliability, but also its growing importance in the cloud-native ecosystem.

At its core, Falco acts like a security camera for your containers, continuously monitoring the Linux kernel for system calls and checking them against a customizable set of rules. It can detect actions such as:

  • A shell being spawned inside a container
  • A sensitive file being read
  • A new process being launched unexpectedly
  • A container making an outbound network connection

To achieve this, Falco operates at the kernel level and can leverage one of two data collection backends:

  • A kernel module that captures the stream of system call events
  • An eBPF probe, which collects the same events without loading a custom kernel module

Falco doesn’t stop at detection: it provides real-time alerts that can be forwarded to your preferred systems via integrations like Falcosidekick, making it easy to notify teams through Slack, Teams, Prometheus, or custom webhooks.

What sets Falco apart is its focus on behavioral detection rather than signature-based scanning. Instead of trying to match known bad files or images (like a vulnerability scanner), it observes what’s happening right now in your system, catching attacks and misconfigurations as they unfold, even if the container was initially clean.

This unique runtime capability makes Falco a powerful complement to static security tools, forming a more complete picture of your system’s security posture.

How Falco Detects Threats

So how does Falco actually know when something fishy is going on?

It all starts with system calls. These are the low-level requests every process makes when it wants to do something like read a file, launch a new command, or open a network socket. Normally, these calls fly under the radar, but Falco is listening in.

Thanks to its deep integration with the Linux kernel (via eBPF or a kernel module), Falco can see every system call made by every process on your node. That includes your containers, sidecars, and even processes running outside of Kubernetes pods. Once it sees these syscalls, Falco compares them against a set of rules that define what’s considered “suspicious” behavior.

What Do These Rules Look Like?

Falco rules are written in YAML and describe specific conditions to watch for. For example:

# Shell inside a container
- rule: Shell in Container
  desc: Detect shell activity inside a container
  condition: container.id != host and proc.name in (bash, sh, zsh)
  output: "Shell detected in container (user=%user.name command=%proc.cmdline)"
  priority: WARNING

This rule triggers if someone runs bash, sh, or zsh inside a container, something that usually shouldn’t happen in a well-behaved pod.

# Unexpected outbound network connection
- rule: Outbound Connection from Container
  desc: Detect outbound network connections from containers
  condition: container.id != host and evt.type = connect
  output: "Outbound connection detected (command=%proc.cmdline container=%container.name)"
  priority: NOTICE

Great for spotting malware or rogue services phoning home.

These are just two of many built-in rules. You can tweak them or write your own to match your environment. For example, you might want to alert when someone touches a specific secret file, or when a binary is executed in a restricted namespace.
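
For instance, here is a hedged sketch of what those two ideas could look like as custom rules. The file path and namespace name are placeholders, not part of the default ruleset, and the conditions reuse the open_read, spawned_process, and container macros that ship with the default rules:

# Hypothetical examples: adjust the path and namespace to your environment
- rule: Application Secret File Read
  desc: Detect reads of an application-specific secret file (placeholder path)
  condition: open_read and fd.name = /etc/myapp/credentials.json
  output: "Application secret read (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: WARNING
  tags: [filesystem, secrets]

- rule: Binary Executed in Restricted Namespace
  desc: Detect process execution in a namespace where it should not happen (placeholder namespace)
  condition: spawned_process and container and k8s.ns.name = restricted-ns
  output: "Unexpected binary executed (command=%proc.cmdline namespace=%k8s.ns.name container=%container.name)"
  priority: WARNING
  tags: [process, kubernetes]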

Real-Time, No Waiting Around

The moment a rule matches, Falco fires off an alert. And it’s fast, like, milliseconds fast. These alerts can be written to stdout, logs, or if you use Falcosidekick, sent straight to Slack, Teams, Discord, Prometheus, Grafana, Elasticsearch, or just about anywhere else you want.

This real-time capability is what makes Falco so powerful. It doesn’t just give you information, it gives you a chance to act before something escalates. It’s like having a tripwire inside every pod.

Installing Falco in Kubernetes

Alright, now that you know what Falco does and how it detects threats, let’s get it running in your Kubernetes cluster.

Thankfully, deploying Falco isn’t a massive project: you can get started in just a few minutes using Helm, the package manager for Kubernetes. Under the hood, Falco runs as a DaemonSet, meaning one instance is deployed to each node in your cluster to keep watch.

Add the Helm Repo

First, add the official Falco Helm chart repository:

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

Install Falco

Now, install Falco using the default values:

helm install falco falcosecurity/falco --namespace falco --create-namespace

This installs Falco as a DaemonSet, ensuring that every node in your cluster is monitored. Each pod will use either the kernel module or eBPF, depending on your environment. If you prefer to use eBPF (which is recommended on most modern distros), you can enable it via Helm values:

helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set driver.kind=ebpf

Tip: If you’re running managed Kubernetes (like GKE, EKS, or AKS), check compatibility with eBPF before switching from the default kernel module.

What About Permissions?

Since Falco needs low-level access to syscalls, it requires privileged access on each node. The Helm chart automatically sets up:

  • A DaemonSet to schedule Falco on each node
  • RBAC permissions to allow it to read Kubernetes metadata (pods, namespaces, etc.)
  • A privileged security context, so Falco can hook into the kernel

If you’re running a hardened cluster with strict PodSecurityPolicies or seccomp profiles, you may need to tweak some settings to allow Falco to start correctly.

Validate the Setup

Once deployed, check if the Falco pods are running:

kubectl get pods -n falco

You should see one pod per node. Now, try to trigger a basic rule, like opening a shell inside a container:

kubectl run -i --tty test --image=alpine -- sh

If Falco is working, you’ll see alerts in the logs:

kubectl logs -n falco -l app.kubernetes.io/name=falco

You should see something like:

23:05:14.954043974: Notice A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=0 user_loginuid=-1 process=sh proc_exepath=/bin/busybox parent=runc command=sh terminal=34817 exe_flags=EXE_WRITABLE|EXE_LOWER_LAYER container_id=395dbcf999e1 container_image=docker.io/library/alpine container_image_tag=latest container_name=test k8s_ns=default k8s_pod_name=test)

🎉 Boom. You’ve just detected your first runtime event with Falco.

Other ways to install Falco

If you want to avoid giving privileged access to a pod inside your Kubernetes cluster, you can also install Falco directly on your Linux nodes, either as a system service or by running the official Docker container outside of Kubernetes.

This approach can improve your security posture by keeping Falco outside the cluster entirely, reducing its exposure and eliminating the need for privileged containers. However, there’s a tradeoff: you’ll need to manage configuration, rules, and updates manually on each node. That adds complexity, especially as your cluster scales.

Still, if you’re running a small cluster or want to isolate security tooling from in-cluster workloads, running Falco outside of Kubernetes is a perfectly valid and well-supported option.
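
For example, on a Debian or Ubuntu node you could install Falco as a host package and run it as a systemd service. A rough sketch of the apt-based route (repository paths and packaging details change over time, so verify against the official install docs first):

# Add the Falco package repository and its signing key
curl -fsSL https://falco.org/repo/falcosecurity-packages.asc | \
  sudo gpg --dearmor -o /usr/share/keyrings/falco-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/falco-archive-keyring.gpg] https://download.falco.org/packages/deb stable main" | \
  sudo tee /etc/apt/sources.list.d/falcosecurity.list

# Install the package; the installer prompts for a driver (kernel module or eBPF)
# and sets up a systemd service whose exact unit name depends on that choice
sudo apt-get update && sudo apt-get install -y falco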

Customizing Falco Rules

One of the things that makes Falco so powerful (and fun to work with) is its rule engine. You’re not stuck with just the built-in rules: Falco gives you the tools to define exactly what “suspicious” means in your environment. Whether it’s someone poking at /etc/shadow, installing packages at runtime, or running a shell in a container, you get to decide what matters.

How the Rule Engine Works

Falco rules are written in YAML and structured around a few core pieces:

  • Conditions – logic that matches system call events
  • Rules – definitions that describe what to look for
  • Outputs – the alert message shown when a rule matches
  • Priorities – the severity level (INFO, NOTICE, WARNING, ERROR, CRITICAL)

Behind the scenes, Falco processes each syscall event and evaluates it against all active rules. If the condition matches, an alert is generated.

Here’s a simplified version of what a rule looks like:

- rule: Read Sensitive File
  desc: Alert when someone tries to read a sensitive file like /etc/shadow
  condition: open_read and fd.name=/etc/shadow
  output: "Sensitive file read (user=%user.name command=%proc.cmdline)"
  priority: ERROR
  tags: [filesystem, sensitive]

This rule watches for any read access to /etc/shadow, which contains password hashes on most Linux systems. If a process opens that file, boom, Falco throws an alert.

TIP: You can use macros and lists to make rules more reusable and readable. For example, you could define a list of sensitive files and reuse it across rules.
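
As a small illustrative sketch of that pattern (the file list is an example, not the stock ruleset):

- list: my_sensitive_files
  items: [/etc/shadow, /etc/sudoers, /root/.ssh/id_rsa]

- macro: read_my_sensitive_file
  condition: open_read and fd.name in (my_sensitive_files)

- rule: Sensitive File Opened for Reading
  desc: Detect reads of any file on the my_sensitive_files list
  condition: read_my_sensitive_file and container.id != host
  output: "Sensitive file read (file=%fd.name user=%user.name command=%proc.cmdline)"
  priority: ERROR
  tags: [filesystem, sensitive]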

Writing Your Own Rule

Let’s say you want to be notified when someone tries to run nc (netcat) inside a container, a common tool used for reverse shells or exfiltration. Here’s how you could write it:

- rule: Netcat Executed in Container
  desc: Detect use of netcat inside a container
  condition: container and proc.name = nc
  output: "Netcat detected in container (command=%proc.cmdline container=%container.name)"
  priority: WARNING
  tags: [network, container, suspicious]

To use your custom rule, you can create a file like custom-rules.yaml and mount it as a volume into your Falco DaemonSet, or append your rule to /etc/falco/falco_rules.local.yaml if you’re running Falco directly on the host.

If you’re using Helm, you can pass custom rules by creating a custom-rules.yaml:

customRules:
  rules-custom.yaml: |-
    - rule: Netcat Executed in Container
      desc: Detect use of netcat inside a container
      condition: container and proc.name = nc
      output: "Netcat detected in container (command=%proc.cmdline container=%container.name)"
      priority: WARNING
      tags: [network, container, suspicious]

Then install or upgrade Falco with that values file:

helm upgrade --install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set driver.kind=ebpf \
  -f custom-rules.yaml

Testing & Tuning for False Positives

Now comes the tricky part: tuning.

Security tools are only useful if they don’t drown you in noise. Falco is powerful, but out of the box, some rules may be too broad, or too noisy, for your environment.

A good workflow looks like this:

  1. Start in warning mode – Don’t panic, just log alerts for a while.
  2. Watch the logs – See which alerts show up often and decide if they’re real concerns.
  3. Refine your rules – Add more specific conditions, exclude known-good processes or namespaces, and adjust priorities.

For example, maybe you’ve got an init container that reads a file normally flagged as sensitive. Instead of disabling the rule, you can tweak the condition:

condition: open_read and fd.name=/etc/shadow and container.name != init-reader

This way, you stay protected without silencing the rule entirely.

Falco rules are expressive, lightweight, and surprisingly readable once you get the hang of them. They turn observability into insight and insight into action.

Next up: let’s look at how to send these alerts somewhere useful. Because let’s be honest… reading logs in the terminal all day isn’t anyone’s idea of fun.

Alerting & Integrations

Okay, so you’ve got Falco running and it’s catching suspicious activity, nice! But staring at logs or kubectl logs -n falco ... all day isn’t practical. If you really want to turn detections into action, you need to route those alerts somewhere useful.

The good news? Falco gives you several ways to handle alerts, and there’s a fantastic tool called Falcosidekick that makes integrations super flexible.

Basic Output Options

Out of the box, Falco can send alerts to:

  • stdout – simple and useful if you’re running Falco in a container
  • Files – logs written to /var/log/falco.log by default
  • Syslog – send alerts to the system log
  • Webhook – send JSON-formatted alerts to a custom endpoint
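
These outputs are toggled in Falco’s falco.yaml configuration file (or the matching Helm values). A minimal sketch, assuming an in-cluster Falcosidekick service as the webhook target (the service URL is a placeholder):

# Emit alerts as JSON so downstream tools can parse them
json_output: true

# Write alerts to a local log file
file_output:
  enabled: true
  filename: /var/log/falco.log

# Forward alerts to an HTTP endpoint, e.g. a Falcosidekick service
http_output:
  enabled: true
  url: http://falcosidekick.falco.svc.cluster.local:2801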

Here’s a default-style alert in the logs:

23:01:45.123456781: Warning Shell detected in container (user=root command=sh container=test)

That’s fine for testing or dev setups, but in production, you probably want alerts sent to Slack, Microsoft Teams, Discord, or a monitoring system like Prometheus or Grafana.

Meet Falcosidekick: Your Alerting Sidekick

Falcosidekick is a lightweight Go service that listens for Falco’s webhook output and forwards alerts to more than 50 services, including:

  • Slack, Microsoft Teams, Discord, Telegram
  • Prometheus, Alertmanager, Grafana Loki
  • Opsgenie, PagerDuty, Datadog, Elasticsearch
  • Webhooks, MQTT, AWS Lambda, and more

To set it up:

  1. Deploy Falcosidekick alongside Falco (can be in the same namespace)
  2. Configure Falco to send alerts to it via webhook
  3. Configure Falcosidekick with your desired integrations (via environment variables or config file)

Helm makes this process even smoother. Here’s a quick example using Helm to deploy Falco with Falcosidekick:

helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/services/..." \
  --set falcosidekick.config.slack.channel="#alerts"

Boom, you’ve got runtime security alerts landing in Slack.

Bonus: Falcosidekick also includes a UI for viewing and filtering alerts in real time, which is super handy during testing or incident response.

Choosing the Right Alert Routing Strategy

Not all alerts are created equal. Some you’ll want immediately in Slack or Teams. Others, like lower-priority notices, might be better sent to Grafana Loki or Elasticsearch for later analysis.

Here are a few common strategies:

  • Critical alerts (e.g., shell inside container, sensitive file access): send to Slack, Teams, or PagerDuty with high visibility
  • Warning-level alerts (e.g., use of netcat, unexpected binary execution): log to Loki or Elasticsearch, optionally route to a shared channel
  • Informational alerts (e.g., Kubernetes API activity, exec into a pod): store in Prometheus or a dashboard for trend analysis

Using labels, tags, or custom rule priorities, you can filter and route alerts differently based on their severity or source.
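
With the Helm chart, one hedged way to express that split is Falcosidekick’s per-output minimumpriority setting, so each destination only receives alerts above a chosen severity (the endpoints below are placeholders):

falcosidekick:
  enabled: true
  config:
    slack:
      # Only error-and-above reaches the on-call channel
      webhookurl: "https://hooks.slack.com/services/..."
      minimumpriority: "error"
    loki:
      # Lower-priority events go to Loki for later analysis
      hostport: "http://loki.monitoring.svc.cluster.local:3100"
      minimumpriority: "notice"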

Falco’s real-time detection is powerful, but it’s the integrations that really bring it to life. Connecting it to the tools your team already uses ensures you actually see the alerts, and more importantly, respond in time.

Up next: let’s check out some real-world use cases and show how Falco makes a difference when things go sideways.

Real-World Use Cases

Alright, we’ve talked theory, setup, and rules. But let’s get into the real stuff. When does Falco actually shine? What kinds of threats can it help you catch in the wild?

Here are a few examples of real-world scenarios where Falco has your back.

Detecting Container Escapes or Privilege Escalation

Container escapes are the nightmare scenario: an attacker breaks out of a container and gains access to the host system. That’s game over, unless you catch it in time.

With Falco running on every node, you can detect suspicious system calls that indicate something’s gone very wrong.

Example:
An attacker breaks into a vulnerable pod, then uses a kernel exploit to spawn a root shell outside the container context. Falco spots the anomaly:

Shell detected outside container (user=root command=bash parent=<some process>)

You can also detect unexpected privilege escalations, for example, if a process suddenly gains root access via sudo, setuid, or capabilities. These are the breadcrumbs that real-world attackers leave behind, and Falco is watching.
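
As a rough starting point for the setuid case (this is not the stock rule, and in practice you’d keep excluding legitimate callers until it goes quiet):

- rule: Setuid Called in Container
  desc: Detect setuid system calls inside containers, excluding common legitimate callers
  condition: evt.type = setuid and evt.dir=> and container.id != host and not proc.name in (sudo, su)
  output: "setuid called in container (user=%user.name command=%proc.cmdline container=%container.name)"
  priority: NOTICE
  tags: [privilege_escalation, container]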

Monitoring for Unexpected Binaries or Crypto Miners

Ever had a pod suddenly start mining crypto? Hopefully not, but it happens more often than you’d think, especially in public clusters or systems exposed to the internet.

Falco can alert you if a process runs a suspicious binary, like xmrig, wget, curl, or even a base64-decoded payload.

Example Rule:

- rule: Known Crypto Miner Executed
  desc: Detect execution of common crypto mining tools
  condition: proc.name in (xmrig, minerd)
  output: "Possible crypto miner running (command=%proc.cmdline)"
  priority: CRITICAL

Even if an attacker brings in their own tools, they can’t hide from syscall-level monitoring.
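
Beyond matching specific process names, you can also flag execution from unusual locations, which catches tools an attacker downloads on the fly. A rough illustrative sketch:

- rule: Binary Executed From Tmp
  desc: Detect execution of binaries located under /tmp inside a container
  condition: spawned_process and container and proc.exepath startswith /tmp/
  output: "Binary executed from /tmp (exe=%proc.exepath command=%proc.cmdline container=%container.name)"
  priority: WARNING
  tags: [process, container, suspicious]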

Combining Falco with OPA, Kyverno & Friends

Falco doesn’t replace tools like Open Policy Agent (OPA) or Kyverno, but it complements them beautifully.

  • OPA/Kyverno: Prevent things from happening by enforcing policies at admission time.
  • Falco: Detect suspicious behavior at runtime, even if something slips through.

Let’s say someone finds a way to exec into a pod after it’s deployed. OPA and Kyverno can’t stop that; it’s out of their scope. But Falco sees it:

Exec into container detected (command=sh user=root pod=my-api-pod)

Better yet, you can chain them together:

  • Falco detects the incident
  • Falcosidekick sends a webhook
  • A controller triggers remediation (like quarantining the pod or revoking credentials)

This “observe-and-react” combo can turn your cluster into a self-healing security system, something many large-scale platforms are moving toward.

Falco isn’t just a fancy log generator. It’s a runtime watchdog that gives you visibility where other tools can’t: deep inside the container, during the moment something happens.

Next up, let’s explore how Falco compares to other tools, and where it fits into your broader security stack.

Comparing Falco to Other Tools

The cloud-native security landscape is growing fast, and there’s no shortage of tools out there. So where exactly does Falco fit in? And how does it compare to other popular solutions like Trivy, Tetragon, or Cilium Hubble?

Let’s break it down.

Falco vs. Static Scanning Tools (like Trivy)

Trivy is a fantastic tool for vulnerability scanning. It checks your container images, Git repositories, IaC files, and more for known issues before anything is deployed.

But here’s the thing:
Once a container is up and running, Trivy steps aside. It’s great at telling you, “Hey, this image has a known CVE,” but it won’t catch someone spawning a shell in a running pod or downloading suspicious binaries at runtime.

That’s where Falco comes in.

Think of it like this:

  • Trivy = security guard at the front door checking ID
  • Falco = motion detector inside the house watching for unexpected movement

They don’t compete, they complement each other beautifully.

Falco vs. eBPF-Based Projects (like Tetragon, Cilium Hubble)

eBPF is a hot topic right now, and for good reason. Tools like Tetragon (from Cilium) and Hubble offer powerful kernel-level observability using eBPF to track networking, process activity, and more.

So how does Falco compare?

  • Tetragon is built more for fine-grained observability and enforcement (e.g., blocking a process from spawning).
  • Hubble is focused on network visibility, helping you understand which pods are talking to each other.
  • Falco is focused on detecting abnormal behavior at runtime, using a rule engine designed for security alerts, not just observability.

Also worth noting: Falco can run in both kernel module and eBPF mode, so it’s already part of the same tech wave. It just leans more into security use cases out of the box.

Why You Need Both: Runtime + Static Analysis

No single tool can cover everything. That’s why the strongest security strategies combine:

  • Static analysis (Trivy, Snyk, Checkov) to catch known vulnerabilities and misconfigurations before deployment
  • Runtime detection (Falco, Tetragon) to catch real-time attacks, misuse, or drift after deployment

Put simply: prevention is ideal, but detection is essential.

Falco doesn’t replace your scanners or policies; it backs them up. It catches the things that slip through, whether that’s an attacker exploiting a zero-day or a developer doing something they shouldn’t.

Best Practices for Using Falco Effectively

Falco is incredibly powerful, but like any security tool, it needs a bit of tuning to truly shine in your environment. Without some care, you might end up with too many alerts (hello, alert fatigue 👋) or miss critical events because of poor rule management.

Here are some tried-and-true best practices to make Falco work for you, not against you.

Tune the Rules to Fit Your Environment

Falco ships with a large set of default rules. That’s awesome for getting started, but not every rule is going to make sense for your workloads.

For example:

  • A shell in a container might be fine in your debugging namespace.
  • Running curl inside a container might be normal during image pulls or tests.

Instead of disabling these rules entirely, tweak them to ignore known-good behaviors. Add conditions like:

container.name != "debug-tools"

or

user.name != "devops"
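
Applied to the earlier Shell in Container example, the tweaked rule might look like this (the container and user names are the placeholders from above):

- rule: Shell in Container
  desc: Detect shell activity inside a container, ignoring known-good cases
  condition: >
    container.id != host and proc.name in (bash, sh, zsh)
    and container.name != "debug-tools"
    and user.name != "devops"
  output: "Shell detected in container (user=%user.name command=%proc.cmdline)"
  priority: WARNING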

Start small. Monitor what Falco flags in staging first, then adapt the rules before rolling them into production.

Reduce the Noise Before It Hits Your Team

The key to avoiding alert fatigue? Don’t send every alert to Slack or PagerDuty.

Instead, create alert routing strategies:

  • Critical alerts go to Slack, Teams, or PagerDuty
  • Low-priority or frequent alerts go to a dashboard, like Grafana or Kibana
  • Everything else gets logged or forwarded to long-term storage for later review

Falcosidekick helps a lot here: you can route based on alert priority, source, or even rule name. That way, your team only gets notified when it really matters.

Manage Rules the GitOps Way

If you’re already using GitOps (with tools like ArgoCD or Flux), you’ll love this one: treat your Falco rules as code.

Store your custom rules in Git and version them like any other part of your infrastructure. Benefits:

  • Auditability – know who changed what and why
  • Testing – validate changes in staging first
  • Rollback – quickly revert a bad rule change
  • Automation – auto-deploy new rules with your CD pipeline

If you’re using Helm to deploy Falco, you can point to your custom rules file with --set-file or use a ConfigMap that’s managed by your GitOps tool.

Here’s a quick example:

helm upgrade falco falcosecurity/falco \
  --set-file customRules.customRulesFile=./rules/custom-rules.yaml

This workflow makes it easy to iterate and keep rules consistent across clusters.

Pro tip: Set up a CI job that lints or validates your Falco rules before merging into main. Nothing’s worse than pushing a broken YAML file into production.
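
As a hedged example of such a check, you can ask Falco itself to validate a rules file and exit, failing the pipeline on syntax errors (this assumes the falco binary or container image is available in CI; flags can vary by release, so check falco --help):

# Exits non-zero if the rules file contains invalid YAML or rule syntax
falco -V ./rules/custom-rules.yaml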

Falco is like a super-sensitive microphone: it hears everything. But with the right rule tuning, noise filtering, and GitOps-style management, you can turn that noise into signal and that signal into fast, informed action.

Conclusion

As Kubernetes adoption continues to grow, so does the complexity of keeping workloads secure. Traditional tools like vulnerability scanners and admission controllers are important, but they only cover part of the story. Once your pods are up and running, you’re flying blind unless you have runtime visibility.

That’s where Falco comes in.

Falco gives you eyes inside your containers. It watches for suspicious system calls, detects unexpected behavior in real time, and helps you respond before small issues become big incidents. Whether it’s someone spawning a shell in a container, a crypto miner sneaking into your cluster, or a misconfigured pod accessing sensitive files, Falco’s on it.

And the best part? It’s open source, CNCF-graduated, and easy to get started with.

Ready to Give Falco a Try?

Here are a few actionable steps to get going:

  1. Install Falco in a test cluster using Helm

    📘 Falco Helm Chart Docs
  2. Trigger some test rules (e.g., open a shell inside a container)

    kubectl run -i --tty test --image=alpine -- sh

  3. Explore Falco logs and alerts

    Check logs with kubectl logs -n falco -l app.kubernetes.io/name=falco

  4. Install Falcosidekick to forward alerts to your favorite tools

  5. Write your first custom rule to match a behavior unique to your environment

Falco won’t replace your existing security stack; it completes it. And in a world where attacks move fast and containers are short-lived, having real-time runtime detection is no longer a “nice to have.” It’s essential.

Give Falco a spin, and turn your observability into action.


Don’t Trust Me - Seriously

The author takes no responsibility for any mishaps, broken servers, or existential crises caused by following this information.

Found a mistake? Open an issue or PR on GitHub, or ping me on Mastodon/LinkedIn/Twitter. Let’s improve it together.

Also, this isn’t an ad - unless my enthusiasm and advocacy for cool stuff count as advertising.