diff --git a/charts/falco/config b/charts/falco/config index ee4a4a829..e7c51af7d 100644 --- a/charts/falco/config +++ b/charts/falco/config @@ -4,7 +4,7 @@ export USE_OPENSOURCE_CHART=false export REPO_URL=https://falcosecurity.github.io/charts export REPO_NAME=falcosecurity export CHART_NAME=falco -export VERSION=2.0.17 +export VERSION=4.8.3 # pr, issue, none export UPGRADE_METHOD=pr diff --git a/charts/falco/falco/Chart.yaml b/charts/falco/falco/Chart.yaml index 4c4db300f..ff01426e1 100644 --- a/charts/falco/falco/Chart.yaml +++ b/charts/falco/falco/Chart.yaml @@ -1,5 +1,5 @@ apiVersion: v2 -appVersion: 0.32.2 +appVersion: 0.38.2 description: Falco home: https://falco.org icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/falco/horizontal/color/falco-horizontal-color.svg @@ -16,8 +16,8 @@ maintainers: name: falco sources: - https://github.com/falcosecurity/falco -version: 2.0.17 +version: 4.8.3 dependencies: - name: falco - version: "2.0.17" + version: "4.8.3" repository: "https://falcosecurity.github.io/charts" diff --git a/charts/falco/falco/README.md b/charts/falco/falco/README.md index 0859ee03d..b6be5baef 100644 --- a/charts/falco/falco/README.md +++ b/charts/falco/falco/README.md @@ -4,7 +4,11 @@ ## Introduction -The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. Based on the configuration in `values.yaml` file, the chart will render and install the required k8s objects. Keep in mind that Falco could be deployed in your cluster using a `daemonset` or a `deployment`. See next sections for more info. +The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. 
Based on the configuration in the [values.yaml](./values.yaml) file, the chart will render and install the required k8s objects. Keep in mind that Falco could be deployed in your cluster using a `daemonset` or a `deployment`. See next sections for more info. + +## Attention + +Before installing Falco in a Kubernetes cluster, a user should check that the kernel version used in the nodes is supported by the community. Also, before reporting any issue with Falco (missing kernel image, CrashLoopBackOff and similar), make sure to read the [about the driver](#about-the-driver) section and adjust your setup as required. ## Adding `falcosecurity` repository @@ -20,7 +24,9 @@ helm repo update To install the chart with the release name `falco` in namespace `falco` run: ```bash -helm install falco falcosecurity/falco --namespace falco --create-namespace +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco ``` After a few minutes Falco instances should be running on all your nodes. The status of Falco pods can be inspected through *kubectl*: @@ -35,62 +41,160 @@ falco-57w7q 1/1 Running 0 3m12s 10.244.0.1 control-plane falco-h4596 1/1 Running 0 3m12s 10.244.1.2 worker-node-1 falco-kb55h 1/1 Running 0 3m12s 10.244.2.3 worker-node-2 ``` -The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in `values.yaml` of our helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod in each node. -> **Tip**: List Falco release using `helm list -n falco`, a release is a name used to track a specific deployment +The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in [values.yaml](./values.yaml) of our helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod in each node.
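The `daemonset` behavior comes from the chart's controller settings. A minimal sketch of the relevant values (shown here for illustration; verify the exact default against your chart version's [values.yaml](./values.yaml)):

```yaml
# Sketch of the chart's controller setting: a daemonset places one Falco pod per node.
controller:
  kind: daemonset
```

Setting `controller.kind` to `deployment` instead is useful when Falco consumes only plugin event sources.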
+> **Tip**: List Falco releases using `helm list -n falco`; a release is a name used to track a specific deployment. ### Falco, Event Sources and Kubernetes -Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel** trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in kubernetes clusters. Since Falco 0.32.0 all the related code to the k8s Audit Logs in Falco was removed and ported in a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). At the time being Falco supports different event sources coming from **plugins** or the **drivers** (system events). +Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel** trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in Kubernetes clusters. Since Falco 0.32.0 all the code related to the k8s Audit Logs was removed from Falco and ported to a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). At the moment Falco supports different event sources coming from **plugins** or **drivers** (system events). + +Note that **a Falco instance can handle multiple event sources in parallel**. You can deploy Falco leveraging **drivers** for syscall events and at the same time loading **plugins**. A step-by-step guide on how to deploy Falco with multiple sources can be found [here](https://falco.org/docs/getting-started/third-party/learning/#falco-with-multiple-sources).
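As an illustrative sketch of such a multiple-sources setup (the plugin names here are assumptions; see the plugin and falcoctl sections below for complete, working configurations), a values fragment combining a driver with plugins could look like:

```yaml
# Sketch: one Falco instance consuming syscall events and plugin events in parallel.
driver:
  enabled: true                    # keep the syscall driver active
falco:
  load_plugins: [k8saudit, json]   # plugins loaded in the same instance
```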
+ +#### About Drivers + +Falco needs a **driver** to analyze the system workload and pass security events to userspace. The supported drivers are: + +* [Kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) +* [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) +* [Modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe) + +The driver should be installed on the node where Falco is running. The _kernel module_ (default option) and the _eBPF probe_ are installed on the node through an *init container* (i.e. `falco-driver-loader`) that tries to download a prebuilt driver or build it on-the-fly as a fallback. The _Modern eBPF probe_ doesn't require an init container because it is shipped directly into the Falco binary. However, the _Modern eBPF probe_ requires [recent BPF features](https://falco.org/docs/event-sources/kernel/#modern-ebpf-probe). + +##### Pre-built drivers -Note that **multiple event sources can not be handled in the same Falco instance**. It means, you can not have Falco deployed leveraging **drivers** for syscalls events and at the same time loading **plugins**. Here you can find the [tracking issue](https://github.com/falcosecurity/falco/issues/2074) about multiple **event sources** in the same Falco instance. -If you need to handle **syscalls** and **plugins** events than consider deploying different Falco instances, one for each use case. -#### About the Driver +The [kernel-crawler](https://github.com/falcosecurity/kernel-crawler) automatically discovers kernel versions and flavors. Currently, it runs weekly. We have a site where users can check for the discovered kernel flavors and versions, [example for Amazon Linux 2](https://falcosecurity.github.io/kernel-crawler/?arch=x86_64&target=AmazonLinux2).
-Falco needs a **driver** (the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) or the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe)) that taps into the stream of system calls and passes that system calls to Falco. The driver must be installed on the node where Falco is running. +The discovery of a kernel version by the [kernel-crawler](https://falcosecurity.github.io/kernel-crawler/) does not imply that pre-built kernel modules and bpf probes are available. That is because once kernel-crawler has discovered new kernels versions, the drivers need to be built by jobs running on our [Driver Build Grid infra](https://github.com/falcosecurity/test-infra#dbg). Please keep in mind that the building process is based on best effort. Users can check the existence of prebuilt modules at the following [link](https://download.falco.org/driver/site/index.html?lib=3.0.1%2Bdriver&target=all&arch=all&kind=all). -By default the drivers are managed using an *init container* which includes a script (`falco-driver-loader`) that either tries to build the driver on-the-fly or downloads a prebuilt driver as a fallback. Usually, no action is required. +##### Building the driver on the fly (fallback) -If a prebuilt driver is not available for your distribution/kernel, Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation. +If a prebuilt driver is not available for your distribution/kernel, users can build the driver by them self or install the kernel headers on the nodes, and the init container (falco-driver-loader) will try and build the driver on the fly. -### About Plugins +Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. 
You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation. + +##### Selecting a different driver loader image + +Note that since Falco 0.36.0 and Helm chart version 3.7.0 the driver loader image has been updated to be compatible with newer kernels (5.x and above). This means that if you have an older kernel version and are trying to build the kernel module, you may experience issues. In that case you can use the `falco-driver-loader-legacy` image to use the previous version of the toolchain. To do so you can set the appropriate value, i.e. `--set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy`. + +#### About Plugins + +[Plugins](https://falco.org/docs/plugins/) are used to extend Falco to support new **data sources**. The current **plugin framework** supports *plugins* with the following *capabilities*: + +* Event sourcing capability; +* Field extraction capability; + +Plugin capabilities are *composable*: we can have a single plugin with both capabilities. Or, on the other hand, we can load two different plugins, each with its own capability: one plugin as a source of events and another as an extractor.
A good example of this is the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) and the [Falcosecurity Json](https://github.com/falcosecurity/plugins/tree/master/plugins/json) *plugins*. By deploying them both we have support for the **K8s Audit Logs** in Falco. + +Note that **the driver is not required when using plugins**. + +#### About gVisor +gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. For more information please consult the [official docs](https://gvisor.dev/docs/). In version `0.32.1`, Falco first introduced support for gVisor by leveraging the stream of system call information coming from gVisor. +Falco requires the version of [runsc](https://gvisor.dev/docs/user_guide/install/) to be equal to or above `20220704.0`. The following snippet shows the gVisor configuration variables found in [values.yaml](./values.yaml): +```yaml +driver: + gvisor: + enabled: true + runsc: + path: /home/containerd/usr/local/sbin + root: /run/containerd/runsc + config: /run/containerd/runsc/config.toml +``` +Falco uses the [runsc](https://gvisor.dev/docs/user_guide/install/) binary to interact with sandboxed containers. The following variables need to be set: +* `runsc.path`: absolute path of the `runsc` binary in the k8s nodes; +* `runsc.root`: absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores the information of the workloads it handles there; +* `runsc.config`: absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence. -Note that **the driver is not required when using plugins**. When *plugins* are enabled Falco is deployed without the *init container*.
+If you want to know more about how Falco uses those configuration paths please have a look at the `falco.gvisor.initContainer` helper in [helpers.tpl](./templates/_helpers.tpl). +A preset `values.yaml` file [values-gvisor-gke.yaml](./values-gvisor-gke.yaml) is provided and can be used as it is to deploy Falco with gVisor support in a [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods) cluster. It is also a good starting point for custom deployments. + +##### Example: running Falco on GKE, with or without gVisor-enabled pods + +If you use GKE with k8s version at least `1.24.4-gke.1800` or `1.25.0-gke.200` with gVisor sandboxed pods, you can install a Falco instance to monitor them with, e.g.: + +```bash +helm install falco-gvisor falcosecurity/falco \ + --create-namespace \ + --namespace falco-gvisor \ + -f https://raw.githubusercontent.com/falcosecurity/charts/master/charts/falco/values-gvisor-gke.yaml +``` + +Note that the instance of Falco above will only monitor gVisor sandboxed workloads on gVisor-enabled node pools. If you also need to monitor regular workloads on regular node pools you can use the eBPF driver as usual: + +```bash +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set driver.kind=ebpf +``` + +The two instances of Falco will operate independently and can be installed, uninstalled or configured as needed. If you were already monitoring your regular node pools with eBPF you don't need to reinstall it. + +##### Falco+gVisor additional resources +An exhaustive blog post about Falco and gVisor can be found on the [Falco blog](https://falco.org/blog/intro-gvisor-falco/). +If you need help on how to set up gVisor in your environment please have a look at the [gVisor official docs](https://gvisor.dev/docs/user_guide/quick_start/kubernetes/). + +### About Falco Artifacts +Historically **rules files** and **plugins** used to be shipped inside the Falco docker image and/or inside the chart.
Starting from version `v0.3.0` of the chart, the [**falcoctl tool**](https://github.com/falcosecurity/falcoctl) can be used to install/update **rules files** and **plugins**. When referring to such objects we will use the term **artifact**. For more info please check out the following [proposal](https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md). + +The default configuration of the chart for new installations is to use the **falcoctl** tool to handle **artifacts**. The chart will deploy two new containers alongside the Falco one: +* `falcoctl-artifact-install`, an init container that makes sure to install the configured **artifacts** before the Falco container starts; +* `falcoctl-artifact-follow`, a sidecar container that periodically checks for new artifacts (currently only *falco-rules*) and downloads them; + +For more info on how to enable/disable and configure the **falcoctl** tool check out the config values [here](./README.md#Configuration) and the [upgrading notes](./BREAKING-CHANGES.md#300). ### Deploying Falco in Kubernetes -After the clarification of the different **event sources** and how they are consumed by Falco using the **drivers** and the **plugins**, now lets discuss about how Falco is deployed in Kubernetes. +After the clarification of the different [**event sources**](#falco-event-sources-and-kubernetes) and how they are consumed by Falco using the **drivers** and the **plugins**, now let us discuss how Falco is deployed in Kubernetes. The chart deploys Falco using a `daemonset` or a `deployment` depending on the **event sources**. #### Daemonset -When using the [drivers](#about-the-driver), Falco is deployed as `daemonset`. By using a `daemonset`, k8s assures that a Falco instance will be running in each of our nodes even when we add new nodes to our cluster. So it is the perfect match when we need to monitor all the nodes in our cluster.
-Using the default values of the helm chart we get Falco deployed with the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module). +When using the [drivers](#about-the-driver), Falco is deployed as `daemonset`. By using a `daemonset`, k8s assures that a Falco instance will be running in each of our nodes even when we add new nodes to our cluster. So it is the perfect match when we need to monitor all the nodes in our cluster. -If the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) is desired, we just need to set `driver.kind=ebpf` as as show in the following snippet: +**Kernel module** + +To run Falco with the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) you can use the default values of the helm chart: -```yaml -driver: - enabled: true - kind: ebpf +```bash +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco +``` + +**eBPF probe** + +To run Falco with the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) you just need to set `driver.kind=ebpf` as shown in the following snippet: + +```bash +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set driver.kind=ebpf ``` + +There are other configurations related to the eBPF probe; for more info please check the [values.yaml](./values.yaml) file.
After you have made your changes to the configuration file you just need to run: ```bash -helm install falco falcosecurity/falco --namespace "your-custom-name-space" --create-namespace +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace "your-custom-name-space" \ + -f "path-to-custom-values.yaml-file" +``` + +**Modern eBPF probe** + +To run Falco with the [modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe) you just need to set `driver.kind=modern_ebpf` as shown in the following snippet: + +```bash +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set driver.kind=modern_ebpf ``` #### Deployment -In the scenario when Falco is used with **plugins** as data sources, then the best option is to deploy it as a k8s `deployment`. **Plugins** could be of two types, the ones that follow the **push model** or the **pull model**. A plugin that adopts the firs model expects to receive the data from a remote source in a given endpoint. They just expose and endpoint and wait for data to be posted, for example [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured in such way. On the other hand other plugins that abide by the **pull model** retrieves the data from a given remote service. +In the scenario when Falco is used with **plugins** as data sources, the best option is to deploy it as a k8s `deployment`. **Plugins** can be of two types: the ones that follow the **push model** and the ones that follow the **pull model**. A plugin that adopts the first model expects to receive the data from a remote source on a given endpoint. They just expose an endpoint and wait for data to be posted; for example, [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured in such a way.
On the other hand, other plugins that abide by the **pull model** retrieve the data from a given remote service. The following points explain why a k8s `deployment` is suitable when deploying Falco with plugins: + +* need to be reachable when ingesting logs directly from remote services; +* need only one active replica, otherwise events will be sent/received to/from different Falco instances; + - ## Uninstalling the Chart To uninstall a Falco release from your Kubernetes cluster always use helm. It will take care of removing all components deployed by the chart and clean up your environment. The following command will remove a release called `falco` in namespace `falco`; @@ -104,7 +208,7 @@ There are many reasons why we would have to inspect the messages emitted by the ```bash kubectl logs -n falco falco-pod-name ``` -where `falco-pods-name` is the name of the Falco pod running in your cluster. +where `falco-pod-name` is the name of the Falco pod running in your cluster. The command described above will just display the logs emitted by Falco until the moment you run the command. The `-f` flag comes in handy when we are doing live testing or debugging and we want to have the Falco logs as soon as they are emitted. The following command: ```bash kubectl logs -f -n falco falco-pod-name @@ -118,8 +222,56 @@ kubectl logs -p -n falco falco-pod-name A scenario when we need the `-p (--previous)` flag is when we have a restart of a Falco pod and want to check what went wrong. ### Enabling real time logs -By default in Falco the output is buffered. When live streaming logs we will notice delays between the logs output (rules triggering) and the event happening. +By default in Falco the output is buffered. When live streaming logs we will notice delays between the logs output (rules triggering) and the event happening.
+In order to enable the logs to be emitted without delays you need to set `.Values.tty=true` in the [values.yaml](./values.yaml) file. + +## K8s-metacollector +Starting from Falco `0.37` the old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) has been removed. +A new component named [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) replaces it. +The *k8s-metacollector* is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata +from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers. + +Kubernetes resources for which metadata will be collected and sent to Falco: +* pods; +* namespaces; +* deployments; +* replicationcontrollers; +* replicasets; +* services; + +### Plugin +Since the *k8s-metacollector* is standalone, deployed in the cluster as a deployment, Falco instances need to connect to the component +in order to retrieve the `metadata`. This is where the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin comes in. +The plugin gathers details about Kubernetes resources from the *k8s-metacollector*. It then stores this information +in tables and provides access to Falco upon request. The plugin specifically acquires data for the node where the +associated Falco instance is deployed, resulting in node-level granularity. + +### Exported Fields: Old and New +The old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) used to populate the +[k8s](https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s) fields. The **k8s** field class is still +available in Falco, for compatibility reasons, but most of the fields will return `N/A`.
The following fields are still +usable and will return meaningful data when the `container runtime collectors` are enabled: +* k8s.pod.name; +* k8s.pod.id; +* k8s.pod.label; +* k8s.pod.labels; +* k8s.pod.ip; +* k8s.pod.cni.json; +* k8s.pod.namespace.name; + +The [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin exports a whole new +[field class](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta#supported-fields). Note that the new +`k8smeta.*` fields are usable only when the **k8smeta** plugin is loaded in Falco. + +### Enabling the k8s-metacollector +The following command will deploy Falco + k8s-metacollector + k8smeta: +```bash +helm install falco falcosecurity/falco \ + --namespace falco \ + --create-namespace \ + --set collectors.kubernetes.enabled=true +``` + ## Loading custom rules Falco ships with a nice default ruleset. It is a good starting point but sooner or later, we are going to need to add custom rules which fit our needs. @@ -186,14 +338,41 @@ The Kubernetes Audit Log is now supported via the built-in [k8saudit](https://gi The following snippet shows how to deploy Falco with the [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin: ```yaml +# -- Disable the drivers since we want to deploy only the k8saudit plugin. driver: enabled: false +# -- Disable the collectors, no syscall events to enrich with metadata. collectors: enabled: false +# -- Deploy Falco as a deployment. One instance of Falco is enough. Anyway the number of replicas is configurable. controller: kind: deployment + deployment: + # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing. + # For more info check the section on Plugins in the README.md file. + replicas: 1 + +falcoctl: + artifact: + install: + # -- Enable the init container.
We do not recommend installing (or following) plugins for security reasons since they are executable objects. + enabled: true + follow: + # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feeds such as the k8saudit-rules. + enabled: true + config: + artifact: + install: + # -- Resolve the dependencies for artifacts. + resolveDeps: true + # -- List of artifacts to be installed by the falcoctl init container. + # Only the rulesfile; the plugin will be installed as a dependency. + refs: [k8saudit-rules:0.5] + follow: + # -- List of artifacts to be followed by the falcoctl sidecar container. + refs: [k8saudit-rules:0.5] services: - name: k8saudit-webhook @@ -204,7 +383,7 @@ services: protocol: TCP falco: - rulesFile: + rules_file: - /etc/falco/k8s_audit_rules.yaml - /etc/falco/rules.d plugins: @@ -218,23 +397,30 @@ falco: - name: json library_path: libjson.so init_config: "" + # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
load_plugins: [k8saudit, json] + ``` -What the above configuration does is: +Here is the explanation of the above configuration: * disable the drivers by setting `driver.enabled=false`; * disable the collectors by setting `collectors.enabled=false`; -* deploy the Falco using a k8s *deploment* by setting `controller.kind=deployment`; -* makes our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`; +* deploy Falco using a k8s *deployment* by setting `controller.kind=deployment`; +* make our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`; +* enable the `falcoctl-artifact-install` init container; +* configure `falcoctl-artifact-install` to install the required plugins; +* enable the `falcoctl-artifact-follow` sidecar container; * load the correct ruleset for our plugin in `falco.rules_file`; -* configure the plugins to be loaded, in this case the `k8saudit` and `json`; +* configure the plugins to be loaded, in this case, the `k8saudit` and `json`; * and finally we add our plugins in the `load_plugins` to be loaded by Falco. -The configuration can be found in the `values-k8saudit.yaml` file ready to be used: - +The configuration can be found in the [values-k8saudit.yaml](./values-k8saudit.yaml) file ready to be used: ```bash #make sure the falco namespace exists -helm install falco falcosecurity/falco --namespace falco -f ./values-k8saudit.yaml --create-namespace +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + -f ./values-k8saudit.yaml ``` After a few minutes a Falco instance should be running on your cluster.
The status of the Falco pod can be inspected through *kubectl*: ```bash @@ -252,7 +438,7 @@ Furthermore you can check that Falco logs through *kubectl logs* ```bash kubectl logs -n falco falco-64484d9579-qckms ``` -In the logs you should have something similar to the following, indcating that Falco has loaded the required plugins: +In the logs you should have something similar to the following, indicating that Falco has loaded the required plugins: ```bash Fri Jul 8 16:07:24 2022: Falco version 0.32.0 (driver version 39ae7d40496793cf3d3e7890c9bbdc202263836b) Fri Jul 8 16:07:24 2022: Falco initialized with configuration file /etc/falco/falco.yaml @@ -304,7 +490,7 @@ spec: - Master - content: | # ... paste audit-policy.yaml here ... - # https://raw.githubusercontent.com/falcosecurity/evolution/master/examples/k8s_audit_config/audit-policy.yaml + # https://raw.githubusercontent.com/falcosecurity/plugins/master/plugins/k8saudit/configs/audit-policy.yaml name: audit-policy.yaml roles: - Master @@ -325,10 +511,11 @@ The preferred way to use the gRPC is over a Unix socket.
To install Falco with gRPC enabled over a **unix socket**, you have to: ```shell -helm install falco \ - --set falco.grpc.enabled=true \ - --set falco.grpc_output.enabled=true \ - falcosecurity/falco +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set falco.grpc.enabled=true \ + --set falco.grpc_output.enabled=true ``` ### gRPC over network @@ -339,24 +526,266 @@ How to generate the certificates is [documented here](https://falco.org/docs/grp To install Falco with gRPC enabled over the **network**, you have to: ```shell -helm install falco \ - --set falco.grpc.enabled=true \ - --set falco.grpcOutput.enabled=true \ - --set falco.grpc.unixSocketPath="" \ - --set-file certs.server.key=/path/to/server.key \ - --set-file certs.server.crt=/path/to/server.crt \ - --set-file certs.ca.crt=/path/to/ca.crt \ - falcosecurity/falco +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set falco.grpc.enabled=true \ + --set falco.grpc_output.enabled=true \ + --set falco.grpc.unixSocketPath="" \ + --set-file certs.server.key=/path/to/server.key \ + --set-file certs.server.crt=/path/to/server.crt \ + --set-file certs.ca.crt=/path/to/ca.crt + ``` +## Enable http_output + +HTTP output enables Falco to send events through HTTP(S) via the following configuration: + +```shell +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set falco.http_output.enabled=true \ + --set falco.http_output.url="http://some.url/some/path/" \ + --set falco.json_output=true \ + --set falco.json_include_output_property=true +``` + +Additionally, you can enable mTLS communication and load HTTP client cryptographic material via: + +```shell +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set falco.http_output.enabled=true \ + --set falco.http_output.url="https://some.url/some/path/" \ + --set falco.json_output=true \ + --set 
falco.json_include_output_property=true \ + --set falco.http_output.mtls=true \ + --set falco.http_output.client_cert="/etc/falco/certs/client/client.crt" \ + --set falco.http_output.client_key="/etc/falco/certs/client/client.key" \ + --set falco.http_output.ca_cert="/etc/falco/certs/client/ca.crt" \ + --set-file certs.client.key="/path/to/client.key",certs.client.crt="/path/to/client.crt",certs.ca.crt="/path/to/cacert.crt" +``` + +Alternatively, instead of passing the files directly via `--set-file`, you can mount an existing secret by setting the `certs.existingClientSecret` value. + ## Deploy Falcosidekick with Falco [`Falcosidekick`](https://github.com/falcosecurity/falcosidekick) can be installed with `Falco` by setting `--set falcosidekick.enabled=true`. This setting automatically configures all options of `Falco` for working with `Falcosidekick`. -All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/falcosidekick#configuration). +All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick#configuration). For example, to enable the deployment of [`Falcosidekick-UI`](https://github.com/falcosecurity/falcosidekick-ui), add `--set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true`. If you use a Proxy in your cluster, the requests between `Falco` and `Falcosidekick` might be captured; use the full FQDN of `Falcosidekick` by setting `--set falcosidekick.fullfqdn=true` to avoid that. ## Configuration -All the configurable parameters of the falco chart and their default values can be found [here](./generated/helm-values.md). +The following table lists the main configurable parameters of the falco chart v4.8.3 and their default values.
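When several options need to be customized at once, a values file is usually easier to maintain than a long chain of `--set` flags. A minimal sketch, assuming a hypothetical `my-values.yaml` and using only keys documented in this chart's values:

```yaml
# my-values.yaml -- illustrative overrides only
driver:
  kind: modern_ebpf          # use the modern eBPF probe instead of auto-detection
falco:
  json_output: true          # emit alerts as JSON
  json_include_output_property: true
falcosidekick:
  enabled: true              # deploy Falcosidekick alongside Falco
  webui:
    enabled: true            # also deploy Falcosidekick-UI
```

It can then be applied with `helm upgrade --install falco falcosecurity/falco --namespace falco --create-namespace -f my-values.yaml`.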
See [values.yaml](./values.yaml) for the full list. + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | Affinity constraint for pods' scheduling. | +| certs | object | `{"ca":{"crt":""},"client":{"crt":"","key":""},"existingClientSecret":"","existingSecret":"","server":{"crt":"","key":""}}` | Certificates used by the webserver and the gRPC server. Paste the certificate content, use helm with --set-file, or use an existing secret containing key, crt, and ca, as well as the pem bundle | +| certs.ca.crt | string | `""` | CA certificate used by gRPC, webserver and AuditSink validation. | +| certs.client.crt | string | `""` | Certificate used by the http mTLS client. | +| certs.client.key | string | `""` | Key used by the http mTLS client. | +| certs.existingSecret | string | `""` | Existing secret containing the following key, crt and ca, as well as the pem bundle. | +| certs.server.crt | string | `""` | Certificate used by gRPC and webserver. | +| certs.server.key | string | `""` | Key used by gRPC and webserver. | +| collectors.containerd.enabled | bool | `true` | Enable ContainerD support. | +| collectors.containerd.socket | string | `"/run/containerd/containerd.sock"` | The path of the ContainerD socket. | +| collectors.crio.enabled | bool | `true` | Enable CRI-O support. | +| collectors.crio.socket | string | `"/run/crio/crio.sock"` | The path of the CRI-O socket. | +| collectors.docker.enabled | bool | `true` | Enable Docker support. | +| collectors.docker.socket | string | `"/var/run/docker.sock"` | The path of the Docker daemon socket. | +| collectors.enabled | bool | `true` | Enable/disable all the metadata collectors. | +| collectors.kubernetes | object | `{"collectorHostname":"","collectorPort":"","enabled":false,"pluginRef":"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.0"}` | kubernetes holds the configuration for the kubernetes collector.
Starting from version 0.37.0 of Falco, the legacy kubernetes client has been removed. A new standalone component named k8s-metacollector and a Falco plugin have been developed to solve the issues that were present in the old implementation. More info here: https://github.com/falcosecurity/falco/issues/2973 | +| collectors.kubernetes.collectorHostname | string | `""` | collectorHostname is the address of the k8s-metacollector. When not specified, it will be set to match the k8s-metacollector service, e.g. falco-k8smetacollector.falco.svc. If for any reason you need to override it, make sure to set here the address of the k8s-metacollector. It is used by the k8smeta plugin to connect to the k8s-metacollector. | +| collectors.kubernetes.collectorPort | string | `""` | collectorPort designates the port on which the k8s-metacollector gRPC service listens. If not specified, the value of the port named `broker-grpc` in k8s-metacollector.service.ports is used. The default value is 45000. It is used by the k8smeta plugin to connect to the k8s-metacollector. | +| collectors.kubernetes.enabled | bool | `false` | enabled specifies whether the Kubernetes metadata should be collected using the k8smeta plugin and the k8s-metacollector component. It will deploy the k8s-metacollector external component that fetches Kubernetes metadata and pushes them to Falco instances. For more info see: https://github.com/falcosecurity/k8s-metacollector https://github.com/falcosecurity/charts/tree/master/charts/k8s-metacollector When this option is disabled, Falco falls back to the container annotations to grab the metadata. In such a case, only the ID, name, namespace, and labels of the pod will be available. | +| collectors.kubernetes.pluginRef | string | `"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.0"` | pluginRef is the OCI reference for the k8smeta plugin. It could be a full reference such as: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0". Or just name + tag: k8smeta:0.1.0.
| +| containerSecurityContext | object | `{}` | Set securityContext for the Falco container. For more info see the "falco.securityContext" helper in "pod-template.tpl" | +| controller.annotations | object | `{}` | | +| controller.daemonset.updateStrategy.type | string | `"RollingUpdate"` | Perform rolling updates by default in the DaemonSet agent ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/ | +| controller.deployment.replicas | int | `1` | Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing. For more info check the section on Plugins in the README.md file. | +| controller.kind | string | `"daemonset"` | | +| controller.labels | object | `{}` | Extra labels to add to the daemonset or deployment | +| customRules | object | `{}` | Third party rules enabled for Falco. More info in the dedicated section of the README.md file. | +| driver.ebpf | object | `{"bufSizePreset":4,"dropFailedExit":false,"hostNetwork":false,"leastPrivileged":false,"path":"${HOME}/.falco/falco-bpf.o"}` | Configuration section for the ebpf driver. | +| driver.ebpf.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. | +| driver.ebpf.dropFailedExit | bool | `false` | dropFailedExit, if set to true, drops failed system call exit events before pushing them to userspace. | +| driver.ebpf.hostNetwork | bool | `false` | Needed to enable eBPF JIT at runtime for performance reasons. Can be skipped if eBPF JIT is enabled from outside the container | +| driver.ebpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. Ensure the eBPF driver is enabled (i.e., setting the `driver.kind` option to `ebpf`). Capabilities used: {CAP_SYS_RESOURCE, CAP_SYS_ADMIN, CAP_SYS_PTRACE}.
On kernel versions >= 5.8 'CAP_PERFMON' and 'CAP_BPF' could replace 'CAP_SYS_ADMIN', but please pay attention to the 'kernel.perf_event_paranoid' value on your system. Usually 'kernel.perf_event_paranoid>2' means that you cannot use 'CAP_PERFMON' and you should fall back to 'CAP_SYS_ADMIN', but the behavior changes across different distros. Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-1 | +| driver.ebpf.path | string | `"${HOME}/.falco/falco-bpf.o"` | Path where the eBPF probe is located. It comes in handy when the probe has been installed in the nodes using tools other than the init container deployed with the chart. | +| driver.enabled | bool | `true` | Set it to false if you want to deploy Falco without the drivers. Always set it to false when using Falco with plugins. | +| driver.gvisor | object | `{"runsc":{"config":"/run/containerd/runsc/config.toml","path":"/home/containerd/usr/local/sbin","root":"/run/containerd/runsc"}}` | Gvisor configuration. Based on your system you need to set the appropriate values. Please remember to add pod tolerations and affinities in order to schedule the Falco pods on the gVisor-enabled nodes. | +| driver.gvisor.runsc | object | `{"config":"/run/containerd/runsc/config.toml","path":"/home/containerd/usr/local/sbin","root":"/run/containerd/runsc"}` | Runsc container runtime configuration. Falco needs to interact with it in order to intercept the activity of the sandboxed pods. | +| driver.gvisor.runsc.config | string | `"/run/containerd/runsc/config.toml"` | Absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence. | +| driver.gvisor.runsc.path | string | `"/home/containerd/usr/local/sbin"` | Absolute path of the `runsc` binary in the k8s nodes. | +| driver.gvisor.runsc.root | string | `"/run/containerd/runsc"` | Absolute path of the root directory of the `runsc` container runtime.
It is of vital importance for Falco, since `runsc` stores there the information about the workloads it handles. | +| driver.kind | string | `"auto"` | kind tells Falco which driver to use. Available options: kmod (kernel driver), ebpf (eBPF probe), modern_ebpf (modern eBPF probe). | +| driver.kmod | object | `{"bufSizePreset":4,"dropFailedExit":false}` | kmod holds the configuration for the kernel module. | +| driver.kmod.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. | +| driver.kmod.dropFailedExit | bool | `false` | dropFailedExit, if set to true, drops failed system call exit events before pushing them to userspace. | +| driver.loader | object | `{"enabled":true,"initContainer":{"args":[],"env":[],"image":{"pullPolicy":"IfNotPresent","registry":"docker.io","repository":"falcosecurity/falco-driver-loader","tag":""},"resources":{},"securityContext":{}}}` | Configuration for the Falco init container. | +| driver.loader.enabled | bool | `true` | Enable/disable the init container. | +| driver.loader.initContainer.args | list | `[]` | Arguments to pass to the Falco driver loader init container. | +| driver.loader.initContainer.env | list | `[]` | Extra environment variables that will be passed to the Falco driver loader init container. | +| driver.loader.initContainer.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | +| driver.loader.initContainer.image.registry | string | `"docker.io"` | The image registry to pull from. | +| driver.loader.initContainer.image.repository | string | `"falcosecurity/falco-driver-loader"` | The image repository to pull from. | +| driver.loader.initContainer.resources | object | `{}` | Resources requests and limits for the Falco driver loader init container. | +| driver.loader.initContainer.securityContext | object | `{}` | Security context for the Falco driver loader init container.
Overrides the default security context. If driver.kind == "kmod" you must at least set `privileged: true`. | +| driver.modernEbpf.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. | +| driver.modernEbpf.cpusForEachBuffer | int | `2` | cpusForEachBuffer is the index that controls how many CPUs to assign to a single syscall buffer. | +| driver.modernEbpf.dropFailedExit | bool | `false` | dropFailedExit, if set to true, drops failed system call exit events before pushing them to userspace. | +| driver.modernEbpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. Ensure the modern eBPF driver is enabled (i.e., setting the `driver.kind` option to `modern_ebpf`). Capabilities used: {CAP_SYS_RESOURCE, CAP_BPF, CAP_PERFMON, CAP_SYS_PTRACE}. Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-2 | +| extra.args | list | `[]` | Extra command-line arguments. | +| extra.env | list | `[]` | Extra environment variables that will be passed to the Falco containers. | +| extra.initContainers | list | `[]` | Additional initContainers for Falco pods. | +| falco.base_syscalls | object | `{"custom_set":[],"repair":false}` | - [Suggestions] NOTE: setting `base_syscalls.repair: true` automates the following suggestions for you. These suggestions are subject to change as Falco and its state engine evolve. For execve* events: Some Falco fields for an execve* syscall are retrieved from the associated `clone`, `clone3`, `fork`, `vfork` syscalls when spawning a new process. The `close` syscall is used to purge file descriptors from Falco's internal thread / process cache table and is necessary for rules relating to file descriptors (e.g. open, openat, openat2, socket, connect, accept, accept4 ...
and many more) Consider enabling the following syscalls in `base_syscalls.custom_set` for process rules: [clone, clone3, fork, vfork, execve, execveat, close] For networking related events: While you can log `connect` or `accept*` syscalls without the socket syscall, the log will not contain the ip tuples. Additionally, for `listen` and `accept*` syscalls, the `bind` syscall is also necessary. We recommend the following as the minimum set for networking-related rules: [clone, clone3, fork, vfork, execve, execveat, close, socket, bind, getsockopt] Lastly, for tracking the correct `uid`, `gid` or `sid`, `pgid` of a process when the running process opens a file or makes a network connection, consider adding the following to the above recommended syscall sets: ... setresuid, setsid, setuid, setgid, setpgid, setresgid, setsid, capset, chdir, chroot, fchdir ... | +| falco.buffered_outputs | bool | `false` | Enabling buffering for the output queue can offer performance optimization, efficient resource usage, and smoother data flow, resulting in a more reliable output mechanism. By default, buffering is disabled (false). | +| falco.config_files[0] | string | `"/etc/falco/config.d"` | | +| falco.falco_libs.thread_table_size | int | `262144` | | +| falco.file_output | object | `{"enabled":false,"filename":"./events.txt","keep_alive":false}` | When appending Falco alerts to a file, each new alert will be added to a new line. It's important to note that Falco does not perform log rotation for this file. If the `keep_alive` option is set to `true`, the file will be opened once and continuously written to, else the file will be reopened for each output message. Furthermore, the file will be closed and reopened if Falco receives the SIGUSR1 signal. 
| +| falco.grpc | object | `{"bind_address":"unix:///run/falco/falco.sock","enabled":false,"threadiness":0}` | gRPC server using a local unix socket | +| falco.grpc.threadiness | int | `0` | When the `threadiness` value is set to 0, Falco will automatically determine the appropriate number of threads based on the number of online cores in the system. | +| falco.grpc_output | object | `{"enabled":false}` | Use gRPC as an output service. gRPC is a modern and high-performance framework for remote procedure calls (RPC). It utilizes protocol buffers for efficient data serialization. The gRPC output in Falco provides a modern and efficient way to integrate with other systems. By default the setting is turned off. Enabling this option stores output events in memory until they are consumed by a gRPC client. Ensure that you have a consumer for the output events or leave it disabled. | +| falco.http_output | object | `{"ca_bundle":"","ca_cert":"","ca_path":"/etc/falco/certs/","client_cert":"/etc/falco/certs/client/client.crt","client_key":"/etc/falco/certs/client/client.key","compress_uploads":false,"echo":false,"enabled":false,"insecure":false,"keep_alive":false,"mtls":false,"url":"","user_agent":"falcosecurity/falco"}` | Send logs to an HTTP endpoint or webhook. | +| falco.http_output.ca_bundle | string | `""` | Path to a specific file that will be used as the CA certificate store. | +| falco.http_output.ca_cert | string | `""` | Path to the CA certificate that can verify the remote server. | +| falco.http_output.ca_path | string | `"/etc/falco/certs/"` | Path to a folder that will be used as the CA certificate store. CA certificates need to be stored as individual PEM files in this directory. | +| falco.http_output.client_cert | string | `"/etc/falco/certs/client/client.crt"` | Path to the client cert. | +| falco.http_output.client_key | string | `"/etc/falco/certs/client/client.key"` | Path to the client key.
| +| falco.http_output.compress_uploads | bool | `false` | compress_uploads whether to compress data sent to http endpoint. | +| falco.http_output.echo | bool | `false` | Whether to echo server answers to stdout | +| falco.http_output.insecure | bool | `false` | Tell Falco to not verify the remote server. | +| falco.http_output.keep_alive | bool | `false` | keep_alive whether to keep alive the connection. | +| falco.http_output.mtls | bool | `false` | Tell Falco to use mTLS | +| falco.json_include_output_property | bool | `true` | When using JSON output in Falco, you have the option to include the "output" property itself in the generated JSON output. The "output" property provides additional information about the purpose of the rule. To reduce the logging volume, it is recommended to turn it off if it's not necessary for your use case. | +| falco.json_include_tags_property | bool | `true` | When using JSON output in Falco, you have the option to include the "tags" field of the rules in the generated JSON output. The "tags" field provides additional metadata associated with the rule. To reduce the logging volume, if the tags associated with the rule are not needed for your use case or can be added at a later stage, it is recommended to turn it off. | +| falco.json_output | bool | `false` | When enabled, Falco will output alert messages and rules file loading/validation results in JSON format, making it easier for downstream programs to process and consume the data. By default, this option is disabled. | +| falco.libs_logger | object | `{"enabled":false,"severity":"debug"}` | The `libs_logger` setting in Falco determines the minimum log level to include in the logs related to the functioning of the software of the underlying `libs` library, which Falco utilizes. This setting is independent of the `priority` field of rules and the `log_level` setting that controls Falco's operational logs. 
It allows you to specify the desired log level for the `libs` library specifically, providing more granular control over the logging behavior of the underlying components used by Falco. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". It is not recommended for production use. | +| falco.load_plugins | list | `[]` | Add here all plugins and their configuration. Please consult the plugins documentation for more info. Remember to add the plugin names in "load_plugins: []" in order to load them in Falco. | +| falco.log_level | string | `"info"` | The `log_level` setting determines the minimum log level to include in Falco's logs related to the functioning of the software. This setting is separate from the `priority` field of rules and specifically controls the log level of Falco's operational logging. By specifying a log level, you can control the verbosity of Falco's operational logs. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". | +| falco.log_stderr | bool | `true` | Send information logs to stderr. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. | +| falco.log_syslog | bool | `true` | Send information logs to syslog. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. | +| falco.metrics | object | `{"convert_memory_to_mb":true,"enabled":false,"include_empty_values":false,"interval":"1h","kernel_event_counters_enabled":true,"libbpf_stats_enabled":true,"output_rule":true,"resource_utilization_enabled":true,"rules_counters_enabled":true,"state_counters_enabled":true}` | - [Usage] `enabled`: Disabled by default. `interval`: The stats interval in Falco follows the time duration definitions used by Prometheus.
https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations Time durations are specified as a number, followed immediately by one of the following units: ms - millisecond s - second m - minute h - hour d - day - assuming a day has always 24h w - week - assuming a week has always 7d y - year - assuming a year has always 365d Example of a valid time duration: 1h30m20s10ms A minimum interval of 100ms is enforced for metric collection. However, for production environments, we recommend selecting one of the following intervals for optimal monitoring: 15m 30m 1h 4h 6h `output_rule`: To enable seamless metrics and performance monitoring, we recommend emitting metrics as the rule "Falco internal: metrics snapshot". This option is particularly useful when Falco logs are preserved in a data lake. Please note that to use this option, the Falco rules config `priority` must be set to `info` at a minimum. `output_file`: Append stats to a `jsonl` file. Use with caution in production as Falco does not automatically rotate the file. `resource_utilization_enabled`: Emit CPU and memory usage metrics. CPU usage is reported as a percentage of one CPU and can be normalized to the total number of CPUs to determine overall usage. Memory metrics are provided in raw units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) and can be uniformly converted to megabytes (MB) using the `convert_memory_to_mb` functionality. In environments such as Kubernetes when deployed as daemonset, it is crucial to track Falco's container memory usage. To customize the path of the memory metric file, you can create an environment variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. By default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to monitor container memory usage, which aligns with Kubernetes' `container_memory_working_set_bytes` metric. 
Finally, we emit the overall host CPU and memory usages, along with the total number of processes and open file descriptors (fds) on the host, obtained from the proc file system unrelated to Falco's monitoring. These metrics help assess Falco's usage in relation to the server's workload intensity. `rules_counters_enabled`: Emit counts for each rule. `state_counters_enabled`: Emit counters related to Falco's state engine, including added, removed threads or file descriptors (fds), and failed lookup, store, or retrieve actions in relation to Falco's underlying process cache table (threadtable). We also log the number of currently cached containers if applicable. `kernel_event_counters_enabled`: Emit kernel side event and drop counters, as an alternative to `syscall_event_drops`, but with some differences.
These counters reflect monotonic values since Falco's start and are exported at a constant stats interval. `libbpf_stats_enabled`: Exposes statistics similar to `bpftool prog show`, providing information such as the number of invocations of each BPF program attached by Falco and the time spent in each program measured in nanoseconds. To enable this feature, the kernel must be >= 5.1, and the kernel configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, or an equivalent statistics feature, is not available for non `*bpf*` drivers. Additionally, please be aware that the current implementation of `libbpf` does not support granularity of statistics at the bpf tail call level. `include_empty_values`: When the option is set to true, fields with an empty numeric value will be included in the output. However, this rule does not apply to high-level fields such as `n_evts` or `n_drops`; they will always be included in the output even if their value is empty. This option can be beneficial for exploring the data schema and ensuring that fields with empty values are included in the output. todo: prometheus export option todo: syscall_counters_enabled option | +| falco.output_timeout | int | `2000` | The `output_timeout` parameter specifies the duration, in milliseconds, to wait before considering the deadline exceeded. By default, the timeout is set to 2000ms (2 seconds), meaning that the consumer of Falco outputs can block the Falco output channel for up to 2 seconds without triggering a timeout error. Falco actively monitors the performance of output channels. With this setting the timeout error can be logged, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. It's important to note that Falco outputs will not be discarded from the output queue. This means that if an output channel becomes blocked indefinitely, it indicates a potential issue that needs to be addressed by the user. 
| +| falco.outputs | object | `{"max_burst":1000,"rate":0}` | A throttling mechanism, implemented as a token bucket, can be used to control the rate of Falco outputs. Each event source has its own rate limiter, ensuring that alerts from one source do not affect the throttling of others. The following options control the mechanism: - rate: the number of tokens (i.e. right to send a notification) gained per second. When 0, the throttling mechanism is disabled. Defaults to 0. - max_burst: the maximum number of tokens outstanding. Defaults to 1000. For example, setting the rate to 1 allows Falco to send up to 1000 notifications initially, followed by 1 notification per second. The burst capacity is fully restored after 1000 seconds of no activity. Throttling can be useful in various scenarios, such as preventing notification floods, managing system load, controlling event processing, or complying with rate limits imposed by external systems or APIs. It allows for better resource utilization, avoids overwhelming downstream systems, and helps maintain a balanced and controlled flow of notifications. With the default settings, the throttling mechanism is disabled. | +| falco.outputs_queue | object | `{"capacity":0}` | Falco utilizes tbb::concurrent_bounded_queue for handling outputs, and this parameter allows you to customize the queue capacity. Please refer to the official documentation: https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Concurrent_Queue_Classes.html. On a healthy system with optimized Falco rules, the queue should not fill up. If it does, it is most likely happening due to the entire event flow being too slow, indicating that the server is under heavy load. `capacity`: the maximum number of items allowed in the queue is determined by this value. Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded. 
In other words, when this configuration is set to 0, the number of allowed items is effectively set to the largest possible long value, disabling this setting. In the case of an unbounded queue, if the available memory on the system is consumed, the Falco process would be OOM killed. When using this option and setting the capacity, the current event would be dropped, and the event loop would continue. This behavior mirrors kernel-side event drops when the buffer between kernel space and user space is full. | +| falco.plugins | list | `[{"init_config":null,"library_path":"libk8saudit.so","name":"k8saudit","open_params":"http://:9765/k8s-audit"},{"library_path":"libcloudtrail.so","name":"cloudtrail"},{"init_config":"","library_path":"libjson.so","name":"json"}]` | Customize subsettings for each enabled plugin. These settings will only be applied when the corresponding plugin is enabled using the `load_plugins` option. | +| falco.priority | string | `"debug"` | Any rule with a priority level more severe than or equal to the specified minimum level will be loaded and run by Falco. This allows you to filter and control the rules based on their severity, ensuring that only rules of a certain priority or higher are active and evaluated by Falco. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug" | +| falco.program_output | object | `{"enabled":false,"keep_alive":false,"program":"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"}` | Redirect the output to another program or command. 
Possible additional things you might want to do with program output: - send to a slack webhook: program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" - logging (alternate method than syslog): program: logger -t falco-test - send over a network connection: program: nc host.example.com 80 If `keep_alive` is set to `true`, the program will be started once and continuously written to, with each output message on its own line. If `keep_alive` is set to `false`, the program will be re-spawned for each output message. Furthermore, the program will be re-spawned if Falco receives the SIGUSR1 signal. | +| falco.rule_matching | string | `"first"` | - [Examples] Only enable two rules: rules: - disable: rule: "*" - enable: rule: Netcat Remote Code Execution in Container - enable: rule: Delete or rename shell history Disable all rules with a specific tag: rules: - disable: tag: network [Incubating] `rule_matching` - Falco has to be performant when evaluating rules against events. To quickly understand which rules could trigger on a specific event, Falco maintains buckets of rules sharing the same event type in a map. Then, the lookup in each bucket is performed through linear search. The `rule_matching` configuration key's values are: - "first": when evaluating conditions of rules in a bucket, Falco will stop evaluating rules as soon as it finds a matching rule. Since rules are stored in buckets in the order they are defined in the rules files, this option could prevent other rules from triggering even if their condition is met, causing a shadowing problem. - "all": with this value Falco will continue evaluating all the rules stored in the bucket, so that multiple rules could be triggered upon one event. | +| falco.rules_files | list | `["/etc/falco/falco_rules.yaml","/etc/falco/falco_rules.local.yaml","/etc/falco/rules.d"]` | The location of the rules files that will be consumed by Falco.
| +| falco.stdout_output | object | `{"enabled":true}` | Redirect logs to standard output. | +| falco.syscall_event_drops | object | `{"actions":["log","alert"],"max_burst":1,"rate":0.03333,"simulate_drops":false,"threshold":0.1}` | For debugging/testing it is possible to simulate the drops by setting `simulate_drops: true`. In this case the threshold does not apply. | +| falco.syscall_event_drops.actions | list | `["log","alert"]` | Actions to be taken when system calls were dropped from the circular buffer. | +| falco.syscall_event_drops.max_burst | int | `1` | Max burst of messages emitted. | +| falco.syscall_event_drops.rate | float | `0.03333` | Rate at which log/alert messages are emitted. | +| falco.syscall_event_drops.simulate_drops | bool | `false` | Flag to enable drops for debug purposes. | +| falco.syscall_event_drops.threshold | float | `0.1` | The messages are emitted when the percentage of dropped system calls with respect to the number of events in the last second is greater than the given threshold (a double in the range [0, 1]). | +| falco.syscall_event_timeouts | object | `{"max_consecutives":1000}` | Generates Falco operational logs when `log_level=notice` at minimum. Falco utilizes a shared buffer between the kernel and userspace to receive events, such as system call information, in userspace. However, there may be cases where timeouts occur in the underlying libraries due to issues in reading events or the need to skip a particular event. While it is uncommon for Falco to experience consecutive event timeouts, it has the capability to detect such situations. You can configure the maximum number of consecutive timeouts without an event after which Falco will generate an alert, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. The default value is set to 1000 consecutive timeouts without receiving any events. The mapping of this value to a time interval depends on the CPU frequency.
| +| falco.syslog_output | object | `{"enabled":true}` | Send logs to syslog. | +| falco.time_format_iso_8601 | bool | `false` | When enabled, Falco will display log and output messages with times in the ISO 8601 format. By default, times are shown in the local time zone determined by the /etc/localtime configuration. | +| falco.watch_config_files | bool | `true` | Watch config file and rules files for modification. When a file is modified, Falco will propagate the new config by reloading itself. | +| falco.webserver | object | `{"enabled":true,"k8s_healthz_endpoint":"/healthz","listen_port":8765,"prometheus_metrics_enabled":false,"ssl_certificate":"/etc/falco/falco.pem","ssl_enabled":false,"threadiness":0}` | Falco supports an embedded webserver that runs within the Falco process, providing a lightweight and efficient way to expose web-based functionalities without the need for an external web server. The following endpoints are exposed: - /healthz: designed to be used for checking the health and availability of the Falco application (the name of the endpoint is configurable). - /versions: responds with a JSON object containing the version numbers of the internal Falco components (similar output as `falco --version -o json_output=true`). Please note that the /versions endpoint is particularly useful for other Falco services, such as `falcoctl`, to retrieve information about a running Falco instance. If you plan to use `falcoctl` locally or with Kubernetes, make sure the Falco webserver is enabled. The behavior of the webserver can be controlled with the following options, which are enabled by default: The `ssl_certificate` option specifies a combined SSL certificate and corresponding key that are contained in a single file.
You can generate a key/cert as follows: $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem $ cat certificate.pem key.pem > falco.pem $ sudo cp falco.pem /etc/falco/falco.pem | +| falcoctl.artifact.follow | object | `{"args":["--log-format=json"],"enabled":true,"env":[],"mounts":{"volumeMounts":[]},"resources":{},"securityContext":{}}` | Runs "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for updates given a list of artifacts. If an update is found it downloads and installs it in a shared folder (emptyDir) that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks if it is compatible with the running version of Falco before installing it. | +| falcoctl.artifact.follow.args | list | `["--log-format=json"]` | Arguments to pass to the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.follow.env | list | `[]` | Extra environment variables that will be passed to the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.follow.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.follow.resources | object | `{}` | Resources requests and limits for the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.follow.securityContext | object | `{}` | Security context for the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.install | object | `{"args":["--log-format=json"],"enabled":true,"env":[],"mounts":{"volumeMounts":[]},"resources":{},"securityContext":{}}` | Runs "falcoctl artifact install" command as an init container. It is used to install artifacts before Falco starts. It provides them to Falco by using an emptyDir volume.
| falcoctl.artifact.install.args | list | `["--log-format=json"]` | Arguments to pass to the falcoctl-artifact-install init container. | +| falcoctl.artifact.install.env | list | `[]` | Extra environment variables that will be passed to the falcoctl-artifact-install init container. | +| falcoctl.artifact.install.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-install init container. | +| falcoctl.artifact.install.resources | object | `{}` | Resources requests and limits for the falcoctl-artifact-install init container. | +| falcoctl.artifact.install.securityContext | object | `{}` | Security context for the falcoctl init container. | +| falcoctl.config | object | `{"artifact":{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:3"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:3"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}},"indexes":[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]}` | Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers. | +| falcoctl.config.artifact | object | `{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:3"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:3"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}}` | Configuration used by the artifact commands. | +| falcoctl.config.artifact.allowedTypes | list | `["rulesfile","plugin"]` | List of artifact types that falcoctl will handle. If the configured refs resolve to an artifact whose type is not contained in the list it will refuse to download and install that artifact.
| falcoctl.config.artifact.follow.every | string | `"6h"` | How often the tool checks for new versions of the followed artifacts. | +| falcoctl.config.artifact.follow.falcoversions | string | `"http://localhost:8765/versions"` | HTTP endpoint that serves the API versions of the Falco instance. It is used to check if the new versions are compatible with the running Falco instance. | +| falcoctl.config.artifact.follow.pluginsDir | string | `"/plugins"` | See the fields of the artifact.install section. | +| falcoctl.config.artifact.follow.refs | list | `["falco-rules:3"]` | List of artifacts to be followed by the falcoctl sidecar container. | +| falcoctl.config.artifact.follow.rulesfilesDir | string | `"/rulesfiles"` | See the fields of the artifact.install section. | +| falcoctl.config.artifact.install.pluginsDir | string | `"/plugins"` | Same as the one above but for the artifacts. | +| falcoctl.config.artifact.install.refs | list | `["falco-rules:3"]` | List of artifacts to be installed by the falcoctl init container. | +| falcoctl.config.artifact.install.resolveDeps | bool | `true` | Resolve the dependencies for artifacts. | +| falcoctl.config.artifact.install.rulesfilesDir | string | `"/rulesfiles"` | Directory where the rulesfiles are saved. The path is relative to the container, which in this case is an emptyDir mounted also by the Falco pod. | +| falcoctl.config.indexes | list | `[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]` | List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see: https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview | +| falcoctl.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | +| falcoctl.image.registry | string | `"docker.io"` | The image registry to pull from.
| falcoctl.image.repository | string | `"falcosecurity/falcoctl"` | The image repository to pull from. | +| falcoctl.image.tag | string | `"0.9.0"` | The image tag to pull. | +| falcosidekick | object | `{"enabled":false,"fullfqdn":false,"listenPort":""}` | For configuration values, see https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml | +| falcosidekick.enabled | bool | `false` | Enable falcosidekick deployment. | +| falcosidekick.fullfqdn | bool | `false` | Enable usage of full FQDN of falcosidekick service (useful when a Proxy is used). | +| falcosidekick.listenPort | string | `""` | Listen port. Default value: 2801 | +| fullnameOverride | string | `""` | Same as nameOverride but for the fullname. | +| healthChecks | object | `{"livenessProbe":{"initialDelaySeconds":60,"periodSeconds":15,"timeoutSeconds":5},"readinessProbe":{"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}}` | Parameters used to configure the liveness and readiness probes of the Falco containers. | +| healthChecks.livenessProbe.initialDelaySeconds | int | `60` | Tells the kubelet that it should wait X seconds before performing the first probe. | +| healthChecks.livenessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every x seconds. | +| healthChecks.livenessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. | +| healthChecks.readinessProbe.initialDelaySeconds | int | `30` | Tells the kubelet that it should wait X seconds before performing the first probe. | +| healthChecks.readinessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every x seconds. | +| healthChecks.readinessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. | +| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | +| image.registry | string | `"docker.io"` | The image registry to pull from.
| image.repository | string | `"falcosecurity/falco-no-driver"` | The image repository to pull from. | +| image.tag | string | `""` | The image tag to pull. Overrides the image tag whose default is the chart appVersion. | +| imagePullSecrets | list | `[]` | Secrets containing credentials when pulling from private/secure registries. | +| metrics | object | `{"convertMemoryToMB":true,"enabled":false,"includeEmptyValues":false,"interval":"1h","kernelEventCountersEnabled":true,"libbpfStatsEnabled":true,"outputRule":false,"resourceUtilizationEnabled":true,"rulesCountersEnabled":true,"service":{"create":true,"ports":{"metrics":{"port":8765,"protocol":"TCP","targetPort":8765}},"type":"ClusterIP"},"stateCountersEnabled":true}` | metrics configures Falco to enable and expose the metrics. | +| metrics.convertMemoryToMB | bool | `true` | convertMemoryToMB specifies whether the memory should be converted to MB. | +| metrics.enabled | bool | `false` | enabled specifies whether the metrics should be enabled. | +| metrics.includeEmptyValues | bool | `false` | includeEmptyValues specifies whether the empty values should be included in the metrics. | +| metrics.interval | string | `"1h"` | interval is the stats interval; Falco follows the time duration definitions used by Prometheus. https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations Time durations are specified as a number, followed immediately by one of the following units: ms - millisecond s - second m - minute h - hour d - day - assuming a day has always 24h w - week - assuming a week has always 7d y - year - assuming a year has always 365d Example of a valid time duration: 1h30m20s10ms A minimum interval of 100ms is enforced for metric collection.
However, for production environments, we recommend selecting one of the following intervals for optimal monitoring: 15m 30m 1h 4h 6h | +| metrics.libbpfStatsEnabled | bool | `true` | libbpfStatsEnabled exposes statistics similar to `bpftool prog show`, providing information such as the number of invocations of each BPF program attached by Falco and the time spent in each program measured in nanoseconds. To enable this feature, the kernel must be >= 5.1, and the kernel configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, or an equivalent statistics feature, is not available for non `*bpf*` drivers. Additionally, please be aware that the current implementation of `libbpf` does not support granularity of statistics at the bpf tail call level. | +| metrics.outputRule | bool | `false` | outputRule enables seamless metrics and performance monitoring; we recommend emitting metrics as the rule "Falco internal: metrics snapshot". This option is particularly useful when Falco logs are preserved in a data lake. Please note that to use this option, the Falco rules config `priority` must be set to `info` at a minimum. | +| metrics.resourceUtilizationEnabled | bool | `true` | resourceUtilizationEnabled: Emit CPU and memory usage metrics. CPU usage is reported as a percentage of one CPU and can be normalized to the total number of CPUs to determine overall usage. Memory metrics are provided in raw units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) and can be uniformly converted to megabytes (MB) using the `convert_memory_to_mb` functionality. In environments such as Kubernetes when deployed as a daemonset, it is crucial to track Falco's container memory usage. To customize the path of the memory metric file, you can create an environment variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path.
By default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to monitor container memory usage, which aligns with Kubernetes' `container_memory_working_set_bytes` metric. Finally, we emit the overall host CPU and memory usages, along with the total number of processes and open file descriptors (fds) on the host, obtained from the proc file system unrelated to Falco's monitoring. These metrics help assess Falco's usage in relation to the server's workload intensity. | +| metrics.rulesCountersEnabled | bool | `true` | rulesCountersEnabled specifies whether the counts for each rule should be emitted. | +| metrics.service | object | `{"create":true,"ports":{"metrics":{"port":8765,"protocol":"TCP","targetPort":8765}},"type":"ClusterIP"}` | service exposes the metrics service to be accessed from within the cluster. ref: https://kubernetes.io/docs/concepts/services-networking/service/ | +| metrics.service.create | bool | `true` | create specifies whether a service should be created. | +| metrics.service.ports | object | `{"metrics":{"port":8765,"protocol":"TCP","targetPort":8765}}` | ports denotes all the ports on which the Service will listen. | +| metrics.service.ports.metrics | object | `{"port":8765,"protocol":"TCP","targetPort":8765}` | metrics denotes a listening service named "metrics". | +| metrics.service.ports.metrics.port | int | `8765` | port is the port on which the Service will listen. | +| metrics.service.ports.metrics.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. | +| metrics.service.ports.metrics.targetPort | int | `8765` | targetPort is the port on which the Pod is listening. | +| metrics.service.type | string | `"ClusterIP"` | type denotes the service type. Setting it to "ClusterIP" ensures that the metrics are accessible from within the cluster.
| mounts.enforceProcMount | bool | `false` | By default, `/proc` from the host is only mounted into the Falco pod when `driver.enabled` is set to `true`. This flag allows overriding this behaviour for edge cases where `/proc` is needed but the syscall data source is not enabled at the same time (e.g. for specific plugins). | +| mounts.volumeMounts | list | `[]` | A list of volume mounts you want to add to the Falco pods. | +| mounts.volumes | list | `[]` | A list of volumes you want to add to the Falco pods. | +| nameOverride | string | `""` | Put here the new name if you want to override the release name used for Falco components. | +| namespaceOverride | string | `""` | Override the deployment namespace | +| nodeSelector | object | `{}` | Selectors used to deploy Falco on a given node/nodes. | +| podAnnotations | object | `{}` | Add additional pod annotations | +| podLabels | object | `{}` | Add additional pod labels | +| podPriorityClassName | string | `nil` | Set pod priorityClassName | +| podSecurityContext | object | `{}` | Set securityContext for the pods. These security settings are overridden by the ones specified for the specific containers when there is overlap. | +| rbac.create | bool | `true` | | +| resources.limits | object | `{"cpu":"1000m","memory":"1024Mi"}` | Maximum amount of resources that the Falco container could get. If you are enabling more than one source in Falco, then consider increasing the CPU limits. | +| resources.requests | object | `{"cpu":"100m","memory":"512Mi"}` | Although the resources needed depend on the actual workload, we provide sane defaults. If you have more questions or concerns, please refer to the #falco Slack channel for more info. | +| scc.create | bool | `true` | Create OpenShift's Security Context Constraint. | +| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. | +| serviceAccount.create | bool | `true` | Specifies whether a service account should be created.
| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template. | +| serviceMonitor | object | `{"create":false,"endpointPort":"metrics","interval":"15s","labels":{},"path":"/metrics","relabelings":[],"scheme":"http","scrapeTimeout":"10s","selector":{},"targetLabels":[],"tlsConfig":{}}` | serviceMonitor holds the configuration for the ServiceMonitor CRD. A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should discover and scrape metrics from the Falco service. | +| serviceMonitor.create | bool | `false` | create specifies whether a ServiceMonitor CRD should be created for a prometheus operator. https://github.com/coreos/prometheus-operator Enable it only if the ServiceMonitor CRD is installed in your cluster. | +| serviceMonitor.endpointPort | string | `"metrics"` | endpointPort is the port in the Falco service that exposes the metrics service. Change the value if you deploy a custom service for Falco's metrics. | +| serviceMonitor.interval | string | `"15s"` | interval specifies the time interval at which Prometheus should scrape metrics from the service. | +| serviceMonitor.labels | object | `{}` | labels set of labels to be applied to the ServiceMonitor resource. If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right label here in order for the ServiceMonitor to be selected for target discovery. | +| serviceMonitor.path | string | `"/metrics"` | path at which the metrics are exposed by Falco. | +| serviceMonitor.relabelings | list | `[]` | relabelings configures the relabeling rules to apply to the target's metadata labels. | +| serviceMonitor.scheme | string | `"http"` | scheme specifies the network protocol used by the metrics endpoint. In this case HTTP.
| +| serviceMonitor.scrapeTimeout | string | `"10s"` | scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for that target. | +| serviceMonitor.selector | object | `{}` | selector set of labels that should match the labels on the Service targeted by the current serviceMonitor. | +| serviceMonitor.targetLabels | list | `[]` | targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics. | +| serviceMonitor.tlsConfig | object | `{}` | tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when scraping metrics from a service. It allows you to define the details of the TLS connection, such as CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support TLS configuration for the metrics endpoint. | +| services | string | `nil` | Network services configuration (scenario requirement) Add here your services to be deployed together with Falco. | +| tolerations | list | `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]` | Tolerations to allow Falco to run on Kubernetes masters. | +| tty | bool | `false` | Attach the Falco process to a tty inside the container. Needed to flush Falco logs as soon as they are emitted. Set it to "true" when you need the Falco logs to be immediately displayed. 
| diff --git a/charts/falco/falco/charts/falco/BREAKING-CHANGES.md b/charts/falco/falco/charts/falco/BREAKING-CHANGES.md new file mode 100644 index 000000000..5881962a1 --- /dev/null +++ b/charts/falco/falco/charts/falco/BREAKING-CHANGES.md @@ -0,0 +1,230 @@ +# Helm chart Breaking Changes + - [4.0.0](#400) + - [Drivers](#drivers) + - [K8s Collector](#k8s-collector) + - [Plugins](#plugins) + - [3.0.0](#300) + - [Falcoctl](#falcoctl-support) + - [Rulesfiles](#rulesfiles) + - [Falco Images](#drop-support-for-falcosecurityfalco-image) + - [Driver Loader Init Container](#driver-loader-simplified-logic) + +## 4.0.0 +### Drivers +The `driver` section has been reworked based on the following PR: https://github.com/falcosecurity/falco/pull/2413. +It is an attempt to make uniform how a driver is configured in Falco. +It also groups the configuration based on the driver type. +Some of the drivers have been renamed: +* the kernel module has been renamed from `module` to `kmod`; +* the ebpf probe has not been changed. It's still `ebpf`; +* the modern ebpf probe has been renamed from `modern-bpf` to `modern_ebpf`. + +The `gvisor` configuration has been moved under the `driver` section since it is considered a driver on its own. + +### K8s Collector +The old Kubernetes client has been removed in Falco 0.37.0. For more info check out this issue: https://github.com/falcosecurity/falco/issues/2973#issuecomment-1877803422. +The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) and [k8s-meta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) replace +the old implementation. + +The following resources needed by Falco to connect to the API server are no longer needed and have been removed from the chart: +* service account; +* cluster role; +* cluster role binding.
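The driver rework described above can be sketched as a `values.yaml` fragment. This is an illustrative sketch only: the renamed values follow the list above, but check the chart's own `values.yaml` for the authoritative layout of the sub-options.

```yaml
# Illustrative fragment, assuming the 4.x chart's driver layout.
driver:
  enabled: true
  # One of: kmod (formerly "module"), ebpf (unchanged),
  # modern_ebpf (formerly "modern-bpf"), gvisor (now grouped under driver).
  kind: modern_ebpf
```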
+ +When `collectors.kubernetes` is enabled, the chart deploys the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) and configures Falco to load the +[k8s-meta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin. + +By default, `collectors.kubernetes.enabled` is off; for more info, see the following issue: https://github.com/falcosecurity/falco/issues/2995. + +### Plugins +The Falco docker image no longer ships the plugins: https://github.com/falcosecurity/falco/pull/2997. +For this reason, `resolveDeps` is now enabled in relevant values files (i.e. `values-k8saudit.yaml`). +When installing `rulesfile` artifacts `falcoctl` will try to resolve their dependencies and install the required plugins. + +## 3.0.0 +The new chart deploys new *k8s* resources and new configuration variables have been added to the `values.yaml` file. People upgrading the chart from `v2.x.y` have to port their configuration variables to the new `values.yaml` file used by the `v3.0.0` chart. + +If you still want to use the old values, because you do not want to take advantage of the new and shiny **falcoctl** tool, then just run: +```bash= +helm upgrade falco falcosecurity/falco \ + --namespace=falco \ + --reuse-values \ + --set falcoctl.artifact.install.enabled=false \ + --set falcoctl.artifact.follow.enabled=false +``` +This way you will upgrade Falco to `v0.34.0`. + +**NOTE**: The new version of Falco itself, installed by the chart, does not introduce breaking changes. You can port your previous Falco configuration to the new `values.yaml` by copy-pasting it. + + +### Falcoctl support + +[Falcoctl](https://github.com/falcosecurity/falcoctl) is a new tool born to automate operations when deploying Falco. + +Before `v3.0.0` of the chart, *rulesfiles* and *plugins* were shipped bundled in the Falco docker image. This precluded updating the *rulesfiles* and *plugins* until a new version of Falco was released.
Operators had to manually update the *rulesfiles* or add new *plugins* to Falco. The process was cumbersome and error-prone. Operators had to create their own Falco docker images with the new plugins baked into them or wait for a new Falco release. + +Starting from the `v3.0.0` chart release, we add support for **falcoctl** in the charts. Deploying it alongside Falco allows you to: +- *install* artifacts of the Falco ecosystem (i.e. plugins and rules at the moment of writing) +- *follow* those artifacts (only *rulesfile* artifacts are recommended), to keep them up-to-date with the latest releases of the Falcosecurity organization. This allows, for instance, to update rules detecting new vulnerabilities or security issues without the need to redeploy Falco. + +The chart deploys *falcoctl* using an *init container* and/or *sidecar container*. The first one is used to install artifacts and make them available to Falco at start-up time, the latter runs alongside Falco and updates the local artifacts when new updates are detected. + + Based on your deployment scenario: + +1. Falco without *plugins* and you just want to upgrade to the new Falco version: + ```bash= + helm upgrade falco falcosecurity/falco \ + --namespace=falco \ + --reuse-values \ + --set falcoctl.artifact.install.enabled=false \ + --set falcoctl.artifact.follow.enabled=false + ``` + When upgrading an existing release, *helm* uses the new chart version. Since we added new template files and changed the values schema (added new parameters) we explicitly disable the **falcoctl** tool. By doing so, the command will reuse the existing configuration but will deploy Falco version `0.34.0`. + +2. Falco without *plugins* and you want to automatically get new *falco-rules* as soon as they are released: + ```bash= + helm upgrade falco falcosecurity/falco \ + --namespace=falco + ``` + Helm first applies the values coming from the new chart version, then overrides them using the values of the previous release.
The outcome is a new release of Falco that: + * uses the previous configuration; + * runs Falco version `0.34.0`; + * uses **falcoctl** to install and automatically update the [*falco-rules*](https://github.com/falcosecurity/rules/); + * checks for new updates every 6h (default value). + + +3. Falco with *plugins* and you just want to upgrade Falco: + ```bash= + helm upgrade falco falcosecurity/falco \ + --namespace=falco \ + --reuse-values \ + --set falcoctl.artifact.install.enabled=false \ + --set falcoctl.artifact.follow.enabled=false + ``` + Very similar to scenario 1. +4. Falco with plugins and you want to use **falcoctl** to download the plugins' *rulesfiles*: + * Save **falcoctl** configuration to file: + ```yaml= + cat << EOF > ./falcoctl-values.yaml + #################### + # falcoctl config # + #################### + falcoctl: + image: + # -- The image pull policy. + pullPolicy: IfNotPresent + # -- The image registry to pull from. + registry: docker.io + # -- The image repository to pull from. + repository: falcosecurity/falcoctl + # -- Overrides the image tag whose default is the chart appVersion. + tag: "main" + artifact: + # -- Runs "falcoctl artifact install" command as an init container. It is used to install artifacts before + # Falco starts. It provides them to Falco by using an emptyDir volume. + install: + enabled: true + # -- Extra environment variables that will be passed to the falcoctl-artifact-install init container. + env: {} + # -- Arguments to pass to the falcoctl-artifact-install init container. + args: ["--verbose"] + # -- Resources requests and limits for the falcoctl-artifact-install init container. + resources: {} + # -- Security context for the falcoctl init container. + securityContext: {} + # -- Runs "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for + # updates given a list of artifacts.
If an update is found it downloads and installs it in a shared folder (emptyDir) + # that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the + # correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks if it is compatible + # with the running version of Falco before installing it. + follow: + enabled: true + # -- Extra environment variables that will be passed to the falcoctl-artifact-follow sidecar container. + env: {} + # -- Arguments to pass to the falcoctl-artifact-follow sidecar container. + args: ["--verbose"] + # -- Resources requests and limits for the falcoctl-artifact-follow sidecar container. + resources: {} + # -- Security context for the falcoctl-artifact-follow sidecar container. + securityContext: {} + # -- Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers. + config: + # -- List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see: + # https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview + indexes: + - name: falcosecurity + url: https://falcosecurity.github.io/falcoctl/index.yaml + # -- Configuration used by the artifact commands. + artifact: + + # -- List of artifact types that falcoctl will handle. If the configured refs resolve to an artifact whose type is not contained + # in the list it will refuse to download and install that artifact. + allowedTypes: + - rulesfile + install: + # -- Do not resolve the dependencies for artifacts. By default it is true, but for our use case we disable it. + resolveDeps: false + # -- List of artifacts to be installed by the falcoctl init container. + refs: [k8saudit-rules:0.5] + # -- Directory where the *rulesfiles* are saved. The path is relative to the container, which in this case is an emptyDir + # mounted also by the Falco pod.
+         rulesfilesDir: /rulesfiles
+         # -- Same as the one above but for the *plugins*.
+         pluginsDir: /plugins
+       follow:
+         # -- List of artifacts to be followed by the falcoctl sidecar container.
+         refs: [k8saudit-rules:0.5]
+         # -- Directory where the *rulesfiles* are saved. The path is relative to the container, which in this case is an emptyDir
+         # mounted also by the Falco pod.
+         rulesfilesDir: /rulesfiles
+         # -- Same as the one above but for the *plugins*.
+         pluginsDir: /plugins
+ EOF
+ ```
+ * Set `falcoctl.artifact.install.enabled=true` to install the *rulesfiles* of the loaded plugins. Configure **falcoctl** to install the *rulesfiles* of the plugins you are loading with Falco. For example, if you are loading the **k8saudit** plugin then you need to set `falcoctl.config.artifact.install.refs=[k8saudit-rules:0.5]`. When Falco is deployed, the **falcoctl** init container will download the specified artifacts based on their tag.
+ * Set `falcoctl.artifact.follow.enabled=true` to keep the *rulesfiles* of the loaded plugins up to date.
+ * Proceed to upgrade your Falco release by running:
+ ```bash
+ helm upgrade falco falcosecurity/falco \
+ --namespace=falco \
+ --reuse-values \
+ --values=./falcoctl-values.yaml
+ ```
+5. Falco with **multiple sources** enabled (syscalls + plugins):
+ 1. Upgrading Falco to the new version:
+ ```bash
+ helm upgrade falco falcosecurity/falco \
+ --namespace=falco \
+ --reuse-values \
+ --set falcoctl.artifact.install.enabled=false \
+ --set falcoctl.artifact.follow.enabled=false
+ ```
+ 2. Upgrading Falco and leveraging **falcoctl** for rules and plugins. Refer to point 4. for the **falcoctl** configuration.
+
+
+### Rulesfiles
+Starting from `v0.3.0`, the chart drops the bundled **rulesfiles**.
Previous versions used to create a configmap containing the following **rulesfiles**:
+* application_rules.yaml
+* aws_cloudtrail_rules.yaml
+* falco_rules.local.yaml
+* falco_rules.yaml
+* k8s_audit_rules.yaml
+
+The reason why we are dropping them is pretty simple: the files are already shipped within the Falco image and do not add any benefit, while we had to manually update them for each Falco release.
+
+Do not worry, we have you covered. As said before, the **rulesfiles** are already shipped inside
+the Falco image. Still, this solution has some drawbacks, such as users having to wait for the next release of Falco
+to get the latest version of those **rulesfiles**, or having to update them manually by using the
+[custom rules](./README.md#loading-custom-rules).
+
+We came up with a better solution: **falcoctl**. Users can configure the **falcoctl** tool to fetch and install the latest **rulesfiles** as provided by the *falcosecurity* organization. For more info, please check the **falcoctl** section.
+
+**NOTE**: if you used to customize those bundled files before deploying Falco, please switch to using the
+[custom rules](./README.md#loading-custom-rules).
+
+### Drop support for `falcosecurity/falco` image
+
+Starting from version `v2.0.0` of the chart, `falcosecurity/falco-no-driver` is the default image. The `falcosecurity/falco` image was still supported in `v2.0.0`, but `v2.2.0` broke the chart when using it. For more info please check out the following issue: https://github.com/falcosecurity/charts/issues/419
+
+#### Driver-loader simplified logic
+There is only one switch to **enable/disable** the driver-loader init container: `driver.loader.enabled=true`. This simplification comes as a direct consequence of dropping support for the `falcosecurity/falco` image.
For more info: https://github.com/falcosecurity/charts/issues/418 diff --git a/charts/falco/falco/charts/falco/CHANGELOG.md b/charts/falco/falco/charts/falco/CHANGELOG.md index 36160174e..2f513295a 100644 --- a/charts/falco/falco/charts/falco/CHANGELOG.md +++ b/charts/falco/falco/charts/falco/CHANGELOG.md @@ -3,6 +3,373 @@ This file documents all notable changes to Falco Helm Chart. The release numbering uses [semantic versioning](http://semver.org). +## v4.8.3 + +* The init container, when driver.kind=auto, automatically generates + a new Falco configuration file and selects the appropriate engine + kind based on the environment where Falco is deployed. + + With this commit, along with falcoctl PR #630, the Helm charts now + support different driver kinds for Falco instances based on the + specific node they are running on. When driver.kind=auto is set, + each Falco instance dynamically selects the most suitable + driver (e.g., ebpf, kmod, modern_ebpf) for the node. + +-------------------------------------------------------+ + | Kubernetes Cluster | + | | + | +-------------------+ +-------------------+ | + | | Node 1 | | Node 2 | | + | | | | | | + | | Falco (ebpf) | | Falco (kmod) | | + | +-------------------+ +-------------------+ | + | | + | +-------------------+ | + | | Node 3 | | + | | | | + | | Falco (modern_ebpf)| | + | +-------------------+ | + +-------------------------------------------------------+ + +## v4.8.2 + +* fix(falco): correctly mount host filesystems when driver.kind is auto + + When falco runs with kmod/module driver it needs special filesystems + to be mounted from the host such /dev and /sys/module/falco. + This commit ensures that we mount them in the falco container. + + Note that, the /sys/module/falco is now mounted as /sys/module since + we do not know which kind of driver will be used. 
The falco folder + exists under /sys/module only when the kernel module is loaded, + hence it's not possible to use the /sys/module/falco hostpath when driver.kind + is set to auto. + +## v4.8.1 + +* fix(falcosidekick): add support for custom service type for webui redis + +## v4.8.0 + +* Upgrade Falco version to 0.38.2 + +## v4.7.2 + +* use rules_files key in the preset values files + +## v4.7.1 + +* fix(falco/config): use rules_files instead of deprecated key rules_file + +## v4.7.0 + +* bump k8smeta plugin to version 0.2.0. The new version, resolves a bug that prevented the plugin + from populating the k8smeta fields. For more info see: + * https://github.com/falcosecurity/plugins/issues/514 + * https://github.com/falcosecurity/plugins/pull/517 + +## v4.6.3 + +* fix(falco): mount client-certs-volume only if certs.existingClientSecret is defined + +## v4.6.2 + +* bump falcosidekick dependency to v0.8.* to match with future versions + +## v4.6.1 + +* bump falcosidekick dependency to v0.8.2 (fixes bug when using externalRedis in UI) + +## v4.6.0 + +* feat(falco): add support for Falco metrics + +## v4.5.2 + +* bump falcosidekick dependency version to v0.8.0, for falcosidekick 2.29.0 + +## v4.5.2 + +* reording scc configuration, making it more robust to plain yaml comparison + +## v4.5.1 + +* falco is now able to reconnect to containerd.socket + +## v4.5.0 + +* bump Falco version to 0.38.1 + +## v4.4.3 + +* Added a `labels` field in the controller to provide extra labeling for the daemonset/deployment + +## v4.4.2 + +* fix wrong check in pod template where `existingSecret` was used instead of `existingClientSecret` + +## v4.4.1 + +* bump k8s-metacollector dependency version to v0.1.1. See: https://github.com/falcosecurity/k8s-metacollector/releases + +## v4.3.1 + +* bump falcosidekick dependency version to v0.7.19 install latest version through falco chart + +## v4.3.0 + +* `FALCO_HOSTNAME` and `HOST_ROOT` are now set by default in pods configuration. 
+ +## v4.2.6 + +* bump falcosidekick dependency version to v0.7.17 install latest version through falco chart + +## v4.2.5 + +* fix docs + +## v4.2.4 + +* bump falcosidekick dependency version to v0.7.15 install latest version through falco chart + +## v4.2.3 + +* fix(falco/helpers): adjust formatting to be compatible with older helm versions + +## v4.2.2 + +* fix(falco/README): dead link + +## v4.2.1 +* fix(falco/README): typos, formatting and broken links + +## v4.2.0 + +* Bump falco to v0.37.1 and falcoctl to v0.7.2 + +## v4.1.2 +* Fix links in output after falco install without sidekick + +## v4.1.1 + +* Update README.md. + +## v4.1.0 + +* Reintroduce the service account. + +## v4.0.0 +The new chart introduces some breaking changes. For folks upgrading Falco please see the BREAKING-CHANGES.md file. + +* Uniform driver names and configuration to the Falco one: https://github.com/falcosecurity/falco/pull/2413; +* Fix usernames and groupnames resolution by mounting the `/etc` filesystem; +* Drop old kubernetes collector related resources; +* Introduce the new k8s-metacollector and k8smeta plugin (experimental); +* Enable the dependency resolver for artifacts in falcoctl since the Falco image does not ship anymore the plugins; +* Bump Falco to 0.37.0; +* Bump falcoctl to 0.7.0. + +## v3.8.7 + +* Upgrade falcosidekick chart to `v0.7.11`. + +## v3.8.6 + +* no changes to the chart itself. Updated README.md and makefile. + +## v3.8.5 + +* Add mTLS cryptographic material load via Helm for Falco + +## v3.8.4 + +* Upgrade Falco to 0.36.2: https://github.com/falcosecurity/falco/releases/tag/0.36.2 + +## v3.8.3 + +* Upgrade falcosidekick chart to `v0.7.7`. + +## v3.8.2 + +* Upgrade falcosidekick chart to `v0.7.6`. + +## v3.8.1 + +* noop change just to test the ci + +## v3.8.0 + +* Upgrade Falco to 0.36.1: https://github.com/falcosecurity/falco/releases/tag/0.36.1 +* Sync values.yaml with 0.36.1 falco.yaml config file. 
+ +## v3.7.1 + +* Update readme + +## v3.7.0 + +* Upgrade Falco to 0.36. https://github.com/falcosecurity/falco/releases/tag/0.36.0 +* Sync values.yaml with upstream falco.yaml config file. +* Upgrade falcoctl to 0.6.2. For more info see the release notes: https://github.com/falcosecurity/falcoctl/releases/tag/v0.6.2 + +## v3.6.2 + +* Cleanup wrong files + +## v3.6.1 + +* Upgrade falcosidekick chart to `v0.7.1`. + +## v3.6.0 + +* Add `outputs` field to falco configuration + +## v3.5.0 + +## Major Changes + +* Support configuration of revisionHistoryLimit of the deployment + +## v3.4.1 + +* Upgrade falcosidekick chart to `v0.6.3`. + +## v3.4.0 + +* Introduce an ability to use an additional volumeMounts for `falcoctl-artifact-install` and `falcoctl-artifact-follow` containers. + +## v3.3.1 + +* No changes made to the falco chart, only some fixes in the makefile + +## v3.3.0 +* Upgrade Falco to 0.35.1. For more info see the release notes: https://github.com/falcosecurity/falco/releases/tag/0.35.1 +* Upgrade falcoctl to 0.5.1. For more info see the release notes: https://github.com/falcosecurity/falcoctl/releases/tag/v0.5.1 +* Introduce least privileged mode in modern ebpf. For more info see: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-2 + +## v3.2.1 +* Set falco.http_output.url to empty string in values.yaml file + +## v3.2.0 +* Upgrade Falco to 0.35.0. For more info see the release notes: https://github.com/falcosecurity/falco/releases/tag/0.35.0 +* Sync values.yaml with upstream falco.yaml config file. +* Upgrade falcoctl to 0.5.0. 
For more info see the release notes: https://github.com/falcosecurity/falcoctl/releases/tag/v0.5.0 +* The tag used to install and follow the falco rules is `1` +* The tag used to install and follow the k8saudit rules is `0.6` + +## v3.1.5 + +* Use list as default for env parameter of init and follow containers + +## v3.1.4 + +* Fix typo in values-k8audit file + +## v3.1.3 + +* Updates the grpc-service to use the correct label selector + +## v3.1.2 + +* Bump `falcosidekick` dependency to 0.6.1 + +## v3.1.1 +* Update `k8saudit` section in README.md file. + +## v3.1.0 +* Upgrade Falco to 0.34.1 + +## v3.0.0 +* Drop support for falcosecuriy/falco image, only the init container approach is supported out of the box; +* Simplify the driver-loader init container logic; +* Support **falcoctl** tool in the chart: + * Install the *rulesfile* artifacts; + * Follow the *rulesfile* artifacts in order to have the latest rules once they are released from falcosecurity org; +* Support the **modern-bpf** probe a new driver (experimental) +* Add a new file *BREAKING_CHANGES.md* to document the breaking changes and how to update the new chart. + +## v2.5.5 + +* Bump `falcosidekick` dependency to 0.5.16 + +## v2.5.4 + +* Fix incorrect entry in v2.5.2 changelog + +## v2.5.3 + +* Bump `falcosidekick` dependency to 0.5.14 + +## v2.5.2 + +* Fixed notes template to only include daemon set info if set to daemon set + +## v2.5.1 + +* Update README to clarify driver behavior for chart + +## v2.5.0 + +* Support custom dictionaries when setting environment variables + +Note: this is a breaking change. If you were passing _objects_ to `extra.env` or `driver.loader.initContainer.env` , you will need to update your values file to pass _lists_. 
+ +## v2.4.7 + +* Add `controller.annotations` configuration + +## v2.4.6 + +* Bump `falcosidekick` dependency to 0.5.11 + +## v2.4.5 + +* Bump `falcosidekick` dependency to 0.5.10 + +## v2.4.4 + +* Update README for gRPC + +## v2.4.3 + +* Update README for gVisor and GKE + +## v2.4.2 + +* Add toleration for node-role.kubernetes.io/control-plane + +## v2.4.1 + +* Fixed error in values.yaml comments + +## v2.4.0 + +* Add support for Falco+gVisor +* Add new preset `values.yaml `file for gVisor-enabled GKE clusters + +## v2.3.1 + +* Fixed incorrect spelling of `been` + +## v2.3.0 + +* Add variable namespaceOverride to allow setting release namespace in values + +## v2.2.0 + +* Change the grpc socket path from `unix:///var/run/falco/falco.soc` to `unix:///run/falco/falco.sock`. Please note that this change is potentially a breaking change if upgrading falco from a previous version and you have external consumers of the grpc socket. + +## v2.1.0 + +* Bump Falco to 0.33.0 +* Implicitly disable `syscall` source when not required +* Update `values.yaml` to reflect the new configuration options in Falco 0.33.0 +* Mount `/sys/module/falco` when deployed using the `kernel module` +* Update rulesets for falco and plugins + +## v2.0.18 + +* Bump `falcosidekick` dependency to 0.5.9 + ## v2.0.17 * Fix: remove `namespace` from `clusterrole` and `clusterrolebinding` metadata @@ -134,7 +501,7 @@ update(falco/OWNERS): move inactive approvers to emeritus_approvers ## v1.18.5 * Bump falcosidekick chart dependency - + ## v1.18.4 * Now the url to falcosidekick on NOTES.txt on falco helm chart points to the right place. 
@@ -367,7 +734,7 @@ Remove whitespace around `falco.httpOutput.url` to fix the error `libcurl error: ### Minor Changes -* Upgrade to Falco 0.26.2, `DRIVERS_REPO` now defaults to https://download.falco.org/driver (see the [Falco changelog](https://github.com/falcosecurity/falco/blob/0.26.2/CHANGELOG.md)) +* Upgrade to Falco 0.26.2, `DRIVERS_REPO` now defaults to https://download.falco.org/?prefix=driver/ (see the [Falco changelog](https://github.com/falcosecurity/falco/blob/0.26.2/CHANGELOG.md)) ## v1.5.3 diff --git a/charts/falco/falco/charts/falco/Chart.lock b/charts/falco/falco/charts/falco/Chart.lock index 20ad8791f..740c169f7 100644 --- a/charts/falco/falco/charts/falco/Chart.lock +++ b/charts/falco/falco/charts/falco/Chart.lock @@ -1,6 +1,9 @@ dependencies: - name: falcosidekick repository: https://falcosecurity.github.io/charts - version: 0.5.4 -digest: sha256:4e6b91d415d70b7bc81b6a511d4b6ee8ee59548abf9129d7a264d803cf22f005 -generated: "2022-09-01T09:31:31.733304784Z" + version: 0.8.5 +- name: k8s-metacollector + repository: https://falcosecurity.github.io/charts + version: 0.1.10 +digest: sha256:d73d0fdbe32a9efabcc18d232be2d34bfdb94d11a5226e371fc487abced793c6 +generated: "2024-09-17T09:27:12.371428917Z" diff --git a/charts/falco/falco/charts/falco/Chart.yaml b/charts/falco/falco/charts/falco/Chart.yaml index 425fdea92..83dbe6bbb 100644 --- a/charts/falco/falco/charts/falco/Chart.yaml +++ b/charts/falco/falco/charts/falco/Chart.yaml @@ -1,10 +1,14 @@ apiVersion: v2 -appVersion: 0.32.2 +appVersion: 0.38.2 dependencies: - condition: falcosidekick.enabled name: falcosidekick repository: https://falcosecurity.github.io/charts - version: 0.5.4 + version: 0.8.* +- condition: collectors.kubernetes.enabled + name: k8s-metacollector + repository: https://falcosecurity.github.io/charts + version: 0.1.* description: Falco home: https://falco.org icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/falco/horizontal/color/falco-horizontal-color.svg @@ 
-21,4 +25,4 @@ maintainers: name: falco sources: - https://github.com/falcosecurity/falco -version: 2.0.17 +version: 4.8.3 diff --git a/charts/falco/falco/charts/falco/Makefile b/charts/falco/falco/charts/falco/Makefile deleted file mode 100644 index f27e13a84..000000000 --- a/charts/falco/falco/charts/falco/Makefile +++ /dev/null @@ -1,25 +0,0 @@ -#generate helm documentation -DOCS_IMAGE_VERSION="v1.11.0" - -#Here we use the "latest" tag since our CI uses the same(https://github.com/falcosecurity/charts/blob/2f04bccb5cacbbf3ecc2d2659304b74f865f41dd/.circleci/config.yml#L16). -LINT_IMAGE_VERSION="latest" - -docs: - docker run \ - --rm \ - --workdir=/helm-docs \ - --volume "$$(pwd):/helm-docs" \ - -u $$(id -u) \ - jnorwood/helm-docs:$(DOCS_IMAGE_VERSION) \ - helm-docs -t ./README.gotmpl -o ./generated/helm-values.md - -lint: helm-repo-update - docker run \ - -it \ - --workdir=/data \ - --volume $$(pwd)/..:/data \ - quay.io/helmpack/chart-testing:latest \ - ct lint --config ./tests/ct.yaml --charts ./falco --chart-dirs . - -helm-repo-update: - helm repo update diff --git a/charts/falco/falco/charts/falco/README.gotmpl b/charts/falco/falco/charts/falco/README.gotmpl index 74b446851..a50c32d0b 100644 --- a/charts/falco/falco/charts/falco/README.gotmpl +++ b/charts/falco/falco/charts/falco/README.gotmpl @@ -1,3 +1,589 @@ -# Configuration values for {{ template "chart.name" . }} chart -`Chart version: v{{ template "chart.version" . }}` +# Falco + +[Falco](https://falco.org) is a *Cloud Native Runtime Security* tool designed to detect anomalous activity in your applications. You can use Falco to monitor runtime security of your Kubernetes applications and internal components. + +## Introduction + +The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. 
Based on the configuration in the [values.yaml](./values.yaml) file, the chart will render and install the required k8s objects. Keep in mind that Falco could be deployed in your cluster using a `daemonset` or a `deployment`. See the next sections for more info.
+
+## Attention
+
+Before installing Falco in a Kubernetes cluster, a user should check that the kernel version used in the nodes is supported by the community. Also, before reporting any issue with Falco (missing kernel image, CrashLoopBackOff and similar), make sure to read the [about the driver](#about-the-driver) section and adjust your setup as required.
+
+## Adding `falcosecurity` repository
+
+Before installing the chart, add the `falcosecurity` charts repository:
+
+```bash
+helm repo add falcosecurity https://falcosecurity.github.io/charts
+helm repo update
+```
+
+## Installing the Chart
+
+To install the chart with the release name `falco` in namespace `falco` run:
+
+```bash
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco
+```
+
+After a few minutes Falco instances should be running on all your nodes. The status of Falco pods can be inspected through *kubectl*:
+```bash
+kubectl get pods -n falco -o wide
+```
+If everything went smoothly, you should observe an output similar to the following, indicating that all Falco instances are up and running in your cluster:
+
+```bash
+NAME          READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
+falco-57w7q   1/1     Running   0          3m12s   10.244.0.1   control-plane
+falco-h4596   1/1     Running   0          3m12s   10.244.1.2   worker-node-1
+falco-kb55h   1/1     Running   0          3m12s   10.244.2.3   worker-node-2
+```
+The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in [values.yaml](./values.yaml) of our helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod in each node.
+> **Tip**: List Falco releases using `helm list -n falco`; a release is a name used to track a specific deployment.
+
+### Falco, Event Sources and Kubernetes
+Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel**, trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in Kubernetes clusters. Since Falco 0.32.0 all the code related to the k8s Audit Logs was removed from Falco and ported to a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). Currently, Falco supports different event sources coming from **plugins** or **drivers** (system events).
+
+Note that **a Falco instance can handle multiple event sources in parallel**: you can deploy Falco leveraging **drivers** for syscall events while at the same time loading **plugins**. A step-by-step guide on how to deploy Falco with multiple sources can be found [here](https://falco.org/docs/getting-started/third-party/learning/#falco-with-multiple-sources).
+
+#### About Drivers
+
+Falco needs a **driver** to analyze the system workload and pass security events to userspace. The supported drivers are:
+
+* [Kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module)
+* [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe)
+* [Modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe)
+
+The driver should be installed on the node where Falco is running. The _kernel module_ (default option) and the _eBPF probe_ are installed on the node through an *init container* (i.e. `falco-driver-loader`) that tries to download a prebuilt driver or builds it on the fly as a fallback.
The _Modern eBPF probe_ doesn't require an init container because it is shipped directly into the Falco binary. However, the _Modern eBPF probe_ requires [recent BPF features](https://falco.org/docs/event-sources/kernel/#modern-ebpf-probe).
+
+##### Pre-built drivers
+
+The [kernel-crawler](https://github.com/falcosecurity/kernel-crawler) automatically discovers kernel versions and flavors. Currently, it runs weekly. We have a site where users can check the discovered kernel flavors and versions, [example for Amazon Linux 2](https://falcosecurity.github.io/kernel-crawler/?arch=x86_64&target=AmazonLinux2).
+
+The discovery of a kernel version by the [kernel-crawler](https://falcosecurity.github.io/kernel-crawler/) does not imply that pre-built kernel modules and bpf probes are available. That is because once kernel-crawler has discovered new kernel versions, the drivers need to be built by jobs running on our [Driver Build Grid infra](https://github.com/falcosecurity/test-infra#dbg). Please keep in mind that the building process is best-effort. Users can check the existence of prebuilt modules at the following [link](https://download.falco.org/driver/site/index.html?lib=3.0.1%2Bdriver&target=all&arch=all&kind=all).
+
+##### Building the driver on the fly (fallback)
+
+If a prebuilt driver is not available for your distribution/kernel, users can build the driver themselves, or install the kernel headers on the nodes and let the init container (`falco-driver-loader`) build the driver on the fly.
+
+Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation.
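That prerequisite can be checked mechanically: the headers package must match the running kernel release exactly. A minimal sketch for Debian/Ubuntu nodes (RHEL-family nodes use `kernel-devel-$(uname -r)` instead):

```shell
# Determine the headers package the on-the-fly build would need on a
# Debian/Ubuntu node; the running kernel release drives the name.
KERNEL_RELEASE="$(uname -r)"
echo "required package: linux-headers-${KERNEL_RELEASE}"
# Install it on the node, then let falco-driver-loader retry:
# sudo apt-get update && sudo apt-get install -y "linux-headers-${KERNEL_RELEASE}"
```

After the headers are in place, restarting the Falco pod is enough for the init container to attempt the build again.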
+
+##### Selecting a different driver loader image
+
+Note that since Falco 0.36.0 and Helm chart version 3.7.0, the driver loader image has been updated to be compatible with newer kernels (5.x and above). This means that if you have an older kernel version and are trying to build the kernel module, you may experience issues. In that case, you can use the `falco-driver-loader-legacy` image to get the previous version of the toolchain. To do so, set the appropriate value, i.e. `--set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy`.
+
+#### About Plugins
+[Plugins](https://falco.org/docs/plugins/) are used to extend Falco to support new **data sources**. The current **plugin framework** supports *plugins* with the following *capabilities*:
+
+* Event sourcing capability;
+* Field extraction capability.
+
+Plugin capabilities are *composable*: a single plugin can have both capabilities, or we can load two different plugins, each with one capability, one plugin acting as a source of events and the other as an extractor. A good example of this is the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) and the [Falcosecurity Json](https://github.com/falcosecurity/plugins/tree/master/plugins/json) *plugins*. By deploying them both we have support for the **K8s Audit Logs** in Falco.
+
+Note that **the driver is not required when using plugins**.
+
+#### About gVisor
+gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. For more information please consult the [official docs](https://gvisor.dev/docs/). In version `0.32.1`, Falco first introduced support for gVisor by leveraging the stream of system call information coming from gVisor.
+Falco requires [runsc](https://gvisor.dev/docs/user_guide/install/) version `20220704.0` or above. The following snippet shows the gVisor configuration variables found in [values.yaml](./values.yaml):
+```yaml
+driver:
+  gvisor:
+    enabled: true
+    runsc:
+      path: /home/containerd/usr/local/sbin
+      root: /run/containerd/runsc
+      config: /run/containerd/runsc/config.toml
+```
+Falco uses the [runsc](https://gvisor.dev/docs/user_guide/install/) binary to interact with sandboxed containers. The following variables need to be set:
+* `runsc.path`: absolute path of the `runsc` binary in the k8s nodes;
+* `runsc.root`: absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores the information about the workloads it handles there;
+* `runsc.config`: absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence.
+
+If you want to know more about how Falco uses those configuration paths please have a look at the `falco.gvisor.initContainer` helper in [helpers.tpl](./templates/_helpers.tpl).
+A preset `values.yaml` file, [values-gvisor-gke.yaml](./values-gvisor-gke.yaml), is provided and can be used as is to deploy Falco with gVisor support in a [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods) cluster. It is also a good starting point for custom deployments.
+
+##### Example: running Falco on GKE, with or without gVisor-enabled pods
+
+If you use GKE with k8s version at least `1.24.4-gke.1800` or `1.25.0-gke.200` with gVisor sandboxed pods, you can install a Falco instance to monitor them with, e.g.:
+
+```bash
+helm install falco-gvisor falcosecurity/falco \
+    --create-namespace \
+    --namespace falco-gvisor \
+    -f https://raw.githubusercontent.com/falcosecurity/charts/master/charts/falco/values-gvisor-gke.yaml
+```
+
+Note that the instance of Falco above will only monitor gVisor sandboxed workloads on gVisor-enabled node pools. If you also need to monitor regular workloads on regular node pools you can use the eBPF driver as usual:
+
+```bash
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco \
+    --set driver.kind=ebpf
+```
+
+The two instances of Falco will operate independently and can be installed, uninstalled or configured as needed. If you were already monitoring your regular node pools with eBPF you don't need to reinstall it.
+
+##### Falco+gVisor additional resources
+An exhaustive blog post about Falco and gVisor can be found on the [Falco blog](https://falco.org/blog/intro-gvisor-falco/).
+If you need help on how to set up gVisor in your environment please have a look at the [gVisor official docs](https://gvisor.dev/docs/user_guide/quick_start/kubernetes/).
+
+### About Falco Artifacts
+Historically **rules files** and **plugins** used to be shipped inside the Falco docker image and/or inside the chart. Starting from version `v0.3.0` of the chart, the [**falcoctl tool**](https://github.com/falcosecurity/falcoctl) can be used to install/update **rules files** and **plugins**. When referring to such objects we will use the term **artifact**. For more info please check out the following [proposal](https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md).
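As a sketch of how artifact handling can be driven from the chart, the override below enables both falcoctl containers and pins an artifact ref. The file path and the `k8saudit-rules:0.5` ref are illustrative assumptions, not required values:

```shell
# Write a values override that lets falcoctl install and then follow
# the k8saudit rulesfiles (keys are the chart's falcoctl.* values;
# the ref and file path are assumptions of this sketch).
cat > /tmp/falcoctl-k8saudit-values.yaml << 'EOF'
falcoctl:
  artifact:
    install:
      enabled: true
    follow:
      enabled: true
  config:
    artifact:
      install:
        refs: [k8saudit-rules:0.5]
      follow:
        refs: [k8saudit-rules:0.5]
EOF
# Apply it on top of an existing release:
# helm upgrade falco falcosecurity/falco --namespace falco \
#   --reuse-values --values /tmp/falcoctl-k8saudit-values.yaml
echo "override refs: $(grep -c 'k8saudit-rules:0.5' /tmp/falcoctl-k8saudit-values.yaml)"
```

The same file shape works for any other artifact published in the configured indexes.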
+
+The default configuration of the chart for new installations is to use the **falcoctl** tool to handle **artifacts**. The chart will deploy two new containers alongside the Falco one:
+* `falcoctl-artifact-install`, an init container that makes sure to install the configured **artifacts** before the Falco container starts;
+* `falcoctl-artifact-follow`, a sidecar container that periodically checks for new artifacts (currently only *falco-rules*) and downloads them.
+
+For more info on how to enable/disable and configure the **falcoctl** tool, check out the config values [here](./README.md#Configuration) and the [upgrading notes](./BREAKING-CHANGES.md#300).
+
+### Deploying Falco in Kubernetes
+Having clarified the different [**event sources**](#falco-event-sources-and-kubernetes) and how Falco consumes them through **drivers** and **plugins**, let us now discuss how Falco is deployed in Kubernetes.
+
+The chart deploys Falco using a `daemonset` or a `deployment` depending on the **event sources**.
+
+#### Daemonset
+When using the [drivers](#about-the-driver), Falco is deployed as a `daemonset`. By using a `daemonset`, k8s ensures that a Falco instance will be running on each of our nodes, even when we add new nodes to our cluster. So it is the perfect match when we need to monitor all the nodes in our cluster.
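On chart versions that support it (see the `v4.8.3` changelog entry earlier in this patch), the daemonset can even let each node pick the most suitable driver on its own via `driver.kind=auto`. A sketch, with the release name and namespace as assumptions:

```shell
# Write a values override that enables per-node automatic driver
# selection (driver.kind=auto, per the chart changelog).
cat > /tmp/auto-driver-values.yaml << 'EOF'
driver:
  kind: auto
EOF
# helm install falco falcosecurity/falco --namespace falco \
#   --create-namespace -f /tmp/auto-driver-values.yaml
grep 'kind:' /tmp/auto-driver-values.yaml
```

With this setting, each Falco instance selects kmod, ebpf, or modern_ebpf based on the node it lands on.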
+
+**Kernel module**
+To run Falco with the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) you can use the default values of the helm chart:
+
+```bash
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco
+```
+
+**eBPF probe**
+
+To run Falco with the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) you just need to set `driver.kind=ebpf` as shown in the following snippet:
+
+```bash
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco \
+    --set driver.kind=ebpf
+```
+
+There are other configurations related to the eBPF probe; for more info please check the [values.yaml](./values.yaml) file. After you have made your changes to the configuration file, you just need to run:
+
+```bash
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace "your-custom-name-space" \
+    -f "path-to-custom-values.yaml-file"
+```
+
+**Modern eBPF probe**
+
+To run Falco with the [modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe-experimental) you just need to set `driver.kind=modern_ebpf` as shown in the following snippet:
+
+```bash
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco \
+    --set driver.kind=modern_ebpf
+```
+
+#### Deployment
+In the scenario where Falco is used with **plugins** as data sources, the best option is to deploy it as a k8s `deployment`. **Plugins** can be of two types: those that follow the **push model** and those that follow the **pull model**. A plugin that adopts the push model expects to receive the data from a remote source at a given endpoint: it just exposes an endpoint and waits for data to be posted. For example, [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured in such a way.
On the other hand, plugins that abide by the **pull model** retrieve the data from a given remote service.
+The following points explain why a k8s `deployment` is suitable when deploying Falco with plugins:
+
+* need to be reachable when ingesting logs directly from remote services;
+* need only one active replica, otherwise events will be sent/received to/from different Falco instances;
+
+
+## Uninstalling the Chart
+
+To uninstall a Falco release from your Kubernetes cluster, always use helm. It will take care of removing all components deployed by the chart and cleaning up your environment. The following command will remove a release called `falco` in namespace `falco`:
+
+```bash
+helm uninstall falco --namespace falco
+```
+
+## Showing logs generated by Falco container
+There are many reasons why we would have to inspect the messages emitted by the Falco container. When deployed in Kubernetes, the Falco logs can be inspected through:
+```bash
+kubectl logs -n falco falco-pod-name
+```
+where `falco-pod-name` is the name of the Falco pod running in your cluster.
+The command described above will just display the logs emitted by Falco up to the moment you run the command. The `-f` flag comes in handy when we are doing live testing or debugging and we want to have the Falco logs as soon as they are emitted:
+```bash
+kubectl logs -f -n falco falco-pod-name
+```
+The `-f (--follow)` flag follows the logs and live-streams them to your terminal. It is really useful when you are debugging a new rule and want to make sure that the rule is triggered when some actions are performed in the system.
+
+If we need to access the logs of a previous Falco run, we do that by adding the `-p (--previous)` flag:
+```bash
+kubectl logs -p -n falco falco-pod-name
+```
+A scenario where we need the `-p (--previous)` flag is when a Falco pod has restarted and we want to check what went wrong. 
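+Falco's own logging can also be tuned from the chart, which makes the messages above easier to read or to filter. A minimal sketch of the relevant [values.yaml](./values.yaml) keys (key names follow the upstream `falco.yaml` configuration; defaults may vary between chart versions):
+
+```yaml
+falco:
+  log_level: info    # verbosity of Falco's own diagnostic messages
+  json_output: true  # emit alerts as JSON instead of plain text
+```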
+
+### Enabling real time logs
+By default, Falco's output is buffered. When live streaming logs, we will notice delays between the log output (rules triggering) and the event happening.
+In order to enable the logs to be emitted without delays, you need to set `.Values.tty=true` in the [values.yaml](./values.yaml) file.
+
+## K8s-metacollector
+Starting from Falco `0.37`, the old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) has been removed.
+A new component named [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) replaces it.
+The *k8s-metacollector* is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata
+from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.
+
+Kubernetes resources for which metadata will be collected and sent to Falco:
+* pods;
+* namespaces;
+* deployments;
+* replicationcontrollers;
+* replicasets;
+* services;
+
+### Plugin
+Since the *k8s-metacollector* is standalone, deployed in the cluster as a deployment, Falco instances need to connect to the component
+in order to retrieve the `metadata`. This is where the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin comes in.
+The plugin gathers details about Kubernetes resources from the *k8s-metacollector*. It then stores this information
+in tables and provides access to Falco upon request. The plugin specifically acquires data for the node where the
+associated Falco instance is deployed, resulting in node-level granularity.
+
+### Exported Fields: Old and New
+The old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) used to populate the
+[k8s](https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s) fields. The **k8s** field class is still
+available in Falco, for compatibility reasons, but most of the fields will return `N/A`. 
The following fields are still
+usable and will return meaningful data when the `container runtime collectors` are enabled:
+* k8s.pod.name;
+* k8s.pod.id;
+* k8s.pod.label;
+* k8s.pod.labels;
+* k8s.pod.ip;
+* k8s.pod.cni.json;
+* k8s.pod.namespace.name;
+
+The [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin exports a whole new
+[field class](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta#supported-fields). Note that the new
+`k8smeta.*` fields are usable only when the **k8smeta** plugin is loaded in Falco.
+
+### Enabling the k8s-metacollector
+The following command will deploy Falco + k8s-metacollector + k8smeta:
+```bash
+helm install falco falcosecurity/falco \
+    --namespace falco \
+    --create-namespace \
+    --set collectors.kubernetes.enabled=true
+```
+
+## Loading custom rules
+
+Falco ships with a nice default ruleset. It is a good starting point, but sooner or later we are going to need to add custom rules which fit our needs.
+
+So the question is: How can we load custom rules in our Falco deployment?
+
+We are going to create a file that contains custom rules so that we can keep it in a Git repository. 
+
+```bash
+cat custom-rules.yaml
+```
+
+And the file looks like this one:
+
+```yaml
+customRules:
+  rules-traefik.yaml: |-
+    - macro: traefik_consider_syscalls
+      condition: (evt.num < 0)
+
+    - macro: app_traefik
+      condition: container and container.image startswith "traefik"
+
+    # Restricting listening ports to selected set
+
+    - list: traefik_allowed_inbound_ports_tcp
+      items: [443, 80, 8080]
+
+    - rule: Unexpected inbound tcp connection traefik
+      desc: Detect inbound traffic to traefik using tcp on a port outside of expected set
+      condition: inbound and evt.rawres >= 0 and not fd.sport in (traefik_allowed_inbound_ports_tcp) and app_traefik
+      output: Inbound network connection to traefik on unexpected port (command=%proc.cmdline pid=%proc.pid connection=%fd.name sport=%fd.sport user=%user.name %container.info image=%container.image)
+      priority: NOTICE
+
+    # Restricting spawned processes to selected set
+
+    - list: traefik_allowed_processes
+      items: ["traefik"]
+
+    - rule: Unexpected spawned process traefik
+      desc: Detect a process started in a traefik container outside of an expected set
+      condition: spawned_process and not proc.name in (traefik_allowed_processes) and app_traefik
+      output: Unexpected process spawned in traefik container (command=%proc.cmdline pid=%proc.pid user=%user.name %container.info image=%container.image)
+      priority: NOTICE
+```
+
+So the next step is to use the custom-rules.yaml file for installing the Falco Helm chart.
+
+```bash
+helm install falco -f custom-rules.yaml falcosecurity/falco
+```
+
+And we will see in our logs something like:
+
+```bash
+Tue Jun 5 15:08:57 2018: Loading rules from file /etc/falco/rules.d/rules-traefik.yaml:
+```
+
+And this means that our Falco installation has loaded the rules and is ready to help us.
+
+## Kubernetes Audit Log
+
+The Kubernetes Audit Log is now supported via the built-in [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin. 
It is entirely up to you to set up the [webhook backend](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#webhook-backend) of the Kubernetes API server to forward the Audit Log event to the Falco listening port. + +The following snippet shows how to deploy Falco with the [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin: +```yaml +# -- Disable the drivers since we want to deploy only the k8saudit plugin. +driver: + enabled: false + +# -- Disable the collectors, no syscall events to enrich with metadata. +collectors: + enabled: false + +# -- Deploy Falco as a deployment. One instance of Falco is enough. Anyway the number of replicas is configurable. +controller: + kind: deployment + deployment: + # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing. + # For more info check the section on Plugins in the README.md file. + replicas: 1 + + +falcoctl: + artifact: + install: + # -- Enable the init container. We do not recommend installing (or following) plugins for security reasons since they are executable objects. + enabled: true + follow: + # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feed such as k8saudit-rules rules. + enabled: true + config: + artifact: + install: + # -- Resolve the dependencies for artifacts. + resolveDeps: true + # -- List of artifacts to be installed by the falcoctl init container. + # Only rulesfile, the plugin will be installed as a dependency. + refs: [k8saudit-rules:0.5] + follow: + # -- List of artifacts to be followed by the falcoctl sidecar container. 
+        refs: [k8saudit-rules:0.5]
+
+services:
+  - name: k8saudit-webhook
+    type: NodePort
+    ports:
+      - port: 9765 # See plugin open_params
+        nodePort: 30007
+        protocol: TCP
+
+falco:
+  rules_file:
+    - /etc/falco/k8s_audit_rules.yaml
+    - /etc/falco/rules.d
+  plugins:
+    - name: k8saudit
+      library_path: libk8saudit.so
+      init_config:
+        ""
+      # maxEventBytes: 1048576
+      # sslCertificate: /etc/falco/falco.pem
+      open_params: "http://:9765/k8s-audit"
+    - name: json
+      library_path: libjson.so
+      init_config: ""
+  # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
+  load_plugins: [k8saudit, json]
+
+```
+Here is the explanation of the above configuration:
+* disable the drivers by setting `driver.enabled=false`;
+* disable the collectors by setting `collectors.enabled=false`;
+* deploy Falco using a k8s *deployment* by setting `controller.kind=deployment`;
+* make our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`;
+* enable the `falcoctl-artifact-install` init container;
+* configure `falcoctl-artifact-install` to install the required plugins;
+* enable the `falcoctl-artifact-follow` sidecar container to keep the `k8saudit-rules` rulesfile up to date;
+* load the correct ruleset for our plugin in `falco.rules_file`;
+* configure the plugins to be loaded, in this case `k8saudit` and `json`;
+* and finally, add our plugins to `load_plugins` so that Falco loads them.
+
+The configuration can be found in the [values-k8saudit.yaml](./values-k8saudit.yaml) file, ready to be used:
+
+
+```bash
+#make sure the falco namespace exists
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco \
+    -f ./values-k8saudit.yaml
+```
+After a few minutes a Falco instance should be running on your cluster. 
The status of Falco pod can be inspected through *kubectl*: +```bash +kubectl get pods -n falco -o wide +``` +If everything went smoothly, you should observe an output similar to the following, indicating that the Falco instance is up and running: + +```bash +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +falco-64484d9579-qckms 1/1 Running 0 101s 10.244.2.2 worker-node-2 +``` + +Furthermore you can check that Falco logs through *kubectl logs* + +```bash +kubectl logs -n falco falco-64484d9579-qckms +``` +In the logs you should have something similar to the following, indicating that Falco has loaded the required plugins: +```bash +Fri Jul 8 16:07:24 2022: Falco version 0.32.0 (driver version 39ae7d40496793cf3d3e7890c9bbdc202263836b) +Fri Jul 8 16:07:24 2022: Falco initialized with configuration file /etc/falco/falco.yaml +Fri Jul 8 16:07:24 2022: Loading plugin (k8saudit) from file /usr/share/falco/plugins/libk8saudit.so +Fri Jul 8 16:07:24 2022: Loading plugin (json) from file /usr/share/falco/plugins/libjson.so +Fri Jul 8 16:07:24 2022: Loading rules from file /etc/falco/k8s_audit_rules.yaml: +Fri Jul 8 16:07:24 2022: Starting internal webserver, listening on port 8765 +``` +*Note that the support for the dynamic backend (also known as the `AuditSink` object) has been deprecated from Kubernetes and removed from this chart.* + +### Manual setup with NodePort on kOps + +Using `kops edit cluster`, ensure these options are present, then run `kops update cluster` and `kops rolling-update cluster`: +```yaml +spec: + kubeAPIServer: + auditLogMaxBackups: 1 + auditLogMaxSize: 10 + auditLogPath: /var/log/k8s-audit.log + auditPolicyFile: /srv/kubernetes/assets/audit-policy.yaml + auditWebhookBatchMaxWait: 5s + auditWebhookConfigFile: /srv/kubernetes/assets/webhook-config.yaml + fileAssets: + - content: | + # content of the webserver CA certificate + # remove this fileAsset and certificate-authority from webhook-config if using http + name: 
audit-ca.pem + roles: + - Master + - content: | + apiVersion: v1 + kind: Config + clusters: + - name: falco + cluster: + # remove 'certificate-authority' when using 'http' + certificate-authority: /srv/kubernetes/assets/audit-ca.pem + server: https://localhost:32765/k8s-audit + contexts: + - context: + cluster: falco + user: "" + name: default-context + current-context: default-context + preferences: {} + users: [] + name: webhook-config.yaml + roles: + - Master + - content: | + # ... paste audit-policy.yaml here ... + # https://raw.githubusercontent.com/falcosecurity/plugins/master/plugins/k8saudit/configs/audit-policy.yaml + name: audit-policy.yaml + roles: + - Master +``` +## Enabling gRPC + +The Falco gRPC server and the Falco gRPC Outputs APIs are not enabled by default. +Moreover, Falco supports running a gRPC server with two main binding types: +- Over a local **Unix socket** with no authentication +- Over the **network** with mandatory mutual TLS authentication (mTLS) + +> **Tip**: Once gRPC is enabled, you can deploy [falco-exporter](https://github.com/falcosecurity/falco-exporter) to export metrics to Prometheus. + +### gRPC over unix socket (default) + +The preferred way to use the gRPC is over a Unix socket. + +To install Falco with gRPC enabled over a **unix socket**, you have to: + +```shell +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set falco.grpc.enabled=true \ + --set falco.grpc_output.enabled=true +``` + +### gRPC over network + +The gRPC server over the network can only be used with mutual authentication between the clients and the server using TLS certificates. +How to generate the certificates is [documented here](https://falco.org/docs/grpc/#generate-valid-ca). 
+
+To install Falco with gRPC enabled over the **network**, you have to:
+
+```shell
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco \
+    --set falco.grpc.enabled=true \
+    --set falco.grpc_output.enabled=true \
+    --set falco.grpc.unixSocketPath="" \
+    --set-file certs.server.key=/path/to/server.key \
+    --set-file certs.server.crt=/path/to/server.crt \
+    --set-file certs.ca.crt=/path/to/ca.crt
+```
+
+## Enable http_output
+
+HTTP output enables Falco to send events through HTTP(S) via the following configuration:
+
+```shell
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco \
+    --set falco.http_output.enabled=true \
+    --set falco.http_output.url="http://some.url/some/path/" \
+    --set falco.json_output=true \
+    --set falco.json_include_output_property=true
+```
+
+Additionally, you can enable mTLS communication and load HTTP client cryptographic material via:
+
+```shell
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco \
+    --set falco.http_output.enabled=true \
+    --set falco.http_output.url="https://some.url/some/path/" \
+    --set falco.json_output=true \
+    --set falco.json_include_output_property=true \
+    --set falco.http_output.mtls=true \
+    --set falco.http_output.client_cert="/etc/falco/certs/client/client.crt" \
+    --set falco.http_output.client_key="/etc/falco/certs/client/client.key" \
+    --set falco.http_output.ca_cert="/etc/falco/certs/client/ca.crt" \
+    --set-file certs.client.key="/path/to/client.key",certs.client.crt="/path/to/client.crt",certs.ca.crt="/path/to/cacert.crt"
+```
+
+Or, instead of directly setting the files via `--set-file`, you can mount an existing volume with the `certs.existingClientSecret` value.
+
+## Deploy Falcosidekick with Falco
+
+[`Falcosidekick`](https://github.com/falcosecurity/falcosidekick) can be installed with `Falco` by setting `--set falcosidekick.enabled=true`. 
This setting automatically configures all options of `Falco` for working with `Falcosidekick`.
+All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick#configuration).
+For example, to enable the deployment of [`Falcosidekick-UI`](https://github.com/falcosecurity/falcosidekick-ui), add `--set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true`.
+
+If you use a Proxy in your cluster, the requests between `Falco` and `Falcosidekick` might be captured; to avoid that, use the full FQDN of `Falcosidekick` by setting `--set falcosidekick.fullfqdn=true`.
+
+## Configuration
+
+The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. See [values.yaml](./values.yaml) for the full list.
+
 {{ template "chart.valuesSection" . }}
diff --git a/charts/falco/falco/charts/falco/README.md b/charts/falco/falco/charts/falco/README.md
index 0859ee03d..b6be5baef 100644
--- a/charts/falco/falco/charts/falco/README.md
+++ b/charts/falco/falco/charts/falco/README.md
@@ -4,7 +4,11 @@
 
 ## Introduction
 
-The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. Based on the configuration in `values.yaml` file, the chart will render and install the required k8s objects. Keep in mind that Falco could be deployed in your cluster using a `daemonset` or a `deployment`. See next sections for more info.
+The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. 
Based on the configuration in [values.yaml](./values.yaml) file, the chart will render and install the required k8s objects. Keep in mind that Falco could be deployed in your cluster using a `daemonset` or a `deployment`. See next sections for more info. + +## Attention + +Before installing Falco in a Kubernetes cluster, a user should check that the kernel version used in the nodes is supported by the community. Also, before reporting any issue with Falco (missing kernel image, CrashLoopBackOff and similar), make sure to read [about the driver](#about-the-driver) section and adjust your setup as required. ## Adding `falcosecurity` repository @@ -20,7 +24,9 @@ helm repo update To install the chart with the release name `falco` in namespace `falco` run: ```bash -helm install falco falcosecurity/falco --namespace falco --create-namespace +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco ``` After a few minutes Falco instances should be running on all your nodes. The status of Falco pods can be inspected through *kubectl*: @@ -35,62 +41,160 @@ falco-57w7q 1/1 Running 0 3m12s 10.244.0.1 control-plane falco-h4596 1/1 Running 0 3m12s 10.244.1.2 worker-node-1 falco-kb55h 1/1 Running 0 3m12s 10.244.2.3 worker-node-2 ``` -The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in `values.yaml` of our helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod in each node. -> **Tip**: List Falco release using `helm list -n falco`, a release is a name used to track a specific deployment +The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in [values.yaml](./values.yaml) of our helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod in each node. 
+> **Tip**: List Falco releases using `helm list -n falco`; a release is a name used to track a specific deployment.
 
 ### Falco, Event Sources and Kubernetes
 
-Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel** trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in kubernetes clusters. Since Falco 0.32.0 all the related code to the k8s Audit Logs in Falco was removed and ported in a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). At the time being Falco supports different event sources coming from **plugins** or the **drivers** (system events).
+Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel** trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in Kubernetes clusters. Since Falco 0.32.0 all the code related to the k8s Audit Logs was removed from Falco and ported into a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). At the time being, Falco supports different event sources coming from **plugins** or **drivers** (system events).
+
+Note that **a Falco instance can handle multiple event sources in parallel**. You can deploy Falco leveraging **drivers** for syscall events while at the same time loading **plugins**. A step-by-step guide on how to deploy Falco with multiple sources can be found [here](https://falco.org/docs/getting-started/third-party/learning/#falco-with-multiple-sources). 
+
+#### About Drivers
+
+Falco needs a **driver** to analyze the system workload and pass security events to userspace. The supported drivers are:
+
+* [Kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module)
+* [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe)
+* [Modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe)
+
+The driver should be installed on the node where Falco is running. The _kernel module_ (default option) and the _eBPF probe_ are installed on the node through an *init container* (i.e. `falco-driver-loader`) that tries to download a prebuilt driver or build it on-the-fly as a fallback. The _Modern eBPF probe_ doesn't require an init container because it is shipped directly into the Falco binary. However, the _Modern eBPF probe_ requires [recent BPF features](https://falco.org/docs/event-sources/kernel/#modern-ebpf-probe).
+
+##### Pre-built drivers
 
-Note that **multiple event sources can not be handled in the same Falco instance**. It means, you can not have Falco deployed leveraging **drivers** for syscalls events and at the same time loading **plugins**. Here you can find the [tracking issue](https://github.com/falcosecurity/falco/issues/2074) about multiple **event sources** in the same Falco instance.
-If you need to handle **syscalls** and **plugins** events than consider deploying different Falco instances, one for each use case.
-#### About the Driver
 
+The [kernel-crawler](https://github.com/falcosecurity/kernel-crawler) automatically discovers kernel versions and flavors. At the time being, it runs weekly. We have a site where users can check for the discovered kernel flavors and versions, [example for Amazon Linux 2](https://falcosecurity.github.io/kernel-crawler/?arch=x86_64&target=AmazonLinux2). 
-Falco needs a **driver** (the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) or the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe)) that taps into the stream of system calls and passes that system calls to Falco. The driver must be installed on the node where Falco is running.
+
+The discovery of a kernel version by the [kernel-crawler](https://falcosecurity.github.io/kernel-crawler/) does not imply that pre-built kernel modules and bpf probes are available. That is because once kernel-crawler has discovered new kernel versions, the drivers need to be built by jobs running on our [Driver Build Grid infra](https://github.com/falcosecurity/test-infra#dbg). Please keep in mind that the building process is best-effort. Users can check the existence of prebuilt modules at the following [link](https://download.falco.org/driver/site/index.html?lib=3.0.1%2Bdriver&target=all&arch=all&kind=all).
 
-By default the drivers are managed using an *init container* which includes a script (`falco-driver-loader`) that either tries to build the driver on-the-fly or downloads a prebuilt driver as a fallback. Usually, no action is required.
+##### Building the driver on the fly (fallback)
 
-If a prebuilt driver is not available for your distribution/kernel, Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation.
+If a prebuilt driver is not available for your distribution/kernel, users can build the driver by themselves or install the kernel headers on the nodes, and the init container (falco-driver-loader) will try to build the driver on the fly.
-### About Plugins
+Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. 
You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation.
+
+##### Selecting a different driver loader image
+
+Note that since Falco 0.36.0 and Helm chart version 3.7.0 the driver loader image has been updated to be compatible with newer kernels (5.x and above), meaning that if you have an older kernel version and you are trying to build the kernel module you may experience issues. In that case you can use the `falco-driver-loader-legacy` image to use the previous version of the toolchain. To do so you can set the appropriate value, i.e. `--set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy`.
+
+#### About Plugins
 
 [Plugins](https://falco.org/docs/plugins/) are used to extend Falco to support new **data sources**. The current **plugin framework** supports *plugins* with the following *capabilities*:
 
 * Event sourcing capability;
 * Field extraction capability;
 
-Plugin capabilities are *composable*, we can have a single plugin with both the capabilities. Or on the other hand we can load two different plugins each with its capability, one plugin as a source of events and another as an extractor. A good example of this are the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) and the [Falcosecurity Json](https://github.com/falcosecurity/plugins/tree/master/plugins/json) *plugins*. By deploying them both we have support for the **K8s Audit Logs** in Falco
+Plugin capabilities are *composable*: we can have a single plugin with both capabilities, or, on the other hand, we can load two different plugins each with its own capability, one plugin as a source of events and another as an extractor. 
A good example of this is the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) and the [Falcosecurity Json](https://github.com/falcosecurity/plugins/tree/master/plugins/json) *plugins*. By deploying them both we have support for the **K8s Audit Logs** in Falco.
+
+Note that **the driver is not required when using plugins**.
+
+#### About gVisor
+gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. For more information please consult the [official docs](https://gvisor.dev/docs/). In version `0.32.1`, Falco first introduced support for gVisor by leveraging the stream of system call information coming from gVisor.
+Falco requires the version of [runsc](https://gvisor.dev/docs/user_guide/install/) to be equal to or above `20220704.0`. The following snippet shows the gVisor configuration variables found in [values.yaml](./values.yaml):
+```yaml
+driver:
+  gvisor:
+    enabled: true
+    runsc:
+      path: /home/containerd/usr/local/sbin
+      root: /run/containerd/runsc
+      config: /run/containerd/runsc/config.toml
+```
+Falco uses the [runsc](https://gvisor.dev/docs/user_guide/install/) binary to interact with sandboxed containers. The following variables need to be set:
+* `runsc.path`: absolute path of the `runsc` binary in the k8s nodes;
+* `runsc.root`: absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores there the information of the workloads handled by it;
+* `runsc.config`: absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence.
-Note that **the driver is not required when using plugins**. When *plugins* are enabled Falco is deployed without the *init container*. 
+If you want to know more about how Falco uses those configuration paths, please have a look at the `falco.gvisor.initContainer` helper in [helpers.tpl](./templates/_helpers.tpl).
+A preset `values.yaml` file [values-gvisor-gke.yaml](./values-gvisor-gke.yaml) is provided and can be used as is to deploy Falco with gVisor support in a [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods) cluster. It is also a good starting point for custom deployments.
+
+##### Example: running Falco on GKE, with or without gVisor-enabled pods
+
+If you use GKE with k8s version at least `1.24.4-gke.1800` or `1.25.0-gke.200` with gVisor sandboxed pods, you can install a Falco instance to monitor them with, e.g.:
+
+```
+helm install falco-gvisor falcosecurity/falco \
+    --create-namespace \
+    --namespace falco-gvisor \
+    -f https://raw.githubusercontent.com/falcosecurity/charts/master/charts/falco/values-gvisor-gke.yaml
+```
+
+Note that the instance of Falco above will only monitor gVisor sandboxed workloads on gVisor-enabled node pools. If you also need to monitor regular workloads on regular node pools you can use the eBPF driver as usual:
+
+```
+helm install falco falcosecurity/falco \
+    --create-namespace \
+    --namespace falco \
+    --set driver.kind=ebpf
+```
+
+The two instances of Falco will operate independently and can be installed, uninstalled or configured as needed. If you were already monitoring your regular node pools with eBPF you don't need to reinstall it.
+
+##### Falco+gVisor additional resources
+An exhaustive blog post about Falco and gVisor can be found on the [Falco blog](https://falco.org/blog/intro-gvisor-falco/).
+If you need help on how to set up gVisor in your environment, please have a look at the [gVisor official docs](https://gvisor.dev/docs/user_guide/quick_start/kubernetes/).
+
+### About Falco Artifacts
+Historically **rules files** and **plugins** used to be shipped inside the Falco docker image and/or inside the chart. 
Starting from version `v0.3.0` of the chart, the [**falcoctl tool**](https://github.com/falcosecurity/falcoctl) can be used to install/update **rules files** and **plugins**. When referring to such objects we will use the term **artifact**. For more info please check out the following [proposal](https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md).
+
+The default configuration of the chart for new installations is to use the **falcoctl** tool to handle **artifacts**. The chart will deploy two new containers alongside the Falco one:
+* `falcoctl-artifact-install`, an init container that makes sure to install the configured **artifacts** before the Falco container starts;
+* `falcoctl-artifact-follow`, a sidecar container that periodically checks for new artifacts (currently only *falco-rules*) and downloads them.
+
+For more info on how to enable/disable and configure the **falcoctl** tool, check out the config values [here](./README.md#Configuration) and the [upgrading notes](./BREAKING-CHANGES.md#300).
 
 ### Deploying Falco in Kubernetes
-After the clarification of the different **event sources** and how they are consumed by Falco using the **drivers** and the **plugins**, now lets discuss about how Falco is deployed in Kubernetes.
+After the clarification of the different [**event sources**](#falco-event-sources-and-kubernetes) and how they are consumed by Falco through the **drivers** and the **plugins**, let us now discuss how Falco is deployed in Kubernetes.
 
 The chart deploys Falco using a `daemonset` or a `deployment` depending on the **event sources**.
 
 #### Daemonset
-When using the [drivers](#about-the-driver), Falco is deployed as `daemonset`. By using a `daemonset`, k8s assures that a Falco instance will be running in each of our nodes even when we add new nodes to our cluster. So it is the perfect match when we need to monitor all the nodes in our cluster. 
-Using the default values of the helm chart we get Falco deployed with the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module). +When using the [drivers](#about-the-driver), Falco is deployed as `daemonset`. By using a `daemonset`, k8s assures that a Falco instance will be running in each of our nodes even when we add new nodes to our cluster. So it is the perfect match when we need to monitor all the nodes in our cluster. -If the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) is desired, we just need to set `driver.kind=ebpf` as as show in the following snippet: +**Kernel module** +To run Falco with the [kernel module](https://falco.org/docs/event-sources/drivers/#kernel-module) you can use the default values of the helm chart: -```yaml -driver: - enabled: true - kind: ebpf +```bash +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco +``` + +**eBPF probe** + +To run Falco with the [eBPF probe](https://falco.org/docs/event-sources/drivers/#ebpf-probe) you just need to set `driver.kind=ebpf` as shown in the following snippet: + +```bash +helm install falco falcosecurity/falco \ + --create-namespace \ + --namespace falco \ + --set driver.kind=ebpf ``` -There are other configurations related to the eBPF probe, for more info please check the `values.yaml` file. After you have made your changes to the configuration file you just need to run: + +There are other configurations related to the eBPF probe, for more info please check the [values.yaml](./values.yaml) file. 
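As a reference, here is a minimal sketch of those eBPF-related options in a custom values file. The key names and defaults are taken from the chart's `driver.ebpf` entries in the values table further down; adjust them to your environment:

```yaml
driver:
  enabled: true
  kind: ebpf
  ebpf:
    # Size preset of the buffer shared between Falco and the driver (chart default: 4).
    bufSizePreset: 4
    # Set to true only if eBPF JIT must be enabled from inside the container.
    hostNetwork: false
    # Constrain Falco with capabilities instead of a fully privileged container.
    leastPrivileged: false
```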
After you have made your changes to the configuration file you just need to run:
 ```bash
-helm install falco falcosecurity/falco --namespace "your-custom-name-space" --create-namespace
+helm install falco falcosecurity/falco \
+  --create-namespace \
+  --namespace "your-custom-name-space" \
+  -f "path-to-custom-values.yaml-file"
+```
+
+**modern eBPF probe**
+
+To run Falco with the [modern eBPF probe](https://falco.org/docs/event-sources/drivers/#modern-ebpf-probe-experimental) you just need to set `driver.kind=modern_ebpf` as shown in the following snippet:
+
+```bash
+helm install falco falcosecurity/falco \
+  --create-namespace \
+  --namespace falco \
+  --set driver.kind=modern_ebpf
 ```
 #### Deployment
-In the scenario when Falco is used with **plugins** as data sources, then the best option is to deploy it as a k8s `deployment`. **Plugins** could be of two types, the ones that follow the **push model** or the **pull model**. A plugin that adopts the firs model expects to receive the data from a remote source in a given endpoint. They just expose and endpoint and wait for data to be posted, for example [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured in such way. On the other hand other plugins that abide by the **pull model** retrieves the data from a given remote service.
+When Falco is used with **plugins** as data sources, the best option is to deploy it as a k8s `deployment`. **Plugins** can be of two types: the ones that follow the **push model** and the ones that follow the **pull model**. A plugin that adopts the first model expects to receive the data from a remote source at a given endpoint. It just exposes an endpoint and waits for data to be posted there; for example, [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured in such a way.
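For a push-model plugin, the chart can expose the receiving endpoint through the `services` value; a minimal sketch is shown here. The `k8saudit-webhook` service is configured later in this document, while the port number below is only a placeholder for whatever port your webhook source is configured to POST to:

```yaml
services:
  - name: k8saudit-webhook
    ports:
      # Placeholder port: use the port the k8s api-server audit webhook POSTs to.
      - port: 9765
        protocol: TCP
```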
On the other hand, plugins that abide by the **pull model** retrieve the data from a given remote service.
 The following points explain why a k8s `deployment` is suitable when deploying Falco with plugins:
 * need to be reachable when ingesting logs directly from remote services;
 * need only one active replica, otherwise events will be sent/received to/from different Falco instances;
-
 ## Uninstalling the Chart
 To uninstall a Falco release from your Kubernetes cluster always use helm. It will take care of removing all components deployed by the chart and cleaning up your environment. The following command will remove a release called `falco` in namespace `falco`:
@@ -104,7 +208,7 @@ There are many reasons why we would have to inspect the messages emitted by the
 ```bash
 kubectl logs -n falco falco-pod-name
 ```
-where `falco-pods-name` is the name of the Falco pod running in your cluster.
+where `falco-pod-name` is the name of the Falco pod running in your cluster. The command described above will just display the logs emitted by falco until the moment you run the command. The `-f` flag comes in handy when we are doing live testing or debugging and we want to have the Falco logs as soon as they are emitted. The following command:
 ```bash
 kubectl logs -f -n falco falco-pod-name
@@ -118,8 +222,56 @@ kubectl logs -p -n falco falco-pod-name
 A scenario when we need the `-p (--previous)` flag is when we have a restart of a Falco pod and want to check what went wrong.
 ### Enabling real time logs
-By default in Falco the output is buffered. When live streaming logs we will notice delays between the logs output (rules triggering) and the event happening.
-In order to enable the logs to be emitted without delays you need to set `.Values.tty=true` in `values.yaml` file.
+By default in Falco the output is buffered. When live streaming logs we will notice delays between the logs output (rules triggering) and the event happening.
In order to enable the logs to be emitted without delays you need to set `.Values.tty=true` in the [values.yaml](./values.yaml) file.
+
+## K8s-metacollector
+Starting from Falco `0.37` the old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) has been removed.
+A new component named [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) replaces it.
+The *k8s-metacollector* is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata
+from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.
+
+Kubernetes resources for which metadata will be collected and sent to Falco:
+* pods;
+* namespaces;
+* deployments;
+* replicationcontrollers;
+* replicasets;
+* services;
+
+### Plugin
+Since the *k8s-metacollector* is standalone, deployed in the cluster as a deployment, Falco instances need to connect to the component
+in order to retrieve the `metadata`. This is where the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin comes in.
+The plugin gathers details about Kubernetes resources from the *k8s-metacollector*. It then stores this information
+in tables and provides access to Falco upon request. The plugin specifically acquires data for the node where the
+associated Falco instance is deployed, resulting in node-level granularity.
+
+### Exported Fields: Old and New
+The old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) used to populate the
+[k8s](https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s) fields. The **k8s** field class is still
+available in Falco, for compatibility reasons, but most of the fields will return `N/A`.
The following fields are still
+usable and will return meaningful data when the `container runtime collectors` are enabled:
+* k8s.pod.name;
+* k8s.pod.id;
+* k8s.pod.label;
+* k8s.pod.labels;
+* k8s.pod.ip;
+* k8s.pod.cni.json;
+* k8s.pod.namespace.name;
+
+The [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin exports a whole new
+[field class](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta#supported-fields). Note that the new
+`k8smeta.*` fields are usable only when the **k8smeta** plugin is loaded in Falco.
+
+### Enabling the k8s-metacollector
+The following command will deploy Falco + k8s-metacollector + k8smeta:
+```bash
+helm install falco falcosecurity/falco \
+  --namespace falco \
+  --create-namespace \
+  --set collectors.kubernetes.enabled=true
+```
+
 ## Loading custom rules
 Falco ships with a nice default ruleset. It is a good starting point but sooner or later, we are going to need to add custom rules that fit our needs.
@@ -186,14 +338,41 @@ The Kubernetes Audit Log is now supported via the built-in [k8saudit](https://gi
 The following snippet shows how to deploy Falco with the [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin:
 ```yaml
+# -- Disable the drivers since we want to deploy only the k8saudit plugin.
 driver:
   enabled: false
+# -- Disable the collectors, no syscall events to enrich with metadata.
 collectors:
   enabled: false
+# -- Deploy Falco as a deployment. One instance of Falco is enough. In any case, the number of replicas is configurable.
 controller:
   kind: deployment
+  deployment:
+    # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
+    # For more info check the section on Plugins in the README.md file.
+    replicas: 1
+
+falcoctl:
+  artifact:
+    install:
+      # -- Enable the init container.
We do not recommend installing (or following) plugins for security reasons since they are executable objects. + enabled: true + follow: + # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feed such as k8saudit-rules rules. + enabled: true + config: + artifact: + install: + # -- Resolve the dependencies for artifacts. + resolveDeps: true + # -- List of artifacts to be installed by the falcoctl init container. + # Only rulesfile, the plugin will be installed as a dependency. + refs: [k8saudit-rules:0.5] + follow: + # -- List of artifacts to be followed by the falcoctl sidecar container. + refs: [k8saudit-rules:0.5] services: - name: k8saudit-webhook @@ -204,7 +383,7 @@ services: protocol: TCP falco: - rulesFile: + rules_file: - /etc/falco/k8s_audit_rules.yaml - /etc/falco/rules.d plugins: @@ -218,23 +397,30 @@ falco: - name: json library_path: libjson.so init_config: "" + # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container. 
  load_plugins: [k8saudit, json]
+
 ```
-What the above configuration does is:
+Here is the explanation of the above configuration:
 * disable the drivers by setting `driver.enabled=false`;
 * disable the collectors by setting `collectors.enabled=false`;
-* deploy the Falco using a k8s *deploment* by setting `controller.kind=deployment`;
-* makes our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`;
+* deploy Falco using a k8s *deployment* by setting `controller.kind=deployment`;
+* make our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`;
+* enable the `falcoctl-artifact-install` init container;
+* configure `falcoctl-artifact-install` to install the required plugins;
+* disable the `falcoctl-artifact-follow` sidecar container;
 * load the correct ruleset for our plugin in `falco.rules_file`;
-* configure the plugins to be loaded, in this case the `k8saudit` and `json`;
+* configure the plugins to be loaded, in this case `k8saudit` and `json`;
 * and finally add our plugins to `load_plugins` so that they are loaded by Falco.
-The configuration can be found in the `values-k8saudit.yaml` file ready to be used:
-
+The configuration can be found in the [values-k8saudit.yaml](./values-k8saudit.yaml) file ready to be used:
 ```bash
 #make sure the falco namespace exists
-helm install falco falcosecurity/falco --namespace falco -f ./values-k8saudit.yaml --create-namespace
+helm install falco falcosecurity/falco \
+  --create-namespace \
+  --namespace falco \
+  -f ./values-k8saudit.yaml
 ```
 After a few minutes a Falco instance should be running on your cluster.
The status of Falco pod can be inspected through *kubectl*: ```bash @@ -252,7 +438,7 @@ Furthermore you can check that Falco logs through *kubectl logs* ```bash kubectl logs -n falco falco-64484d9579-qckms ``` -In the logs you should have something similar to the following, indcating that Falco has loaded the required plugins: +In the logs you should have something similar to the following, indicating that Falco has loaded the required plugins: ```bash Fri Jul 8 16:07:24 2022: Falco version 0.32.0 (driver version 39ae7d40496793cf3d3e7890c9bbdc202263836b) Fri Jul 8 16:07:24 2022: Falco initialized with configuration file /etc/falco/falco.yaml @@ -304,7 +490,7 @@ spec: - Master - content: | # ... paste audit-policy.yaml here ... - # https://raw.githubusercontent.com/falcosecurity/evolution/master/examples/k8s_audit_config/audit-policy.yaml + # https://raw.githubusercontent.com/falcosecurity/plugins/master/plugins/k8saudit/configs/audit-policy.yaml name: audit-policy.yaml roles: - Master @@ -325,10 +511,11 @@ The preferred way to use the gRPC is over a Unix socket. 
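In a custom values file, the same unix-socket setup can be sketched as follows. The keys and defaults are taken from the chart's `falco.grpc` and `falco.grpc_output` entries in the values table further down:

```yaml
falco:
  grpc:
    enabled: true
    # Chart default: a local unix socket.
    bind_address: "unix:///run/falco/falco.sock"
    # 0 lets Falco size the thread pool from the number of online cores.
    threadiness: 0
  grpc_output:
    enabled: true
```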
To install Falco with gRPC enabled over a **unix socket**, you have to:
 ```shell
-helm install falco \
-  --set falco.grpc.enabled=true \
-  --set falco.grpc_output.enabled=true \
-  falcosecurity/falco
+helm install falco falcosecurity/falco \
+  --create-namespace \
+  --namespace falco \
+  --set falco.grpc.enabled=true \
+  --set falco.grpc_output.enabled=true
 ```
 ### gRPC over network
@@ -339,24 +526,266 @@ How to generate the certificates is [documented here](https://falco.org/docs/grp
 To install Falco with gRPC enabled over the **network**, you have to:
 ```shell
-helm install falco \
-  --set falco.grpc.enabled=true \
-  --set falco.grpcOutput.enabled=true \
-  --set falco.grpc.unixSocketPath="" \
-  --set-file certs.server.key=/path/to/server.key \
-  --set-file certs.server.crt=/path/to/server.crt \
-  --set-file certs.ca.crt=/path/to/ca.crt \
-  falcosecurity/falco
+helm install falco falcosecurity/falco \
+  --create-namespace \
+  --namespace falco \
+  --set falco.grpc.enabled=true \
+  --set falco.grpc_output.enabled=true \
+  --set falco.grpc.unixSocketPath="" \
+  --set-file certs.server.key=/path/to/server.key \
+  --set-file certs.server.crt=/path/to/server.crt \
+  --set-file certs.ca.crt=/path/to/ca.crt
 ```
+
+## Enable http_output
+
+HTTP output enables Falco to send events through HTTP(S) via the following configuration:
+
+```shell
+helm install falco falcosecurity/falco \
+  --create-namespace \
+  --namespace falco \
+  --set falco.http_output.enabled=true \
+  --set falco.http_output.url="http://some.url/some/path/" \
+  --set falco.json_output=true \
+  --set falco.json_include_output_property=true
+```
+
+Additionally, you can enable mTLS communication and load HTTP client cryptographic material via:
+
+```shell
+helm install falco falcosecurity/falco \
+  --create-namespace \
+  --namespace falco \
+  --set falco.http_output.enabled=true \
+  --set falco.http_output.url="https://some.url/some/path/" \
+  --set falco.json_output=true \
+  --set 
falco.json_include_output_property=true \
+  --set falco.http_output.mtls=true \
+  --set falco.http_output.client_cert="/etc/falco/certs/client/client.crt" \
+  --set falco.http_output.client_key="/etc/falco/certs/client/client.key" \
+  --set falco.http_output.ca_cert="/etc/falco/certs/client/ca.crt" \
+  --set-file certs.client.key="/path/to/client.key",certs.client.crt="/path/to/client.crt",certs.ca.crt="/path/to/cacert.crt"
+```
+
+Or, instead of directly setting the files via `--set-file`, you can mount an existing secret by setting the `certs.existingClientSecret` value.
+
 ## Deploy Falcosidekick with Falco
 [`Falcosidekick`](https://github.com/falcosecurity/falcosidekick) can be installed with `Falco` by setting `--set falcosidekick.enabled=true`. This setting automatically configures all options of `Falco` for working with `Falcosidekick`.
-All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/falcosidekick#configuration).
+All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick#configuration).
 For example, to enable the deployment of [`Falcosidekick-UI`](https://github.com/falcosecurity/falcosidekick-ui), add `--set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true`.
 If you use a Proxy in your cluster, the requests between `Falco` and `Falcosidekick` might be captured; in that case, use the full FQDN of `Falcosidekick` by setting `--set falcosidekick.fullfqdn=true` to avoid issues.
 ## Configuration
-All the configurable parameters of the falco chart and their default values can be found [here](./generated/helm-values.md).
+The following table lists the main configurable parameters of the falco chart v4.8.3 and their default values.
See [values.yaml](./values.yaml) for full list. + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | Affinity constraint for pods' scheduling. | +| certs | object | `{"ca":{"crt":""},"client":{"crt":"","key":""},"existingClientSecret":"","existingSecret":"","server":{"crt":"","key":""}}` | certificates used by webserver and grpc server. paste certificate content or use helm with --set-file or use existing secret containing key, crt, ca as well as pem bundle | +| certs.ca.crt | string | `""` | CA certificate used by gRPC, webserver and AuditSink validation. | +| certs.client.crt | string | `""` | Certificate used by http mTLS client. | +| certs.client.key | string | `""` | Key used by http mTLS client. | +| certs.existingSecret | string | `""` | Existing secret containing the following key, crt and ca as well as the bundle pem. | +| certs.server.crt | string | `""` | Certificate used by gRPC and webserver. | +| certs.server.key | string | `""` | Key used by gRPC and webserver. | +| collectors.containerd.enabled | bool | `true` | Enable ContainerD support. | +| collectors.containerd.socket | string | `"/run/containerd/containerd.sock"` | The path of the ContainerD socket. | +| collectors.crio.enabled | bool | `true` | Enable CRI-O support. | +| collectors.crio.socket | string | `"/run/crio/crio.sock"` | The path of the CRI-O socket. | +| collectors.docker.enabled | bool | `true` | Enable Docker support. | +| collectors.docker.socket | string | `"/var/run/docker.sock"` | The path of the Docker daemon socket. | +| collectors.enabled | bool | `true` | Enable/disable all the metadata collectors. | +| collectors.kubernetes | object | `{"collectorHostname":"","collectorPort":"","enabled":false,"pluginRef":"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.0"}` | kubernetes holds the configuration for the kubernetes collector. 
Starting from version 0.37.0 of Falco, the legacy kubernetes client has been removed. A new standalone component named k8s-metacollector and a Falco plugin have been developed to solve the issues that were present in the old implementation. More info here: https://github.com/falcosecurity/falco/issues/2973 |
+| collectors.kubernetes.collectorHostname | string | `""` | collectorHostname is the address of the k8s-metacollector. When not specified it will be set to match the k8s-metacollector service, e.g.: falco-k8smetacollector.falco.svc. If for any reason you need to override it, make sure to set here the address of the k8s-metacollector. It is used by the k8smeta plugin to connect to the k8s-metacollector. |
+| collectors.kubernetes.collectorPort | string | `""` | collectorPort designates the port on which the k8s-metacollector gRPC service listens. If not specified the value of the port named `broker-grpc` in k8s-metacollector.service.ports is used. The default value is 45000. It is used by the k8smeta plugin to connect to the k8s-metacollector. |
+| collectors.kubernetes.enabled | bool | `false` | enabled specifies whether the Kubernetes metadata should be collected using the k8smeta plugin and the k8s-metacollector component. It will deploy the k8s-metacollector external component that fetches Kubernetes metadata and pushes them to Falco instances. For more info see: https://github.com/falcosecurity/k8s-metacollector https://github.com/falcosecurity/charts/tree/master/charts/k8s-metacollector When this option is disabled, Falco falls back to the container annotations to grab the metadata. In such a case, only the ID, name, namespace, labels of the pod will be available. |
+| collectors.kubernetes.pluginRef | string | `"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.0"` | pluginRef is the OCI reference for the k8smeta plugin. It could be a full reference such as: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0". Or just name + tag: k8smeta:0.1.0.
|
+| containerSecurityContext | object | `{}` | Set securityContext for the Falco container. For more info see the "falco.securityContext" helper in "pod-template.tpl" |
+| controller.annotations | object | `{}` |  |
+| controller.daemonset.updateStrategy.type | string | `"RollingUpdate"` | Perform rolling updates by default in the DaemonSet agent ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/ |
+| controller.deployment.replicas | int | `1` | Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing. For more info check the section on Plugins in the README.md file. |
+| controller.kind | string | `"daemonset"` |  |
+| controller.labels | object | `{}` | Extra labels to add to the daemonset or deployment |
+| customRules | object | `{}` | Third party rules enabled for Falco. More info on the dedicated section in README.md file. |
+| driver.ebpf | object | `{"bufSizePreset":4,"dropFailedExit":false,"hostNetwork":false,"leastPrivileged":false,"path":"${HOME}/.falco/falco-bpf.o"}` | Configuration section for ebpf driver. |
+| driver.ebpf.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. |
+| driver.ebpf.dropFailedExit | bool | `false` | dropFailedExit if set true drops failed system call exit events before pushing them to userspace. |
+| driver.ebpf.hostNetwork | bool | `false` | Needed to enable eBPF JIT at runtime for performance reasons. Can be skipped if eBPF JIT is enabled from outside the container |
+| driver.ebpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. Ensure the eBPF driver is enabled (i.e., setting the `driver.kind` option to `ebpf`). Capabilities used: {CAP_SYS_RESOURCE, CAP_SYS_ADMIN, CAP_SYS_PTRACE}.
On kernel versions >= 5.8 'CAP_PERFMON' and 'CAP_BPF' could replace 'CAP_SYS_ADMIN' but please pay attention to the 'kernel.perf_event_paranoid' value on your system. Usually 'kernel.perf_event_paranoid>2' means that you cannot use 'CAP_PERFMON' and you should fall back to 'CAP_SYS_ADMIN', but the behavior changes across different distros. Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-1 |
+| driver.ebpf.path | string | `"${HOME}/.falco/falco-bpf.o"` | path where the eBPF probe is located. It comes in handy when the probe has been installed in the nodes using tools other than the init container deployed with the chart. |
+| driver.enabled | bool | `true` | Set it to false if you want to deploy Falco without the drivers. Always set it to false when using Falco with plugins. |
+| driver.gvisor | object | `{"runsc":{"config":"/run/containerd/runsc/config.toml","path":"/home/containerd/usr/local/sbin","root":"/run/containerd/runsc"}}` | Gvisor configuration. Based on your system you need to set the appropriate values. Please, remember to add pod tolerations and affinities in order to schedule the Falco pods in the gVisor enabled nodes. |
+| driver.gvisor.runsc | object | `{"config":"/run/containerd/runsc/config.toml","path":"/home/containerd/usr/local/sbin","root":"/run/containerd/runsc"}` | Runsc container runtime configuration. Falco needs to interact with it in order to intercept the activity of the sandboxed pods. |
+| driver.gvisor.runsc.config | string | `"/run/containerd/runsc/config.toml"` | Absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence. |
+| driver.gvisor.runsc.path | string | `"/home/containerd/usr/local/sbin"` | Absolute path of the `runsc` binary in the k8s nodes. |
+| driver.gvisor.runsc.root | string | `"/run/containerd/runsc"` | Absolute path of the root directory of the `runsc` container runtime.
It is of vital importance for Falco since `runsc` stores the information of the workloads it handles there; |
+| driver.kind | string | `"auto"` | kind tells Falco which driver to use. Available options: kmod (kernel driver), ebpf (eBPF probe), modern_ebpf (modern eBPF probe). |
+| driver.kmod | object | `{"bufSizePreset":4,"dropFailedExit":false}` | kmod holds the configuration for the kernel module. |
+| driver.kmod.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. |
+| driver.kmod.dropFailedExit | bool | `false` | dropFailedExit if set true drops failed system call exit events before pushing them to userspace. |
+| driver.loader | object | `{"enabled":true,"initContainer":{"args":[],"env":[],"image":{"pullPolicy":"IfNotPresent","registry":"docker.io","repository":"falcosecurity/falco-driver-loader","tag":""},"resources":{},"securityContext":{}}}` | Configuration for the Falco init container. |
+| driver.loader.enabled | bool | `true` | Enable/disable the init container. |
+| driver.loader.initContainer.args | list | `[]` | Arguments to pass to the Falco driver loader init container. |
+| driver.loader.initContainer.env | list | `[]` | Extra environment variables that will be passed to the Falco driver loader init container. |
+| driver.loader.initContainer.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. |
+| driver.loader.initContainer.image.registry | string | `"docker.io"` | The image registry to pull from. |
+| driver.loader.initContainer.image.repository | string | `"falcosecurity/falco-driver-loader"` | The image repository to pull from. |
+| driver.loader.initContainer.resources | object | `{}` | Resources requests and limits for the Falco driver loader init container. |
+| driver.loader.initContainer.securityContext | object | `{}` | Security context for the Falco driver loader init container.
Overrides the default security context. If driver.kind == "kmod" you must at least set `privileged: true`. |
+| driver.modernEbpf.bufSizePreset | int | `4` | bufSizePreset determines the size of the shared space between Falco and its drivers. This shared space serves as a temporary storage for syscall events. |
+| driver.modernEbpf.cpusForEachBuffer | int | `2` | cpusForEachBuffer is the index that controls how many CPUs to assign to a single syscall buffer. |
+| driver.modernEbpf.dropFailedExit | bool | `false` | dropFailedExit if set true drops failed system call exit events before pushing them to userspace. |
+| driver.modernEbpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. Ensure the modern eBPF driver is enabled (i.e., setting the `driver.kind` option to `modern_ebpf`). Capabilities used: {CAP_SYS_RESOURCE, CAP_BPF, CAP_PERFMON, CAP_SYS_PTRACE}. Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-2 |
+| extra.args | list | `[]` | Extra command-line arguments. |
+| extra.env | list | `[]` | Extra environment variables that will be passed to Falco containers. |
+| extra.initContainers | list | `[]` | Additional initContainers for Falco pods. |
+| falco.base_syscalls | object | `{"custom_set":[],"repair":false}` | - [Suggestions] NOTE: setting `base_syscalls.repair: true` automates the following suggestions for you. These suggestions are subject to change as Falco and its state engine evolve. For execve* events: Some Falco fields for an execve* syscall are retrieved from the associated `clone`, `clone3`, `fork`, `vfork` syscalls when spawning a new process. The `close` syscall is used to purge file descriptors from Falco's internal thread / process cache table and is necessary for rules relating to file descriptors (e.g. open, openat, openat2, socket, connect, accept, accept4 ...
and many more) Consider enabling the following syscalls in `base_syscalls.custom_set` for process rules: [clone, clone3, fork, vfork, execve, execveat, close] For networking related events: While you can log `connect` or `accept*` syscalls without the socket syscall, the log will not contain the ip tuples. Additionally, for `listen` and `accept*` syscalls, the `bind` syscall is also necessary. We recommend the following as the minimum set for networking-related rules: [clone, clone3, fork, vfork, execve, execveat, close, socket, bind, getsockopt] Lastly, for tracking the correct `uid`, `gid` or `sid`, `pgid` of a process when the running process opens a file or makes a network connection, consider adding the following to the above recommended syscall sets: ... setresuid, setsid, setuid, setgid, setpgid, setresgid, setsid, capset, chdir, chroot, fchdir ... | +| falco.buffered_outputs | bool | `false` | Enabling buffering for the output queue can offer performance optimization, efficient resource usage, and smoother data flow, resulting in a more reliable output mechanism. By default, buffering is disabled (false). | +| falco.config_files[0] | string | `"/etc/falco/config.d"` | | +| falco.falco_libs.thread_table_size | int | `262144` | | +| falco.file_output | object | `{"enabled":false,"filename":"./events.txt","keep_alive":false}` | When appending Falco alerts to a file, each new alert will be added to a new line. It's important to note that Falco does not perform log rotation for this file. If the `keep_alive` option is set to `true`, the file will be opened once and continuously written to, else the file will be reopened for each output message. Furthermore, the file will be closed and reopened if Falco receives the SIGUSR1 signal. 
|
+| falco.grpc | object | `{"bind_address":"unix:///run/falco/falco.sock","enabled":false,"threadiness":0}` | gRPC server using a local unix socket |
+| falco.grpc.threadiness | int | `0` | When the `threadiness` value is set to 0, Falco will automatically determine the appropriate number of threads based on the number of online cores in the system. |
+| falco.grpc_output | object | `{"enabled":false}` | Use gRPC as an output service. gRPC is a modern and high-performance framework for remote procedure calls (RPC). It utilizes protocol buffers for efficient data serialization. The gRPC output in Falco provides a modern and efficient way to integrate with other systems. By default the setting is turned off. Enabling this option stores output events in memory until they are consumed by a gRPC client. Ensure that you have a consumer for the output events or leave it disabled. |
+| falco.http_output | object | `{"ca_bundle":"","ca_cert":"","ca_path":"/etc/falco/certs/","client_cert":"/etc/falco/certs/client/client.crt","client_key":"/etc/falco/certs/client/client.key","compress_uploads":false,"echo":false,"enabled":false,"insecure":false,"keep_alive":false,"mtls":false,"url":"","user_agent":"falcosecurity/falco"}` | Send logs to an HTTP endpoint or webhook. |
+| falco.http_output.ca_bundle | string | `""` | Path to a specific file that will be used as the CA certificate store. |
+| falco.http_output.ca_cert | string | `""` | Path to the CA certificate that can verify the remote server. |
+| falco.http_output.ca_path | string | `"/etc/falco/certs/"` | Path to a folder that will be used as the CA certificate store. CA certificates need to be stored as individual PEM files in this directory. |
+| falco.http_output.client_cert | string | `"/etc/falco/certs/client/client.crt"` | Path to the client cert. |
+| falco.http_output.client_key | string | `"/etc/falco/certs/client/client.key"` | Path to the client key.
| +| falco.http_output.compress_uploads | bool | `false` | Whether to compress data sent to the HTTP endpoint. | +| falco.http_output.echo | bool | `false` | Whether to echo server answers to stdout. | +| falco.http_output.insecure | bool | `false` | Tell Falco to not verify the remote server. | +| falco.http_output.keep_alive | bool | `false` | Whether to keep the connection alive. | +| falco.http_output.mtls | bool | `false` | Tell Falco to use mTLS. | +| falco.json_include_output_property | bool | `true` | When using JSON output in Falco, you have the option to include the "output" property itself in the generated JSON output. The "output" property provides additional information about the purpose of the rule. To reduce the logging volume, it is recommended to turn it off if it's not necessary for your use case. | +| falco.json_include_tags_property | bool | `true` | When using JSON output in Falco, you have the option to include the "tags" field of the rules in the generated JSON output. The "tags" field provides additional metadata associated with the rule. To reduce the logging volume, if the tags associated with the rule are not needed for your use case or can be added at a later stage, it is recommended to turn it off. | +| falco.json_output | bool | `false` | When enabled, Falco will output alert messages and rules file loading/validation results in JSON format, making it easier for downstream programs to process and consume the data. By default, this option is disabled. | +| falco.libs_logger | object | `{"enabled":false,"severity":"debug"}` | The `libs_logger` setting in Falco determines the minimum log level to include in the logs related to the functioning of the underlying `libs` library, which Falco utilizes. This setting is independent of the `priority` field of rules and the `log_level` setting that controls Falco's operational logs. 
It allows you to specify the desired log level for the `libs` library specifically, providing more granular control over the logging behavior of the underlying components used by Falco. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". It is not recommended for production use. | +| falco.load_plugins | list | `[]` | Add here all plugins and their configuration. Please consult the plugins documentation for more info. Remember to add the plugin names in "load_plugins: []" in order to load them in Falco. | +| falco.log_level | string | `"info"` | The `log_level` setting determines the minimum log level to include in Falco's logs related to the functioning of the software. This setting is separate from the `priority` field of rules and specifically controls the log level of Falco's operational logging. By specifying a log level, you can control the verbosity of Falco's operational logs. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". | +| falco.log_stderr | bool | `true` | Send information logs to stderr. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. | +| falco.log_syslog | bool | `true` | Send information logs to syslog. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. | +| falco.metrics | object | `{"convert_memory_to_mb":true,"enabled":false,"include_empty_values":false,"interval":"1h","kernel_event_counters_enabled":true,"libbpf_stats_enabled":true,"output_rule":true,"resource_utilization_enabled":true,"rules_counters_enabled":true,"state_counters_enabled":true}` | - [Usage] `enabled`: Disabled by default. `interval`: The stats interval in Falco follows the time duration definitions used by Prometheus. 
https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations Time durations are specified as a number, followed immediately by one of the following units: ms - millisecond s - second m - minute h - hour d - day - assuming a day has always 24h w - week - assuming a week has always 7d y - year - assuming a year has always 365d Example of a valid time duration: 1h30m20s10ms A minimum interval of 100ms is enforced for metric collection. However, for production environments, we recommend selecting one of the following intervals for optimal monitoring: 15m 30m 1h 4h 6h `output_rule`: To enable seamless metrics and performance monitoring, we recommend emitting metrics as the rule "Falco internal: metrics snapshot". This option is particularly useful when Falco logs are preserved in a data lake. Please note that to use this option, the Falco rules config `priority` must be set to `info` at a minimum. `output_file`: Append stats to a `jsonl` file. Use with caution in production as Falco does not automatically rotate the file. `resource_utilization_enabled`: Emit CPU and memory usage metrics. CPU usage is reported as a percentage of one CPU and can be normalized to the total number of CPUs to determine overall usage. Memory metrics are provided in raw units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) and can be uniformly converted to megabytes (MB) using the `convert_memory_to_mb` functionality. In environments such as Kubernetes when deployed as daemonset, it is crucial to track Falco's container memory usage. To customize the path of the memory metric file, you can create an environment variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. By default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to monitor container memory usage, which aligns with Kubernetes' `container_memory_working_set_bytes` metric. 
Finally, we emit the overall host CPU and memory usages, along with the total number of processes and open file descriptors (fds) on the host, obtained from the proc file system unrelated to Falco's monitoring. These metrics help assess Falco's usage in relation to the server's workload intensity. `rules_counters_enabled`: Emit counts for each rule. `state_counters_enabled`: Emit counters related to Falco's state engine, including added, removed threads or file descriptors (fds), and failed lookup, store, or retrieve actions in relation to Falco's underlying process cache table (threadtable). We also log the number of currently cached containers if applicable. `kernel_event_counters_enabled`: Emit kernel side event and drop counters, as an alternative to `syscall_event_drops`, but with some differences. 
These counters reflect monotonic values since Falco's start and are exported at a constant stats interval. `libbpf_stats_enabled`: Exposes statistics similar to `bpftool prog show`, providing information such as the number of invocations of each BPF program attached by Falco and the time spent in each program measured in nanoseconds. To enable this feature, the kernel must be >= 5.1, and the kernel configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, or an equivalent statistics feature, is not available for non `*bpf*` drivers. Additionally, please be aware that the current implementation of `libbpf` does not support granularity of statistics at the bpf tail call level. `include_empty_values`: When the option is set to true, fields with an empty numeric value will be included in the output. However, this rule does not apply to high-level fields such as `n_evts` or `n_drops`; they will always be included in the output even if their value is empty. This option can be beneficial for exploring the data schema and ensuring that fields with empty values are included in the output. todo: prometheus export option todo: syscall_counters_enabled option | +| falco.output_timeout | int | `2000` | The `output_timeout` parameter specifies the duration, in milliseconds, to wait before considering the deadline exceeded. By default, the timeout is set to 2000ms (2 seconds), meaning that the consumer of Falco outputs can block the Falco output channel for up to 2 seconds without triggering a timeout error. Falco actively monitors the performance of output channels. With this setting the timeout error can be logged, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. It's important to note that Falco outputs will not be discarded from the output queue. This means that if an output channel becomes blocked indefinitely, it indicates a potential issue that needs to be addressed by the user. 
| +| falco.outputs | object | `{"max_burst":1000,"rate":0}` | A throttling mechanism, implemented as a token bucket, can be used to control the rate of Falco outputs. Each event source has its own rate limiter, ensuring that alerts from one source do not affect the throttling of others. The following options control the mechanism: - rate: the number of tokens (i.e. right to send a notification) gained per second. When 0, the throttling mechanism is disabled. Defaults to 0. - max_burst: the maximum number of tokens outstanding. Defaults to 1000. For example, setting the rate to 1 allows Falco to send up to 1000 notifications initially, followed by 1 notification per second. The burst capacity is fully restored after 1000 seconds of no activity. Throttling can be useful in various scenarios, such as preventing notification floods, managing system load, controlling event processing, or complying with rate limits imposed by external systems or APIs. It allows for better resource utilization, avoids overwhelming downstream systems, and helps maintain a balanced and controlled flow of notifications. With the default settings, the throttling mechanism is disabled. | +| falco.outputs_queue | object | `{"capacity":0}` | Falco utilizes tbb::concurrent_bounded_queue for handling outputs, and this parameter allows you to customize the queue capacity. Please refer to the official documentation: https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Concurrent_Queue_Classes.html. On a healthy system with optimized Falco rules, the queue should not fill up. If it does, it most likely means the entire event flow is too slow, indicating that the server is under heavy load. `capacity`: the maximum number of items allowed in the queue is determined by this value. Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded. 
In other words, when this configuration is set to 0, the number of allowed items is effectively set to the largest possible long value, disabling this setting. In the case of an unbounded queue, if the available memory on the system is consumed, the Falco process would be OOM killed. When using this option and setting the capacity, the current event would be dropped, and the event loop would continue. This behavior mirrors kernel-side event drops when the buffer between kernel space and user space is full. | +| falco.plugins | list | `[{"init_config":null,"library_path":"libk8saudit.so","name":"k8saudit","open_params":"http://:9765/k8s-audit"},{"library_path":"libcloudtrail.so","name":"cloudtrail"},{"init_config":"","library_path":"libjson.so","name":"json"}]` | Customize subsettings for each enabled plugin. These settings will only be applied when the corresponding plugin is enabled using the `load_plugins` option. | +| falco.priority | string | `"debug"` | Any rule with a priority level more severe than or equal to the specified minimum level will be loaded and run by Falco. This allows you to filter and control the rules based on their severity, ensuring that only rules of a certain priority or higher are active and evaluated by Falco. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug" | +| falco.program_output | object | `{"enabled":false,"keep_alive":false,"program":"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"}` | Redirect the output to another program or command. 
Possible additional things you might want to do with program output: - send to a slack webhook: program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" - logging (alternate method to syslog): program: logger -t falco-test - send over a network connection: program: nc host.example.com 80 If `keep_alive` is set to `true`, the program will be started once and continuously written to, with each output message on its own line. If `keep_alive` is set to `false`, the program will be re-spawned for each output message. Furthermore, the program will be re-spawned if Falco receives the SIGUSR1 signal. | +| falco.rule_matching | string | `"first"` | - [Examples] Only enable two rules: rules: - disable: rule: "*" - enable: rule: Netcat Remote Code Execution in Container - enable: rule: Delete or rename shell history Disable all rules with a specific tag: rules: - disable: tag: network [Incubating] `rule_matching` - Falco has to be performant when evaluating rules against events. To quickly understand which rules could trigger on a specific event, Falco maintains buckets of rules sharing the same event type in a map. Then, the lookup in each bucket is performed through linear search. The `rule_matching` configuration key's values are: - "first": when evaluating conditions of rules in a bucket, Falco will stop evaluating rules in that bucket once it finds a matching rule. Since rules are stored in buckets in the order they are defined in the rules files, this option could prevent other rules from triggering even if their condition is met, causing a shadowing problem. - "all": with this value Falco will continue evaluating all the rules stored in the bucket, so that multiple rules could be triggered upon one event. | +| falco.rules_files | list | `["/etc/falco/falco_rules.yaml","/etc/falco/falco_rules.local.yaml","/etc/falco/rules.d"]` | The location of the rules files that will be consumed by Falco. 
| +| falco.stdout_output | object | `{"enabled":true}` | Redirect logs to standard output. | +| falco.syscall_event_drops | object | `{"actions":["log","alert"],"max_burst":1,"rate":0.03333,"simulate_drops":false,"threshold":0.1}` | For debugging/testing it is possible to simulate the drops using `simulate_drops: true`. In this case the threshold does not apply. | +| falco.syscall_event_drops.actions | list | `["log","alert"]` | Actions to be taken when system calls were dropped from the circular buffer. | +| falco.syscall_event_drops.max_burst | int | `1` | Max burst of messages emitted. | +| falco.syscall_event_drops.rate | float | `0.03333` | Rate at which log/alert messages are emitted. | +| falco.syscall_event_drops.simulate_drops | bool | `false` | Flag to enable drops for debug purposes. | +| falco.syscall_event_drops.threshold | float | `0.1` | The messages are emitted when the percentage of dropped system calls with respect to the number of events in the last second is greater than the given threshold (a double in the range [0, 1]). | +| falco.syscall_event_timeouts | object | `{"max_consecutives":1000}` | Generates Falco operational logs when `log_level=notice` at minimum. Falco utilizes a shared buffer between the kernel and userspace to receive events, such as system call information, in userspace. However, there may be cases where timeouts occur in the underlying libraries due to issues in reading events or the need to skip a particular event. While it is uncommon for Falco to experience consecutive event timeouts, it has the capability to detect such situations. You can configure the maximum number of consecutive timeouts without an event after which Falco will generate an alert, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. The default value is set to 1000 consecutive timeouts without receiving any events. The mapping of this value to a time interval depends on the CPU frequency. 
| +| falco.syslog_output | object | `{"enabled":true}` | Send logs to syslog. | +| falco.time_format_iso_8601 | bool | `false` | When enabled, Falco will display log and output messages with times in the ISO 8601 format. By default, times are shown in the local time zone determined by the /etc/localtime configuration. | +| falco.watch_config_files | bool | `true` | Watch the config file and rules files for modification. When a file is modified, Falco will propagate the new config by reloading itself. | +| falco.webserver | object | `{"enabled":true,"k8s_healthz_endpoint":"/healthz","listen_port":8765,"prometheus_metrics_enabled":false,"ssl_certificate":"/etc/falco/falco.pem","ssl_enabled":false,"threadiness":0}` | Falco supports an embedded webserver that runs within the Falco process, providing a lightweight and efficient way to expose web-based functionalities without the need for an external web server. The following endpoints are exposed: - /healthz: designed to be used for checking the health and availability of the Falco application (the name of the endpoint is configurable). - /versions: responds with a JSON object containing the version numbers of the internal Falco components (similar output as `falco --version -o json_output=true`). Please note that the /versions endpoint is particularly useful for other Falco services, such as `falcoctl`, to retrieve information about a running Falco instance. If you plan to use `falcoctl` locally or with Kubernetes, make sure the Falco webserver is enabled. The behavior of the webserver can be controlled with the following options, which are enabled by default: The `ssl_certificate` option specifies a combined SSL certificate and corresponding key that are contained in a single file. 
You can generate a key/cert as follows: $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem $ cat certificate.pem key.pem > falco.pem $ sudo cp falco.pem /etc/falco/falco.pem | +| falcoctl.artifact.follow | object | `{"args":["--log-format=json"],"enabled":true,"env":[],"mounts":{"volumeMounts":[]},"resources":{},"securityContext":{}}` | Runs "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for updates given a list of artifacts. If an update is found it downloads and installs it in a shared folder (emptyDir) that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks if it is compatible with the running version of Falco before installing it. | +| falcoctl.artifact.follow.args | list | `["--log-format=json"]` | Arguments to pass to the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.follow.env | list | `[]` | Extra environment variables that will be passed to the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.follow.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.follow.resources | object | `{}` | Resource requests and limits for the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.follow.securityContext | object | `{}` | Security context for the falcoctl-artifact-follow sidecar container. | +| falcoctl.artifact.install | object | `{"args":["--log-format=json"],"enabled":true,"env":[],"mounts":{"volumeMounts":[]},"resources":{},"securityContext":{}}` | Runs "falcoctl artifact install" command as an init container. It is used to install artifacts before Falco starts. It provides them to Falco by using an emptyDir volume. 
| +| falcoctl.artifact.install.args | list | `["--log-format=json"]` | Arguments to pass to the falcoctl-artifact-install init container. | +| falcoctl.artifact.install.env | list | `[]` | Extra environment variables that will be passed to the falcoctl-artifact-install init container. | +| falcoctl.artifact.install.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-install init container. | +| falcoctl.artifact.install.resources | object | `{}` | Resource requests and limits for the falcoctl-artifact-install init container. | +| falcoctl.artifact.install.securityContext | object | `{}` | Security context for the falcoctl init container. | +| falcoctl.config | object | `{"artifact":{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:3"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:3"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}},"indexes":[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]}` | Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers. | +| falcoctl.config.artifact | object | `{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:3"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:3"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}}` | Configuration used by the artifact commands. | +| falcoctl.config.artifact.allowedTypes | list | `["rulesfile","plugin"]` | List of artifact types that falcoctl will handle. If a configured ref resolves to an artifact whose type is not contained in the list, falcoctl will refuse to download and install that artifact. 
| +| falcoctl.config.artifact.follow.every | string | `"6h"` | How often the tool checks for new versions of the followed artifacts. | +| falcoctl.config.artifact.follow.falcoversions | string | `"http://localhost:8765/versions"` | HTTP endpoint that serves the API versions of the Falco instance. It is used to check if the new versions are compatible with the running Falco instance. | +| falcoctl.config.artifact.follow.pluginsDir | string | `"/plugins"` | See the fields of the artifact.install section. | +| falcoctl.config.artifact.follow.refs | list | `["falco-rules:3"]` | List of artifacts to be followed by the falcoctl sidecar container. | +| falcoctl.config.artifact.follow.rulesfilesDir | string | `"/rulesfiles"` | See the fields of the artifact.install section. | +| falcoctl.config.artifact.install.pluginsDir | string | `"/plugins"` | Same as the one above but for the plugins. | +| falcoctl.config.artifact.install.refs | list | `["falco-rules:3"]` | List of artifacts to be installed by the falcoctl init container. | +| falcoctl.config.artifact.install.resolveDeps | bool | `true` | Resolve the dependencies for artifacts. | +| falcoctl.config.artifact.install.rulesfilesDir | string | `"/rulesfiles"` | Directory where the rulesfiles are saved. The path is relative to the container, which in this case is an emptyDir mounted also by the Falco pod. | +| falcoctl.config.indexes | list | `[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]` | List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see: https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview | +| falcoctl.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | +| falcoctl.image.registry | string | `"docker.io"` | The image registry to pull from. 
| +| falcoctl.image.repository | string | `"falcosecurity/falcoctl"` | The image repository to pull from. | +| falcoctl.image.tag | string | `"0.9.0"` | The image tag to pull. | +| falcosidekick | object | `{"enabled":false,"fullfqdn":false,"listenPort":""}` | For configuration values, see https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml | +| falcosidekick.enabled | bool | `false` | Enable falcosidekick deployment. | +| falcosidekick.fullfqdn | bool | `false` | Enable usage of full FQDN of falcosidekick service (useful when a Proxy is used). | +| falcosidekick.listenPort | string | `""` | Listen port. Default value: 2801 | +| fullnameOverride | string | `""` | Same as nameOverride but for the fullname. | +| healthChecks | object | `{"livenessProbe":{"initialDelaySeconds":60,"periodSeconds":15,"timeoutSeconds":5},"readinessProbe":{"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}}` | Parameters used to configure the liveness and readiness probes. | +| healthChecks.livenessProbe.initialDelaySeconds | int | `60` | Tells the kubelet that it should wait X seconds before performing the first probe. | +| healthChecks.livenessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every X seconds. | +| healthChecks.livenessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. | +| healthChecks.readinessProbe.initialDelaySeconds | int | `30` | Tells the kubelet that it should wait X seconds before performing the first probe. | +| healthChecks.readinessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every X seconds. | +| healthChecks.readinessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. | +| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | +| image.registry | string | `"docker.io"` | The image registry to pull from. 
| +| image.repository | string | `"falcosecurity/falco-no-driver"` | The image repository to pull from. | +| image.tag | string | `""` | The image tag to pull. Overrides the image tag whose default is the chart appVersion. | +| imagePullSecrets | list | `[]` | Secrets containing credentials when pulling from private/secure registries. | +| metrics | object | `{"convertMemoryToMB":true,"enabled":false,"includeEmptyValues":false,"interval":"1h","kernelEventCountersEnabled":true,"libbpfStatsEnabled":true,"outputRule":false,"resourceUtilizationEnabled":true,"rulesCountersEnabled":true,"service":{"create":true,"ports":{"metrics":{"port":8765,"protocol":"TCP","targetPort":8765}},"type":"ClusterIP"},"stateCountersEnabled":true}` | metrics configures Falco to enable and expose the metrics. | +| metrics.convertMemoryToMB | bool | `true` | convertMemoryToMB specifies whether the memory should be converted to MB. | +| metrics.enabled | bool | `false` | enabled specifies whether the metrics should be enabled. | +| metrics.includeEmptyValues | bool | `false` | includeEmptyValues specifies whether the empty values should be included in the metrics. | +| metrics.interval | string | `"1h"` | interval is the stats interval; Falco follows the time duration definitions used by Prometheus. https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations Time durations are specified as a number, followed immediately by one of the following units: ms - millisecond s - second m - minute h - hour d - day - assuming a day has always 24h w - week - assuming a week has always 7d y - year - assuming a year has always 365d Example of a valid time duration: 1h30m20s10ms A minimum interval of 100ms is enforced for metric collection. 
However, for production environments, we recommend selecting one of the following intervals for optimal monitoring: 15m 30m 1h 4h 6h | +| metrics.libbpfStatsEnabled | bool | `true` | libbpfStatsEnabled exposes statistics similar to `bpftool prog show`, providing information such as the number of invocations of each BPF program attached by Falco and the time spent in each program measured in nanoseconds. To enable this feature, the kernel must be >= 5.1, and the kernel configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, or an equivalent statistics feature, is not available for non `*bpf*` drivers. Additionally, please be aware that the current implementation of `libbpf` does not support granularity of statistics at the bpf tail call level. | +| metrics.outputRule | bool | `false` | outputRule enables seamless metrics and performance monitoring by emitting metrics as the rule "Falco internal: metrics snapshot". This option is particularly useful when Falco logs are preserved in a data lake. Please note that to use this option, the Falco rules config `priority` must be set to `info` at a minimum. | +| metrics.resourceUtilizationEnabled | bool | `true` | resourceUtilizationEnabled emits CPU and memory usage metrics. CPU usage is reported as a percentage of one CPU and can be normalized to the total number of CPUs to determine overall usage. Memory metrics are provided in raw units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) and can be uniformly converted to megabytes (MB) using the `convert_memory_to_mb` functionality. In environments such as Kubernetes when deployed as daemonset, it is crucial to track Falco's container memory usage. To customize the path of the memory metric file, you can create an environment variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. 
By default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to monitor container memory usage, which aligns with Kubernetes' `container_memory_working_set_bytes` metric. Finally, we emit the overall host CPU and memory usages, along with the total number of processes and open file descriptors (fds) on the host, obtained from the proc file system unrelated to Falco's monitoring. These metrics help assess Falco's usage in relation to the server's workload intensity. | +| metrics.rulesCountersEnabled | bool | `true` | rulesCountersEnabled specifies whether the counts for each rule should be emitted. | +| metrics.service | object | `{"create":true,"ports":{"metrics":{"port":8765,"protocol":"TCP","targetPort":8765}},"type":"ClusterIP"}` | service exposes the metrics service to be accessed from within the cluster. ref: https://kubernetes.io/docs/concepts/services-networking/service/ | +| metrics.service.create | bool | `true` | create specifies whether a service should be created. | +| metrics.service.ports | object | `{"metrics":{"port":8765,"protocol":"TCP","targetPort":8765}}` | ports denotes all the ports on which the Service will listen. | +| metrics.service.ports.metrics | object | `{"port":8765,"protocol":"TCP","targetPort":8765}` | metrics denotes a listening service named "metrics". | +| metrics.service.ports.metrics.port | int | `8765` | port is the port on which the Service will listen. | +| metrics.service.ports.metrics.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. | +| metrics.service.ports.metrics.targetPort | int | `8765` | targetPort is the port on which the Pod is listening. | +| metrics.service.type | string | `"ClusterIP"` | type denotes the service type. Setting it to "ClusterIP" ensures that the metrics are accessible from within the cluster. 
| +| mounts.enforceProcMount | bool | `false` | By default, `/proc` from the host is only mounted into the Falco pod when `driver.enabled` is set to `true`. This flag allows overriding this behaviour for edge cases where `/proc` is needed but the syscall data source is not enabled at the same time (e.g. for specific plugins). | +| mounts.volumeMounts | list | `[]` | A list of volume mounts you want to add to the Falco pods. | +| mounts.volumes | list | `[]` | A list of volumes you want to add to the Falco pods. | +| nameOverride | string | `""` | Put here the new name if you want to override the release name used for Falco components. | +| namespaceOverride | string | `""` | Override the deployment namespace. | +| nodeSelector | object | `{}` | Selectors used to deploy Falco on a given node/nodes. | +| podAnnotations | object | `{}` | Add additional pod annotations. | +| podLabels | object | `{}` | Add additional pod labels. | +| podPriorityClassName | string | `nil` | Set the pod priorityClassName. | +| podSecurityContext | object | `{}` | Set securityContext for the pods. These security settings are overridden by the ones specified for the specific containers when there is overlap. | +| rbac.create | bool | `true` | | +| resources.limits | object | `{"cpu":"1000m","memory":"1024Mi"}` | Maximum amount of resources that the Falco container could get. If you are enabling more than one source in Falco, then consider increasing the CPU limits. | +| resources.requests | object | `{"cpu":"100m","memory":"512Mi"}` | Although the resources needed depend on the actual workload, we provide sane defaults. If you have questions or concerns, please refer to the #falco Slack channel. | +| scc.create | bool | `true` | Create OpenShift's Security Context Constraint. | +| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. | +| serviceAccount.create | bool | `true` | Specifies whether a service account should be created. 
| +| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template. | +| serviceMonitor | object | `{"create":false,"endpointPort":"metrics","interval":"15s","labels":{},"path":"/metrics","relabelings":[],"scheme":"http","scrapeTimeout":"10s","selector":{},"targetLabels":[],"tlsConfig":{}}` | serviceMonitor holds the configuration for the ServiceMonitor CRD. A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should discover and scrape metrics from the Falco service. | +| serviceMonitor.create | bool | `false` | create specifies whether a ServiceMonitor CRD should be created for a Prometheus operator. https://github.com/coreos/prometheus-operator Enable it only if the ServiceMonitor CRD is installed in your cluster. | +| serviceMonitor.endpointPort | string | `"metrics"` | endpointPort is the port in the Falco service that exposes the metrics service. Change the value if you deploy a custom service for Falco's metrics. | +| serviceMonitor.interval | string | `"15s"` | interval specifies the time interval at which Prometheus should scrape metrics from the service. | +| serviceMonitor.labels | object | `{}` | labels is the set of labels to be applied to the ServiceMonitor resource. If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right label here in order for the ServiceMonitor to be selected for target discovery. | +| serviceMonitor.path | string | `"/metrics"` | path at which the metrics are exposed by Falco. | +| serviceMonitor.relabelings | list | `[]` | relabelings configures the relabeling rules to apply to the target’s metadata labels. | +| serviceMonitor.scheme | string | `"http"` | scheme specifies the network protocol used by the metrics endpoint. In this case HTTP. 
| +| serviceMonitor.scrapeTimeout | string | `"10s"` | scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for that target. | +| serviceMonitor.selector | object | `{}` | selector set of labels that should match the labels on the Service targeted by the current serviceMonitor. | +| serviceMonitor.targetLabels | list | `[]` | targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics. | +| serviceMonitor.tlsConfig | object | `{}` | tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when scraping metrics from a service. It allows you to define the details of the TLS connection, such as CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support TLS configuration for the metrics endpoint. | +| services | string | `nil` | Network services configuration (scenario requirement) Add here your services to be deployed together with Falco. | +| tolerations | list | `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]` | Tolerations to allow Falco to run on Kubernetes masters. | +| tty | bool | `false` | Attach the Falco process to a tty inside the container. Needed to flush Falco logs as soon as they are emitted. Set it to "true" when you need the Falco logs to be immediately displayed. | diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/CHANGELOG.md b/charts/falco/falco/charts/falco/charts/falcosidekick/CHANGELOG.md index 9459ce24b..d7b85f5fe 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/CHANGELOG.md +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/CHANGELOG.md @@ -5,442 +5,673 @@ numbering uses [semantic versioning](http://semver.org). 
Before release 0.1.20, the helm chart can be found in `falcosidekick` [repository](https://github.com/falcosecurity/falcosidekick/tree/master/deploy/helm/falcosidekick). +## 0.8.5 -## 0.5.3 +- Fix an issue with the custom CA cert missing by default + +## 0.8.4 + +- Fix falcosidekick chart ignoring custom service type for webui redis + +## 0.8.3 + +- Add a condition to create the secrets for the redis only if the webui is deployed + +## 0.8.2 + +- Fix redis-availability check of the UI init-container in case externalRedis is enabled + +## 0.8.1 + +- Allow to set resources, securityContext and image overwrite for wait-redis initContainer + +## 0.8.0 + +- Upgrade to Falcosidekick 2.29.0 +- Allow to set custom labels and annotations for all resources +- Allow to use an existing secret and values for the env vars at the same time +- Fix missing ingressClassName settings in the values.yaml +- Add an initContainer to check if the redis for falcosidekick-ui is up + +## 0.7.22 + +- Upgrade redis-stack image to 7.2.0-v11 + +## 0.7.21 + +- Fix the Falco Sidekick WEBUI_URL secret value. + +## 0.7.20 + +- Align Web UI service port from values.yaml file with Falco Sidekick WEBUI_URL secret value. + +## 0.7.19 + +- Enhanced the ServiceMonitor to support additional properties. +- Fix the PromQL query for prometheusRules: FalcoErrorOutputEventsRateHigh. 
+ +## 0.7.18 + +- Fix PrometheusRule duplicate alert name + +## 0.7.17 + +- Fix the labels for the serviceMonitor + +## 0.7.16 + +- Fix the error with the `NOTES` (`index of untyped nil Use`) when the ingress is enabled to falcosidekick-ui + +## 0.7.15 + +- Fix ServiceMonitor selector labels + +## 0.7.14 + +- Fix duplicate component labels + +## 0.7.13 + +- Fix ServiceMonitor port name and selector labels + +## 0.7.12 + +- Align README values with the values.yaml file + +## 0.7.11 + +- Fix a link in the falcosidekick README to the policy report output documentation + +## 0.7.10 + +- Set Helm recommended labels (`app.kubernetes.io/name`, `app.kubernetes.io/instance`, `app.kubernetes.io/version`, `helm.sh/chart`, `app.kubernetes.io/part-of`, `app.kubernetes.io/managed-by`) using helpers.tpl + +## 0.7.9 + +- noop change to the chart itself. Updated makefile. + +## 0.7.8 + +- Fix the condition for missing cert files + +## 0.7.7 + +- Support extraArgs in the helm chart + +## 0.7.6 + +- Fix the behavior with the `AWS IRSA` with a new value `aws.config.useirsa` +- Add a section in the README to describe how to use a subpath for `Falcosidekick-ui` ingress +- Add a `ServiceMonitor` for prometheus-operator +- Add a `PrometheusRule` for prometheus-operator + +## 0.7.5 + +- noop change just to test the ci + +## 0.7.4 + +- Fix volume mount when `config.tlsserver.servercrt`, `config.tlsserver.serverkey` and `config.tlsserver.cacrt` variables are defined. + +## 0.7.3 + +- Allow to set (m)TLS Server cryptographic material via `config.tlsserver.servercrt`, `config.tlsserver.serverkey` and `config.tlsserver.cacrt` variables or through `config.tlsserver.existingSecret` variables. 
+ +## 0.7.2 + +- Fix the wrong key of the secret for the user + +## 0.7.1 + +- Allow to set a password `webui.redis.password` for Redis for `Falcosidekick-UI` +- The user for `Falcosidekick-UI` is now set with an env var from a secret + +## 0.7.0 + +- Support configuration of revisionHistoryLimit of the deployments + +## 0.6.3 + +- Update Falcosidekick to 2.28.0 +- Add Mutual TLS Client config +- Add TLS Server config +- Add `bracketreplacer` config +- Add `customseveritymap` to `alertmanager` output +- Add Drop Event config to `alertmanager` output +- Add `customheaders` to `elasticsearch` output +- Add `customheaders` to `loki` output +- Add `customheaders` to `grafana` output +- Add `rolearn` and `externalid` for `aws` outputs +- Add `method` to `webhook` output +- Add `customattributes` to `gcp.pubsub` output +- Add `region` to `pagerduty` output +- Add `topiccreation` and `tls` to `kafka` output +- Add `Grafana OnCall` output +- Add `Redis` output +- Add `Telegram` output +- Add `N8N` output +- Add `OpenObserve` output + +## 0.6.2 + +- Fix interpolation of `SYSLOG_PORT` + +## 0.6.1 + +- Add `webui.allowcors` value for `Falcosidekick-UI` + +## 0.6.0 + +- Change the docker image for the redis pod for falcosidekick-ui + +## 0.5.16 + +- Add `affinity`, `nodeSelector` and `tolerations` values for the Falcosidekick test-connection pod + +## 0.5.15 + +- Set extra labels and annotations for `AlertManager` only if they're not empty + +## 0.5.14 + +- Fix Prometheus extralabels configuration in Falcosidekick + +## 0.5.13 + +- Fix missing quotes in Falcosidekick-UI ttl argument + +## 0.5.12 + +- Fix missing space in Falcosidekick-UI ttl argument + +## 0.5.11 + +- Fix missing space in Falcosidekick-UI arguments + +## 0.5.10 + +- upgrade Falcosidekick image to 2.27.0 +- upgrade Falcosidekick-UI image to 2.1.0 +- Add `Yandex Data Streams` output +- Add `Node-Red` output +- Add `MQTT` output +- Add `Zincsearch` output +- Add `Gotify` output +- Add `Spyderbat` output +- Add 
`Tekton` output +- Add `TimescaleDB` output +- Add `AWS Security Lake` output +- Add `config.templatedfields` to set templated fields +- Add `config.slack.channel` to override `Slack` channel +- Add `config.alertmanager.extralabels` and `config.alertmanager.extraannotations` for `AlertManager` output +- Add `config.influxdb.token`, `config.influxdb.organization` and `config.influxdb.precision` for `InfluxDB` output +- Add `config.aws.checkidentity` to disallow STS checks +- Add `config.smtp.authmechanism`, `config.smtp.token`, `config.smtp.identity`, `config.smtp.trace` to manage `SMTP` auth +- Update default doc type for `Elasticsearch` +- Add `config.loki.user`, `config.loki.apikey` to manage auth to Grafana Cloud for `Loki` output +- Add `config.kafka.sasl`, `config.kafka.async`, `config.kafka.compression`, `config.kafka.balancer`, `config.kafka.clientid` to manage auth and communication for `Kafka` output +- Add `config.syslog.format` to manage the format of `Syslog` payload +- Add `webui.ttl` to set TTL of keys in Falcosidekick-UI +- Add `webui.loglevel` to set log level in Falcosidekick-UI +- Add `webui.user` to set the user:password in Falcosidekick-UI + +## 0.5.9 + +- Fix: remove `namespace` from `clusterrole` and `clusterrolebinding` metadata + +## 0.5.8 + +- Support `storageEnabled` for `redis` to allow ephemeral installs + +## 0.5.7 + +- Removing unused Kafka config values + +## 0.5.6 + +- Fixing Syslog's port import in `secrets.yaml` + +## 0.5.5 + +- Add `webui.externalRedis` with `enabled`, `url` and `port` to values to set an external Redis database with RediSearch > v2 for the WebUI +- Add `webui.redis.enabled` option to disable the deployment of the database. 
+- `webui.redis.enabled` and `webui.externalRedis.enabled` are mutually exclusive + +## 0.5.4 -* Upgrade image to fix Panic of `Prometheus` output when `customfields` is set -* Add `extralabels` for `Loki` and `Prometheus` outputs to set fields to use as labels -* Add `expiresafter` for `AlertManager` output +- Upgrade image to fix Panic of `Prometheus` output when `customfields` is set +- Add `extralabels` for `Loki` and `Prometheus` outputs to set fields to use as labels +- Add `expiresafter` for `AlertManager` output ## 0.5.3 -* Support full configuration of `securityContext` blocks in falcosidekick and falcosidekick-ui deployments, and redis statefulset. +- Support full configuration of `securityContext` blocks in falcosidekick and falcosidekick-ui deployments, and redis statefulset. ## 0.5.2 -* Update Falcosidekick-UI image (fix wrong redirect to localhost when an ingress is used) +- Update Falcosidekick-UI image (fix wrong redirect to localhost when an ingress is used) ## 0.5.1 -* Support `ingressClassName` field in falcosidekick ingresses. +- Support `ingressClassName` field in falcosidekick ingresses. 
## 0.5.0 ### Major Changes -* Add `Policy Report` output -* Add `Syslog` output -* Add `AWS Kinesis` output -* Add `Zoho Cliq` output -* Support IRSA for AWS authentication -* Upgrade Falcosidekick-UI to v2.0.1 +- Add `Policy Report` output +- Add `Syslog` output +- Add `AWS Kinesis` output +- Add `Zoho Cliq` output +- Support IRSA for AWS authentication +- Upgrade Falcosidekick-UI to v2.0.1 ### Minor changes -* Allow to set custom Labels for pods +- Allow to set custom Labels for pods ## 0.4.5 -* Allow additional service-ui annotations +- Allow additional service-ui annotations ## 0.4.4 -* Fix output after chart installation when ingress is enable +- Fix output after chart installation when ingress is enable ## 0.4.3 -* Support `annotation` block in service +- Support `annotation` block in service ## 0.4.2 -* Fix: Added the rule to use the podsecuritypolicy -* Fix: Added `ServiceAccountName` to the UI deployment +- Fix: Added the rule to use the podsecuritypolicy +- Fix: Added `ServiceAccountName` to the UI deployment ## 0.4.1 -* Removes duplicate `Fission` keys from secret +- Removes duplicate `Fission` keys from secret ## 0.4.0 ### Major Changes -* Support Ingress API version `networking.k8s.io/v1`, see `ingress.hosts` and `webui.ingress.hosts` in [values.yaml](values.yaml) for a breaking change in the `path` parameter +- Support Ingress API version `networking.k8s.io/v1`, see `ingress.hosts` and `webui.ingress.hosts` in [values.yaml](values.yaml) for a breaking change in the `path` parameter ## 0.3.17 -* Fix: Remove the value for bucket of `Yandex S3`, it enabled the output by default +- Fix: Remove the value for bucket of `Yandex S3`, it enabled the output by default ## 0.3.16 ### Major Changes -* Fix: set correct new image 2.24.0 +- Fix: set correct new image 2.24.0 ## 0.3.15 ### Major Changes -* Add `Fission` output +- Add `Fission` output ## 0.3.14 ### Major Changes -* Add `Grafana` output -* Add `Yandex Cloud S3` output -* Add `Kafka REST` output +- Add 
`Grafana` output +- Add `Yandex Cloud S3` output +- Add `Kafka REST` output ### Minor changes -* Docker image is now available on AWS ECR Public Gallery (`--set image.registry=public.ecr.aws`) +- Docker image is now available on AWS ECR Public Gallery (`--set image.registry=public.ecr.aws`) ## 0.3.13 ### Minor changes -* Enable extra volumes and volumemounts for `falcosidekick` via values +- Enable extra volumes and volumemounts for `falcosidekick` via values ## 0.3.12 -* Add AWS configuration field `config.aws.rolearn` +- Add AWS configuration field `config.aws.rolearn` ## 0.3.11 ### Minor changes -* Make image registries for `falcosidekick` and `falcosidekick-ui` configurable +- Make image registries for `falcosidekick` and `falcosidekick-ui` configurable ## 0.3.10 ### Minor changes -* Fix table formatting in `README.md` +- Fix table formatting in `README.md` ## 0.3.9 ### Fixes -* Add missing `imagePullSecrets` in `falcosidekick/templates/deployment-ui.yaml` +- Add missing `imagePullSecrets` in `falcosidekick/templates/deployment-ui.yaml` ## 0.3.8 ### Major Changes -* Add `GCP Cloud Run` output -* Add `GCP Cloud Functions` output -* Add `Wavefront` output -* Allow MutualTLS for some outputs -* Add basic auth for Elasticsearch output +- Add `GCP Cloud Run` output +- Add `GCP Cloud Functions` output +- Add `Wavefront` output +- Allow MutualTLS for some outputs +- Add basic auth for Elasticsearch output ## 0.3.7 ### Minor changes -* Fix table formatting in `README.md` -* Fix `config.azure.eventHub` parameter name in `README.md` +- Fix table formatting in `README.md` +- Fix `config.azure.eventHub` parameter name in `README.md` ## 0.3.6 ### Fixes -* Point to the correct name of aadpodidentnity +- Point to the correct name of aadpodidentnity ## 0.3.5 ### Minor Changes -* Fix link to Falco in the `README.md` +- Fix link to Falco in the `README.md` ## 0.3.4 ### Major Changes -* Bump up version (`v1.0.1`) of image for `falcosidekick-ui` +- Bump up version (`v1.0.1`) of 
image for `falcosidekick-ui` ## 0.3.3 ### Minor Changes -* Set default values for `OpenFaaS` output type parameters -* Fixes of documentation +- Set default values for `OpenFaaS` output type parameters +- Fixes of documentation ## 0.3.2 ### Fixes -* Add config checksum annotation to deployment pods to restart pods on config change -* Fix statsd config options in the secret to make them match the docs +- Add config checksum annotation to deployment pods to restart pods on config change +- Fix statsd config options in the secret to make them match the docs ## 0.3.1 ### Fixes -* Fix for `s3.bucket`, it should be empty +- Fix for `s3.bucket`, it should be empty ## 0.3.0 ### Major Changes -* Add `AWS S3` output -* Add `GCP Storage` output -* Add `RabbitMQ` output -* Add `OpenFaas` output +- Add `AWS S3` output +- Add `GCP Storage` output +- Add `RabbitMQ` output +- Add `OpenFaas` output ## 0.2.9 ### Major Changes -* Updated falcosidekuck-ui default image version to `v0.2.0` +- Updated falcosidekuck-ui default image version to `v0.2.0` ## 0.2.8 ### Fixes -* Fixed to specify `kafka.hostPort` instead of `kafka.url` +- Fixed to specify `kafka.hostPort` instead of `kafka.url` ## 0.2.7 ### Fixes -* Fixed missing hyphen in podidentity +- Fixed missing hyphen in podidentity ## 0.2.6 ### Fixes -* Fix repo and tag for `ui` image +- Fix repo and tag for `ui` image ## 0.2.5 ### Major Changes -* Add `CLOUDEVENTS` output -* Add `WEBUI` output +- Add `CLOUDEVENTS` output +- Add `WEBUI` output ### Minor Changes -* Add details about syntax for adding `custom_fields` +- Add details about syntax for adding `custom_fields` ## 0.2.4 ### Minor Changes -* Add `DATADOG_HOST` to secret +- Add `DATADOG_HOST` to secret ## 0.2.3 ### Minor Changes -* Allow additional pod annotations -* Remove namespace condition in aad-pod-identity +- Allow additional pod annotations +- Remove namespace condition in aad-pod-identity ## 0.2.2 ### Major Changes -* Add `Kubeless` output +- Add `Kubeless` output ## 
0.2.1 ### Major Changes -* Add `PagerDuty` output +- Add `PagerDuty` output ## 0.2.0 ### Major Changes -* Add option to use an existing secret -* Add option to add extra environment variables -* Add `Stan` output +- Add option to use an existing secret +- Add option to add extra environment variables +- Add `Stan` output ### Minor Changes -* Use the Existing secret resource and add all possible variables to there, and make it simpler to read and less error-prone in the deployment resource +- Use the Existing secret resource and add all possible variables to there, and make it simpler to read and less error-prone in the deployment resource ## 0.1.37 ### Minor Changes -* Fix aws keys not being added to the deployment +- Fix aws keys not being added to the deployment ## 0.1.36 ### Minor Changes -* Fix helm test +- Fix helm test ## 0.1.35 ### Major Changes -* Update image to use release 2.19.1 +- Update image to use release 2.19.1 ## 0.1.34 -* New outputs can be set : `Kafka`, `AWS CloudWatchLogs` +- New outputs can be set : `Kafka`, `AWS CloudWatchLogs` ## 0.1.33 ### Minor Changes -* Fixed GCP Pub/Sub values references in `deployment.yaml` +- Fixed GCP Pub/Sub values references in `deployment.yaml` ## 0.1.32 ### Major Changes -* Support release namespace configuration +- Support release namespace configuration ## 0.1.31 ### Major Changes -* New outputs can be set : `Googlechat` +- New outputs can be set : `Googlechat` ## 0.1.30 ### Major changes -* New output can be set : `GCP PubSub` -* Custom Headers can be set for `Webhook` output -* Fix typo `aipKey` for OpsGenie output +- New output can be set : `GCP PubSub` +- Custom Headers can be set for `Webhook` output +- Fix typo `aipKey` for OpsGenie output ## 0.1.29 -* Fix falcosidekick configuration table to use full path of configuration properties in the `README.md` +- Fix falcosidekick configuration table to use full path of configuration properties in the `README.md` ## 0.1.28 ### Major changes -* New output can be 
set : `AWS SNS` -* Metrics in `prometheus` format can be scrapped from `/metrics` URI +- New output can be set : `AWS SNS` +- Metrics in `prometheus` format can be scrapped from `/metrics` URI ## 0.1.27 ### Minor Changes -* Replace extensions apiGroup/apiVersion because of deprecation +- Replace extensions apiGroup/apiVersion because of deprecation ## 0.1.26 ### Minor Changes -* Allow the creation of a PodSecurityPolicy, disabled by default +- Allow the creation of a PodSecurityPolicy, disabled by default ## 0.1.25 ### Minor Changes -* Allow the configuration of the Pod securityContext, set default runAsUser and fsGroup values +- Allow the configuration of the Pod securityContext, set default runAsUser and fsGroup values ## 0.1.24 ### Minor Changes -* Remove duplicated `webhook` block in `values.yaml` +- Remove duplicated `webhook` block in `values.yaml` ## 0.1.23 -* fake release for triggering CI for auto-publishing +- fake release for triggering CI for auto-publishing ## 0.1.22 ### Major Changes -* Add `imagePullSecrets` +- Add `imagePullSecrets` ## 0.1.21 ### Minor Changes -* Fix `Azure Indentity` case sensitive value +- Fix `Azure Indentity` case sensitive value ## 0.1.20 ### Major Changes -* New outputs can be set : `Azure Event Hubs`, `Discord` +- New outputs can be set : `Azure Event Hubs`, `Discord` ### Minor Changes -* Fix wrong port name in output +- Fix wrong port name in output ## 0.1.17 ### Major Changes -* New outputs can be set : `Mattermost`, `Rocketchat` +- New outputs can be set : `Mattermost`, `Rocketchat` ## 0.1.11 ### Major Changes -* Add Pod Security Policy +- Add Pod Security Policy ## 0.1.11 ### Minor Changes -* Fix wrong value reference for Elasticsearch output in deployment.yaml +- Fix wrong value reference for Elasticsearch output in deployment.yaml ## 0.1.10 ### Major Changes -* New output can be set : `DogStatsD` +- New output can be set : `DogStatsD` ## 0.1.9 ### Major Changes -* New output can be set : `StatsD` +- New output can be 
set : `StatsD` ## 0.1.7 ### Major Changes -* New output can be set : `Opsgenie` +- New output can be set : `Opsgenie` ## 0.1.6 ### Major Changes -* New output can be set : `NATS` +- New output can be set : `NATS` ## 0.1.5 ### Major Changes -* `Falcosidekick` and its chart are now part of `falcosecurity` organization +- `Falcosidekick` and its chart are now part of `falcosecurity` organization ## 0.1.4 ### Minor Changes -* Use more recent image with `Golang` 1.14 +- Use more recent image with `Golang` 1.14 ## 0.1.3 ### Major Changes -* New output can be set : `Loki` +- New output can be set : `Loki` ## 0.1.2 ### Major Changes -* New output can be set : `SMTP` +- New output can be set : `SMTP` ## 0.1.1 ### Major Changes -* New outputs can be set : `AWS Lambda`, `AWS SQS`, `Teams` +- New outputs can be set : `AWS Lambda`, `AWS SQS`, `Teams` ## 0.1.0 ### Major Changes -* Initial release of Falcosidekick Helm Chart +- Initial release of Falcosidekick Helm Chart diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/Chart.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/Chart.yaml index 87b807d24..b059fb03e 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/Chart.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/Chart.yaml @@ -1,5 +1,5 @@ apiVersion: v1 -appVersion: 2.26.0 +appVersion: 2.29.0 description: Connect Falco to your ecosystem home: https://github.com/falcosecurity/falcosidekick icon: https://raw.githubusercontent.com/falcosecurity/falcosidekick/master/imgs/falcosidekick_color.png @@ -13,4 +13,4 @@ maintainers: name: falcosidekick sources: - https://github.com/falcosecurity/falcosidekick -version: 0.5.4 +version: 0.8.5 diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/README.gotmpl b/charts/falco/falco/charts/falco/charts/falcosidekick/README.gotmpl new file mode 100644 index 000000000..3d3b89bba --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/README.gotmpl @@ -0,0 
+1,187 @@ +# Falcosidekick + +![falcosidekick](https://github.com/falcosecurity/falcosidekick/raw/master/imgs/falcosidekick_color.png) + +![release](https://flat.badgen.net/github/release/falcosecurity/falcosidekick/latest?color=green) ![last commit](https://flat.badgen.net/github/last-commit/falcosecurity/falcosidekick) ![licence](https://flat.badgen.net/badge/license/MIT/blue) ![docker pulls](https://flat.badgen.net/docker/pulls/falcosecurity/falcosidekick?icon=docker) + +## Description + +A simple daemon for connecting [`Falco`](https://github.com/falcosecurity/falco) to your ecosystem. It takes `Falco`'s events and +forwards them to different outputs in a fan-out way. + +It works as a single endpoint for as many `Falco` instances as you want: + +![falco_with_falcosidekick](https://github.com/falcosecurity/falcosidekick/raw/master/imgs/falco_with_falcosidekick.png) + +## Outputs + +`Falcosidekick` manages a large variety of outputs with different purposes. + +> **Note** +Follow the links to get the configuration of each output. 
+ +### Chat + +- [**Slack**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/slack.md) +- [**Rocketchat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/rocketchat.md) +- [**Mattermost**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/mattermost.md) +- [**Teams**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/teams.md) +- [**Discord**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/discord.md) +- [**Google Chat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/googlechat.md) +- [**Zoho Cliq**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/cliq.md) +- [**Telegram**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/telegram.md) + +### Metrics / Observability + +- [**Datadog**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/datadog.md) +- [**Influxdb**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/influxdb.md) +- [**StatsD**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/statsd.md) (for monitoring of `falcosidekick`) +- [**DogStatsD**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/dogstatsd.md) (for monitoring of `falcosidekick`) +- [**Prometheus**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/prometheus.md) (for both events and monitoring of `falcosidekick`) +- [**Wavefront**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/wavefront.md) +- [**Spyderbat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/spyderbat.md) +- [**TimescaleDB**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/timescaledb.md) +- [**Dynatrace**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/dynatrace.md) + +### Alerting + +- 
[**AlertManager**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/alertmanager.md) +- [**Opsgenie**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/opsgenie.md) +- [**PagerDuty**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/pagerduty.md) +- [**Grafana OnCall**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/grafana_oncall.md) + +### Logs + +- [**Elasticsearch**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/elasticsearch.md) +- [**Loki**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/loki.md) +- [**AWS CloudWatchLogs**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_cloudwatch_logs.md) +- [**Grafana**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/grafana.md) +- [**Syslog**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/syslog.md) +- [**Zincsearch**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs//zincsearch.md) +- [**OpenObserve**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/openobserve.md) + +### Object Storage + +- [**AWS S3**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_s3.md) +- [**GCP Storage**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_storage.md) +- [**Yandex S3 Storage**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/yandex_s3.md) + +### FaaS / Serverless + +- [**AWS Lambda**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_lambda.md) +- [**GCP Cloud Run**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_cloud_run.md) +- [**GCP Cloud Functions**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_cloud_functions.md) +- 
[**Fission**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/fission.md) +- [**KNative (CloudEvents)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/cloudevents.md) +- [**Kubeless**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kubeless.md) +- [**OpenFaaS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/openfaas.md) +- [**Tekton**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/tekton.md) + +### Message queue / Streaming + +- [**NATS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/nats.md) +- [**STAN (NATS Streaming)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/stan.md) +- [**AWS SQS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_sqs.md) +- [**AWS SNS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_sns.md) +- [**AWS Kinesis**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_kinesis.md) +- [**GCP PubSub**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_pub_sub.md) +- [**Apache Kafka**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kafka.md) +- [**Kafka Rest Proxy**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kafkarest.md) +- [**RabbitMQ**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/rabbitmq.md) +- [**Azure Event Hubs**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/azure_event_hub.md) +- [**Yandex Data Streams**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/yandex_datastreams.md) +- [**MQTT**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/mqtt.md) +- [**Gotify**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gotify.md) + +### Email + +- 
[**SMTP**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/smtp.md) + +### Database + +- [**Redis**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/redis.md) + +### Web + +- [**Webhook**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/webhook.md) +- [**Node-RED**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/nodered.md) +- [**WebUI (Falcosidekick UI)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/falcosidekick-ui.md) + +### SIEM + +- [**AWS Security Lake**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_security_lake.md) + +### Workflow + +- [**n8n**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/n8n.md) + +### Other + +- [**Policy Report**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/policy_report.md) + +## Adding `falcosecurity` repository + +Prior to installing the chart, add the `falcosecurity` charts repository: + +```bash +helm repo add falcosecurity https://falcosecurity.github.io/charts +helm repo update +``` + +## Installing the Chart + +### Install Falco + Falcosidekick + Falcosidekick-ui + +To install the chart with the release name `falcosidekick`, run: + +```bash +helm install falcosidekick falcosecurity/falcosidekick --set webui.enabled=true +``` + +### With the Helm chart of Falco + +`Falco`, `Falcosidekick` and `Falcosidekick-ui` can be installed together in one command. All values to configure `Falcosidekick` have to be +prefixed with `falcosidekick.`. + +```bash +helm install falco falcosecurity/falco --set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true +``` + +After a few seconds, Falcosidekick should be running.
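+
+As the number of options grows, the same settings can be kept in a values file instead of repeated `--set` flags. This is only a sketch; the file name `custom-values.yaml` is an example, and the keys nest under `falcosidekick.` because Falcosidekick is a subchart of the Falco chart:
+
+```yaml
+# custom-values.yaml (example name): enables the bundled Falcosidekick and its UI
+falcosidekick:
+  enabled: true
+  webui:
+    enabled: true
+```
+
+Then install with `helm install falco falcosecurity/falco -f custom-values.yaml`.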
+ +> **Tip**: List all releases using `helm list`; a release is a name used to track a specific deployment. + +## Minimum Kubernetes version + +The minimum Kubernetes version required is 1.17.x. + +## Uninstalling the Chart + +To uninstall the `falcosidekick` deployment: + +```bash +helm uninstall falcosidekick +``` + +The command removes all the Kubernetes components associated with the chart and deletes the release. + +## Configuration + +The following table lists the main configurable parameters of the Falcosidekick chart and their default values. See `values.yaml` for the full list. + +{{ template "chart.valuesSection" . }} + +Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. +> **Tip**: You can use the default [values.yaml](values.yaml). + +## Metrics + +A `prometheus` endpoint can be scraped at `/metrics`. + +## Access Falcosidekick UI through an Ingress and a subpath + +You may want to access the [`WebUI (Falcosidekick UI)`](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/falcosidekick-ui.md) dashboard not from `/` but from `/subpath` through an Ingress. Here is an example of annotations to add to the Ingress for the `nginx-ingress` controller: + +```yaml +# the rewrite below assumes the Ingress path captures two groups, e.g. path: /subpath(/|$)(.*) +nginx.ingress.kubernetes.io/rewrite-target: /$2 +nginx.ingress.kubernetes.io/use-regex: "true" +``` diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/README.md b/charts/falco/falco/charts/falco/charts/falcosidekick/README.md index 7023d0c8e..598d140c2 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/README.md +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/README.md @@ -17,78 +17,106 @@ It works as a single endpoint for as many as you want `Falco` instances : `Falcosidekick` manages a large variety of outputs with different purposes. +> **Note** +> Follow the links to get the configuration of each output.
+ ### Chat -- [**Slack**](https://slack.com) -- [**Rocketchat**](https://rocket.chat/) -- [**Mattermost**](https://mattermost.com/) -- [**Teams**](https://products.office.com/en-us/microsoft-teams/group-chat-software) -- [**Discord**](https://www.discord.com/) -- [**Google Chat**](https://workspace.google.com/products/chat/) -- [**Zoho Cliq**](https://www.zoho.com/cliq/) +- [**Slack**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/slack.md) +- [**Rocketchat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/rocketchat.md) +- [**Mattermost**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/mattermost.md) +- [**Teams**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/teams.md) +- [**Discord**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/discord.md) +- [**Google Chat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/googlechat.md) +- [**Zoho Cliq**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/cliq.md) +- [**Telegram**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/telegram.md) ### Metrics / Observability -- [**Datadog**](https://www.datadoghq.com/) -- [**Influxdb**](https://www.influxdata.com/products/influxdb-overview/) -- [**StatsD**](https://github.com/statsd/statsd) (for monitoring of `falcosidekick`) -- [**DogStatsD**](https://docs.datadoghq.com/developers/dogstatsd/?tab=go) (for monitoring of `falcosidekick`) -- [**Prometheus**](https://prometheus.io/) (for both events and monitoring of `falcosidekick`) -- [**Wavefront**](https://www.wavefront.com) +- [**Datadog**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/datadog.md) +- [**Influxdb**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/influxdb.md) +- [**StatsD**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/statsd.md) (for monitoring 
of `falcosidekick`) +- [**DogStatsD**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/dogstatsd.md) (for monitoring of `falcosidekick`) +- [**Prometheus**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/prometheus.md) (for both events and monitoring of `falcosidekick`) +- [**Wavefront**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/wavefront.md) +- [**Spyderbat**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/spyderbat.md) +- [**TimescaleDB**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/timescaledb.md) +- [**Dynatrace**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/dynatrace.md) ### Alerting -- [**AlertManager**](https://prometheus.io/docs/alerting/alertmanager/) -- [**Opsgenie**](https://www.opsgenie.com/) -- [**PagerDuty**](https://pagerduty.com/) +- [**AlertManager**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/alertmanager.md) +- [**Opsgenie**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/opsgenie.md) +- [**PagerDuty**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/pagerduty.md) +- [**Grafana OnCall**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/grafana_oncall.md) ### Logs -- [**Elasticsearch**](https://www.elastic.co/) -- [**Loki**](https://grafana.com/oss/loki) -- [**AWS CloudWatchLogs**](https://aws.amazon.com/cloudwatch/features/) -- [**Grafana**](https://grafana.com/) (annotations) -- **Syslog** +- [**Elasticsearch**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/elasticsearch.md) +- [**Loki**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/loki.md) +- [**AWS CloudWatchLogs**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_cloudwatch_logs.md) +- 
[**Grafana**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/grafana.md) +- [**Syslog**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/syslog.md) +- [**Zincsearch**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs//zincsearch.md) +- [**OpenObserve**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/openobserve.md) ### Object Storage -- [**AWS S3**](https://aws.amazon.com/s3/features/) -- [**GCP Storage**](https://cloud.google.com/storage) -- [**Yandex S3 Storage**](https://cloud.yandex.com/en-ru/services/storage) +- [**AWS S3**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_s3.md) +- [**GCP Storage**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_storage.md) +- [**Yandex S3 Storage**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/yandex_s3.md) ### FaaS / Serverless -- [**AWS Lambda**](https://aws.amazon.com/lambda/features/) -- [**Kubeless**](https://kubeless.io/) -- [**OpenFaaS**](https://www.openfaas.com) -- [**GCP Cloud Run**](https://cloud.google.com/run) -- [**GCP Cloud Functions**](https://cloud.google.com/functions) -- [**Fission**](https://fission.io) +- [**AWS Lambda**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_lambda.md) +- [**GCP Cloud Run**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_cloud_run.md) +- [**GCP Cloud Functions**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_cloud_functions.md) +- [**Fission**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/fission.md) +- [**KNative (CloudEvents)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/cloudevents.md) +- [**Kubeless**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kubeless.md) +- 
[**OpenFaaS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/openfaas.md) +- [**Tekton**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/tekton.md) ### Message queue / Streaming -- [**NATS**](https://nats.io/) -- [**STAN (NATS Streaming)**](https://docs.nats.io/nats-streaming-concepts/intro) -- [**AWS SQS**](https://aws.amazon.com/sqs/features/) -- [**AWS SNS**](https://aws.amazon.com/sns/features/) -- [**AWS Kinesis**](https://aws.amazon.com/kinesis/) -- [**GCP PubSub**](https://cloud.google.com/pubsub) -- [**Apache Kafka**](https://kafka.apache.org/) -- [**Kafka Rest Proxy**](https://docs.confluent.io/platform/current/kafka-rest/index.html) -- [**RabbitMQ**](https://www.rabbitmq.com/) -- [**Azure Event Hubs**](https://azure.microsoft.com/en-in/services/event-hubs/) +- [**NATS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/nats.md) +- [**STAN (NATS Streaming)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/stan.md) +- [**AWS SQS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_sqs.md) +- [**AWS SNS**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_sns.md) +- [**AWS Kinesis**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_kinesis.md) +- [**GCP PubSub**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gcp_pub_sub.md) +- [**Apache Kafka**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kafka.md) +- [**Kafka Rest Proxy**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/kafkarest.md) +- [**RabbitMQ**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/rabbitmq.md) +- [**Azure Event Hubs**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/azure_event_hub.md) +- [**Yandex Data 
Streams**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/yandex_datastreams.md) +- [**MQTT**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/mqtt.md) +- [**Gotify**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/gotify.md) ### Email -- **SMTP** +- [**SMTP**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/smtp.md) + +### Database + +- [**Redis**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/redis.md) ### Web -- **Webhook** -- [**WebUI**](https://github.com/falcosecurity/falcosidekick-ui) (a Web UI for displaying latest events in real time) +- [**Webhook**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/webhook.md) +- [**Node-RED**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/nodered.md) +- [**WebUI (Falcosidekick UI)**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/falcosidekick-ui.md) + +### SIEM + +- [**AWS Security Lake**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/aws_security_lake.md) + +### Workflow + +- [**n8n**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/n8n.md) ### Other -- [**Policy Report**](https://github.com/kubernetes-sigs/wg-policy-prototypes/tree/master/policy-report/falco-adapter) +- [**Policy Report**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/policy_report.md) ## Adding `falcosecurity` repository @@ -122,7 +150,7 @@ After a few seconds, Falcosidekick should be running. 
> **Tip**: List all releases using `helm list`, a release is a name used to track a specific deployment -## Minumiun Kubernetes version +## Minimum Kubernetes version The minimum Kubernetes version required is 1.17.x @@ -140,303 +168,540 @@ The command removes all the Kubernetes components associated with the chart and The following table lists the main configurable parameters of the Falcosidekick chart and their default values. See `values.yaml` for full list. -| Parameter | Description | Default | -| ------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | -| `config.extraEnv` | Extra environment variables | `[]` | -| `config.existingSecret` | Existing secret with configuration | `""` | -| `config.debug` | DEBUG environment variable | `false` | -| `config.customfields` | a list of escaped comma separated custom fields to add to falco events, syntax is "key:value\,key:value" | `""` | -| `config.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.alertmanager.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.alertmanager.endpoint` | alertmanager endpoint on which falcosidekick posts alerts, choice is: `"/api/v1/alerts" or "/api/v2/alerts" , default is "/api/v1/alerts"` | `"/api/v1/alerts"` | -| `config.alertmanager.hostport` | AlertManager , if not `empty`, AlertManager is *enabled* | `""` | -| `config.alertmanager.expiresafter` | if set to a non-zero value, alert expires after that time in seconds (default: 0) | `"0"` | -| `config.alertmanager.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| 
`config.alertmanager.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.aws.cloudwatchlogs.loggroup` | AWS CloudWatch Logs Group name, if not empty, CloudWatch Logs output is *enabled* | `""` | -| `config.aws.cloudwatchlogs.logstream` | AWS CloudWatch Logs Stream name, if empty, Falcosidekick will try to create a log stream | `debug` | -| `config.aws.cloudwatchlogs.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.aws.kinesis.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.aws.kinesis.streamname` | AWS Kinesis Stream Name, if not empty, Kinesis output is *enabled* | `""` | -| `config.aws.lambda.functionname` | AWS Lambda Function Name, if not empty, AWS Lambda output is *enabled* | `""` | -| `config.aws.lambda.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.aws.accesskeyid` | AWS Access Key Id (optionnal if you use EC2 Instance Profile) | `""` | -| `config.aws.region` | AWS Region (optionnal if you use EC2 Instance Profile) | `""` | -| `config.aws.rolearn` | AWS IAM role ARN for falcosidekick service account to associate with (optionnal if you use EC2 Instance Profile) | `""` | -| `config.aws.s3.bucket` | AWS S3, bucket name | `""` | -| `config.aws.s3.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.aws.s3.prefix` | AWS S3, name of prefix, keys will have format: s3:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json | `""` | -| `config.aws.secretaccesskey` | AWS Secret 
Access Key (optionnal if you use EC2 Instance Profile) | `""` | -| `config.aws.sns.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.aws.sns.rawjson` | Send RawJSON from `falco` or parse it to AWS SNS | `false` | -| `config.aws.sns.topicarn` | AWS SNS TopicARN, if not empty, AWS SNS output is *enabled* | `""` | -| `config.aws.sqs.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.aws.sqs.url` | AWS SQS Queue URL, if not empty, AWS SQS output is *enabled* | `""` | -| `config.azure.eventHub.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.azure.eventHub.name` | Name of the Hub, if not empty, EventHub is *enabled* | `""` | -| `config.azure.eventHub.namespace` | Name of the space the Hub is in | `""` | -| `config.azure.podIdentityClientID` | Azure Identity Client ID | `""` | -| `config.azure.podIdentityName` | Azure Identity name | `""` | -| `config.azure.resourceGroupName` | Azure Resource Group name | `""` | -| `config.azure.subscriptionID` | Azure Subscription ID | `""` | -| `config.cliq.icon` | Cliq icon (avatar) | `""` | -| `config.cliq.message format` | a Go template to format Google Chat Text above Attachment, displayed in addition to the output from `cliq.outputformat`. If empty, no Text is displayed before sections. 
| `""` | -| `config.cliq.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.cliq.outputformat` | `all` (default), `text` (only text is displayed in Cliq), `fields` (only fields are displayed in Cliq) | `all` | -| `config.cliq.useemoji` | Prefix message text with an emoji | `true` | -| `config.cliq.webhookurl` | Zoho Cliq Channel URL (ex: ), if not empty, Cliq Chat output is *enabled* | `""` | -| `config.cloudevents.address` | CloudEvents consumer http address, if not empty, CloudEvents output is *enabled* | `""` | -| `config.cloudevents.extension` | Extensions to add in the outbound Event, useful for routing | `""` | -| `config.cloudevents.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.datadog.apikey` | Datadog API Key, if not `empty`, Datadog output is *enabled* | | -| `config.datadog.host` | Datadog host. Override if you are on the Datadog EU site. 
Defaults to american site with "" | `https://api.datadoghq.com` | -| `config.datadog.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.discord.icon` | Discord icon (avatar) | `` | -| `config.discord.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.discord.webhookurl` | Discord WebhookURL (ex: ...), if not empty, Discord output is *enabled* | `""` | -| `config.dogstatsd.forwarder` | The address for the DogStatsD forwarder, in the form , if not empty DogStatsD is *enabled* | `""` | -| `config.dogstatsd.namespace` | A prefix for all metrics | `falcosidekick` | -| `config.dogstatsd.tags` | A comma-separated list of tags to add to all metrics | `""` | -| `config.elasticsearch.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.elasticsearch.hostport` | Elasticsearch , if not `empty`, Elasticsearch is *enabled* | `""` | -| `config.elasticsearch.index` | Elasticsearch index | `falco` | -| `config.elasticsearch.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.elasticsearch.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.elasticsearch.password` | use this password to authenticate to Elasticsearch if the password is not empty | `""` | -| `config.elasticsearch.type` | Elasticsearch document type | `event` | -| `config.elasticsearch.username` | use this username to authenticate to Elasticsearch if the username is not empty | `""` | -| `config.fission.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.fission.function` | Name of Fission function, if not 
empty, Fission is *enabled* | `""` | -| `config.fission.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.fission.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.fission.routernamespace` | Namespace of Fission Router | `fission` | -| `config.fission.routerport` | Port of service of Fission Router | `80` | -| `config.fission.routerservice` | Service of Fission Router | `router` | -| `config.gcp.cloudfunctions.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `""` | -| `config.gcp.cloudfunctions.name` | The name of the Cloud Function which is in form `projects//locations//functions/` | `""` | -| `config.gcp.cloudrun.enpoint` | the URL of the Cloud Run function | `""` | -| `config.gcp.cloudrun.jwt` | JWT for the private access to Cloud Run function | `""` | -| `config.gcp.cloudrun.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `""` | -| `config.gcp.credentials` | Base64 encoded JSON key file for the GCP service account | `""` | -| `config.gcp.eventhub.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.gcp.pubsub.projectid` | ID of the GCP project | `""` | -| `config.gcp.pubsub.topic` | Name of the Pub/Sub topic | `""` | -| `config.gcp.storage.bucket` | The name of the bucket | `""` | -| `config.gcp.storage.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.gcp.storage.prefix` | Name 
of prefix, keys will have format: gs:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json | `""` | -| `config.googlechat.messageformat` | a Go template to format Google Chat Text above Attachment, displayed in addition to the output from `config.googlechat.outputformat`. If empty, no Text is displayed before Attachment | `""` | -| `config.googlechat.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.googlechat.outputformat` | `all` (default), `text` (only text is displayed in Google chat) | `all` | -| `config.googlechat.webhookurl` | Google Chat Webhook URL (ex: ), if not `empty`, Google Chat output is *enabled* | `""` | -| `config.grafana.allfieldsastags` | if true, all custom fields are added as tags | `false` | -| `config.grafana.apikey` | API Key to authenticate to Grafana, if not empty, Grafana output is *enabled* | `""` | -| `config.grafana.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.grafana.dashboardid` | annotations are scoped to a specific dashboard. Optionnal. | `""` | -| `config.grafana.hostport` | or ip}:{port}, if not empty, Grafana output is *enabled* | `""` | -| `config.grafana.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.grafana.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.grafana.panelid` | annotations are scoped to a specific panel. Optionnal. 
| `""` | -| `config.influxdb.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.influxdb.database` | Influxdb database | `falco` | -| `config.influxdb.hostport` | Influxdb , if not `empty`, Influxdb is *enabled* | `""` | -| `config.influxdb.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.influxdb.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.influxdb.password` | Password to use if auth is *enabled* in Influxdb | `""` | -| `config.influxdb.user` | User to use if auth is *enabled* in Influxdb | `""` | -| `config.kafka.hostport` | The Host:Port of the Kafka (ex: kafka:9092). if not empty, Kafka output is *enabled* | `""` | -| `config.kafka.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.kafka.partition` | a Go template to format Google Chat Text above Attachment, displayed in addition to the output from `config.googlechat.outputformat`. 
If empty, no Text is displayed before Attachment | `"0"` | -| `config.kafka.topic` | `all` (default), `text` (only text is displayed in Google chat) | `all` | -| `config.kafkarest.address` | The full URL to the topic (example "http://kafkarest:8082/topics/test") | `""` | -| `config.kafkarest.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.kafkarest.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.kafkarest.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.kafkarest.version` | Kafka Rest Proxy API version 2 | `2` | -| `config.kubeless.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.kubeless.function` | Name of Kubeless function, if not empty, EventHub is *enabled* | `""` | -| `config.kubeless.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.kubeless.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.kubeless.namespace` | Namespace of Kubeless function (mandatory) | `""` | -| `config.kubeless.port` | Port of service of Kubeless function. 
Default is `8080` | `8080` | -| `config.loki.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.loki.endpoint` | Loki endpoint URL path, default is "/api/prom/push" more info: | `""` | -| `config.loki.hostport` | Loki , if not `empty`, Loki is *enabled* | `""` | -| `config.loki.extralabels` | comma separated list of fields to use as labels additionally to rule, source, priority, tags and custom_fields | `""` | -| `config.loki.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.loki.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.loki.tenant` | Loki tenant, if not `empty`, Loki tenant is *enabled* | `""` | -| `config.prometheus.extralabels` | comma separated list of fields to use as labels additionally to rule, source, priority, tags and custom_fields | `""` | -| `config.mattermost.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.mattermost.footer` | Mattermost Footer | `` | -| `config.mattermost.icon` | Mattermost icon (avatar) | `` | -| `config.mattermost.messageformat` | a Go template to format Mattermost Text above Attachment, displayed in addition to the output from `slack.outputformat`. 
If empty, no Text is displayed before Attachment | `""` | -| `config.mattermost.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.mattermost.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.mattermost.outputformat` | `all` (default), `text` (only text is displayed in Slack), `fields` (only fields are displayed in Mattermost) | `all` | -| `config.mattermost.username` | Mattermost username | `falcosidekick` | -| `config.mattermost.webhookurl` | Mattermost Webhook URL (ex: ), if not `empty`, Mattermost output is *enabled* | `""` | -| `config.mutualtlsfilespath` | folder which will used to store client.crt, client.key and ca.crt files for mutual tls | `/etc/certs` | -| `config.nats.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.nats.hostport` | NATS "nats://host:port", if not `empty`, NATS is *enabled* | `""` | -| `config.nats.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.nats.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.openfaas.checkcert` | check if ssl certificate of the output is valid | `true` | | `openfaas` -| `config.openfaas.functionname` | Name of OpenFaaS function, if not empty, OpenFaaS is *enabled* | `""` | -| `config.openfaas.functionnamespace` | Namespace of OpenFaaS function, "openfaas-fn" (default) | `openfaas-fn` | -| `config.openfaas.gatewaynamespace` | Namespace of OpenFaaS Gateway, "openfaas" (default) | `openfaas` | -| `config.openfaas.gatewayport` | Port of service of OpenFaaS Gateway Default is `8080` | `8080` | -| `config.openfaas.gatewayservice` | Service of OpenFaaS Gateway, "gateway" (default) | `gateway` | 
-| `config.openfaas.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.openfaas.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | | `openfaas` -| `config.opsgenie.apikey` | Opsgenie API Key, if not empty, Opsgenie output is *enabled* | `""` | -| `config.opsgenie.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.opsgenie.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.opsgenie.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.opsgenie.region` | (`us` or `eu`) region of your domain | `us` | -| `config.pagerduty.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.pagerduty.routingkey` | Pagerduty Routing Key, if not empty, Pagerduty output is *enabled* | `""` | -| `config.policyreport.enabled` | if true; policyreport output is *enabled* | `false` | -| `config.policyreport.kubeconfig` | Kubeconfig file to use (only if falcosidekick is running outside the cluster) | `~/.kube/config` | -| `config.policyreport.maxevents` | the max number of events that can be in a policyreport | `1000` | -| `config.policyreport.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.policyreport.prunebypriority` | if true; the events with lowest severity are pruned first, in FIFO order | `false` | -| `config.rabbitmq.minimumpriority` | minimum priority of event for using use this output, order is 
`emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.rabbitmq.queue` | Rabbitmq Queue name | `""` | -| `config.rabbitmq.url` | Rabbitmq URL, if not empty, Rabbitmq output is *enabled* | `""` | -| `config.rockerchat.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.rockerchat.messageformat` | a Go template to format Rocketchat Text above Attachment, displayed in addition to the output from `slack.outputformat`. If empty, no Text is displayed before Attachment | `""` | -| `config.rockerchat.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.rocketchat.icon` | Rocketchat icon (avatar) | `` | -| `config.rocketchat.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.rocketchat.outputformat` | `all` (default), `text` (only text is displayed in Rocketcaht), `fields` (only fields are displayed in Rocketchat) | `all` | -| `config.rocketchat.username` | Rocketchat username | `falcosidekick` | -| `config.rocketchat.webhookurl` | Rocketchat Webhook URL (ex: ), if not `empty`, Rocketchat output is *enabled* | `""` | -| `config.slack.footer` | Slack Footer | `` | -| `config.slack.icon` | Slack icon (avatar) | `` | -| `config.slack.messageformat` | a Go template to format Slack Text above Attachment, displayed in addition to the output from `slack.outputformat`. 
If empty, no Text is displayed before Attachment | `""` | -| `config.slack.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.slack.outputformat` | `all` (default), `text` (only text is displayed in Slack), `fields` (only fields are displayed in Slack) | `all` | -| `config.slack.username` | Slack username | `falcosidekick` | -| `config.slack.webhookurl` | Slack Webhook URL (ex: ), if not `empty`, Slack output is *enabled* | `""` | -| `config.smtp.from` | Sender address (mandatory if SMTP output is *enabled*) | `""` | -| `config.smtp.hostport` | "host:port" address of SMTP server, if not empty, SMTP output is *enabled* | `""` | -| `config.smtp.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.smtp.outputformat` | html, text | `html` | -| `config.smtp.password` | password to access SMTP server | `""` | -| `config.smtp.to` | comma-separated list of Recipident addresses, can't be empty (mandatory if SMTP output is *enabled*) | `""` | -| `config.smtp.user` | user to access SMTP server | `""` | -| `config.stan.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.stan.clientid` | Client ID, if not empty, STAN output is *enabled* | `""` | -| `config.stan.clusterid` | Cluster name, if not empty, STAN output is *enabled* | `debug` | -| `config.stan.hostport` | Stan nats://{domain or ip}:{port}, if not empty, STAN output is *enabled* | `""` | -| `config.stan.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.stan.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `false` | -| `config.statsd.forwarder` | The 
address for the StatsD forwarder, in the form , if not empty StatsD is *enabled* | `""` | -| `config.statsd.namespace` | A prefix for all metrics | `falcosidekick` | -| `config.syslog.host` | Syslog Host, if not empty, Syslog output is *enabled* | `""` | -| `config.syslog.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.syslog.port` | Syslog endpoint port number | `""` | -| `config.syslog.protocol` | Syslog transport protocol. It can be either "tcp" or "udp" | `tcp` | -| `config.teams.activityimage` | Teams section image | `` | -| `config.teams.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.teams.outputformat` | `all` (default), `text` (only text is displayed in Teams), `facts` (only facts are displayed in Teams) | `all` | -| `config.teams.webhookurl` | Teams Webhook URL (ex: "), if not `empty`, Teams output is *enabled* | `""` | -| `config.wavefront.batchsize` | Wavefront batch size. If empty uses the default 10000. Only used when endpointtype is 'direct' | `10000` | -| `config.wavefront.endpointhost` | Wavefront endpoint address (only the host). If not empty, with endpointhost, Wavefront output is *enabled* | `""` | -| `config.wavefront.endpointmetricport` | Port to send metrics. Only used when endpointtype is 'proxy' | `2878` | -| `config.wavefront.endpointtoken` | Wavefront token. Must be used only when endpointtype is 'direct' | `""` | -| `config.wavefront.endpointtype` | Wavefront endpoint type, must be 'direct' or 'proxy'. If not empty, with endpointhost, Wavefront output is *enabled* | `""` | -| `config.wavefront.flushintervalseconds` | Wavefront flush interval in seconds. Defaults to 1 | `1` | -| `config.wavefront.metricname` | Metric to be created in Wavefront. 
Defaults to falco.alert | `falco.alert` | -| `config.wavefront.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.webhook.address` | Webhook address, if not empty, Webhook output is *enabled* | `""` | -| `config.webhook.checkcert` | check if ssl certificate of the output is valid | `true` | -| `config.webhook.customHeaders` | a list of comma separated custom headers to add, syntax is "key:value\,key:value" | `""` | -| `config.webhook.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.webhook.mutualtls` | if true, checkcert flag will be ignored (server cert will always be checked) | `""` | -| `config.yandex.accesskeyid` | yandex access key | `""` | -| `config.yandex.region` | yandex storage region | `ru-central-1` | -| `config.yandex.s3.bucket` | Yandex storage, bucket name | `falcosidekick` | -| `config.yandex.s3.endpoint` | yandex storage endpoint (default: ) | `""` | -| `config.yandex.s3.minimumpriority` | minimum priority of event for using use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | `debug` | -| `config.yandex.s3.prefix` | name of prefix, keys will have format: s3:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json | `""` | -| `config.yandex.secretaccesskey` | yandex secret access key | `""` | -| `affinity` | Affinity for the Sidekick pods | `{}` | -| `extraVolumeMounts` | Extra volume mounts for sidekick deployment | `[]` | -| `extraVolumes` | Extra volumes for sidekick deployment | `[]` | -| `fullnameOverride` | Override the name | `""` | -| `image.pullPolicy` | The image pull policy | `IfNotPresent` | -| `image.registry` | The image registry to pull from | `docker.io` | -| `image.repository` | The image repository to pull from | 
`falcosecurity/falcosidekick` | -| `image.tag` | The image tag to pull | `2.23.1` | -| `imagePullSecrets` | Secrets for the registry | `[]` | -| `ingress.enabled` | Whether to create the ingress | `false` | -| `ingress.annotations` | Ingress annotations | `{}` | -| `ingress.hosts` | Ingress hosts | `- host: falcosidekick.local`
    `paths:`
    `- path: /` | -| `ingress.tls` | Ingress TLS configuration | `[]` | -| `nameOverride` | Override name | `""` | -| `nodeSelector` | Sidekick nodeSelector field | `{}` | -| `podAnnotations` | additions annotations on the pods | `{}` | -| `podLabels` | additions labels on the pods | `{}` | -| `podSecurityContext` | Sidekick pod securityContext | `fsGroup:1234`
`runAsUser:1234` | -| `podSecurityPolicy.create` | Whether to create a podSecurityPolicy | `false` | -| `priorityClassName` | Name of the priority class to be used by the Sidekickpods, priority class needs to be created beforehand | `""` | -| `replicaCount` | number of running pods | `1` | -| `resources` | the resources for falcosdekick pods | `2801` | -| `securityContext` | Sidekick container securityContext | `{}` | -| `service.annotations` | Service annotations | `{}` | -| `service.port` | Service port | `2801` | -| `service.type` | Service type | `"ClusterIP"` | -| `tolerations` | Tolerations for pod assignment | `[]` | -| `webui.affinity` | Affinity for the Web UI pods | `{}` | -| `webui.enabled` | enable Falcosidekick-UI | `false` | -| `webui.image.pullPolicy` | The web UI image pull policy | `IfNotPresent` | -| `webui.image.registry` | The web UI image registry to pull from | `docker.io` | -| `webui.image.repository` | The web UI image repository to pull from | `falcosecurity/falcosidekick-ui` | -| `webui.image.tag` | The web UI image tag to pull | `v2.0.2` | -| `webui.ingress.annotations` | Web UI ingress annotations | `{}` | -| `webui.ingress.enabled` | Whether to create the Web UI ingress | `false` | -| `webui.ingress.hosts` | Web UI ingress hosts configuration | `- host: falcosidekick.local`
    `paths:`
    `- path: /` | -| `webui.ingress.tls` | Web UI ingress TLS configuration | `[]` | -| `webui.nodeSelector` | Web UI nodeSelector field | `{}` | -| `webui.podAnnotations` | additions annotations on the pods web UI | `{}` | -| `webui.podLabels` | additions labels on the pods web UI | `{}` | -| `webui.podSecurityContext` | Web UI pod securityContext | `fsGroup: 1234`
`runAsUser:1234` | -| `webui.priorityClassName` | Name of the priority class to be used by the Web UI pods, priority class needs to be created beforehand | `""` | -| `webui.replicaCount` | number of running pods | `1` | -| `webui.resources` | The resources for the web UI pods | `{}` | -| `webui.securityContext` | Web UI container securityContext | `{}` | -| `webui.service.annotations` | The web UI service annotations (use this to set a internal LB, for example.) | `{}` | -| `webui.service.nodePort` | The web UI service nodePort | `30282` | -| `webui.service.port` | The web UI service port dor the falcosidekick-ui | `2802` | -| `webui.service.targetPort` | The web UI service targetPort | `2802` | -| `webui.service.type` | The web UI service type (i. e: LoadBalancer) | `ClusterIP` | -| `webui.tolerations` | Tolerations for pod assignment | `[]` | -| `webui.redis.affinity` | Affinity for the Web UI Redis pods | `{}` | -| `webui.redis.image.pullPolicy` | The web UI image pull policy | `IfNotPresent` | -| `webui.redis.image.registry` | The web UI image registry to pull from | `docker.io` | -| `webui.redis.image.repository` | The web UI image repository to pull from | `falcosecurity/falcosidekick-ui` | -| `webui.redis.image.tag` | The web UI image tag to pull | `"2.2.4"` | -| `webui.redis.nodeSelector` | Web UI Redis nodeSelector field | `{}` | -| `webui.redis.podAnnotations` | additions annotations on the pods | `{}` | -| `webui.redis.podLabels` | additions labels on the pods | `{}` | -| `webui.redis.podSecurityContext` | Web UI Redis pod securityContext | `{}` | -| `webui.redis.priorityClassName` | Name of the priority class to be used by the Web UI Redis pods, priority class needs to be created beforehand | `""` | -| `webui.redis.resources` | The resources for the redis pod | `{}` | -| `webui.redis.securityContext` | Web UI Redis container securityContext | `{}` | -| `webui.redis.service.annotations` | The web UI service annotations (use this to set a internal LB, for 
example.) | `{}` |
-| `webui.redis.service.port` | The web UI service port dor the falcosidekick-ui | `6379` |
-| `webui.redis.service.targetPort` | The web UI service targetPort | `6379` |
-| `webui.redis.service.type` | The web UI service type (i. e: LoadBalancer) | `ClusterIP` |
-| `webui.redis.storageClass` | Storage class of the PVC for the redis pod | `""` |
-| `webui.redis.storageSize` | Size of the PVC for the redis pod | `1Gi` |
-| `webui.redis.tolerations` | Tolerations for pod assignment | `[]` |
-
-
-Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
-
+## Values
+
+| Key | Type | Default | Description |
+|-----|------|---------|-------------|
+| affinity | object | `{}` | Affinity for the Sidekick pods |
+| config.alertmanager.checkcert | bool | `true` | check if ssl certificate of the output is valid |
+| config.alertmanager.customseveritymap | string | `""` | comma separated list of tuple composed of a ':' separated Falco priority and Alertmanager severity that is used to override the severity label associated to the priority level of falco event. Example: debug:value_1,critical:value_2. Default mapping: emergency:critical,alert:critical,critical:critical,error:warning,warning:warning,notice:information,informational:information,debug:information. |
+| config.alertmanager.dropeventdefaultpriority | string | `"critical"` | default priority of dropped events, values are `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug` |
+| config.alertmanager.dropeventthresholds | string | `"10000:critical, 1000:critical, 100:critical, 10:warning, 1:warning"` | comma separated list of priority re-evaluation thresholds of dropped events composed of a ':' separated integer threshold and string priority. 
Example: `10000:critical, 100:warning, 1:informational` |
+| config.alertmanager.endpoint | string | `"/api/v1/alerts"` | alertmanager endpoint on which falcosidekick posts alerts, choice is: `"/api/v1/alerts" or "/api/v2/alerts", default is "/api/v1/alerts"` |
+| config.alertmanager.expireafter | string | `""` | if set to a non-zero value, alert expires after that time in seconds (default: 0) |
+| config.alertmanager.extraannotations | string | `""` | comma separated list of annotations composed of a ':' separated name and value that is added to the Alerts. Example: my_annotation_1:my_value_1, my_annotation_1:my_value_2 |
+| config.alertmanager.extralabels | string | `""` | comma separated list of labels composed of a ':' separated name and value that is added to the Alerts. Example: my_label_1:my_value_1, my_label_1:my_value_2 |
+| config.alertmanager.hostport | string | `""` | AlertManager , if not `empty`, AlertManager is *enabled* |
+| config.alertmanager.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.alertmanager.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
+| config.aws.accesskeyid | string | `""` | AWS Access Key Id (optional if you use EC2 Instance Profile) |
+| config.aws.checkidentity | bool | `true` | check the identity credentials, set to false for local development |
+| config.aws.cloudwatchlogs.loggroup | string | `""` | AWS CloudWatch Logs Group name, if not empty, CloudWatch Logs output is *enabled* |
+| config.aws.cloudwatchlogs.logstream | string | `""` | AWS CloudWatch Logs Stream name, if empty, Falcosidekick will try to create a log stream |
+| config.aws.cloudwatchlogs.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| 
config.aws.externalid | string | `""` | External id for the role to assume (optional if you use EC2 Instance Profile) |
+| config.aws.kinesis.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.aws.kinesis.streamname | string | `""` | AWS Kinesis Stream Name, if not empty, Kinesis output is *enabled* |
+| config.aws.lambda.functionname | string | `""` | AWS Lambda Function Name, if not empty, AWS Lambda output is *enabled* |
+| config.aws.lambda.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.aws.region | string | `""` | AWS Region (optional if you use EC2 Instance Profile) |
+| config.aws.rolearn | string | `""` | AWS IAM role ARN for falcosidekick service account to associate with (optional if you use EC2 Instance Profile) |
+| config.aws.s3.bucket | string | `""` | AWS S3, bucket name |
+| config.aws.s3.endpoint | string | `""` | Endpoint URL that overrides the default generated endpoint, use this for S3 compatible APIs |
+| config.aws.s3.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.aws.s3.objectcannedacl | string | `"bucket-owner-full-control"` | Canned ACL (x-amz-acl) to use when creating the object |
+| config.aws.s3.prefix | string | `""` | AWS S3, name of prefix, keys will have format: s3:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json |
+| config.aws.secretaccesskey | string | `""` | AWS Secret Access Key (optional if you use EC2 Instance Profile) |
+| config.aws.securitylake.accountid | string | `""` | Account ID |
+| config.aws.securitylake.batchsize | int | `1000` | Max number of events per parquet file |
+| config.aws.securitylake.bucket | string | `""` | 
Bucket for AWS SecurityLake data, if not empty, AWS SecurityLake output is enabled | +| config.aws.securitylake.interval | int | `5` | Time in minutes between two puts to S3 (must be between 5 and 60min) | +| config.aws.securitylake.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.aws.securitylake.prefix | string | `""` | Prefix for keys | +| config.aws.securitylake.region | string | `""` | Bucket Region | +| config.aws.sns.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.aws.sns.rawjson | bool | `false` | Send RawJSON from `falco` or parse it to AWS SNS | +| config.aws.sns.topicarn | string | `""` | AWS SNS TopicARN, if not empty, AWS SNS output is *enabled* | +| config.aws.sqs.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.aws.sqs.url | string | `""` | AWS SQS Queue URL, if not empty, AWS SQS output is *enabled* | +| config.aws.useirsa | bool | `true` | Use IRSA, if true, the rolearn value will be used to set the ServiceAccount annotations and not the env var | +| config.azure.eventHub.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.azure.eventHub.name | string | `""` | Name of the Hub, if not empty, EventHub is *enabled* | +| config.azure.eventHub.namespace | string | `""` | Name of the space the Hub is in | +| config.azure.podIdentityClientID | string | `""` | Azure Identity Client ID | +| config.azure.podIdentityName | string | `""` | Azure Identity name | +| config.azure.resourceGroupName | string | `""` | Azure Resource Group name | 
+| config.azure.subscriptionID | string | `""` | Azure Subscription ID |
+| config.bracketreplacer | string | `""` | if not empty, the brackets in keys of Output Fields are replaced |
+| config.cliq.icon | string | `""` | Cliq icon (avatar) |
+| config.cliq.messageformat | string | `""` | a Go template to format Cliq Text above Attachment, displayed in addition to the output from `cliq.outputformat`. If empty, no Text is displayed before sections. |
+| config.cliq.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.cliq.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Cliq), `fields` (only fields are displayed in Cliq) |
+| config.cliq.useemoji | bool | `true` | Prefix message text with an emoji |
+| config.cliq.webhookurl | string | `""` | Zoho Cliq Channel URL (ex: ), if not empty, Cliq Chat output is *enabled* |
+| config.cloudevents.address | string | `""` | CloudEvents consumer http address, if not empty, CloudEvents output is *enabled* |
+| config.cloudevents.extension | string | `""` | Extensions to add in the outbound Event, useful for routing |
+| config.cloudevents.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.customfields | string | `""` | a list of escaped comma separated custom fields to add to falco events, syntax is "key:value\,key:value" |
+| config.datadog.apikey | string | `""` | Datadog API Key, if not `empty`, Datadog output is *enabled* |
+| config.datadog.host | string | `""` | Datadog host. Override if you are on the Datadog EU site. 
Defaults to American site with "" |
+| config.datadog.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.debug | bool | `false` | DEBUG environment variable |
+| config.discord.icon | string | `""` | Discord icon (avatar) |
+| config.discord.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.discord.webhookurl | string | `""` | Discord WebhookURL (ex: ...), if not empty, Discord output is *enabled* |
+| config.dogstatsd.forwarder | string | `""` | The address for the DogStatsD forwarder, in the form , if not empty DogStatsD is *enabled* |
+| config.dogstatsd.namespace | string | `"falcosidekick."` | A prefix for all metrics |
+| config.dogstatsd.tags | string | `""` | A comma-separated list of tags to add to all metrics |
+| config.dynatrace.apitoken | string | `""` | Dynatrace API token with the "logs.ingest" scope, more info: https://dt-url.net/8543sda, if not empty, Dynatrace output is enabled |
+| config.dynatrace.apiurl | string | `""` | Dynatrace API url, use https://ENVIRONMENTID.live.dynatrace.com/api for Dynatrace SaaS and https://YOURDOMAIN/e/ENVIRONMENTID/api for Dynatrace Managed, more info: https://dt-url.net/ej43qge |
+| config.dynatrace.checkcert | bool | `true` | check if ssl certificate of the output is valid |
+| config.dynatrace.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.elasticsearch.checkcert | bool | `true` | check if ssl certificate of the output is valid |
+| config.elasticsearch.createindextemplate | bool | `false` | Create an index template (default: false) |
+| config.elasticsearch.customheaders | string | `""` | a list of comma separated custom 
headers to add, syntax is "key:value,key:value" | +| config.elasticsearch.flattenfields | bool | `false` | Replace . by _ to avoid mapping conflicts, force to true if createindextemplate==true (default: false) | +| config.elasticsearch.hostport | string | `""` | Elasticsearch , if not `empty`, Elasticsearch is *enabled* | +| config.elasticsearch.index | string | `"falco"` | Elasticsearch index | +| config.elasticsearch.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.elasticsearch.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.elasticsearch.numberofreplicas | int | `3` | Number of replicas set by the index template (default: 3) | +| config.elasticsearch.numberofshards | int | `3` | Number of shards set by the index template (default: 3) | +| config.elasticsearch.password | string | `""` | use this password to authenticate to Elasticsearch if the password is not empty | +| config.elasticsearch.suffix | string | `"daily"` | | +| config.elasticsearch.type | string | `"_doc"` | Elasticsearch document type | +| config.elasticsearch.username | string | `""` | use this username to authenticate to Elasticsearch if the username is not empty | +| config.existingSecret | string | `""` | Existing secret with configuration | +| config.extraArgs | list | `[]` | Extra command-line arguments | +| config.extraEnv | list | `[]` | Extra environment variables | +| config.fission.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.fission.function | string | `""` | Name of Fission function, if not empty, Fission is enabled | +| config.fission.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.fission.mutualtls | bool | 
`false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.fission.routernamespace | string | `"fission"` | Namespace of Fission Router, "fission" (default) | +| config.fission.routerport | int | `80` | Port of service of Fission Router | +| config.fission.routerservice | string | `"router"` | Service of Fission Router, "router" (default) | +| config.gcp.cloudfunctions.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.gcp.cloudfunctions.name | string | `""` | The name of the Cloud Function which is in form `projects//locations//functions/` | +| config.gcp.cloudrun.endpoint | string | `""` | the URL of the Cloud Run function | +| config.gcp.cloudrun.jwt | string | `""` | JWT for the private access to Cloud Run function | +| config.gcp.cloudrun.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.gcp.credentials | string | `""` | Base64 encoded JSON key file for the GCP service account | +| config.gcp.pubsub.customattributes | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" | +| config.gcp.pubsub.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.gcp.pubsub.projectid | string | `""` | The GCP Project ID containing the Pub/Sub Topic | +| config.gcp.pubsub.topic | string | `""` | Name of the Pub/Sub topic | +| config.gcp.storage.bucket | string | `""` | The name of the bucket | +| config.gcp.storage.minimumpriority | string | `"debug"` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| 
config.gcp.storage.prefix | string | `""` | Name of prefix, keys will have format: gs:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json |
+| config.googlechat.messageformat | string | `""` | a Go template to format Google Chat Text above Attachment, displayed in addition to the output from `config.googlechat.outputformat`. If empty, no Text is displayed before Attachment |
+| config.googlechat.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.googlechat.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Google chat) |
+| config.googlechat.webhookurl | string | `""` | Google Chat Webhook URL (ex: ), if not `empty`, Google Chat output is *enabled* |
+| config.gotify.checkcert | bool | `true` | check if ssl certificate of the output is valid |
+| config.gotify.format | string | `"markdown"` | Format of the messages (plaintext, markdown, json) |
+| config.gotify.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, Gotify output is enabled |
+| config.gotify.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.gotify.token | string | `""` | API Token |
+| config.grafana.allfieldsastags | bool | `false` | if true, all custom fields are added as tags (default: false) |
+| config.grafana.apikey | string | `""` | API Key to authenticate to Grafana, if not empty, Grafana output is *enabled* |
+| config.grafana.checkcert | bool | `true` | check if ssl certificate of the output is valid |
+| config.grafana.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" |
+| config.grafana.dashboardid | string | `""` | annotations are scoped to a specific dashboard. Optional. 
|
+| config.grafana.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, Grafana output is *enabled* |
+| config.grafana.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.grafana.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
+| config.grafana.panelid | string | `""` | annotations are scoped to a specific panel. Optional. |
+| config.grafanaoncall.checkcert | bool | `true` | check if ssl certificate of the output is valid |
+| config.grafanaoncall.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" |
+| config.grafanaoncall.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.grafanaoncall.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
+| config.grafanaoncall.webhookurl | string | `""` | if not empty, Grafana OnCall output is enabled |
+| config.influxdb.checkcert | bool | `true` | check if ssl certificate of the output is valid |
+| config.influxdb.database | string | `"falco"` | Influxdb database |
+| config.influxdb.hostport | string | `""` | Influxdb , if not `empty`, Influxdb is *enabled* |
+| config.influxdb.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.influxdb.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) |
+| config.influxdb.organization | string | `""` | Influxdb organization |
+| config.influxdb.password | string | `""` | Password to use if auth is *enabled* in Influxdb |
+| config.influxdb.precision | string | `"ns"` | 
write precision |
+| config.influxdb.token | string | `""` | API token to use if auth is enabled in Influxdb (disables user and password) |
+| config.influxdb.user | string | `""` | User to use if auth is *enabled* in Influxdb |
+| config.kafka.async | bool | `false` | produce messages without blocking |
+| config.kafka.balancer | string | `"round_robin"` | partition balancing strategy when producing |
+| config.kafka.clientid | string | `""` | specify a client.id when communicating with the broker for tracing |
+| config.kafka.compression | string | `"NONE"` | enable message compression using this algorithm, no compression (GZIP\|SNAPPY\|LZ4\|ZSTD\|NONE) |
+| config.kafka.hostport | string | `""` | comma separated list of Apache Kafka bootstrap nodes for establishing the initial connection to the cluster (ex: localhost:9092,localhost:9093). Defaults to port 9092 if no port is specified after the domain, if not empty, Kafka output is *enabled* |
+| config.kafka.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` |
+| config.kafka.password | string | `""` | use this password to authenticate to Kafka via SASL |
+| config.kafka.requiredacks | string | `"NONE"` | number of acknowledges from partition replicas required before receiving |
+| config.kafka.sasl | string | `""` | SASL authentication mechanism, if empty, no authentication (PLAIN\|SCRAM_SHA256\|SCRAM_SHA512) |
+| config.kafka.tls | bool | `false` | Use TLS for the connections |
+| config.kafka.topic | string | `""` | Name of the topic, if not empty, Kafka output is enabled |
+| config.kafka.topiccreation | bool | `false` | auto create the topic if it doesn't exist |
+| config.kafka.username | string | `""` | use this username to authenticate to Kafka via SASL |
+| config.kafkarest.address | string | `""` | The full URL to the topic (example "http://kafkarest:8082/topics/test") |
+| 
config.kafkarest.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.kafkarest.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.kafkarest.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.kafkarest.version | int | `2` | Kafka Rest Proxy API version 2|1 (default: 2) | +| config.kubeless.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.kubeless.function | string | `""` | Name of Kubeless function, if not empty, EventHub is *enabled* | +| config.kubeless.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.kubeless.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.kubeless.namespace | string | `""` | Namespace of Kubeless function (mandatory) | +| config.kubeless.port | int | `8080` | Port of service of Kubeless function. 
Default is `8080` | +| config.loki.apikey | string | `""` | API Key for Grafana Logs | +| config.loki.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.loki.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" | +| config.loki.endpoint | string | `"/loki/api/v1/push"` | Loki endpoint URL path, more info: | +| config.loki.extralabels | string | `""` | comma separated list of fields to use as labels additionally to rule, source, priority, tags and custom_fields | +| config.loki.hostport | string | `""` | Loki , if not `empty`, Loki is *enabled* | +| config.loki.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.loki.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.loki.tenant | string | `""` | Loki tenant, if not `empty`, Loki tenant is *enabled* | +| config.loki.user | string | `""` | user for Grafana Logs | +| config.mattermost.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.mattermost.footer | string | `""` | Mattermost Footer | +| config.mattermost.icon | string | `""` | Mattermost icon (avatar) | +| config.mattermost.messageformat | string | `""` | a Go template to format Mattermost Text above Attachment, displayed in addition to the output from `slack.outputformat`. 
If empty, no Text is displayed before Attachment | +| config.mattermost.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.mattermost.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.mattermost.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Slack), `fields` (only fields are displayed in Mattermost) | +| config.mattermost.username | string | `""` | Mattermost username | +| config.mattermost.webhookurl | string | `""` | Mattermost Webhook URL (ex: ), if not `empty`, Mattermost output is *enabled* | +| config.mqtt.broker | string | `""` | Broker address, can start with tcp:// or ssl://, if not empty, MQTT output is enabled | +| config.mqtt.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.mqtt.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.mqtt.password | string | `""` | Password if the authentication is enabled in the broker | +| config.mqtt.qos | int | `0` | QOS for messages | +| config.mqtt.retained | bool | `false` | If true, messages are retained | +| config.mqtt.topic | string | `"falco/events"` | Topic for messages | +| config.mqtt.user | string | `""` | User if the authentication is enabled in the broker | +| config.mutualtlsclient.cacertfile | string | `""` | CA certification file for server certification for mutual TLS authentication, takes priority over mutualtlsfilespath if not empty | +| config.mutualtlsclient.certfile | string | `""` | client certification file for mutual TLS client certification, takes priority over mutualtlsfilespath if not empty | +| config.mutualtlsclient.keyfile | string | `""` | client key file for mutual TLS client 
certification, takes priority over mutualtlsfilespath if not empty | +| config.mutualtlsfilespath | string | `"/etc/certs"` | folder which will used to store client.crt, client.key and ca.crt files for mutual tls for outputs, will be deprecated in the future (default: "/etc/certs") | +| config.n8n.address | string | `""` | N8N address, if not empty, N8N output is enabled | +| config.n8n.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.n8n.headerauthname | string | `""` | Header Auth Key to authenticate with N8N | +| config.n8n.headerauthvalue | string | `""` | Header Auth Value to authenticate with N8N | +| config.n8n.minimumpriority | string | `""` | minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" | +| config.n8n.password | string | `""` | Password to authenticate with N8N in basic auth | +| config.n8n.user | string | `""` | Username to authenticate with N8N in basic auth | +| config.nats.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.nats.hostport | string | `""` | NATS "nats://host:port", if not `empty`, NATS is *enabled* | +| config.nats.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.nats.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.nodered.address | string | `""` | Node-RED address, if not empty, Node-RED output is enabled | +| config.nodered.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.nodered.customheaders | string | `""` | Custom headers to add in POST, useful for Authentication, syntax is "key:value\,key:value" | +| config.nodered.minimumpriority | string | `""` | minimum priority of event to use this output, order is 
`emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.nodered.password | string | `""` | Password if Basic Auth is enabled for 'http in' node in Node-RED | +| config.nodered.user | string | `""` | User if Basic Auth is enabled for 'http in' node in Node-RED | +| config.openfaas.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.openfaas.functionname | string | `""` | Name of OpenFaaS function, if not empty, OpenFaaS is *enabled* | +| config.openfaas.functionnamespace | string | `"openfaas-fn"` | Namespace of OpenFaaS function, "openfaas-fn" (default) | +| config.openfaas.gatewaynamespace | string | `"openfaas"` | Namespace of OpenFaaS Gateway, "openfaas" (default) | +| config.openfaas.gatewayport | int | `8080` | Port of service of OpenFaaS Gateway. Default is `8080` | +| config.openfaas.gatewayservice | string | `"gateway"` | Service of OpenFaaS Gateway, "gateway" (default) | +| config.openfaas.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.openfaas.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.openobserve.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.openobserve.customheaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" | +| config.openobserve.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, OpenObserve output is enabled | +| config.openobserve.minimumpriority | string | `""` | minimum priority of event for using this output, order is emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or "" | +| config.openobserve.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.openobserve.organizationname | string | `"default"` | Organization name | +| config.openobserve.password | string | `""` | use this password to authenticate to OpenObserve if the password is not empty | +| config.openobserve.streamname | string | `"falco"` | Stream name | +| config.openobserve.username | string | `""` | use this username to authenticate to OpenObserve if the username is not empty | +| config.opsgenie.apikey | string | `""` | Opsgenie API Key, if not empty, Opsgenie output is *enabled* | +| config.opsgenie.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.opsgenie.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.opsgenie.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.opsgenie.region | string | `""` | region of your domain, `us` or `eu` | +| config.otlp.traces.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.otlp.traces.duration | int | `1000` | Artificial span duration in milliseconds (default: 1000) | +| config.otlp.traces.endpoint | string | `""` | OTLP endpoint in the form of http://{domain or ip}:4318/v1/traces, if not empty, OTLP Traces output is enabled | +| config.otlp.traces.extraenvvars | object | `{}` | Extra env vars (override the other settings) | +| config.otlp.traces.headers | string | `""` | OTLP headers: list of headers to apply to all outgoing traces in the form of "some-key=some-value,other-key=other-value" (default: "") | +| config.otlp.traces.minimumpriority | string | `""` | minimum priority of event for using this output, order is emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or "" | +| config.otlp.traces.protocol | string | `""` | OTLP protocol http/json, http/protobuf, grpc (default: "" which uses SDK default: http/json) | +| config.otlp.traces.synced | bool | `false` | Set to true if you want traces to be sent synchronously (default: false) | +| config.otlp.traces.timeout | string | `""` | OTLP timeout: timeout value in milliseconds (default: "" which uses SDK default: 10000) | +| config.outputFieldFormat | string | `""` | | +| config.pagerduty.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.pagerduty.region | string | `"us"` | Pagerduty Region, can be 'us' or 'eu' | +| config.pagerduty.routingkey | string | `""` | Pagerduty Routing Key, if not empty, Pagerduty output is *enabled* | +| config.policyreport.enabled | bool | `false` | if true, policyreport output is *enabled* | +| config.policyreport.kubeconfig | string | `"~/.kube/config"` | Kubeconfig file to use (only if falcosidekick is running outside the cluster) | +| config.policyreport.maxevents | int | `1000` | the max number of events that can be in a policyreport | +| config.policyreport.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.policyreport.prunebypriority | bool | `false` | if true, the events with lowest severity are pruned first, in FIFO order | +| config.prometheus.extralabels | string | `""` | comma separated list of fields to use as labels additionally to rule, source, priority, tags and custom_fields | +| config.quickwit.apiendpoint | string | `"/api/v1"` | API endpoint (containing the API version, overridable in case of Quickwit behind a reverse proxy with URL rewriting) | +| config.quickwit.autocreateindex | bool | `false` | Autocreate a falco index mapping if it doesn't exist | +| config.quickwit.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.quickwit.customHeaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value,key:value" | +| config.quickwit.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, Quickwit output is enabled | +| config.quickwit.index | string | `"falco"` | Index | +| config.quickwit.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.quickwit.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.quickwit.version | string | `"0.7"` | Version of Quickwit | +| config.rabbitmq.minimumpriority | string | `"debug"` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.rabbitmq.queue | string | `""` | Rabbitmq Queue name | +| config.rabbitmq.url | string | `""` | Rabbitmq URL, if not empty, Rabbitmq output is *enabled* | +| config.redis.address | string | `""` | Redis address, if not empty, Redis output is enabled | +| config.redis.database | int | `0` | Redis database number | +| config.redis.key | string | `"falco"` | Redis storage key name for hashmap, list | +| config.redis.minimumpriority | string | `""` | minimum priority of event for using this output, order is emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or "" | +| config.redis.password | string | `""` | Password to authenticate with Redis | +| config.redis.storagetype | string | `"list"` | Redis storage type: hashmap or list | +| config.rocketchat.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.rocketchat.icon | string | `""` | Rocketchat icon (avatar) | +| config.rocketchat.messageformat | string | `""` | a Go template to format Rocketchat Text above Attachment, displayed in addition to the output from `rocketchat.outputformat`. If empty, no Text is displayed before Attachment | +| config.rocketchat.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.rocketchat.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.rocketchat.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Rocketchat), `fields` (only fields are displayed in Rocketchat) | +| config.rocketchat.username | string | `""` | Rocketchat username | +| config.rocketchat.webhookurl | string | `""` | Rocketchat Webhook URL, if not `empty`, Rocketchat output is *enabled* | +| config.slack.channel | string | `""` | Slack channel (optional) | +| config.slack.footer | string | `""` | Slack Footer | +| config.slack.icon | string | `""` | Slack icon (avatar) | +| config.slack.messageformat | string | `""` | a Go template to format Slack Text above Attachment, displayed in addition to the output from `slack.outputformat`. If empty, no Text is displayed before Attachment | +| config.slack.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.slack.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Slack), `fields` (only fields are displayed in Slack) | +| config.slack.username | string | `""` | Slack username | +| config.slack.webhookurl | string | `""` | Slack Webhook URL, if not `empty`, Slack output is *enabled* | +| config.smtp.authmechanism | string | `"plain"` | SASL Mechanisms : plain, oauthbearer, external, anonymous or "" (disable SASL) | +| config.smtp.from | string | `""` | Sender address (mandatory if SMTP output is *enabled*) | +| config.smtp.hostport | string | `""` | "host:port" address of SMTP server, if not empty, SMTP output is *enabled* | +| config.smtp.identity | string | `""` | identity string for Plain and External Mechanisms | +| config.smtp.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.smtp.outputformat | string | `"html"` | html, text | +| config.smtp.password | string | `""` | password to access SMTP server | +| config.smtp.tls | bool | `true` | use TLS connection (true/false) | +| config.smtp.to | string | `""` | comma-separated list of recipient addresses, can't be empty (mandatory if SMTP output is *enabled*) | +| config.smtp.token | string | `""` | OAuthBearer token for OAuthBearer Mechanism | +| config.smtp.trace | string | `""` | trace string for Anonymous Mechanism | +| config.smtp.user | string | `""` | user to access SMTP server | +| config.spyderbat.apikey | string | `""` | Spyderbat API key with access to the organization | +| config.spyderbat.apiurl | string | `"https://api.spyderbat.com"` | Spyderbat API url | +| config.spyderbat.minimumpriority 
| string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.spyderbat.orguid | string | `""` | Organization to send output to, if not empty, Spyderbat output is enabled | +| config.spyderbat.source | string | `"falcosidekick"` | Spyderbat source ID, max 32 characters | +| config.spyderbat.sourcedescription | string | `""` | Spyderbat source description and display name if not empty, max 256 characters | +| config.stan.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.stan.clientid | string | `""` | Client ID, if not empty, STAN output is *enabled* | +| config.stan.clusterid | string | `""` | Cluster name, if not empty, STAN output is *enabled* | +| config.stan.hostport | string | `""` | Stan nats://{domain or ip}:{port}, if not empty, STAN output is *enabled* | +| config.stan.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.stan.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.statsd.forwarder | string | `""` | The address for the StatsD forwarder, in the form , if not empty StatsD is *enabled* | +| config.statsd.namespace | string | `"falcosidekick."` | A prefix for all metrics | +| config.sumologic.checkcert | bool | `true` | check if ssl certificate of the output is valid (default: true) | +| config.sumologic.minimumpriority | string | `""` | minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) | +| config.sumologic.name | string | `""` | Override the default Sumologic Source Name | +| config.sumologic.receiverURL | string | `""` | Sumologic HTTP Source URL, if not empty, Sumologic output is enabled | +| config.sumologic.sourceCategory | 
string | `""` | Override the default Sumologic Source Category | +| config.sumologic.sourceHost | string | `""` | Override the default Sumologic Source Host | +| config.syslog.format | string | `"json"` | Syslog payload format. It can be either "json" or "cef" | +| config.syslog.host | string | `""` | Syslog Host, if not empty, Syslog output is *enabled* | +| config.syslog.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.syslog.port | string | `""` | Syslog endpoint port number | +| config.syslog.protocol | string | `"tcp"` | Syslog transport protocol. It can be either "tcp" or "udp" | +| config.talon.address | string | `""` | Talon address, if not empty, Talon output is enabled | +| config.talon.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.talon.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.teams.activityimage | string | `""` | Teams section image | +| config.teams.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.teams.outputformat | string | `"all"` | `all` (default), `text` (only text is displayed in Teams), `facts` (only facts are displayed in Teams) | +| config.teams.webhookurl | string | `""` | Teams Webhook URL (ex: "), if not `empty`, Teams output is *enabled* | +| config.tekton.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.tekton.eventlistener | string | `""` | EventListener address, if not empty, Tekton output is enabled | +| config.tekton.minimumpriority | string | `""` | minimum priority of event to use this output, order is 
`emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.telegram.chatid | string | `""` | telegram Identifier of the shared chat | +| config.telegram.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.telegram.minimumpriority | string | `""` | minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" | +| config.telegram.token | string | `""` | telegram bot authentication token | +| config.templatedfields | string | `""` | a list of escaped comma separated Go templated fields to add to falco events, syntax is "key:template\,key:template" | +| config.timescaledb.database | string | `""` | TimescaleDB database used | +| config.timescaledb.host | string | `""` | TimescaleDB host, if not empty, TImescaleDB output is enabled | +| config.timescaledb.hypertablename | string | `"falco_events"` | Hypertable to store data events (default: falco_events) See TimescaleDB setup for more info | +| config.timescaledb.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.timescaledb.password | string | `"postgres"` | Password to authenticate with TimescaleDB | +| config.timescaledb.port | int | `5432` | TimescaleDB port (default: 5432) | +| config.timescaledb.user | string | `"postgres"` | Username to authenticate with TimescaleDB | +| config.tlsclient.cacertfile | string | `""` | CA certificate file for server certification on TLS connections, appended to the system CA pool if not empty | +| config.tlsserver.cacertfile | string | `"/etc/certs/server/ca.crt"` | CA certification file path for client certification if mutualtls is true | +| config.tlsserver.cacrt | string | `""` | | +| config.tlsserver.certfile | string | `"/etc/certs/server/server.crt"` | server certification file path for TLS Server | +| 
config.tlsserver.deploy | bool | `false` | if true TLS server will be deployed instead of HTTP | +| config.tlsserver.existingSecret | string | `""` | existing secret with server.crt, server.key and ca.crt files for TLS Server | +| config.tlsserver.keyfile | string | `"/etc/certs/server/server.key"` | server key file path for TLS Server | +| config.tlsserver.mutualtls | bool | `false` | if true mutual TLS server will be deployed instead of TLS, deploy also has to be true | +| config.tlsserver.notlspaths | string | `"/ping"` | a comma separated list of endpoints, if not empty, and tlsserver.deploy is true, a separate http server will be deployed for the specified endpoints (/ping endpoint needs to be notls for Kubernetes to be able to perform the healthchecks) | +| config.tlsserver.notlsport | int | `2810` | port to serve http server serving selected endpoints | +| config.tlsserver.servercrt | string | `""` | server.crt file for TLS Server | +| config.tlsserver.serverkey | string | `""` | server.key file for TLS Server | +| config.wavefront.batchsize | int | `10000` | Wavefront batch size. If empty uses the default 10000. Only used when endpointtype is 'direct' | +| config.wavefront.endpointhost | string | `""` | Wavefront endpoint address (only the host). If not empty, with endpointhost, Wavefront output is *enabled* | +| config.wavefront.endpointmetricport | int | `2878` | Port to send metrics. Only used when endpointtype is 'proxy' | +| config.wavefront.endpointtoken | string | `""` | Wavefront token. Must be used only when endpointtype is 'direct' | +| config.wavefront.endpointtype | string | `""` | Wavefront endpoint type, must be 'direct' or 'proxy'. If not empty, with endpointhost, Wavefront output is *enabled* | +| config.wavefront.flushintervalseconds | int | `1` | Wavefront flush interval in seconds. Defaults to 1 | +| config.wavefront.metricname | string | `"falco.alert"` | Metric to be created in Wavefront. 
Defaults to falco.alert | +| config.wavefront.minimumpriority | string | `"debug"` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.webhook.address | string | `""` | Webhook address, if not empty, Webhook output is *enabled* | +| config.webhook.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.webhook.customHeaders | string | `""` | a list of comma separated custom headers to add, syntax is "key:value\,key:value" | +| config.webhook.method | string | `"POST"` | HTTP method: POST or PUT | +| config.webhook.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.webhook.mutualtls | bool | `false` | if true, checkcert flag will be ignored (server cert will always be checked) | +| config.yandex.accesskeyid | string | `""` | yandex access key | +| config.yandex.datastreams.endpoint | string | `""` | yandex data streams endpoint (default: https://yds.serverless.yandexcloud.net) | +| config.yandex.datastreams.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.yandex.datastreams.streamname | string | `""` | stream name in format /${region}/${folder_id}/${ydb_id}/${stream_name} | +| config.yandex.region | string | `""` | yandex storage region (default: ru-central-1) | +| config.yandex.s3.bucket | string | `""` | Yandex storage, bucket name | +| config.yandex.s3.endpoint | string | `""` | yandex storage endpoint (default: https://storage.yandexcloud.net) | +| config.yandex.s3.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.yandex.s3.prefix | string | `""` 
| name of prefix, keys will have format: s3:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json | +| config.yandex.secretaccesskey | string | `""` | yandex secret access key | +| config.zincsearch.checkcert | bool | `true` | check if ssl certificate of the output is valid | +| config.zincsearch.hostport | string | `""` | http://{domain or ip}:{port}, if not empty, ZincSearch output is enabled | +| config.zincsearch.index | string | `"falco"` | index | +| config.zincsearch.minimumpriority | string | `""` | minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` | +| config.zincsearch.password | string | `""` | use this password to authenticate to ZincSearch | +| config.zincsearch.username | string | `""` | use this username to authenticate to ZincSearch | +| customAnnotations | object | `{}` | custom annotations to add to all resources | +| customLabels | object | `{}` | custom labels to add to all resources | +| extraVolumeMounts | list | `[]` | Extra volume mounts for sidekick deployment | +| extraVolumes | list | `[]` | Extra volumes for sidekick deployment | +| fullnameOverride | string | `""` | Override the name | +| image | object | `{"pullPolicy":"IfNotPresent","registry":"docker.io","repository":"falcosecurity/falcosidekick","tag":"2.29.0"}` | number of old history to retain to allow rollback (If not set, default Kubernetes value is set to 10) revisionHistoryLimit: 1 | +| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy | +| image.registry | string | `"docker.io"` | The image registry to pull from | +| image.repository | string | `"falcosecurity/falcosidekick"` | The image repository to pull from | +| image.tag | string | `"2.29.0"` | The image tag to pull | +| imagePullSecrets | list | `[]` | Secrets for the registry | +| ingress.annotations | object | `{}` | Ingress annotations | +| ingress.enabled | bool | `false` | Whether to create the ingress | +| 
ingress.hosts | list | `[{"host":"falcosidekick.local","paths":[{"path":"/"}]}]` | Ingress hosts | +| ingress.ingressClassName | string | `""` | ingress class name | +| ingress.tls | list | `[]` | Ingress TLS configuration | +| nameOverride | string | `""` | Override name | +| nodeSelector | object | `{}` | Sidekick nodeSelector field | +| podAnnotations | object | `{}` | additional annotations on the pods | +| podLabels | object | `{}` | additional labels on the pods | +| podSecurityContext | object | `{"fsGroup":1234,"runAsUser":1234}` | Sidekick pod securityContext | +| podSecurityPolicy | object | `{"create":false}` | podSecurityPolicy | +| podSecurityPolicy.create | bool | `false` | Whether to create a podSecurityPolicy | +| priorityClassName | string | `""` | Name of the priority class to be used by the Sidekick pods, priority class needs to be created beforehand | +| prometheusRules.alerts.additionalAlerts | object | `{}` | | +| prometheusRules.alerts.alert.enabled | bool | `true` | enable the high rate rule for the alert events | +| prometheusRules.alerts.alert.rate_interval | string | `"5m"` | rate interval for the high rate rule for the alert events | +| prometheusRules.alerts.alert.threshold | int | `0` | threshold for the high rate rule for the alert events | +| prometheusRules.alerts.critical.enabled | bool | `true` | enable the high rate rule for the critical events | +| prometheusRules.alerts.critical.rate_interval | string | `"5m"` | rate interval for the high rate rule for the critical events | +| prometheusRules.alerts.critical.threshold | int | `0` | threshold for the high rate rule for the critical events | +| prometheusRules.alerts.emergency.enabled | bool | `true` | enable the high rate rule for the emergency events | +| prometheusRules.alerts.emergency.rate_interval | string | `"5m"` | rate interval for the high rate rule for the emergency events | +| prometheusRules.alerts.emergency.threshold | int | `0` | threshold for the high rate rule for the emergency events | +| prometheusRules.alerts.error.enabled | bool | `true` | enable the high rate rule for the error events | +| prometheusRules.alerts.error.rate_interval | string | `"5m"` | rate interval for the high rate rule for the error events | +| prometheusRules.alerts.error.threshold | int | `0` | threshold for the high rate rule for the error events | +| prometheusRules.alerts.output.enabled | bool | `true` | enable the high rate rule for the errors with the outputs | +| prometheusRules.alerts.output.rate_interval | string | `"5m"` | rate interval for the high rate rule for the errors with the outputs | +| prometheusRules.alerts.output.threshold | int | `0` | threshold for the high rate rule for the errors with the outputs | +| prometheusRules.alerts.warning.enabled | bool | `true` | enable the high rate rule for the warning events | +| prometheusRules.alerts.warning.rate_interval | string | `"5m"` | rate interval for the high rate rule for the warning events | +| prometheusRules.alerts.warning.threshold | int | `0` | threshold for the high rate rule for the warning events | +| prometheusRules.enabled | bool | `false` | enable the creation of PrometheusRules for alerting | +| replicaCount | int | `2` | number of running pods | +| resources | object | `{}` | The resources for falcosidekick pods | +| securityContext | object | `{}` | Sidekick container securityContext | +| service.annotations | object | `{}` | Service annotations | +| service.port | int | `2801` | Service port | +| service.type | string | `"ClusterIP"` | Service type | +| serviceMonitor.additionalLabels | object | `{}` | specify Additional labels to be added on the Service Monitor. | +| serviceMonitor.additionalProperties | object | `{}` | allows setting additional properties on the endpoint such as relabelings, metricRelabelings etc. | +| serviceMonitor.enabled | bool | `false` | enable the deployment of a Service Monitor for the Prometheus Operator. | +| serviceMonitor.interval | string | `""` | specify a user defined interval. When not specified Prometheus default interval is used. | +| serviceMonitor.scrapeTimeout | string | `""` | specify a user defined scrape timeout. When not specified Prometheus default scrape timeout is used. | +| testConnection.affinity | object | `{}` | Affinity for the test connection pod | +| testConnection.nodeSelector | object | `{}` | test connection nodeSelector field | +| testConnection.tolerations | list | `[]` | Tolerations for pod assignment | +| tolerations | list | `[]` | Tolerations for pod assignment | +| webui.affinity | object | `{}` | Affinity for the Web UI pods | +| webui.allowcors | bool | `false` | Allow CORS | +| webui.disableauth | bool | `false` | Disable the basic auth | +| webui.enabled | bool | `false` | enable Falcosidekick-UI | +| webui.existingSecret | string | `""` | Existing secret with configuration | +| webui.externalRedis.enabled | bool | `false` | Enable or disable the usage of an external Redis. Is mutually exclusive with webui.redis.enabled. | +| webui.externalRedis.port | int | `6379` | The port of the external Redis database with RediSearch > v2 | +| webui.externalRedis.url | string | `""` | The URL of the external Redis database with RediSearch > v2 | +| webui.image.pullPolicy | string | `"IfNotPresent"` | The web UI image pull policy | +| webui.image.registry | string | `"docker.io"` | The web UI image registry to pull from | +| webui.image.repository | string | `"falcosecurity/falcosidekick-ui"` | The web UI image repository to pull from | +| webui.image.tag | string | `"2.2.0"` | The web UI image tag to pull | +| webui.ingress.annotations | object | `{}` | Web UI ingress annotations | +| webui.ingress.enabled | bool | `false` | Whether to create the Web UI ingress | +| webui.ingress.hosts | list | `[{"host":"falcosidekick-ui.local","paths":[{"path":"/"}]}]` | Web UI ingress hosts configuration | +| webui.ingress.ingressClassName | string | `""` | ingress class name | +| webui.ingress.tls | list | `[]` | Web UI ingress TLS configuration | +| webui.initContainer | object | `{"image":{"registry":"docker.io","repository":"busybox","tag":1.31},"resources":{},"securityContext":{}}` | Web UI wait-redis initContainer | +| webui.initContainer.image.registry | string | `"docker.io"` | wait-redis initContainer image registry to pull from | +| webui.initContainer.image.repository | string | `"busybox"` | wait-redis initContainer image repository to pull from | +| webui.initContainer.image.tag | float | `1.31` | wait-redis initContainer image tag to pull | +| webui.initContainer.resources | object | `{}` | wait-redis initContainer resources | +| webui.initContainer.securityContext | object | `{}` | wait-redis initContainer securityContext | +| webui.loglevel | string | `"info"` | Log level ("debug", "info", "warning", "error") | +| webui.nodeSelector | object | `{}` | Web UI nodeSelector field | +| webui.podAnnotations | object | `{}` | additional annotations on the Web UI pods | +| webui.podLabels | object | 
`{}` | additions labels on the pods web UI | +| webui.podSecurityContext | object | `{"fsGroup":1234,"runAsUser":1234}` | Web UI pod securityContext | +| webui.priorityClassName | string | `""` | Name of the priority class to be used by the Web UI pods, priority class needs to be created beforehand | +| webui.redis.affinity | object | `{}` | Affinity for the Web UI Redis pods | +| webui.redis.customAnnotations | object | `{}` | custom annotations to add to all resources | +| webui.redis.customLabels | object | `{}` | custom labels to add to all resources | +| webui.redis.enabled | bool | `true` | Is mutually exclusive with webui.externalRedis.enabled | +| webui.redis.existingSecret | string | `""` | Existing secret with configuration | +| webui.redis.image.pullPolicy | string | `"IfNotPresent"` | The web UI image pull policy | +| webui.redis.image.registry | string | `"docker.io"` | The web UI Redis image registry to pull from | +| webui.redis.image.repository | string | `"redis/redis-stack"` | The web UI Redis image repository to pull from | +| webui.redis.image.tag | string | `"7.2.0-v11"` | The web UI Redis image tag to pull from | +| webui.redis.nodeSelector | object | `{}` | Web UI Redis nodeSelector field | +| webui.redis.password | string | `""` | Set a password for Redis | +| webui.redis.podAnnotations | object | `{}` | additions annotations on the pods | +| webui.redis.podLabels | object | `{}` | additions labels on the pods | +| webui.redis.podSecurityContext | object | `{}` | Web UI Redis pod securityContext | +| webui.redis.priorityClassName | string | `""` | Name of the priority class to be used by the Web UI Redis pods, priority class needs to be created beforehand | +| webui.redis.resources | object | `{}` | The resources for the redis pod | +| webui.redis.securityContext | object | `{}` | Web UI Redis container securityContext | +| webui.redis.service.annotations | object | `{}` | The web UI Redis service annotations (use this to set a internal LB, 
for example.) | +| webui.redis.service.port | int | `6379` | The web UI Redis service port dor the falcosidekick-ui | +| webui.redis.service.targetPort | int | `6379` | The web UI Redis service targetPort | +| webui.redis.service.type | string | `"ClusterIP"` | The web UI Redis service type (i. e: LoadBalancer) | +| webui.redis.storageClass | string | `""` | Storage class of the PVC for the redis pod | +| webui.redis.storageEnabled | bool | `true` | Enable the PVC for the redis pod | +| webui.redis.storageSize | string | `"1Gi"` | Size of the PVC for the redis pod | +| webui.redis.tolerations | list | `[]` | Tolerations for pod assignment | +| webui.replicaCount | int | `2` | number of running pods | +| webui.resources | object | `{}` | The resources for the web UI pods | +| webui.securityContext | object | `{}` | Web UI container securityContext | +| webui.service.annotations | object | `{}` | The web UI service annotations (use this to set a internal LB, for example.) | +| webui.service.nodePort | int | `30282` | The web UI service nodePort | +| webui.service.port | int | `2802` | The web UI service port dor the falcosidekick-ui | +| webui.service.targetPort | int | `2802` | The web UI service targetPort | +| webui.service.type | string | `"ClusterIP"` | The web UI service type | +| webui.tolerations | list | `[]` | Tolerations for pod assignment | +| webui.ttl | int | `0` | TTL for keys, the syntax in X, with : s, m, d, w (0 for no ttl) | +| webui.user | string | `"admin:admin"` | User in format : | + +Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. > **Tip**: You can use the default [values.yaml](values.yaml) ## Metrics A `prometheus` endpoint can be scrapped at `/metrics`. 
+
+## Access Falcosidekick UI through an Ingress and a subpath
+
+You may want to access the [`WebUI (Falcosidekick UI)`](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/falcosidekick-ui.md) dashboard not from `/` but from `/subpath` and use an Ingress. Here's an example of annotations to add to the Ingress for the `nginx-ingress` controller:
+
+```yaml
+nginx.ingress.kubernetes.io/rewrite-target: /$2
+nginx.ingress.kubernetes.io/use-regex: "true"
+```
diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/NOTES.txt b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/NOTES.txt
index bee1a2193..d49b5e5ae 100644
--- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/NOTES.txt
+++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/NOTES.txt
@@ -22,7 +22,7 @@
 2. Get the URL for Falcosidekick-UI (WebUI) by running these commands:
 {{- if .Values.webui.ingress.enabled }}
 {{- range $host := .Values.webui.ingress.hosts }}
-  http{{ if $.Values.webui.ingress.tls }}s{{ end }}://{{ $host.host }}{{ index .paths 0 }}
+  http{{ if $.Values.webui.ingress.tls }}s{{ end }}://{{ $host.host }}{{ index $host.paths 0 }}
 {{- end }}
 {{- else if contains "NodePort" .Values.webui.service.type }}
 export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "falcosidekick.fullname" . }})-ui
diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/_helpers.tpl b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/_helpers.tpl
index bfb7a9cf4..b5290e89c 100644
--- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/_helpers.tpl
+++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/_helpers.tpl
@@ -44,6 +44,27 @@ Return the appropriate apiVersion for ingress.
{{- end -}} {{- end -}} +{{/* +Common labels +*/}} +{{- define "falcosidekick.labels" -}} +helm.sh/chart: {{ include "falcosidekick.chart" . }} +{{ include "falcosidekick.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/part-of: {{ include "falcosidekick.name" . }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "falcosidekick.selectorLabels" -}} +app.kubernetes.io/name: {{ include "falcosidekick.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + {{/* Return if ingress is stable. */}} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/aadpodidentity.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/aadpodidentity.yaml index 39c961cbc..9a10dfcf5 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/aadpodidentity.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/aadpodidentity.yaml @@ -3,6 +3,16 @@ apiVersion: "aadpodidentity.k8s.io/v1" kind: AzureIdentity metadata: + labels: + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} name: {{ include "falcosidekick.fullname" . }} namespace: {{ .Release.Namespace }} spec: @@ -13,6 +23,8 @@ spec: apiVersion: "aadpodidentity.k8s.io/v1" kind: AzureIdentityBinding metadata: + labels: + {{- include "falcosidekick.labels" . | nindent 4 }} name: {{ include "falcosidekick.fullname" . }} spec: azureIdentity: {{ include "falcosidekick.fullname" . 
}} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/certs-secret.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/certs-secret.yaml new file mode 100644 index 000000000..ab1ef16dc --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/certs-secret.yaml @@ -0,0 +1,26 @@ +{{- if and .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "falcosidekick.fullname" . }}-certs + namespace: {{ .Release.Namespace }} + labels: + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} +type: Opaque +data: + {{ $key := .Values.config.tlsserver.serverkey }} + server.key: {{ $key | b64enc | quote }} + {{ $crt := .Values.config.tlsserver.servercrt }} + server.crt: {{ $crt | b64enc | quote }} + falcosidekick.pem: {{ print $key $crt | b64enc | quote }} + ca.crt: {{ .Values.config.tlsserver.cacrt | b64enc | quote }} + ca.pem: {{ .Values.config.tlsserver.cacrt | b64enc | quote }} +{{- end }} \ No newline at end of file diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/clusterrole.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/clusterrole.yaml index 256d10e34..a2747ee95 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/clusterrole.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/clusterrole.yaml @@ -5,10 +5,15 @@ apiVersion: rbac.authorization.k8s.io/v1 metadata: name: {{ template "falcosidekick.fullname" .}} labels: - app: {{ template "falcosidekick.fullname" . 
}} - chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" - release: "{{ .Release.Name }}" - heritage: "{{ .Release.Service }}" + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} rules: - apiGroups: - policy diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/deployment-ui.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/deployment-ui.yaml index 533b2bd67..705e823c4 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/deployment-ui.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/deployment-ui.yaml @@ -1,4 +1,9 @@ {{- if .Values.webui.enabled }} +{{- if and .Values.webui.redis.enabled .Values.webui.externalRedis.enabled }} + {{ fail "Both webui.redis and webui.externalRedis modules are enabled. Please disable one of them." }} +{{- else if and (not .Values.webui.redis.enabled) (not .Values.webui.externalRedis.enabled) }} + {{ fail "Neither the included Redis nor the external Redis is enabled. Please enable one of them." }} +{{- end }} --- apiVersion: apps/v1 kind: Deployment @@ -6,21 +11,29 @@ metadata: name: {{ include "falcosidekick.fullname" . }}-ui namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }}-ui - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui + {{- with .Values.webui.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.webui.customAnnotations }} + {{- toYaml . 
| nindent 4 }} + {{- end }} spec: replicas: {{ .Values.webui.replicaCount }} + {{- if .Values.webui.revisionHistoryLimit }} + revisionHistoryLimit: {{ .Values.webui.revisionHistoryLimit }} + {{- end }} selector: matchLabels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - app.kubernetes.io/instance: {{ .Release.Name }}-ui + {{- include "falcosidekick.selectorLabels" . | nindent 6 }} + app.kubernetes.io/component: ui template: metadata: labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - app.kubernetes.io/instance: {{ .Release.Name }}-ui + {{- include "falcosidekick.labels" . | nindent 8 }} + app.kubernetes.io/component: ui {{- if .Values.webui.podLabels }} {{ toYaml .Values.webui.podLabels | indent 8 }} {{- end }} @@ -43,13 +56,54 @@ spec: securityContext: {{- toYaml .Values.webui.podSecurityContext | nindent 8}} {{- end }} + initContainers: + - name: wait-redis + image: "{{ .Values.webui.initContainer.image.registry }}/{{ .Values.webui.initContainer.image.repository }}:{{ .Values.webui.initContainer.image.tag }}" + {{- if .Values.webui.redis.enabled }} + command: ['sh', '-c', 'echo -e "Checking for the availability of the Redis Server"; while ! nc -z {{ include "falcosidekick.fullname" . }}-ui-redis 6379; do sleep 1; done; echo -e "Redis Server has started";'] + {{- else if .Values.webui.externalRedis.enabled }} + command: ['sh', '-c', 'echo -e "Checking for the availability of the Redis Server"; while ! nc -z {{ required "External Redis is enabled. Please set the URL to the database." 
.Values.webui.externalRedis.url }} {{ .Values.webui.externalRedis.port | default "6379" }}; do sleep 1; done; echo -e "Redis Server has started";'] + {{- end}} + {{- if .Values.webui.initContainer.resources }} + resources: + {{- toYaml .Values.webui.initContainer.resources | nindent 12 }} + {{- end }} + {{- if .Values.webui.initContainer.securityContext }} + securityContext: + {{- toYaml .Values.webui.initContainer.securityContext | nindent 12}} + {{- end }} containers: - name: {{ .Chart.Name }}-ui image: "{{ .Values.webui.image.registry }}/{{ .Values.webui.image.repository }}:{{ .Values.webui.image.tag }}" imagePullPolicy: {{ .Values.webui.image.pullPolicy }} + envFrom: + - secretRef: + name: {{ include "falcosidekick.fullname" . }}-ui + {{- if .Values.webui.existingSecret }} + - secretRef: + name: {{ .Values.webui.existingSecret }} + {{- end }} args: - "-r" + {{- if .Values.webui.redis.enabled }} - {{ include "falcosidekick.fullname" . }}-ui-redis{{ if .Values.webui.redis.fullfqdn }}.{{ .Release.Namespace }}.svc.cluster.local{{ end }}:{{ .Values.webui.redis.service.port | default "6379" }} + {{- else if .Values.webui.externalRedis.enabled }} + - "{{ required "External Redis is enabled. Please set the URL to the database." .Values.webui.externalRedis.url }}:{{ .Values.webui.externalRedis.port | default "6379" }}" + {{- end}} + {{- if .Values.webui.ttl }} + - "-t" + - {{ .Values.webui.ttl | quote }} + {{- end}} + {{- if .Values.webui.loglevel }} + - "-l" + - {{ .Values.webui.loglevel }} + {{- end}} + {{- if .Values.webui.allowcors }} + - "-x" + {{- end}} + {{- if .Values.webui.disableauth }} + - "-d" + {{- end}} ports: - name: http containerPort: 2802 @@ -84,6 +138,7 @@ spec: tolerations: {{- toYaml . | nindent 8 }} {{- end }} +{{- if .Values.webui.redis.enabled }} --- apiVersion: apps/v1 kind: StatefulSet @@ -91,22 +146,20 @@ metadata: name: {{ include "falcosidekick.fullname" . 
}}-ui-redis namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui-redis - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }}-ui - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui-redis spec: replicas: 1 serviceName: {{ include "falcosidekick.fullname" . }}-ui-redis selector: matchLabels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui-redis - app.kubernetes.io/instance: {{ .Release.Name }}-ui-redis + {{- include "falcosidekick.selectorLabels" . | nindent 6 }} + app.kubernetes.io/component: ui-redis template: metadata: labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui-redis - app.kubernetes.io/instance: {{ .Release.Name }}-ui-redis + {{- include "falcosidekick.labels" . | nindent 8 }} + app.kubernetes.io/component: ui-redis {{- if .Values.webui.redis.podLabels }} {{ toYaml .Values.webui.redis.podLabels | indent 8 }} {{- end }} @@ -133,34 +186,45 @@ spec: - name: redis image: "{{ .Values.webui.redis.image.registry }}/{{ .Values.webui.redis.image.repository }}:{{ .Values.webui.redis.image.tag }}" imagePullPolicy: {{ .Values.webui.redis.image.pullPolicy }} + {{- if .Values.webui.redis.password }} + envFrom: + - secretRef: + {{- if .Values.webui.redis.existingSecret }} + name: {{ .Values.webui.redis.existingSecret }} + {{- else }} + name: {{ include "falcosidekick.fullname" . 
}}-ui-redis + {{- end }} + {{- end}} args: [] ports: - name: redis containerPort: 6379 protocol: TCP - livenessProbe: + livenessProbe: tcpSocket: port: 6379 - initialDelaySeconds: 5 - periodSeconds: 5 - timeoutSeconds: 2 - successThreshold: 1 + initialDelaySeconds: 5 + periodSeconds: 5 + timeoutSeconds: 2 + successThreshold: 1 failureThreshold: 3 readinessProbe: tcpSocket: port: 6379 - initialDelaySeconds: 5 - periodSeconds: 5 - timeoutSeconds: 2 - successThreshold: 1 + initialDelaySeconds: 5 + periodSeconds: 5 + timeoutSeconds: 2 + successThreshold: 1 failureThreshold: 3 {{- if .Values.webui.redis.securityContext }} securityContext: {{- toYaml .Values.webui.redis.securityContext | nindent 12 }} {{- end }} + {{- if .Values.webui.redis.storageEnabled }} volumeMounts: - name: {{ include "falcosidekick.fullname" . }}-ui-redis-data mountPath: /data + {{- end }} resources: {{- toYaml .Values.webui.redis.resources | nindent 12 }} {{- with .Values.webui.redis.nodeSelector }} @@ -175,6 +239,7 @@ spec: tolerations: {{- toYaml . | nindent 8 }} {{- end }} + {{- if .Values.webui.redis.storageEnabled }} volumeClaimTemplates: - metadata: name: {{ include "falcosidekick.fullname" . }}-ui-redis-data @@ -186,4 +251,6 @@ spec: {{- if .Values.webui.redis.storageClass }} storageClassName: {{ .Values.webui.redis.storageClass }} {{- end }} -{{- end }} \ No newline at end of file + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/deployment.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/deployment.yaml index 430c8f95a..7d8791a7d 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/deployment.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/deployment.yaml @@ -5,32 +5,40 @@ metadata: name: {{ include "falcosidekick.fullname" . }} namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . 
}} - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} spec: replicas: {{ .Values.replicaCount }} + {{- if .Values.revisionHistoryLimit }} + revisionHistoryLimit: {{ .Values.revisionHistoryLimit }} + {{- end }} selector: matchLabels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - app.kubernetes.io/instance: {{ .Release.Name }} + {{- include "falcosidekick.selectorLabels" . | nindent 6 }} + app.kubernetes.io/component: core template: metadata: labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - app.kubernetes.io/instance: {{ .Release.Name }} + {{- include "falcosidekick.labels" . | nindent 8 }} + app.kubernetes.io/component: core {{- if and .Values.config.azure.podIdentityClientID .Values.config.azure.podIdentityName }} aadpodidbinding: {{ include "falcosidekick.fullname" . }} {{- end }} - {{- if .Values.podLabels }} -{{ toYaml .Values.podLabels | indent 8 }} - {{- end }} + {{- if .Values.podLabels }} + {{ toYaml .Values.podLabels | nindent 8 }} + {{- end }} annotations: checksum/config: {{ include (print $.Template.BasePath "/secrets.yaml") . 
| sha256sum }} - {{- if .Values.podAnnotations }} -{{ toYaml .Values.podAnnotations | indent 8 }} - {{- end }} + {{- if .Values.podAnnotations }} + {{ toYaml .Values.podAnnotations | nindent 8 }} + {{- end }} spec: {{- if .Values.imagePullSecrets }} imagePullSecrets: @@ -54,45 +62,102 @@ spec: - name: http containerPort: 2801 protocol: TCP + {{- if .Values.config.tlsserver.deploy }} + - name: http-notls + containerPort: 2810 + protocol: TCP + {{- end }} livenessProbe: httpGet: path: /ping + {{- if .Values.config.tlsserver.deploy }} + port: http-notls + {{- else }} port: http + {{- end }} initialDelaySeconds: 10 periodSeconds: 5 readinessProbe: httpGet: path: /ping + {{- if .Values.config.tlsserver.deploy }} + port: http-notls + {{- else }} port: http + {{- end }} initialDelaySeconds: 10 periodSeconds: 5 {{- if .Values.securityContext }} securityContext: {{- toYaml .Values.securityContext | nindent 12 }} {{- end }} + {{- if .Values.config.extraArgs }} + args: + {{ toYaml .Values.config.extraArgs | nindent 12 }} + {{- end }} envFrom: - secretRef: - {{- if .Values.config.existingSecret }} - name: {{ .Values.config.existingSecret }} - {{- else }} name: {{ include "falcosidekick.fullname" . 
}} - {{- end }} + {{- if .Values.config.existingSecret }} + - secretRef: + name: {{ .Values.config.existingSecret }} + {{- end }} env: - name: DEBUG value: {{ .Values.config.debug | quote }} - name: CUSTOMFIELDS value: {{ .Values.config.customfields | quote }} + - name: TEMPLATEDFIELDS + value: {{ .Values.config.templatedfields | quote }} + - name: OUTPUTFIELDFORMAT + value: {{ .Values.config.outputFieldFormat | quote }} + - name: BRACKETREPLACER + value: {{ .Values.config.bracketreplacer | quote }} - name: MUTUALTLSFILESPATH value: {{ .Values.config.mutualtlsfilespath | quote }} + - name: MUTUALTLSCLIENT_CERTFILE + value: {{ .Values.config.mutualtlsclient.certfile | quote }} + - name: MUTUALTLSCLIENT_KEYFILE + value: {{ .Values.config.mutualtlsclient.keyfile | quote }} + - name: MUTUALTLSCLIENT_CACERTFILE + value: {{ .Values.config.mutualtlsclient.cacertfile | quote }} + - name: TLSCLIENT_CACERTFILE + value: {{ .Values.config.tlsclient.cacertfile | quote }} + {{- if .Values.config.tlsserver.deploy }} + - name: TLSSERVER_DEPLOY + value: {{ .Values.config.tlsserver.deploy | quote }} + - name: TLSSERVER_CERTFILE + value: {{ .Values.config.tlsserver.certfile | quote }} + - name: TLSSERVER_KEYFILE + value: {{ .Values.config.tlsserver.keyfile | quote }} + - name: TLSSERVER_CACERTFILE + value: {{ .Values.config.tlsserver.cacertfile | quote }} + - name: TLSSERVER_MUTUALTLS + value: {{ .Values.config.tlsserver.mutualtls | quote }} + - name: TLSSERVER_NOTLSPORT + value: {{ .Values.config.tlsserver.notlsport | quote }} + - name: TLSSERVER_NOTLSPATHS + value: {{ .Values.config.tlsserver.notlspaths | quote }} + {{- end }} + {{- if .Values.config.otlp.traces.extraenvvars }} + {{ toYaml .Values.config.otlp.traces.extraenvvars | nindent 12 }} + {{- end }} {{- if .Values.config.extraEnv }} {{ toYaml .Values.config.extraEnv | nindent 12 }} {{- end }} resources: {{- toYaml .Values.resources | nindent 12 }} - {{- if .Values.extraVolumeMounts }} + {{- if or .Values.extraVolumeMounts 
(and .Values.config.tlsserver.deploy (or .Values.config.tlsserver.existingSecret .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt)) }} volumeMounts: + {{- if and .Values.config.tlsserver.deploy (or .Values.config.tlsserver.existingSecret .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt) }} + - mountPath: /etc/certs/server + name: certs-volume + readOnly: true + {{- end }} + {{- if or .Values.extraVolumeMounts }} {{ toYaml .Values.extraVolumeMounts | indent 12 }} - {{- end }} + {{- end }} + {{- end }} {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} @@ -105,8 +170,18 @@ spec: tolerations: {{- toYaml . | nindent 8 }} {{- end }} - {{- if .Values.extraVolumes }} + {{- if or .Values.extraVolumes (and .Values.config.tlsserver.deploy (or .Values.config.tlsserver.existingSecret .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt)) }} volumes: + {{- if and .Values.config.tlsserver.deploy (or .Values.config.tlsserver.existingSecret .Values.config.tlsserver.serverkey .Values.config.tlsserver.servercrt .Values.config.tlsserver.cacrt) }} + - name: certs-volume + secret: + {{- if .Values.config.tlsserver.existingSecret }} + secretName: {{.Values.config.tlsserver.existingSecret }} + {{- else }} + secretName: {{ include "falcosidekick.fullname" . 
}}-certs + {{- end }} + {{- end }} + {{- if or .Values.extraVolumes }} {{ toYaml .Values.extraVolumes | indent 8 }} {{- end }} - + {{- end }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/ingress-ui.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/ingress-ui.yaml index 1d4bd3de0..c092eea63 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/ingress-ui.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/ingress-ui.yaml @@ -9,14 +9,18 @@ metadata: name: {{ $fullName }}-ui namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }}-ui - app.kubernetes.io/managed-by: {{ .Release.Service }} - {{- with .Values.webui.ingress.annotations }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui + {{- with .Values.webui.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} annotations: + {{- with .Values.webui.customAnnotations }} {{- toYaml . | nindent 4 }} - {{- end }} + {{- end }} + {{- with .Values.webui.ingress.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} spec: {{- if .Values.webui.ingress.ingressClassName }} ingressClassName: {{ .Values.webui.ingress.ingressClassName }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/ingress.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/ingress.yaml index 2c1f6c058..f7a5482bc 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/ingress.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/ingress.yaml @@ -9,14 +9,18 @@ metadata: name: {{ $fullName }} namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - helm.sh/chart: {{ include "falcosidekick.chart" . 
}} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} - {{- with .Values.ingress.annotations }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} annotations: + {{- with .Values.customAnnotations }} {{- toYaml . | nindent 4 }} - {{- end }} + {{- end }} + {{- with .Values.ingress.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} spec: {{- if .Values.ingress.ingressClassName }} ingressClassName: {{ .Values.ingress.ingressClassName }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/podsecuritypolicy.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/podsecuritypolicy.yaml index fe1d8cb25..7949c39c7 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/podsecuritypolicy.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/podsecuritypolicy.yaml @@ -4,10 +4,15 @@ kind: PodSecurityPolicy metadata: name: {{ template "falcosidekick.fullname" . }} labels: - app: {{ template "falcosidekick.fullname" . }} - chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" - release: "{{ .Release.Name }}" - heritage: "{{ .Release.Service }}" + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . 
| nindent 4 }}
+  {{- end }}
 spec:
   privileged: false
   allowPrivilegeEscalation: false
diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/prometheusrule.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/prometheusrule.yaml
new file mode 100644
index 000000000..6afe287ad
--- /dev/null
+++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/prometheusrule.yaml
@@ -0,0 +1,99 @@
+{{- if and .Values.prometheusRules.enabled .Values.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+  name: {{ include "falcosidekick.fullname" . }}
+  {{- if .Values.prometheusRules.namespace }}
+  namespace: {{ .Values.prometheusRules.namespace }}
+  {{- end }}
+  labels:
+    {{- include "falcosidekick.labels" . | nindent 4 }}
+    app.kubernetes.io/component: core
+    {{- if .Values.prometheusRules.additionalLabels }}
+    {{- toYaml .Values.prometheusRules.additionalLabels | nindent 4 }}
+    {{- end }}
+    {{- with .Values.customLabels }}
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
+  annotations:
+    {{- with .Values.customAnnotations }}
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
+spec:
+  groups:
+    - name: falcosidekick
+      rules:
+        {{- if .Values.prometheusRules.enabled }}
+        - alert: FalcosidekickAbsent
+          expr: absent(up{job="{{- include "falcosidekick.fullname" . }}"})
+          for: 10m
+          annotations:
+            summary: Falcosidekick has disappeared from Prometheus service discovery.
+            description: No metrics are being scraped from falcosidekick. No events will trigger any alerts.
+ labels: + severity: critical + {{- end }} + {{- if .Values.prometheusRules.alerts.warning.enabled }} + - alert: FalcoWarningEventsRateHigh + annotations: + summary: Falco is experiencing a high rate of warning events + description: A high rate of warning events is being detected by Falco + expr: rate(falco_events{priority="4"}[{{ .Values.prometheusRules.alerts.warning.rate_interval }}]) > {{ .Values.prometheusRules.alerts.warning.threshold }} + for: 15m + labels: + severity: warning + {{- end }} + {{- if .Values.prometheusRules.alerts.error.enabled }} + - alert: FalcoErrorEventsRateHigh + annotations: + summary: Falco is experiencing a high rate of error events + description: A high rate of error events is being detected by Falco + expr: rate(falco_events{priority="3"}[{{ .Values.prometheusRules.alerts.error.rate_interval }}]) > {{ .Values.prometheusRules.alerts.error.threshold }} + for: 15m + labels: + severity: warning + {{- end }} + {{- if .Values.prometheusRules.alerts.critical.enabled }} + - alert: FalcoCriticalEventsRateHigh + annotations: + summary: Falco is experiencing a high rate of critical events + description: A high rate of critical events is being detected by Falco + expr: rate(falco_events{priority="2"}[{{ .Values.prometheusRules.alerts.critical.rate_interval }}]) > {{ .Values.prometheusRules.alerts.critical.threshold }} + for: 15m + labels: + severity: critical + {{- end }} + {{- if .Values.prometheusRules.alerts.alert.enabled }} + - alert: FalcoAlertEventsRateHigh + annotations: + summary: Falco is experiencing a high rate of alert events + description: A high rate of alert events is being detected by Falco + expr: rate(falco_events{priority="1"}[{{ .Values.prometheusRules.alerts.alert.rate_interval }}]) > {{ .Values.prometheusRules.alerts.alert.threshold }} + for: 5m + labels: + severity: critical + {{- end }} + {{- if .Values.prometheusRules.alerts.emergency.enabled }} + - alert: FalcoEmergencyEventsRateHigh + annotations: + summary: Falco is 
experiencing a high rate of emergency events + description: A high rate of emergency events is being detected by Falco + expr: rate(falco_events{priority="0"}[{{ .Values.prometheusRules.alerts.emergency.rate_interval }}]) > {{ .Values.prometheusRules.alerts.emergency.threshold }} + for: 1m + labels: + severity: critical + {{- end }} + {{- if .Values.prometheusRules.alerts.output.enabled }} + - alert: FalcoErrorOutputEventsRateHigh + annotations: + summary: Falcosidekick is experiencing a high rate of errors for an output + description: A high rate of errors is being detected for an output + expr: sum by (destination) (rate(falcosidekick_outputs{status="error"}[{{ .Values.prometheusRules.alerts.output.rate_interval }}])) > {{ .Values.prometheusRules.alerts.output.threshold }} + for: 1m + labels: + severity: warning + {{- end }} + {{- with .Values.prometheusRules.additionalAlerts }} + {{ . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/rbac-ui.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/rbac-ui.yaml index 8acb6920a..8bd543dfc 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/rbac-ui.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/rbac-ui.yaml @@ -1,3 +1,4 @@ +{{- if .Values.webui.enabled -}} --- apiVersion: v1 kind: ServiceAccount @@ -5,10 +6,15 @@ metadata: name: {{ include "falcosidekick.fullname" . }}-ui namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui + {{- with .Values.webui.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.webui.customAnnotations }} + {{- toYaml . 
| nindent 4 }} + {{- end }} --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role @@ -16,10 +22,15 @@ metadata: name: {{ include "falcosidekick.fullname" . }}-ui namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui + {{- with .Values.webui.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.webui.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} rules: [] --- apiVersion: rbac.authorization.k8s.io/v1 @@ -28,10 +39,8 @@ metadata: name: {{ include "falcosidekick.fullname" . }}-ui namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui roleRef: apiGroup: rbac.authorization.k8s.io kind: Role @@ -39,3 +48,4 @@ roleRef: subjects: - kind: ServiceAccount name: {{ include "falcosidekick.fullname" . }}-ui +{{- end }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/rbac.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/rbac.yaml index 3d17488cc..96d84d5fb 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/rbac.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/rbac.yaml @@ -4,15 +4,19 @@ kind: ServiceAccount metadata: name: {{ include "falcosidekick.fullname" . 
}} namespace: {{ .Release.Namespace }} - {{- if .Values.config.aws.rolearn }} + {{- if and .Values.config.aws.useirsa .Values.config.aws.rolearn }} + labels: + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} eks.amazonaws.com/role-arn: {{ .Values.config.aws.rolearn }} {{- end }} - labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role @@ -20,10 +24,15 @@ metadata: name: {{ include "falcosidekick.fullname" . }} namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} rules: - apiGroups: - "" @@ -48,10 +57,8 @@ metadata: name: {{ include "falcosidekick.fullname" . }} namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . 
| nindent 4 }} + app.kubernetes.io/component: core roleRef: apiGroup: rbac.authorization.k8s.io kind: Role @@ -65,12 +72,9 @@ apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: {{ include "falcosidekick.fullname" . }} - namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core rules: - apiGroups: - "wgpolicyk8s.io" @@ -87,12 +91,9 @@ apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: {{ include "falcosidekick.fullname" . }} - namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole @@ -101,4 +102,4 @@ subjects: - kind: ServiceAccount namespace: {{ .Release.Namespace }} name: {{ include "falcosidekick.fullname" . }} -{{- end }} \ No newline at end of file +{{- end }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/secrets-ui.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/secrets-ui.yaml new file mode 100644 index 000000000..49a7bf87d --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/secrets-ui.yaml @@ -0,0 +1,49 @@ +{{- if .Values.webui.enabled -}} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "falcosidekick.fullname" . }}-ui + namespace: {{ .Release.Namespace }} + labels: + {{- include "falcosidekick.labels" . 
| nindent 4 }} + app.kubernetes.io/component: ui + {{- with .Values.webui.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.webui.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} +type: Opaque +data: + {{- if .Values.webui.user }} + FALCOSIDEKICK_UI_USER: "{{ .Values.webui.user | b64enc}}" + {{- end }} + {{- if .Values.webui.redis.password }} + FALCOSIDEKICK_UI_REDIS_PASSWORD: "{{ .Values.webui.redis.password | b64enc}}" + {{- end }} +{{- if eq .Values.webui.redis.existingSecret "" }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "falcosidekick.fullname" . }}-ui-redis + namespace: {{ .Release.Namespace }} + labels: + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui + {{- with .Values.webui.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.webui.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} +type: Opaque +data: + {{- if .Values.webui.redis.password }} + REDIS_ARGS: "{{ printf "--requirepass %s" .Values.webui.redis.password | b64enc}}" + {{- end }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/secrets.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/secrets.yaml index f900bedc3..13c211f75 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/secrets.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/secrets.yaml @@ -1,4 +1,3 @@ -{{- if eq .Values.config.existingSecret "" }} {{- $fullName := include "falcosidekick.fullname" . -}} --- apiVersion: v1 @@ -7,14 +6,20 @@ metadata: name: {{ include "falcosidekick.fullname" . }} namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - helm.sh/chart: {{ include "falcosidekick.chart" . 
}} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} type: Opaque data: # Slack Output SLACK_WEBHOOKURL: "{{ .Values.config.slack.webhookurl | b64enc }}" + SLACK_CHANNEL: "{{ .Values.config.slack.channel | b64enc }}" SLACK_OUTPUTFORMAT: "{{ .Values.config.slack.outputformat | b64enc }}" SLACK_FOOTER: "{{ .Values.config.slack.footer | b64enc }}" SLACK_ICON: "{{ .Values.config.slack.icon | b64enc }}" @@ -58,6 +63,21 @@ data: ALERTMANAGER_HOSTPORT: "{{ .Values.config.alertmanager.hostport | b64enc }}" ALERTMANAGER_ENDPOINT: "{{ .Values.config.alertmanager.endpoint | b64enc }}" ALERTMANAGER_EXPIRESAFTER: "{{ .Values.config.alertmanager.expireafter | b64enc }}" + {{- if .Values.config.alertmanager.extralabels }} + ALERTMANAGER_EXTRALABELS: "{{ .Values.config.alertmanager.extralabels | b64enc }}" + {{- end }} + {{- if .Values.config.alertmanager.extraannotations }} + ALERTMANAGER_EXTRAANNOTATIONS: "{{ .Values.config.alertmanager.extraannotations | b64enc }}" + {{- end }} + {{- if .Values.config.alertmanager.customseveritymap }} + ALERTMANAGER_CUSTOMSEVERITYMAP: "{{ .Values.config.alertmanager.customseveritymap | b64enc }}" + {{- end }} + {{- if .Values.config.alertmanager.dropeventdefaultpriority }} + ALERTMANAGER_DROPEVENTDEFAULTPRIORITY: "{{ .Values.config.alertmanager.dropeventdefaultpriority | b64enc }}" + {{- end }} + {{- if .Values.config.alertmanager.dropeventthresholds }} + ALERTMANAGER_DROPEVENTTHRESHOLDS: "{{ .Values.config.alertmanager.dropeventthresholds | b64enc }}" + {{- end }} ALERTMANAGER_MINIMUMPRIORITY: "{{ .Values.config.alertmanager.minimumpriority | b64enc }}" ALERTMANAGER_MUTUALTLS: "{{ .Values.config.alertmanager.mutualtls | printf 
"%t" | b64enc }}" ALERTMANAGER_CHECKCERT: "{{ .Values.config.alertmanager.checkcert | printf "%t" | b64enc }}" @@ -65,7 +85,10 @@ data: # InfluxDB Output INFLUXDB_USER: "{{ .Values.config.influxdb.user | b64enc }}" INFLUXDB_PASSWORD: "{{ .Values.config.influxdb.password | b64enc }}" + INFLUXDB_TOKEN: "{{ .Values.config.influxdb.token | b64enc }}" INFLUXDB_HOSTPORT: "{{ .Values.config.influxdb.hostport | b64enc }}" + INFLUXDB_ORGANIZATION: "{{ .Values.config.influxdb.organization | b64enc }}" + INFLUXDB_PRECISION: "{{ .Values.config.influxdb.precision | b64enc }}" INFLUXDB_MINIMUMPRIORITY: "{{ .Values.config.influxdb.minimumpriority | b64enc }}" INFLUXDB_DATABASE: "{{ .Values.config.influxdb.database | b64enc }}" INFLUXDB_MUTUALTLS: "{{ .Values.config.influxdb.mutualtls | printf "%t" | b64enc }}" @@ -73,8 +96,13 @@ data: # AWS Output AWS_ACCESSKEYID: "{{ .Values.config.aws.accesskeyid | b64enc }}" + {{- if not .Values.config.aws.useirsa }} + AWS_ROLEARN: "{{ .Values.config.aws.rolearn | b64enc }}" + AWS_EXTERNALID: "{{ .Values.config.aws.externalid | b64enc }}" + {{- end }} AWS_SECRETACCESSKEY: "{{ .Values.config.aws.secretaccesskey | b64enc }}" AWS_REGION: "{{ .Values.config.aws.region | b64enc }}" + AWS_CHECKIDENTITY: "{{ .Values.config.aws.checkidentity | printf "%t" | b64enc }}" AWS_LAMBDA_FUNCTIONNAME: "{{ .Values.config.aws.lambda.functionname | b64enc }}" AWS_LAMBDA_MINIMUMPRIORITY: "{{ .Values.config.aws.lambda.minimumpriority | b64enc }}" AWS_CLOUDWATCHLOGS_LOGGROUP: "{{ .Values.config.aws.cloudwatchlogs.loggroup | b64enc }}" @@ -87,16 +115,30 @@ data: AWS_SQS_MINIMUMPRIORITY: "{{ .Values.config.aws.sqs.minimumpriority | b64enc }}" AWS_S3_BUCKET: "{{ .Values.config.aws.s3.bucket | b64enc }}" AWS_S3_PREFIX: "{{ .Values.config.aws.s3.prefix | b64enc }}" + AWS_S3_ENDPOINT: "{{ .Values.config.aws.s3.endpoint | b64enc }}" + AWS_S3_OBJECTCANNEDACL: "{{ .Values.config.aws.s3.objectcannedacl | b64enc }}" AWS_S3_MINIMUMPRIORITY: "{{ 
.Values.config.aws.s3.minimumpriority | b64enc }}" AWS_KINESIS_STREAMNAME: "{{ .Values.config.aws.kinesis.streamname | b64enc }}" AWS_KINESIS_MINIMUMPRIORITY: "{{ .Values.config.aws.kinesis.minimumpriority | b64enc }}" + AWS_SECURITYLAKE_BUCKET: "{{ .Values.config.aws.securitylake.bucket | b64enc }}" + AWS_SECURITYLAKE_REGION: "{{ .Values.config.aws.securitylake.region | b64enc }}" + AWS_SECURITYLAKE_PREFIX: "{{ .Values.config.aws.securitylake.prefix | b64enc }}" + AWS_SECURITYLAKE_ACCOUNTID: "{{ .Values.config.aws.securitylake.accountid | b64enc }}" + AWS_SECURITYLAKE_INTERVAL: "{{ .Values.config.aws.securitylake.interval | toString | b64enc }}" + AWS_SECURITYLAKE_BATCHSIZE: "{{ .Values.config.aws.securitylake.batchsize | toString | b64enc }}" + AWS_SECURITYLAKE_MINIMUMPRIORITY: "{{ .Values.config.aws.securitylake.minimumpriority | b64enc }}" # SMTP Output SMTP_USER: "{{ .Values.config.smtp.user | b64enc }}" SMTP_PASSWORD: "{{ .Values.config.smtp.password | b64enc }}" + SMTP_AUTHMECHANISM: "{{ .Values.config.smtp.authmechanism | b64enc }}" + SMTP_TLS: "{{ .Values.config.smtp.tls | printf "%t" | b64enc }}" SMTP_HOSTPORT: "{{ .Values.config.smtp.hostport | b64enc }}" SMTP_FROM: "{{ .Values.config.smtp.from | b64enc }}" SMTP_TO: "{{ .Values.config.smtp.to | b64enc }}" + SMTP_TOKEN: "{{ .Values.config.smtp.token | b64enc }}" + SMTP_IDENTITY: "{{ .Values.config.smtp.identity | b64enc }}" + SMTP_TRACE: "{{ .Values.config.smtp.trace | b64enc }}" SMTP_OUTPUTFORMAT: "{{ .Values.config.smtp.outputformat | b64enc }}" SMTP_MINIMUMPRIORITY: "{{ .Values.config.smtp.minimumpriority | b64enc }}" @@ -116,6 +158,7 @@ data: GCP_CREDENTIALS: "{{ .Values.config.gcp.credentials | b64enc }}" GCP_PUBSUB_PROJECTID: "{{ .Values.config.gcp.pubsub.projectid | b64enc }}" GCP_PUBSUB_TOPIC: "{{ .Values.config.gcp.pubsub.topic | b64enc }}" + GCP_PUBSUB_CUSTOMATTRIBUTES: "{{ .Values.config.gcp.pubsub.customattributes | b64enc }}" GCP_PUBSUB_MINIMUMPRIORITY: "{{ 
.Values.config.gcp.pubsub.minimumpriority | b64enc }}" GCP_STORAGE_BUCKET: "{{ .Values.config.gcp.storage.bucket | b64enc }}" GCP_STORAGE_PREFIX: "{{ .Values.config.gcp.storage.prefix | b64enc }}" @@ -136,23 +179,32 @@ data: ELASTICSEARCH_HOSTPORT: "{{ .Values.config.elasticsearch.hostport | b64enc }}" ELASTICSEARCH_INDEX: "{{ .Values.config.elasticsearch.index | b64enc }}" ELASTICSEARCH_TYPE: "{{ .Values.config.elasticsearch.type | b64enc }}" + ELASTICSEARCH_SUFFIX: "{{ .Values.config.elasticsearch.suffix | b64enc }}" ELASTICSEARCH_MINIMUMPRIORITY: "{{ .Values.config.elasticsearch.minimumpriority | b64enc }}" ELASTICSEARCH_MUTUALTLS: "{{ .Values.config.elasticsearch.mutualtls | printf "%t" | b64enc }}" ELASTICSEARCH_CHECKCERT: "{{ .Values.config.elasticsearch.checkcert | printf "%t" | b64enc }}" ELASTICSEARCH_USERNAME: "{{ .Values.config.elasticsearch.username | b64enc }}" ELASTICSEARCH_PASSWORD: "{{ .Values.config.elasticsearch.password | b64enc }}" + ELASTICSEARCH_FLATTENFIELDS: "{{ .Values.config.elasticsearch.flattenfields | printf "%t" | b64enc }}" + ELASTICSEARCH_CREATEINDEXTEMPLATE: "{{ .Values.config.elasticsearch.createindextemplate | printf "%t" | b64enc }}" + ELASTICSEARCH_NUMBEROFSHARDS: "{{ .Values.config.elasticsearch.numberofshards | toString | b64enc }}" + ELASTICSEARCH_NUMBEROFREPLICAS: "{{ .Values.config.elasticsearch.numberofreplicas | toString | b64enc }}" + ELASTICSEARCH_CUSTOMHEADERS: "{{ .Values.config.elasticsearch.customheaders | b64enc }}" # Loki Output LOKI_HOSTPORT: "{{ .Values.config.loki.hostport | b64enc }}" LOKI_ENDPOINT: "{{ .Values.config.loki.endpoint | b64enc }}" + LOKI_USER: "{{ .Values.config.loki.user | b64enc }}" + LOKI_APIKEY: "{{ .Values.config.loki.apikey | b64enc }}" LOKI_TENANT: "{{ .Values.config.loki.tenant | b64enc }}" LOKI_EXTRALABELS: "{{ .Values.config.loki.extralabels | b64enc }}" + LOKI_CUSTOMHEADERS: "{{ .Values.config.loki.customheaders | b64enc }}" LOKI_MINIMUMPRIORITY: "{{ .Values.config.loki.minimumpriority 
| b64enc }}" LOKI_MUTUALTLS: "{{ .Values.config.loki.mutualtls | printf "%t" | b64enc }}" LOKI_CHECKCERT: "{{ .Values.config.loki.checkcert | printf "%t" | b64enc }}" # Prometheus Output - PROMETHEUS_EXTRALABELS: "{{ .Values.config.loki.extralabels | b64enc }}" + PROMETHEUS_EXTRALABELS: "{{ .Values.config.prometheus.extralabels | b64enc }}" # Nats Output NATS_HOSTPORT: "{{ .Values.config.nats.hostport | b64enc }}" @@ -179,6 +231,7 @@ data: # WebHook Output WEBHOOK_ADDRESS: "{{ .Values.config.webhook.address | b64enc }}" + WEBHOOK_METHOD: "{{ .Values.config.webhook.method | b64enc }}" WEBHOOK_CUSTOMHEADERS: "{{ .Values.config.webhook.customHeaders | b64enc }}" WEBHOOK_MINIMUMPRIORITY: "{{ .Values.config.webhook.minimumpriority | b64enc }}" WEBHOOK_MUTUALTLS: "{{ .Values.config.webhook.mutualtls | printf "%t" | b64enc }}" @@ -192,11 +245,21 @@ data: # Kafka Output KAFKA_HOSTPORT: "{{ .Values.config.kafka.hostport | b64enc }}" KAFKA_TOPIC: "{{ .Values.config.kafka.topic | b64enc }}" - KAFKA_PARTITION: "{{ .Values.config.kafka.partition | b64enc }}" + KAFKA_SASL: "{{ .Values.config.kafka.sasl | b64enc }}" + KAFKA_TLS: "{{ .Values.config.kafka.tls | printf "%t" |b64enc }}" + KAFKA_USERNAME: "{{ .Values.config.kafka.username | b64enc }}" + KAFKA_PASSWORD: "{{ .Values.config.kafka.password | b64enc }}" + KAFKA_ASYNC: "{{ .Values.config.kafka.async | printf "%t" | b64enc }}" + KAFKA_REQUIREDACKS: "{{ .Values.config.kafka.requiredacks | b64enc }}" + KAFKA_COMPRESSION: "{{ .Values.config.kafka.compression | b64enc }}" + KAFKA_BALANCER: "{{ .Values.config.kafka.balancer | b64enc }}" + KAFKA_TOPICCREATION: "{{ .Values.config.kafka.topiccreation | printf "%t" | b64enc }}" + KAFKA_CLIENTID: "{{ .Values.config.kafka.clientid | b64enc }}" KAFKA_MINIMUMPRIORITY: "{{ .Values.config.kafka.minimumpriority | b64enc }}" # PagerDuty Output PAGERDUTY_ROUTINGKEY: "{{ .Values.config.pagerduty.routingkey | b64enc }}" + PAGERDUTY_REGION: "{{ .Values.config.pagerduty.region | b64enc }}" 
PAGERDUTY_MINIMUMPRIORITY: "{{ .Values.config.pagerduty.minimumpriority | b64enc }}" # Kubeless Output @@ -243,10 +306,18 @@ data: GRAFANA_DASHBOARDID: "{{ .Values.config.grafana.dashboardid | toString | b64enc}}" GRAFANA_PANELID: "{{ .Values.config.grafana.panelid | toString | b64enc}}" GRAFANA_ALLFIELDSASTAGS: "{{ .Values.config.grafana.allfieldsastags | printf "%t" | b64enc}}" + GRAFANA_CUSTOMHEADERS: "{{ .Values.config.grafana.customheaders | b64enc}}" GRAFANA_MUTUALTLS: "{{ .Values.config.grafana.mutualtls | printf "%t" | b64enc}}" GRAFANA_CHECKCERT: "{{ .Values.config.grafana.checkcert | printf "%t" | b64enc}}" GRAFANA_MINIMUMPRIORITY: "{{ .Values.config.grafana.minimumpriority | b64enc}}" + # Grafana On Call Output + GRAFANAONCALL_WEBHOOKURL: "{{ .Values.config.grafanaoncall.webhookurl | b64enc}}" + GRAFANAONCALL_CUSTOMHEADERS: "{{ .Values.config.grafanaoncall.customheaders | b64enc}}" + GRAFANAONCALL_CHECKCERT: "{{ .Values.config.grafanaoncall.checkcert | printf "%t" | b64enc}}" + GRAFANAONCALL_MUTUALTLS: "{{ .Values.config.grafanaoncall.mutualtls | printf "%t" | b64enc}}" + GRAFANAONCALL_MINIMUMPRIORITY: "{{ .Values.config.grafanaoncall.minimumpriority | b64enc}}" + # Fission Output FISSION_FUNCTION: "{{ .Values.config.fission.function | b64enc}}" FISSION_ROUTERNAMESPACE: "{{ .Values.config.fission.routernamespace | b64enc}}" @@ -264,19 +335,23 @@ data: YANDEX_S3_BUCKET: "{{ .Values.config.yandex.s3.bucket | b64enc}}" YANDEX_S3_PREFIX: "{{ .Values.config.yandex.s3.prefix | b64enc}}" YANDEX_S3_MINIMUMPRIORITY: "{{ .Values.config.yandex.s3.minimumpriority | b64enc}}" + YANDEX_DATASTREAMS_ENDPOINT: "{{ .Values.config.yandex.datastreams.endpoint | b64enc}}" + YANDEX_DATASTREAMS_STREAMNAME: "{{ .Values.config.yandex.datastreams.streamname | b64enc}}" + YANDEX_DATASTREAMS_MINIMUMPRIORITY: "{{ .Values.config.yandex.datastreams.minimumpriority | b64enc}}" # KafkaRest Output KAFKAREST_ADDRESS: "{{ .Values.config.kafkarest.address | b64enc}}" KAFKAREST_VERSION: "{{ 
.Values.config.kafkarest.version | toString | b64enc}}" - KAFKAREST_MINIMUMPRIORITY : "{{ .Values.config.kafkarest.minimumpriority | b64enc}}" - KAFKAREST_MUTUALTLS : "{{ .Values.config.kafkarest.mutualtls | printf "%t" | b64enc}}" - KAFKAREST_CHECKCERT : "{{ .Values.config.kafkarest.checkcert | printf "%t" | b64enc}}" + KAFKAREST_MINIMUMPRIORITY: "{{ .Values.config.kafkarest.minimumpriority | b64enc}}" + KAFKAREST_MUTUALTLS: "{{ .Values.config.kafkarest.mutualtls | printf "%t" | b64enc}}" + KAFKAREST_CHECKCERT: "{{ .Values.config.kafkarest.checkcert | printf "%t" | b64enc}}" # Syslog SYSLOG_HOST: "{{ .Values.config.syslog.host | b64enc}}" - SYSLOG_PORT: "{{ .Values.config.syslog.port | printf "%t" | b64enc}}" + SYSLOG_PORT: "{{ .Values.config.syslog.port | toString | b64enc}}" SYSLOG_PROTOCOL: "{{ .Values.config.syslog.protocol | b64enc}}" - SYSLOG_MINIMUMPRIORITY : "{{ .Values.config.syslog.minimumpriority | b64enc}}" + SYSLOG_FORMAT: "{{ .Values.config.syslog.format | b64enc}}" + SYSLOG_MINIMUMPRIORITY: "{{ .Values.config.syslog.minimumpriority | b64enc}}" # Zoho Cliq CLIQ_WEBHOOKURL: "{{ .Values.config.cliq.webhookurl | b64enc}}" @@ -284,18 +359,144 @@ data: CLIQ_USEEMOJI: "{{ .Values.config.cliq.useemoji | printf "%t" | b64enc}}" CLIQ_OUTPUTFORMAT: "{{ .Values.config.cliq.outputformat | b64enc}}" CLIQ_MESSAGEFORMAT: "{{ .Values.config.cliq.messageformat | b64enc}}" - CLIQ_MINIMUMPRIORITY : "{{ .Values.config.cliq.minimumpriority | b64enc}}" + CLIQ_MINIMUMPRIORITY: "{{ .Values.config.cliq.minimumpriority | b64enc}}" # Policy Reporter POLICYREPORT_ENABLED: "{{ .Values.config.policyreport.enabled | printf "%t"| b64enc}}" POLICYREPORT_KUBECONFIG: "{{ .Values.config.policyreport.kubeconfig | b64enc}}" POLICYREPORT_MAXEVENTS: "{{ .Values.config.policyreport.maxevents | toString | b64enc}}" POLICYREPORT_PRUNEBYPRIORITY: "{{ .Values.config.policyreport.prunebypriority | printf "%t" | b64enc}}" - POLICYREPORT_MINIMUMPRIORITY : "{{ 
.Values.config.policyreport.minimumpriority | b64enc}}" - + POLICYREPORT_MINIMUMPRIORITY: "{{ .Values.config.policyreport.minimumpriority | b64enc}}" + + # Node Red + NODERED_ADDRESS: "{{ .Values.config.nodered.address | b64enc}}" + NODERED_USER: "{{ .Values.config.nodered.user | b64enc}}" + NODERED_PASSWORD: "{{ .Values.config.nodered.password | b64enc}}" + NODERED_CUSTOMHEADERS: "{{ .Values.config.nodered.customheaders | b64enc}}" + NODERED_CHECKCERT: "{{ .Values.config.nodered.checkcert | printf "%t" | b64enc}}" + NODERED_MINIMUMPRIORITY: "{{ .Values.config.nodered.minimumpriority | b64enc}}" + + # MQTT + MQTT_BROKER: "{{ .Values.config.mqtt.broker | b64enc}}" + MQTT_TOPIC: "{{ .Values.config.mqtt.topic | b64enc}}" + MQTT_QOS: "{{ .Values.config.mqtt.qos | toString | b64enc}}" + MQTT_RETAINED: "{{ .Values.config.mqtt.retained | printf "%t" | b64enc}}" + MQTT_USER: "{{ .Values.config.mqtt.user | b64enc}}" + MQTT_PASSWORD: "{{ .Values.config.mqtt.password | b64enc}}" + MQTT_CHECKCERT: "{{ .Values.config.mqtt.checkcert | printf "%t" | b64enc}}" + MQTT_MINIMUMPRIORITY: "{{ .Values.config.mqtt.minimumpriority | b64enc}}" + + # Zincsearch + ZINCSEARCH_HOSTPORT: "{{ .Values.config.zincsearch.hostport | b64enc}}" + ZINCSEARCH_INDEX: "{{ .Values.config.zincsearch.index | b64enc}}" + ZINCSEARCH_USERNAME: "{{ .Values.config.zincsearch.username | b64enc}}" + ZINCSEARCH_PASSWORD: "{{ .Values.config.zincsearch.password | b64enc}}" + ZINCSEARCH_CHECKCERT: "{{ .Values.config.zincsearch.checkcert | printf "%t" | b64enc}}" + ZINCSEARCH_MINIMUMPRIORITY: "{{ .Values.config.zincsearch.minimumpriority | b64enc}}" + + # Gotify + GOTIFY_HOSTPORT: "{{ .Values.config.gotify.hostport | b64enc}}" + GOTIFY_TOKEN: "{{ .Values.config.gotify.token | b64enc}}" + GOTIFY_FORMAT: "{{ .Values.config.gotify.format | b64enc}}" + GOTIFY_CHECKCERT: "{{ .Values.config.gotify.checkcert | printf "%t" | b64enc}}" + GOTIFY_MINIMUMPRIORITY: "{{ .Values.config.gotify.minimumpriority | b64enc}}" + + # Tekton + 
TEKTON_EVENTLISTENER: "{{ .Values.config.tekton.eventlistener | b64enc}}" + TEKTON_CHECKCERT: "{{ .Values.config.tekton.checkcert | printf "%t" | b64enc}}" + TEKTON_MINIMUMPRIORITY: "{{ .Values.config.tekton.minimumpriority | b64enc}}" + + # Spyderbat + SPYDERBAT_ORGUID: "{{ .Values.config.spyderbat.orguid | b64enc}}" + SPYDERBAT_APIKEY: "{{ .Values.config.spyderbat.apikey | b64enc}}" + SPYDERBAT_APIURL: "{{ .Values.config.spyderbat.apiurl | b64enc}}" + SPYDERBAT_SOURCE: "{{ .Values.config.spyderbat.source | b64enc}}" + SPYDERBAT_SOURCEDESCRIPTION: "{{ .Values.config.spyderbat.sourcedescription | b64enc}}" + SPYDERBAT_MINIMUMPRIORITY: "{{ .Values.config.spyderbat.minimumpriority | b64enc}}" + + # TimescaleDB + TIMESCALEDB_HOST: "{{ .Values.config.timescaledb.host | b64enc}}" + TIMESCALEDB_PORT: "{{ .Values.config.timescaledb.port | toString | b64enc}}" + TIMESCALEDB_USER: "{{ .Values.config.timescaledb.user | b64enc}}" + TIMESCALEDB_PASSWORD: "{{ .Values.config.timescaledb.password | b64enc}}" + TIMESCALEDB_DATABASE: "{{ .Values.config.timescaledb.database | b64enc}}" + TIMESCALEDB_HYPERTABLENAME: "{{ .Values.config.timescaledb.hypertablename | b64enc}}" + TIMESCALEDB_MINIMUMPRIORITY: "{{ .Values.config.timescaledb.minimumpriority | b64enc}}" + + # Redis Output + REDIS_ADDRESS: "{{ .Values.config.redis.address | b64enc}}" + REDIS_PASSWORD: "{{ .Values.config.redis.password | b64enc}}" + REDIS_DATABASE: "{{ .Values.config.redis.database | toString | b64enc}}" + REDIS_KEY: "{{ .Values.config.redis.key | b64enc}}" + REDIS_STORAGETYPE: "{{ .Values.config.redis.storagetype | b64enc}}" + REDIS_MINIMUMPRIORITY: "{{ .Values.config.redis.minimumpriority | b64enc}}" + + # TELEGRAM Output + TELEGRAM_TOKEN: "{{ .Values.config.telegram.token | b64enc}}" + TELEGRAM_CHATID: "{{ .Values.config.telegram.chatid | b64enc}}" + TELEGRAM_MINIMUMPRIORITY: "{{ .Values.config.telegram.minimumpriority | b64enc}}" + TELEGRAM_CHECKCERT: "{{ .Values.config.telegram.checkcert | printf "%t" | 
b64enc}}" + + # N8N Output + N8N_ADDRESS: "{{ .Values.config.n8n.address | b64enc}}" + N8N_USER: "{{ .Values.config.n8n.user | b64enc}}" + N8N_PASSWORD: "{{ .Values.config.n8n.password | b64enc}}" + N8N_MINIMUMPRIORITY: "{{ .Values.config.n8n.minimumpriority | b64enc}}" + N8N_CHECKCERT: "{{ .Values.config.n8n.checkcert | printf "%t" | b64enc}}" + + # Open Observe Output + OPENOBSERVE_HOSTPORT: "{{ .Values.config.openobserve.hostport | b64enc}}" + OPENOBSERVE_USERNAME: "{{ .Values.config.openobserve.username | b64enc}}" + OPENOBSERVE_PASSWORD: "{{ .Values.config.openobserve.password | b64enc}}" + OPENOBSERVE_CHECKCERT: "{{ .Values.config.openobserve.checkcert | printf "%t" | b64enc}}" + OPENOBSERVE_MUTUALTLS: "{{ .Values.config.openobserve.mutualtls | printf "%t" | b64enc}}" + OPENOBSERVE_CUSTOMHEADERS: "{{ .Values.config.openobserve.customheaders | b64enc}}" + OPENOBSERVE_ORGANIZATIONNAME: "{{ .Values.config.openobserve.organizationname | b64enc}}" + OPENOBSERVE_STREAMNAME: "{{ .Values.config.openobserve.streamname | b64enc}}" + OPENOBSERVE_MINIMUMPRIORITY: "{{ .Values.config.openobserve.minimumpriority | b64enc}}" + + # Dynatrace + DYNATRACE_APITOKEN: "{{ .Values.config.dynatrace.apitoken | b64enc}}" + DYNATRACE_APIURL: "{{ .Values.config.dynatrace.apiurl | b64enc}}" + DYNATRACE_CHECKCERT: "{{ .Values.config.dynatrace.checkcert | printf "%t" | b64enc}}" + DYNATRACE_MINIMUMPRIORITY: "{{ .Values.config.dynatrace.minimumpriority | b64enc}}" + + # OTLP Traces + OTLP_TRACES_ENDPOINT: "{{ .Values.config.otlp.traces.endpoint | b64enc}}" + OTLP_TRACES_PROTOCOL: "{{ .Values.config.otlp.traces.protocol | b64enc}}" + OTLP_TRACES_TIMEOUT: "{{ .Values.config.otlp.traces.timeout | toString | b64enc}}" + OTLP_TRACES_HEADERS: "{{ .Values.config.otlp.traces.headers | b64enc}}" + OTLP_TRACES_SYNCED: "{{ .Values.config.otlp.traces.synced | printf "%t" | b64enc}}" + OTLP_TRACES_DURATION: "{{ .Values.config.otlp.traces.duration | toString | b64enc}}" + OTLP_TRACES_CHECKCERT: "{{ 
.Values.config.otlp.traces.checkcert | printf "%t" | b64enc}}" + OTLP_TRACES_MINIMUMPRIORITY: "{{ .Values.config.otlp.traces.minimumpriority | b64enc}}" + + # Sumologic + SUMOLOGIC_RECEIVERURL: "{{ .Values.config.sumologic.receiverURL | b64enc}}" + SUMOLOGIC_SOURCECATEGORY: "{{ .Values.config.sumologic.sourceCategory | b64enc}}" + SUMOLOGIC_SOURCEHOST: "{{ .Values.config.sumologic.sourceHost | b64enc}}" + SUMOLOGIC_NAME: "{{ .Values.config.sumologic.name | b64enc}}" + SUMOLOGIC_CHECKCERT: "{{ .Values.config.sumologic.checkcert | printf "%t" | b64enc}}" + SUMOLOGIC_MINIMUMPRIORITY: "{{ .Values.config.sumologic.minimumpriority | b64enc}}" + + # Quickwit + QUICKWIT_HOSTPORT: "{{ .Values.config.quickwit.hostport | b64enc}}" + QUICKWIT_APIENDPOINT: "{{ .Values.config.quickwit.apiendpoint | b64enc}}" + QUICKWIT_INDEX: "{{ .Values.config.quickwit.index | b64enc}}" + QUICKWIT_AUTOCREATEINDEX: "{{ .Values.config.quickwit.autocreateindex | printf "%t" | b64enc}}" + QUICKWIT_CUSTOMHEADERS: "{{ .Values.config.quickwit.customHeaders | b64enc}}" + QUICKWIT_VERSION: "{{ .Values.config.quickwit.version | b64enc}}" + QUICKWIT_CHECKCERT: "{{ .Values.config.quickwit.checkcert | printf "%t" | b64enc}}" + QUICKWIT_MUTUALTLS: "{{ .Values.config.quickwit.mutualtls | printf "%t" | b64enc}}" + QUICKWIT_MINIMUMPRIORITY: "{{ .Values.config.quickwit.minimumpriority | b64enc}}" + + # Talon + TALON_ADDRESS: "{{ .Values.config.talon.address | b64enc}}" + TALON_CHECKCERT: "{{ .Values.config.talon.checkcert | printf "%t" | b64enc}}" + TALON_MINIMUMPRIORITY: "{{ .Values.config.talon.minimumpriority | b64enc}}" + # WebUI Output {{- if .Values.webui.enabled -}} - {{ $weburl := printf "http://%s-ui:2802" (include "falcosidekick.fullname" .) }} + {{ $weburl := printf "http://%s-ui:%d" (include "falcosidekick.fullname" .) 
(.Values.webui.service.port | int) }} WEBUI_URL: "{{ $weburl | b64enc }}" {{- end }} -{{- end }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/service-ui.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/service-ui.yaml index 101a6105e..ad32cd69a 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/service-ui.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/service-ui.yaml @@ -6,14 +6,18 @@ metadata: name: {{ include "falcosidekick.fullname" . }}-ui namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }}-ui - app.kubernetes.io/managed-by: {{ .Release.Service }} - {{- with .Values.webui.service.annotations }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui + {{- with .Values.webui.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} annotations: + {{- with .Values.webui.customAnnotations }} {{- toYaml . | nindent 4 }} - {{- end }} + {{- end }} + {{- with .Values.webui.service.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} spec: type: {{ .Values.webui.service.type }} ports: @@ -25,8 +29,9 @@ spec: protocol: TCP name: http selector: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui - app.kubernetes.io/instance: {{ .Release.Name }}-ui + {{- include "falcosidekick.selectorLabels" . | nindent 4 }} + app.kubernetes.io/component: ui +{{- if .Values.webui.redis.enabled }} --- apiVersion: v1 kind: Service @@ -34,22 +39,21 @@ metadata: name: {{ include "falcosidekick.fullname" . }}-ui-redis namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui-redis - helm.sh/chart: {{ include "falcosidekick.chart" . 
}} - app.kubernetes.io/instance: {{ .Release.Name }}-ui - app.kubernetes.io/managed-by: {{ .Release.Service }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: ui {{- with .Values.webui.redis.service.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} spec: - type: ClusterIP + type: {{ .Values.webui.redis.service.type }} ports: - port: {{ .Values.webui.redis.service.port }} targetPort: {{ .Values.webui.redis.service.targetPort }} protocol: TCP name: redis selector: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }}-ui-redis - app.kubernetes.io/instance: {{ .Release.Name }}-ui-redis + {{- include "falcosidekick.selectorLabels" . | nindent 4 }} + app.kubernetes.io/component: ui-redis +{{- end }} {{- end }} diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/service.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/service.yaml index 8cd9df8f7..fdea8debe 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/service.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/service.yaml @@ -5,14 +5,19 @@ metadata: name: {{ include "falcosidekick.fullname" . }} namespace: {{ .Release.Namespace }} labels: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - helm.sh/chart: {{ include "falcosidekick.chart" . }} - app.kubernetes.io/instance: {{ .Release.Name }} - app.kubernetes.io/managed-by: {{ .Release.Service }} - {{- with .Values.service.annotations }} + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.service.annotations }} {{- toYaml . 
| nindent 4 }} - {{- end }} + {{- end }} + prometheus.io/scrape: "true" spec: type: {{ .Values.service.type }} ports: @@ -20,6 +25,12 @@ spec: targetPort: http protocol: TCP name: http + {{- if not (eq .Values.config.tlsserver.notlspaths "") }} + - port: {{ .Values.config.tlsserver.notlsport }} + targetPort: http-notls + protocol: TCP + name: http-notls + {{- end }} selector: - app.kubernetes.io/name: {{ include "falcosidekick.name" . }} - app.kubernetes.io/instance: {{ .Release.Name }} + {{- include "falcosidekick.selectorLabels" . | nindent 4 }} + app.kubernetes.io/component: core diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/templates/servicemonitor.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/servicemonitor.yaml new file mode 100644 index 000000000..477fae018 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/templates/servicemonitor.yaml @@ -0,0 +1,36 @@ +{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) .Values.serviceMonitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "falcosidekick.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "falcosidekick.labels" . | nindent 4 }} + app.kubernetes.io/component: core + {{- with .Values.serviceMonitor.additionalLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.customLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- with .Values.customAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + endpoints: + - port: http + {{- if .Values.serviceMonitor.interval }} + interval: {{ .Values.serviceMonitor.interval }} + {{- end }} + {{- if .Values.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }} + {{- end }} + {{- with .Values.serviceMonitor.additionalProperties }} + {{- toYaml . 
| nindent 4 }} + {{- end }} + selector: + matchLabels: + {{- include "falcosidekick.labels" . | nindent 6 }} + app.kubernetes.io/component: core +{{- end }} \ No newline at end of file diff --git a/charts/falco/falco/charts/falco/charts/falcosidekick/values.yaml b/charts/falco/falco/charts/falco/charts/falcosidekick/values.yaml index c65ec9496..c148ee729 100644 --- a/charts/falco/falco/charts/falco/charts/falcosidekick/values.yaml +++ b/charts/falco/falco/charts/falco/charts/falcosidekick/values.yaml @@ -2,366 +2,1066 @@ # This is a YAML-formatted file. # Declare variables to be passed into your templates. +# -- number of running pods replicaCount: 2 +# -- number of old ReplicaSets to retain to allow rollback (If not set, default Kubernetes value is set to 10) +# revisionHistoryLimit: 1 + image: + # -- The image registry to pull from registry: docker.io + # -- The image repository to pull from repository: falcosecurity/falcosidekick - tag: 2.26.0 + # -- The image tag to pull + tag: 2.29.0 + # -- The image pull policy pullPolicy: IfNotPresent +# -- Sidekick pod securityContext podSecurityContext: runAsUser: 1234 fsGroup: 1234 +# -- Sidekick container securityContext securityContext: {} # One or more secrets to be used when pulling images +# -- Secrets for the registry imagePullSecrets: [] # - registrySecretName +# -- Override the name nameOverride: "" +# -- Override the full name fullnameOverride: "" +# -- podSecurityPolicy podSecurityPolicy: + # -- Whether to create a podSecurityPolicy create: false +# -- Name of the priority class to be used by the Sidekick pods, priority class needs to be created beforehand priorityClassName: "" +# -- custom labels to add to all resources +customLabels: {} + +# -- custom annotations to add to all resources +customAnnotations: {} + +# -- additional labels on the pods podLabels: {} +# -- additional annotations on the pods podAnnotations: {} +serviceMonitor: + # -- enable the deployment of a Service Monitor for the Prometheus Operator. 
+ enabled: false + # -- specify Additional labels to be added on the Service Monitor. + additionalLabels: {} + # -- specify a user defined interval. When not specified Prometheus default interval is used. + interval: "" + # -- specify a user defined scrape timeout. When not specified Prometheus default scrape timeout is used. + scrapeTimeout: "" + # -- allows setting additional properties on the endpoint such as relabelings, metricRelabelings etc. + additionalProperties: {} + +prometheusRules: + # -- enable the creation of PrometheusRules for alerting + enabled: false + alerts: + warning: + # -- enable the high rate rule for the warning events + enabled: true + # -- rate interval for the high rate rule for the warning events + rate_interval: "5m" + # -- threshold for the high rate rule for the warning events + threshold: 0 + error: + # -- enable the high rate rule for the error events + enabled: true + # -- rate interval for the high rate rule for the error events + rate_interval: "5m" + # -- threshold for the high rate rule for the error events + threshold: 0 + critical: + # -- enable the high rate rule for the critical events + enabled: true + # -- rate interval for the high rate rule for the critical events + rate_interval: "5m" + # -- threshold for the high rate rule for the critical events + threshold: 0 + alert: + # -- enable the high rate rule for the alert events + enabled: true + # -- rate interval for the high rate rule for the alert events + rate_interval: "5m" + # -- threshold for the high rate rule for the alert events + threshold: 0 + emergency: + # -- enable the high rate rule for the emergency events + enabled: true + # -- rate interval for the high rate rule for the emergency events + rate_interval: "5m" + # -- threshold for the high rate rule for the emergency events + threshold: 0 + output: + # -- enable the high rate rule for the errors with the outputs + enabled: true + # -- rate interval for the high rate rule for the errors with the outputs + 
rate_interval: "5m" + # -- threshold for the high rate rule for the errors with the outputs + threshold: 0 + additionalAlerts: {} + config: + # -- Existing secret with configuration existingSecret: "" + # -- Extra environment variables extraEnv: [] + # -- Extra command-line arguments + extraArgs: [] + # -- DEBUG environment variable debug: false - ## - ## a list of escaped comma separated custom fields to add to falco events, syntax is "key:value\,key:value" + # -- a list of escaped comma separated custom fields to add to falco events, syntax is "key:value\,key:value" customfields: "" - mutualtlsfilespath: "/etc/certs" # folder which will used to store client.crt, client.key and ca.crt files for mutual tls (default: "/etc/certs") + # -- a list of escaped comma separated Go templated fields to add to falco events, syntax is "key:template\,key:template" + templatedfields: "" + # -- if not empty, the brackets in keys of Output Fields are replaced + bracketreplacer: "" + # -- if not empty, allow to change the format of the output field. 
(example: ": ") (default: ": ") + outputFieldFormat: "" + # -- folder which will be used to store client.crt, client.key and ca.crt files for mutual tls for outputs, will be deprecated in the future (default: "/etc/certs") + mutualtlsfilespath: "/etc/certs" + + mutualtlsclient: + # -- client certification file for mutual TLS client certification, takes priority over mutualtlsfilespath if not empty + certfile: "" + # -- client key file for mutual TLS client certification, takes priority over mutualtlsfilespath if not empty + keyfile: "" + # -- CA certification file for server certification for mutual TLS authentication, takes priority over mutualtlsfilespath if not empty + cacertfile: "" + + tlsclient: + # -- CA certificate file for server certification on TLS connections, appended to the system CA pool if not empty + cacertfile: "" + + tlsserver: + # -- if true TLS server will be deployed instead of HTTP + deploy: false + # -- existing secret with server.crt, server.key and ca.crt files for TLS Server + existingSecret: "" + # -- server.crt file for TLS Server + servercrt: "" + # -- server certification file path for TLS Server + certfile: "/etc/certs/server/server.crt" + # -- server.key file for TLS Server + serverkey: "" + # -- server key file path for TLS Server + keyfile: "/etc/certs/server/server.key" + # -- if true mutual TLS server will be deployed instead of TLS, deploy also has to be true + mutualtls: false + # -- ca.crt file for client certification if mutualtls is true + cacrt: "" + # -- CA certification file path for client certification if mutualtls is true + cacertfile: "/etc/certs/server/ca.crt" + # -- port to serve http server serving selected endpoints + notlsport: 2810 + # -- a comma separated list of endpoints, if not empty, and tlsserver.deploy is true, a separate http server will be deployed for the specified endpoints (/ping endpoint needs to be notls for Kubernetes to be able to perform the healthchecks) + notlspaths: "/ping" slack: + # -- Slack 
Webhook URL (ex: ), if not `empty`, Slack output is *enabled* webhookurl: "" + # -- Slack channel (optional) + channel: "" + # -- Slack Footer footer: "" + # -- Slack icon (avatar) icon: "" + # -- Slack username username: "" + # -- `all` (default), `text` (only text is displayed in Slack), `fields` (only fields are displayed in Slack) outputformat: "all" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- a Go template to format Slack Text above Attachment, displayed in addition to the output from `slack.outputformat`. If empty, no Text is displayed before Attachment messageformat: "" rocketchat: + # -- Rocketchat Webhook URL (ex: ), if not `empty`, Rocketchat output is *enabled* webhookurl: "" + # -- Rocketchat icon (avatar) icon: "" + # -- Rocketchat username username: "" + # -- `all` (default), `text` (only text is displayed in Rocketchat), `fields` (only fields are displayed in Rocketchat) outputformat: "all" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- a Go template to format Rocketchat Text above Attachment, displayed in addition to the output from `rocketchat.outputformat`. 
If empty, no Text is displayed before Attachment messageformat: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true mattermost: + # -- Mattermost Webhook URL (ex: ), if not `empty`, Mattermost output is *enabled* webhookurl: "" + # -- Mattermost Footer footer: "" + # -- Mattermost icon (avatar) icon: "" + # -- Mattermost username username: "" + # -- `all` (default), `text` (only text is displayed in Mattermost), `fields` (only fields are displayed in Mattermost) outputformat: "all" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- a Go template to format Mattermost Text above Attachment, displayed in addition to the output from `mattermost.outputformat`. If empty, no Text is displayed before Attachment messageformat: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true teams: + # -- Teams Webhook URL (ex: ), if not `empty`, Teams output is *enabled* webhookurl: "" + # -- Teams section image activityimage: "" + # -- `all` (default), `text` (only text is displayed in Teams), `facts` (only facts are displayed in Teams) outputformat: "all" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" datadog: + # -- Datadog API Key, if not `empty`, Datadog output is *enabled* apikey: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- Datadog host. Override if you are on the Datadog EU site. 
Defaults to American site with "" host: "" alertmanager: + # -- AlertManager , if not `empty`, AlertManager is *enabled* hostport: "" + # -- alertmanager endpoint on which falcosidekick posts alerts, choice is: `"/api/v1/alerts" or "/api/v2/alerts" , default is "/api/v1/alerts"` endpoint: "/api/v1/alerts" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- if set to a non-zero value, alert expires after that time in seconds (default: 0) expireafter: "" + # -- comma separated list of labels composed of a ':' separated name and value that is added to the Alerts. Example: my_label_1:my_value_1, my_label_1:my_value_2 + extralabels: "" + # -- comma separated list of annotations composed of a ':' separated name and value that is added to the Alerts. Example: my_annotation_1:my_value_1, my_annotation_1:my_value_2 + extraannotations: "" + # -- comma separated list of tuples, each composed of a ':' separated Falco priority and Alertmanager severity, that is used to override the severity label associated to the priority level of the falco event. Example: debug:value_1,critical:value2. Default mapping: emergency:critical,alert:critical,critical:critical,error:warning,warning:warning,notice:information,informational:information,debug:information. + customseveritymap: "" + # -- default priority of dropped events, values are emergency|alert|critical|error|warning|notice|informational|debug + dropeventdefaultpriority: "critical" + # -- comma separated list of priority re-evaluation thresholds of dropped events composed of a ':' separated integer threshold and string priority. 
Example: `10000:critical, 100:warning, 1:informational` + dropeventthresholds: "10000:critical, 1000:critical, 100:critical, 10:warning, 1:warning" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true elasticsearch: + # -- Elasticsearch , if not `empty`, Elasticsearch is *enabled* hostport: "" + # -- Elasticsearch index index: "falco" - type: "event" - minimumpriority: "" - mutualtls: false - checkcert: true + # -- Elasticsearch document type + type: "_doc" + # -- date suffix for index rotation: daily, monthly, annually, none + suffix: "daily" + # -- use this username to authenticate to Elasticsearch if the username is not empty username: "" + # -- use this password to authenticate to Elasticsearch if the password is not empty password: "" + # -- Replace . by _ to avoid mapping conflicts, force to true if createindextemplate==true (default: false) + flattenfields: false + # -- Create an index template (default: false) + createindextemplate: false + # -- Number of shards set by the index template (default: 3) + numberofshards: 3 + # -- Number of replicas set by the index template (default: 3) + numberofreplicas: 3 + # -- a list of comma separated custom headers to add, syntax is "key:value,key:value" + customheaders: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) + mutualtls: false + # -- check if ssl certificate of the output is valid + checkcert: true + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" influxdb: + # -- Influxdb , if not `empty`, Influxdb is *enabled* hostport: "" + # -- Influxdb database database: "falco" + # -- Influxdb organization + organization: "" + # -- write precision + precision: "ns" + # -- User to use if auth is *enabled* in Influxdb user: "" + # -- Password to use if auth 
is *enabled* in Influxdb password: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- API token to use if auth is enabled in Influxdb (disables user and password) + token: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true loki: + # -- Loki , if not `empty`, Loki is *enabled* hostport: "" - endpoint: "/api/prom/push" + # -- user for Grafana Logs + user: "" + # -- API Key for Grafana Logs + apikey: "" + # -- Loki endpoint URL path, more info: + endpoint: "/loki/api/v1/push" + # -- Loki tenant, if not `empty`, Loki tenant is *enabled* tenant: "" + # -- comma separated list of fields to use as labels additionally to rule, source, priority, tags and custom_fields extralabels: "" + # -- a list of comma separated custom headers to add, syntax is "key:value,key:value" + customheaders: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true prometheus: + # -- comma separated list of fields to use as labels additionally to rule, source, priority, tags and custom_fields extralabels: "" nats: + # -- NATS "nats://host:port", if not `empty`, NATS is *enabled* hostport: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true stan: + # -- Stan nats://{domain or ip}:{port}, if not 
empty, STAN output is *enabled* hostport: "" + # -- Cluster name, if not empty, STAN output is *enabled* clusterid: "" + # -- Client ID, if not empty, STAN output is *enabled* clientid: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true aws: + # -- Use IRSA, if true, the rolearn value will be used to set the ServiceAccount annotations and not the env var + useirsa: true + # -- AWS IAM role ARN for falcosidekick service account to associate with (optional if you use EC2 Instance Profile) rolearn: "" + # -- External id for the role to assume (optional if you use EC2 Instance Profile) + externalid: "" + # -- AWS Access Key Id (optional if you use EC2 Instance Profile) accesskeyid: "" + # -- AWS Secret Access Key (optional if you use EC2 Instance Profile) secretaccesskey: "" + # -- AWS Region (optional if you use EC2 Instance Profile) region: "" + # -- check the identity credentials, set to false for local development + checkidentity: true cloudwatchlogs: + # -- AWS CloudWatch Logs Group name, if not empty, CloudWatch Logs output is *enabled* loggroup: "" + # -- AWS CloudWatch Logs Stream name, if empty, Falcosidekick will try to create a log stream logstream: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" lambda: + # -- AWS Lambda Function Name, if not empty, AWS Lambda output is *enabled* functionname: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" sns: + # -- AWS SNS TopicARN, if not empty, AWS SNS output is *enabled* topicarn: "" + # -- Send 
RawJSON from `falco` or parse it to AWS SNS rawjson: false + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" sqs: + # -- AWS SQS Queue URL, if not empty, AWS SQS output is *enabled* url: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" s3: + # -- AWS S3, bucket name bucket: "" + # -- AWS S3, name of prefix, keys will have format: s3:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json prefix: "" + # -- Endpoint URL that overrides the default generated endpoint, use this for S3 compatible APIs + endpoint: "" + # -- Canned ACL (x-amz-acl) to use when creating the object + objectcannedacl: "bucket-owner-full-control" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" kinesis: + # -- AWS Kinesis Stream Name, if not empty, Kinesis output is *enabled* streamname: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + securitylake: + # -- Bucket for AWS SecurityLake data, if not empty, AWS SecurityLake output is enabled + bucket: "" + # -- Bucket Region + region: "" + # -- Prefix for keys + prefix: "" + # -- Account ID + accountid: "" + # -- Time in minutes between two puts to S3 (must be between 5 and 60min) + interval: 5 + # -- Max number of events by parquet file + batchsize: 1000 + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" smtp: + # -- "host:port" address of SMTP server, if not empty, SMTP output is *enabled* hostport: "" + # -- use TLS connection (true/false) + tls: true + # -- SASL Mechanisms : plain, 
oauthbearer, external, anonymous or "" (disable SASL) + authmechanism: "plain" + # -- user to access SMTP server user: "" + # -- password to access SMTP server password: "" + # -- OAuthBearer token for OAuthBearer Mechanism + token: "" + # -- identity string for Plain and External Mechanisms + identity: "" + # -- trace string for Anonymous Mechanism + trace: "" + # -- Sender address (mandatory if SMTP output is *enabled*) from: "" + # -- comma-separated list of Recipient addresses, can't be empty (mandatory if SMTP output is *enabled*) to: "" + # -- html, text outputformat: "html" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" opsgenie: + # -- Opsgenie API Key, if not empty, Opsgenie output is *enabled* apikey: "" + # -- (`us` or `eu`) region of your domain region: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true statsd: + # -- The address for the StatsD forwarder, in the form , if not empty StatsD is *enabled* forwarder: "" + # -- A prefix for all metrics namespace: "falcosidekick." dogstatsd: + # -- The address for the DogStatsD forwarder, in the form , if not empty DogStatsD is *enabled* forwarder: "" + # -- A prefix for all metrics namespace: "falcosidekick." 
+ # -- A comma-separated list of tags to add to all metrics tags: "" webhook: + # -- Webhook address, if not empty, Webhook output is *enabled* address: "" - customHeaders: "" # a list of comma separated custom headers to add, syntax is "key:value\,key:value" + # -- HTTP method: POST or PUT + method: "POST" + # -- a list of comma separated custom headers to add, syntax is "key:value\,key:value" + customHeaders: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true azure: + # -- Azure Subscription ID subscriptionID: "" + # -- Azure Resource Group name resourceGroupName: "" + # -- Azure Identity Client ID podIdentityClientID: "" + # -- Azure Identity name podIdentityName: "" eventHub: + # -- Name of the space the Hub is in namespace: "" + # -- Name of the Hub, if not empty, EventHub is *enabled* name: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" discord: + # -- Discord WebhookURL (ex: ...), if not empty, Discord output is *enabled* webhookurl: "" + # -- Discord icon (avatar) icon: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" gcp: - credentials: "" # The base64-encoded JSON key file for the GCP service account + # -- Base64 encoded JSON key file for the GCP service account + credentials: "" pubsub: - projectid: "" # The GCP Project ID containing the Pub/Sub Topic - topic: "" # The name of the Pub/Sub topic + # -- The GCP Project ID containing the Pub/Sub Topic + projectid: "" + # -- Name of the Pub/Sub topic + topic: "" + # -- a list of comma separated 
custom attributes to add, syntax is "key:value,key:value" + customattributes: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" storage: + # -- Name of prefix, keys will have format: gs:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json prefix: "" + # -- The name of the bucket bucket: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "debug" cloudfunctions: - name: "" # The name of the Cloud Function name + # -- The name of the Cloud Function which is in form `projects//locations//functions/` + name: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" cloudrun: + # -- the URL of the Cloud Run function endpoint: "" # the URL of the Cloud Run function + # -- JWT for the private access to Cloud Run function jwt: "" # JWT for the private access to Cloud Run function + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" googlechat: + # -- Google Chat Webhook URL (ex: ), if not `empty`, Google Chat output is *enabled* webhookurl: "" + # -- `all` (default), `text` (only text is displayed in Google Chat) outputformat: "all" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- a Go template to format Google Chat Text above Attachment, displayed in addition to the output from `config.googlechat.outputformat`. 
If empty, no Text is displayed before Attachment messageformat: "" kafka: + # -- comma separated list of Apache Kafka bootstrap nodes for establishing the initial connection to the cluster (ex: localhost:9092,localhost:9093). Defaults to port 9092 if no port is specified after the domain, if not empty, Kafka output is *enabled* hostport: "" + # -- Name of the topic, if not empty, Kafka output is enabled topic: "" - partition: "0" - messageformat: "" + # -- SASL authentication mechanism, if empty, no authentication (PLAIN|SCRAM_SHA256|SCRAM_SHA512) + sasl: "" + # -- Use TLS for the connections + tls: false + # -- use this username to authenticate to Kafka via SASL + username: "" + # -- use this password to authenticate to Kafka via SASL + password: "" + # -- produce messages without blocking + async: false + # -- number of acknowledgments from partition replicas required before receiving a response + requiredacks: NONE + # -- enable message compression using this algorithm, no compression (GZIP|SNAPPY|LZ4|ZSTD|NONE) + compression: "NONE" + # -- partition balancing strategy when producing + balancer: "round_robin" + # -- auto create the topic if it doesn't exist + topiccreation: false + # -- specify a client.id when communicating with the broker for tracing + clientid: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" pagerduty: + # -- Pagerduty Routing Key, if not empty, Pagerduty output is *enabled* routingkey: "" + # -- Pagerduty Region, can be 'us' or 'eu' + region: "us" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" kubeless: + # -- Name of Kubeless function, if not empty, Kubeless output is *enabled* function: "" + # -- Namespace of Kubeless function (mandatory) namespace: "" + # -- Port of service of Kubeless function. 
Default is `8080` port: 8080 + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true openfaas: + # -- Name of OpenFaaS function, if not empty, OpenFaaS is *enabled* functionname: "" + # -- Namespace of OpenFaaS function, "openfaas-fn" (default) functionnamespace: "openfaas-fn" + # -- Service of OpenFaaS Gateway, "gateway" (default) gatewayservice: "gateway" + # -- Port of service of OpenFaaS Gateway. Default is `8080` gatewayport: 8080 + # -- Namespace of OpenFaaS Gateway, "openfaas" (default) gatewaynamespace: "openfaas" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) mutualtls: false + # -- check if ssl certificate of the output is valid checkcert: true cloudevents: + # -- CloudEvents consumer http address, if not empty, CloudEvents output is *enabled* address: "" + # -- Extensions to add in the outbound Event, useful for routing extension: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "" rabbitmq: + # -- Rabbitmq URL, if not empty, Rabbitmq output is *enabled* url: "" + # -- Rabbitmq Queue name queue: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` minimumpriority: "debug" wavefront: - endpointtype: "" # Wavefront endpoint type, must be 'direct' or 'proxy'. If not empty, with endpointhost, Wavefront output is enabled - endpointhost: "" # Wavefront endpoint address (only the host). 
If not empty, with endpointhost, Wavefront output is enabled - endpointtoken: "" # Wavefront token. Must be used only when endpointtype is 'direct' - endpointmetricport: 2878 # Wavefront endpoint port when type is 'proxy' - metricname: "falco.alert" # Metric to be created in Wavefront. Defaults to falco.alert - batchsize: 10000 # max batch of data sent per flush interval. defaults to 10,000. Used only in direct mode - flushintervalseconds: 1 # Time in seconds between flushing metrics to Wavefront. Defaults to 1s - minimumpriority: "debug" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) + # -- Wavefront endpoint type, must be 'direct' or 'proxy'. If not empty, with endpointhost, Wavefront output is *enabled* + endpointtype: "" + # -- Wavefront endpoint address (only the host). If not empty, with endpointhost, Wavefront output is *enabled* + endpointhost: "" + # -- Wavefront token. Must be used only when endpointtype is 'direct' + endpointtoken: "" + # -- Port to send metrics. Only used when endpointtype is 'proxy' + endpointmetricport: 2878 + # -- Metric to be created in Wavefront. Defaults to falco.alert + metricname: "falco.alert" + # -- Wavefront batch size. If empty uses the default 10000. Only used when endpointtype is 'direct' + batchsize: 10000 + # -- Wavefront flush interval in seconds. Defaults to 1 + flushintervalseconds: 1 + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "debug" grafana: - hostport: "" # http://{domain or ip}:{port}, if not empty, Grafana output is enabled - apikey: "" # API Key to authenticate to Grafana, if not empty, Grafana output is enabled - dashboardid: "" # annotations are scoped to a specific dashboard. Optionnal. - panelid: "" # annotations are scoped to a specific panel. Optionnal. 
- allfieldsastags: false # if true, all custom fields are added as tags (default: false) - mutualtls: false # if true, checkcert flag will be ignored (server cert will always be checked) - checkcert: true # check if ssl certificate of the output is valid (default: true) - minimumpriority: "" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) + # -- http://{domain or ip}:{port}, if not empty, Grafana output is *enabled* + hostport: "" + # -- API Key to authenticate to Grafana, if not empty, Grafana output is *enabled* + apikey: "" + # -- annotations are scoped to a specific dashboard. Optional. + dashboardid: "" + # -- annotations are scoped to a specific panel. Optional. + panelid: "" + # -- if true, all custom fields are added as tags (default: false) + allfieldsastags: false + # -- a list of comma separated custom headers to add, syntax is "key:value,key:value" + customheaders: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) + mutualtls: false + # -- check if ssl certificate of the output is valid + checkcert: true + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + + grafanaoncall: + # -- if not empty, Grafana OnCall output is enabled + webhookurl: "" + # -- a list of comma separated custom headers to add, syntax is "key:value,key:value" + customheaders: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) + mutualtls: false + # -- check if ssl certificate of the output is valid + checkcert: true + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" fission: - function: "" # Name of Fission function, if not empty, Fission is enabled - routernamespace: "fission" # Namespace of Fission Router, 
"fission" (default) - routerservice: "router" # Service of Fission Router, "router" (default) + # -- Name of Fission function, if not empty, Fission is enabled + function: "" + # -- Namespace of Fission Router, "fission" (default) + routernamespace: "fission" + # -- Service of Fission Router, "router" (default) + routerservice: "router" + # -- Port of service of Fission Router routerport: 80 # Port of service of Fission Router - minimumpriority: "" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) - checkcert: true # check if ssl certificate of the output is valid (default: true) - mutualtls: false # if true, checkcert flag will be ignored (server cert will always be checked) + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- check if ssl certificate of the output is valid + checkcert: true + # -- if true, checkcert flag will be ignored (server cert will always be checked) + mutualtls: false yandex: - accesskeyid: "" # yandex access key - secretaccesskey: "" # yandex secret access key - region: "" # yandex storage region (default: ru-central-1) + # -- yandex access key + accesskeyid: "" + # -- yandex secret access key + secretaccesskey: "" + # -- yandex storage region (default: ru-central-1) + region: "" s3: - endpoint: "" # yandex storage endpoint (default: https://storage.yandexcloud.net) - bucket: "" # Yandex storage, bucket name - prefix: "" # name of prefix, keys will have format: s3:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json - minimumpriority: "" # minimum priority of event for using this output, order is emergency|alert|critical|erro + # -- yandex storage endpoint (default: https://storage.yandexcloud.net) + endpoint: "" + # -- Yandex storage, bucket name + bucket: "" + # -- name of prefix, keys will have format: 
s3:////YYYY-MM-DD/YYYY-MM-DDTHH:mm:ss.s+01:00.json + prefix: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + datastreams: + # -- yandex data streams endpoint (default: https://yds.serverless.yandexcloud.net) + endpoint: "" + # -- stream name in format /${region}/${folder_id}/${ydb_id}/${stream_name} + streamname: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" kafkarest: - address: "" # The full URL to the topic (example "http://kafkarest:8082/topics/test") - version: 2 # Kafka Rest Proxy API version 2|1 (default: 2) - minimumpriority: "" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) - mutualtls: false # if true, checkcert flag will be ignored (server cert will always be checked) - checkcert: true # check if ssl certificate of the output is valid (default: true) + # -- The full URL to the topic (example "http://kafkarest:8082/topics/test") + address: "" + # -- Kafka Rest Proxy API version 2|1 (default: 2) + version: 2 + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) + mutualtls: false + # -- check if ssl certificate of the output is valid + checkcert: true syslog: + # -- Syslog Host, if not empty, Syslog output is *enabled* host: "" + # -- Syslog endpoint port number port: "" + # -- Syslog transport protocol. 
It can be either "tcp" or "udp" protocol: "tcp" - minimumpriority: "" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) + # -- Syslog payload format. It can be either "json" or "cef" + format: "json" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" cliq: + # -- Zoho Cliq Channel URL (ex: ), if not empty, Cliq Chat output is *enabled* webhookurl: "" + # -- Cliq icon (avatar) icon: "" + # -- Prefix message text with an emoji useemoji: true + # -- `all` (default), `text` (only text is displayed in Cliq), `fields` (only fields are displayed in Cliq) outputformat: "all" + # -- a Go template to format Cliq Text above Sections, displayed in addition to the output from `cliq.outputformat`. If empty, no Text is displayed before sections. messageformat: "" - minimumpriority: "" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" policyreport: + # -- if true, policyreport output is *enabled* enabled: false + # -- Kubeconfig file to use (only if falcosidekick is running outside the cluster) kubeconfig: "~/.kube/config" + # -- the max number of events that can be in a policyreport maxevents: 1000 + # -- if true, the events with lowest severity are pruned first, in FIFO order prunebypriority: false - minimumpriority: "" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + 
minimumpriority: "" + + nodered: + # -- Node-RED address, if not empty, Node-RED output is enabled + address: "" + # -- User if Basic Auth is enabled for 'http in' node in Node-RED + user: "" + # -- Password if Basic Auth is enabled for 'http in' node in Node-RED + password: "" + # -- Custom headers to add in POST, useful for Authentication, syntax is "key:value\,key:value" + customheaders: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- check if ssl certificate of the output is valid + checkcert: true + + mqtt: + # -- Broker address, can start with tcp:// or ssl://, if not empty, MQTT output is enabled + broker: "" + # -- Topic for messages + topic: "falco/events" + # -- QOS for messages + qos: 0 + # -- If true, messages are retained + retained: false + # -- User if the authentication is enabled in the broker + user: "" + # -- Password if the authentication is enabled in the broker + password: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- check if ssl certificate of the output is valid + checkcert: true + + zincsearch: + # -- http://{domain or ip}:{port}, if not empty, ZincSearch output is enabled + hostport: "" + # -- index + index: "falco" + # -- use this username to authenticate to ZincSearch + username: "" + # -- use this password to authenticate to ZincSearch + password: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- check if ssl certificate of the output is valid + checkcert: true + + gotify: + # -- http://{domain or ip}:{port}, if not empty, Gotify output is enabled + hostport: "" + # -- API Token + token: "" + # -- Format of the messages (plaintext, markdown, json) + format: 
"markdown" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- check if ssl certificate of the output is valid + checkcert: true + + tekton: + # -- EventListener address, if not empty, Tekton output is enabled + eventlistener: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- check if ssl certificate of the output is valid + checkcert: true + + spyderbat: + # -- Organization to send output to, if not empty, Spyderbat output is enabled + orguid: "" + # -- Spyderbat API key with access to the organization + apikey: "" + # -- Spyderbat API url + apiurl: "https://api.spyderbat.com" + # -- Spyderbat source ID, max 32 characters + source: "falcosidekick" + # -- Spyderbat source description and display name if not empty, max 256 characters + sourcedescription: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + + timescaledb: + # -- TimescaleDB host, if not empty, TimescaleDB output is enabled + host: "" + # -- TimescaleDB port (default: 5432) + port: 5432 + # -- Username to authenticate with TimescaleDB + user: "postgres" + # -- Password to authenticate with TimescaleDB + password: "postgres" + # -- TimescaleDB database used + database: "" + # -- Hypertable to store data events (default: falco_events). See TimescaleDB setup for more info + hypertablename: "falco_events" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + + redis: + # -- Redis address, if not empty, Redis output is enabled + address: "" + # -- Password to authenticate with Redis + password: "" + # -- Redis database number + database: 0 + 
# -- Redis storage type: hashmap or list + storagetype: "list" + # -- Redis storage key name for hashmap, list + key: "falco" + # -- minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" + minimumpriority: "" + + telegram: + # -- telegram bot authentication token + token: "" + # -- telegram Identifier of the shared chat + chatid: "" + # -- check if ssl certificate of the output is valid + checkcert: true + # -- minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" + minimumpriority: "" + + n8n: + # -- N8N address, if not empty, N8N output is enabled + address: "" + # -- Username to authenticate with N8N in basic auth + user: "" + # -- Password to authenticate with N8N in basic auth + password: "" + # -- Header Auth Key to authenticate with N8N + headerauthname: "" + # -- Header Auth Value to authenticate with N8N + headerauthvalue: "" + # -- check if ssl certificate of the output is valid + checkcert: true + # -- minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" + minimumpriority: "" + + openobserve: + # -- http://{domain or ip}:{port}, if not empty, OpenObserve output is enabled + hostport: "" + # -- Organization name + organizationname: "default" + # -- Stream name + streamname: "falco" + # -- minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" + minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) + mutualtls: false + # -- check if ssl certificate of the output is valid + checkcert: true + # -- use this username to authenticate to OpenObserve if the username is not empty + username: "" + # -- use this password to authenticate to OpenObserve if the password is not empty + password: "" + # -- a list of comma 
separated custom headers to add, syntax is "key:value,key:value" + customheaders: "" + + dynatrace: + # -- Dynatrace API token with the "logs.ingest" scope, more info: https://dt-url.net/8543sda, if not empty, Dynatrace output is enabled + apitoken: "" + # -- Dynatrace API url, use https://ENVIRONMENTID.live.dynatrace.com/api for Dynatrace SaaS and https://YOURDOMAIN/e/ENVIRONMENTID/api for Dynatrace Managed, more info: https://dt-url.net/ej43qge + apiurl: "" + # -- check if ssl certificate of the output is valid + checkcert: true + # -- minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" + minimumpriority: "" + + otlp: + traces: + # -- OTLP endpoint in the form of http://{domain or ip}:4318/v1/traces, if not empty, OTLP Traces output is enabled + endpoint: "" + # -- OTLP protocol http/json, http/protobuf, grpc (default: "" which uses SDK default: http/json) + protocol: "" + # -- OTLP timeout: timeout value in milliseconds (default: "" which uses SDK default: 10000) + timeout: "" + # -- OTLP headers: list of headers to apply to all outgoing traces in the form of "some-key=some-value,other-key=other-value" (default: "") + headers: "" + # -- Set to true if you want traces to be sent synchronously (default: false) + synced: false + # -- Artificial span duration in milliseconds (default: 1000) + duration: 1000 + # -- Extra env vars (override the other settings) + extraenvvars: {} + # OTEL_EXPORTER_OTLP_TRACES_TIMEOUT: 10000 + # OTEL_EXPORTER_OTLP_TIMEOUT: 10000 + # -- minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" + minimumpriority: "" + # -- check if ssl certificate of the output is valid + checkcert: true + + sumologic: + # -- Sumologic HTTP Source URL, if not empty, Sumologic output is enabled + receiverURL: "" + # -- Override the default Sumologic Source Category + sourceCategory: "" + # -- Override the 
default Sumologic Source Host + sourceHost: "" + # -- Override the default Sumologic Source Name + name: "" + # -- minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default) + minimumpriority: "" + # -- check if ssl certificate of the output is valid (default: true) + checkcert: true + + quickwit: + # -- http://{domain or ip}:{port}, if not empty, Quickwit output is enabled + hostport: "" + # -- API endpoint (containing the API version, overridable in case of quickwit behind a reverse proxy with URL rewriting) + apiendpoint: "/api/v1" + # -- Index + index: "falco" + # -- Version of Quickwit + version: "0.7" + # -- Autocreate a falco index mapping if it doesn't exist + autocreateindex: false + # -- a list of comma separated custom headers to add, syntax is "key:value,key:value" + customHeaders: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- if true, checkcert flag will be ignored (server cert will always be checked) + mutualtls: false + # -- check if ssl certificate of the output is valid + checkcert: true + talon: + # -- Talon address, if not empty, Talon output is enabled + address: "" + # -- minimum priority of event to use this output, order is `emergency\|alert\|critical\|error\|warning\|notice\|informational\|debug or ""` + minimumpriority: "" + # -- check if ssl certificate of the output is valid + checkcert: true service: + # -- Service type type: ClusterIP + # -- Service port port: 2801 + # -- Service annotations annotations: {} # networking.gke.io/load-balancer-type: Internal ingress: + # -- Whether to create the ingress enabled: false + # -- ingress class name + ingressClassName: "" + # -- Ingress annotations annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" + # -- Ingress hosts hosts: - host: falcosidekick.local 
paths: - path: / # -- pathType (e.g. ImplementationSpecific, Prefix, .. etc.) # pathType: Prefix - + # -- Ingress TLS configuration tls: [] # - secretName: chart-example-tls # hosts: # - chart-example.local +# -- The resources for falcosidekick pods resources: {} # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little @@ -374,12 +1074,16 @@ resources: {} # cpu: 100m # memory: 128Mi +# -- Sidekick nodeSelector field nodeSelector: {} +# -- Tolerations for pod assignment tolerations: [] +# -- Affinity for the Sidekick pods affinity: {} +# -- Extra volumes for sidekick deployment extraVolumes: [] # - name: optional-mtls-volume # configMap: @@ -389,55 +1093,113 @@ extraVolumes: [] # - key: mtlscert.optional.tls # path: mtlscert.optional.tls +# -- Extra volume mounts for sidekick deployment extraVolumeMounts: [] # - mountPath: /etc/certs/mtlscert.optional.tls # name: optional-mtls-volume +testConnection: + # -- test connection nodeSelector field + nodeSelector: {} + + # -- Tolerations for pod assignment + tolerations: [] + + # -- Affinity for the test connection pod + affinity: {} + webui: + # -- enable Falcosidekick-UI enabled: false - + # -- number of running pods replicaCount: 2 - + # -- number of old history to retain to allow rollback (If not set, default Kubernetes value is set to 10) + # revisionHistoryLimit: 1 + # -- Log level ("debug", "info", "warning", "error") + loglevel: "info" + # -- TTL for keys, the syntax in X, with : s, m, d, w (0 for no ttl) + ttl: 0 + # -- User in format : + user: "admin:admin" + # -- Disable the basic auth + disableauth: false + # -- Existing secret with configuration + existingSecret: "" + # -- Allow CORS + allowcors: false image: + # -- The web UI image registry to pull from registry: docker.io + # -- The web UI image repository to pull from repository: falcosecurity/falcosidekick-ui - tag: "v2.0.2" + # -- The 
web UI image tag to pull + tag: "2.2.0" + # -- The web UI image pull policy pullPolicy: IfNotPresent + # -- Web UI wait-redis initContainer + initContainer: + image: + # -- wait-redis initContainer image registry to pull from + registry: docker.io + # -- wait-redis initContainer image repository to pull from + repository: busybox + # -- wait-redis initContainer image tag to pull + tag: 1.31 + # -- wait-redis initContainer securityContext + securityContext: {} + # -- wait-redis initContainer resources + resources: {} + + # -- Web UI pod securityContext podSecurityContext: runAsUser: 1234 fsGroup: 1234 + # -- Web UI container securityContext securityContext: {} + # -- Name of the priority class to be used by the Web UI pods, priority class needs to be created beforehand priorityClassName: "" + # -- additional labels on the web UI pods podLabels: {} + # -- additional annotations on the web UI pods podAnnotations: {} service: - # type: LoadBalancer + # -- The web UI service type type: ClusterIP + # -- The web UI service port for the falcosidekick-ui port: 2802 + # -- The web UI service nodePort nodePort: 30282 + # -- The web UI service targetPort targetPort: 2802 + # -- The web UI service annotations (use this to set an internal LB, for example.) annotations: {} # service.beta.kubernetes.io/aws-load-balancer-internal: "true" ingress: + # -- Whether to create the Web UI ingress enabled: false + # -- ingress class name + ingressClassName: "" + # -- Web UI ingress annotations annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" + # -- Web UI ingress hosts configuration hosts: - host: falcosidekick-ui.local paths: - path: / + # -- Web UI ingress TLS configuration tls: [] # - secretName: chart-example-tls # hosts: # - chart-example.local - + # -- The resources for the web UI pods resources: {} # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. 
This also increases chances charts run on environments with little @@ -449,39 +1211,76 @@ webui: # requests: # cpu: 100m # memory: 128Mi - + # -- Web UI nodeSelector field nodeSelector: {} - + # -- Tolerations for pod assignment tolerations: [] - + # -- Affinity for the Web UI pods affinity: {} - + externalRedis: + # -- Enable or disable the usage of an external Redis. Is mutually exclusive with webui.redis.enabled. + enabled: false + # -- The URL of the external Redis database with RediSearch > v2 + url: "" + # -- The port of the external Redis database with RediSearch > v2 + port: 6379 redis: + # -- Is mutually exclusive with webui.externalRedis.enabled + enabled: true image: + # -- The web UI Redis image registry to pull from registry: docker.io - repository: redislabs/redisearch - tag: "2.2.4" + # -- The web UI Redis image repository to pull from + repository: redis/redis-stack + # -- The web UI Redis image tag to pull from + tag: "7.2.0-v11" + # -- The web UI image pull policy pullPolicy: IfNotPresent + # -- Existing secret with configuration + existingSecret: "" + + # -- Set a password for Redis + password: "" + + # -- Name of the priority class to be used by the Web UI Redis pods, priority class needs to be created beforehand priorityClassName: "" + # -- custom labels to add to all resources + customLabels: {} + + # -- custom annotations to add to all resources + customAnnotations: {} + + # -- additional labels on the pods podLabels: {} + # -- additional annotations on the pods podAnnotations: {} + # -- Enable the PVC for the redis pod + storageEnabled: true + # -- Size of the PVC for the redis pod storageSize: "1Gi" + # -- Storage class of the PVC for the redis pod storageClass: "" service: - # type: LoadBalancer + # -- The web UI Redis service type (i.e. LoadBalancer) type: ClusterIP + # -- The web UI Redis service port for the falcosidekick-ui port: 6379 + # -- The web UI Redis service targetPort targetPort: 6379 + # -- The web UI Redis service annotations (use this to set an internal LB, for example.) annotations: {} + # -- Web UI Redis pod securityContext podSecurityContext: {} + # -- Web UI Redis container securityContext securityContext: {} + # -- The resources for the redis pod resources: {} # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little @@ -494,8 +1293,11 @@ webui: # cpu: 100m # memory: 128Mi + # -- Web UI Redis nodeSelector field nodeSelector: {} + # -- Tolerations for pod assignment tolerations: [] + # -- Affinity for the Web UI Redis pods affinity: {} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/.helmignore b/charts/falco/falco/charts/falco/charts/k8s-metacollector/.helmignore new file mode 100644 index 000000000..0e8a0eb36 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/CHANGELOG.md b/charts/falco/falco/charts/falco/charts/k8s-metacollector/CHANGELOG.md new file mode 100644 index 000000000..a56b211a4 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/CHANGELOG.md @@ -0,0 +1,56 @@ + +# Change Log + +This file documents all notable changes to `k8s-metacollector` Helm Chart. 
The release +numbering uses [semantic versioning](http://semver.org). + +## v0.1.10 + +* Fix Grafana dashboards datasources + +## v0.1.9 + +* Add podLabels + +## v0.1.8 + +* Bump application version to 0.1.1. For more info see release notes: https://github.com/falcosecurity/k8s-metacollector/releases/tag/v0.1.1 + +## v0.1.7 + +* Lower initial delay seconds for readiness and liveness probes; + +## v0.1.6 + +* Add grafana dashboard; + +## v0.1.5 + +* Fix service monitor indentation; + +## v0.1.4 + +* Lower `interval` and `scrape_timeout` values for service monitor; + +## v0.1.3 + +* Bump application version to 0.1.3 + +## v0.1.2 + +### Major Changes + +* Update unit tests; + +## v0.1.1 + +### Major Changes + +* Add `work in progress` disclaimer; +* Update chart info. + +## v0.1.0 + +### Major Changes + +* Initial release of k8s-metacollector Helm Chart. **Note:** the chart uses the `main` tag, since we don't have released the k8s-metacollector yet. diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/Chart.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/Chart.yaml new file mode 100644 index 000000000..f0c188ad5 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/Chart.yaml @@ -0,0 +1,13 @@ +apiVersion: v2 +appVersion: 0.1.1 +description: Install k8s-metacollector to fetch and distribute Kubernetes metadata + to Falco instances. 
+home: https://github.com/falcosecurity/k8s-metacollector +maintainers: +- email: cncf-falco-dev@lists.cncf.io + name: The Falco Authors +name: k8s-metacollector +sources: +- https://github.com/falcosecurity/k8s-metacollector +type: application +version: 0.1.10 diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/README.gotmpl b/charts/falco/falco/charts/falco/charts/k8s-metacollector/README.gotmpl new file mode 100644 index 000000000..96d017f07 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/README.gotmpl @@ -0,0 +1,71 @@ +# k8s-metacollector + +[k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers. + +## Introduction + +This chart installs the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) in a Kubernetes cluster. The main application will be deployed as a Kubernetes deployment with a replica count of 1. In order for the application to work correctly, the following resources will be created: +* ServiceAccount; +* ClusterRole; +* ClusterRoleBinding; +* Service; +* ServiceMonitor (optional); + +*Note*: Increasing the number of replicas is not recommended. The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) does not implement memory sharding techniques. Furthermore, events are distributed over `gRPC` using `streams`, which does not work well with the load balancing mechanisms implemented by Kubernetes. 
+ +## Adding `falcosecurity` repository + +Before installing the chart, add the `falcosecurity` charts repository: + +```bash +helm repo add falcosecurity https://falcosecurity.github.io/charts +helm repo update +``` + +## Installing the Chart + +To install the chart with default values and release name `k8s-metacollector` run: + +```bash +helm install k8s-metacollector falcosecurity/k8s-metacollector --namespace metacollector --create-namespace +``` + +After a few seconds, k8s-metacollector should be running in the `metacollector` namespace. + +### Enabling ServiceMonitor +Assuming that Prometheus scrapes only the ServiceMonitors that carry a `release` label, the following command will install and label the ServiceMonitor: + +```bash +helm install k8s-metacollector falcosecurity/k8s-metacollector \ + --create-namespace \ + --namespace metacollector \ + --set serviceMonitor.create=true \ + --set serviceMonitor.labels.release="kube-prometheus-stack" +``` + +### Deploying the Grafana Dashboard +By setting `grafana.dashboards.enabled=true` the k8s-metacollector's Grafana dashboard is deployed in the cluster using a configmap. +Depending on Grafana's configuration, the configmap can be picked up by the Grafana dashboard sidecar. +The following command will deploy the k8s-metacollector + serviceMonitor + Grafana dashboard: + +```bash +helm install k8s-metacollector falcosecurity/k8s-metacollector \ + --create-namespace \ + --namespace metacollector \ + --set serviceMonitor.create=true \ + --set serviceMonitor.labels.release="kube-prometheus-stack" \ + --set grafana.dashboards.enabled=true +``` + +## Uninstalling the Chart +To uninstall the `k8s-metacollector` release in namespace `metacollector`: +```bash +helm uninstall k8s-metacollector --namespace metacollector +``` +The command removes all the Kubernetes resources associated with the chart and deletes the release.
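The repeated `--set` flags shown above can equivalently be collected in a values file and passed to Helm with `-f`. A minimal sketch (the file name `metacollector-values.yaml` is just an example; the keys mirror this chart's `values.yaml`):

```yaml
# metacollector-values.yaml -- example overrides for the k8s-metacollector chart
serviceMonitor:
  create: true
  labels:
    release: kube-prometheus-stack
grafana:
  dashboards:
    enabled: true
```

Install it with `helm install k8s-metacollector falcosecurity/k8s-metacollector --namespace metacollector --create-namespace -f metacollector-values.yaml`.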
+ +## Configuration + +The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. See `values.yaml` for the full list. + +{{ template "chart.valuesSection" . }} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/README.md b/charts/falco/falco/charts/falco/charts/k8s-metacollector/README.md new file mode 100644 index 000000000..def3ab4a8 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/README.md @@ -0,0 +1,151 @@ +# k8s-metacollector + +[k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) is a self-contained module that can be deployed within a Kubernetes cluster to gather metadata from various Kubernetes resources and transmit it to designated subscribers. + +## Introduction + +This chart installs the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) in a Kubernetes cluster. The main application is deployed as a Kubernetes Deployment with a replica count of 1. In order for the application to work correctly, the following resources will be created: +* ServiceAccount; +* ClusterRole; +* ClusterRoleBinding; +* Service; +* ServiceMonitor (optional). + +*Note*: Increasing the number of replicas is not recommended. The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) does not implement memory sharding techniques. Furthermore, events are distributed over `gRPC` using `streams`, which does not work well with the load balancing mechanisms implemented by Kubernetes.
+ +## Adding `falcosecurity` repository + +Before installing the chart, add the `falcosecurity` charts repository: + +```bash +helm repo add falcosecurity https://falcosecurity.github.io/charts +helm repo update +``` + +## Installing the Chart + +To install the chart with default values and release name `k8s-metacollector` run: + +```bash +helm install k8s-metacollector falcosecurity/k8s-metacollector --namespace metacollector --create-namespace +``` + +After a few seconds, k8s-metacollector should be running in the `metacollector` namespace. + +### Enabling ServiceMonitor +Assuming that Prometheus scrapes only the ServiceMonitors that carry a `release` label, the following command will install and label the ServiceMonitor: + +```bash +helm install k8s-metacollector falcosecurity/k8s-metacollector \ + --create-namespace \ + --namespace metacollector \ + --set serviceMonitor.create=true \ + --set serviceMonitor.labels.release="kube-prometheus-stack" +``` + +### Deploying the Grafana Dashboard +By setting `grafana.dashboards.enabled=true` the k8s-metacollector's Grafana dashboard is deployed in the cluster using a configmap. +Depending on Grafana's configuration, the configmap can be picked up by the Grafana dashboard sidecar. +The following command will deploy the k8s-metacollector + serviceMonitor + Grafana dashboard: + +```bash +helm install k8s-metacollector falcosecurity/k8s-metacollector \ + --create-namespace \ + --namespace metacollector \ + --set serviceMonitor.create=true \ + --set serviceMonitor.labels.release="kube-prometheus-stack" \ + --set grafana.dashboards.enabled=true +``` + +## Uninstalling the Chart +To uninstall the `k8s-metacollector` release in namespace `metacollector`: +```bash +helm uninstall k8s-metacollector --namespace metacollector +``` +The command removes all the Kubernetes resources associated with the chart and deletes the release.
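The repeated `--set` flags shown above can equivalently be collected in a values file and passed to Helm with `-f`. A minimal sketch (the file name `metacollector-values.yaml` is just an example; the keys mirror this chart's `values.yaml`):

```yaml
# metacollector-values.yaml -- example overrides for the k8s-metacollector chart
serviceMonitor:
  create: true
  labels:
    release: kube-prometheus-stack
grafana:
  dashboards:
    enabled: true
```

Install it with `helm install k8s-metacollector falcosecurity/k8s-metacollector --namespace metacollector --create-namespace -f metacollector-values.yaml`.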
+ +## Configuration + +The following table lists the main configurable parameters of the k8s-metacollector chart v0.1.10 and their default values. See `values.yaml` for the full list. + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | affinity allows pod placement based on node characteristics, or any other custom labels assigned to nodes. | +| containerSecurityContext | object | `{"capabilities":{"drop":["ALL"]}}` | containerSecurityContext holds the security settings for the container. | +| containerSecurityContext.capabilities | object | `{"drop":["ALL"]}` | capabilities fine-grained privileges that can be assigned to processes. | +| containerSecurityContext.capabilities.drop | list | `["ALL"]` | drop drops the given set of privileges. | +| fullnameOverride | string | `""` | fullNameOverride same as nameOverride but for the full name. | +| grafana | object | `{"dashboards":{"configMaps":{"collector":{"folder":"","name":"k8s-metacollector-grafana-dashboard","namespace":""}},"enabled":false}}` | grafana contains the configuration related to grafana. | +| grafana.dashboards | object | `{"configMaps":{"collector":{"folder":"","name":"k8s-metacollector-grafana-dashboard","namespace":""}},"enabled":false}` | dashboards contains configuration for grafana dashboards. | +| grafana.dashboards.configMaps | object | `{"collector":{"folder":"","name":"k8s-metacollector-grafana-dashboard","namespace":""}}` | configmaps to be deployed that contain a grafana dashboard. | +| grafana.dashboards.configMaps.collector | object | `{"folder":"","name":"k8s-metacollector-grafana-dashboard","namespace":""}` | collector contains the configuration for collector's dashboard. | +| grafana.dashboards.configMaps.collector.folder | string | `""` | folder where the dashboard is stored by grafana.
| +| grafana.dashboards.configMaps.collector.name | string | `"k8s-metacollector-grafana-dashboard"` | name specifies the name for the configmap. | +| grafana.dashboards.configMaps.collector.namespace | string | `""` | namespace specifies the namespace for the configmap. | +| grafana.dashboards.enabled | bool | `false` | enabled specifies whether the dashboards should be deployed. | +| healthChecks | object | `{"livenessProbe":{"httpGet":{"path":"/healthz","port":8081},"initialDelaySeconds":45,"periodSeconds":15,"timeoutSeconds":5},"readinessProbe":{"httpGet":{"path":"/readyz","port":8081},"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}}` | healthChecks contains the configuration for liveness and readiness probes. | +| healthChecks.livenessProbe | object | `{"httpGet":{"path":"/healthz","port":8081},"initialDelaySeconds":45,"periodSeconds":15,"timeoutSeconds":5}` | livenessProbe is a diagnostic mechanism used to determine whether a container within a Pod is still running and healthy. | +| healthChecks.livenessProbe.httpGet | object | `{"path":"/healthz","port":8081}` | httpGet specifies that the liveness probe will make an HTTP GET request to check the health of the container. | +| healthChecks.livenessProbe.httpGet.path | string | `"/healthz"` | path is the specific endpoint on which the HTTP GET request will be made. | +| healthChecks.livenessProbe.httpGet.port | int | `8081` | port is the port on which the container exposes the "/healthz" endpoint. | +| healthChecks.livenessProbe.initialDelaySeconds | int | `45` | initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. | +| healthChecks.livenessProbe.periodSeconds | int | `15` | periodSeconds specifies the interval at which the liveness probe will be repeated. | +| healthChecks.livenessProbe.timeoutSeconds | int | `5` | timeoutSeconds is the number of seconds after which the probe times out.
| +| healthChecks.readinessProbe | object | `{"httpGet":{"path":"/readyz","port":8081},"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}` | readinessProbe is a mechanism used to determine whether a container within a Pod is ready to serve traffic. | +| healthChecks.readinessProbe.httpGet | object | `{"path":"/readyz","port":8081}` | httpGet specifies that the readiness probe will make an HTTP GET request to check whether the container is ready. | +| healthChecks.readinessProbe.httpGet.path | string | `"/readyz"` | path is the specific endpoint on which the HTTP GET request will be made. | +| healthChecks.readinessProbe.httpGet.port | int | `8081` | port is the port on which the container exposes the "/readyz" endpoint. | +| healthChecks.readinessProbe.initialDelaySeconds | int | `30` | initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. | +| healthChecks.readinessProbe.periodSeconds | int | `15` | periodSeconds specifies the interval at which the readiness probe will be repeated. | +| healthChecks.readinessProbe.timeoutSeconds | int | `5` | timeoutSeconds is the number of seconds after which the probe times out. | +| image | object | `{"pullPolicy":"IfNotPresent","pullSecrets":[],"registry":"docker.io","repository":"falcosecurity/k8s-metacollector","tag":""}` | image is the configuration for the k8s-metacollector image. | +| image.pullPolicy | string | `"IfNotPresent"` | pullPolicy is the policy used to determine when a node should attempt to pull the container image. | +| image.pullSecrets | list | `[]` | pullSecrets is a list of secrets containing credentials used when pulling from private/secure registries. | +| image.registry | string | `"docker.io"` | registry is the image registry to pull from. | +| image.repository | string | `"falcosecurity/k8s-metacollector"` | repository is the image repository to pull from. | +| image.tag | string | `""` | tag is the image tag to pull.
Overrides the image tag whose default is the chart appVersion. | +| nameOverride | string | `""` | nameOverride is the new name used to override the release name used for k8s-metacollector components. | +| namespaceOverride | string | `""` | namespaceOverride overrides the deployment namespace. It's useful for multi-namespace deployments in combined charts. | +| nodeSelector | object | `{}` | nodeSelector specifies a set of key-value pairs that must match labels assigned to nodes for the Pod to be eligible for scheduling on that node. | +| podAnnotations | object | `{}` | podAnnotations are custom annotations to be added to the pod. | +| podLabels | object | `{}` | podLabels are labels to be added to the pod. | +| podSecurityContext | object | `{"fsGroup":1000,"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000}` | These settings are overridden by the ones specified for the container when there is overlap. | +| podSecurityContext.fsGroup | int | `1000` | fsGroup specifies the group ID (GID) that should be used for the volume mounted within a Pod. | +| podSecurityContext.runAsGroup | int | `1000` | runAsGroup specifies the group ID (GID) that the containers inside the pod should run as. | +| podSecurityContext.runAsNonRoot | bool | `true` | runAsNonRoot when set to true enforces that the specified container runs as a non-root user. | +| podSecurityContext.runAsUser | int | `1000` | runAsUser specifies the user ID (UID) that the containers inside the pod should run as. | +| replicaCount | int | `1` | replicaCount is the number of identical copies of the k8s-metacollector. | +| resources | object | `{}` | resources defines the computing resources (CPU and memory) that are allocated to the containers running within the Pod.
| +| service | object | `{"create":true,"ports":{"broker-grpc":{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"},"health-probe":{"port":8081,"protocol":"TCP","targetPort":"health-probe"},"metrics":{"port":8080,"protocol":"TCP","targetPort":"metrics"}},"type":"ClusterIP"}` | service exposes the k8s-metacollector services to be accessed from within the cluster. ref: https://kubernetes.io/docs/concepts/services-networking/service/ | +| service.create | bool | `true` | enabled specifies whether a service should be created. | +| service.ports | object | `{"broker-grpc":{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"},"health-probe":{"port":8081,"protocol":"TCP","targetPort":"health-probe"},"metrics":{"port":8080,"protocol":"TCP","targetPort":"metrics"}}` | ports denotes all the ports on which the Service will listen. | +| service.ports.broker-grpc | object | `{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"}` | broker-grpc denotes a listening service named "grpc-broker" | +| service.ports.broker-grpc.port | int | `45000` | port is the port on which the Service will listen. | +| service.ports.broker-grpc.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. | +| service.ports.broker-grpc.targetPort | string | `"broker-grpc"` | targetPort is the port on which the Pod is listening. | +| service.ports.health-probe | object | `{"port":8081,"protocol":"TCP","targetPort":"health-probe"}` | health-probe denotes a listening service named "health-probe" | +| service.ports.health-probe.port | int | `8081` | port is the port on which the Service will listen. | +| service.ports.health-probe.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. | +| service.ports.health-probe.targetPort | string | `"health-probe"` | targetPort is the port on which the Pod is listening. 
| +| service.ports.metrics | object | `{"port":8080,"protocol":"TCP","targetPort":"metrics"}` | metrics denotes a listening service named "metrics". | +| service.ports.metrics.port | int | `8080` | port is the port on which the Service will listen. | +| service.ports.metrics.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. | +| service.ports.metrics.targetPort | string | `"metrics"` | targetPort is the port on which the Pod is listening. | +| service.type | string | `"ClusterIP"` | type denotes the service type. Setting it to "ClusterIP" ensures that the service is accessible only from within the cluster. | +| serviceAccount | object | `{"annotations":{},"create":true,"name":""}` | serviceAccount is the configuration for the service account. | +| serviceAccount.annotations | object | `{}` | annotations to add to the service account. | +| serviceAccount.create | bool | `true` | create specifies whether a service account should be created. | +| serviceAccount.name | string | `""` | If not set and create is true, a name is generated using the full name template. | +| serviceMonitor | object | `{"create":false,"interval":"15s","labels":{},"path":"/metrics","relabelings":[],"scheme":"http","scrapeTimeout":"10s","targetLabels":[],"tlsConfig":{}}` | serviceMonitor holds the configuration for the ServiceMonitor CRD. A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should discover and scrape metrics from the k8s-metacollector service. | +| serviceMonitor.create | bool | `false` | create specifies whether a ServiceMonitor CRD should be created for the Prometheus Operator. https://github.com/coreos/prometheus-operator Enable it only if the ServiceMonitor CRD is installed in your cluster. | +| serviceMonitor.interval | string | `"15s"` | interval specifies the time interval at which Prometheus should scrape metrics from the service.
| +| serviceMonitor.labels | object | `{}` | labels set of labels to be applied to the ServiceMonitor resource. If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right label here in order for the ServiceMonitor to be selected for target discovery. | +| serviceMonitor.path | string | `"/metrics"` | path at which the metrics are exposed by the k8s-metacollector. | +| serviceMonitor.relabelings | list | `[]` | relabelings configures the relabeling rules to apply to the target’s metadata labels. | +| serviceMonitor.scheme | string | `"http"` | scheme specifies the network protocol used by the metrics endpoint. In this case HTTP. | +| serviceMonitor.scrapeTimeout | string | `"10s"` | scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for that target. | +| serviceMonitor.targetLabels | list | `[]` | targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics. | +| serviceMonitor.tlsConfig | object | `{}` | tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when scraping metrics from a service. It allows you to define the details of the TLS connection, such as CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support TLS configuration for the metrics endpoint. | +| tolerations | list | `[]` | tolerations are applied to pods and allow them to be scheduled on nodes with matching taints.
| diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/dashboards/k8s-metacollector-dashboard.json b/charts/falco/falco/charts/falco/charts/k8s-metacollector/dashboards/k8s-metacollector-dashboard.json new file mode 100644 index 000000000..c9682d203 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/dashboards/k8s-metacollector-dashboard.json @@ -0,0 +1,1650 @@ +{ + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": { + "type": "grafana", + "uid": "-- Grafana --" + }, + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "target": { + "limit": 100, + "matchAny": false, + "tags": [], + "type": "dashboard" + }, + "type": "dashboard" + } + ] + }, + "description": "", + "editable": true, + "fiscalYearStartMonth": 0, + "graphTooltip": 0, + "id": 146, + "links": [], + "liveNow": false, + "panels": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "continuous-GrYlRd" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 20, + "gradientMode": "scheme", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "smooth", + "lineWidth": 3, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "percent" + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 0 + }, + "id": 2, + "interval": "1m", + "options": { + 
"legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "pluginVersion": "8.4.3", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "code", + "exemplar": true, + "expr": "rate(process_cpu_seconds_total{namespace=\"$namespace\", pod=\"$pod\"}[5m]) * 100", + "format": "time_series", + "interval": "", + "intervalFactor": 2, + "legendFormat": "Pod: {{pod}} | Container: {{container}}", + "range": true, + "refId": "A", + "step": 10 + } + ], + "title": "Controller CPU Usage", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "continuous-GrYlRd" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 20, + "gradientMode": "scheme", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "smooth", + "lineWidth": 3, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 0 + }, + "id": 4, + "interval": "1m", + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "pluginVersion": "8.4.3", + "targets": [ + { 
+ "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "exemplar": true, + "expr": "process_resident_memory_bytes{namespace=\"$namespace\", pod=\"$pod\"}", + "format": "time_series", + "interval": "", + "intervalFactor": 2, + "legendFormat": "Pod: {{pod}} | Container: {{container}}", + "range": true, + "refId": "A", + "step": 10 + } + ], + "title": "Controller Memory Usage", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "description": "Number of subscribers", + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "blue", + "mode": "fixed" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 8 + }, + "id": 41, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "pluginVersion": "9.5.1", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "expr": "meta_collector_server_subscribers{namespace=\"$namespace\", pod=\"$pod\", job=\"$job\"}", + "legendFormat": 
"__auto", + "range": true, + "refId": "A" + } + ], + "title": "Subscribers", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "blue", + "mode": "palette-classic" + }, + "custom": { + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + } + }, + "mappings": [] + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 8 + }, + "id": 40, + "options": { + "displayLabels": [ + "name" + ], + "legend": { + "displayMode": "table", + "placement": "right", + "showLegend": true, + "values": [ + "value", + "percent" + ] + }, + "pieType": "pie", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "pluginVersion": "9.5.1", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "expr": "sum by(type) (meta_collector_broker_queue_adds{pod=\"$pod\", namespace=\"$namespace\"})", + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Events Added To Broker Queue Per Type", + "type": "piechart" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "blue", + "mode": "palette-classic" + }, + "custom": { + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + } + }, + "mappings": [] + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 16 + }, + "id": 23, + "options": { + "displayLabels": [], + "legend": { + "displayMode": "table", + "placement": "right", + "showLegend": true, + "values": [ + "value", + "percent" + ] + }, + "pieType": "pie", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", 
+ "sort": "none" + } + }, + "pluginVersion": "9.5.1", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "expr": "sum by(controller) (controller_runtime_reconcile_total{pod=\"$pod\", namespace=\"$namespace\"})", + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Reconciles Per collector", + "type": "piechart" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 16 + }, + "id": 26, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showPercentChange": false, + "textMode": "auto", + "wideLayout": true + }, + "pluginVersion": "11.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "expr": "sum by(job) (meta_collector_broker_dispatched_events{pod=\"$pod\", namespace=\"$namespace\"})", + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Total Events Dispatched", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "description": "Events sent to subscribers", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 24 + }, + "id": 17, + "options": { + "displayMode": "gradient", + "maxVizHeight": 300, + 
"minVizHeight": 10, + "minVizWidth": 0, + "namePlacement": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showUnfilled": true, + "sizing": "auto", + "valueMode": "color" + }, + "pluginVersion": "11.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "exemplar": false, + "expr": "sum by(kind) (meta_collector_broker_dispatched_events{pod=\"$pod\", namespace=\"$namespace\"})", + "format": "time_series", + "legendFormat": "{{kind}}", + "range": true, + "refId": "A" + } + ], + "title": "Events Dispatched Per Resource Kind", + "type": "bargauge" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 24 + }, + "id": 25, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showPercentChange": false, + "textMode": "auto", + "wideLayout": true + }, + "pluginVersion": "11.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "expr": "sum(meta_collector_collector_event_api_server_received{pod=\"$pod\", namespace=\"$namespace\", source=\"api-server\"})", + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Events Received from api-server", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "semi-dark-orange", + 
"mode": "fixed" + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 32 + }, + "id": 24, + "options": { + "displayMode": "gradient", + "maxVizHeight": 300, + "minVizHeight": 10, + "minVizWidth": 0, + "namePlacement": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showUnfilled": true, + "sizing": "auto", + "valueMode": "color" + }, + "pluginVersion": "11.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "expr": "sum by(name) (meta_collector_collector_event_api_server_received{pod=\"$pod\", namespace=\"$namespace\", source=\"api-server\"})", + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Events From Api Server Per collector", + "type": "bargauge" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "description": "How long in seconds an item stays in the broker queue before being processed by the broker.", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 10, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "normal" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, 
+ { + "color": "red", + "value": 80 + } + ] + }, + "unit": "s" + }, + "overrides": [ + { + "__systemRef": "hideSeriesFrom", + "matcher": { + "id": "byNames", + "options": { + "mode": "exclude", + "names": [ + "P50 blockingChannel 10.16.1.4:8080 " + ], + "prefix": "All except:", + "readOnly": true + } + }, + "properties": [ + { + "id": "custom.hideFrom", + "value": { + "legend": false, + "tooltip": false, + "viz": true + } + } + ] + } + ] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 32 + }, + "id": 13, + "options": { + "legend": { + "calcs": [ + "max", + "mean" + ], + "displayMode": "list", + "placement": "right", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "exemplar": true, + "expr": "histogram_quantile(0.5, sum by(instance, name, le) (rate(meta_collector_broker_queue_duration_seconds_bucket{pod=\"$pod\", namespace=\"$namespace\"}[5m])))", + "interval": "", + "legendFormat": "P50 {{name}} {{instance}} ", + "range": true, + "refId": "A" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "exemplar": true, + "expr": "histogram_quantile(0.9, sum by(instance, name, le) (rate(broker_queue_duration_seconds_bucket{pod=\"$pod\", namespace=\"$namespace\"}[5m])))", + "hide": false, + "interval": "", + "legendFormat": "P90 {{name}} {{instance}} ", + "range": true, + "refId": "B" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "builder", + "exemplar": true, + "expr": "histogram_quantile(0.99, sum by(instance, name, le) (rate(broker_queue_duration_seconds_bucket{pod=\"$pod\", namespace=\"$namespace\"}[5m])))", + "hide": false, + "interval": "", + "legendFormat": "P99 {{name}} {{instance}} ", + "range": true, + "refId": "C" + } + ], + "title": "Seconds For Items Stay In Broker 
Queue (before being processed) (P50, P90, P99)", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "description": "How long in seconds an item stays in workqueue before being requested", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 10, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "normal" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "s" + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 12, + "x": 0, + "y": 40 + }, + "id": 30, + "options": { + "legend": { + "calcs": [ + "max", + "mean" + ], + "displayMode": "list", + "placement": "right", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "exemplar": true, + "expr": "histogram_quantile(0.50, sum(rate(workqueue_queue_duration_seconds_bucket{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, name, le))", + "interval": "", + "legendFormat": "P50 {{name}} {{instance}} ", + "refId": "A" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "exemplar": true, + "expr": "histogram_quantile(0.90, sum(rate(workqueue_queue_duration_seconds_bucket{job=\"$job\", 
namespace=\"$namespace\"}[5m])) by (instance, name, le))", + "hide": false, + "interval": "", + "legendFormat": "P90 {{name}} {{instance}} ", + "refId": "B" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "exemplar": true, + "expr": "histogram_quantile(0.99, sum(rate(workqueue_queue_duration_seconds_bucket{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, name, le))", + "hide": false, + "interval": "", + "legendFormat": "P99 {{name}} {{instance}} ", + "refId": "C" + } + ], + "title": "Seconds For Items Stay In Queue (before being requested) (P50, P90, P99)", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "description": "Total number of retries handled by workqueue", + "fieldConfig": { + "defaults": { + "color": { + "mode": "continuous-GrYlRd" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 20, + "gradientMode": "scheme", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "smooth", + "lineWidth": 3, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "ops" + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 12, + "x": 12, + "y": 40 + }, + "id": 34, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": { + "type": 
"prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "code", + "exemplar": true, + "expr": "sum(rate(workqueue_retries_total{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, name)", + "interval": "", + "legendFormat": "{{name}} {{instance}} ", + "range": true, + "refId": "A" + } + ], + "title": "Work Queue Retries Rate", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "description": "How long in seconds processing an item from workqueue takes.", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 10, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "s" + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 12, + "x": 0, + "y": 47 + }, + "id": 29, + "options": { + "legend": { + "calcs": [ + "max", + "mean" + ], + "displayMode": "table", + "placement": "right", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "exemplar": true, + "expr": "histogram_quantile(0.50, sum(rate(workqueue_work_duration_seconds_bucket{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, name, le))", + "interval": "", + 
"legendFormat": "P50 {{name}} {{instance}} ", + "refId": "A" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "exemplar": true, + "expr": "histogram_quantile(0.90, sum(rate(workqueue_work_duration_seconds_bucket{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, name, le))", + "hide": false, + "interval": "", + "legendFormat": "P90 {{name}} {{instance}} ", + "refId": "B" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "exemplar": true, + "expr": "histogram_quantile(0.99, sum(rate(workqueue_work_duration_seconds_bucket{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, name, le))", + "hide": false, + "interval": "", + "legendFormat": "P99 {{name}} {{instance}} ", + "refId": "C" + } + ], + "title": "Seconds Processing Items From WorkQueue (P50, P90, P99)", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "description": "Total number of reconciliation errors per controller", + "fieldConfig": { + "defaults": { + "color": { + "mode": "continuous-GrYlRd" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 20, + "gradientMode": "scheme", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "smooth", + "lineWidth": 3, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "cpm" + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 12, + "x": 12, + "y": 47 + }, + "id": 32, + 
"options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "code", + "exemplar": true, + "expr": "sum(rate(controller_runtime_reconcile_errors_total{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, pod)", + "interval": "", + "legendFormat": "{{instance}} {{pod}}", + "range": true, + "refId": "A" + } + ], + "title": "Reconciliation Error Count Per Controller", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "continuous-GrYlRd" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 20, + "gradientMode": "scheme", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "smooth", + "lineWidth": 3, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "ops" + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 12, + "x": 0, + "y": 54 + }, + "id": 33, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "pluginVersion": "8.4.3", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": 
"${DS_Metrics}" + }, + "exemplar": true, + "expr": "sum(rate(workqueue_adds_total{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, name)", + "interval": "", + "legendFormat": "{{name}} {{instance}}", + "refId": "A" + } + ], + "title": "Work Queue Add Rate", + "type": "timeseries" + }, + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "description": "Total number of reconciliations per controller", + "fieldConfig": { + "defaults": { + "color": { + "mode": "continuous-GrYlRd" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 20, + "gradientMode": "scheme", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "smooth", + "lineWidth": 3, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "cpm" + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 12, + "x": 12, + "y": 54 + }, + "id": 31, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "maxHeight": 600, + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "editorMode": "code", + "exemplar": true, + "expr": "sum(rate(controller_runtime_reconcile_total{job=\"$job\", namespace=\"$namespace\"}[5m])) by (instance, pod)", + "interval": "", + "legendFormat": "{{instance}} {{pod}}", + "range": true, + "refId": "A" + } + ], + "title": "Total Reconciliation 
Count Per Controller", + "type": "timeseries" + } + ], + "refresh": "5s", + "schemaVersion": 39, + "tags": [ + "falco" + ], + "templating": { + "list": [ + { + "current": {}, + "hide": 0, + "includeAll": false, + "label": "Metrics", + "multi": false, + "name": "DS_Metrics", + "options": [], + "query": "prometheus", + "queryValue": "", + "refresh": 1, + "regex": "", + "skipUrlSync": false, + "type": "datasource" + }, + { + "current": {}, + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "definition": "label_values(meta_collector_server_subscribers,namespace)", + "hide": 0, + "includeAll": false, + "multi": false, + "name": "namespace", + "options": [], + "query": { + "qryType": 1, + "query": "label_values(meta_collector_server_subscribers,namespace)", + "refId": "PrometheusVariableQueryEditor-VariableQuery" + }, + "refresh": 1, + "regex": "", + "skipUrlSync": false, + "sort": 0, + "type": "query" + }, + { + "current": {}, + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "definition": "label_values(meta_collector_server_subscribers{namespace=\"$namespace\"},job)", + "hide": 0, + "includeAll": false, + "multi": false, + "name": "job", + "options": [], + "query": { + "qryType": 1, + "query": "label_values(meta_collector_server_subscribers{namespace=\"$namespace\"},job)", + "refId": "PrometheusVariableQueryEditor-VariableQuery" + }, + "refresh": 1, + "regex": "", + "skipUrlSync": false, + "sort": 0, + "type": "query" + }, + { + "current": {}, + "datasource": { + "type": "prometheus", + "uid": "${DS_Metrics}" + }, + "definition": "label_values(meta_collector_server_subscribers{namespace=\"$namespace\", job=\"$job\"},pod)", + "hide": 0, + "includeAll": false, + "multi": false, + "name": "pod", + "options": [], + "query": { + "qryType": 1, + "query": "label_values(meta_collector_server_subscribers{namespace=\"$namespace\", job=\"$job\"},pod)", + "refId": "PrometheusVariableQueryEditor-VariableQuery" + }, + "refresh": 1, + 
"regex": "", + "skipUrlSync": false, + "sort": 0, + "type": "query" + } + ] + }, + "time": { + "from": "now-30m", + "to": "now" + }, + "timeRangeUpdatedDuringEditOrView": false, + "timepicker": {}, + "timezone": "", + "title": "Falco / Meta Collector", + "uid": "fdr5cuh96bj7kf", + "version": 4, + "weekStart": "" +} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/NOTES.txt b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/NOTES.txt new file mode 100644 index 000000000..8b1378917 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/NOTES.txt @@ -0,0 +1 @@ + diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/_helpers.tpl b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/_helpers.tpl new file mode 100644 index 000000000..4603c490c --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/_helpers.tpl @@ -0,0 +1,121 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "k8s-metacollector.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "k8s-metacollector.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. 
+*/}} +{{- define "k8s-metacollector.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Allow the release namespace to be overridden for multi-namespace deployments in combined charts +*/}} +{{- define "k8s-metacollector.namespace" -}} +{{- default .Release.Namespace .Values.namespaceOverride -}} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "k8s-metacollector.labels" -}} +helm.sh/chart: {{ include "k8s-metacollector.chart" . }} +{{ include "k8s-metacollector.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +app.kubernetes.io/component: "metadata-collector" +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "k8s-metacollector.selectorLabels" -}} +app.kubernetes.io/name: {{ include "k8s-metacollector.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Return the proper k8s-metacollector image name +*/}} +{{- define "k8s-metacollector.image" -}} +" +{{- with .Values.image.registry -}} + {{- . }}/ +{{- end -}} +{{- .Values.image.repository }}: +{{- .Values.image.tag | default .Chart.AppVersion -}} +" +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "k8s-metacollector.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "k8s-metacollector.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} + +{{/* +Generate the ports for the service +*/}} +{{- define "k8s-metacollector.servicePorts" -}} +{{- if .Values.service.create }} +{{- with .Values.service.ports }} +{{- range $key, $value := . 
}} +- name: {{ $key }} +{{- range $key1, $value1 := $value }} + {{ $key1}}: {{ $value1 }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Generate the ports for the container +*/}} +{{- define "k8s-metacollector.containerPorts" -}} +{{- if .Values.service.create }} +{{- with .Values.service.ports }} +{{- range $key, $value := . }} +- name: "{{ $key }}" +{{- range $key1, $value1 := $value }} + {{- if ne $key1 "targetPort" }} + {{- if eq $key1 "port" }} + containerPort: {{ $value1 }} + {{- else }} + {{ $key1}}: {{ $value1 }} + {{- end }} + {{- end }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/falco/falco/charts/falco/templates/clusterrole.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/clusterrole.yaml similarity index 50% rename from charts/falco/falco/charts/falco/templates/clusterrole.yaml rename to charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/clusterrole.yaml index 1e9e9eac3..335eed315 100644 --- a/charts/falco/falco/charts/falco/templates/clusterrole.yaml +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/clusterrole.yaml @@ -1,43 +1,39 @@ -{{- if .Values.rbac.create }} +{{- if .Values.serviceAccount.create -}} +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole -apiVersion: {{ include "rbac.apiVersion" . }} metadata: - name: {{ include "falco.fullname" . }} + name: {{ include "k8s-metacollector.fullname" . }} labels: - {{- include "falco.labels" . | nindent 4 }} + {{- include "k8s-metacollector.labels" . 
| nindent 4 }} rules: - apiGroups: - - extensions + - apps + resources: + - daemonsets + - deployments + - replicasets + verbs: + - get + - list + - watch + - apiGroups: - "" resources: - - nodes + - endpoints - namespaces - pods - replicationcontrollers - - replicasets - services - - daemonsets - - deployments - - events - - configmaps verbs: - get - list - watch - apiGroups: - - apps + - discovery.k8s.io resources: - - daemonsets - - deployments - - replicasets - - statefulsets + - endpointslices verbs: - get - list - watch - - nonResourceURLs: - - /healthz - - /healthz/* - verbs: - - get -{{- end }} + {{- end }} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/clusterrolebinding.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/clusterrolebinding.yaml new file mode 100644 index 000000000..fa37ef703 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if .Values.serviceAccount.create -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "k8s-metacollector.fullname" . }} + labels: + {{- include "k8s-metacollector.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "k8s-metacollector.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ include "k8s-metacollector.serviceAccountName" . }} + namespace: {{ include "k8s-metacollector.namespace" . 
}} + {{- end }} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/collector-dashboard-grafana.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/collector-dashboard-grafana.yaml new file mode 100644 index 000000000..857fe6b3d --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/collector-dashboard-grafana.yaml @@ -0,0 +1,21 @@ +{{- if .Values.grafana.dashboards.enabled -}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ .Values.grafana.dashboards.configMaps.collector.name }} + {{ if .Values.grafana.dashboards.configMaps.collector.namespace }} + namespace: {{ .Values.grafana.dashboards.configMaps.collector.namespace }} + {{- else -}} + namespace: {{ include "k8s-metacollector.namespace" . }} + {{- end }} + labels: + grafana_dashboard: "1" + {{- if .Values.grafana.dashboards.configMaps.collector.folder }} + annotations: + k8s-sidecar-target-directory: /tmp/dashboards/{{ .Values.grafana.dashboards.configMaps.collector.folder}} + grafana_dashboard_folder: {{ .Values.grafana.dashboards.configMaps.collector.folder }} + {{- end }} +data: + dashboard.json: |- + {{- .Files.Get "dashboards/k8s-metacollector-dashboard.json" | nindent 4 }} + {{- end -}} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/deployment.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/deployment.yaml new file mode 100644 index 000000000..7688a215a --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/deployment.yaml @@ -0,0 +1,65 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "k8s-metacollector.fullname" . }} + namespace: {{ include "k8s-metacollector.namespace" . }} + labels: + {{- include "k8s-metacollector.labels" . | nindent 4 }} +spec: + replicas: {{ .Values.replicaCount }} + selector: + matchLabels: + {{- include "k8s-metacollector.selectorLabels" . 
| nindent 6 }} + template: + metadata: + {{- with .Values.podAnnotations }} + annotations: + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "k8s-metacollector.selectorLabels" . | nindent 8 }} + {{- if .Values.podLabels }} + {{ toYaml .Values.podLabels | nindent 8 }} + {{- end }} + spec: + {{- with .Values.image.pullSecrets }} + imagePullSecrets: + {{- toYaml . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "k8s-metacollector.serviceAccountName" . }} + securityContext: + {{- toYaml .Values.podSecurityContext | nindent 8 }} + containers: + - name: {{ .Chart.Name }} + securityContext: + {{- toYaml .Values.containerSecurityContext | nindent 12 }} + image: {{ include "k8s-metacollector.image" . }} + imagePullPolicy: {{ .Values.image.pullPolicy }} + command: + - /meta-collector + args: + - run + ports: + {{- include "k8s-metacollector.containerPorts" . | indent 12}} + {{- with .Values.healthChecks.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 12}} + {{- end }} + {{- with .Values.healthChecks.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 12}} + {{- end }} + resources: + {{- toYaml .Values.resources | nindent 12 }} + {{- with .Values.nodeSelector }} + nodeSelector: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/service.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/service.yaml new file mode 100644 index 000000000..ff2076b00 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/service.yaml @@ -0,0 +1,15 @@ +{{- if .Values.service.create}} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "k8s-metacollector.fullname" . }} + namespace: {{ include "k8s-metacollector.namespace" . 
}} + labels: + {{- include "k8s-metacollector.labels" . | nindent 4 }} +spec: + type: {{ .Values.service.type }} + ports: + {{- include "k8s-metacollector.servicePorts" . | indent 4 }} + selector: + {{- include "k8s-metacollector.selectorLabels" . | nindent 4 }} +{{- end }} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/serviceaccount.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/serviceaccount.yaml new file mode 100644 index 000000000..35051a94d --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/serviceaccount.yaml @@ -0,0 +1,13 @@ +{{- if .Values.serviceAccount.create -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "k8s-metacollector.serviceAccountName" . }} + namespace: {{ include "k8s-metacollector.namespace" . }} + labels: + {{- include "k8s-metacollector.labels" . | nindent 4 }} + {{- with .Values.serviceAccount.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/servicemonitor.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/servicemonitor.yaml new file mode 100644 index 000000000..50d353568 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/templates/servicemonitor.yaml @@ -0,0 +1,47 @@ +{{- if .Values.serviceMonitor.create }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "k8s-metacollector.fullname" . }} + {{- if .Values.serviceMonitor.namespace }} + namespace: {{ tpl .Values.serviceMonitor.namespace . }} + {{- else }} + namespace: {{ include "k8s-metacollector.namespace" . }} + {{- end }} + labels: + {{- include "k8s-metacollector.labels" . | nindent 4 }} + {{- with .Values.serviceMonitor.labels }} + {{- toYaml . 
| nindent 4 }} + {{- end }} +spec: + endpoints: + - port: {{ .Values.service.ports.metrics.targetPort }} + {{- with .Values.serviceMonitor.interval }} + interval: {{ . }} + {{- end }} + {{- with .Values.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ . }} + {{- end }} + honorLabels: true + path: {{ .Values.serviceMonitor.path }} + scheme: {{ .Values.serviceMonitor.scheme }} + {{- with .Values.serviceMonitor.tlsConfig }} + tlsConfig: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.serviceMonitor.relabelings }} + relabelings: + {{- toYaml . | nindent 6 }} + {{- end }} + jobLabel: "{{ .Release.Name }}" + selector: + matchLabels: + {{- include "k8s-metacollector.selectorLabels" . | nindent 6 }} + namespaceSelector: + matchNames: + - {{ include "k8s-metacollector.namespace" . }} + {{- with .Values.serviceMonitor.targetLabels }} + targetLabels: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/chartInfo.go b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/chartInfo.go new file mode 100644 index 000000000..11b4b3d9c --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/chartInfo.go @@ -0,0 +1,34 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package unit + +import ( + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + "gopkg.in/yaml.v3" +) + +func chartInfo(t *testing.T, chartPath string) (map[string]interface{}, error) { + // Get chart info. + output, err := helm.RunHelmCommandAndGetOutputE(t, &helm.Options{}, "show", "chart", chartPath) + if err != nil { + return nil, err + } + chartInfo := map[string]interface{}{} + err = yaml.Unmarshal([]byte(output), &chartInfo) + return chartInfo, err +} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/commonMetaFields_test.go b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/commonMetaFields_test.go new file mode 100644 index 000000000..17b3f92e6 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/commonMetaFields_test.go @@ -0,0 +1,222 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package unit + +import ( + "fmt" + "path/filepath" + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" +) + +type commonMetaFieldsTest struct { + suite.Suite + chartPath string + releaseName string + namespace string + templates []string +} + +func TestCommonMetaFields(t *testing.T) { + t.Parallel() + // Template files that will be rendered. 
+ templateFiles := []string{ + "templates/clusterrole.yaml", + "templates/clusterrolebinding.yaml", + "templates/deployment.yaml", + "templates/service.yaml", + "templates/serviceaccount.yaml", + "templates/servicemonitor.yaml", + } + + chartFullPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + suite.Run(t, &commonMetaFieldsTest{ + Suite: suite.Suite{}, + chartPath: chartFullPath, + releaseName: "releasename-test", + namespace: "metacollector-test", + templates: templateFiles, + }) +} + +func (s *commonMetaFieldsTest) TestNameOverride() { + cInfo, err := chartInfo(s.T(), s.chartPath) + s.NoError(err) + chartName, found := cInfo["name"] + s.True(found) + + testCases := []struct { + name string + values map[string]string + expected string + }{ + { + "defaultValues, release name does not contain chart name", + map[string]string{ + "serviceMonitor.create": "true", + }, + fmt.Sprintf("%s-%s", s.releaseName, chartName), + }, + { + "overrideFullName", + map[string]string{ + "fullnameOverride": "metadata", + "serviceMonitor.create": "true", + }, + "metadata", + }, + { + "overrideFullName, longer than 63 chars", + map[string]string{ + "fullnameOverride": "aVeryLongNameForTheReleaseThatIsLongerThanSixtyThreeCharsaVeryLongNameForTheReleaseThatIsLongerThanSixtyThreeChars", + "serviceMonitor.create": "true", + }, + "aVeryLongNameForTheReleaseThatIsLongerThanSixtyThreeCharsaVeryL", + }, + { + "overrideName, not containing release name", + map[string]string{ + "nameOverride": "metadata", + "serviceMonitor.create": "true", + }, + fmt.Sprintf("%s-metadata", s.releaseName), + }, + + { + "overrideName, release name contains the name", + map[string]string{ + "nameOverride": "releasename", + "serviceMonitor.create": "true", + }, + s.releaseName, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + for _, template := range 
s.templates { + // Render the current template. + output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, []string{template}) + // Unmarshal output to a map. + var resource unstructured.Unstructured + helm.UnmarshalK8SYaml(s.T(), output, &resource) + + s.Equal(testCase.expected, resource.GetName(), "should be equal") + } + }) + } +} + +func (s *commonMetaFieldsTest) TestNamespaceOverride() { + testCases := []struct { + name string + values map[string]string + expected string + }{ + { + "defaultValues", + map[string]string{ + "serviceMonitor.create": "true", + }, + "default", + }, + { + "overrideNamespace", + map[string]string{ + "namespaceOverride": "metacollector", + "serviceMonitor.create": "true", + }, + "metacollector", + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + for _, template := range s.templates { + // Render the current template. + output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, []string{template}) + // Unmarshal output to a map. + var resource unstructured.Unstructured + helm.UnmarshalK8SYaml(s.T(), output, &resource) + if resource.GetKind() == "ClusterRole" || resource.GetKind() == "ClusterRoleBinding" { + continue + } + s.Equal(testCase.expected, resource.GetNamespace(), "should be equal") + } + }) + } +} + +// TestLabels tests that all rendered resources have the same base set of labels. +func (s *commonMetaFieldsTest) TestLabels() { + // Get chart info. + cInfo, err := chartInfo(s.T(), s.chartPath) + s.NoError(err) + // Get app version. + appVersion, found := cInfo["appVersion"] + s.True(found, "should find app version in chart info") + appVersion = appVersion.(string) + // Get chart version. + chartVersion, found := cInfo["version"] + s.True(found, "should find chart version in chart info") + // Get chart name. 
+ chartName, found := cInfo["name"] + s.True(found, "should find chart name in chart info") + chartName = chartName.(string) + expectedLabels := map[string]string{ + "helm.sh/chart": fmt.Sprintf("%s-%s", chartName, chartVersion), + "app.kubernetes.io/name": chartName.(string), + "app.kubernetes.io/instance": s.releaseName, + "app.kubernetes.io/version": appVersion.(string), + "app.kubernetes.io/managed-by": "Helm", + "app.kubernetes.io/component": "metadata-collector", + } + + // Adjust the values to render all components. + options := helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}} + + for _, template := range s.templates { + // Render the current template. + output := helm.RenderTemplate(s.T(), &options, s.chartPath, s.releaseName, []string{template}) + // Unmarshal output to a map. + var resource unstructured.Unstructured + helm.UnmarshalK8SYaml(s.T(), output, &resource) + labels := resource.GetLabels() + s.Len(labels, len(expectedLabels), "should have the same number of labels") + for key, value := range labels { + expectedVal := expectedLabels[key] + s.Equal(expectedVal, value) + } + } +} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/defaultResources_test.go b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/defaultResources_test.go new file mode 100644 index 000000000..969741254 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/defaultResources_test.go @@ -0,0 +1,76 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package unit
+
+import (
+	"path/filepath"
+	"regexp"
+	"strings"
+	"testing"
+
+	"github.com/gruntwork-io/terratest/modules/helm"
+	"github.com/stretchr/testify/require"
+	"k8s.io/utils/strings/slices"
+)
+
+const chartPath = "../../"
+
+// Using the default values we want to test that all the expected resources are rendered.
+func TestRenderedResourcesWithDefaultValues(t *testing.T) {
+	t.Parallel()
+
+	helmChartPath, err := filepath.Abs(chartPath)
+	require.NoError(t, err)
+
+	releaseName := "rendered-resources"
+
+	// Template files that we expect to be rendered.
+	templateFiles := []string{
+		"clusterrole.yaml",
+		"clusterrolebinding.yaml",
+		"deployment.yaml",
+		"service.yaml",
+		"serviceaccount.yaml",
+	}
+
+	require.NoError(t, err)
+
+	options := &helm.Options{}
+
+	// Template the chart using the default values.yaml file.
+	output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, nil)
+	require.NoError(t, err)
+
+	// Extract all rendered files from the output.
+	pattern := `# Source: k8s-metacollector/templates/([^\n]+)`
+	re := regexp.MustCompile(pattern)
+	matches := re.FindAllStringSubmatch(output, -1)
+
+	var renderedTemplates []string
+	for _, match := range matches {
+		// Filter out test templates.
+		if !strings.Contains(match[1], "test-") {
+			renderedTemplates = append(renderedTemplates, match[1])
+		}
+	}
+
+	// Assert that the rendered resources are equal to the expected ones.
+ require.Equal(t, len(renderedTemplates), len(templateFiles), "should be equal") + + for _, rendered := range renderedTemplates { + require.True(t, slices.Contains(templateFiles, rendered), "template files should contain all the rendered files") + } +} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/deploymentTemplate_test.go b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/deploymentTemplate_test.go new file mode 100644 index 000000000..4b3fa1c9e --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/deploymentTemplate_test.go @@ -0,0 +1,862 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package unit + +import ( + "encoding/json" + "fmt" + "path/filepath" + "reflect" + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" +) + +type deploymentTemplateTest struct { + suite.Suite + chartPath string + releaseName string + namespace string + templates []string +} + +func TestDeploymentTemplate(t *testing.T) { + t.Parallel() + + chartFullPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + suite.Run(t, &deploymentTemplateTest{ + Suite: suite.Suite{}, + chartPath: chartFullPath, + releaseName: "k8s-metacollector-test", + namespace: "metacollector-test", + templates: []string{"templates/deployment.yaml"}, + }) +} + +func (s *deploymentTemplateTest) TestImage() { + // Get chart info. + cInfo, err := chartInfo(s.T(), s.chartPath) + s.NoError(err) + // Extract the appVersion. + appVersion, found := cInfo["appVersion"] + s.True(found, "should find app version from chart info") + + testCases := []struct { + name string + values map[string]string + expected string + }{ + { + "defaultValues", + nil, + fmt.Sprintf("docker.io/falcosecurity/k8s-metacollector:%s", appVersion), + }, + { + "changingImageTag", + map[string]string{ + "image.tag": "testingTag", + }, + "docker.io/falcosecurity/k8s-metacollector:testingTag", + }, + { + "changingImageRepo", + map[string]string{ + "image.repository": "falcosecurity/testingRepository", + }, + fmt.Sprintf("docker.io/falcosecurity/testingRepository:%s", appVersion), + }, + { + "changingImageRegistry", + map[string]string{ + "image.registry": "ghcr.io", + }, + fmt.Sprintf("ghcr.io/falcosecurity/k8s-metacollector:%s", appVersion), + }, + { + "changingAllImageFields", + map[string]string{ + "image.registry": "ghcr.io", + "image.repository": "falcosecurity/testingRepository", + "image.tag": "testingTag", + }, + "ghcr.io/falcosecurity/testingRepository:testingTag", + }, + 
} + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + s.Equal(testCase.expected, deployment.Spec.Template.Spec.Containers[0].Image, "should be equal") + }) + } +} + +func (s *deploymentTemplateTest) TestImagePullPolicy() { + testCases := []struct { + name string + values map[string]string + expected string + }{ + { + "defaultValues", + nil, + "IfNotPresent", + }, + { + "changingPullPolicy", + map[string]string{ + "image.pullPolicy": "Always", + }, + "Always", + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + s.Equal(testCase.expected, string(deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy), "should be equal") + }) + } +} + +func (s *deploymentTemplateTest) TestImagePullSecrets() { + testCases := []struct { + name string + values map[string]string + expected string + }{ + { + "defaultValues", + nil, + "", + }, + { + "changingPullPolicy", + map[string]string{ + "image.pullSecrets[0].name": "my-pull-secret", + }, + "my-pull-secret", + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should 
succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + if testCase.expected == "" { + s.Nil(deployment.Spec.Template.Spec.ImagePullSecrets, "should be nil") + } else { + s.Equal(testCase.expected, deployment.Spec.Template.Spec.ImagePullSecrets[0].Name, "should be equal") + } + }) + } +} + +func (s *deploymentTemplateTest) TestServiceAccount() { + testCases := []struct { + name string + values map[string]string + expected string + }{ + { + "defaultValues", + nil, + s.releaseName, + }, + { + "changingServiceAccountName", + map[string]string{ + "serviceAccount.name": "service-account", + }, + "service-account", + }, + { + "disablingServiceAccount", + map[string]string{ + "serviceAccount.create": "false", + }, + "default", + }, + { + "disablingServiceAccountAndSettingName", + map[string]string{ + "serviceAccount.create": "false", + "serviceAccount.name": "service-account", + }, + "service-account", + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + s.Equal(testCase.expected, deployment.Spec.Template.Spec.ServiceAccountName, "should be equal") + }) + } +} + +func (s *deploymentTemplateTest) TestPodAnnotations() { + testCases := []struct { + name string + values map[string]string + expected map[string]string + }{ + { + "defaultValues", + nil, + nil, + }, + { + "settingPodAnnotations", + map[string]string{ + "podAnnotations.my-annotation": "annotationValue", + }, + map[string]string{ + "my-annotation": "annotationValue", + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + 
+ options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + if testCase.expected == nil { + s.Nil(deployment.Spec.Template.Annotations, "should be nil") + } else { + for key, val := range testCase.expected { + val1 := deployment.Spec.Template.Annotations[key] + s.Equal(val, val1, "should contain all the added annotations") + } + } + }) + } +} + +func (s *deploymentTemplateTest) TestPodSecurityContext() { + testCases := []struct { + name string + values map[string]string + expected func(psc *corev1.PodSecurityContext) + }{ + { + "defaultValues", + nil, + func(psc *corev1.PodSecurityContext) { + s.Equal(true, *psc.RunAsNonRoot, "runAsNonRoot should be set to true") + s.Equal(int64(1000), *psc.RunAsUser, "runAsUser should be set to 1000") + s.Equal(int64(1000), *psc.FSGroup, "fsGroup should be set to 1000") + s.Equal(int64(1000), *psc.RunAsGroup, "runAsGroup should be set to 1000") + s.Nil(psc.SELinuxOptions) + s.Nil(psc.WindowsOptions) + s.Nil(psc.SupplementalGroups) + s.Nil(psc.Sysctls) + s.Nil(psc.FSGroupChangePolicy) + s.Nil(psc.SeccompProfile) + }, + }, + { + "changingServiceAccountName", + map[string]string{ + "podSecurityContext": "null", + }, + func(psc *corev1.PodSecurityContext) { + s.Nil(psc, "podSecurityContext should be set to nil") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.SecurityContext) + }) + } +} + +func (s 
*deploymentTemplateTest) TestContainerSecurityContext() { + testCases := []struct { + name string + values map[string]string + expected func(sc *corev1.SecurityContext) + }{ + { + "defaultValues", + nil, + func(sc *corev1.SecurityContext) { + s.Len(sc.Capabilities.Drop, 1, "capabilities in drop should be set to one") + s.Equal("ALL", string(sc.Capabilities.Drop[0]), "should drop all capabilities") + s.Nil(sc.Capabilities.Add) + s.Nil(sc.Privileged) + s.Nil(sc.SELinuxOptions) + s.Nil(sc.WindowsOptions) + s.Nil(sc.RunAsUser) + s.Nil(sc.RunAsGroup) + s.Nil(sc.RunAsNonRoot) + s.Nil(sc.ReadOnlyRootFilesystem) + s.Nil(sc.AllowPrivilegeEscalation) + s.Nil(sc.ProcMount) + s.Nil(sc.SeccompProfile) + }, + }, + { + "changingServiceAccountName", + map[string]string{ + "containerSecurityContext": "null", + }, + func(sc *corev1.SecurityContext) { + s.Nil(sc, "containerSecurityContext should be set to nil") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.Containers[0].SecurityContext) + }) + } +} + +func (s *deploymentTemplateTest) TestResources() { + testCases := []struct { + name string + values map[string]string + expected func(res corev1.ResourceRequirements) + }{ + { + "defaultValues", + nil, + func(res corev1.ResourceRequirements) { + s.Nil(res.Claims) + s.Nil(res.Requests) + s.Nil(res.Limits) + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, 
s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.Containers[0].Resources) + }) + } +} + +func (s *deploymentTemplateTest) TestNodeSelector() { + testCases := []struct { + name string + values map[string]string + expected func(ns map[string]string) + }{ + { + "defaultValues", + nil, + func(ns map[string]string) { + s.Nil(ns, "should be nil") + }, + }, + { + "Setting nodeSelector", + map[string]string{ + "nodeSelector.mySelector": "myNode", + }, + func(ns map[string]string) { + value, ok := ns["mySelector"] + s.True(ok, "should find the key") + s.Equal("myNode", value, "should be equal") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.NodeSelector) + }) + } +} + +func (s *deploymentTemplateTest) TestTolerations() { + tolerationString := `[ + { + "key": "key1", + "operator": "Equal", + "value": "value1", + "effect": "NoSchedule" + } +]` + var tolerations []corev1.Toleration + + err := json.Unmarshal([]byte(tolerationString), &tolerations) + s.NoError(err) + + testCases := []struct { + name string + values map[string]string + expected func(tol []corev1.Toleration) + }{ + { + "defaultValues", + nil, + func(tol []corev1.Toleration) { + s.Nil(tol, "should be nil") + }, + }, + { + "Setting tolerations", + map[string]string{ + "tolerations": tolerationString, + }, + func(tol []corev1.Toleration) { + s.Len(tol, 1, "should have only one toleration") + s.True(reflect.DeepEqual(tol[0], tolerations[0]), "should 
be equal") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetJsonValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.Tolerations) + }) + } +} + +func (s *deploymentTemplateTest) TestAffinity() { + affinityString := `{ + "nodeAffinity": { + "requiredDuringSchedulingIgnoredDuringExecution": { + "nodeSelectorTerms": [ + { + "matchExpressions": [ + { + "key": "disktype", + "operator": "In", + "values": [ + "ssd" + ] + } + ] + } + ] + } + } +}` + var affinity corev1.Affinity + + err := json.Unmarshal([]byte(affinityString), &affinity) + s.NoError(err) + + testCases := []struct { + name string + values map[string]string + expected func(aff *corev1.Affinity) + }{ + { + "defaultValues", + nil, + func(aff *corev1.Affinity) { + s.Nil(aff, "should be nil") + }, + }, + { + "Setting affinity", + map[string]string{ + "affinity": affinityString, + }, + func(aff *corev1.Affinity) { + s.True(reflect.DeepEqual(affinity, *aff), "should be equal") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetJsonValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.Affinity) + }) + } +} + +func (s *deploymentTemplateTest) TestLiveness() { + livenessProbeString := `{ + "httpGet": { + "path": "/healthz", + "port": 8081 + }, + "initialDelaySeconds": 45, + "timeoutSeconds": 5, 
+ "periodSeconds": 15 +}` + var liveness corev1.Probe + + err := json.Unmarshal([]byte(livenessProbeString), &liveness) + s.NoError(err) + + testCases := []struct { + name string + values map[string]string + expected func(probe *corev1.Probe) + }{ + { + "defaultValues", + nil, + func(probe *corev1.Probe) { + s.True(reflect.DeepEqual(*probe, liveness), "should be equal") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetJsonValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.Containers[0].LivenessProbe) + }) + } +} + +func (s *deploymentTemplateTest) TestReadiness() { + readinessProbeString := `{ + "httpGet": { + "path": "/readyz", + "port": 8081 + }, + "initialDelaySeconds": 30, + "timeoutSeconds": 5, + "periodSeconds": 15 +}` + var readiness corev1.Probe + + err := json.Unmarshal([]byte(readinessProbeString), &readiness) + s.NoError(err) + + testCases := []struct { + name string + values map[string]string + expected func(probe *corev1.Probe) + }{ + { + "defaultValues", + nil, + func(probe *corev1.Probe) { + s.True(reflect.DeepEqual(*probe, readiness), "should be equal") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetJsonValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.Containers[0].ReadinessProbe) + }) + } +} + +func (s 
*deploymentTemplateTest) TestContainerPorts() { + + newPorts := `{ + "enabled": true, + "type": "ClusterIP", + "ports": { + "metrics": { + "port": 8080, + "targetPort": "metrics", + "protocol": "TCP" + }, + "health-probe": { + "port": 8081, + "targetPort": "health-probe", + "protocol": "TCP" + }, + "broker-grpc": { + "port": 45000, + "targetPort": "broker-grpc", + "protocol": "TCP" + }, + "myNewPort": { + "port": 1111, + "targetPort": "myNewPort", + "protocol": "UDP" + } + } +}` + testCases := []struct { + name string + values map[string]string + expected func(p []corev1.ContainerPort) + }{ + { + "defaultValues", + nil, + func(p []corev1.ContainerPort) { + portsJSON := `[ + { + "name": "broker-grpc", + "containerPort": 45000, + "protocol": "TCP" + }, + { + "name": "health-probe", + "containerPort": 8081, + "protocol": "TCP" + }, + { + "name": "metrics", + "containerPort": 8080, + "protocol": "TCP" + } +]` + var ports []corev1.ContainerPort + + err := json.Unmarshal([]byte(portsJSON), &ports) + s.NoError(err) + s.True(reflect.DeepEqual(ports, p), "should be equal") + }, + }, + { + "addNewPort", + map[string]string{ + "service": newPorts, + }, + func(p []corev1.ContainerPort) { + portsJSON := `[ + { + "name": "broker-grpc", + "containerPort": 45000, + "protocol": "TCP" + }, + { + "name": "health-probe", + "containerPort": 8081, + "protocol": "TCP" + }, + { + "name": "metrics", + "containerPort": 8080, + "protocol": "TCP" + }, + { + "name": "myNewPort", + "containerPort": 1111, + "protocol": "UDP" + } +]` + var ports []corev1.ContainerPort + + err := json.Unmarshal([]byte(portsJSON), &ports) + s.NoError(err) + s.True(reflect.DeepEqual(ports, p), "should be equal") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetJsonValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + 
s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + testCase.expected(deployment.Spec.Template.Spec.Containers[0].Ports) + }) + } +} + +func (s *deploymentTemplateTest) TestReplicaCount() { + testCases := []struct { + name string + values map[string]string + expected int32 + }{ + { + "defaultValues", + nil, + 1, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + + var deployment appsv1.Deployment + helm.UnmarshalK8SYaml(subT, output, &deployment) + + s.Equal(testCase.expected, (*deployment.Spec.Replicas), "should be equal") + }) + } +} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/grafanaDashboards_test.go b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/grafanaDashboards_test.go new file mode 100644 index 000000000..50b7f61b7 --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/grafanaDashboards_test.go @@ -0,0 +1,144 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package unit + +import ( + "fmt" + "io" + "os" + "path/filepath" + "strings" + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + corev1 "k8s.io/api/core/v1" +) + +type grafanaDashboardsTemplateTest struct { + suite.Suite + chartPath string + releaseName string + namespace string + templates []string +} + +func TestGrafanaDashboardsTemplate(t *testing.T) { + t.Parallel() + + chartFullPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + suite.Run(t, &grafanaDashboardsTemplateTest{ + Suite: suite.Suite{}, + chartPath: chartFullPath, + releaseName: "k8s-metacollector-test", + namespace: "metacollector-test", + templates: []string{"templates/collector-dashboard-grafana.yaml"}, + }) +} + +func (g *grafanaDashboardsTemplateTest) TestCreationDefaultValues() { + // Render the dashboard configmap and check that it has not been rendered. + _, err := helm.RenderTemplateE(g.T(), &helm.Options{}, g.chartPath, g.releaseName, g.templates, fmt.Sprintf("--namespace=%s", g.namespace)) + g.Error(err, "should error") + g.Equal("error while running command: exit status 1; Error: could not find template templates/collector-dashboard-grafana.yaml in chart", err.Error()) +} + +func (g *grafanaDashboardsTemplateTest) TestConfig() { + testCases := []struct { + name string + values map[string]string + expected func(cm *corev1.ConfigMap) + }{ + {"dashboard enabled", + map[string]string{ + "grafana.dashboards.enabled": "true", + }, + func(cm *corev1.ConfigMap) { + // Check that the name is the expected one. + g.Equal("k8s-metacollector-grafana-dashboard", cm.Name) + // Check the namespace. 
+ g.Equal(g.namespace, cm.Namespace) + g.Nil(cm.Annotations) + }, + }, + {"namespace", + map[string]string{ + "grafana.dashboards.enabled": "true", + "grafana.dashboards.configMaps.collector.namespace": "custom-namespace", + }, + func(cm *corev1.ConfigMap) { + // Check that the name is the expected one. + g.Equal("k8s-metacollector-grafana-dashboard", cm.Name) + // Check the namespace. + g.Equal("custom-namespace", cm.Namespace) + g.Nil(cm.Annotations) + }, + }, + {"folder", + map[string]string{ + "grafana.dashboards.enabled": "true", + "grafana.dashboards.configMaps.collector.folder": "custom-folder", + }, + func(cm *corev1.ConfigMap) { + // Check that the name is the expected one. + g.Equal("k8s-metacollector-grafana-dashboard", cm.Name) + g.NotNil(cm.Annotations) + g.Len(cm.Annotations, 2) + // Check sidecar annotation. + val, ok := cm.Annotations["k8s-sidecar-target-directory"] + g.True(ok) + g.Equal("/tmp/dashboards/custom-folder", val) + // Check grafana annotation. + val, ok = cm.Annotations["grafana_dashboard_folder"] + g.True(ok) + g.Equal("custom-folder", val) + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + g.Run(testCase.name, func() { + subT := g.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + + // Render the configmap unmarshal it. + output, err := helm.RenderTemplateE(subT, options, g.chartPath, g.releaseName, g.templates, "--namespace="+g.namespace) + g.NoError(err, "should succeed") + var cfgMap corev1.ConfigMap + helm.UnmarshalK8SYaml(subT, output, &cfgMap) + + // Common checks + // Check that contains the right label. + g.Contains(cfgMap.Labels, "grafana_dashboard") + // Check that the dashboard is contained in the config map. 
+ file, err := os.Open("../../dashboards/k8s-metacollector-dashboard.json") + g.NoError(err) + content, err := io.ReadAll(file) + g.NoError(err) + cfgData, ok := cfgMap.Data["dashboard.json"] + g.True(ok) + g.Equal(strings.TrimRight(string(content), "\n"), cfgData) + testCase.expected(&cfgMap) + }) + } +} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceAccountTemplate_test.go b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceAccountTemplate_test.go new file mode 100644 index 000000000..b208f609e --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceAccountTemplate_test.go @@ -0,0 +1,172 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package unit + +import ( + "path/filepath" + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + corev1 "k8s.io/api/core/v1" + rbacv1 "k8s.io/api/rbac/v1" +) + +// Type used to implement the testing suite for service account +// and the related resources: clusterrole, clusterrolebinding +type serviceAccountTemplateTest struct { + suite.Suite + chartPath string + releaseName string + namespace string + templates []string +} + +func TestServiceAccountTemplate(t *testing.T) { + t.Parallel() + + chartFullPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + suite.Run(t, &serviceAccountTemplateTest{ + Suite: suite.Suite{}, + chartPath: chartFullPath, + releaseName: "k8s-metacollector-test", + namespace: "metacollector-test", + templates: []string{"templates/serviceaccount.yaml"}, + }) +} + +func (s *serviceAccountTemplateTest) TestSVCAccountResourceCreation() { + testCases := []struct { + name string + values map[string]string + }{ + {"defaultValues", + nil, + }, + {"changeName", + map[string]string{ + "serviceAccount.name": "TestName", + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + + // Render the service account and unmarshal it. + output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + var svcAccount corev1.ServiceAccount + helm.UnmarshalK8SYaml(subT, output, &svcAccount) + + // Render the clusterRole and unmarshal it. 
+			output, err = helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, []string{"templates/clusterrole.yaml"})
+			s.NoError(err, "should succeed")
+			var clusterRole rbacv1.ClusterRole
+			helm.UnmarshalK8SYaml(subT, output, &clusterRole)
+
+			output, err = helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, []string{"templates/clusterrolebinding.yaml"})
+			s.NoError(err, "should succeed")
+			var clusterRoleBinding rbacv1.ClusterRoleBinding
+			helm.UnmarshalK8SYaml(subT, output, &clusterRoleBinding)
+
+			// Check that clusterRoleBinding references the right svc account.
+			s.Equal(svcAccount.Name, clusterRoleBinding.Subjects[0].Name, "should be the same")
+			s.Equal(svcAccount.Namespace, clusterRoleBinding.Subjects[0].Namespace, "should be the same")
+
+			// Check that clusterRoleBinding references the right clusterRole.
+			s.Equal(clusterRole.Name, clusterRoleBinding.RoleRef.Name)
+
+			if testCase.values != nil {
+				s.Equal("TestName", svcAccount.Name)
+			}
+		})
+	}
+}
+
+func (s *serviceAccountTemplateTest) TestSVCAccountResourceNonCreation() {
+	options := &helm.Options{SetValues: map[string]string{"serviceAccount.create": "false"}}
+	// Render the service account and unmarshal it.
+	_, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
+	s.Error(err, "should error")
+	s.Equal("error while running command: exit status 1; Error: could not find template templates/serviceaccount.yaml in chart", err.Error())
+
+	// Render the clusterRole and unmarshal it. 
+ _, err = helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, []string{"templates/clusterrole.yaml"}) + s.Error(err, "should error") + s.Equal("error while running command: exit status 1; Error: could not find template templates/clusterrole.yaml in chart", err.Error()) + + _, err = helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, []string{"templates/clusterrolebinding.yaml"}) + s.Error(err, "should error") + s.Equal("error while running command: exit status 1; Error: could not find template templates/clusterrolebinding.yaml in chart", err.Error()) +} + +func (s *serviceAccountTemplateTest) TestSVCAccountAnnotations() { + testCases := []struct { + name string + values map[string]string + expected map[string]string + }{ + { + "defaultValues", + nil, + nil, + }, + { + "settingSvcAccountAnnotations", + map[string]string{ + "serviceAccount.annotations.my-annotation": "annotationValue", + }, + map[string]string{ + "my-annotation": "annotationValue", + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + // Render the service account and unmarshal it. 
+ output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + var svcAccount corev1.ServiceAccount + helm.UnmarshalK8SYaml(subT, output, &svcAccount) + + if testCase.expected == nil { + s.Nil(svcAccount.Annotations, "should be nil") + } else { + for key, val := range testCase.expected { + val1 := svcAccount.Annotations[key] + s.Equal(val, val1, "should contain all the added annotations") + } + } + }) + } +} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceMonitorTemplate_test.go b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceMonitorTemplate_test.go new file mode 100644 index 000000000..865e7a04d --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceMonitorTemplate_test.go @@ -0,0 +1,93 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package unit + +import ( + "encoding/json" + "path/filepath" + "reflect" + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" +) + +type serviceMonitorTemplateTest struct { + suite.Suite + chartPath string + releaseName string + namespace string + templates []string +} + +func TestServiceMonitorTemplate(t *testing.T) { + t.Parallel() + + chartFullPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + suite.Run(t, &serviceMonitorTemplateTest{ + Suite: suite.Suite{}, + chartPath: chartFullPath, + releaseName: "k8s-metacollector-test", + namespace: "metacollector-test", + templates: []string{"templates/servicemonitor.yaml"}, + }) +} + +func (s *serviceMonitorTemplateTest) TestCreationDefaultValues() { + // Render the servicemonitor and check that it has not been rendered. + _, err := helm.RenderTemplateE(s.T(), &helm.Options{}, s.chartPath, s.releaseName, s.templates) + s.Error(err, "should error") + s.Equal("error while running command: exit status 1; Error: could not find template templates/servicemonitor.yaml in chart", err.Error()) +} + +func (s *serviceMonitorTemplateTest) TestEndpoint() { + defaultEndpointsJSON := `[ + { + "port": "metrics", + "interval": "15s", + "scrapeTimeout": "10s", + "honorLabels": true, + "path": "/metrics", + "scheme": "http" + } +]` + var defaultEndpoints []monitoringv1.Endpoint + err := json.Unmarshal([]byte(defaultEndpointsJSON), &defaultEndpoints) + s.NoError(err) + + options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}} + output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates) + + var svcMonitor monitoringv1.ServiceMonitor + helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor) + + s.Len(svcMonitor.Spec.Endpoints, 1, "should have only one endpoint") + 
s.True(reflect.DeepEqual(svcMonitor.Spec.Endpoints[0], defaultEndpoints[0])) +} + +func (s *serviceMonitorTemplateTest) TestNamespaceSelector() { + options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}} + output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates) + + var svcMonitor monitoringv1.ServiceMonitor + helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor) + s.Len(svcMonitor.Spec.NamespaceSelector.MatchNames, 1) + s.Equal("default", svcMonitor.Spec.NamespaceSelector.MatchNames[0]) +} diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceTemplate_test.go b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceTemplate_test.go new file mode 100644 index 000000000..5f7fbd1fe --- /dev/null +++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/tests/unit/serviceTemplate_test.go @@ -0,0 +1,220 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package unit + +import ( + "encoding/json" + "path/filepath" + "reflect" + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + corev1 "k8s.io/api/core/v1" +) + +type serviceTemplateTest struct { + suite.Suite + chartPath string + releaseName string + namespace string + templates []string +} + +func TestServiceTemplate(t *testing.T) { + t.Parallel() + + chartFullPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + suite.Run(t, &serviceTemplateTest{ + Suite: suite.Suite{}, + chartPath: chartFullPath, + releaseName: "test", + namespace: "metacollector-test", + templates: []string{"templates/service.yaml"}, + }) +} + +func (s *serviceTemplateTest) TestServiceCreateFalse() { + options := &helm.Options{SetValues: map[string]string{"service.create": "false"}} + // Render the service account and unmarshal it. + _, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates) + s.Error(err, "should error") + s.Equal("error while running command: exit status 1; Error: could not find template templates/service.yaml in chart", err.Error()) +} + +func (s *serviceTemplateTest) TestServiceType() { + testCases := []struct { + name string + values map[string]string + expected string + }{ + {"defaultValues", + nil, + "ClusterIP", + }, + {"NodePort", + map[string]string{ + "service.type": "NodePort", + }, + "NodePort", + }, + } + + for _, testCase := range testCases { + testCase := testCase + + s.Run(testCase.name, func() { + subT := s.T() + subT.Parallel() + + options := &helm.Options{SetValues: testCase.values} + + // Render the service and unmarshal it. 
+ output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates) + s.NoError(err, "should succeed") + var svc corev1.Service + helm.UnmarshalK8SYaml(subT, output, &svc) + + s.Equal(testCase.expected, string(svc.Spec.Type)) + }) + } +} + +func (s *serviceTemplateTest) TestServicePorts() { + newPorts := `{ + "enabled": true, + "type": "ClusterIP", + "ports": { + "metrics": { + "port": 8080, + "targetPort": "metrics", + "protocol": "TCP" + }, + "health-probe": { + "port": 8081, + "targetPort": "health-probe", + "protocol": "TCP" + }, + "broker-grpc": { + "port": 45000, + "targetPort": "broker-grpc", + "protocol": "TCP" + }, + "myNewPort": { + "port": 1111, + "targetPort": "myNewPort", + "protocol": "UDP" + } + } +}` + testCases := []struct { + name string + values map[string]string + expected func(p []corev1.ServicePort) + }{ + { + "defaultValues", + nil, + func(p []corev1.ServicePort) { + portsJSON := `[ + { + "name": "broker-grpc", + "port": 45000, + "protocol": "TCP", + "targetPort": "broker-grpc" + }, + { + "name": "health-probe", + "port": 8081, + "protocol": "TCP", + "targetPort": "health-probe" + }, + { + "name": "metrics", + "port": 8080, + "protocol": "TCP", + "targetPort": "metrics" + } +]` + var ports []corev1.ServicePort + + err := json.Unmarshal([]byte(portsJSON), &ports) + s.NoError(err) + s.True(reflect.DeepEqual(ports, p), "should be equal") + }, + }, + { + "addNewPort", + map[string]string{ + "service": newPorts, + }, + func(p []corev1.ServicePort) { + portsJSON := `[ + { + "name": "broker-grpc", + "port": 45000, + "protocol": "TCP", + "targetPort": "broker-grpc" + }, + { + "name": "health-probe", + "port": 8081, + "protocol": "TCP", + "targetPort": "health-probe" + }, + { + "name": "metrics", + "port": 8080, + "protocol": "TCP", + "targetPort": "metrics" + }, + { + "name": "myNewPort", + "port": 1111, + "protocol": "UDP", + "targetPort": "myNewPort" + } +]` + var ports []corev1.ServicePort + + err := 
json.Unmarshal([]byte(portsJSON), &ports)
+				s.NoError(err)
+				s.True(reflect.DeepEqual(ports, p), "should be equal")
+			},
+		},
+	}
+
+	for _, testCase := range testCases {
+		testCase := testCase
+
+		s.Run(testCase.name, func() {
+			subT := s.T()
+			subT.Parallel()
+
+			options := &helm.Options{SetJsonValues: testCase.values}
+			output, err := helm.RenderTemplateE(subT, options, s.chartPath, s.releaseName, s.templates)
+			s.NoError(err, "should succeed")
+
+			var svc corev1.Service
+			helm.UnmarshalK8SYaml(subT, output, &svc)
+
+			testCase.expected(svc.Spec.Ports)
+		})
+	}
+}
diff --git a/charts/falco/falco/charts/falco/charts/k8s-metacollector/values.yaml b/charts/falco/falco/charts/falco/charts/k8s-metacollector/values.yaml
new file mode 100644
index 000000000..98e1fa24e
--- /dev/null
+++ b/charts/falco/falco/charts/falco/charts/k8s-metacollector/values.yaml
@@ -0,0 +1,204 @@
+# Default values for k8s-metacollector.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+# -- replicaCount is the number of identical copies of the k8s-metacollector.
+replicaCount: 1
+
+# -- image is the configuration for the k8s-metacollector image.
+image:
+  # -- pullPolicy is the policy used to determine when a node should attempt to pull the container image.
+  pullPolicy: IfNotPresent
+  # -- pullSecrets is a list of secrets containing credentials used when pulling from private/secure registries.
+  pullSecrets: []
+  # -- registry is the image registry to pull from.
+  registry: docker.io
+  # -- repository is the image repository to pull from.
+  repository: falcosecurity/k8s-metacollector
+  # -- tag is the image tag to pull. Overrides the image tag whose default is the chart appVersion.
+  tag: ""
+
+# -- nameOverride is the new name used to override the release name used for k8s-metacollector components.
+nameOverride: ""
+# -- fullnameOverride is the same as nameOverride but for the full name. 
+fullnameOverride: ""
+# -- namespaceOverride overrides the deployment namespace. It's useful for multi-namespace deployments in combined charts.
+namespaceOverride: ""
+
+# -- serviceAccount is the configuration for the service account.
+serviceAccount:
+  # -- create specifies whether a service account should be created.
+  create: true
+  # -- annotations to add to the service account.
+  annotations: {}
+  # -- name is the name of the service account to use.
+  # -- If not set and create is true, a name is generated using the full name template.
+  name: ""
+
+# -- podAnnotations are custom annotations to be added to the pod.
+podAnnotations: {}
+
+# -- podLabels are labels to be added to the pod.
+podLabels: {}
+
+# -- podSecurityContext holds the security settings for the pod.
+# -- These settings are overridden by the ones specified for the container when there is overlap.
+podSecurityContext:
+  # -- runAsNonRoot when set to true enforces that the specified container runs as a non-root user.
+  runAsNonRoot: true
+  # -- runAsUser specifies the user ID (UID) that the containers inside the pod should run as.
+  runAsUser: 1000
+  # -- runAsGroup specifies the group ID (GID) that the containers inside the pod should run as.
+  runAsGroup: 1000
+  # -- fsGroup specifies the group ID (GID) that should be used for the volume mounted within a Pod.
+  fsGroup: 1000
+
+# -- containerSecurityContext holds the security settings for the container.
+containerSecurityContext:
+  # -- capabilities defines the fine-grained privileges that can be assigned to processes.
+  capabilities:
+    # -- drop drops the given set of privileges.
+    drop:
+    - ALL
+
+# -- service exposes the k8s-metacollector services to be accessed from within the cluster.
+# ref: https://kubernetes.io/docs/concepts/services-networking/service/
+service:
+  # -- create specifies whether a service should be created.
+  create: true
+  # -- type denotes the service type. 
Setting it to "ClusterIP" ensures that the services are accessible
+  # from within the cluster.
+  type: ClusterIP
+  # -- ports denotes all the ports on which the Service will listen.
+  ports:
+    # -- metrics denotes a listening service named "metrics".
+    metrics:
+      # -- port is the port on which the Service will listen.
+      port: 8080
+      # -- targetPort is the port on which the Pod is listening.
+      targetPort: "metrics"
+      # -- protocol specifies the network protocol that the Service should use for the associated port.
+      protocol: "TCP"
+    # -- health-probe denotes a listening service named "health-probe".
+    health-probe:
+      # -- port is the port on which the Service will listen.
+      port: 8081
+      # -- targetPort is the port on which the Pod is listening.
+      targetPort: "health-probe"
+      # -- protocol specifies the network protocol that the Service should use for the associated port.
+      protocol: "TCP"
+    # -- broker-grpc denotes a listening service named "broker-grpc".
+    broker-grpc:
+      # -- port is the port on which the Service will listen.
+      port: 45000
+      # -- targetPort is the port on which the Pod is listening.
+      targetPort: "broker-grpc"
+      # -- protocol specifies the network protocol that the Service should use for the associated port.
+      protocol: "TCP"
+
+# -- serviceMonitor holds the configuration for the ServiceMonitor CRD.
+# A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should
+# discover and scrape metrics from the k8s-metacollector service.
+serviceMonitor:
+  # -- create specifies whether a ServiceMonitor CRD should be created for a Prometheus Operator.
+  # https://github.com/coreos/prometheus-operator
+  # Enable it only if the ServiceMonitor CRD is installed in your cluster.
+  create: false
+  # -- path at which the metrics are exposed by the k8s-metacollector.
+  path: /metrics
+  # -- labels is the set of labels to be applied to the ServiceMonitor resource. 
+  # If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right
+  # label here in order for the ServiceMonitor to be selected for target discovery.
+  labels: {}
+  # -- interval specifies the time interval at which Prometheus should scrape metrics from the service.
+  interval: 15s
+  # -- scheme specifies the network protocol used by the metrics endpoint. In this case HTTP.
+  scheme: http
+  # -- tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when
+  # scraping metrics from a service. It allows you to define the details of the TLS connection, such as
+  # CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support
+  # TLS configuration for the metrics endpoint.
+  tlsConfig: {}
+  # insecureSkipVerify: false
+  # caFile: /path/to/ca.crt
+  # certFile: /path/to/client.crt
+  # keyFile: /path/to/client.key
+  # -- scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request.
+  # If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for
+  # that target.
+  scrapeTimeout: 10s
+  # -- relabelings configures the relabeling rules to apply to the target’s metadata labels.
+  relabelings: []
+  # -- targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics.
+  targetLabels: []
+
+# -- resources defines the computing resources (CPU and memory) that are allocated to the containers running within the Pod.
+resources: {}
+  # We usually recommend not to specify default resources and to leave this as a conscious
+  # choice for the user. This also increases chances charts run on environments with little
+  # resources, such as Minikube. If you do want to specify resources, uncomment the following
+  # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+  # limits:
+  #   cpu: 100m
+  #   memory: 128Mi
+  # requests:
+  #   cpu: 100m
+  #   memory: 128Mi
+
+# -- nodeSelector specifies a set of key-value pairs that must match labels assigned to nodes
+# for the Pod to be eligible for scheduling on that node.
+nodeSelector: {}
+
+# -- tolerations are applied to pods and allow them to be scheduled on nodes with matching taints.
+tolerations: []
+
+# -- affinity allows pod placement based on node characteristics, or any other custom labels assigned to nodes.
+affinity: {}
+
+# -- healthChecks contains the configuration for liveness and readiness probes.
+healthChecks:
+  # -- livenessProbe is a diagnostic mechanism used to determine whether a container within a Pod is still running and healthy.
+  livenessProbe:
+    # -- httpGet specifies that the liveness probe will make an HTTP GET request to check the health of the container.
+    httpGet:
+      # -- path is the specific endpoint on which the HTTP GET request will be made.
+      path: /healthz
+      # -- port is the port on which the container exposes the "/healthz" endpoint.
+      port: 8081
+    # -- initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe.
+    initialDelaySeconds: 45
+    # -- timeoutSeconds is the number of seconds after which the probe times out.
+    timeoutSeconds: 5
+    # -- periodSeconds specifies the interval at which the liveness probe will be repeated.
+    periodSeconds: 15
+  # -- readinessProbe is a mechanism used to determine whether a container within a Pod is ready to serve traffic.
+  readinessProbe:
+    # -- httpGet specifies that the readiness probe will make an HTTP GET request to check whether the container is ready.
+    httpGet:
+      # -- path is the specific endpoint on which the HTTP GET request will be made.
+      path: /readyz
+      # -- port is the port on which the container exposes the "/readyz" endpoint.
+      port: 8081
+    # -- initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. 
+    initialDelaySeconds: 30
+    # -- timeoutSeconds is the number of seconds after which the probe times out.
+    timeoutSeconds: 5
+    # -- periodSeconds specifies the interval at which the readiness probe will be repeated.
+    periodSeconds: 15
+
+# -- grafana contains the configuration related to grafana.
+grafana:
+  # -- dashboards contains configuration for grafana dashboards.
+  dashboards:
+    # -- enabled specifies whether the dashboards should be deployed.
+    enabled: false
+    # -- configMaps to be deployed that contain a grafana dashboard.
+    configMaps:
+      # -- collector contains the configuration for collector's dashboard.
+      collector:
+        # -- name specifies the name for the configmap.
+        name: k8s-metacollector-grafana-dashboard
+        # -- namespace specifies the namespace for the configmap.
+        namespace: ""
+        # -- folder where the dashboard is stored by grafana.
+        folder: ""
diff --git a/charts/falco/falco/charts/falco/generated/helm-values.md b/charts/falco/falco/charts/falco/generated/helm-values.md
deleted file mode 100644
index aeb377310..000000000
--- a/charts/falco/falco/charts/falco/generated/helm-values.md
+++ /dev/null
@@ -1,130 +0,0 @@
-# Configuration values for falco chart
-`Chart version: v2.0.16`
-## Values
-
-| Key | Type | Default | Description |
-|-----|------|---------|-------------|
-| affinity | object | `{}` | Affinity constraint for pods' scheduling. |
-| certs | object | `{"ca":{"crt":""},"existingSecret":"","server":{"crt":"","key":""}}` | certificates used by webserver and grpc server. paste certificate content or use helm with --set-file or use existing secret containing key, crt, ca as well as pem bundle |
-| certs.ca.crt | string | `""` | CA certificate used by gRPC, webserver and AuditSink validation. |
-| certs.existingSecret | string | `""` | Existing secret containing the following key, crt and ca as well as the bundle pem. |
-| certs.server.crt | string | `""` | Certificate used by gRPC and webserver. 
| -| certs.server.key | string | `""` | Key used by gRPC and webserver. | -| collectors.containerd.enabled | bool | `true` | Enable ContainerD support. | -| collectors.containerd.socket | string | `"/run/containerd/containerd.sock"` | The path of the ContainerD socket. | -| collectors.crio.enabled | bool | `true` | Enable CRI-O support. | -| collectors.crio.socket | string | `"/run/crio/crio.sock"` | The path of the CRI-O socket. | -| collectors.docker.enabled | bool | `true` | Enable Docker support. | -| collectors.docker.socket | string | `"/var/run/docker.sock"` | The path of the Docker daemon socket. | -| collectors.enabled | bool | `true` | Enable/disable all the metadata collectors. | -| collectors.kubernetes.apiAuth | string | `"/var/run/secrets/kubernetes.io/serviceaccount/token"` | Provide the authentication method Falco should use to connect to the Kubernetes API. | -| collectors.kubernetes.apiUrl | string | `"https://$(KUBERNETES_SERVICE_HOST)"` | | -| collectors.kubernetes.enableNodeFilter | bool | `true` | If true, only the current node (on which Falco is running) will be considered when requesting metadata of pods to the API server. Disabling this option may have a performance penalty on large clusters. | -| collectors.kubernetes.enabled | bool | `true` | Enable Kubernetes meta data collection via a connection to the Kubernetes API server. When this option is disabled, Falco falls back to the container annotations to grap the meta data. In such a case, only the ID, name, namespace, labels of the pod will be available. 
| -| containerSecurityContext | object | `{}` | Set securityContext for the Falco container.For more info see the "falco.securityContext" helper in "pod-template.tpl" | -| controller.daemonset.updateStrategy.type | string | `"RollingUpdate"` | Perform rolling updates by default in the DaemonSet agent ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/ | -| controller.deployment.replicas | int | `1` | Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing. For more info check the section on Plugins in the README.md file. | -| controller.kind | string | `"daemonset"` | | -| customRules | object | `{}` | Third party rules enabled for Falco. More info on the dedicated section in README.md file. | -| driver.ebpf | object | `{"hostNetwork":false,"leastPrivileged":false,"path":null}` | Configuration section for ebpf driver. | -| driver.ebpf.hostNetwork | bool | `false` | Needed to enable eBPF JIT at runtime for performance reasons. Can be skipped if eBPF JIT is enabled from outside the container | -| driver.ebpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. This option is only supported with the eBPF driver and a kernel >= 5.8. Ensure the eBPF driver is enabled (i.e., setting the `driver.kind` option to `ebpf`). | -| driver.ebpf.path | string | `nil` | Path where the eBPF probe is located. It comes handy when the probe have been installed in the nodes using tools other than the init container deployed with the chart. | -| driver.enabled | bool | `true` | Set it to false if you want to deploy Falco without the drivers. Always set it to false when using Falco with plugins. | -| driver.kind | string | `"module"` | Tell Falco which driver to use. Available options: module (kernel driver) and ebpf (eBPF probe). 
| -| driver.loader | object | `{"enabled":true,"initContainer":{"args":[],"enabled":true,"env":{},"image":{"pullPolicy":"IfNotPresent","registry":"docker.io","repository":"falcosecurity/falco-driver-loader","tag":""},"resources":{},"securityContext":{}}}` | Configuration for the Falco init container. | -| driver.loader.enabled | bool | `true` | Enable/disable the init container. | -| driver.loader.initContainer.args | list | `[]` | Arguments to pass to the Falco driver loader init container. | -| driver.loader.initContainer.enabled | bool | `true` | Enable/disable the init container. | -| driver.loader.initContainer.env | object | `{}` | Extra environment variables that will be pass onto Falco driver loader init container. | -| driver.loader.initContainer.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | -| driver.loader.initContainer.image.registry | string | `"docker.io"` | The image registry to pull from. | -| driver.loader.initContainer.image.repository | string | `"falcosecurity/falco-driver-loader"` | The image repository to pull from. | -| driver.loader.initContainer.resources | object | `{}` | Resources requests and limits for the Falco driver loader init container. | -| driver.loader.initContainer.securityContext | object | `{}` | Security context for the Falco driver loader init container. Overrides the default security context. If driver.mode == "module" you must at least set `privileged: true`. | -| extra.args | list | `[]` | Extra command-line arguments. | -| extra.env | object | `{}` | Extra environment variables that will be pass onto Falco containers. | -| extra.initContainers | list | `[]` | Additional initContainers for Falco pods. | -| falco.buffered_outputs | bool | `false` | Whether or not output to any of the output channels below is buffered. Defaults to false | -| falco.file_output.enabled | bool | `false` | Enable file output for security notifications. 
| -| falco.file_output.filename | string | `"./events.txt"` | The filename for logging notifications. | -| falco.file_output.keep_alive | bool | `false` | Open file once or every time a new notification arrives. | -| falco.grpc | object | `{"bind_address":"unix:///var/run/falco/falco.sock","enabled":false,"threadiness":0}` | gRPC server using an unix socket | -| falco.grpc.bind_address | string | `"unix:///var/run/falco/falco.sock"` | Bind address for the grpc server. | -| falco.grpc.enabled | bool | `false` | Enable the Falco gRPC server. | -| falco.grpc.threadiness | int | `0` | Number of threads (and context) the gRPC server will use, 0 by default, which means "auto". | -| falco.grpc_output.enabled | bool | `false` | Enable the gRPC output and events will be kept in memory until you read them with a gRPC client. | -| falco.http_output.enabled | bool | `false` | Enable http output for security notifications. | -| falco.http_output.url | string | `""` | When including Falco inside a parent helm chart, you must set this since the auto-generated URL won't match (#280). | -| falco.http_output.user_agent | string | `"falcosecurity/falco"` | | -| falco.json_include_output_property | bool | `true` | When using json output, whether or not to include the "output" property itself (e.g. "File below a known binary directory opened for writing (user=root ....") in the json output. | -| falco.json_include_tags_property | bool | `true` | When using json output, whether or not to include the "tags" property itself in the json output. If set to true, outputs caused by rules with no tags will have a "tags" field set to an empty array. If set to false, the "tags" field will not be included in the json output at all. | -| falco.json_output | bool | `false` | Whether to output events in json or text. | -| falco.libs_logger.enabled | bool | `false` | Enable the libs logger. | -| falco.libs_logger.severity | string | `"debug"` | Minimum log severity to include in the libs logs. 
Note: this value is separate from the log level of the Falco logger and does not affect it. Can be one of "fatal", "critical", "error", "warning", "notice", "info", "debug", "trace". | -| falco.load_plugins | list | `[]` | Add here the names of the plugins that you want to be loaded by Falco. Please make sure that plugins have been configured under the "plugins" section before adding them here. | -| falco.log_level | string | `"info"` | Minimum log level to include in logs. Note: these levels are separate from the priority field of rules. This refers only to the log level of Falco's internal logging. Can be one of "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". | -| falco.log_stderr | bool | `true` | Send information logs to stderr. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. | -| falco.log_syslog | bool | `true` | Send information logs to syslog. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. | -| falco.metadata_download.chunk_wait_us | int | `1000` | Sleep time (in μs) for each download chunk when fetching metadata from Kubernetes. | -| falco.metadata_download.max_mb | int | `100` | Max allowed response size (in MB) when fetching metadata from Kubernetes. | -| falco.metadata_download.watch_freq_sec | int | `1` | Watch frequency (in seconds) when fetching metadata from Kubernetes. | -| falco.output_timeout | int | `2000` | Duration in milliseconds to wait before considering the output timeout deadline exceeded. | -| falco.outputs.max_burst | int | `1000` | Maximum number of tokens outstanding. | -| falco.outputs.rate | int | `1` | Number of tokens gained per second.
| -| falco.plugins | list | `[{"init_config":null,"library_path":"libk8saudit.so","name":"k8saudit","open_params":"http://:9765/k8s-audit"},{"library_path":"libcloudtrail.so","name":"cloudtrail"},{"init_config":"","library_path":"libjson.so","name":"json"}]` | Plugins configuration. Add here all plugins and their configuration. Please consult the plugins documentation for more info. Remember to add the plugin names to "load_plugins: []" in order to load them in Falco. | -| falco.priority | string | `"debug"` | Minimum rule priority level to load and run. All rules having a priority equal to or more severe than this level will be loaded/run. Can be one of "emergency", "alert", "critical", "error", "warning", "notice", "informational", "debug". | -| falco.program_output.enabled | bool | `false` | Enable program output for security notifications. | -| falco.program_output.keep_alive | bool | `false` | Start the program once or re-spawn it when a notification arrives. | -| falco.program_output.program | string | `"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"` | Command to execute for program output. | -| falco.rules_file | list | `["/etc/falco/falco_rules.yaml","/etc/falco/falco_rules.local.yaml","/etc/falco/rules.d"]` | The location of the rules files that will be consumed by Falco. | -| falco.stdout_output.enabled | bool | `true` | Enable stdout output for security notifications. | -| falco.syscall_event_drops.actions | list | `["log","alert"]` | Actions to be taken when system calls are dropped from the circular buffer. | -| falco.syscall_event_drops.max_burst | int | `1` | Max burst of messages emitted. | -| falco.syscall_event_drops.rate | float | `0.03333` | Rate at which log/alert messages are emitted.
| -| falco.syscall_event_drops.threshold | float | `0.1` | The messages are emitted when the percentage of dropped system calls with respect to the number of events in the last second is greater than the given threshold (a double in the range [0, 1]). | -| falco.syscall_event_timeouts.max_consecutives | int | `1000` | Maximum number of consecutive timeouts without an event after which you want Falco to alert. | -| falco.syslog_output.enabled | bool | `true` | Enable syslog output for security notifications. | -| falco.time_format_iso_8601 | bool | `false` | If true, the times displayed in log messages and output messages will be in ISO 8601. By default, times are displayed in the local time zone, as governed by /etc/localtime. | -| falco.watch_config_files | bool | `true` | Watch the config file and rules files for modification. When a file is modified, Falco will propagate the new config by reloading itself. | -| falco.webserver.enabled | bool | `true` | Enable the Falco embedded webserver. | -| falco.webserver.k8s_healthz_endpoint | string | `"/healthz"` | Endpoint where Falco exposes the health status. | -| falco.webserver.listen_port | int | `8765` | Port where the Falco embedded webserver listens for connections. | -| falco.webserver.ssl_certificate | string | `"/etc/falco/falco.pem"` | Certificate bundle path for the Falco embedded webserver. | -| falco.webserver.ssl_enabled | bool | `false` | Enable SSL on the Falco embedded webserver. | -| falcosidekick | object | `{"enabled":false,"fullfqdn":false,"listenPort":""}` | For configuration values, see https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml | -| falcosidekick.enabled | bool | `false` | Enable falcosidekick deployment. | -| falcosidekick.fullfqdn | bool | `false` | Enable usage of the full FQDN of the falcosidekick service (useful when a proxy is used). | -| falcosidekick.listenPort | string | `""` | Listen port.
Default value: 2801 | -| fullnameOverride | string | `""` | Same as nameOverride but for the fullname. | -| healthChecks | object | `{"livenessProbe":{"initialDelaySeconds":60,"periodSeconds":15,"timeoutSeconds":5},"readinessProbe":{"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}}` | Parameters used to configure the liveness and readiness probes. | -| healthChecks.livenessProbe.initialDelaySeconds | int | `60` | Tells the kubelet that it should wait X seconds before performing the first probe. | -| healthChecks.livenessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every X seconds. | -| healthChecks.livenessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. | -| healthChecks.readinessProbe.initialDelaySeconds | int | `30` | Tells the kubelet that it should wait X seconds before performing the first probe. | -| healthChecks.readinessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every X seconds. | -| healthChecks.readinessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. | -| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | -| image.registry | string | `"docker.io"` | The image registry to pull from. | -| image.repository | string | `"falcosecurity/falco-no-driver"` | The image repository to pull from. | -| image.tag | string | `""` | The image tag to pull. Overrides the image tag whose default is the chart appVersion. | -| imagePullSecrets | list | `[]` | Secrets containing credentials when pulling from private/secure registries. | -| mounts.enforceProcMount | bool | `false` | By default, `/proc` from the host is only mounted into the Falco pod when `driver.enabled` is set to `true`. This flag allows overriding this behaviour for edge cases where `/proc` is needed but the syscall data source is not enabled at the same time (e.g. for specific plugins).
| -| mounts.volumeMounts | list | `[]` | A list of volume mounts you want to add to the Falco pods. | -| mounts.volumes | list | `[]` | A list of volumes you want to add to the Falco pods. | -| nameOverride | string | `""` | Put here the new name if you want to override the release name used for Falco components. | -| nodeSelector | object | `{}` | Selectors used to deploy Falco on a given node/nodes. | -| podAnnotations | object | `{}` | Add additional pod annotations. | -| podLabels | object | `{}` | Add additional pod labels. | -| podPriorityClassName | string | `nil` | Set the pod priorityClassName. | -| podSecurityContext | object | `{}` | Set the securityContext for the pods. These security settings are overridden by the ones specified for the specific containers when there is overlap. | -| rbac.create | bool | `true` | Specifies whether the RBAC resources should be created. | -| resources.limits | object | `{"cpu":"1000m","memory":"1024Mi"}` | Maximum amount of resources that the Falco container could get. | -| resources.requests | object | `{"cpu":"100m","memory":"512Mi"}` | Although the resources needed are subjective to the actual workload, we provide sane defaults. If you have more questions or concerns, please refer to the #falco Slack channel for more info. | -| scc.create | bool | `true` | Create OpenShift's Security Context Constraint. | -| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. | -| serviceAccount.create | bool | `true` | Specifies whether a service account should be created. | -| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template. | -| services | string | `nil` | Network services configuration (scenario requirement). Add here your services to be deployed together with Falco. | -| tolerations | list | `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]` | Tolerations to allow Falco to run on Kubernetes 1.6 masters.
| -| tty | bool | `false` | Attach the Falco process to a tty inside the container. Needed to flush Falco logs as soon as they are emitted. Set it to "true" when you need the Falco logs to be immediately displayed. | diff --git a/charts/falco/falco/charts/falco/rules/application_rules.yaml b/charts/falco/falco/charts/falco/rules/application_rules.yaml deleted file mode 100644 index 4e11df96b..000000000 --- a/charts/falco/falco/charts/falco/rules/application_rules.yaml +++ /dev/null @@ -1,184 +0,0 @@ -# -# Copyright (C) 2019 The Falco Authors. -# -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -- required_engine_version: 2 - -################################################################ -# By default all application-related rules are disabled for -# performance reasons. Depending on the application(s) you use, -# uncomment the corresponding rule definitions for -# application-specific activity monitoring. 
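To make the values reference above concrete, here is a minimal sketch of an override file built only from keys documented in the table. The file name `my-values.yaml` and the falcosidekick URL are placeholder assumptions, not part of the chart:

```yaml
# my-values.yaml -- hypothetical override file, passed via `helm install ... -f my-values.yaml`
falco:
  json_output: true                       # emit events as JSON instead of text
  http_output:
    enabled: true
    url: "http://falcosidekick:2801"      # placeholder URL; port matches falcosidekick's default listen port
falcosidekick:
  enabled: true                           # deploy falcosidekick alongside Falco
tty: true                                 # flush Falco logs as soon as they are emitted
```

Any key not set in the override file keeps its default from the table above.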
-################################################################ - -# Elasticsearch ports -- macro: elasticsearch_cluster_port - condition: fd.sport=9300 -- macro: elasticsearch_api_port - condition: fd.sport=9200 -- macro: elasticsearch_port - condition: elasticsearch_cluster_port or elasticsearch_api_port - -# - rule: Elasticsearch unexpected network inbound traffic -# desc: inbound network traffic to elasticsearch on a port other than the standard ports -# condition: user.name = elasticsearch and inbound and not elasticsearch_port -# output: "Inbound network traffic to Elasticsearch on unexpected port (connection=%fd.name)" -# priority: WARNING - -# - rule: Elasticsearch unexpected network outbound traffic -# desc: outbound network traffic from elasticsearch on a port other than the standard ports -# condition: user.name = elasticsearch and outbound and not elasticsearch_cluster_port -# output: "Outbound network traffic from Elasticsearch on unexpected port (connection=%fd.name)" -# priority: WARNING - -# ActiveMQ ports -- macro: activemq_cluster_port - condition: fd.sport=61616 -- macro: activemq_web_port - condition: fd.sport=8161 -- macro: activemq_port - condition: activemq_web_port or activemq_cluster_port - -# - rule: Activemq unexpected network inbound traffic -# desc: inbound network traffic to activemq on a port other than the standard ports -# condition: user.name = activemq and inbound and not activemq_port -# output: "Inbound network traffic to ActiveMQ on unexpected port (connection=%fd.name)" -# priority: WARNING - -# - rule: Activemq unexpected network outbound traffic -# desc: outbound network traffic from activemq on a port other than the standard ports -# condition: user.name = activemq and outbound and not activemq_cluster_port -# output: "Outbound network traffic from ActiveMQ on unexpected port (connection=%fd.name)" -# priority: WARNING - -# Cassandra ports -# 
https://docs.datastax.com/en/cassandra/2.0/cassandra/security/secureFireWall_r.html -- macro: cassandra_thrift_client_port - condition: fd.sport=9160 -- macro: cassandra_cql_port - condition: fd.sport=9042 -- macro: cassandra_cluster_port - condition: fd.sport=7000 -- macro: cassandra_ssl_cluster_port - condition: fd.sport=7001 -- macro: cassandra_jmx_port - condition: fd.sport=7199 -- macro: cassandra_port - condition: > - cassandra_thrift_client_port or - cassandra_cql_port or cassandra_cluster_port or - cassandra_ssl_cluster_port or cassandra_jmx_port - -# - rule: Cassandra unexpected network inbound traffic -# desc: inbound network traffic to cassandra on a port other than the standard ports -# condition: user.name = cassandra and inbound and not cassandra_port -# output: "Inbound network traffic to Cassandra on unexpected port (connection=%fd.name)" -# priority: WARNING - -# - rule: Cassandra unexpected network outbound traffic -# desc: outbound network traffic from cassandra on a port other than the standard ports -# condition: user.name = cassandra and outbound and not (cassandra_ssl_cluster_port or cassandra_cluster_port) -# output: "Outbound network traffic from Cassandra on unexpected port (connection=%fd.name)" -# priority: WARNING - -# Couchdb ports -# https://github.com/davisp/couchdb/blob/master/etc/couchdb/local.ini -- macro: couchdb_httpd_port - condition: fd.sport=5984 -- macro: couchdb_httpd_ssl_port - condition: fd.sport=6984 -# xxx can't tell what clustering ports are used. not writing rules for this -# yet. 
- -# Fluentd ports -- macro: fluentd_http_port - condition: fd.sport=9880 -- macro: fluentd_forward_port - condition: fd.sport=24224 - -# - rule: Fluentd unexpected network inbound traffic -# desc: inbound network traffic to fluentd on a port other than the standard ports -# condition: user.name = td-agent and inbound and not (fluentd_forward_port or fluentd_http_port) -# output: "Inbound network traffic to Fluentd on unexpected port (connection=%fd.name)" -# priority: WARNING - -# - rule: Tdagent unexpected network outbound traffic -# desc: outbound network traffic from fluentd on a port other than the standard ports -# condition: user.name = td-agent and outbound and not fluentd_forward_port -# output: "Outbound network traffic from Fluentd on unexpected port (connection=%fd.name)" -# priority: WARNING - -# Gearman ports -# http://gearman.org/protocol/ -# - rule: Gearman unexpected network outbound traffic -# desc: outbound network traffic from gearman on a port other than the standard ports -# condition: user.name = gearman and outbound and outbound and not fd.sport = 4730 -# output: "Outbound network traffic from Gearman on unexpected port (connection=%fd.name)" -# priority: WARNING - -# Zookeeper -- macro: zookeeper_port - condition: fd.sport = 2181 - -# Kafka ports -# - rule: Kafka unexpected network inbound traffic -# desc: inbound network traffic to kafka on a port other than the standard ports -# condition: user.name = kafka and inbound and fd.sport != 9092 -# output: "Inbound network traffic to Kafka on unexpected port (connection=%fd.name)" -# priority: WARNING - -# Memcached ports -# - rule: Memcached unexpected network inbound traffic -# desc: inbound network traffic to memcached on a port other than the standard ports -# condition: user.name = memcached and inbound and fd.sport != 11211 -# output: "Inbound network traffic to Memcached on unexpected port (connection=%fd.name)" -# priority: WARNING - -# - rule: Memcached unexpected network outbound 
traffic -# desc: any outbound network traffic from memcached. memcached never initiates outbound connections. -# condition: user.name = memcached and outbound -# output: "Unexpected Memcached outbound connection (connection=%fd.name)" -# priority: WARNING - -# MongoDB ports -- macro: mongodb_server_port - condition: fd.sport = 27017 -- macro: mongodb_shardserver_port - condition: fd.sport = 27018 -- macro: mongodb_configserver_port - condition: fd.sport = 27019 -- macro: mongodb_webserver_port - condition: fd.sport = 28017 -# - rule: Mongodb unexpected network inbound traffic -# desc: inbound network traffic to mongodb on a port other than the standard ports -# condition: > -# user.name = mongodb and inbound and not (mongodb_server_port or -# mongodb_shardserver_port or mongodb_configserver_port or mongodb_webserver_port) -# output: "Inbound network traffic to MongoDB on unexpected port (connection=%fd.name)" -# priority: WARNING - -# MySQL ports -# - rule: Mysql unexpected network inbound traffic -# desc: inbound network traffic to mysql on a port other than the standard ports -# condition: user.name = mysql and inbound and fd.sport != 3306 -# output: "Inbound network traffic to MySQL on unexpected port (connection=%fd.name)" -# priority: WARNING - -# - rule: HTTP server unexpected network inbound traffic -# desc: inbound network traffic to a http server program on a port other than the standard ports -# condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443 -# output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)" -# priority: WARNING \ No newline at end of file diff --git a/charts/falco/falco/charts/falco/rules/aws_cloudtrail_rules.yaml b/charts/falco/falco/charts/falco/rules/aws_cloudtrail_rules.yaml deleted file mode 100644 index 8c209d574..000000000 --- a/charts/falco/falco/charts/falco/rules/aws_cloudtrail_rules.yaml +++ /dev/null @@ -1,440 +0,0 @@ -# -# Copyright (C) 2022 The 
Falco Authors. -# -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -# All rules files related to plugins should require at least engine version 10 -- required_engine_version: 10 - -- required_plugin_versions: - - name: cloudtrail - version: 0.2.3 - - name: json - version: 0.2.2 - -# Note that this rule is disabled by default. It's useful only to -# verify that the cloudtrail plugin is sending events properly. The -# very broad condition evt.num > 0 only works because the rule source -# is limited to aws_cloudtrail. This ensures that the only events that -# are matched against the rule are from the cloudtrail plugin (or -# a different plugin with the same source). -- rule: All Cloudtrail Events - desc: Match all cloudtrail events. - condition: - evt.num > 0 - output: Some Cloudtrail Event (evtnum=%evt.num info=%evt.plugininfo ts=%evt.time.iso8601 id=%ct.id error=%ct.error) - priority: DEBUG - tags: - - cloud - - aws - source: aws_cloudtrail - enabled: false - -- rule: Console Login Through Assume Role - desc: Detect a console login through Assume Role. 
- condition: - ct.name="ConsoleLogin" and not ct.error exists - and ct.user.identitytype="AssumedRole" - and json.value[/responseElements/ConsoleLogin]="Success" - output: - Detected a console login through Assume Role - (principal=%ct.user.principalid, - assumedRole=%ct.user.arn, - requesting IP=%ct.srcip, - AWS region=%ct.region) - priority: WARNING - tags: - - cloud - - aws - - aws_console - - aws_iam - source: aws_cloudtrail - -- rule: Console Login Without MFA - desc: Detect a console login without MFA. - condition: - ct.name="ConsoleLogin" and not ct.error exists - and ct.user.identitytype!="AssumedRole" - and json.value[/responseElements/ConsoleLogin]="Success" - and json.value[/additionalEventData/MFAUsed]="No" - output: - Detected a console login without MFA - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region) - priority: CRITICAL - tags: - - cloud - - aws - - aws_console - - aws_iam - source: aws_cloudtrail - -- rule: Console Root Login Without MFA - desc: Detect root console login without MFA. - condition: - ct.name="ConsoleLogin" and not ct.error exists - and json.value[/additionalEventData/MFAUsed]="No" - and ct.user.identitytype!="AssumedRole" - and json.value[/responseElements/ConsoleLogin]="Success" - and ct.user.identitytype="Root" - output: - Detected a root console login without MFA. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region) - priority: CRITICAL - tags: - - cloud - - aws - - aws_console - - aws_iam - source: aws_cloudtrail - -- rule: Deactivate MFA for Root User - desc: Detect deactivating MFA configuration for root. 
- condition: - ct.name="DeactivateMFADevice" and not ct.error exists - and ct.user.identitytype="Root" - and ct.request.username="AWS ROOT USER" - output: - Multi Factor Authentication configuration has been disabled for root - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - MFA serial number=%ct.request.serialnumber) - priority: CRITICAL - tags: - - cloud - - aws - - aws_iam - source: aws_cloudtrail - -- rule: Create AWS user - desc: Detect creation of a new AWS user. - condition: - ct.name="CreateUser" and not ct.error exists - output: - A new AWS user has been created - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - new user created=%ct.request.username) - priority: INFO - tags: - - cloud - - aws - - aws_iam - source: aws_cloudtrail - -- rule: Create Group - desc: Detect creation of a new user group. - condition: - ct.name="CreateGroup" and not ct.error exists - output: - A new user group has been created. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - group name=%ct.request.groupname) - priority: WARNING - tags: - - cloud - - aws - - aws_iam - source: aws_cloudtrail - -- rule: Delete Group - desc: Detect deletion of a user group. - condition: - ct.name="DeleteGroup" and not ct.error exists - output: - A user group has been deleted. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - group name=%ct.request.groupname) - priority: WARNING - tags: - - cloud - - aws - - aws_iam - source: aws_cloudtrail - -- rule: ECS Service Created - desc: Detect a new service is created in ECS. 
- condition: - ct.src="ecs.amazonaws.com" and - ct.name="CreateService" and - not ct.error exists - output: - A new service has been created in ECS - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - cluster=%ct.request.cluster, - service name=%ct.request.servicename, - task definition=%ct.request.taskdefinition) - priority: WARNING - tags: - - cloud - - aws - - aws_ecs - - aws_fargate - source: aws_cloudtrail - -- rule: ECS Task Run or Started - desc: Detect a new task is started in ECS. - condition: - ct.src="ecs.amazonaws.com" and - (ct.name="RunTask" or ct.name="StartTask") and - not ct.error exists - output: - A new task has been started in ECS - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - cluster=%ct.request.cluster, - task definition=%ct.request.taskdefinition) - priority: WARNING - tags: - - cloud - - aws - - aws_ecs - - aws_fargate - source: aws_cloudtrail - -- rule: Create Lambda Function - desc: Detect creation of a Lambda function. - condition: - ct.name="CreateFunction20150331" and not ct.error exists - output: - Lambda function has been created. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - lambda function=%ct.request.functionname) - priority: WARNING - tags: - - cloud - - aws - - aws_lambda - source: aws_cloudtrail - -- rule: Update Lambda Function Code - desc: Detect updates to a Lambda function code. - condition: - ct.name="UpdateFunctionCode20150331v2" and not ct.error exists - output: - The code of a Lambda function has been updated. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - lambda function=%ct.request.functionname) - priority: WARNING - tags: - - cloud - - aws - - aws_lambda - source: aws_cloudtrail - -- rule: Update Lambda Function Configuration - desc: Detect updates to a Lambda function configuration. 
- condition: - ct.name="UpdateFunctionConfiguration20150331v2" and not ct.error exists - output: - The configuration of a Lambda function has been updated. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - lambda function=%ct.request.functionname) - priority: WARNING - tags: - - cloud - - aws - - aws_lambda - source: aws_cloudtrail - -- rule: Run Instances - desc: Detect launching of a specified number of instances. - condition: - ct.name="RunInstances" and not ct.error exists - output: - A number of instances have been launched. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - availability zone=%ct.request.availabilityzone, - subnet id=%ct.response.subnetid, - reservation id=%ct.response.reservationid) - priority: WARNING - tags: - - cloud - - aws - - aws_ec2 - source: aws_cloudtrail - -# Only instances launched on regions in this list are approved. -- list: approved_regions - items: - - us-east-0 - -- rule: Run Instances in Non-approved Region - desc: Detect launching of a specified number of instances in a non-approved region. - condition: - ct.name="RunInstances" and not ct.error exists and - not ct.region in (approved_regions) - output: - A number of instances have been launched in a non-approved region. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - availability zone=%ct.request.availabilityzone, - subnet id=%ct.response.subnetid, - reservation id=%ct.response.reservationid, - image id=%json.value[/responseElements/instancesSet/items/0/instanceId]) - priority: WARNING - tags: - - cloud - - aws - - aws_ec2 - source: aws_cloudtrail - -- rule: Delete Bucket Encryption - desc: Detect deleting configuration to use encryption for bucket storage. 
- condition: - ct.name="DeleteBucketEncryption" and not ct.error exists - output: - A encryption configuration for a bucket has been deleted - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - bucket=%s3.bucket) - priority: CRITICAL - tags: - - cloud - - aws - - aws_s3 - source: aws_cloudtrail - -- rule: Delete Bucket Public Access Block - desc: Detect deleting blocking public access to bucket. - condition: - ct.name="PutBucketPublicAccessBlock" and not ct.error exists and - json.value[/requestParameters/publicAccessBlock]="" and - (json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]=false or - json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]=false or - json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]=false or - json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]=false) - output: - A public access block for a bucket has been deleted - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - bucket=%s3.bucket) - priority: CRITICAL - tags: - - cloud - - aws - - aws_s3 - source: aws_cloudtrail - -- rule: List Buckets - desc: Detect listing of all S3 buckets. - condition: - ct.name="ListBuckets" and not ct.error exists - output: - A list of all S3 buckets has been requested. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - host=%ct.request.host) - priority: WARNING - enabled: false - tags: - - cloud - - aws - - aws_s3 - source: aws_cloudtrail - -- rule: Put Bucket ACL - desc: Detect setting the permissions on an existing bucket using access control lists. - condition: - ct.name="PutBucketAcl" and not ct.error exists - output: - The permissions on an existing bucket have been set using access control lists. 
- (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - bucket name=%s3.bucket) - priority: WARNING - tags: - - cloud - - aws - - aws_s3 - source: aws_cloudtrail - -- rule: Put Bucket Policy - desc: Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket. - condition: - ct.name="PutBucketPolicy" and not ct.error exists - output: - An Amazon S3 bucket policy has been applied to an Amazon S3 bucket. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - bucket name=%s3.bucket, - policy=%ct.request.policy) - priority: WARNING - tags: - - cloud - - aws - - aws_s3 - source: aws_cloudtrail - -- rule: CloudTrail Trail Created - desc: Detect creation of a new trail. - condition: - ct.name="CreateTrail" and not ct.error exists - output: - A new trail has been created. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - trail name=%ct.request.name) - priority: WARNING - tags: - - cloud - - aws - - aws_cloudtrail - source: aws_cloudtrail - -- rule: CloudTrail Logging Disabled - desc: The CloudTrail logging has been disabled, this could be potentially malicious. - condition: - ct.name="StopLogging" and not ct.error exists - output: - The CloudTrail logging has been disabled. - (requesting user=%ct.user, - requesting IP=%ct.srcip, - AWS region=%ct.region, - resource name=%ct.request.name) - priority: WARNING - tags: - - cloud - - aws - - aws_cloudtrail - source: aws_cloudtrail - diff --git a/charts/falco/falco/charts/falco/rules/falco_rules.local.yaml b/charts/falco/falco/charts/falco/rules/falco_rules.local.yaml deleted file mode 100644 index f1811a4c9..000000000 --- a/charts/falco/falco/charts/falco/rules/falco_rules.local.yaml +++ /dev/null @@ -1,30 +0,0 @@ -# -# Copyright (C) 2019 The Falco Authors. -# -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -#################### -# Your custom rules! -#################### - -# Add new rules, like this one -# - rule: The program "sudo" is run in a container -# desc: An event will trigger every time you run sudo in a container -# condition: evt.type = execve and evt.dir=< and container.id != host and proc.name = sudo -# output: "Sudo run in container (user=%user.name %container.info parent=%proc.pname cmdline=%proc.cmdline)" -# priority: ERROR -# tags: [users, container] - -# Or override/append to any rule, macro, or list from the Default Rules diff --git a/charts/falco/falco/charts/falco/rules/falco_rules.yaml b/charts/falco/falco/charts/falco/rules/falco_rules.yaml deleted file mode 100644 index 6b07290f0..000000000 --- a/charts/falco/falco/charts/falco/rules/falco_rules.yaml +++ /dev/null @@ -1,3199 +0,0 @@ -# -# Copyright (C) 2022 The Falco Authors. -# -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -# The latest Falco Engine version is 9. -# Starting with version 8, the Falco engine supports exceptions. 
-# However the Falco rules file does not use them by default. -- required_engine_version: 9 - -# Currently disabled as read/write are ignored syscalls. The nearly -# similar open_write/open_read check for files being opened for -# reading/writing. -# - macro: write -# condition: (syscall.type=write and fd.type in (file, directory)) -# - macro: read -# condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory)) - -- macro: open_write - condition: evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0 - -- macro: open_read - condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0 - -- macro: open_directory - condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0 - -- macro: never_true - condition: (evt.num=0) - -- macro: always_true - condition: (evt.num>=0) - -# In some cases, such as dropped system call events, information about -# the process name may be missing. For some rules that really depend -# on the identity of the process performing an action such as opening -# a file, etc., we require that the process name be known. 
-- macro: proc_name_exists - condition: (proc.name!="") - -- macro: rename - condition: evt.type in (rename, renameat, renameat2) - -- macro: mkdir - condition: evt.type in (mkdir, mkdirat) - -- macro: remove - condition: evt.type in (rmdir, unlink, unlinkat) - -- macro: modify - condition: rename or remove - -- macro: spawned_process - condition: evt.type in (execve, execveat) and evt.dir=< - -- macro: create_symlink - condition: evt.type in (symlink, symlinkat) and evt.dir=< - -- macro: create_hardlink - condition: evt.type in (link, linkat) and evt.dir=< - -- macro: chmod - condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<) - -# File categories -- macro: bin_dir - condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin) - -- macro: bin_dir_mkdir - condition: > - (evt.arg.path startswith /bin/ or - evt.arg.path startswith /sbin/ or - evt.arg.path startswith /usr/bin/ or - evt.arg.path startswith /usr/sbin/) - -- macro: bin_dir_rename - condition: > - (evt.arg.path startswith /bin/ or - evt.arg.path startswith /sbin/ or - evt.arg.path startswith /usr/bin/ or - evt.arg.path startswith /usr/sbin/ or - evt.arg.name startswith /bin/ or - evt.arg.name startswith /sbin/ or - evt.arg.name startswith /usr/bin/ or - evt.arg.name startswith /usr/sbin/ or - evt.arg.oldpath startswith /bin/ or - evt.arg.oldpath startswith /sbin/ or - evt.arg.oldpath startswith /usr/bin/ or - evt.arg.oldpath startswith /usr/sbin/ or - evt.arg.newpath startswith /bin/ or - evt.arg.newpath startswith /sbin/ or - evt.arg.newpath startswith /usr/bin/ or - evt.arg.newpath startswith /usr/sbin/) - -- macro: etc_dir - condition: fd.name startswith /etc/ - -# This detects writes immediately below / or any write anywhere below /root -- macro: root_dir - condition: (fd.directory=/ or fd.name startswith /root/) - -- list: shell_binaries - items: [ash, bash, csh, ksh, sh, tcsh, zsh, dash] - -- list: ssh_binaries - items: [ - sshd, sftp-server, ssh-agent, - ssh, scp, sftp, - ssh-keygen, 
ssh-keysign, ssh-keyscan, ssh-add - ] - -- list: shell_mgmt_binaries - items: [add-shell, remove-shell] - -- macro: shell_procs - condition: proc.name in (shell_binaries) - -- list: coreutils_binaries - items: [ - truncate, sha1sum, numfmt, fmt, fold, uniq, cut, who, - groups, csplit, sort, expand, printf, printenv, unlink, tee, chcon, stat, - basename, split, nice, "yes", whoami, sha224sum, hostid, users, stdbuf, - base64, unexpand, cksum, od, paste, nproc, pathchk, sha256sum, wc, test, - comm, arch, du, factor, sha512sum, md5sum, tr, runcon, env, dirname, - tsort, join, shuf, install, logname, pinky, nohup, expr, pr, tty, timeout, - tail, "[", seq, sha384sum, nl, head, id, mkfifo, sum, dircolors, ptx, shred, - tac, link, chroot, vdir, chown, touch, ls, dd, uname, "true", pwd, date, - chgrp, chmod, mktemp, cat, mknod, sync, ln, "false", rm, mv, cp, echo, - readlink, sleep, stty, mkdir, df, dir, rmdir, touch - ] - -# dpkg -L login | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," -- list: login_binaries - items: [ - login, systemd, '"(systemd)"', systemd-logind, su, - nologin, faillog, lastlog, newgrp, sg - ] - -# dpkg -L passwd | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," -- list: passwd_binaries - items: [ - shadowconfig, grpck, pwunconv, grpconv, pwck, - groupmod, vipw, pwconv, useradd, newusers, cppw, chpasswd, usermod, - groupadd, groupdel, grpunconv, chgpasswd, userdel, chage, chsh, - gpasswd, chfn, expiry, passwd, vigr, cpgr, adduser, addgroup, deluser, delgroup - ] - -# repoquery -l shadow-utils | grep bin | xargs ls -ld | grep -v '^d' | -# awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," -- list: shadowutils_binaries - items: [ - chage, gpasswd, lastlog, newgrp, sg, adduser, deluser, chpasswd, - groupadd, groupdel, addgroup, delgroup, groupmems, groupmod, grpck, grpconv, grpunconv, - newusers, pwck, pwconv, pwunconv, useradd, userdel, usermod, vigr, 
vipw, unix_chkpwd - ] - -- list: sysdigcloud_binaries - items: [setup-backend, dragent, sdchecks] - -- list: docker_binaries - items: [docker, dockerd, exe, docker-compose, docker-entrypoi, docker-runc-cur, docker-current, dockerd-current] - -- list: k8s_binaries - items: [hyperkube, skydns, kube2sky, exechealthz, weave-net, loopback, bridge, openshift-sdn, openshift] - -- list: lxd_binaries - items: [lxd, lxcfs] - -- list: http_server_binaries - items: [nginx, httpd, httpd-foregroun, lighttpd, apache, apache2] - -- list: db_server_binaries - items: [mysqld, postgres, sqlplus] - -- list: postgres_mgmt_binaries - items: [pg_dumpall, pg_ctl, pg_lsclusters, pg_ctlcluster] - -- list: nosql_server_binaries - items: [couchdb, memcached, redis-server, rabbitmq-server, mongod] - -- list: gitlab_binaries - items: [gitlab-shell, gitlab-mon, gitlab-runner-b, git] - -- list: interpreted_binaries - items: [lua, node, perl, perl5, perl6, php, python, python2, python3, ruby, tcl] - -- macro: interpreted_procs - condition: > - (proc.name in (interpreted_binaries)) - -- macro: server_procs - condition: proc.name in (http_server_binaries, db_server_binaries, docker_binaries, sshd) - -# The explicit quotes are needed to avoid the - characters being -# interpreted by the filter expression. 
-- list: rpm_binaries - items: [dnf, rpm, rpmkey, yum, '"75-system-updat"', rhsmcertd-worke, rhsmcertd, subscription-ma, - repoquery, rpmkeys, rpmq, yum-cron, yum-config-mana, yum-debug-dump, - abrt-action-sav, rpmdb_stat, microdnf, rhn_check, yumdb] - -- list: openscap_rpm_binaries - items: [probe_rpminfo, probe_rpmverify, probe_rpmverifyfile, probe_rpmverifypackage] - -- macro: rpm_procs - condition: (proc.name in (rpm_binaries, openscap_rpm_binaries) or proc.name in (salt-minion)) - -- list: deb_binaries - items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude, - frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key, - apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai - ] - -# The truncated dpkg-preconfigu is intentional, process names are -# truncated at the falcosecurity-libs level. -- list: package_mgmt_binaries - items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd] - -- macro: package_mgmt_procs - condition: proc.name in (package_mgmt_binaries) - -- macro: package_mgmt_ancestor_procs - condition: proc.pname in (package_mgmt_binaries) or - proc.aname[2] in (package_mgmt_binaries) or - proc.aname[3] in (package_mgmt_binaries) or - proc.aname[4] in (package_mgmt_binaries) - -- macro: coreos_write_ssh_dir - condition: (proc.name=update-ssh-keys and fd.name startswith /home/core/.ssh) - -- macro: run_by_package_mgmt_binaries - condition: proc.aname in (package_mgmt_binaries, needrestart) - -- list: ssl_mgmt_binaries - items: [ca-certificates] - -- list: dhcp_binaries - items: [dhclient, dhclient-script, 11-dhclient] - -# A canonical set of processes that run other programs with different -# privileges or as a different user. 
-- list: userexec_binaries - items: [sudo, su, suexec, critical-stack, dzdo] - -- list: known_setuid_binaries - items: [ - sshd, dbus-daemon-lau, ping, ping6, critical-stack-, pmmcli, - filemng, PassengerAgent, bwrap, osdetect, nginxmng, sw-engine-fpm, - start-stop-daem - ] - -- list: user_mgmt_binaries - items: [login_binaries, passwd_binaries, shadowutils_binaries] - -- list: dev_creation_binaries - items: [blkid, rename_device, update_engine, sgdisk] - -- list: hids_binaries - items: [aide, aide.wrapper, update-aide.con, logcheck, syslog-summary, osqueryd, ossec-syscheckd] - -- list: vpn_binaries - items: [openvpn] - -- list: nomachine_binaries - items: [nxexec, nxnode.bin, nxserver.bin, nxclient.bin] - -- macro: system_procs - condition: proc.name in (coreutils_binaries, user_mgmt_binaries) - -- list: mail_binaries - items: [ - sendmail, sendmail-msp, postfix, procmail, exim4, - pickup, showq, mailq, dovecot, imap-login, imap, - mailmng-core, pop3-login, dovecot-lda, pop3 - ] - -- list: mail_config_binaries - items: [ - update_conf, parse_mc, makemap_hash, newaliases, update_mk, update_tlsm4, - update_db, update_mc, ssmtp.postinst, mailq, postalias, postfix.config., - postfix.config, postfix-script, postconf - ] - -- list: sensitive_file_names - items: [/etc/shadow, /etc/sudoers, /etc/pam.conf, /etc/security/pwquality.conf] - -- list: sensitive_directory_names - items: [/, /etc, /etc/, /root, /root/] - -- macro: sensitive_files - condition: > - fd.name startswith /etc and - (fd.name in (sensitive_file_names) - or fd.directory in (/etc/sudoers.d, /etc/pam.d)) - -# Indicates that the process is new. Currently detected using time -# since process was started, using a threshold of 5 seconds. 
-- macro: proc_is_new - condition: proc.duration <= 5000000000 - -# Network -- macro: inbound - condition: > - (((evt.type in (accept,listen) and evt.dir=<) or - (evt.type in (recvfrom,recvmsg) and evt.dir=< and - fd.l4proto != tcp and fd.connected=false and fd.name_changed=true)) and - (fd.typechar = 4 or fd.typechar = 6) and - (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and - (evt.rawres >= 0 or evt.res = EINPROGRESS)) - -# RFC1918 addresses were assigned for private network usage -- list: rfc_1918_addresses - items: ['"10.0.0.0/8"', '"172.16.0.0/12"', '"192.168.0.0/16"'] - -- macro: outbound - condition: > - (((evt.type = connect and evt.dir=<) or - (evt.type in (sendto,sendmsg) and evt.dir=< and - fd.l4proto != tcp and fd.connected=false and fd.name_changed=true)) and - (fd.typechar = 4 or fd.typechar = 6) and - (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and - (evt.rawres >= 0 or evt.res = EINPROGRESS)) - -# Very similar to inbound/outbound, but combines the tests together -# for efficiency. -- macro: inbound_outbound - condition: > - ((((evt.type in (accept,listen,connect) and evt.dir=<)) and - (fd.typechar = 4 or fd.typechar = 6)) and - (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and - (evt.rawres >= 0 or evt.res = EINPROGRESS)) - -- macro: ssh_port - condition: fd.sport=22 - -# In a local/user rules file, you could override this macro to -# enumerate the servers for which ssh connections are allowed. For -# example, you might have a ssh gateway host for which ssh connections -# are allowed. -# -# In the main falco rules file, there isn't any way to know the -# specific hosts for which ssh access is allowed, so this macro just -# repeats ssh_port, which effectively allows ssh from all hosts. In -# the overridden macro, the condition would look something like -# "fd.sip="a.b.c.d" or fd.sip="e.f.g.h" or ..." 
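The comment above describes how a local rules file can narrow SSH detection to known gateway hosts. A minimal sketch of such an override, following the macro-redefinition convention this rules file uses (the IP addresses are placeholders, not values from this chart):

```yaml
# falco_rules.local.yaml — illustrative override only; replace the
# placeholder IPs with your actual ssh gateway hosts.
- macro: allowed_ssh_hosts
  condition: (ssh_port and (fd.sip="10.0.5.1" or fd.sip="10.0.5.2"))
```

Because local rules files load after the main rules file, this redefinition takes precedence, and the `Disallowed SSH Connection` rule then fires only for connections on the ssh port that do not involve the listed hosts.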
-- macro: allowed_ssh_hosts - condition: ssh_port - -- rule: Disallowed SSH Connection - desc: Detect any new ssh connection to a host other than those in an allowed group of hosts - condition: (inbound_outbound) and ssh_port and not allowed_ssh_hosts - output: Disallowed SSH Connection (command=%proc.cmdline connection=%fd.name user=%user.name user_loginuid=%user.loginuid container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [network, mitre_remote_service] - -# These rules and supporting macros are more of an example for how to -# use the fd.*ip and fd.*ip.name fields to match connection -# information against ips, netmasks, and complete domain names. -# -# To use this rule, you should modify consider_all_outbound_conns and -# populate allowed_{source,destination}_{ipaddrs,networks,domains} with the -# values that make sense for your environment. -- macro: consider_all_outbound_conns - condition: (never_true) - -# Note that this can be either individual IPs or netmasks -- list: allowed_outbound_destination_ipaddrs - items: ['"127.0.0.1"', '"8.8.8.8"'] - -- list: allowed_outbound_destination_networks - items: ['"127.0.0.1/8"'] - -- list: allowed_outbound_destination_domains - items: [google.com, www.yahoo.com] - -- rule: Unexpected outbound connection destination - desc: Detect any outbound connection to a destination outside of an allowed set of ips, networks, or domain names - condition: > - consider_all_outbound_conns and outbound and not - ((fd.sip in (allowed_outbound_destination_ipaddrs)) or - (fd.snet in (allowed_outbound_destination_networks)) or - (fd.sip.name in (allowed_outbound_destination_domains))) - output: Disallowed outbound connection destination (command=%proc.cmdline connection=%fd.name user=%user.name user_loginuid=%user.loginuid container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [network] - -- macro: consider_all_inbound_conns - condition: (never_true) - -- list: 
allowed_inbound_source_ipaddrs - items: ['"127.0.0.1"'] - -- list: allowed_inbound_source_networks - items: ['"127.0.0.1/8"', '"10.0.0.0/8"'] - -- list: allowed_inbound_source_domains - items: [google.com] - -- rule: Unexpected inbound connection source - desc: Detect any inbound connection from a source outside of an allowed set of ips, networks, or domain names - condition: > - consider_all_inbound_conns and inbound and not - ((fd.cip in (allowed_inbound_source_ipaddrs)) or - (fd.cnet in (allowed_inbound_source_networks)) or - (fd.cip.name in (allowed_inbound_source_domains))) - output: Disallowed inbound connection source (command=%proc.cmdline connection=%fd.name user=%user.name user_loginuid=%user.loginuid container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [network] - -- list: bash_config_filenames - items: [.bashrc, .bash_profile, .bash_history, .bash_login, .bash_logout, .inputrc, .profile] - -- list: bash_config_files - items: [/etc/profile, /etc/bashrc] - -# Covers both csh and tcsh -- list: csh_config_filenames - items: [.cshrc, .login, .logout, .history, .tcshrc, .cshdirs] - -- list: csh_config_files - items: [/etc/csh.cshrc, /etc/csh.login] - -- list: zsh_config_filenames - items: [.zshenv, .zprofile, .zshrc, .zlogin, .zlogout] - -- list: shell_config_filenames - items: [bash_config_filenames, csh_config_filenames, zsh_config_filenames] - -- list: shell_config_files - items: [bash_config_files, csh_config_files] - -- list: shell_config_directories - items: [/etc/zsh] - -- macro: user_known_shell_config_modifiers - condition: (never_true) - -- rule: Modify Shell Configuration File - desc: Detect attempt to modify shell configuration files - condition: > - open_write and - (fd.filename in (shell_config_filenames) or - fd.name in (shell_config_files) or - fd.directory in (shell_config_directories)) - and not proc.name in (shell_binaries) - and not exe_running_docker_save - and not user_known_shell_config_modifiers - 
output: > - a shell configuration file has been modified (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline pcmdline=%proc.pcmdline file=%fd.name container_id=%container.id image=%container.image.repository) - priority: - WARNING - tags: [file, mitre_persistence] - -# This rule is not enabled by default, as there are many legitimate -# readers of shell config files. If you want to enable it, modify the -# following macro. - -- macro: consider_shell_config_reads - condition: (never_true) - -- rule: Read Shell Configuration File - desc: Detect attempts to read shell configuration files by non-shell programs - condition: > - open_read and - consider_shell_config_reads and - (fd.filename in (shell_config_filenames) or - fd.name in (shell_config_files) or - fd.directory in (shell_config_directories)) and - (not proc.name in (shell_binaries)) - output: > - a shell configuration file was read by a non-shell program (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline file=%fd.name container_id=%container.id image=%container.image.repository) - priority: - WARNING - tags: [file, mitre_discovery] - -- macro: consider_all_cron_jobs - condition: (never_true) - -- macro: user_known_cron_jobs - condition: (never_true) - -- rule: Schedule Cron Jobs - desc: Detect cron jobs scheduled - condition: > - ((open_write and fd.name startswith /etc/cron) or - (spawned_process and proc.name = "crontab")) and - consider_all_cron_jobs and - not user_known_cron_jobs - output: > - Cron jobs were scheduled to run (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline - file=%fd.name container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: - NOTICE - tags: [file, mitre_persistence] - -# Use this to test whether the event occurred within a container. 
- -# When displaying container information in the output field, use -# %container.info, without any leading term (file=%fd.name -# %container.info user=%user.name user_loginuid=%user.loginuid, and not file=%fd.name -# container=%container.info user=%user.name user_loginuid=%user.loginuid). The output will change -# based on the context and whether or not -pk/-pm/-pc was specified on -# the command line. -- macro: container - condition: (container.id != host) - -- macro: container_started - condition: > - ((evt.type = container or - (spawned_process and proc.vpid=1)) and - container.image.repository != incomplete) - -- macro: interactive - condition: > - ((proc.aname=sshd and proc.name != sshd) or - proc.name=systemd-logind or proc.name=login) - -- list: cron_binaries - items: [anacron, cron, crond, crontab] - -# https://github.com/liske/needrestart -- list: needrestart_binaries - items: [needrestart, 10-dpkg, 20-rpm, 30-pacman] - -# Possible scripts run by sshkit -- list: sshkit_script_binaries - items: [10_etc_sudoers., 10_passwd_group] - -- list: plesk_binaries - items: [sw-engine, sw-engine-fpm, sw-engine-kv, filemng, f2bmng] - -# System users that should never log into a system. Consider adding your own -# service users (e.g. 'apache' or 'mysqld') here. 
-- macro: system_users - condition: user.name in (bin, daemon, games, lp, mail, nobody, sshd, sync, uucp, www-data) - -- macro: httpd_writing_ssl_conf - condition: > - (proc.pname=run-httpd and - (proc.cmdline startswith "sed -ri" or proc.cmdline startswith "sed -i") and - (fd.name startswith /etc/httpd/conf.d/ or fd.name startswith /etc/httpd/conf)) - -- macro: userhelper_writing_etc_security - condition: (proc.name=userhelper and fd.name startswith /etc/security) - -- macro: ansible_running_python - condition: (proc.name in (python, pypy, python3) and proc.cmdline contains ansible) - -- macro: python_running_chef - condition: (proc.name=python and (proc.cmdline contains yum-dump.py or proc.cmdline="python /usr/bin/chef-monitor.py")) - -- macro: python_running_denyhosts - condition: > - (proc.name=python and - (proc.cmdline contains /usr/sbin/denyhosts or - proc.cmdline contains /usr/local/bin/denyhosts.py)) - -# Qualys seems to run a variety of shell subprocesses, at various -# levels. This checks at a few levels without the cost of a full -# proc.aname, which traverses the full parent hierarchy. -- macro: run_by_qualys - condition: > - (proc.pname=qualys-cloud-ag or - proc.aname[2]=qualys-cloud-ag or - proc.aname[3]=qualys-cloud-ag or - proc.aname[4]=qualys-cloud-ag) - -- macro: run_by_sumologic_securefiles - condition: > - ((proc.cmdline="usermod -a -G sumologic_collector" or - proc.cmdline="groupadd sumologic_collector") and - (proc.pname=secureFiles.sh and proc.aname[2]=java)) - -- macro: run_by_yum - condition: ((proc.pname=sh and proc.aname[2]=yum) or - (proc.aname[2]=sh and proc.aname[3]=yum)) - -- macro: run_by_ms_oms - condition: > - (proc.aname[3] startswith omsagent- or - proc.aname[3] startswith scx-) - -- macro: run_by_google_accounts_daemon - condition: > - (proc.aname[1] startswith google_accounts or - proc.aname[2] startswith google_accounts or - proc.aname[3] startswith google_accounts) - -# Chef is similar. 
-- macro: run_by_chef - condition: (proc.aname[2]=chef_command_wr or proc.aname[3]=chef_command_wr or - proc.aname[2]=chef-client or proc.aname[3]=chef-client or - proc.name=chef-client) - -- macro: run_by_adclient - condition: (proc.aname[2]=adclient or proc.aname[3]=adclient or proc.aname[4]=adclient) - -- macro: run_by_centrify - condition: (proc.aname[2]=centrify or proc.aname[3]=centrify or proc.aname[4]=centrify) - -# Also handles running semi-indirectly via scl -- macro: run_by_foreman - condition: > - (user.name=foreman and - ((proc.pname in (rake, ruby, scl) and proc.aname[5] in (tfm-rake,tfm-ruby)) or - (proc.pname=scl and proc.aname[2] in (tfm-rake,tfm-ruby)))) - -- macro: java_running_sdjagent - condition: proc.name=java and proc.cmdline contains sdjagent.jar - -- macro: kubelet_running_loopback - condition: (proc.pname=kubelet and proc.name=loopback) - -- macro: python_mesos_marathon_scripting - condition: (proc.pcmdline startswith "python3 /marathon-lb/marathon_lb.py") - -- macro: splunk_running_forwarder - condition: (proc.pname=splunkd and proc.cmdline startswith "sh -c /opt/splunkforwarder") - -- macro: parent_supervise_running_multilog - condition: (proc.name=multilog and proc.pname=supervise) - -- macro: supervise_writing_status - condition: (proc.name in (supervise,svc) and fd.name startswith "/etc/sb/") - -- macro: pki_realm_writing_realms - condition: (proc.cmdline startswith "bash /usr/local/lib/pki/pki-realm" and fd.name startswith /etc/pki/realms) - -- macro: htpasswd_writing_passwd - condition: (proc.name=htpasswd and fd.name=/etc/nginx/.htpasswd) - -- macro: lvprogs_writing_conf - condition: > - (proc.name in (dmeventd,lvcreate,pvscan,lvs) and - (fd.name startswith /etc/lvm/archive or - fd.name startswith /etc/lvm/backup or - fd.name startswith /etc/lvm/cache)) - -- macro: ovsdb_writing_openvswitch - condition: (proc.name=ovsdb-server and fd.directory=/etc/openvswitch) - -- macro: perl_running_plesk - condition: (proc.cmdline startswith 
"perl /opt/psa/admin/bin/plesk_agent_manager" or - proc.pcmdline startswith "perl /opt/psa/admin/bin/plesk_agent_manager") - -- macro: perl_running_updmap - condition: (proc.cmdline startswith "perl /usr/bin/updmap") - -- macro: perl_running_centrifydc - condition: (proc.cmdline startswith "perl /usr/share/centrifydc") - -- macro: runuser_reading_pam - condition: (proc.name=runuser and fd.directory=/etc/pam.d) - -# CIS Linux Benchmark program -- macro: linux_bench_reading_etc_shadow - condition: ((proc.aname[2]=linux-bench and - proc.name in (awk,cut,grep)) and - (fd.name=/etc/shadow or - fd.directory=/etc/pam.d)) - -- macro: parent_ucf_writing_conf - condition: (proc.pname=ucf and proc.aname[2]=frontend) - -- macro: consul_template_writing_conf - condition: > - ((proc.name=consul-template and fd.name startswith /etc/haproxy) or - (proc.name=reload.sh and proc.aname[2]=consul-template and fd.name startswith /etc/ssl)) - -- macro: countly_writing_nginx_conf - condition: (proc.cmdline startswith "nodejs /opt/countly/bin" and fd.name startswith /etc/nginx) - -- list: ms_oms_binaries - items: [omi.postinst, omsconfig.posti, scx.postinst, omsadmin.sh, omiagent] - -- macro: ms_oms_writing_conf - condition: > - ((proc.name in (omiagent,omsagent,in_heartbeat_r*,omsadmin.sh,PerformInventor,dsc_host) - or proc.pname in (ms_oms_binaries) - or proc.aname[2] in (ms_oms_binaries)) - and (fd.name startswith /etc/opt/omi or fd.name startswith /etc/opt/microsoft/omsagent)) - -- macro: ms_scx_writing_conf - condition: (proc.name in (GetLinuxOS.sh) and fd.name startswith /etc/opt/microsoft/scx) - -- macro: azure_scripts_writing_conf - condition: (proc.pname startswith "bash /var/lib/waagent/" and fd.name startswith /etc/azure) - -- macro: azure_networkwatcher_writing_conf - condition: (proc.name in (NetworkWatcherA) and fd.name=/etc/init.d/AzureNetworkWatcherAgent) - -- macro: couchdb_writing_conf - condition: (proc.name=beam.smp and proc.cmdline contains couchdb and fd.name 
startswith /etc/couchdb) - -- macro: update_texmf_writing_conf - condition: (proc.name=update-texmf and fd.name startswith /etc/texmf) - -- macro: slapadd_writing_conf - condition: (proc.name=slapadd and fd.name startswith /etc/ldap) - -- macro: openldap_writing_conf - condition: (proc.pname=run-openldap.sh and fd.name startswith /etc/openldap) - -- macro: ucpagent_writing_conf - condition: (proc.name=apiserver and container.image.repository=docker/ucp-agent and fd.name=/etc/authorization_config.cfg) - -- macro: iscsi_writing_conf - condition: (proc.name=iscsiadm and fd.name startswith /etc/iscsi) - -- macro: istio_writing_conf - condition: (proc.name=pilot-agent and fd.name startswith /etc/istio) - -- macro: symantec_writing_conf - condition: > - ((proc.name=symcfgd and fd.name startswith /etc/symantec) or - (proc.name=navdefutil and fd.name=/etc/symc-defutils.conf)) - -- macro: liveupdate_writing_conf - condition: (proc.cmdline startswith "java LiveUpdate" and fd.name in (/etc/liveupdate.conf, /etc/Product.Catalog.JavaLiveUpdate)) - -- macro: rancher_agent - condition: (proc.name=agent and container.image.repository contains "rancher/agent") - -- macro: rancher_network_manager - condition: (proc.name=rancher-bridge and container.image.repository contains "rancher/network-manager") - -- macro: sosreport_writing_files - condition: > - (proc.name=urlgrabber-ext- and proc.aname[3]=sosreport and - (fd.name startswith /etc/pkt/nssdb or fd.name startswith /etc/pki/nssdb)) - -- macro: pkgmgmt_progs_writing_pki - condition: > - (proc.name=urlgrabber-ext- and proc.pname in (yum, yum-cron, repoquery) and - (fd.name startswith /etc/pkt/nssdb or fd.name startswith /etc/pki/nssdb)) - -- macro: update_ca_trust_writing_pki - condition: (proc.pname=update-ca-trust and proc.name=trust and fd.name startswith /etc/pki) - -- macro: brandbot_writing_os_release - condition: proc.name=brandbot and fd.name=/etc/os-release - -- macro: selinux_writing_conf - condition: (proc.name in 
(semodule,genhomedircon,sefcontext_comp) and fd.name startswith /etc/selinux) - -- list: veritas_binaries - items: [vxconfigd, sfcache, vxclustadm, vxdctl, vxprint, vxdmpadm, vxdisk, vxdg, vxassist, vxtune] - -- macro: veritas_driver_script - condition: (proc.cmdline startswith "perl /opt/VRTSsfmh/bin/mh_driver.pl") - -- macro: veritas_progs - condition: (proc.name in (veritas_binaries) or veritas_driver_script) - -- macro: veritas_writing_config - condition: (veritas_progs and (fd.name startswith /etc/vx or fd.name startswith /etc/opt/VRTS or fd.name startswith /etc/vom)) - -- macro: nginx_writing_conf - condition: (proc.name in (nginx,nginx-ingress-c,nginx-ingress) and (fd.name startswith /etc/nginx or fd.name startswith /etc/ingress-controller)) - -- macro: nginx_writing_certs - condition: > - (((proc.name=openssl and proc.pname=nginx-launch.sh) or proc.name=nginx-launch.sh) and fd.name startswith /etc/nginx/certs) - -- macro: chef_client_writing_conf - condition: (proc.pcmdline startswith "chef-client /opt/gitlab" and fd.name startswith /etc/gitlab) - -- macro: centrify_writing_krb - condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5) - -- macro: sssd_writing_krb - condition: (proc.name=adcli and proc.aname[2]=sssd and fd.name startswith /etc/krb5) - -- macro: cockpit_writing_conf - condition: > - ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la) - and fd.name startswith /etc/cockpit) - -- macro: ipsec_writing_conf - condition: (proc.name=start-ipsec.sh and fd.directory=/etc/ipsec) - -- macro: exe_running_docker_save - condition: > - proc.name = "exe" - and (proc.cmdline contains "/var/lib/docker" - or proc.cmdline contains "/var/run/docker") - and proc.pname in (dockerd, docker, dockerd-current, docker-current) - -# Ideally we'd have a length check here as well but -# filterchecks don't have operators like len() -- macro: sed_temporary_file - condition: (proc.name=sed and fd.name startswith "/etc/sed") - -- macro: 
python_running_get_pip - condition: (proc.cmdline startswith "python get-pip.py") - -- macro: python_running_ms_oms - condition: (proc.cmdline startswith "python /var/lib/waagent/") - -- macro: gugent_writing_guestagent_log - condition: (proc.name=gugent and fd.name=GuestAgent.log) - -- macro: dse_writing_tmp - condition: (proc.name=dse-entrypoint and fd.name=/root/tmp__) - -- macro: zap_writing_state - condition: (proc.name=java and proc.cmdline contains "jar /zap" and fd.name startswith /root/.ZAP) - -- macro: airflow_writing_state - condition: (proc.name=airflow and fd.name startswith /root/airflow) - -- macro: rpm_writing_root_rpmdb - condition: (proc.name=rpm and fd.directory=/root/.rpmdb) - -- macro: maven_writing_groovy - condition: (proc.name=java and proc.cmdline contains "classpath /usr/local/apache-maven" and fd.name startswith /root/.groovy) - -- macro: chef_writing_conf - condition: (proc.name=chef-client and fd.name startswith /root/.chef) - -- macro: kubectl_writing_state - condition: (proc.name in (kubectl,oc) and fd.name startswith /root/.kube) - -- macro: java_running_cassandra - condition: (proc.name=java and proc.cmdline contains "cassandra.jar") - -- macro: cassandra_writing_state - condition: (java_running_cassandra and fd.directory=/root/.cassandra) - -# Istio -- macro: galley_writing_state - condition: (proc.name=galley and fd.name in (known_istio_files)) - -- list: known_istio_files - items: [/healthready, /healthliveness] - -- macro: calico_writing_state - condition: (proc.name=kube-controller and fd.name startswith /status.json and k8s.pod.name startswith calico) - -- macro: calico_writing_envvars - condition: (proc.name=start_runit and fd.name startswith "/etc/envvars" and container.image.repository endswith "calico/node") - -- list: repository_files - items: [sources.list] - -- list: repository_directories - items: [/etc/apt/sources.list.d, /etc/yum.repos.d, /etc/apt] - -- macro: access_repositories - condition: (fd.directory in 
(repository_directories) or - (fd.name pmatch (repository_directories) and - fd.filename in (repository_files))) - -- macro: modify_repositories - condition: (evt.arg.newpath pmatch (repository_directories)) - -- macro: user_known_update_package_registry - condition: (never_true) - -- rule: Update Package Repository - desc: Detect package repositories get updated - condition: > - ((open_write and access_repositories) or (modify and modify_repositories)) - and not package_mgmt_procs - and not package_mgmt_ancestor_procs - and not exe_running_docker_save - and not user_known_update_package_registry - output: > - Repository files get updated (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline pcmdline=%proc.pcmdline file=%fd.name newpath=%evt.arg.newpath container_id=%container.id image=%container.image.repository) - priority: - NOTICE - tags: [filesystem, mitre_persistence] - -# Users should overwrite this macro to specify conditions under which a -# write under the binary dir is ignored. For example, it may be okay to -# install a binary in the context of a ci/cd build. 
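As the comment above suggests, a CI/CD exception can be carved out by redefining this macro in a local rules file. A hedged sketch, assuming a hypothetical build agent process name:

```yaml
# falco_rules.local.yaml — illustrative only; "ci-build-agent" is a
# placeholder for whatever process legitimately installs binaries.
- macro: user_known_write_below_binary_dir_activities
  condition: (proc.name="ci-build-agent" and fd.directory=/usr/local/bin)
```

With this override in place, writes below `/usr/local/bin` by the named process no longer trigger the `Write below binary dir` rule, while all other writes below the binary directories are still reported.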
-- macro: user_known_write_below_binary_dir_activities
-  condition: (never_true)
-
-- rule: Write below binary dir
-  desc: an attempt to write to any file below a set of binary directories
-  condition: >
-    bin_dir and evt.dir = < and open_write
-    and not package_mgmt_procs
-    and not exe_running_docker_save
-    and not python_running_get_pip
-    and not python_running_ms_oms
-    and not user_known_write_below_binary_dir_activities
-  output: >
-    File below a known binary directory opened for writing (user=%user.name user_loginuid=%user.loginuid
-    command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2] container_id=%container.id image=%container.image.repository)
-  priority: ERROR
-  tags: [filesystem, mitre_persistence]
-
-# If you'd like to generally monitor a wider set of directories on top
-# of the ones covered by the rule Write below binary dir, you can use
-# the following rule and lists.
-
-- list: monitored_directories
-  items: [/boot, /lib, /lib64, /usr/lib, /usr/local/lib, /usr/local/sbin, /usr/local/bin, /root/.ssh]
-
-- macro: user_ssh_directory
-  condition: (fd.name glob '/home/*/.ssh/*')
-
-# google_accounts_(daemon)
-- macro: google_accounts_daemon_writing_ssh
-  condition: (proc.name=google_accounts and user_ssh_directory)
-
-- macro: cloud_init_writing_ssh
-  condition: (proc.name=cloud-init and user_ssh_directory)
-
-- macro: mkinitramfs_writing_boot
-  condition: (proc.pname in (mkinitramfs, update-initramf) and fd.directory=/boot)
-
-- macro: monitored_dir
-  condition: >
-    (fd.directory in (monitored_directories)
-    or user_ssh_directory)
-    and not mkinitramfs_writing_boot
-
-# Add conditions to this macro (probably in a separate file,
-# overwriting this macro) to allow for specific combinations of
-# programs writing below monitored directories.
-#
-# Its default value is an expression that always is false, which
-# becomes true when the "not ..." in the rule is applied.
-- macro: user_known_write_monitored_dir_conditions
-  condition: (never_true)
-
-- rule: Write below monitored dir
-  desc: an attempt to write to any file below a set of monitored directories
-  condition: >
-    evt.dir = < and open_write and monitored_dir
-    and not package_mgmt_procs
-    and not coreos_write_ssh_dir
-    and not exe_running_docker_save
-    and not python_running_get_pip
-    and not python_running_ms_oms
-    and not google_accounts_daemon_writing_ssh
-    and not cloud_init_writing_ssh
-    and not user_known_write_monitored_dir_conditions
-  output: >
-    File below a monitored directory opened for writing (user=%user.name user_loginuid=%user.loginuid
-    command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2] container_id=%container.id image=%container.image.repository)
-  priority: ERROR
-  tags: [filesystem, mitre_persistence]
-
-# This rule is disabled by default as many system management tools
-# like ansible, etc can read these files/paths. Enable it using this macro.
-
-- macro: consider_ssh_reads
-  condition: (never_true)
-
-- macro: user_known_read_ssh_information_activities
-  condition: (never_true)
-
-- rule: Read ssh information
-  desc: Any attempt to read files below ssh directories by non-ssh programs
-  condition: >
-    ((open_read or open_directory) and
-     consider_ssh_reads and
-     (user_ssh_directory or fd.name startswith /root/.ssh) and
-     not user_known_read_ssh_information_activities and
-     not proc.name in (ssh_binaries))
-  output: >
-    ssh-related file/directory read by non-ssh program (user=%user.name user_loginuid=%user.loginuid
-    command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline container_id=%container.id image=%container.image.repository)
-  priority: ERROR
-  tags: [filesystem, mitre_discovery]
-
-- list: safe_etc_dirs
-  items: [/etc/cassandra, /etc/ssl/certs/java, /etc/logstash, /etc/nginx/conf.d, /etc/container_environment, /etc/hrmconfig, /etc/fluent/configs.d]
-
-- macro: fluentd_writing_conf_files
-  condition: (proc.name=start-fluentd and fd.name in (/etc/fluent/fluent.conf, /etc/td-agent/td-agent.conf))
-
-- macro: qualys_writing_conf_files
-  condition: (proc.name=qualys-cloud-ag and fd.name=/etc/qualys/cloud-agent/qagent-log.conf)
-
-- macro: git_writing_nssdb
-  condition: (proc.name=git-remote-http and fd.directory=/etc/pki/nssdb)
-
-- macro: plesk_writing_keys
-  condition: (proc.name in (plesk_binaries) and fd.name startswith /etc/sw/keys)
-
-- macro: plesk_install_writing_apache_conf
-  condition: (proc.cmdline startswith "bash -hB /usr/lib/plesk-9.0/services/webserver.apache configure"
-    and fd.name="/etc/apache2/apache2.conf.tmp")
-
-- macro: plesk_running_mktemp
-  condition: (proc.name=mktemp and proc.aname[3] in (plesk_binaries))
-
-- macro: networkmanager_writing_resolv_conf
-  condition: proc.aname[2]=nm-dispatcher and fd.name=/etc/resolv.conf
-
-- macro: add_shell_writing_shells_tmp
-  condition: (proc.name=add-shell and fd.name=/etc/shells.tmp)
-
-- macro: duply_writing_exclude_files
-  condition: (proc.name=touch and proc.pcmdline startswith "bash /usr/bin/duply" and fd.name startswith "/etc/duply")
-
-- macro: xmlcatalog_writing_files
-  condition: (proc.name=update-xmlcatal and fd.directory=/etc/xml)
-
-- macro: datadog_writing_conf
-  condition: ((proc.cmdline startswith "python /opt/datadog-agent" or
-               proc.cmdline startswith "entrypoint.sh /entrypoint.sh datadog start" or
-               proc.cmdline startswith "agent.py /opt/datadog-agent")
-              and fd.name startswith "/etc/dd-agent")
-
-- macro: rancher_writing_conf
-  condition: ((proc.name in (healthcheck, lb-controller, rancher-dns)) and
-              (container.image.repository contains "rancher/healthcheck" or
-               container.image.repository contains "rancher/lb-service-haproxy" or
-               container.image.repository contains "rancher/dns") and
-              (fd.name startswith "/etc/haproxy" or fd.name startswith "/etc/rancher-dns"))
-
-- macro: rancher_writing_root
-  condition: (proc.name=rancher-metadat and
-              (container.image.repository contains "rancher/metadata" or container.image.repository contains "rancher/lb-service-haproxy") and
-              fd.name startswith "/answers.json")
-
-- macro: checkpoint_writing_state
-  condition: (proc.name=checkpoint and
-              container.image.repository contains "coreos/pod-checkpointer" and
-              fd.name startswith "/etc/kubernetes")
-
-- macro: jboss_in_container_writing_passwd
-  condition: >
-    ((proc.cmdline="run-java.sh /opt/jboss/container/java/run/run-java.sh"
-      or proc.cmdline="run-java.sh /opt/run-java/run-java.sh")
-     and container
-     and fd.name=/etc/passwd)
-
-- macro: curl_writing_pki_db
-  condition: (proc.name=curl and fd.directory=/etc/pki/nssdb)
-
-- macro: haproxy_writing_conf
-  condition: ((proc.name in (update-haproxy-,haproxy_reload.)
-               or proc.pname in (update-haproxy-,haproxy_reload,haproxy_reload.))
-               and (fd.name=/etc/openvpn/client.map or fd.name startswith /etc/haproxy))
-
-- macro: java_writing_conf
-  condition: (proc.name=java and fd.name=/etc/.java/.systemPrefs/.system.lock)
-
-- macro: rabbitmq_writing_conf
-  condition: (proc.name=rabbitmq-server and fd.directory=/etc/rabbitmq)
-
-- macro: rook_writing_conf
-  condition: (proc.name=toolbox.sh and container.image.repository=rook/toolbox
-    and fd.directory=/etc/ceph)
-
-- macro: httpd_writing_conf_logs
-  condition: (proc.name=httpd and fd.name startswith /etc/httpd/)
-
-- macro: mysql_writing_conf
-  condition: >
-    ((proc.name in (start-mysql.sh, run-mysqld) or proc.pname=start-mysql.sh) and
-     (fd.name startswith /etc/mysql or fd.directory=/etc/my.cnf.d))
-
-- macro: redis_writing_conf
-  condition: >
-    (proc.name in (run-redis, redis-launcher.) and (fd.name=/etc/redis.conf or fd.name startswith /etc/redis))
-
-- macro: openvpn_writing_conf
-  condition: (proc.name in (openvpn,openvpn-entrypo) and fd.name startswith /etc/openvpn)
-
-- macro: php_handlers_writing_conf
-  condition: (proc.name=php_handlers_co and fd.name=/etc/psa/php_versions.json)
-
-- macro: sed_writing_temp_file
-  condition: >
-    ((proc.aname[3]=cron_start.sh and fd.name startswith /etc/security/sed) or
-     (proc.name=sed and (fd.name startswith /etc/apt/sources.list.d/sed or
-                         fd.name startswith /etc/apt/sed or
-                         fd.name startswith /etc/apt/apt.conf.d/sed)))
-
-- macro: cron_start_writing_pam_env
-  condition: (proc.cmdline="bash /usr/sbin/start-cron" and fd.name=/etc/security/pam_env.conf)
-
-# In some cases dpkg-reconfigur runs commands that modify /etc. Not
-# putting the full set of package management programs yet.
-- macro: dpkg_scripting
-  condition: (proc.aname[2] in (dpkg-reconfigur, dpkg-preconfigu))
-
-- macro: ufw_writing_conf
-  condition: (proc.name=ufw and fd.directory=/etc/ufw)
-
-- macro: calico_writing_conf
-  condition: >
-    (((proc.name = calico-node) or
-      (container.image.repository=gcr.io/projectcalico-org/node and proc.name in (start_runit, cp)) or
-      (container.image.repository=gcr.io/projectcalico-org/cni and proc.name=sed))
-     and fd.name startswith /etc/calico)
-
-- macro: prometheus_conf_writing_conf
-  condition: (proc.name=prometheus-conf and fd.name startswith /etc/prometheus/config_out)
-
-- macro: openshift_writing_conf
-  condition: (proc.name=oc and fd.name startswith /etc/origin/node)
-
-- macro: keepalived_writing_conf
-  condition: (proc.name in (keepalived, kube-keepalived) and fd.name=/etc/keepalived/keepalived.conf)
-
-- macro: etcd_manager_updating_dns
-  condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
-
-- macro: automount_using_mtab
-  condition: (proc.pname = automount and fd.name startswith /etc/mtab)
-
-- macro: mcafee_writing_cma_d
-  condition: (proc.name=macompatsvc and fd.directory=/etc/cma.d)
-
-- macro: avinetworks_supervisor_writing_ssh
-  condition: >
-    (proc.cmdline="se_supervisor.p /opt/avi/scripts/se_supervisor.py -d" and
-     (fd.name startswith /etc/ssh/known_host_ or
-      fd.name startswith /etc/ssh/ssh_monitor_config_ or
-      fd.name startswith /etc/ssh/ssh_config_))
-
-- macro: multipath_writing_conf
-  condition: (proc.name = multipath and fd.name startswith /etc/multipath/)
-
-# Add conditions to this macro (probably in a separate file,
-# overwriting this macro) to allow for specific combinations of
-# programs writing below specific directories below
-# /etc. fluentd_writing_conf_files is a good example to follow, as it
-# specifies both the program doing the writing as well as the specific
-# files it is allowed to modify.
-#
-# In this file, it just takes one of the programs in the base macro
-# and repeats it.
-
-- macro: user_known_write_etc_conditions
-  condition: proc.name=confd
-
-# This is a placeholder for user to extend the whitelist for write below etc rule
-- macro: user_known_write_below_etc_activities
-  condition: (never_true)
-
-- macro: write_etc_common
-  condition: >
-    etc_dir and evt.dir = < and open_write
-    and proc_name_exists
-    and not proc.name in (passwd_binaries, shadowutils_binaries, sysdigcloud_binaries,
-      package_mgmt_binaries, ssl_mgmt_binaries, dhcp_binaries,
-      dev_creation_binaries, shell_mgmt_binaries,
-      mail_config_binaries,
-      sshkit_script_binaries,
-      ldconfig.real, ldconfig, confd, gpg, insserv,
-      apparmor_parser, update-mime, tzdata.config, tzdata.postinst,
-      systemd, systemd-machine, systemd-sysuser,
-      debconf-show, rollerd, bind9.postinst, sv,
-      gen_resolvconf., update-ca-certi, certbot, runsv,
-      qualys-cloud-ag, locales.postins, nomachine_binaries,
-      adclient, certutil, crlutil, pam-auth-update, parallels_insta,
-      openshift-launc, update-rc.d, puppet)
-    and not (container and proc.cmdline in ("cp /run/secrets/kubernetes.io/serviceaccount/ca.crt /etc/pki/ca-trust/source/anchors/openshift-ca.crt"))
-    and not proc.pname in (sysdigcloud_binaries, mail_config_binaries, hddtemp.postins, sshkit_script_binaries, locales.postins, deb_binaries, dhcp_binaries)
-    and not fd.name pmatch (safe_etc_dirs)
-    and not fd.name in (/etc/container_environment.sh, /etc/container_environment.json, /etc/motd, /etc/motd.svc)
-    and not sed_temporary_file
-    and not exe_running_docker_save
-    and not ansible_running_python
-    and not python_running_denyhosts
-    and not fluentd_writing_conf_files
-    and not user_known_write_etc_conditions
-    and not run_by_centrify
-    and not run_by_adclient
-    and not qualys_writing_conf_files
-    and not git_writing_nssdb
-    and not plesk_writing_keys
-    and not plesk_install_writing_apache_conf
-    and not plesk_running_mktemp
-    and not networkmanager_writing_resolv_conf
-    and not run_by_chef
-    and not add_shell_writing_shells_tmp
-    and not duply_writing_exclude_files
-    and not xmlcatalog_writing_files
-    and not parent_supervise_running_multilog
-    and not supervise_writing_status
-    and not pki_realm_writing_realms
-    and not htpasswd_writing_passwd
-    and not lvprogs_writing_conf
-    and not ovsdb_writing_openvswitch
-    and not datadog_writing_conf
-    and not curl_writing_pki_db
-    and not haproxy_writing_conf
-    and not java_writing_conf
-    and not dpkg_scripting
-    and not parent_ucf_writing_conf
-    and not rabbitmq_writing_conf
-    and not rook_writing_conf
-    and not php_handlers_writing_conf
-    and not sed_writing_temp_file
-    and not cron_start_writing_pam_env
-    and not httpd_writing_conf_logs
-    and not mysql_writing_conf
-    and not openvpn_writing_conf
-    and not consul_template_writing_conf
-    and not countly_writing_nginx_conf
-    and not ms_oms_writing_conf
-    and not ms_scx_writing_conf
-    and not azure_scripts_writing_conf
-    and not azure_networkwatcher_writing_conf
-    and not couchdb_writing_conf
-    and not update_texmf_writing_conf
-    and not slapadd_writing_conf
-    and not symantec_writing_conf
-    and not liveupdate_writing_conf
-    and not sosreport_writing_files
-    and not selinux_writing_conf
-    and not veritas_writing_config
-    and not nginx_writing_conf
-    and not nginx_writing_certs
-    and not chef_client_writing_conf
-    and not centrify_writing_krb
-    and not sssd_writing_krb
-    and not cockpit_writing_conf
-    and not ipsec_writing_conf
-    and not httpd_writing_ssl_conf
-    and not userhelper_writing_etc_security
-    and not pkgmgmt_progs_writing_pki
-    and not update_ca_trust_writing_pki
-    and not brandbot_writing_os_release
-    and not redis_writing_conf
-    and not openldap_writing_conf
-    and not ucpagent_writing_conf
-    and not iscsi_writing_conf
-    and not istio_writing_conf
-    and not ufw_writing_conf
-    and not calico_writing_conf
-    and not calico_writing_envvars
-    and not prometheus_conf_writing_conf
-    and not openshift_writing_conf
-    and not keepalived_writing_conf
-    and not rancher_writing_conf
-    and not checkpoint_writing_state
-    and not jboss_in_container_writing_passwd
-    and not etcd_manager_updating_dns
-    and not user_known_write_below_etc_activities
-    and not automount_using_mtab
-    and not mcafee_writing_cma_d
-    and not avinetworks_supervisor_writing_ssh
-    and not multipath_writing_conf
-
-- rule: Write below etc
-  desc: an attempt to write to any file below /etc
-  condition: write_etc_common
-  output: "File below /etc opened for writing (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent=%proc.pname pcmdline=%proc.pcmdline file=%fd.name program=%proc.name gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] container_id=%container.id image=%container.image.repository)"
-  priority: ERROR
-  tags: [filesystem, mitre_persistence]
-
-- list: known_root_files
-  items: [/root/.monit.state, /root/.auth_tokens, /root/.bash_history, /root/.ash_history, /root/.aws/credentials,
-          /root/.viminfo.tmp, /root/.lesshst, /root/.bzr.log, /root/.gitconfig.lock, /root/.babel.json, /root/.localstack,
-          /root/.node_repl_history, /root/.mongorc.js, /root/.dbshell, /root/.augeas/history, /root/.rnd, /root/.wget-hsts, /health, /exec.fifo]
-
-- list: known_root_directories
-  items: [/root/.oracle_jre_usage, /root/.ssh, /root/.subversion, /root/.nami]
-
-- macro: known_root_conditions
-  condition: (fd.name startswith /root/orcexec.
-              or fd.name startswith /root/.m2
-              or fd.name startswith /root/.npm
-              or fd.name startswith /root/.pki
-              or fd.name startswith /root/.ivy2
-              or fd.name startswith /root/.config/Cypress
-              or fd.name startswith /root/.config/pulse
-              or fd.name startswith /root/.config/configstore
-              or fd.name startswith /root/jenkins/workspace
-              or fd.name startswith /root/.jenkins
-              or fd.name startswith /root/.cache
-              or fd.name startswith /root/.sbt
-              or fd.name startswith /root/.java
-              or fd.name startswith /root/.glide
-              or fd.name startswith /root/.sonar
-              or fd.name startswith /root/.v8flag
-              or fd.name startswith /root/infaagent
-              or fd.name startswith /root/.local/lib/python
-              or fd.name startswith /root/.pm2
-              or fd.name startswith /root/.gnupg
-              or fd.name startswith /root/.pgpass
-              or fd.name startswith /root/.theano
-              or fd.name startswith /root/.gradle
-              or fd.name startswith /root/.android
-              or fd.name startswith /root/.ansible
-              or fd.name startswith /root/.crashlytics
-              or fd.name startswith /root/.dbus
-              or fd.name startswith /root/.composer
-              or fd.name startswith /root/.gconf
-              or fd.name startswith /root/.nv
-              or fd.name startswith /root/.local/share/jupyter
-              or fd.name startswith /root/oradiag_root
-              or fd.name startswith /root/workspace
-              or fd.name startswith /root/jvm
-              or fd.name startswith /root/.node-gyp)
-
-# Add conditions to this macro (probably in a separate file,
-# overwriting this macro) to allow for specific combinations of
-# programs writing below specific directories below
-# / or /root.
-#
-# In this file, it just takes one of the condition in the base macro
-# and repeats it.
-- macro: user_known_write_root_conditions
-  condition: fd.name=/root/.bash_history
-
-# This is a placeholder for user to extend the whitelist for write below root rule
-- macro: user_known_write_below_root_activities
-  condition: (never_true)
-
-- macro: runc_writing_exec_fifo
-  condition: (proc.cmdline="runc:[1:CHILD] init" and fd.name=/exec.fifo)
-
-- macro: runc_writing_var_lib_docker
-  condition: (proc.cmdline="runc:[1:CHILD] init" and evt.arg.filename startswith /var/lib/docker)
-
-- macro: mysqlsh_writing_state
-  condition: (proc.name=mysqlsh and fd.directory=/root/.mysqlsh)
-
-- rule: Write below root
-  desc: an attempt to write to any file directly below / or /root
-  condition: >
-    root_dir and evt.dir = < and open_write
-    and proc_name_exists
-    and not fd.name in (known_root_files)
-    and not fd.directory pmatch (known_root_directories)
-    and not exe_running_docker_save
-    and not gugent_writing_guestagent_log
-    and not dse_writing_tmp
-    and not zap_writing_state
-    and not airflow_writing_state
-    and not rpm_writing_root_rpmdb
-    and not maven_writing_groovy
-    and not chef_writing_conf
-    and not kubectl_writing_state
-    and not cassandra_writing_state
-    and not galley_writing_state
-    and not calico_writing_state
-    and not rancher_writing_root
-    and not runc_writing_exec_fifo
-    and not mysqlsh_writing_state
-    and not known_root_conditions
-    and not user_known_write_root_conditions
-    and not user_known_write_below_root_activities
-  output: "File below / or /root opened for writing (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent=%proc.pname file=%fd.name program=%proc.name container_id=%container.id image=%container.image.repository)"
-  priority: ERROR
-  tags: [filesystem, mitre_persistence]
-
-- macro: cmp_cp_by_passwd
-  condition: proc.name in (cmp, cp) and proc.pname in (passwd, run-parts)
-
-- macro: user_known_read_sensitive_files_activities
-  condition: (never_true)
-
-- rule: Read sensitive file trusted after startup
-  desc: >
-    an attempt to read any sensitive file (e.g. files containing user/password/authentication
-    information) by a trusted program after startup. Trusted programs might read these files
-    at startup to load initial state, but not afterwards.
-  condition: sensitive_files and open_read and server_procs and not proc_is_new and proc.name!="sshd" and not user_known_read_sensitive_files_activities
-  output: >
-    Sensitive file opened for reading by trusted program after startup (user=%user.name user_loginuid=%user.loginuid
-    command=%proc.cmdline parent=%proc.pname file=%fd.name parent=%proc.pname gparent=%proc.aname[2] container_id=%container.id image=%container.image.repository)
-  priority: WARNING
-  tags: [filesystem, mitre_credential_access]
-
-- list: read_sensitive_file_binaries
-  items: [
-    iptables, ps, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, sshd,
-    vsftpd, systemd, mysql_install_d, psql, screen, debconf-show, sa-update,
-    pam-auth-update, pam-config, /usr/sbin/spamd, polkit-agent-he, lsattr, file, sosreport,
-    scxcimservera, adclient, rtvscand, cockpit-session, userhelper, ossec-syscheckd
-    ]
-
-# Add conditions to this macro (probably in a separate file,
-# overwriting this macro) to allow for specific combinations of
-# programs accessing sensitive files.
-# fluentd_writing_conf_files is a good example to follow, as it
-# specifies both the program doing the writing as well as the specific
-# files it is allowed to modify.
-#
-# In this file, it just takes one of the macros in the base rule
-# and repeats it.
-
-- macro: user_read_sensitive_file_conditions
-  condition: cmp_cp_by_passwd
-
-- list: read_sensitive_file_images
-  items: []
-
-- macro: user_read_sensitive_file_containers
-  condition: (container and container.image.repository in (read_sensitive_file_images))
-
-# This macro detects man-db postinst, see https://salsa.debian.org/debian/man-db/-/blob/master/debian/postinst
-# The rule "Read sensitive file untrusted" use this macro to avoid FPs.
-- macro: mandb_postinst
-  condition: >
-    (proc.name=perl and proc.args startswith "-e" and
-     proc.args contains "@pwd = getpwnam(" and
-     proc.args contains "exec " and
-     proc.args contains "/usr/bin/mandb")
-
-- rule: Read sensitive file untrusted
-  desc: >
-    an attempt to read any sensitive file (e.g. files containing user/password/authentication
-    information). Exceptions are made for known trusted programs.
-  condition: >
-    sensitive_files and open_read
-    and proc_name_exists
-    and not proc.name in (user_mgmt_binaries, userexec_binaries, package_mgmt_binaries,
-      cron_binaries, read_sensitive_file_binaries, shell_binaries, hids_binaries,
-      vpn_binaries, mail_config_binaries, nomachine_binaries, sshkit_script_binaries,
-      in.proftpd, mandb, salt-minion, postgres_mgmt_binaries,
-      google_oslogin_
-      )
-    and not cmp_cp_by_passwd
-    and not ansible_running_python
-    and not run_by_qualys
-    and not run_by_chef
-    and not run_by_google_accounts_daemon
-    and not user_read_sensitive_file_conditions
-    and not mandb_postinst
-    and not perl_running_plesk
-    and not perl_running_updmap
-    and not veritas_driver_script
-    and not perl_running_centrifydc
-    and not runuser_reading_pam
-    and not linux_bench_reading_etc_shadow
-    and not user_known_read_sensitive_files_activities
-    and not user_read_sensitive_file_containers
-  output: >
-    Sensitive file opened for reading by non-trusted program (user=%user.name user_loginuid=%user.loginuid program=%proc.name
-    command=%proc.cmdline file=%fd.name parent=%proc.pname gparent=%proc.aname[2]
-    ggparent=%proc.aname[3] gggparent=%proc.aname[4] container_id=%container.id image=%container.image.repository)
-  priority: WARNING
-  tags: [filesystem, mitre_credential_access, mitre_discovery]
-
-- macro: amazon_linux_running_python_yum
-  condition: >
-    (proc.name = python and
-     proc.pcmdline = "python -m amazon_linux_extras system_motd" and
-     proc.cmdline startswith "python -c import yum;")
-
-- macro: user_known_write_rpm_database_activities
-  condition: (never_true)
-
-# Only let rpm-related programs write to the rpm database
-- rule: Write below rpm database
-  desc: an attempt to write to the rpm database by any non-rpm related program
-  condition: >
-    fd.name startswith /var/lib/rpm and open_write
-    and not rpm_procs
-    and not ansible_running_python
-    and not python_running_chef
-    and not exe_running_docker_save
-    and not amazon_linux_running_python_yum
-    and not user_known_write_rpm_database_activities
-  output: "Rpm database opened for writing by a non-rpm program (command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline container_id=%container.id image=%container.image.repository)"
-  priority: ERROR
-  tags: [filesystem, software_mgmt, mitre_persistence]
-
-- macro: postgres_running_wal_e
-  condition: (proc.pname=postgres and proc.cmdline startswith "sh -c envdir /etc/wal-e.d/env /usr/local/bin/wal-e")
-
-- macro: redis_running_prepost_scripts
-  condition: (proc.aname[2]=redis-server and (proc.cmdline contains "redis-server.post-up.d" or proc.cmdline contains "redis-server.pre-up.d"))
-
-- macro: rabbitmq_running_scripts
-  condition: >
-    (proc.pname=beam.smp and
-     (proc.cmdline startswith "sh -c exec ps" or
-      proc.cmdline startswith "sh -c exec inet_gethost" or
-      proc.cmdline= "sh -s unix:cmd" or
-      proc.cmdline= "sh -c exec /bin/sh -s unix:cmd 2>&1"))
-
-- macro: rabbitmqctl_running_scripts
-  condition: (proc.aname[2]=rabbitmqctl and proc.cmdline startswith "sh -c ")
-
-- macro: run_by_appdynamics
-  condition: (proc.pname=java and
-              proc.pcmdline startswith "java -jar -Dappdynamics")
-
-- macro: user_known_db_spawned_processes
-  condition: (never_true)
-
-- rule: DB program spawned process
-  desc: >
-    a database-server related program spawned a new process other than itself.
-    This shouldn\'t occur and is a follow on from some SQL injection attacks.
-  condition: >
-    proc.pname in (db_server_binaries)
-    and spawned_process
-    and not proc.name in (db_server_binaries)
-    and not postgres_running_wal_e
-    and not user_known_db_spawned_processes
-  output: >
-    Database-related program spawned process other than itself (user=%user.name user_loginuid=%user.loginuid
-    program=%proc.cmdline parent=%proc.pname container_id=%container.id image=%container.image.repository)
-  priority: NOTICE
-  tags: [process, database, mitre_execution]
-
-- macro: user_known_modify_bin_dir_activities
-  condition: (never_true)
-
-- rule: Modify binary dirs
-  desc: an attempt to modify any file below a set of binary directories.
-  condition: bin_dir_rename and modify and not package_mgmt_procs and not exe_running_docker_save and not user_known_modify_bin_dir_activities
-  output: >
-    File below known binary directory renamed/removed (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline
-    pcmdline=%proc.pcmdline operation=%evt.type file=%fd.name %evt.args container_id=%container.id image=%container.image.repository)
-  priority: ERROR
-  tags: [filesystem, mitre_persistence]
-
-- macro: user_known_mkdir_bin_dir_activities
-  condition: (never_true)
-
-- rule: Mkdir binary dirs
-  desc: an attempt to create a directory below a set of binary directories.
-  condition: >
-    mkdir
-    and bin_dir_mkdir
-    and not package_mgmt_procs
-    and not user_known_mkdir_bin_dir_activities
-    and not exe_running_docker_save
-  output: >
-    Directory below known binary directory created (user=%user.name user_loginuid=%user.loginuid
-    command=%proc.cmdline directory=%evt.arg.path container_id=%container.id image=%container.image.repository)
-  priority: ERROR
-  tags: [filesystem, mitre_persistence]
-
-# This list allows for easy additions to the set of commands allowed
-# to change thread namespace without having to copy and override the
-# entire change thread namespace rule.
-- list: user_known_change_thread_namespace_binaries
-  items: [crio, multus]
-
-- macro: user_known_change_thread_namespace_activities
-  condition: (never_true)
-
-- list: network_plugin_binaries
-  items: [aws-cni, azure-vnet]
-
-- macro: calico_node
-  condition: (container.image.repository endswith calico/node and proc.name=calico-node)
-
-- macro: weaveworks_scope
-  condition: (container.image.repository endswith weaveworks/scope and proc.name=scope)
-
-- rule: Change thread namespace
-  desc: >
-    an attempt to change a program/thread\'s namespace (commonly done
-    as a part of creating a container) by calling setns.
-  condition: >
-    evt.type=setns and evt.dir=<
-    and proc_name_exists
-    and not (container.id=host and proc.name in (docker_binaries, k8s_binaries, lxd_binaries, nsenter))
-    and not proc.name in (sysdigcloud_binaries, sysdig, calico, oci-umount, cilium-cni, network_plugin_binaries)
-    and not proc.name in (user_known_change_thread_namespace_binaries)
-    and not proc.name startswith "runc"
-    and not proc.cmdline startswith "containerd"
-    and not proc.pname in (sysdigcloud_binaries, hyperkube, kubelet, protokube, dockerd, tini, aws)
-    and not java_running_sdjagent
-    and not kubelet_running_loopback
-    and not rancher_agent
-    and not rancher_network_manager
-    and not calico_node
-    and not weaveworks_scope
-    and not user_known_change_thread_namespace_activities
-  enabled: false
-  output: >
-    Namespace change (setns) by unexpected program (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline
-    parent=%proc.pname %container.info container_id=%container.id image=%container.image.repository:%container.image.tag)
-  priority: NOTICE
-  tags: [process, mitre_privilege_escalation, mitre_lateral_movement]
-
-# The binaries in this list and their descendents are *not* allowed
-# spawn shells. This includes the binaries spawning shells directly as
-# well as indirectly. For example, apache -> php/perl for
-# mod_{php,perl} -> some shell is also not allowed, because the shell
-# has apache as an ancestor.
-
-- list: protected_shell_spawning_binaries
-  items: [
-    http_server_binaries, db_server_binaries, nosql_server_binaries, mail_binaries,
-    fluentd, flanneld, splunkd, consul, smbd, runsv, PM2
-    ]
-
-- macro: parent_java_running_zookeeper
-  condition: (proc.pname=java and proc.pcmdline contains org.apache.zookeeper.server)
-
-- macro: parent_java_running_kafka
-  condition: (proc.pname=java and proc.pcmdline contains kafka.Kafka)
-
-- macro: parent_java_running_elasticsearch
-  condition: (proc.pname=java and proc.pcmdline contains org.elasticsearch.bootstrap.Elasticsearch)
-
-- macro: parent_java_running_activemq
-  condition: (proc.pname=java and proc.pcmdline contains activemq.jar)
-
-- macro: parent_java_running_cassandra
-  condition: (proc.pname=java and (proc.pcmdline contains "-Dcassandra.config.loader" or proc.pcmdline contains org.apache.cassandra.service.CassandraDaemon))
-
-- macro: parent_java_running_jboss_wildfly
-  condition: (proc.pname=java and proc.pcmdline contains org.jboss)
-
-- macro: parent_java_running_glassfish
-  condition: (proc.pname=java and proc.pcmdline contains com.sun.enterprise.glassfish)
-
-- macro: parent_java_running_hadoop
-  condition: (proc.pname=java and proc.pcmdline contains org.apache.hadoop)
-
-- macro: parent_java_running_datastax
-  condition: (proc.pname=java and proc.pcmdline contains com.datastax)
-
-- macro: nginx_starting_nginx
-  condition: (proc.pname=nginx and proc.cmdline contains "/usr/sbin/nginx -c /etc/nginx/nginx.conf")
-
-- macro: nginx_running_aws_s3_cp
-  condition: (proc.pname=nginx and proc.cmdline startswith "sh -c /usr/local/bin/aws s3 cp")
-
-- macro: consul_running_net_scripts
-  condition: (proc.pname=consul and (proc.cmdline startswith "sh -c curl" or proc.cmdline startswith "sh -c nc"))
-
-- macro: consul_running_alert_checks
-  condition: (proc.pname=consul and proc.cmdline startswith "sh -c /bin/consul-alerts")
-
-- macro: serf_script
-  condition: (proc.cmdline startswith "sh -c serf")
-
-- macro: check_process_status
-  condition: (proc.cmdline startswith "sh -c kill -0 ")
-
-# In some cases, you may want to consider node processes run directly
-# in containers as protected shell spawners. Examples include using
-# pm2-docker or pm2 start some-app.js --no-daemon-mode as the direct
-# entrypoint of the container, and when the node app is a long-lived
-# server using something like express.
-#
-# However, there are other uses of node related to build pipelines for
-# which node is not really a server but instead a general scripting
-# tool. In these cases, shells are very likely and in these cases you
-# don't want to consider node processes protected shell spawners.
-#
-# We have to choose one of these cases, so we consider node processes
-# as unprotected by default. If you want to consider any node process
-# run in a container as a protected shell spawner, override the below
-# macro to remove the "never_true" clause, which allows it to take effect.
-- macro: possibly_node_in_container
-  condition: (never_true and (proc.pname=node and proc.aname[3]=docker-containe))
-
-# Similarly, you may want to consider any shell spawned by apache
-# tomcat as suspect. The famous apache struts attack (CVE-2017-5638)
-# could be exploited to do things like spawn shells.
-#
-# However, many applications *do* use tomcat to run arbitrary shells,
-# as a part of build pipelines, etc.
-#
-# Like for node, we make this case opt-in.
-- macro: possibly_parent_java_running_tomcat - condition: (never_true and proc.pname=java and proc.pcmdline contains org.apache.catalina.startup.Bootstrap) - -- macro: protected_shell_spawner - condition: > - (proc.aname in (protected_shell_spawning_binaries) - or parent_java_running_zookeeper - or parent_java_running_kafka - or parent_java_running_elasticsearch - or parent_java_running_activemq - or parent_java_running_cassandra - or parent_java_running_jboss_wildfly - or parent_java_running_glassfish - or parent_java_running_hadoop - or parent_java_running_datastax - or possibly_parent_java_running_tomcat - or possibly_node_in_container) - -- list: mesos_shell_binaries - items: [mesos-docker-ex, mesos-slave, mesos-health-ch] - -# Note that runsv is both in protected_shell_spawner and the -# exclusions by pname. This means that runsv can itself spawn shells -# (the ./run and ./finish scripts), but the processes runsv can not -# spawn shells. -- rule: Run shell untrusted - desc: an attempt to spawn a shell below a non-shell application. Specific applications are monitored. 
- condition: > - spawned_process - and shell_procs - and proc.pname exists - and protected_shell_spawner - and not proc.pname in (shell_binaries, gitlab_binaries, cron_binaries, user_known_shell_spawn_binaries, - needrestart_binaries, - mesos_shell_binaries, - erl_child_setup, exechealthz, - PM2, PassengerWatchd, c_rehash, svlogd, logrotate, hhvm, serf, - lb-controller, nvidia-installe, runsv, statsite, erlexec, calico-node, - "puma reactor") - and not proc.cmdline in (known_shell_spawn_cmdlines) - and not proc.aname in (unicorn_launche) - and not consul_running_net_scripts - and not consul_running_alert_checks - and not nginx_starting_nginx - and not nginx_running_aws_s3_cp - and not run_by_package_mgmt_binaries - and not serf_script - and not check_process_status - and not run_by_foreman - and not python_mesos_marathon_scripting - and not splunk_running_forwarder - and not postgres_running_wal_e - and not redis_running_prepost_scripts - and not rabbitmq_running_scripts - and not rabbitmqctl_running_scripts - and not run_by_appdynamics - and not user_shell_container_exclusions - output: > - Shell spawned by untrusted binary (user=%user.name user_loginuid=%user.loginuid shell=%proc.name parent=%proc.pname - cmdline=%proc.cmdline pcmdline=%proc.pcmdline gparent=%proc.aname[2] ggparent=%proc.aname[3] - aname[4]=%proc.aname[4] aname[5]=%proc.aname[5] aname[6]=%proc.aname[6] aname[7]=%proc.aname[7] container_id=%container.id image=%container.image.repository) - priority: DEBUG - tags: [shell, mitre_execution] - -- macro: allowed_openshift_registry_root - condition: > - (container.image.repository startswith openshift3/ or - container.image.repository startswith registry.redhat.io/openshift3/ or - container.image.repository startswith registry.access.redhat.com/openshift3/) - -# Source: https://docs.openshift.com/enterprise/3.2/install_config/install/disconnected_install.html -- macro: openshift_image - condition: > - (allowed_openshift_registry_root and - 
(container.image.repository endswith /logging-deployment or - container.image.repository endswith /logging-elasticsearch or - container.image.repository endswith /logging-kibana or - container.image.repository endswith /logging-fluentd or - container.image.repository endswith /logging-auth-proxy or - container.image.repository endswith /metrics-deployer or - container.image.repository endswith /metrics-hawkular-metrics or - container.image.repository endswith /metrics-cassandra or - container.image.repository endswith /metrics-heapster or - container.image.repository endswith /ose-haproxy-router or - container.image.repository endswith /ose-deployer or - container.image.repository endswith /ose-sti-builder or - container.image.repository endswith /ose-docker-builder or - container.image.repository endswith /ose-pod or - container.image.repository endswith /ose-node or - container.image.repository endswith /ose-docker-registry or - container.image.repository endswith /prometheus-node-exporter or - container.image.repository endswith /image-inspector)) - -# https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html -# official AWS EKS registry list. AWS has different ECR repo per region -- macro: allowed_aws_ecr_registry_root_for_eks - condition: > - (container.image.repository startswith "602401143452.dkr.ecr" or - container.image.repository startswith "877085696533.dkr.ecr" or - container.image.repository startswith "800184023465.dkr.ecr" or - container.image.repository startswith "918309763551.dkr.ecr" or - container.image.repository startswith "961992271922.dkr.ecr" or - container.image.repository startswith "590381155156.dkr.ecr" or - container.image.repository startswith "558608220178.dkr.ecr" or - container.image.repository startswith "151742754352.dkr.ecr" or - container.image.repository startswith "013241004608.dkr.ecr") - - -- macro: aws_eks_core_images - condition: > - (allowed_aws_ecr_registry_root_for_eks and - (container.image.repository endswith ".amazonaws.com/amazon-k8s-cni" or - container.image.repository endswith ".amazonaws.com/eks/kube-proxy")) - - -- macro: aws_eks_image_sensitive_mount - condition: > - (allowed_aws_ecr_registry_root_for_eks and container.image.repository endswith ".amazonaws.com/amazon-k8s-cni") - -# These images are allowed both to run with --privileged and to mount -# sensitive paths from the host filesystem. -# -# NOTE: This list is only provided for backwards compatibility with -# older local falco rules files that may have been appending to -# trusted_images. To make customizations, it's better to add images to -# either privileged_images or falco_sensitive_mount_images. -- list: trusted_images - items: [] - -# Add conditions to this macro (probably in a separate file, -# overwriting this macro) to specify additional containers that are -# trusted and therefore allowed to run privileged *and* with sensitive -# mounts. -# -# Like trusted_images, this is deprecated in favor of -# user_privileged_containers and user_sensitive_mount_containers and -# is only provided for backwards compatibility.
-# -# In this file, it just takes one of the images in trusted_containers -# and repeats it. -- macro: user_trusted_containers - condition: (never_true) - -- list: sematext_images - items: [docker.io/sematext/sematext-agent-docker, docker.io/sematext/agent, docker.io/sematext/logagent, - registry.access.redhat.com/sematext/sematext-agent-docker, - registry.access.redhat.com/sematext/agent, - registry.access.redhat.com/sematext/logagent] - -# These container images are allowed to run with --privileged and full set of capabilities -- list: falco_privileged_images - items: [ - docker.io/calico/node, - calico/node, - docker.io/cloudnativelabs/kube-router, - docker.io/docker/ucp-agent, - docker.io/falcosecurity/falco, - docker.io/mesosphere/mesos-slave, - docker.io/rook/toolbox, - docker.io/sysdig/sysdig, - falcosecurity/falco, - gcr.io/google_containers/kube-proxy, - gcr.io/google-containers/startup-script, - gcr.io/projectcalico-org/node, - gke.gcr.io/kube-proxy, - gke.gcr.io/gke-metadata-server, - gke.gcr.io/netd-amd64, - gke.gcr.io/watcher-daemonset, - gcr.io/google-containers/prometheus-to-sd, - k8s.gcr.io/ip-masq-agent-amd64, - k8s.gcr.io/kube-proxy, - k8s.gcr.io/prometheus-to-sd, - public.ecr.aws/falcosecurity/falco, - quay.io/calico/node, - sysdig/sysdig, - sematext_images - ] - -- macro: falco_privileged_containers - condition: (openshift_image or - user_trusted_containers or - aws_eks_core_images or - container.image.repository in (trusted_images) or - container.image.repository in (falco_privileged_images) or - container.image.repository startswith istio/proxy_ or - container.image.repository startswith quay.io/sysdig/) - -# Add conditions to this macro (probably in a separate file, -# overwriting this macro) to specify additional containers that are -# allowed to run privileged -# -# In this file, it just takes one of the images in falco_privileged_images -# and repeats it. 
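-# For example, a local/user rules file could allow one additional -# privileged image like this (the image name below is purely -# illustrative, not part of the default rules): -# -# - macro: user_privileged_containers -#   condition: (container.image.repository = "example.com/infra/privileged-agent")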
-- macro: user_privileged_containers - condition: (never_true) - -# These container images are allowed to mount sensitive paths from the -# host filesystem. -- list: falco_sensitive_mount_images - items: [ - docker.io/sysdig/sysdig, sysdig/sysdig, - docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco, - gcr.io/google_containers/hyperkube, - gcr.io/google_containers/kube-proxy, docker.io/calico/node, - docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul, - docker.io/datadog/docker-dd-agent, docker.io/datadog/agent, docker.io/docker/ucp-agent, docker.io/gliderlabs/logspout, - docker.io/netdata/netdata, docker.io/google/cadvisor, docker.io/prom/node-exporter, - amazon/amazon-ecs-agent, prom/node-exporter, amazon/cloudwatch-agent - ] - -- macro: falco_sensitive_mount_containers - condition: (user_trusted_containers or - aws_eks_image_sensitive_mount or - container.image.repository in (trusted_images) or - container.image.repository in (falco_sensitive_mount_images) or - container.image.repository startswith quay.io/sysdig/) - -# Add conditions to this macro (probably in a separate file, -# overwriting this macro) to specify additional containers that are -# allowed to perform sensitive mounts. -# -# In this file, it just takes one of the images in falco_sensitive_mount_images -# and repeats it. -- macro: user_sensitive_mount_containers - condition: (never_true) - -- rule: Launch Privileged Container - desc: Detect the initial process started in a privileged container. Exceptions are made for known trusted images. 
- condition: > - container_started and container - and container.privileged=true - and not falco_privileged_containers - and not user_privileged_containers - output: Privileged container started (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag) - priority: INFO - tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement] - -# These capabilities were used in the past to escape from containers -- macro: excessively_capable_container - condition: > - (thread.cap_permitted contains CAP_SYS_ADMIN - or thread.cap_permitted contains CAP_SYS_MODULE - or thread.cap_permitted contains CAP_SYS_RAWIO - or thread.cap_permitted contains CAP_SYS_PTRACE - or thread.cap_permitted contains CAP_SYS_BOOT - or thread.cap_permitted contains CAP_SYSLOG - or thread.cap_permitted contains CAP_DAC_READ_SEARCH - or thread.cap_permitted contains CAP_NET_ADMIN - or thread.cap_permitted contains CAP_BPF) - -- rule: Launch Excessively Capable Container - desc: Detect container started with a powerful set of capabilities. Exceptions are made for known trusted images. - condition: > - container_started and container - and excessively_capable_container - and not falco_privileged_containers - and not user_privileged_containers - output: Excessively capable container started (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag cap_permitted=%thread.cap_permitted) - priority: INFO - tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement] - - -# For now, only considering a full mount of /etc as -# sensitive. Ideally, this would also consider all subdirectories -# below /etc as well, but the globbing mechanism -# doesn't allow exclusions of a full pattern, only single characters. 
-- macro: sensitive_mount - condition: (container.mount.dest[/proc*] != "N/A" or - container.mount.dest[/var/run/docker.sock] != "N/A" or - container.mount.dest[/var/run/crio/crio.sock] != "N/A" or - container.mount.dest[/run/containerd/containerd.sock] != "N/A" or - container.mount.dest[/var/lib/kubelet] != "N/A" or - container.mount.dest[/var/lib/kubelet/pki] != "N/A" or - container.mount.dest[/] != "N/A" or - container.mount.dest[/home/admin] != "N/A" or - container.mount.dest[/etc] != "N/A" or - container.mount.dest[/etc/kubernetes] != "N/A" or - container.mount.dest[/etc/kubernetes/manifests] != "N/A" or - container.mount.dest[/root*] != "N/A") - -# The steps libcontainer performs to set up the root program for a container are: -# - clone + exec self to a program runc:[0:PARENT] -# - clone a program runc:[1:CHILD] which sets up all the namespaces -# - clone a second program runc:[2:INIT] + exec to the root program. -# The parent of runc:[2:INIT] is runc:[0:PARENT] -# As soon as 1:CHILD is created, 0:PARENT exits, so there's a race -# where at the time 2:INIT execs the root program, 0:PARENT might have -# already exited, or might still be around. So we handle both. -# We also let runc:[1:CHILD] count as the parent process, which can occur -# when we lose events and lose track of state. - -- macro: container_entrypoint - condition: (not proc.pname exists or proc.pname in (runc:[0:PARENT], runc:[1:CHILD], runc, docker-runc, exe, docker-runc-cur)) - -- rule: Launch Sensitive Mount Container - desc: > - Detect the initial process started by a container that has a mount from a sensitive host directory - (i.e. /proc). Exceptions are made for known trusted images.
- condition: > - container_started and container - and sensitive_mount - and not falco_sensitive_mount_containers - and not user_sensitive_mount_containers - output: Container with sensitive mount started (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag mounts=%container.mounts) - priority: INFO - tags: [container, cis, mitre_lateral_movement] - -# In a local/user rules file, you could override this macro to -# explicitly enumerate the container images that you want to run in -# your environment. In this main falco rules file, there isn't any way -# to know all the containers that can run, so any container is -# allowed, by using a filter that is guaranteed to evaluate to true. -# In the overridden macro, the condition would look something like -# (container.image.repository = vendor/container-1 or -# container.image.repository = vendor/container-2 or ...) - -- macro: allowed_containers - condition: (container.id exists) - -- rule: Launch Disallowed Container - desc: > - Detect the initial process started by a container that is not in a list of allowed containers. - condition: container_started and container and not allowed_containers - output: Container started and not in allowed list (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag) - priority: WARNING - tags: [container, mitre_lateral_movement] - -- macro: user_known_system_user_login - condition: (never_true) - -# Anything run interactively by root -# - condition: evt.type != switch and user.name = root and proc.name != sshd and interactive -# output: "Interactive root (%user.name %proc.name %evt.dir %evt.type %evt.args %fd.name)" -# priority: WARNING - -- rule: System user interactive - desc: an attempt to run interactive commands by a system (i.e. non-login) user - condition: spawned_process and system_users and interactive and not user_known_system_user_login - output: "System user ran an interactive command (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline container_id=%container.id image=%container.image.repository)" - priority: INFO - tags: [users, mitre_remote_access_tools] - -# In some cases, a shell is expected to be run in a container. For example, configuration -# management software may do this, which is expected. -- macro: user_expected_terminal_shell_in_container_conditions - condition: (never_true) - -- rule: Terminal shell in container - desc: A shell was used as the entrypoint/exec point into a container with an attached terminal. - condition: > - spawned_process and container - and shell_procs and proc.tty != 0 - and container_entrypoint - and not user_expected_terminal_shell_in_container_conditions - output: > - A shell was spawned in a container with an attached terminal (user=%user.name user_loginuid=%user.loginuid %container.info - shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [container, shell, mitre_execution] - -# For some container types (mesos), there isn't a container image to -# work with, and the container name is autogenerated, so there isn't -# any stable aspect of the software to work with. In this case, we -# fall back to allowing certain command lines.
- -- list: known_shell_spawn_cmdlines - items: [ - '"sh -c uname -p 2> /dev/null"', - '"sh -c uname -s 2>&1"', - '"sh -c uname -r 2>&1"', - '"sh -c uname -v 2>&1"', - '"sh -c uname -a 2>&1"', - '"sh -c ruby -v 2>&1"', - '"sh -c getconf CLK_TCK"', - '"sh -c getconf PAGESIZE"', - '"sh -c LC_ALL=C LANG=C /sbin/ldconfig -p 2>/dev/null"', - '"sh -c LANG=C /sbin/ldconfig -p 2>/dev/null"', - '"sh -c /sbin/ldconfig -p 2>/dev/null"', - '"sh -c stty -a 2>/dev/null"', - '"sh -c stty -a < /dev/tty"', - '"sh -c stty -g < /dev/tty"', - '"sh -c node index.js"', - '"sh -c node index"', - '"sh -c node ./src/start.js"', - '"sh -c node app.js"', - '"sh -c node -e \"require(''nan'')\""', - '"sh -c node -e \"require(''nan'')\")"', - '"sh -c node $NODE_DEBUG_OPTION index.js "', - '"sh -c crontab -l 2"', - '"sh -c lsb_release -a"', - '"sh -c lsb_release -is 2>/dev/null"', - '"sh -c whoami"', - '"sh -c node_modules/.bin/bower-installer"', - '"sh -c /bin/hostname -f 2> /dev/null"', - '"sh -c locale -a"', - '"sh -c -t -i"', - '"sh -c openssl version"', - '"bash -c id -Gn kafadmin"', - '"sh -c /bin/sh -c ''date +%%s''"', - '"sh -c /usr/share/lighttpd/create-mime.conf.pl"' - ] - -# This list allows for easy additions to the set of commands allowed -# to run shells in containers without having to copy and override the -# entire run shell in container macro. Once -# https://github.com/draios/falco/issues/255 is fixed this will be a -# bit easier, as someone could append to any of the existing lists. -- list: user_known_shell_spawn_binaries - items: [] - -# This macro allows for easy additions to the set of commands allowed -# to run shells in containers without having to override the entire -# rule. Its default value is an expression that is always false, which -# becomes true when the "not ..." in the rule is applied.
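-# For example, a local/user rules file could exclude shells spawned -# by a known build container like this (the image name below is -# illustrative, not part of the default rules): -# -# - macro: user_shell_container_exclusions -#   condition: (container.image.repository = "example.com/ci/builder")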
-- macro: user_shell_container_exclusions - condition: (never_true) - -- macro: login_doing_dns_lookup - condition: (proc.name=login and fd.l4proto=udp and fd.sport=53) - -# sockfamily ip is to exclude certain processes (like 'groups') that communicate on unix-domain sockets -# systemd can listen on ports to launch things like sshd on demand -- rule: System procs network activity - desc: any network activity performed by system binaries that are not expected to send or receive any network traffic - condition: > - (fd.sockfamily = ip and (system_procs or proc.name in (shell_binaries))) - and (inbound_outbound) - and not proc.name in (known_system_procs_network_activity_binaries) - and not login_doing_dns_lookup - and not user_expected_system_procs_network_activity_conditions - output: > - Known system binary sent/received network traffic - (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline connection=%fd.name container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [network, mitre_exfiltration] - -# This list allows easily whitelisting system proc names that are -# expected to communicate on the network. -- list: known_system_procs_network_activity_binaries - items: [systemd, hostid, id] - -# This macro allows specifying conditions under which a system binary -# is allowed to communicate on the network. For instance, only specific -# proc.cmdline values could be allowed to be more granular in what is -# allowed. -- macro: user_expected_system_procs_network_activity_conditions - condition: (never_true) - -# When filled in, this should look something like: -# (proc.env contains "HTTP_PROXY=http://my.http.proxy.com ") -# The trailing space is intentional, to avoid matching on prefixes of -# the actual proxy.
-- macro: allowed_ssh_proxy_env - condition: (always_true) - -- list: http_proxy_binaries - items: [curl, wget] - -- macro: http_proxy_procs - condition: (proc.name in (http_proxy_binaries)) - -- rule: Program run with disallowed http proxy env - desc: An attempt to run a program with a disallowed HTTP_PROXY environment variable - condition: > - spawned_process and - http_proxy_procs and - not allowed_ssh_proxy_env and - proc.env icontains HTTP_PROXY - output: > - Program run with disallowed HTTP_PROXY environment variable - (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline env=%proc.env parent=%proc.pname container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [host, users] - -# In some environments, any attempt by an interpreted program (perl, -# python, ruby, etc) to listen for incoming connections or perform -# outgoing connections might be suspicious. These rules are not -# enabled by default, but you can modify the following macros to -# enable them. - -- macro: consider_interpreted_inbound - condition: (never_true) - -- macro: consider_interpreted_outbound - condition: (never_true) - -- rule: Interpreted procs inbound network activity - desc: Any inbound network activity performed by any interpreted program (perl, python, ruby, etc.) - condition: > - (inbound and consider_interpreted_inbound - and interpreted_procs) - output: > - Interpreted program received/listened for network traffic - (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline connection=%fd.name container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [network, mitre_exfiltration] - -- rule: Interpreted procs outbound network activity - desc: Any outbound network activity performed by any interpreted program (perl, python, ruby, etc.)
- condition: > - (outbound and consider_interpreted_outbound - and interpreted_procs) - output: > - Interpreted program performed outgoing network connection - (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline connection=%fd.name container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [network, mitre_exfiltration] - -- list: openvpn_udp_ports - items: [1194, 1197, 1198, 8080, 9201] - -- list: l2tp_udp_ports - items: [500, 1701, 4500, 10000] - -- list: statsd_ports - items: [8125] - -- list: ntp_ports - items: [123] - -# Some applications will connect a udp socket to an address only to -# test connectivity. Assuming the udp connect works, they will follow -# up with a tcp connect that actually sends/receives data. -# -# With that in mind, we listed a few commonly seen ports here to avoid -# some false positives. In addition, we make the main rule opt-in, so -# it's disabled by default. - -- list: test_connect_ports - items: [0, 9, 80, 3306] - -- macro: do_unexpected_udp_check - condition: (never_true) - -- list: expected_udp_ports - items: [53, openvpn_udp_ports, l2tp_udp_ports, statsd_ports, ntp_ports, test_connect_ports] - -- macro: expected_udp_traffic - condition: fd.port in (expected_udp_ports) - -- rule: Unexpected UDP Traffic - desc: UDP traffic not on port 53 (DNS) or other commonly used ports - condition: (inbound_outbound) and do_unexpected_udp_check and fd.l4proto=udp and not expected_udp_traffic - output: > - Unexpected UDP Traffic Seen - (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline connection=%fd.name proto=%fd.l4proto evt=%evt.type %evt.args container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [network, mitre_exfiltration] - -# With the current restriction on system calls handled by falco -# (e.g. excluding read/write/sendto/recvfrom/etc, this rule won't -# trigger). 
-# - rule: Ssh error in syslog -# desc: any ssh errors (failed logins, disconnects, ...) sent to syslog -# condition: syslog and ssh_error_message and evt.dir = < -# output: "sshd sent error message to syslog (error=%evt.buffer)" -# priority: WARNING - -- macro: somebody_becoming_themselves - condition: ((user.name=nobody and evt.arg.uid=nobody) or - (user.name=www-data and evt.arg.uid=www-data) or - (user.name=_apt and evt.arg.uid=_apt) or - (user.name=postfix and evt.arg.uid=postfix) or - (user.name=pki-agent and evt.arg.uid=pki-agent) or - (user.name=pki-acme and evt.arg.uid=pki-acme) or - (user.name=nfsnobody and evt.arg.uid=nfsnobody) or - (user.name=postgres and evt.arg.uid=postgres)) - -- macro: nrpe_becoming_nagios - condition: (proc.name=nrpe and evt.arg.uid=nagios) - -# In containers, the user name might be for a uid that exists in the -# container but not on the host. (See -# https://github.com/draios/sysdig/issues/954). So in that case, allow -# a setuid. -- macro: known_user_in_container - condition: (container and user.name != "N/A") - -# Add conditions to this macro (probably in a separate file, -# overwriting this macro) to allow for specific combinations of -# programs changing users by calling setuid. -# -# In this file, it just takes one of the condition in the base macro -# and repeats it. -- macro: user_known_non_sudo_setuid_conditions - condition: user.name=root - -# sshd, mail programs attempt to setuid to root even when running as non-root. Excluding here to avoid meaningless FPs -- rule: Non sudo setuid - desc: > - an attempt to change users by calling setuid. sudo/su are excluded. users "root" and "nobody" - suing to itself are also excluded, as setuid calls typically involve dropping privileges. 
- condition: > - evt.type=setuid and evt.dir=> - and (known_user_in_container or not container) - and not (user.name=root or user.uid=0) - and not somebody_becoming_themselves - and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries, - nomachine_binaries) - and not proc.name startswith "runc:" - and not java_running_sdjagent - and not nrpe_becoming_nagios - and not user_known_non_sudo_setuid_conditions - output: > - Unexpected setuid call by non-sudo, non-root program (user=%user.name user_loginuid=%user.loginuid cur_uid=%user.uid parent=%proc.pname - command=%proc.cmdline uid=%evt.arg.uid container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [users, mitre_privilege_escalation] - -- macro: user_known_user_management_activities - condition: (never_true) - -- macro: chage_list - condition: (proc.name=chage and (proc.cmdline contains "-l" or proc.cmdline contains "--list")) - -- rule: User mgmt binaries - desc: > - activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded. - Activity in containers is also excluded--some containers create custom users on top - of a base linux distribution at startup. - Some innocuous command lines that don't actually change anything are excluded. 
- condition: > - spawned_process and proc.name in (user_mgmt_binaries) and - not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and - not proc.pname in (cron_binaries, systemd, systemd.postins, udev.postinst, run-parts) and - not proc.cmdline startswith "passwd -S" and - not proc.cmdline startswith "useradd -D" and - not proc.cmdline startswith "systemd --version" and - not run_by_qualys and - not run_by_sumologic_securefiles and - not run_by_yum and - not run_by_ms_oms and - not run_by_google_accounts_daemon and - not chage_list and - not user_known_user_management_activities - output: > - User management binary command run outside of container - (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4]) - priority: NOTICE - tags: [host, users, mitre_persistence] - -- list: allowed_dev_files - items: [ - /dev/null, /dev/stdin, /dev/stdout, /dev/stderr, - /dev/random, /dev/urandom, /dev/console, /dev/kmsg - ] - -- macro: user_known_create_files_below_dev_activities - condition: (never_true) - -# (we may need to add additional checks against false positives, see: -# https://bugs.launchpad.net/ubuntu/+source/rkhunter/+bug/86153) -- rule: Create files below dev - desc: creating any files below /dev other than known programs that manage devices. Some rootkits hide files in /dev. 
- condition: > - fd.directory = /dev and - (evt.type = creat or (evt.type in (open,openat,openat2) and evt.arg.flags contains O_CREAT)) - and not proc.name in (dev_creation_binaries) - and not fd.name in (allowed_dev_files) - and not fd.name startswith /dev/tty - and not user_known_create_files_below_dev_activities - output: "File created below /dev by untrusted program (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline file=%fd.name container_id=%container.id image=%container.image.repository)" - priority: ERROR - tags: [filesystem, mitre_persistence] - - -# In a local/user rules file, you could override this macro to -# explicitly enumerate the container images that you want to allow -# access to EC2 metadata. In this main falco rules file, there isn't -# any way to know all the containers that should have access, so any -# container is allowed, by repeating the "container" macro. In the -# overridden macro, the condition would look something like -# (container.image.repository = vendor/container-1 or -# container.image.repository = vendor/container-2 or ...) -- macro: ec2_metadata_containers - condition: container - -# On EC2 instances, 169.254.169.254 is a special IP used to fetch -# metadata about the instance. It may be desirable to prevent access -# to this IP from containers. -- rule: Contact EC2 Instance Metadata Service From Container - desc: Detect attempts to contact the EC2 Instance Metadata Service from a container - condition: outbound and fd.sip="169.254.169.254" and container and not ec2_metadata_containers - output: Outbound connection to EC2 instance metadata service (command=%proc.cmdline connection=%fd.name %container.info image=%container.image.repository:%container.image.tag) - priority: NOTICE - tags: [network, aws, container, mitre_discovery] - - -# This rule is not enabled by default, since this rule is for cloud environments (GCP, AWS and Azure) only.
-# If you want to enable this rule, overwrite the first macro, -# And you can filter the container that you want to allow access to metadata by overwriting the second macro. -- macro: consider_metadata_access - condition: (never_true) - -- macro: user_known_metadata_access - condition: (k8s.ns.name = "kube-system") - -# On GCP, AWS and Azure, 169.254.169.254 is a special IP used to fetch -# metadata about the instance. The metadata could be used to get credentials by attackers. -- rule: Contact cloud metadata service from container - desc: Detect attempts to contact the Cloud Instance Metadata Service from a container - condition: outbound and fd.sip="169.254.169.254" and container and consider_metadata_access and not user_known_metadata_access - output: Outbound connection to cloud instance metadata service (command=%proc.cmdline connection=%fd.name %container.info image=%container.image.repository:%container.image.tag) - priority: NOTICE - tags: [network, container, mitre_discovery] - -# Containers from IBM Cloud -- list: ibm_cloud_containers - items: - - icr.io/ext/sysdig/agent - - registry.ng.bluemix.net/armada-master/metrics-server-amd64 - - registry.ng.bluemix.net/armada-master/olm - -# In a local/user rules file, list the namespace or container images that are -# allowed to contact the K8s API Server from within a container. This -# might cover cases where the K8s infrastructure itself is running -# within a container. 
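-# For example, a local/user rules file could replace the default -# k8s_containers macro to allowlist one extra image on top of the -# kube-system namespace (the image name below is illustrative, not -# part of the default rules): -# -# - macro: k8s_containers -#   condition: (container.image.repository in (example.com/platform/operator) or (k8s.ns.name = "kube-system"))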
-- macro: k8s_containers - condition: > - (container.image.repository in (gcr.io/google_containers/hyperkube-amd64, - gcr.io/google_containers/kube2sky, - docker.io/sysdig/sysdig, docker.io/falcosecurity/falco, - sysdig/sysdig, falcosecurity/falco, - fluent/fluentd-kubernetes-daemonset, prom/prometheus, - ibm_cloud_containers, - public.ecr.aws/falcosecurity/falco) - or (k8s.ns.name = "kube-system")) - -- macro: k8s_api_server - condition: (fd.sip.name="kubernetes.default.svc.cluster.local") - -- macro: user_known_contact_k8s_api_server_activities - condition: (never_true) - -- rule: Contact K8S API Server From Container - desc: Detect attempts to contact the K8S API Server from a container - condition: > - evt.type=connect and evt.dir=< and - (fd.typechar=4 or fd.typechar=6) and - container and - not k8s_containers and - k8s_api_server and - not user_known_contact_k8s_api_server_activities - output: Unexpected connection to K8s API Server from container (command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag connection=%fd.name) - priority: NOTICE - tags: [network, k8s, container, mitre_discovery] - -# In a local/user rules file, list the container images that are -# allowed to contact NodePort services from within a container. This -# might cover cases where the K8s infrastructure itself is running -# within a container. -# -# By default, all containers are allowed to contact NodePort services. 
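-# To restrict NodePort access instead, a local/user rules file could -# narrow the macro to specific images (the image name below is -# illustrative, not part of the default rules): -# -# - macro: nodeport_containers -#   condition: (container.image.repository = "example.com/net/ingress-probe")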
-- macro: nodeport_containers - condition: container - -- rule: Unexpected K8s NodePort Connection - desc: Detect attempts to use K8s NodePorts from a container - condition: (inbound_outbound) and fd.sport >= 30000 and fd.sport <= 32767 and container and not nodeport_containers - output: Unexpected K8s NodePort Connection (command=%proc.cmdline connection=%fd.name container_id=%container.id image=%container.image.repository) - priority: NOTICE - tags: [network, k8s, container, mitre_port_knocking] - -- list: network_tool_binaries - items: [nc, ncat, nmap, dig, tcpdump, tshark, ngrep, telnet, mitmproxy, socat, zmap] - -- macro: network_tool_procs - condition: (proc.name in (network_tool_binaries)) - -# In a local/user rules file, create a condition that matches legitimate uses -# of a package management process inside a container. -# -# For example: -# - macro: user_known_package_manager_in_container -# condition: proc.cmdline="dpkg -l" -- macro: user_known_package_manager_in_container - condition: (never_true) - -# Container is supposed to be immutable. Package management should be done in building the image. 
-- rule: Launch Package Management Process in Container - desc: Package management process ran inside container - condition: > - spawned_process - and container - and user.name != "_apt" - and package_mgmt_procs - and not package_mgmt_ancestor_procs - and not user_known_package_manager_in_container - output: > - Package management process launched in container (user=%user.name user_loginuid=%user.loginuid - command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: ERROR - tags: [process, mitre_persistence] - -- rule: Netcat Remote Code Execution in Container - desc: Netcat Program runs inside container that allows remote code execution - condition: > - spawned_process and container and - ((proc.name = "nc" and (proc.args contains "-e" or proc.args contains "-c")) or - (proc.name = "ncat" and (proc.args contains "--sh-exec" or proc.args contains "--exec" or proc.args contains "-e " - or proc.args contains "-c " or proc.args contains "--lua-exec")) - ) - output: > - Netcat runs inside container that allows remote code execution (user=%user.name user_loginuid=%user.loginuid - command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: WARNING - tags: [network, process, mitre_execution] - -- macro: user_known_network_tool_activities - condition: (never_true) - -- rule: Launch Suspicious Network Tool in Container - desc: Detect network tools launched inside container - condition: > - spawned_process and container and network_tool_procs and not user_known_network_tool_activities - output: > - Network tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname - container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: NOTICE - tags: [network, process, mitre_discovery, 
mitre_exfiltration] - -# This rule is not enabled by default, as there are legitimate use -# cases for these tools on hosts. If you want to enable it, modify the -# following macro. -- macro: consider_network_tools_on_host - condition: (never_true) - -- rule: Launch Suspicious Network Tool on Host - desc: Detect network tools launched on the host - condition: > - spawned_process and - not container and - consider_network_tools_on_host and - network_tool_procs and - not user_known_network_tool_activities - output: > - Network tool launched on host (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname) - priority: NOTICE - tags: [network, process, mitre_discovery, mitre_exfiltration] - -- list: grep_binaries - items: [grep, egrep, fgrep] - -- macro: grep_commands - condition: (proc.name in (grep_binaries)) - -# a less restrictive search for things that might be passwords/ssh/user etc. -- macro: grep_more - condition: (never_true) - -- macro: private_key_or_password - condition: > - (proc.args icontains "BEGIN PRIVATE" or - proc.args icontains "BEGIN RSA PRIVATE" or - proc.args icontains "BEGIN DSA PRIVATE" or - proc.args icontains "BEGIN EC PRIVATE" or - (grep_more and - (proc.args icontains " pass " or - proc.args icontains " ssh " or - proc.args icontains " user ")) - ) - -- rule: Search Private Keys or Passwords - desc: > - Detect grep private keys or passwords activity. 
- condition: > - (spawned_process and - ((grep_commands and private_key_or_password) or - (proc.name = "find" and (proc.args contains "id_rsa" or proc.args contains "id_dsa"))) - ) - output: > - Grep private keys or passwords activities found - (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline container_id=%container.id container_name=%container.name - image=%container.image.repository:%container.image.tag) - priority: - WARNING - tags: [process, mitre_credential_access] - -- list: log_directories - items: [/var/log, /dev/log] - -- list: log_files - items: [syslog, auth.log, secure, kern.log, cron, user.log, dpkg.log, last.log, yum.log, access_log, mysql.log, mysqld.log] - -- macro: access_log_files - condition: (fd.directory in (log_directories) or fd.filename in (log_files)) - -# a placeholder for whitelist log files that could be cleared. Recommend the macro as (fd.name startswith "/var/log/app1*") -- macro: allowed_clear_log_files - condition: (never_true) - -- macro: trusted_logging_images - condition: (container.image.repository endswith "splunk/fluentd-hec" or - container.image.repository endswith "fluent/fluentd-kubernetes-daemonset" or - container.image.repository endswith "openshift3/ose-logging-fluentd" or - container.image.repository endswith "containernetworking/azure-npm") - -- rule: Clear Log Activities - desc: Detect clearing of critical log files - condition: > - open_write and - access_log_files and - evt.arg.flags contains "O_TRUNC" and - not trusted_logging_images and - not allowed_clear_log_files - output: > - Log files were tampered (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline file=%fd.name container_id=%container.id image=%container.image.repository) - priority: - WARNING - tags: [file, mitre_defense_evasion] - -- list: data_remove_commands - items: [shred, mkfs, mke2fs] - -- macro: clear_data_procs - condition: (proc.name in (data_remove_commands)) - -- macro: user_known_remove_data_activities - 
condition: (never_true) - -- rule: Remove Bulk Data from Disk - desc: Detect process running to clear bulk data from disk - condition: spawned_process and clear_data_procs and not user_known_remove_data_activities - output: > - Bulk data has been removed from disk (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline file=%fd.name container_id=%container.id image=%container.image.repository) - priority: - WARNING - tags: [process, mitre_persistence] - -# here `ash_history` will match both `bash_history` and `ash_history` -- macro: modify_shell_history - condition: > - (modify and ( - evt.arg.name endswith "ash_history" or - evt.arg.name endswith "zsh_history" or - evt.arg.name contains "fish_read_history" or - evt.arg.name endswith "fish_history" or - evt.arg.oldpath endswith "ash_history" or - evt.arg.oldpath endswith "zsh_history" or - evt.arg.oldpath contains "fish_read_history" or - evt.arg.oldpath endswith "fish_history" or - evt.arg.path endswith "ash_history" or - evt.arg.path endswith "zsh_history" or - evt.arg.path contains "fish_read_history" or - evt.arg.path endswith "fish_history")) - -# here `ash_history` will match both `bash_history` and `ash_history` -- macro: truncate_shell_history - condition: > - (open_write and ( - fd.name endswith "ash_history" or - fd.name endswith "zsh_history" or - fd.name contains "fish_read_history" or - fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC") - -- macro: var_lib_docker_filepath - condition: (evt.arg.name startswith /var/lib/docker or fd.name startswith /var/lib/docker) - -- rule: Delete or rename shell history - desc: Detect shell history deletion - condition: > - (modify_shell_history or truncate_shell_history) and - not var_lib_docker_filepath and - not proc.name in (docker_binaries) - output: > - Shell history had been deleted or renamed (user=%user.name user_loginuid=%user.loginuid type=%evt.type command=%proc.cmdline fd.name=%fd.name name=%evt.arg.name path=%evt.arg.path 
oldpath=%evt.arg.oldpath %container.info) - priority: - WARNING - tags: [process, mitre_defense_evasion] - -# This rule is deprecated and will/should never be triggered. Keep it here for backport compatibility. -# Rule Delete or rename shell history is the preferred rule to use now. -- rule: Delete Bash History - desc: Detect bash history deletion - condition: > - ((spawned_process and proc.name in (shred, rm, mv) and proc.args contains "bash_history") or - (open_write and fd.name contains "bash_history" and evt.arg.flags contains "O_TRUNC")) - output: > - Shell history had been deleted or renamed (user=%user.name user_loginuid=%user.loginuid type=%evt.type command=%proc.cmdline fd.name=%fd.name name=%evt.arg.name path=%evt.arg.path oldpath=%evt.arg.oldpath %container.info) - priority: - WARNING - tags: [process, mitre_defense_evasion] - -- macro: consider_all_chmods - condition: (always_true) - -- list: user_known_chmod_applications - items: [hyperkube, kubelet, k3s-agent] - -# This macro should be overridden in user rules as needed. This is useful if a given application -# should not be ignored altogether with the user_known_chmod_applications list, but only in -# specific conditions. -- macro: user_known_set_setuid_or_setgid_bit_conditions - condition: (never_true) - -- rule: Set Setuid or Setgid bit - desc: > - When the setuid or setgid bits are set for an application, - this means that the application will run with the privileges of the owning user or group respectively. 
- Detect setuid or setgid bits set via chmod - condition: > - consider_all_chmods and chmod and (evt.arg.mode contains "S_ISUID" or evt.arg.mode contains "S_ISGID") - and not proc.name in (user_known_chmod_applications) - and not exe_running_docker_save - and not user_known_set_setuid_or_setgid_bit_conditions - enabled: false - output: > - Setuid or setgid bit is set via chmod (fd=%evt.arg.fd filename=%evt.arg.filename mode=%evt.arg.mode user=%user.name user_loginuid=%user.loginuid process=%proc.name - command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: - NOTICE - tags: [process, mitre_persistence] - -- list: exclude_hidden_directories - items: [/root/.cassandra] - -# To use this rule, you should modify consider_hidden_file_creation. -- macro: consider_hidden_file_creation - condition: (never_true) - -- macro: user_known_create_hidden_file_activities - condition: (never_true) - -- rule: Create Hidden Files or Directories - desc: Detect hidden files or directories created - condition: > - ((modify and evt.arg.newpath contains "/.") or - (mkdir and evt.arg.path contains "/.") or - (open_write and evt.arg.flags contains "O_CREAT" and fd.name contains "/." 
and not fd.name pmatch (exclude_hidden_directories))) and - consider_hidden_file_creation and - not user_known_create_hidden_file_activities - and not exe_running_docker_save - output: > - Hidden file or directory created (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline - file=%fd.name newpath=%evt.arg.newpath container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: - NOTICE - tags: [file, mitre_persistence] - -- list: remote_file_copy_binaries - items: [rsync, scp, sftp, dcp] - -- macro: remote_file_copy_procs - condition: (proc.name in (remote_file_copy_binaries)) - -# Users should overwrite this macro to specify conditions under which a -# Custom condition for use of remote file copy tool in container -- macro: user_known_remote_file_copy_activities - condition: (never_true) - -- rule: Launch Remote File Copy Tools in Container - desc: Detect remote file copy tools launched in container - condition: > - spawned_process - and container - and remote_file_copy_procs - and not user_known_remote_file_copy_activities - output: > - Remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname - container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: NOTICE - tags: [network, process, mitre_lateral_movement, mitre_exfiltration] - -- rule: Create Symlink Over Sensitive Files - desc: Detect symlink created over sensitive files - condition: > - create_symlink and - (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names)) - output: > - Symlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname) - priority: WARNING - tags: [file, mitre_exfiltration] - -- rule: Create Hardlink Over 
Sensitive Files - desc: Detect hardlink created over sensitive files - condition: > - create_hardlink and - (evt.arg.oldpath in (sensitive_file_names)) - output: > - Hardlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.oldpath linkpath=%evt.arg.newpath parent_process=%proc.pname) - priority: WARNING - tags: [file, mitre_exfiltration] - -- list: miner_ports - items: [ - 25, 3333, 3334, 3335, 3336, 3357, 4444, - 5555, 5556, 5588, 5730, 6099, 6666, 7777, - 7778, 8000, 8001, 8008, 8080, 8118, 8333, - 8888, 8899, 9332, 9999, 14433, 14444, - 45560, 45700 - ] - -- list: miner_domains - items: [ - "asia1.ethpool.org","ca.minexmr.com", - "cn.stratum.slushpool.com","de.minexmr.com", - "eth-ar.dwarfpool.com","eth-asia.dwarfpool.com", - "eth-asia1.nanopool.org","eth-au.dwarfpool.com", - "eth-au1.nanopool.org","eth-br.dwarfpool.com", - "eth-cn.dwarfpool.com","eth-cn2.dwarfpool.com", - "eth-eu.dwarfpool.com","eth-eu1.nanopool.org", - "eth-eu2.nanopool.org","eth-hk.dwarfpool.com", - "eth-jp1.nanopool.org","eth-ru.dwarfpool.com", - "eth-ru2.dwarfpool.com","eth-sg.dwarfpool.com", - "eth-us-east1.nanopool.org","eth-us-west1.nanopool.org", - "eth-us.dwarfpool.com","eth-us2.dwarfpool.com", - "eu.stratum.slushpool.com","eu1.ethermine.org", - "eu1.ethpool.org","fr.minexmr.com", - "mine.moneropool.com","mine.xmrpool.net", - "pool.minexmr.com","pool.monero.hashvault.pro", - "pool.supportxmr.com","sg.minexmr.com", - "sg.stratum.slushpool.com","stratum-eth.antpool.com", - "stratum-ltc.antpool.com","stratum-zec.antpool.com", - "stratum.antpool.com","us-east.stratum.slushpool.com", - "us1.ethermine.org","us1.ethpool.org", - "us2.ethermine.org","us2.ethpool.org", - "xmr-asia1.nanopool.org","xmr-au1.nanopool.org", - "xmr-eu1.nanopool.org","xmr-eu2.nanopool.org", - "xmr-jp1.nanopool.org","xmr-us-east1.nanopool.org", - "xmr-us-west1.nanopool.org","xmr.crypto-pool.fr", - "xmr.pool.minergate.com", "rx.unmineable.com", - 
"ss.antpool.com","dash.antpool.com", - "eth.antpool.com","zec.antpool.com", - "xmc.antpool.com","btm.antpool.com", - "stratum-dash.antpool.com","stratum-xmc.antpool.com", - "stratum-btm.antpool.com" - ] - -- list: https_miner_domains - items: [ - "ca.minexmr.com", - "cn.stratum.slushpool.com", - "de.minexmr.com", - "fr.minexmr.com", - "mine.moneropool.com", - "mine.xmrpool.net", - "pool.minexmr.com", - "sg.minexmr.com", - "stratum-eth.antpool.com", - "stratum-ltc.antpool.com", - "stratum-zec.antpool.com", - "stratum.antpool.com", - "xmr.crypto-pool.fr", - "ss.antpool.com", - "stratum-dash.antpool.com", - "stratum-xmc.antpool.com", - "stratum-btm.antpool.com", - "btm.antpool.com" - ] - -- list: http_miner_domains - items: [ - "ca.minexmr.com", - "de.minexmr.com", - "fr.minexmr.com", - "mine.moneropool.com", - "mine.xmrpool.net", - "pool.minexmr.com", - "sg.minexmr.com", - "xmr.crypto-pool.fr" - ] - -# Add rule based on crypto mining IOCs -- macro: minerpool_https - condition: (fd.sport="443" and fd.sip.name in (https_miner_domains)) - -- macro: minerpool_http - condition: (fd.sport="80" and fd.sip.name in (http_miner_domains)) - -- macro: minerpool_other - condition: (fd.sport in (miner_ports) and fd.sip.name in (miner_domains)) - -- macro: net_miner_pool - condition: (evt.type in (sendto, sendmsg, connect) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other))) - -- macro: trusted_images_query_miner_domain_dns - condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco)) - -# The rule is disabled by default. -# Note: falco will send DNS request to resolve miner pool domain which may trigger alerts in your environment. -- rule: Detect outbound connections to common miner pool ports - desc: Miners typically connect to miner pools on common ports. 
- condition: net_miner_pool and not trusted_images_query_miner_domain_dns - enabled: false - output: Outbound connection to IP/Port flagged by https://cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository) - priority: CRITICAL - tags: [network, mitre_execution] - -- rule: Detect crypto miners using the Stratum protocol - desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp' - condition: spawned_process and (proc.cmdline contains "stratum+tcp" or proc.cmdline contains "stratum2+tcp" or proc.cmdline contains "stratum+ssl" or proc.cmdline contains "stratum2+ssl") - output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository) - priority: CRITICAL - tags: [process, mitre_execution] - -- list: k8s_client_binaries - items: [docker, kubectl, crictl] - -- list: user_known_k8s_ns_kube_system_images - items: [ - k8s.gcr.io/fluentd-gcp-scaler, - k8s.gcr.io/node-problem-detector/node-problem-detector - ] - -- list: user_known_k8s_images - items: [ - mcr.microsoft.com/aks/hcp/hcp-tunnel-front - ] - -# Whitelist for known docker client binaries run inside container -# - k8s.gcr.io/fluentd-gcp-scaler in GCP/GKE -- macro: user_known_k8s_client_container - condition: > - (k8s.ns.name="kube-system" and container.image.repository in (user_known_k8s_ns_kube_system_images)) or container.image.repository in (user_known_k8s_images) - -- macro: user_known_k8s_client_container_parens - condition: (user_known_k8s_client_container) - -- rule: The docker client is executed in a container - desc: Detect a k8s client tool executed inside a container - condition: spawned_process and container and not user_known_k8s_client_container_parens and proc.name in (k8s_client_binaries) - output: "Docker or kubernetes client executed in container (user=%user.name user_loginuid=%user.loginuid %container.info parent=%proc.pname 
cmdline=%proc.cmdline image=%container.image.repository:%container.image.tag)" - priority: WARNING - tags: [container, mitre_execution] - - -# This rule is enabled by default. -# If you want to disable it, modify the following macro. -- macro: consider_packet_socket_communication - condition: (always_true) - -- list: user_known_packet_socket_binaries - items: [] - -- rule: Packet socket created in container - desc: Detect new packet socket at the device driver (OSI Layer 2) level in a container. Packet socket could be used for ARP Spoofing and privilege escalation(CVE-2020-14386) by attacker. - condition: evt.type=socket and evt.arg[0]=AF_PACKET and consider_packet_socket_communication and container and not proc.name in (user_known_packet_socket_binaries) - output: Packet socket was created in a container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline socket_info=%evt.args container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: NOTICE - tags: [network, mitre_discovery] - -# Change to (always_true) to enable rule 'Network connection outside local subnet' -- macro: enabled_rule_network_only_subnet - condition: (never_true) - -# Namespaces where the rule is enforce -- list: namespace_scope_network_only_subnet - items: [] - -- macro: network_local_subnet - condition: > - fd.rnet in (rfc_1918_addresses) or - fd.ip = "0.0.0.0" or - fd.net = "127.0.0.0/8" - -# # How to test: -# # Change macro enabled_rule_network_only_subnet to condition: always_true -# # Add 'default' to namespace_scope_network_only_subnet -# # Run: -# kubectl run --generator=run-pod/v1 -n default -i --tty busybox --image=busybox --rm -- wget google.com -O /var/google.html -# # Check logs running - -- rule: Network Connection outside Local Subnet - desc: Detect traffic to image outside local subnet. 
- condition: > - enabled_rule_network_only_subnet and - inbound_outbound and - container and - not network_local_subnet and - k8s.ns.name in (namespace_scope_network_only_subnet) - output: > - Network connection outside local subnet - (command=%proc.cmdline connection=%fd.name user=%user.name user_loginuid=%user.loginuid container_id=%container.id - image=%container.image.repository namespace=%k8s.ns.name - fd.rip.name=%fd.rip.name fd.lip.name=%fd.lip.name fd.cip.name=%fd.cip.name fd.sip.name=%fd.sip.name) - priority: WARNING - tags: [network] - -- macro: allowed_port - condition: (never_true) - -- list: allowed_image - items: [] # add image to monitor, i.e.: bitnami/nginx - -- list: authorized_server_port - items: [] # add port to allow, i.e.: 80 - -# # How to test: -# kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster" -# kubectl expose deployment nginx-app --port=80 --name=nginx-http --type=LoadBalancer -# # On minikube: -# minikube service nginx-http -# # On general K8s: -# kubectl get services -# kubectl cluster-info -# # Visit the Nginx service and port, should not fire. -# # Change rule to different port, then different process name, and test again that it fires. - -- rule: Outbound or Inbound Traffic not to Authorized Server Process and Port - desc: Detect traffic that is not to authorized server process and port. 
- condition: > - allowed_port and - inbound_outbound and - container and - container.image.repository in (allowed_image) and - not proc.name in (authorized_server_binary) and - not fd.sport in (authorized_server_port) - output: > - Network connection outside authorized port and binary - (command=%proc.cmdline connection=%fd.name user=%user.name user_loginuid=%user.loginuid container_id=%container.id - image=%container.image.repository) - priority: WARNING - tags: [network] - -- macro: user_known_stand_streams_redirect_activities - condition: (never_true) - -- macro: dup - condition: evt.type in (dup, dup2, dup3) - -- rule: Redirect STDOUT/STDIN to Network Connection in Container - desc: Detect redirecting stdout/stdin to network connection in container (potential reverse shell). - condition: dup and container and evt.rawres in (0, 1, 2) and fd.type in ("ipv4", "ipv6") and not user_known_stand_streams_redirect_activities - output: > - Redirect stdout/stdin to network connection (user=%user.name user_loginuid=%user.loginuid %container.info process=%proc.name parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty container_id=%container.id image=%container.image.repository fd.name=%fd.name fd.num=%fd.num fd.type=%fd.type fd.sip=%fd.sip) - priority: NOTICE - -# The two Container Drift rules below will fire when a new executable is created in a container. -# There are two ways to create executables - file is created with execution permissions or permissions change of existing file. -# We will use a new filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container. -# The use case we are targeting here is an attempt to execute code that was not shipped as part of a container (drift) - -# an activity that might be malicious or non-compliant. 
-# Two things to pay attention to: -# 1) In most cases, 'docker cp' will not be identified, but the assumption is that if an attacker gained access to the container runtime daemon, they are already privileged -# 2) Drift rules will be noisy in environments in which containers are built (e.g. docker build) -# These two rules are not enabled by default. Use `never_true` in macro condition to enable them. - -- macro: user_known_container_drift_activities - condition: (always_true) - -- rule: Container Drift Detected (chmod) - desc: New executable created in a container due to chmod - condition: > - chmod and - consider_all_chmods and - container and - not runc_writing_exec_fifo and - not runc_writing_var_lib_docker and - not user_known_container_drift_activities and - evt.rawres>=0 and - ((evt.arg.mode contains "S_IXUSR") or - (evt.arg.mode contains "S_IXGRP") or - (evt.arg.mode contains "S_IXOTH")) - output: Drift detected (chmod), new executable created in a container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline filename=%evt.arg.filename name=%evt.arg.name mode=%evt.arg.mode event=%evt.type) - priority: ERROR - -# **************************************************************************** -# * "Container Drift Detected (open+create)" requires FALCO_ENGINE_VERSION 6 * -# **************************************************************************** -- rule: Container Drift Detected (open+create) - desc: New executable created in a container due to open+create - condition: > - evt.type in (open,openat,openat2,creat) and - evt.is_open_exec=true and - container and - not runc_writing_exec_fifo and - not runc_writing_var_lib_docker and - not user_known_container_drift_activities and - evt.rawres>=0 - output: Drift detected (open+create), new executable created in a container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline filename=%evt.arg.filename name=%evt.arg.name mode=%evt.arg.mode event=%evt.type) - priority: ERROR - -- 
list: c2_server_ip_list - items: [] - -- rule: Outbound Connection to C2 Servers - desc: Detect outbound connection to command & control servers - condition: outbound and fd.sip in (c2_server_ip_list) - output: Outbound connection to C2 server (command=%proc.cmdline connection=%fd.name user=%user.name user_loginuid=%user.loginuid container_id=%container.id image=%container.image.repository) - priority: WARNING - tags: [network] - -- list: white_listed_modules - items: [] - -- rule: Linux Kernel Module Injection Detected - desc: Detect kernel module was injected (from container). - condition: spawned_process and container and proc.name=insmod and not proc.args in (white_listed_modules) - output: Linux Kernel Module injection using insmod detected (user=%user.name user_loginuid=%user.loginuid parent_process=%proc.pname module=%proc.args %container.info image=%container.image.repository:%container.image.tag) - priority: WARNING - tags: [process] - -- list: run_as_root_image_list - items: [] - -- macro: user_known_run_as_root_container - condition: (container.image.repository in (run_as_root_image_list)) - -# The rule is disabled by default and should be enabled when non-root container policy has been applied. -# Note the rule will not work as expected when usernamespace is applied, e.g. userns-remap is enabled. 
-- rule: Container Run as Root User - desc: Detected container running as root user - condition: spawned_process and container and proc.vpid=1 and user.uid=0 and not user_known_run_as_root_container - enabled: false - output: Container launched with root user privilege (uid=%user.uid container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: INFO - tags: [container, process] - -# This rule helps detect CVE-2021-3156: -# A privilege escalation to root through heap-based buffer overflow -- rule: Sudo Potential Privilege Escalation - desc: Privilege escalation vulnerability affecting sudo (<= 1.9.5p2). Executing sudo using sudoedit -s or sudoedit -i command with command-line argument that ends with a single backslash character from an unprivileged user it's possible to elevate the user privileges to root. - condition: spawned_process and user.uid != 0 and (proc.name=sudoedit or proc.name = sudo) and (proc.args contains -s or proc.args contains -i or proc.args contains --login) and (proc.args contains "\ " or proc.args endswith \) - output: "Detect Sudo Privilege Escalation Exploit (CVE-2021-3156) (user=%user.name parent=%proc.pname cmdline=%proc.cmdline %container.info)" - priority: CRITICAL - tags: [filesystem, mitre_privilege_escalation] - -- rule: Debugfs Launched in Privileged Container - desc: Detect file system debugger debugfs launched inside a privileged container which might lead to container escape. 
- condition: > - spawned_process and container - and container.privileged=true - and proc.name=debugfs - output: Debugfs launched started in a privileged container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag) - priority: WARNING - tags: [container, cis, mitre_lateral_movement] - -- macro: mount_info - condition: (proc.args="" or proc.args intersects ("-V", "-l", "-h")) - -- macro: user_known_mount_in_privileged_containers - condition: (never_true) - -- rule: Mount Launched in Privileged Container - desc: Detect file system mount happened inside a privileged container which might lead to container escape. - condition: > - spawned_process and container - and container.privileged=true - and proc.name=mount - and not mount_info - and not user_known_mount_in_privileged_containers - output: Mount was executed inside a privileged container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag) - priority: WARNING - tags: [container, cis, mitre_lateral_movement] - -- macro: consider_userfaultfd_activities - condition: (always_true) - -- list: user_known_userfaultfd_processes - items: [] - -- rule: Unprivileged Delegation of Page Faults Handling to a Userspace Process - desc: Detect a successful unprivileged userfaultfd syscall which might act as an attack primitive to exploit other bugs - condition: > - consider_userfaultfd_activities and - evt.type = userfaultfd and - user.uid != 0 and - (evt.rawres >= 0 or evt.res != -1) and - not proc.name in (user_known_userfaultfd_processes) - output: An userfaultfd syscall was successfully executed by an unprivileged user (user=%user.name user_loginuid=%user.loginuid process=%proc.name command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag) - priority: CRITICAL - tags: [syscall, mitre_defense_evasion] - -- list: 
ingress_remote_file_copy_binaries - items: [wget] - -- macro: ingress_remote_file_copy_procs - condition: (proc.name in (ingress_remote_file_copy_binaries)) - -# Users should overwrite this macro to specify conditions under which a -# Custom condition for use of ingress remote file copy tool in container -- macro: user_known_ingress_remote_file_copy_activities - condition: (never_true) - -- macro: curl_download - condition: proc.name = curl and - (proc.cmdline contains " -o " or - proc.cmdline contains " --output " or - proc.cmdline contains " -O " or - proc.cmdline contains " --remote-name ") - -- rule: Launch Ingress Remote File Copy Tools in Container - desc: Detect ingress remote file copy tools launched in container - condition: > - spawned_process and - container and - (ingress_remote_file_copy_procs or curl_download) and - not user_known_ingress_remote_file_copy_activities - output: > - Ingress remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname - container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) - priority: NOTICE - tags: [network, process, mitre_command_and_control] - -# This rule helps detect CVE-2021-4034: -# A privilege escalation to root through memory corruption -- rule: Polkit Local Privilege Escalation Vulnerability (CVE-2021-4034) - desc: "This rule detects an attempt to exploit a privilege escalation vulnerability in Polkit's pkexec. 
By running specially crafted code, a local user can leverage this flaw to gain root privileges on a compromised system" - condition: - spawned_process and user.uid != 0 and proc.name=pkexec and proc.args = '' - output: - "Detect Polkit pkexec Local Privilege Escalation Exploit (CVE-2021-4034) (user=%user.loginname uid=%user.loginuid command=%proc.cmdline args=%proc.args)" - priority: CRITICAL - tags: [process, mitre_privilege_escalation] - - -- rule: Detect release_agent File Container Escapes - desc: "This rule detects an attempt to exploit a container escape using the release_agent file. By running a container with certain capabilities, a privileged user can modify the release_agent file and escape from the container" - condition: - open_write and container and fd.name endswith release_agent and (user.uid=0 or thread.cap_effective contains CAP_DAC_OVERRIDE) and thread.cap_effective contains CAP_SYS_ADMIN - output: - "Detect an attempt to exploit a container escape using the release_agent file (user=%user.name user_loginuid=%user.loginuid filename=%fd.name %container.info image=%container.image.repository:%container.image.tag cap_effective=%thread.cap_effective)" - priority: CRITICAL - tags: [container, mitre_privilege_escalation, mitre_lateral_movement] - -# Rule for detecting potential Log4Shell (CVE-2021-44228) exploitation -# Note: Not compatible with Java 17+, which uses read() syscalls -- macro: java_network_read - condition: (evt.type=recvfrom and fd.type in (ipv4, ipv6) and proc.name=java) - -- rule: Java Process Class File Download - desc: Detect a Java process downloading a class file, which could indicate a successful exploit of the Log4Shell Log4j vulnerability (CVE-2021-44228) - condition: > - java_network_read and evt.buffer bcontains cafebabe - output: Java process class file download (user=%user.name user_loginname=%user.loginname user_loginuid=%user.loginuid event=%evt.type connection=%fd.name server_ip=%fd.sip server_port=%fd.sport proto=%fd.l4proto
process=%proc.name command=%proc.cmdline parent=%proc.pname buffer=%evt.buffer container_id=%container.id image=%container.image.repository) - priority: CRITICAL - tags: [mitre_initial_access] - -# Application rules have moved to application_rules.yaml. Please look -# there if you want to enable them by adding to -# falco_rules.local.yaml. diff --git a/charts/falco/falco/charts/falco/rules/k8s_audit_rules.yaml b/charts/falco/falco/charts/falco/rules/k8s_audit_rules.yaml deleted file mode 100644 index 447602578..000000000 --- a/charts/falco/falco/charts/falco/rules/k8s_audit_rules.yaml +++ /dev/null @@ -1,743 +0,0 @@ -# -# Copyright (C) 2022 The Falco Authors. -# -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -- required_engine_version: 12 - -- required_plugin_versions: - - name: k8saudit - version: 0.1.0 - - name: json - version: 0.3.0 - -# Like always_true/always_false, but works with k8s audit events -- macro: k8s_audit_always_true - condition: (jevt.rawtime exists) - -- macro: k8s_audit_never_true - condition: (jevt.rawtime=0) - -# Generally only consider audit events once the response has completed -- list: k8s_audit_stages - items: ["ResponseComplete"] - -# Generally exclude users starting with "system:" -- macro: non_system_user - condition: (not ka.user.name startswith "system:") - -# This macro selects the set of Audit Events used by the below rules. 
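The Log4Shell rule deleted above matches Java network reads whose buffer contains the class-file magic number (`evt.buffer bcontains cafebabe`). A minimal stand-alone sketch of that byte check, in Python for illustration only — the helper name is ours, not Falco's:

```python
# Sketch of the `evt.buffer bcontains cafebabe` check from the removed
# "Java Process Class File Download" rule. Every Java .class file starts
# with the magic bytes CA FE BA BE, so seeing them in a network read by a
# java process suggests a class file is being downloaded (Log4Shell-style).
JAVA_CLASS_MAGIC = bytes.fromhex("cafebabe")

def buffer_contains_class_file(buf: bytes) -> bool:
    """Return True if the buffer contains the Java class-file magic."""
    return JAVA_CLASS_MAGIC in buf

if __name__ == "__main__":
    print(buffer_contains_class_file(b"\xca\xfe\xba\xbe\x00\x00\x00\x41"))  # True
    print(buffer_contains_class_file(b"GET / HTTP/1.1\r\n"))                # False
```

Note the caveat carried in the rule's own comment: the check only works when the read is visible via `recvfrom`, which is not the case for Java 17+.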
-- macro: kevt - condition: (jevt.value[/stage] in (k8s_audit_stages)) - -- macro: kevt_started - condition: (jevt.value[/stage]=ResponseStarted) - -# If you wish to restrict activity to a specific set of users, override/append to this list. -# users created by kops are included -- list: vertical_pod_autoscaler_users - items: ["vpa-recommender", "vpa-updater"] - -- list: allowed_k8s_users - items: [ - "minikube", "minikube-user", "kubelet", "kops", "admin", "kube", "kube-proxy", "kube-apiserver-healthcheck", - "kubernetes-admin", - vertical_pod_autoscaler_users, - cluster-autoscaler, - "system:addon-manager", - "cloud-controller-manager", - "system:kube-controller-manager" - ] - -- list: eks_allowed_k8s_users - items: [ - "eks:node-manager", - "eks:certificate-controller", - "eks:fargate-scheduler", - "eks:k8s-metrics", - "eks:authenticator", - "eks:cluster-event-watcher", - "eks:nodewatcher", - "eks:pod-identity-mutating-webhook" - ] -- -- rule: Disallowed K8s User - desc: Detect any k8s operation by users outside of an allowed set of users. - condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users) and not ka.user.name in (eks_allowed_k8s_users) - output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# In a local/user rules file, you could override this macro to -# explicitly enumerate the container images that you want to run in -# your environment. In this main falco rules file, there isn't any way -# to know all the containers that can run, so any container is -# allowed, by using the always_true macro. 
In the overridden macro, the condition -# would look something like (ka.req.pod.containers.image.repository in (my-repo/my-image)) -- macro: allowed_k8s_containers - condition: (k8s_audit_always_true) - -- macro: response_successful - condition: (ka.response.code startswith 2) - -- macro: kget - condition: ka.verb=get - -- macro: kcreate - condition: ka.verb=create - -- macro: kmodify - condition: (ka.verb in (create,update,patch)) - -- macro: kdelete - condition: ka.verb=delete - -- macro: pod - condition: ka.target.resource=pods and not ka.target.subresource exists - -- macro: pod_subresource - condition: ka.target.resource=pods and ka.target.subresource exists - -- macro: deployment - condition: ka.target.resource=deployments - -- macro: service - condition: ka.target.resource=services - -- macro: configmap - condition: ka.target.resource=configmaps - -- macro: namespace - condition: ka.target.resource=namespaces - -- macro: serviceaccount - condition: ka.target.resource=serviceaccounts - -- macro: clusterrole - condition: ka.target.resource=clusterroles - -- macro: clusterrolebinding - condition: ka.target.resource=clusterrolebindings - -- macro: role - condition: ka.target.resource=roles - -- macro: secret - condition: ka.target.resource=secrets - -- macro: health_endpoint - condition: ka.uri=/healthz - -- macro: live_endpoint - condition: ka.uri=/livez - -- macro: ready_endpoint - condition: ka.uri=/readyz - -- rule: Create Disallowed Pod - desc: > - Detect an attempt to start a pod with a container image outside of a list of allowed images. 
- condition: kevt and pod and kcreate and not allowed_k8s_containers - output: Pod started with container not in allowed list (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- rule: Create Privileged Pod - desc: > - Detect an attempt to start a pod with a privileged container - condition: kevt and pod and kcreate and ka.req.pod.containers.privileged intersects (true) and not ka.req.pod.containers.image.repository in (falco_privileged_images) - output: Pod started with privileged container (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- macro: sensitive_vol_mount - condition: > - (ka.req.pod.volumes.hostpath intersects (/proc, /var/run/docker.sock, /, /etc, /root, /var/run/crio/crio.sock, /home/admin, /var/lib/kubelet, /var/lib/kubelet/pki, /etc/kubernetes, /etc/kubernetes/manifests)) - -- rule: Create Sensitive Mount Pod - desc: > - Detect an attempt to start a pod with a volume from a sensitive host directory (i.e. /proc). - Exceptions are made for known trusted images. 
- condition: kevt and pod and kcreate and sensitive_vol_mount and not ka.req.pod.containers.image.repository in (falco_sensitive_mount_images) - output: Pod started with sensitive mount (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image volumes=%jevt.value[/requestObject/spec/volumes]) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# These container images are allowed to run with hostnetwork=true -- list: falco_hostnetwork_images - items: [ - gcr.io/google-containers/prometheus-to-sd, - gcr.io/projectcalico-org/typha, - gcr.io/projectcalico-org/node, - gke.gcr.io/gke-metadata-server, - gke.gcr.io/kube-proxy, - gke.gcr.io/netd-amd64, - k8s.gcr.io/ip-masq-agent-amd64, - k8s.gcr.io/prometheus-to-sd, - ] - -# Corresponds to K8s CIS Benchmark 1.7.4 -- rule: Create HostNetwork Pod - desc: Detect an attempt to start a pod using the host network. - condition: kevt and pod and kcreate and ka.req.pod.host_network intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostnetwork_images) - output: Pod started using host network (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- list: falco_hostpid_images - items: [] - -- rule: Create HostPid Pod - desc: Detect an attempt to start a pod using the host pid namespace. - condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostpid_images) - output: Pod started using host pid namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- list: falco_hostipc_images - items: [] - -- rule: Create HostIPC Pod - desc: Detect an attempt to start a pod using the host ipc namespace.
- condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostipc_images) - output: Pod started using host ipc namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- macro: user_known_node_port_service - condition: (k8s_audit_never_true) - -- rule: Create NodePort Service - desc: > - Detect an attempt to start a service with a NodePort service type - condition: kevt and service and kcreate and ka.req.service.type=NodePort and not user_known_node_port_service - output: NodePort Service Created (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace ports=%ka.req.service.ports) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- macro: contains_private_credentials - condition: > - (ka.req.configmap.obj contains "aws_access_key_id" or - ka.req.configmap.obj contains "aws-access-key-id" or - ka.req.configmap.obj contains "aws_s3_access_key_id" or - ka.req.configmap.obj contains "aws-s3-access-key-id" or - ka.req.configmap.obj contains "password" or - ka.req.configmap.obj contains "passphrase") - -- rule: Create/Modify Configmap With Private Credentials - desc: > - Detect creating/modifying a configmap containing a private credential (aws key, password, etc.) - condition: kevt and configmap and kmodify and contains_private_credentials - output: K8s configmap with private credential (user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# Corresponds to K8s CIS Benchmark, 1.1.1. 
-- rule: Anonymous Request Allowed - desc: > - Detect any request made by the anonymous user that was allowed - condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint and not live_endpoint and not ready_endpoint - output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# Roughly corresponds to K8s CIS Benchmark, 1.1.12. In this case, -# it notifies on an attempt to exec/attach to a privileged container. - -# Ideally, we'd add a more stringent rule that detects attaches/execs -# to a privileged pod, but that requires the engine for k8s audit -# events to be stateful, so it could know if a container named in an -# attach request was created privileged or not. For now, we have a -# less severe rule that detects attaches/execs to any pod. -# -# For the same reason, you can't use things like image names/prefixes, -# as the event that creates the pod (which has the images) is a -# separate event from the actual exec/attach to the pod.
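The comment above explains why the deleted rule flags execs to *any* pod: correlating an exec request with the earlier pod-creation event would require the k8s audit engine to keep state. A rough Python sketch of what such a stateful correlator could look like — the event shapes and method names are hypothetical, not the Falco engine's API:

```python
# Hypothetical stateful correlator sketched from the comment above: remember
# whether each pod was created privileged, then flag exec/attach requests
# that target one of those pods. This is exactly the state the (stateless)
# k8s audit rule engine lacks.
class ExecCorrelator:
    def __init__(self) -> None:
        self._privileged_pods: set[str] = set()

    def on_pod_created(self, pod_name: str, privileged: bool) -> None:
        """Record the privileged flag seen in the pod-creation audit event."""
        if privileged:
            self._privileged_pods.add(pod_name)

    def on_exec(self, pod_name: str) -> bool:
        """Return True if the exec/attach targets a pod created privileged."""
        return pod_name in self._privileged_pods

corr = ExecCorrelator()
corr.on_pod_created("build-agent", privileged=True)
corr.on_pod_created("web", privileged=False)
print(corr.on_exec("build-agent"))  # True
print(corr.on_exec("web"))          # False
```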
- -- macro: user_known_exec_pod_activities - condition: (k8s_audit_never_true) - -- rule: Attach/Exec Pod - desc: > - Detect any attempt to attach/exec to a pod - condition: kevt_started and pod_subresource and kcreate and ka.target.subresource in (exec,attach) and not user_known_exec_pod_activities - output: Attach/Exec to pod (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace action=%ka.target.subresource command=%ka.uri.param[command]) - priority: NOTICE - source: k8s_audit - tags: [k8s] - -- macro: user_known_pod_debug_activities - condition: (k8s_audit_never_true) - -# Only works when feature gate EphemeralContainers is enabled -- rule: EphemeralContainers Created - desc: > - Detect any ephemeral container created - condition: kevt and pod_subresource and kmodify and ka.target.subresource in (ephemeralcontainers) and not user_known_pod_debug_activities - output: Ephemeral container is created in pod (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace ephemeral_container_name=%jevt.value[/requestObject/ephemeralContainers/0/name] ephemeral_container_image=%jevt.value[/requestObject/ephemeralContainers/0/image]) - priority: NOTICE - source: k8s_audit - tags: [k8s] - -# In a local/user rules file, you can append to this list to add additional allowed namespaces -- list: allowed_namespaces - items: [kube-system, kube-public, default] - -- rule: Create Disallowed Namespace - desc: Detect any attempt to create a namespace outside of a set of known namespaces - condition: kevt and namespace and kcreate and not ka.target.name in (allowed_namespaces) - output: Disallowed namespace created (user=%ka.user.name ns=%ka.target.name) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# Only defined for backwards compatibility. Use the more specific -# user_allowed_kube_namespace_image_list instead.
-- list: user_trusted_image_list - items: [] - -- list: user_allowed_kube_namespace_image_list - items: [user_trusted_image_list] - -# Only defined for backwards compatibility. Use the more specific -# allowed_kube_namespace_image_list instead. -- list: k8s_image_list - items: [] - -- list: allowed_kube_namespace_image_list - items: [ - gcr.io/google-containers/prometheus-to-sd, - gcr.io/projectcalico-org/node, - gke.gcr.io/addon-resizer, - gke.gcr.io/heapster, - gke.gcr.io/gke-metadata-server, - k8s.gcr.io/ip-masq-agent-amd64, - k8s.gcr.io/kube-apiserver, - gke.gcr.io/kube-proxy, - gke.gcr.io/netd-amd64, - gke.gcr.io/watcher-daemonset, - k8s.gcr.io/addon-resizer, - k8s.gcr.io/prometheus-to-sd, - k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64, - k8s.gcr.io/k8s-dns-kube-dns-amd64, - k8s.gcr.io/k8s-dns-sidecar-amd64, - k8s.gcr.io/metrics-server-amd64, - kope/kube-apiserver-healthcheck, - k8s_image_list - ] - -- macro: allowed_kube_namespace_pods - condition: (ka.req.pod.containers.image.repository in (user_allowed_kube_namespace_image_list) or - ka.req.pod.containers.image.repository in (allowed_kube_namespace_image_list)) - -# Detect any new pod created in the kube-system namespace -- rule: Pod Created in Kube Namespace - desc: Detect any attempt to create a pod in the kube-system or kube-public namespaces - condition: kevt and pod and kcreate and ka.target.namespace in (kube-system, kube-public) and not allowed_kube_namespace_pods - output: Pod created in kube namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- list: user_known_sa_list - items: [] - -- list: known_sa_list - items: [ - coredns, - coredns-autoscaler, - cronjob-controller, - daemon-set-controller, - deployment-controller, - disruption-controller, - endpoint-controller, - endpointslice-controller, - endpointslicemirroring-controller, - generic-garbage-collector, - horizontal-pod-autoscaler, -
job-controller, - namespace-controller, - node-controller, - persistent-volume-binder, - pod-garbage-collector, - pv-protection-controller, - pvc-protection-controller, - replicaset-controller, - resourcequota-controller, - root-ca-cert-publisher, - service-account-controller, - statefulset-controller - ] - -- macro: trusted_sa - condition: (ka.target.name in (known_sa_list, user_known_sa_list)) - -# Detect creating a service account in the kube-system/kube-public namespace -- rule: Service Account Created in Kube Namespace - desc: Detect any attempt to create a serviceaccount in the kube-system or kube-public namespaces - condition: kevt and serviceaccount and kcreate and ka.target.namespace in (kube-system, kube-public) and response_successful and not trusted_sa - output: Service account created in kube namespace (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# Detect any modify/delete to any ClusterRole starting with -# "system:". "system:coredns" is excluded as changes are expected in -# normal operation. 
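The exclusion logic described in the comment above — flag writes to `system:` ClusterRoles except for roles expected to change in normal operation — reduces to a prefix-plus-allow-list check. A Python sketch for illustration, with the verb set mirroring the `kmodify`/`kdelete` macros (the function name is ours, not Falco's):

```python
# Sketch of the check described in the comment above: modifications or
# deletions of ClusterRoles/Roles named "system:*" are suspicious, except
# for roles that are expected to change during normal operation.
WRITE_VERBS = {"create", "update", "patch", "delete"}  # kmodify + kdelete
EXPECTED_CHANGES = {"system:coredns", "system:managed-certificate-controller"}

def suspicious_system_role_change(role_name: str, verb: str) -> bool:
    return (
        verb in WRITE_VERBS
        and role_name.startswith("system:")
        and role_name not in EXPECTED_CHANGES
    )

print(suspicious_system_role_change("system:node", "patch"))     # True
print(suspicious_system_role_change("system:coredns", "patch"))  # False
print(suspicious_system_role_change("my-role", "delete"))        # False
```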
-- rule: System ClusterRole Modified/Deleted - desc: Detect any attempt to modify/delete a ClusterRole/Role starting with system - condition: kevt and (role or clusterrole) and (kmodify or kdelete) and (ka.target.name startswith "system:") and - not ka.target.name in (system:coredns, system:managed-certificate-controller) - output: System ClusterRole/Role modified or deleted (user=%ka.user.name role=%ka.target.name ns=%ka.target.namespace action=%ka.verb) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# Detect any attempt to create a ClusterRoleBinding to the cluster-admin user -# (expand this to any built-in cluster role that does "sensitive" things) -- rule: Attach to cluster-admin Role - desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user - condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin - output: Cluster Role Binding to cluster-admin role (user=%ka.user.name subject=%ka.req.binding.subjects) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- rule: ClusterRole With Wildcard Created - desc: Detect any attempt to create a Role/ClusterRole with wildcard resources or verbs - condition: kevt and (role or clusterrole) and kcreate and (ka.req.role.rules.resources intersects ("*") or ka.req.role.rules.verbs intersects ("*")) - output: Created Role/ClusterRole with wildcard (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- macro: writable_verbs - condition: > - (ka.req.role.rules.verbs intersects (create, update, patch, delete, deletecollection)) - -- rule: ClusterRole With Write Privileges Created - desc: Detect any attempt to create a Role/ClusterRole that can perform write-related actions - condition: kevt and (role or clusterrole) and kcreate and writable_verbs - output: Created Role/ClusterRole with write privileges (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules) - priority: NOTICE - 
source: k8s_audit - tags: [k8s] - -- rule: ClusterRole With Pod Exec Created - desc: Detect any attempt to create a Role/ClusterRole that can exec to pods - condition: kevt and (role or clusterrole) and kcreate and ka.req.role.rules.resources intersects ("pods/exec") - output: Created Role/ClusterRole with pod exec privileges (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# The rules below this point are less discriminatory and generally -# represent a stream of activity for a cluster. If you wish to disable -# these events, modify the following macro. -- macro: consider_activity_events - condition: (k8s_audit_always_true) - -- macro: kactivity - condition: (kevt and consider_activity_events) - -- rule: K8s Deployment Created - desc: Detect any attempt to create a deployment - condition: (kactivity and kcreate and deployment and response_successful) - output: K8s Deployment Created (user=%ka.user.name deployment=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Deployment Deleted - desc: Detect any attempt to delete a deployment - condition: (kactivity and kdelete and deployment and response_successful) - output: K8s Deployment Deleted (user=%ka.user.name deployment=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Service Created - desc: Detect any attempt to create a service - condition: (kactivity and kcreate and service and response_successful) - output: K8s Service Created (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Service Deleted - desc: Detect any attempt to delete 
a service - condition: (kactivity and kdelete and service and response_successful) - output: K8s Service Deleted (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s ConfigMap Created - desc: Detect any attempt to create a configmap - condition: (kactivity and kcreate and configmap and response_successful) - output: K8s ConfigMap Created (user=%ka.user.name configmap=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s ConfigMap Deleted - desc: Detect any attempt to delete a configmap - condition: (kactivity and kdelete and configmap and response_successful) - output: K8s ConfigMap Deleted (user=%ka.user.name configmap=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Namespace Created - desc: Detect any attempt to create a namespace - condition: (kactivity and kcreate and namespace and response_successful) - output: K8s Namespace Created (user=%ka.user.name namespace=%ka.target.name resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Namespace Deleted - desc: Detect any attempt to delete a namespace - condition: (kactivity and non_system_user and kdelete and namespace and response_successful) - output: K8s Namespace Deleted (user=%ka.user.name namespace=%ka.target.name resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Serviceaccount Created - desc: Detect any attempt to create a service account - condition: (kactivity and kcreate and serviceaccount and response_successful) - output: 
K8s Serviceaccount Created (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Serviceaccount Deleted - desc: Detect any attempt to delete a service account - condition: (kactivity and kdelete and serviceaccount and response_successful) - output: K8s Serviceaccount Deleted (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Role/Clusterrole Created - desc: Detect any attempt to create a cluster role/role - condition: (kactivity and kcreate and (clusterrole or role) and response_successful) - output: K8s Cluster Role Created (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Role/Clusterrole Deleted - desc: Detect any attempt to delete a cluster role/role - condition: (kactivity and kdelete and (clusterrole or role) and response_successful) - output: K8s Cluster Role Deleted (user=%ka.user.name role=%ka.target.name resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Role/Clusterrolebinding Created - desc: Detect any attempt to create a clusterrolebinding - condition: (kactivity and kcreate and clusterrolebinding and response_successful) - output: K8s Cluster Role Binding Created (user=%ka.user.name binding=%ka.target.name subjects=%ka.req.binding.subjects role=%ka.req.binding.role resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Role/Clusterrolebinding Deleted - desc: Detect any attempt to delete a clusterrolebinding - 
condition: (kactivity and kdelete and clusterrolebinding and response_successful) - output: K8s Cluster Role Binding Deleted (user=%ka.user.name binding=%ka.target.name resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Secret Created - desc: Detect any attempt to create a secret. Service account tokens are excluded. - condition: (kactivity and kcreate and secret and ka.target.namespace!=kube-system and non_system_user and response_successful) - output: K8s Secret Created (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Secret Deleted - desc: Detect any attempt to delete a secret. Service account tokens are excluded. - condition: (kactivity and kdelete and secret and ka.target.namespace!=kube-system and non_system_user and response_successful) - output: K8s Secret Deleted (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: INFO - source: k8s_audit - tags: [k8s] - -- rule: K8s Secret Get Successfully - desc: > - Detect any attempt to get a secret. Service account tokens are excluded. - condition: > - secret and kget - and kactivity - and response_successful - output: K8s Secret Get Successfully (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: ERROR - source: k8s_audit - tags: [k8s] - -- rule: K8s Secret Get Unsuccessfully Tried - desc: > - Detect an unsuccessful attempt to get the secret. Service account tokens are excluded. 
- condition: > - secret and kget - and kactivity - and not response_successful - output: K8s Secret Get Unsuccessfully Tried (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason) - priority: WARNING - source: k8s_audit - tags: [k8s] - -# This rule generally matches all events, and as a result is disabled -# by default. If you wish to enable these events, modify the -# following macro. -# condition: (jevt.rawtime exists) -- macro: consider_all_events - condition: (k8s_audit_never_true) - -- macro: kall - condition: (kevt and consider_all_events) - -- rule: All K8s Audit Events - desc: Match all K8s Audit Events - condition: kall - output: K8s Audit Event received (user=%ka.user.name verb=%ka.verb uri=%ka.uri obj=%jevt.obj) - priority: DEBUG - source: k8s_audit - tags: [k8s] - - -# This macro disables the following rule; change to k8s_audit_never_true to enable it -- macro: allowed_full_admin_users - condition: (k8s_audit_always_true) - -# This list includes some of the default user names for an administrator in several K8s installations -- list: full_admin_k8s_users - items: ["admin", "kubernetes-admin", "kubernetes-admin@kubernetes", "kubernetes-admin@cluster.local", "minikube-user"] - -# This rule detects an operation triggered by a user name that is -# included in the list of those that are default administrators upon -# cluster creation. This may signify a permission setting that is too broad. -# As we can't check the role of the user on a general ka.* event, this -# may or may not be an administrator. Customize the full_admin_k8s_users -# list to your needs, and activate at your discretion. - -# # How to test: -# # Execute any kubectl command connected using the default cluster user, e.g.: -# kubectl create namespace rule-test - -- rule: Full K8s Administrative Access - desc: Detect any k8s operation by a user name that may be an administrator with full access.
- condition: > - kevt - and non_system_user - and ka.user.name in (full_admin_k8s_users) - and not allowed_full_admin_users - output: K8s Operation performed by full admin user (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code) - priority: WARNING - source: k8s_audit - tags: [k8s] - -- macro: ingress - condition: ka.target.resource=ingresses - -- macro: ingress_tls - condition: (jevt.value[/requestObject/spec/tls] exists) - -# # How to test: -# # Create an ingress.yaml file with content: -# apiVersion: networking.k8s.io/v1beta1 -# kind: Ingress -# metadata: -# name: test-ingress -# annotations: -# nginx.ingress.kubernetes.io/rewrite-target: / -# spec: -# rules: -# - http: -# paths: -# - path: /testpath -# backend: -# serviceName: test -# servicePort: 80 -# # Execute: kubectl apply -f ingress.yaml - -- rule: Ingress Object without TLS Certificate Created - desc: Detect any attempt to create an ingress without TLS certification. - condition: > - (kactivity and kcreate and ingress and response_successful and not ingress_tls) - output: > - K8s Ingress Without TLS Cert Created (user=%ka.user.name ingress=%ka.target.name - namespace=%ka.target.namespace) - source: k8s_audit - priority: WARNING - tags: [k8s, network] - -- macro: node - condition: ka.target.resource=nodes - -- macro: allow_all_k8s_nodes - condition: (k8s_audit_always_true) - -- list: allowed_k8s_nodes - items: [] - -# # How to test: -# # Create a Falco monitored cluster with Kops -# # Increase the number of minimum nodes with: -# kops edit ig nodes -# kops apply --yes - -- rule: Untrusted Node Successfully Joined the Cluster - desc: > - Detect a node successfully joined the cluster outside of the list of allowed nodes. 
- condition: > - kevt and node - and kcreate - and response_successful - and not allow_all_k8s_nodes - and not ka.target.name in (allowed_k8s_nodes) - output: Node not in allowed list successfully joined the cluster (user=%ka.user.name node=%ka.target.name) - priority: ERROR - source: k8s_audit - tags: [k8s] - -- rule: Untrusted Node Unsuccessfully Tried to Join the Cluster - desc: > - Detect an unsuccessful attempt to join the cluster for a node not in the list of allowed nodes. - condition: > - kevt and node - and kcreate - and not response_successful - and not allow_all_k8s_nodes - and not ka.target.name in (allowed_k8s_nodes) - output: Node not in allowed list tried unsuccessfully to join the cluster (user=%ka.user.name node=%ka.target.name reason=%ka.response.reason) - priority: WARNING - source: k8s_audit - tags: [k8s] diff --git a/charts/falco/falco/charts/falco/templates/NOTES.txt b/charts/falco/falco/charts/falco/templates/NOTES.txt index ed8ba80c6..b077ff77b 100644 --- a/charts/falco/falco/charts/falco/templates/NOTES.txt +++ b/charts/falco/falco/charts/falco/templates/NOTES.txt @@ -1,8 +1,9 @@ +{{- if eq .Values.controller.kind "daemonset" }} Falco agents are spinning up on each node in your cluster. After a few seconds, they are going to start monitoring your containers looking for security issues. {{printf "\n" }} - +{{- end}} {{- if .Values.integrations }} WARNING: The following integrations have been deprecated and removed - gcscc @@ -18,7 +19,28 @@ No further action should be required. {{- if not .Values.falcosidekick.enabled }} Tip: You can easily forward Falco events to Slack, Kafka, AWS Lambda and more with falcosidekick. -Full list of outputs: https://github.com/falcosecurity/charts/tree/master/falcosidekick. +Full list of outputs: https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick. You can enable its deployment with `--set falcosidekick.enabled=true` or in your values.yaml. 
-See: https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml for configuration values.
+See: https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml for configuration values.
+ {{- end}}
+
+
+{{- if (has .Values.driver.kind (list "module" "modern-bpf")) -}}
+{{- println }}
+WARNING(drivers):
+{{- printf "\nThe driver kind: \"%s\" is an alias and might be removed in the future.\n" .Values.driver.kind -}}
+{{- $driver := "" -}}
+{{- if eq .Values.driver.kind "module" -}}
+{{- $driver = "kmod" -}}
+{{- else if eq .Values.driver.kind "modern-bpf" -}}
+{{- $driver = "modern_ebpf" -}}
+{{- end -}}
+{{- printf "Please use \"%s\" instead." $driver}}
+{{- end -}}
+
+{{- if and (not (empty .Values.falco.load_plugins)) (or .Values.falcoctl.artifact.follow.enabled .Values.falcoctl.artifact.install.enabled) }}
+
+WARNING:
+{{ printf "It seems you are loading the following plugins %v, please make sure to install them by adding the correct reference to falcoctl.config.artifact.install.refs: %v" .Values.falco.load_plugins .Values.falcoctl.config.artifact.install.refs -}}
+{{- end }}
\ No newline at end of file
diff --git a/charts/falco/falco/charts/falco/templates/_helpers.tpl b/charts/falco/falco/charts/falco/templates/_helpers.tpl
index 2671f484d..f611a5397 100644
--- a/charts/falco/falco/charts/falco/templates/_helpers.tpl
+++ b/charts/falco/falco/charts/falco/templates/_helpers.tpl
@@ -30,6 +30,13 @@ Create chart name and version as used by the chart label.
 {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
 {{- end }}
 
+{{/*
+Allow the release namespace to be overridden
+*/}}
+{{- define "falco.namespace" -}}
+{{- default .Release.Namespace .Values.namespaceOverride -}}
+{{- end -}}
+
 {{/*
 Common labels
 */}}
@@ -50,6 +57,19 @@ app.kubernetes.io/name: {{ include "falco.name" . }}
 app.kubernetes.io/instance: {{ .Release.Name }}
 {{- end }}
 
+{{/*
+Renders a value that contains a template.
+Usage:
+{{ include "falco.renderTemplate" ( dict "value" .Values.path.to.the.Value "context" $) }}
+*/}}
+{{- define "falco.renderTemplate" -}}
+    {{- if typeIs "string" .value }}
+        {{- tpl .value .context }}
+    {{- else }}
+        {{- tpl (.value | toYaml) .context }}
+    {{- end }}
+{{- end -}}
+
 {{/*
 Create the name of the service account to use
 */}}
@@ -83,6 +103,13 @@ Return the proper Falco driver loader image name
 {{- .Values.driver.loader.initContainer.image.tag | default .Chart.AppVersion -}}
 {{- end -}}
 
+{{/*
+Return the proper Falcoctl image name
+*/}}
+{{- define "falcoctl.image" -}}
+{{ printf "%s/%s:%s" .Values.falcoctl.image.registry .Values.falcoctl.image.repository .Values.falcoctl.image.tag }}
+{{- end -}}
+
 {{/*
 Extract the unixSocket's directory path
 */}}
@@ -148,4 +175,260 @@ Get port from .Values.falco.grpc.bind_addres.
 {{- else -}}
 {{- fail $error -}}
 {{- end -}}
-{{- end -}}
\ No newline at end of file
+{{- end -}}
+
+{{/*
+Disable the syscall source if some conditions are met.
+By default the syscall source is always enabled in falco. If no syscall source is enabled, falco
+exits. Here we check that no producer of syscall events has been configured, and if so
+we just disable the syscall source.
+*/}}
+{{- define "falco.configSyscallSource" -}}
+{{- $userspaceDisabled := true -}}
+{{- $gvisorDisabled := (ne .Values.driver.kind "gvisor") -}}
+{{- $driverDisabled := (not .Values.driver.enabled) -}}
+{{- if or (has "-u" .Values.extra.args) (has "--userspace" .Values.extra.args) -}}
+{{- $userspaceDisabled = false -}}
+{{- end -}}
+{{- if and $driverDisabled $userspaceDisabled $gvisorDisabled }}
+- --disable-source
+- syscall
+{{- end -}}
+{{- end -}}
+
+{{/*
+We need the falco binary in order to generate the configuration for gVisor. This init container
+is deployed within the Falco pod when gVisor is enabled. The image is the same Falco image we are
+deploying, and the configuration logic is a bash script passed as an argument on the fly.
This solution should
+be temporary and will stay here until we move this logic to the falcoctl tool.
+*/}}
+{{- define "falco.gvisor.initContainer" -}}
- name: {{ .Chart.Name }}-gvisor-init
+  image: {{ include "falco.image" . }}
+  imagePullPolicy: {{ .Values.image.pullPolicy }}
+  args:
+    - /bin/bash
+    - -c
+    - |
+      set -o errexit
+      set -o nounset
+      set -o pipefail
+
+      root={{ .Values.driver.gvisor.runsc.root }}
+      config={{ .Values.driver.gvisor.runsc.config }}
+
+      echo "* Configuring Falco+gVisor integration..."
+      # Check if gVisor is configured on the node.
+      echo "* Checking for /host${config} file..."
+      if [[ -f /host${config} ]]; then
+        echo "* Generating the Falco configuration..."
+        /usr/bin/falco --gvisor-generate-config=${root}/falco.sock > /host${root}/pod-init.json
+        sed -E -i.orig '/"ignore_missing" : true,/d' /host${root}/pod-init.json
+        if [[ -z $(grep pod-init-config /host${config}) ]]; then
+          echo "* Updating the runsc config file /host${config}..."
+          echo "  pod-init-config = \"${root}/pod-init.json\"" >> /host${config}
+        fi
+        # Endpoint inside the container is different from outside, add
+        # "/host" to the endpoint path inside the container.
+        echo "* Setting the updated Falco configuration to /gvisor-config/pod-init.json..."
+        sed 's/"endpoint" : "\/run/"endpoint" : "\/host\/run/' /host${root}/pod-init.json > /gvisor-config/pod-init.json
+      else
+        echo "* File /host${config} not found."
+        echo "* Please make sure that gVisor is configured on the current node and/or that the runsc root and config file paths are correct"
+        exit 1
+      fi
+      echo "* Falco+gVisor correctly configured."
+ exit 0 + volumeMounts: + - mountPath: /host{{ .Values.driver.gvisor.runsc.path }} + name: runsc-path + readOnly: true + - mountPath: /host{{ .Values.driver.gvisor.runsc.root }} + name: runsc-root + - mountPath: /host{{ .Values.driver.gvisor.runsc.config }} + name: runsc-config + - mountPath: /gvisor-config + name: falco-gvisor-config +{{- end -}} + + +{{- define "falcoctl.initContainer" -}} +- name: falcoctl-artifact-install + image: {{ include "falcoctl.image" . }} + imagePullPolicy: {{ .Values.falcoctl.image.pullPolicy }} + args: + - artifact + - install + {{- with .Values.falcoctl.artifact.install.args }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.falcoctl.artifact.install.resources }} + resources: + {{- toYaml . | nindent 4 }} + {{- end }} + securityContext: + {{- if .Values.falcoctl.artifact.install.securityContext }} + {{- toYaml .Values.falcoctl.artifact.install.securityContext | nindent 4 }} + {{- end }} + volumeMounts: + - mountPath: {{ .Values.falcoctl.config.artifact.install.pluginsDir }} + name: plugins-install-dir + - mountPath: {{ .Values.falcoctl.config.artifact.install.rulesfilesDir }} + name: rulesfiles-install-dir + - mountPath: /etc/falcoctl + name: falcoctl-config-volume + {{- with .Values.falcoctl.artifact.install.mounts.volumeMounts }} + {{- toYaml . | nindent 4 }} + {{- end }} + env: + {{- if .Values.falcoctl.artifact.install.env }} + {{- include "falco.renderTemplate" ( dict "value" .Values.falcoctl.artifact.install.env "context" $) | nindent 4 }} + {{- end }} +{{- end -}} + +{{- define "falcoctl.sidecar" -}} +- name: falcoctl-artifact-follow + image: {{ include "falcoctl.image" . }} + imagePullPolicy: {{ .Values.falcoctl.image.pullPolicy }} + args: + - artifact + - follow + {{- with .Values.falcoctl.artifact.follow.args }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.falcoctl.artifact.follow.resources }} + resources: + {{- toYaml . 
| nindent 4 }}
+  {{- end }}
+  securityContext:
+  {{- if .Values.falcoctl.artifact.follow.securityContext }}
+    {{- toYaml .Values.falcoctl.artifact.follow.securityContext | nindent 4 }}
+  {{- end }}
+  volumeMounts:
+    - mountPath: {{ .Values.falcoctl.config.artifact.follow.pluginsDir }}
+      name: plugins-install-dir
+    - mountPath: {{ .Values.falcoctl.config.artifact.follow.rulesfilesDir }}
+      name: rulesfiles-install-dir
+    - mountPath: /etc/falcoctl
+      name: falcoctl-config-volume
+    {{- with .Values.falcoctl.artifact.follow.mounts.volumeMounts }}
+      {{- toYaml . | nindent 4 }}
+    {{- end }}
+  env:
+  {{- if .Values.falcoctl.artifact.follow.env }}
+  {{- include "falco.renderTemplate" ( dict "value" .Values.falcoctl.artifact.follow.env "context" $) | nindent 4 }}
+  {{- end }}
+{{- end -}}
+
+
+{{/*
+ Build configuration for the k8smeta plugin and update the relevant variables.
+ * The configuration that needs to be built up is the init_config section:
+    init_config:
+      collectorPort: 0
+      collectorHostname: ""
+      nodeName: ""
+   The falco chart exposes this configuration through two variables:
+    * collectors.kubernetes.collectorHostname;
+    * collectors.kubernetes.collectorPort;
+   If those two variables are not set, then we take those values from the k8smetacollector subchart.
+   The hostname is built using the name of the service that exposes the collector endpoints, and the
+   port is taken directly from the service's port that exposes the gRPC endpoint.
+   We reuse the helpers from the k8smetacollector subchart by passing down the variables. There is one
+   hardcoded value: the chart name of the k8s-metacollector chart.
+
+ * The falcoctl configuration is updated to allow plugin artifacts to be installed. The refs in the install
+   section are updated by adding the reference for the k8smeta plugin that needs to be installed.
+ NOTE: It seems that the named templates run during the validation process, and then again during the
+render phase.
In our case we are setting global variables that persist across the various phases.
+We need to make the helper idempotent.
+*/}}
+{{- define "k8smeta.configuration" -}}
+{{- if and .Values.collectors.kubernetes.enabled .Values.driver.enabled -}}
+{{- $hostname := "" -}}
+{{- if .Values.collectors.kubernetes.collectorHostname -}}
+{{- $hostname = .Values.collectors.kubernetes.collectorHostname -}}
+{{- else -}}
+{{- $collectorContext := (dict "Release" .Release "Values" (index .Values "k8s-metacollector") "Chart" (dict "Name" "k8s-metacollector")) -}}
+{{- $hostname = printf "%s.%s.svc" (include "k8s-metacollector.fullname" $collectorContext) (include "k8s-metacollector.namespace" $collectorContext) -}}
+{{- end -}}
+{{- $hasConfig := false -}}
+{{- range .Values.falco.plugins -}}
+{{- if eq (get . "name") "k8smeta" -}}
+{{ $hasConfig = true -}}
+{{- end -}}
+{{- end -}}
+{{- if not $hasConfig -}}
+{{- $listenPort := default (index .Values "k8s-metacollector" "service" "ports" "broker-grpc" "port") .Values.collectors.kubernetes.collectorPort -}}
+{{- $listenPort = int $listenPort -}}
+{{- $pluginConfig := dict "name" "k8smeta" "library_path" "libk8smeta.so" "init_config" (dict "collectorHostname" $hostname "collectorPort" $listenPort "nodeName" "${FALCO_K8S_NODE_NAME}") -}}
+{{- $newConfig := append .Values.falco.plugins $pluginConfig -}}
+{{- $_ := set .Values.falco "plugins" ($newConfig | uniq) -}}
+{{- $loadedPlugins := append .Values.falco.load_plugins "k8smeta" -}}
+{{- $_ = set .Values.falco "load_plugins" ($loadedPlugins | uniq) -}}
+{{- end -}}
+{{- $_ := set .Values.falcoctl.config.artifact.install "refs" ((append .Values.falcoctl.config.artifact.install.refs .Values.collectors.kubernetes.pluginRef) | uniq)}}
+{{- $_ = set .Values.falcoctl.config.artifact "allowedTypes" ((append .Values.falcoctl.config.artifact.allowedTypes "plugin") | uniq)}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Based on the user input it populates the driver configuration in the falco
config map. +*/}} +{{- define "falco.engineConfiguration" -}} +{{- if .Values.driver.enabled -}} +{{- $supportedDrivers := list "kmod" "ebpf" "modern_ebpf" "gvisor" "auto" -}} +{{- $aliasDrivers := list "module" "modern-bpf" -}} +{{- if and (not (has .Values.driver.kind $supportedDrivers)) (not (has .Values.driver.kind $aliasDrivers)) -}} +{{- fail (printf "unsupported driver kind: \"%s\". Supported drivers %s, alias %s" .Values.driver.kind $supportedDrivers $aliasDrivers) -}} +{{- end -}} +{{- if or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") -}} +{{- $kmodConfig := dict "kind" "kmod" "kmod" (dict "buf_size_preset" .Values.driver.kmod.bufSizePreset "drop_failed_exit" .Values.driver.kmod.dropFailedExit) -}} +{{- $_ := set .Values.falco "engine" $kmodConfig -}} +{{- else if eq .Values.driver.kind "ebpf" -}} +{{- $ebpfConfig := dict "kind" "ebpf" "ebpf" (dict "buf_size_preset" .Values.driver.ebpf.bufSizePreset "drop_failed_exit" .Values.driver.ebpf.dropFailedExit "probe" .Values.driver.ebpf.path) -}} +{{- $_ := set .Values.falco "engine" $ebpfConfig -}} +{{- else if or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf") -}} +{{- $ebpfConfig := dict "kind" "modern_ebpf" "modern_ebpf" (dict "buf_size_preset" .Values.driver.modernEbpf.bufSizePreset "drop_failed_exit" .Values.driver.modernEbpf.dropFailedExit "cpus_for_each_buffer" .Values.driver.modernEbpf.cpusForEachBuffer) -}} +{{- $_ := set .Values.falco "engine" $ebpfConfig -}} +{{- else if eq .Values.driver.kind "gvisor" -}} +{{- $root := printf "/host%s/k8s.io" .Values.driver.gvisor.runsc.root -}} +{{- $gvisorConfig := dict "kind" "gvisor" "gvisor" (dict "config" "/gvisor-config/pod-init.json" "root" $root) -}} +{{- $_ := set .Values.falco "engine" $gvisorConfig -}} +{{- else if eq .Values.driver.kind "auto" -}} +{{- $engineConfig := dict "kind" "modern_ebpf" "kmod" (dict "buf_size_preset" .Values.driver.kmod.bufSizePreset "drop_failed_exit" 
.Values.driver.kmod.dropFailedExit) "ebpf" (dict "buf_size_preset" .Values.driver.ebpf.bufSizePreset "drop_failed_exit" .Values.driver.ebpf.dropFailedExit "probe" .Values.driver.ebpf.path) "modern_ebpf" (dict "buf_size_preset" .Values.driver.modernEbpf.bufSizePreset "drop_failed_exit" .Values.driver.modernEbpf.dropFailedExit "cpus_for_each_buffer" .Values.driver.modernEbpf.cpusForEachBuffer) -}}
+{{- $_ := set .Values.falco "engine" $engineConfig -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+It returns "true" if the driver loader has to be enabled, otherwise "false".
+*/}}
+{{- define "driverLoader.enabled" -}}
+{{- if or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf") (eq .Values.driver.kind "gvisor") (not .Values.driver.enabled) (not .Values.driver.loader.enabled) -}}
+false
+{{- else -}}
+true
+{{- end -}}
+{{- end -}}
+
+{{/*
+Based on the user input it populates the metrics configuration in the falco config map.
+*/}}
+{{- define "falco.metricsConfiguration" -}}
+{{- if .Values.metrics.enabled -}}
+{{- $_ := set .Values.falco.webserver "prometheus_metrics_enabled" true -}}
+{{- $_ = set .Values.falco.webserver "enabled" true -}}
+{{- $_ = set .Values.falco.metrics "enabled" .Values.metrics.enabled -}}
+{{- $_ = set .Values.falco.metrics "interval" .Values.metrics.interval -}}
+{{- $_ = set .Values.falco.metrics "output_rule" .Values.metrics.outputRule -}}
+{{- $_ = set .Values.falco.metrics "rules_counters_enabled" .Values.metrics.rulesCountersEnabled -}}
+{{- $_ = set .Values.falco.metrics "resource_utilization_enabled" .Values.metrics.resourceUtilizationEnabled -}}
+{{- $_ = set .Values.falco.metrics "state_counters_enabled" .Values.metrics.stateCountersEnabled -}}
+{{- $_ = set .Values.falco.metrics "kernel_event_counters_enabled" .Values.metrics.kernelEventCountersEnabled -}}
+{{- $_ = set .Values.falco.metrics "libbpf_stats_enabled" .Values.metrics.libbpfStatsEnabled -}}
+{{- $_ = set .Values.falco.metrics
"convert_memory_to_mb" .Values.metrics.convertMemoryToMB -}} +{{- $_ = set .Values.falco.metrics "include_empty_values" .Values.metrics.includeEmptyValues -}} +{{- end -}} +{{- end -}} diff --git a/charts/falco/falco/charts/falco/templates/certs-secret.yaml b/charts/falco/falco/charts/falco/templates/certs-secret.yaml index e23214527..176f15733 100644 --- a/charts/falco/falco/charts/falco/templates/certs-secret.yaml +++ b/charts/falco/falco/charts/falco/templates/certs-secret.yaml @@ -4,7 +4,7 @@ apiVersion: v1 kind: Secret metadata: name: {{ include "falco.fullname" $ }}-certs - namespace: {{ $.Release.Namespace }} + namespace: {{ include "falco.namespace" $ }} labels: {{- include "falco.labels" $ | nindent 4 }} type: Opaque diff --git a/charts/falco/falco/charts/falco/templates/client-certs-secret.yaml b/charts/falco/falco/charts/falco/templates/client-certs-secret.yaml new file mode 100644 index 000000000..cd643ee92 --- /dev/null +++ b/charts/falco/falco/charts/falco/templates/client-certs-secret.yaml @@ -0,0 +1,18 @@ +{{- if and .Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "falco.fullname" . 
}}-client-certs + namespace: {{ .Release.Namespace }} + labels: + {{- include "falco.labels" $ | nindent 4 }} +type: Opaque +data: + {{ $key := .Values.certs.client.key }} + client.key: {{ $key | b64enc | quote }} + {{ $crt := .Values.certs.client.crt }} + client.crt: {{ $crt | b64enc | quote }} + falcoclient.pem: {{ print $key $crt | b64enc | quote }} + ca.crt: {{ .Values.certs.ca.crt | b64enc | quote }} + ca.pem: {{ .Values.certs.ca.crt | b64enc | quote }} +{{- end }} diff --git a/charts/falco/falco/charts/falco/templates/configmap.yaml b/charts/falco/falco/charts/falco/templates/configmap.yaml index 01147c260..f48fc88e7 100644 --- a/charts/falco/falco/charts/falco/templates/configmap.yaml +++ b/charts/falco/falco/charts/falco/templates/configmap.yaml @@ -2,11 +2,13 @@ apiVersion: v1 kind: ConfigMap metadata: name: {{ include "falco.fullname" . }} - namespace: {{ .Release.Namespace }} + namespace: {{ include "falco.namespace" . }} labels: {{- include "falco.labels" . | nindent 4 }} data: falco.yaml: |- {{- include "falco.falcosidekickConfig" . }} + {{- include "k8smeta.configuration" . -}} + {{- include "falco.engineConfiguration" . -}} + {{- include "falco.metricsConfiguration" . -}} {{- toYaml .Values.falco | nindent 4 }} -{{ (.Files.Glob "rules/*").AsConfig | indent 2 }} \ No newline at end of file diff --git a/charts/falco/falco/charts/falco/templates/daemonset.yaml b/charts/falco/falco/charts/falco/templates/daemonset.yaml index 9bc0886e4..ec4575748 100644 --- a/charts/falco/falco/charts/falco/templates/daemonset.yaml +++ b/charts/falco/falco/charts/falco/templates/daemonset.yaml @@ -3,9 +3,16 @@ apiVersion: apps/v1 kind: DaemonSet metadata: name: {{ include "falco.fullname" . }} - namespace: {{ .Release.Namespace }} + namespace: {{ include "falco.namespace" . }} labels: {{- include "falco.labels" . 
| nindent 4 }} + {{- if .Values.controller.labels }} + {{- toYaml .Values.controller.labels | nindent 4 }} + {{- end }} + {{- if .Values.controller.annotations }} + annotations: + {{ toYaml .Values.controller.annotations | nindent 4 }} + {{- end }} spec: selector: matchLabels: @@ -16,4 +23,4 @@ spec: updateStrategy: {{- toYaml . | nindent 4 }} {{- end }} -{{- end }} \ No newline at end of file +{{- end }} diff --git a/charts/falco/falco/charts/falco/templates/deployment.yaml b/charts/falco/falco/charts/falco/templates/deployment.yaml index 412760778..708eb88f0 100644 --- a/charts/falco/falco/charts/falco/templates/deployment.yaml +++ b/charts/falco/falco/charts/falco/templates/deployment.yaml @@ -3,14 +3,24 @@ apiVersion: apps/v1 kind: Deployment metadata: name: {{ include "falco.fullname" . }} - namespace: {{ .Release.Namespace }} + namespace: {{ include "falco.namespace" . }} labels: {{- include "falco.labels" . | nindent 4 }} + {{- if .Values.controller.labels }} + {{- toYaml .Values.controller.labels | nindent 4 }} + {{- end }} + {{- if .Values.controller.annotations }} + annotations: + {{ toYaml .Values.controller.annotations | nindent 4 }} + {{- end }} spec: replicas: {{ .Values.controller.deployment.replicas }} + {{- if .Values.controller.deployment.revisionHistoryLimit }} + revisionHistoryLimit: {{ .Values.controller.deployment.revisionHistoryLimit }} + {{- end }} selector: matchLabels: {{- include "falco.selectorLabels" . | nindent 6 }} template: {{- include "falco.podTemplate" . 
| nindent 4 }} -{{- end }} \ No newline at end of file +{{- end }} diff --git a/charts/falco/falco/charts/falco/templates/falcoctl-configmap.yaml b/charts/falco/falco/charts/falco/templates/falcoctl-configmap.yaml new file mode 100644 index 000000000..7b769e870 --- /dev/null +++ b/charts/falco/falco/charts/falco/templates/falcoctl-configmap.yaml @@ -0,0 +1,13 @@ +{{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "falco.fullname" . }}-falcoctl + namespace: {{ include "falco.namespace" . }} + labels: + {{- include "falco.labels" . | nindent 4 }} +data: + falcoctl.yaml: |- + {{- include "k8smeta.configuration" . -}} + {{- toYaml .Values.falcoctl.config | nindent 4 }} +{{- end }} diff --git a/charts/falco/falco/charts/falco/templates/grpc-service.yaml b/charts/falco/falco/charts/falco/templates/grpc-service.yaml index ffc38813d..cdfbe14ba 100644 --- a/charts/falco/falco/charts/falco/templates/grpc-service.yaml +++ b/charts/falco/falco/charts/falco/templates/grpc-service.yaml @@ -3,13 +3,13 @@ kind: Service apiVersion: v1 metadata: name: {{ include "falco.fullname" . }}-grpc - namespace: {{ .Release.Namespace }} + namespace: {{ include "falco.namespace" . }} labels: {{- include "falco.labels" . | nindent 4 }} spec: clusterIP: None selector: - app: {{ include "falco.fullname" . }} + {{- include "falco.selectorLabels" . | nindent 4 }} ports: - protocol: TCP port: {{ include "grpc.port" . }} diff --git a/charts/falco/falco/charts/falco/templates/pod-template.tpl b/charts/falco/falco/charts/falco/templates/pod-template.tpl index 1ca5dfdd7..d062336d8 100644 --- a/charts/falco/falco/charts/falco/templates/pod-template.tpl +++ b/charts/falco/falco/charts/falco/templates/pod-template.tpl @@ -46,6 +46,10 @@ spec: imagePullSecrets: {{- toYaml . 
| nindent 4 }} {{- end }} + {{- if eq .Values.driver.kind "gvisor" }} + hostNetwork: true + hostPID: true + {{- end }} containers: - name: {{ .Chart.Name }} image: {{ include "falco.image" . }} @@ -56,25 +60,20 @@ spec: {{- include "falco.securityContext" . | nindent 8 }} args: - /usr/bin/falco + {{- include "falco.configSyscallSource" . | indent 8 }} {{- with .Values.collectors }} {{- if .enabled }} + {{- if .docker.enabled }} + - --cri + - /var/run/{{ base .docker.socket }} + {{- end }} {{- if .containerd.enabled }} - --cri - - /run/containerd/containerd.sock + - /run/containerd/{{ base .containerd.socket }} {{- end }} {{- if .crio.enabled }} - --cri - - /run/crio/crio.sock - {{- end }} - {{- if .kubernetes.enabled }} - - -K - - {{ .kubernetes.apiAuth }} - - -k - - {{ .kubernetes.apiUrl }} - {{- if .kubernetes.enableNodeFilter }} - - --k8s-node - - "$(FALCO_K8S_NODE_NAME)" - {{- end }} + - /run/crio/{{ base .crio.socket }} {{- end }} - -pk {{- end }} @@ -83,24 +82,21 @@ spec: {{- toYaml . 
| nindent 8 }} {{- end }} env: + - name: HOST_ROOT + value: /host + - name: FALCO_HOSTNAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName - name: FALCO_K8S_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - {{- if or (not .Values.driver.enabled) (and .Values.driver.loader.enabled .Values.driver.loader.initContainer.enabled) }} - - name: SKIP_DRIVER_LOADER - value: - {{- end }} - {{- if and .Values.driver.enabled (eq .Values.driver.kind "ebpf") }} - - name: FALCO_BPF_PROBE - value: {{ .Values.driver.ebpf.path }} + {{- if .Values.extra.env }} + {{- include "falco.renderTemplate" ( dict "value" .Values.extra.env "context" $) | nindent 8 }} {{- end }} - {{- range $key, $value := .Values.extra.env }} - - name: "{{ $key }}" - value: "{{ $value }}" - {{- end }} - {{- if .Values.falco.webserver.enabled }} tty: {{ .Values.tty }} + {{- if .Values.falco.webserver.enabled }} livenessProbe: initialDelaySeconds: {{ .Values.healthChecks.livenessProbe.initialDelaySeconds }} timeoutSeconds: {{ .Values.healthChecks.livenessProbe.timeoutSeconds }} @@ -123,13 +119,27 @@ spec: {{- end }} {{- end }} volumeMounts: + {{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }} + {{- if has "rulesfile" .Values.falcoctl.config.artifact.allowedTypes }} + - mountPath: /etc/falco + name: rulesfiles-install-dir + {{- end }} + {{- if has "plugin" .Values.falcoctl.config.artifact.allowedTypes }} + - mountPath: /usr/share/falco/plugins + name: plugins-install-dir + {{- end }} + {{- end }} + {{- if eq (include "driverLoader.enabled" .) 
"true" }} + - mountPath: /etc/falco/config.d + name: specialized-falco-configs + {{- end }} - mountPath: /root/.falco name: root-falco-fs {{- if or .Values.driver.enabled .Values.mounts.enforceProcMount }} - mountPath: /host/proc name: proc-fs {{- end }} - {{- if and .Values.driver.enabled (not .Values.driver.loader.initContainer.enabled) }} + {{- if and .Values.driver.enabled (not .Values.driver.loader.enabled) }} readOnly: true - mountPath: /host/boot name: boot-fs @@ -139,14 +149,18 @@ spec: - mountPath: /host/usr name: usr-fs readOnly: true + {{- end }} + {{- if .Values.driver.enabled }} - mountPath: /host/etc name: etc-fs readOnly: true - {{- end }} - {{- if and .Values.driver.enabled (eq .Values.driver.kind "module") }} + {{- end -}} + {{- if and .Values.driver.enabled (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") (eq .Values.driver.kind "auto")) }} - mountPath: /host/dev name: dev-fs readOnly: true + - name: sys-fs + mountPath: /sys/module {{- end }} {{- if and .Values.driver.enabled (and (eq .Values.driver.kind "ebpf") (contains "falco-no-driver" .Values.image.repository)) }} - name: debugfs @@ -155,21 +169,22 @@ spec: {{- with .Values.collectors }} {{- if .enabled }} {{- if .docker.enabled }} - - mountPath: /host/var/run/docker.sock + - mountPath: /host/var/run/ name: docker-socket {{- end }} {{- if .containerd.enabled }} - - mountPath: /host/run/containerd/containerd.sock + - mountPath: /host/run/containerd/ name: containerd-socket {{- end }} {{- if .crio.enabled }} - - mountPath: /host/run/crio/crio.sock + - mountPath: /host/run/crio/ name: crio-socket {{- end }} {{- end }} {{- end }} - - mountPath: /etc/falco - name: config-volume + - mountPath: /etc/falco/falco.yaml + name: falco-yaml + subPath: falco.yaml {{- if .Values.customRules }} - mountPath: /etc/falco/rules.d name: rules-volume @@ -179,20 +194,53 @@ spec: name: certs-volume readOnly: true {{- end }} + {{- if or .Values.certs.existingClientSecret (and 
.Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt) }} + - mountPath: /etc/falco/certs/client + name: client-certs-volume + readOnly: true + {{- end }} {{- include "falco.unixSocketVolumeMount" . | nindent 8 -}} {{- with .Values.mounts.volumeMounts }} {{- toYaml . | nindent 8 }} {{- end }} + {{- if eq .Values.driver.kind "gvisor" }} + - mountPath: /usr/local/bin/runsc + name: runsc-path + readOnly: true + - mountPath: /host{{ .Values.driver.gvisor.runsc.root }} + name: runsc-root + - mountPath: /host{{ .Values.driver.gvisor.runsc.config }} + name: runsc-config + - mountPath: /gvisor-config + name: falco-gvisor-config + {{- end }} + {{- if .Values.falcoctl.artifact.follow.enabled }} + {{- include "falcoctl.sidecar" . | nindent 4 }} + {{- end }} initContainers: {{- with .Values.extra.initContainers }} {{- toYaml . | nindent 4 }} {{- end }} - {{- if .Values.driver.enabled }} - {{- if and .Values.driver.loader.enabled .Values.driver.loader.initContainer.enabled }} + {{- if eq .Values.driver.kind "gvisor" }} + {{- include "falco.gvisor.initContainer" . | nindent 4 }} + {{- end }} + {{- if eq (include "driverLoader.enabled" .) "true" }} {{- include "falco.driverLoader.initContainer" . | nindent 4 }} {{- end }} + {{- if .Values.falcoctl.artifact.install.enabled }} + {{- include "falcoctl.initContainer" . | nindent 4 }} {{- end }} volumes: + {{- if eq (include "driverLoader.enabled" .) 
"true" }} + - name: specialized-falco-configs + emptyDir: {} + {{- end }} + {{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }} + - name: plugins-install-dir + emptyDir: {} + - name: rulesfiles-install-dir + emptyDir: {} + {{- end }} - name: root-falco-fs emptyDir: {} {{- if .Values.driver.enabled }} @@ -209,10 +257,13 @@ spec: hostPath: path: /etc {{- end }} - {{- if and .Values.driver.enabled (eq .Values.driver.kind "module") }} + {{- if and .Values.driver.enabled (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") (eq .Values.driver.kind "auto")) }} - name: dev-fs hostPath: path: /dev + - name: sys-fs + hostPath: + path: /sys/module {{- end }} {{- if and .Values.driver.enabled (and (eq .Values.driver.kind "ebpf") (contains "falco-no-driver" .Values.image.repository)) }} - name: debugfs @@ -224,17 +275,17 @@ spec: {{- if .docker.enabled }} - name: docker-socket hostPath: - path: {{ .docker.socket }} + path: {{ dir .docker.socket }} {{- end }} {{- if .containerd.enabled }} - name: containerd-socket hostPath: - path: {{ .containerd.socket }} + path: {{ dir .containerd.socket }} {{- end }} {{- if .crio.enabled }} - name: crio-socket hostPath: - path: {{ .crio.socket }} + path: {{ dir .crio.socket }} {{- end }} {{- end }} {{- end }} @@ -243,22 +294,33 @@ spec: hostPath: path: /proc {{- end }} - - name: config-volume + {{- if eq .Values.driver.kind "gvisor" }} + - name: runsc-path + hostPath: + path: {{ .Values.driver.gvisor.runsc.path }}/runsc + type: File + - name: runsc-root + hostPath: + path: {{ .Values.driver.gvisor.runsc.root }} + - name: runsc-config + hostPath: + path: {{ .Values.driver.gvisor.runsc.config }} + type: File + - name: falco-gvisor-config + emptyDir: {} + {{- end }} + - name: falcoctl-config-volume + configMap: + name: {{ include "falco.fullname" . }}-falcoctl + items: + - key: falcoctl.yaml + path: falcoctl.yaml + - name: falco-yaml configMap: name: {{ include "falco.fullname" . 
}}
       items:
-        - key: falco.yaml
-          path: falco.yaml
-        - key: falco_rules.yaml
-          path: falco_rules.yaml
-        - key: falco_rules.local.yaml
-          path: falco_rules.local.yaml
-        - key: application_rules.yaml
-          path: rules.available/application_rules.yaml
-        - key: k8s_audit_rules.yaml
-          path: k8s_audit_rules.yaml
-        - key: aws_cloudtrail_rules.yaml
-          path: aws_cloudtrail_rules.yaml
+        - key: falco.yaml
+          path: falco.yaml
 {{- if .Values.customRules }}
 - name: rules-volume
   configMap:
@@ -273,20 +335,36 @@ spec:
       secretName: {{ include "falco.fullname" . }}-certs
   {{- end }}
   {{- end }}
+  {{- if or .Values.certs.existingClientSecret (and .Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt) }}
+  - name: client-certs-volume
+    secret:
+      {{- if .Values.certs.existingClientSecret }}
+      secretName: {{ .Values.certs.existingClientSecret }}
+      {{- else }}
+      secretName: {{ include "falco.fullname" . }}-client-certs
+      {{- end }}
+  {{- end }}
 {{- include "falco.unixSocketVolume" . | nindent 4 -}}
 {{- with .Values.mounts.volumes }}
   {{- toYaml . | nindent 4 }}
 {{- end }}
-{{- end -}}
+  {{- end -}}
 {{- define "falco.driverLoader.initContainer" -}}
 - name: {{ .Chart.Name }}-driver-loader
   image: {{ include "falco.driverLoader.image" . }}
   imagePullPolicy: {{ .Values.driver.loader.initContainer.image.pullPolicy }}
-  {{- with .Values.driver.loader.initContainer.args }}
   args:
+  {{- with .Values.driver.loader.initContainer.args }}
   {{- toYaml . | nindent 4 }}
   {{- end }}
+  {{- if eq .Values.driver.kind "module" }}
+  - kmod
+  {{- else if eq .Values.driver.kind "modern-bpf"}}
+  - modern_ebpf
+  {{- else }}
+  - {{ .Values.driver.kind }}
+  {{- end }}
   {{- with .Values.driver.loader.initContainer.resources }}
   resources:
     {{- toYaml . | nindent 4 }}
@@ -294,7 +372,7 @@ spec:
   securityContext:
   {{- if .Values.driver.loader.initContainer.securityContext }}
     {{- toYaml .Values.driver.loader.initContainer.securityContext | nindent 4 }}
-  {{- else if eq .Values.driver.kind "module" }}
+  {{- else if (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") (eq .Values.driver.kind "auto")) }}
     privileged: true
   {{- end }}
   volumeMounts:
@@ -314,25 +392,42 @@ spec:
   - mountPath: /host/etc
     name: etc-fs
     readOnly: true
+  - mountPath: /etc/falco/config.d
+    name: specialized-falco-configs
   env:
-  {{- if eq .Values.driver.kind "ebpf" }}
-  - name: FALCO_BPF_PROBE
-    value: {{ .Values.driver.ebpf.path }}
+  - name: HOST_ROOT
+    value: /host
+  {{- if .Values.driver.loader.initContainer.env }}
+  {{- include "falco.renderTemplate" ( dict "value" .Values.driver.loader.initContainer.env "context" $) | nindent 4 }}
   {{- end }}
-  {{- range $key, $value := .Values.driver.loader.initContainer.env }}
-  - name: "{{ $key }}"
-    value: "{{ $value }}"
+  {{- if eq .Values.driver.kind "auto" }}
+  - name: FALCOCTL_DRIVER_CONFIG_NAMESPACE
+    valueFrom:
+      fieldRef:
+        fieldPath: metadata.namespace
+  - name: FALCOCTL_DRIVER_CONFIG_CONFIGMAP
+    value: {{ include "falco.fullname" . }}
+  {{- else }}
+  - name: FALCOCTL_DRIVER_CONFIG_UPDATE_FALCO
+    value: "false"
   {{- end }}
 {{- end -}}
 {{- define "falco.securityContext" -}}
 {{- $securityContext := dict -}}
 {{- if .Values.driver.enabled -}}
-  {{- if eq .Values.driver.kind "module" -}}
+  {{- if (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") (eq .Values.driver.kind "auto")) -}}
     {{- $securityContext := set $securityContext "privileged" true -}}
   {{- end -}}
   {{- if eq .Values.driver.kind "ebpf" -}}
     {{- if .Values.driver.ebpf.leastPrivileged -}}
+      {{- $securityContext := set $securityContext "capabilities" (dict "add" (list "SYS_ADMIN" "SYS_RESOURCE" "SYS_PTRACE")) -}}
+    {{- else -}}
+      {{- $securityContext := set $securityContext "privileged" true -}}
+    {{- end -}}
+  {{- end -}}
+  {{- if (or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf")) -}}
+    {{- if .Values.driver.modernEbpf.leastPrivileged -}}
      {{- $securityContext := set $securityContext "capabilities" (dict "add" (list "BPF" "SYS_RESOURCE" "PERFMON" "SYS_PTRACE")) -}}
     {{- else -}}
      {{- $securityContext := set $securityContext "privileged" true -}}
diff --git a/charts/falco/falco/charts/falco/templates/role.yaml b/charts/falco/falco/charts/falco/templates/role.yaml
new file mode 100644
index 000000000..e24877726
--- /dev/null
+++ b/charts/falco/falco/charts/falco/templates/role.yaml
@@ -0,0 +1,17 @@
+{{- if and .Values.rbac.create (eq .Values.driver.kind "auto")}}
+kind: Role
+apiVersion: {{ include "rbac.apiVersion" . }}
+metadata:
+  name: {{ include "falco.fullname" . }}
+  labels:
+    {{- include "falco.labels" . | nindent 4 }}
+rules:
+  - apiGroups:
+      - ""
+    resources:
+      - configmaps
+    verbs:
+      - get
+      - list
+      - update
+{{- end }}
diff --git a/charts/falco/falco/charts/falco/templates/clusterrolebinding.yaml b/charts/falco/falco/charts/falco/templates/roleBinding.yaml
similarity index 70%
rename from charts/falco/falco/charts/falco/templates/clusterrolebinding.yaml
rename to charts/falco/falco/charts/falco/templates/roleBinding.yaml
index 74f9f65f3..4981156c3 100644
--- a/charts/falco/falco/charts/falco/templates/clusterrolebinding.yaml
+++ b/charts/falco/falco/charts/falco/templates/roleBinding.yaml
@@ -1,5 +1,5 @@
-{{- if .Values.rbac.create }}
-kind: ClusterRoleBinding
+{{- if and .Values.rbac.create (eq .Values.driver.kind "auto")}}
+kind: RoleBinding
 apiVersion: {{ include "rbac.apiVersion" . }}
 metadata:
   name: {{ include "falco.fullname" . }}
@@ -8,9 +8,9 @@ metadata:
 subjects:
   - kind: ServiceAccount
     name: {{ include "falco.serviceAccountName" . }}
-    namespace: {{ .Release.Namespace }}
+    namespace: {{ include "falco.namespace" . }}
 roleRef:
-  kind: ClusterRole
+  kind: Role
   name: {{ include "falco.fullname" . }}
   apiGroup: rbac.authorization.k8s.io
 {{- end }}
diff --git a/charts/falco/falco/charts/falco/templates/rules-configmap.yaml b/charts/falco/falco/charts/falco/templates/rules-configmap.yaml
index a63ec4a5b..4739bec11 100644
--- a/charts/falco/falco/charts/falco/templates/rules-configmap.yaml
+++ b/charts/falco/falco/charts/falco/templates/rules-configmap.yaml
@@ -3,7 +3,7 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: {{ include "falco.fullname" . }}-rules
-  namespace: {{ .Release.Namespace }}
+  namespace: {{ include "falco.namespace" . }}
   labels:
     {{- include "falco.labels" . | nindent 4 }}
 data:
diff --git a/charts/falco/falco/charts/falco/templates/securitycontextconstraints.yaml b/charts/falco/falco/charts/falco/templates/securitycontextconstraints.yaml
index 7a9dc286a..9d2cf59ca 100644
--- a/charts/falco/falco/charts/falco/templates/securitycontextconstraints.yaml
+++ b/charts/falco/falco/charts/falco/templates/securitycontextconstraints.yaml
@@ -6,7 +6,7 @@ metadata:
     kubernetes.io/description: |
       This provides the minimum requirements Falco to run in Openshift.
   name: {{ include "falco.serviceAccountName" . }}
-  namespace: {{ .Release.Namespace }}
+  namespace: {{ include "falco.namespace" . }}
   labels:
     {{- include "falco.labels" . | nindent 4 }}
 allowHostDirVolumePlugin: true
@@ -34,10 +34,10 @@ seccompProfiles:
 supplementalGroups:
   type: RunAsAny
 users:
-- system:serviceaccount:{{ .Release.Namespace }}:{{ include "falco.serviceAccountName" . }}
+- system:serviceaccount:{{ include "falco.namespace" . }}:{{ include "falco.serviceAccountName" . }}
 volumes:
-- hostPath
+- configMap
 - emptyDir
+- hostPath
 - secret
-- configMap
-{{- end }}
\ No newline at end of file
+{{- end }}
diff --git a/charts/falco/falco/charts/falco/templates/service.yaml b/charts/falco/falco/charts/falco/templates/service.yaml
new file mode 100644
index 000000000..d2093ec22
--- /dev/null
+++ b/charts/falco/falco/charts/falco/templates/service.yaml
@@ -0,0 +1,19 @@
+{{- if and .Values.metrics.enabled .Values.metrics.service.create }}
+apiVersion: v1
+kind: Service
+metadata:
+  name: {{ include "falco.fullname" . }}-metrics
+  namespace: {{ include "falco.namespace" . }}
+  labels:
+    {{- include "falco.labels" . | nindent 4 }}
+    type: "falco-metrics"
+spec:
+  type: {{ .Values.metrics.service.type }}
+  ports:
+    - port: {{ .Values.metrics.service.ports.metrics.port }}
+      targetPort: {{ .Values.metrics.service.ports.metrics.targetPort }}
+      protocol: {{ .Values.metrics.service.ports.metrics.protocol }}
+      name: "metrics"
+  selector:
+    {{- include "falco.selectorLabels" . | nindent 4 }}
+{{- end }}
diff --git a/charts/falco/falco/charts/falco/templates/serviceMonitor.yaml b/charts/falco/falco/charts/falco/templates/serviceMonitor.yaml
new file mode 100644
index 000000000..0dea6dd6e
--- /dev/null
+++ b/charts/falco/falco/charts/falco/templates/serviceMonitor.yaml
@@ -0,0 +1,48 @@
+{{- if .Values.serviceMonitor.create }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: {{ include "falco.fullname" . }}
+  {{- if .Values.serviceMonitor.namespace }}
+  namespace: {{ tpl .Values.serviceMonitor.namespace . }}
+  {{- else }}
+  namespace: {{ include "falco.namespace" . }}
+  {{- end }}
+  labels:
+    {{- include "falco.labels" . | nindent 4 }}
+    {{- with .Values.serviceMonitor.labels }}
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
+spec:
+  endpoints:
+    - port: "{{ .Values.serviceMonitor.endpointPort }}"
+      {{- with .Values.serviceMonitor.interval }}
+      interval: {{ . }}
+      {{- end }}
+      {{- with .Values.serviceMonitor.scrapeTimeout }}
+      scrapeTimeout: {{ . }}
+      {{- end }}
+      honorLabels: true
+      path: {{ .Values.serviceMonitor.path }}
+      scheme: {{ .Values.serviceMonitor.scheme }}
+      {{- with .Values.serviceMonitor.tlsConfig }}
+      tlsConfig:
+        {{- toYaml . | nindent 8 }}
+      {{- end }}
+      {{- with .Values.serviceMonitor.relabelings }}
+      relabelings:
+        {{- toYaml . | nindent 8 }}
+      {{- end }}
+  jobLabel: "{{ .Release.Name }}"
+  selector:
+    matchLabels:
+      {{- include "falco.selectorLabels" . | nindent 6 }}
+      type: "falco-metrics"
+  namespaceSelector:
+    matchNames:
+      - {{ include "falco.namespace" . }}
+  {{- with .Values.serviceMonitor.targetLabels }}
+  targetLabels:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
+{{- end }}
diff --git a/charts/falco/falco/charts/falco/templates/serviceaccount.yaml b/charts/falco/falco/charts/falco/templates/serviceaccount.yaml
index 3387dfbb2..65493eb2f 100644
--- a/charts/falco/falco/charts/falco/templates/serviceaccount.yaml
+++ b/charts/falco/falco/charts/falco/templates/serviceaccount.yaml
@@ -1,13 +1,14 @@
+
 {{- if .Values.serviceAccount.create -}}
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: {{ include "falco.serviceAccountName" . }}
-  namespace: {{ .Release.Namespace }}
+  namespace: {{ include "falco.namespace" . }}
   labels:
     {{- include "falco.labels" . | nindent 4 }}
   {{- with .Values.serviceAccount.annotations }}
   annotations:
     {{- toYaml . | nindent 4 }}
   {{- end }}
-{{- end }}
\ No newline at end of file
+{{- end }}
diff --git a/charts/falco/falco/charts/falco/templates/services.yaml b/charts/falco/falco/charts/falco/templates/services.yaml
index 799b94b4f..d105a7dcc 100644
--- a/charts/falco/falco/charts/falco/templates/services.yaml
+++ b/charts/falco/falco/charts/falco/templates/services.yaml
@@ -5,6 +5,7 @@ apiVersion: v1
 kind: Service
 metadata:
   name: {{ include "falco.fullname" $dot }}-{{ $service.name }}
+  namespace: {{ include "falco.namespace" $dot }}
   labels:
     {{- include "falco.labels" $dot | nindent 4 }}
 spec:
diff --git a/charts/falco/falco/charts/falco/tests/unit/consts.go b/charts/falco/falco/charts/falco/tests/unit/consts.go
new file mode 100644
index 000000000..54c4db5d5
--- /dev/null
+++ b/charts/falco/falco/charts/falco/tests/unit/consts.go
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: Apache-2.0
+// Copyright 2024 The Falco Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package unit
+
+const (
+	releaseName                  = "rendered-resources"
+	patternK8sMetacollectorFiles = `# Source: falco/charts/k8s-metacollector/templates/([^\n]+)`
+	k8sMetaPluginName            = "k8smeta"
+)
diff --git a/charts/falco/falco/charts/falco/tests/unit/doc.go b/charts/falco/falco/charts/falco/tests/unit/doc.go
new file mode 100644
index 000000000..244855831
--- /dev/null
+++ b/charts/falco/falco/charts/falco/tests/unit/doc.go
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: Apache-2.0
+// Copyright 2024 The Falco Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package unit contains the unit tests for the Falco chart.
+package unit
diff --git a/charts/falco/falco/charts/falco/tests/unit/driverConfig_test.go b/charts/falco/falco/charts/falco/tests/unit/driverConfig_test.go
new file mode 100644
index 000000000..750161e51
--- /dev/null
+++ b/charts/falco/falco/charts/falco/tests/unit/driverConfig_test.go
@@ -0,0 +1,333 @@
+// SPDX-License-Identifier: Apache-2.0
+// Copyright 2024 The Falco Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package unit
+
+import (
+	"fmt"
+	"path/filepath"
+	"strings"
+	"testing"
+
+	"github.com/gruntwork-io/terratest/modules/helm"
+	"github.com/stretchr/testify/require"
+	corev1 "k8s.io/api/core/v1"
+)
+
+func TestDriverConfigInFalcoConfig(t *testing.T) {
+	t.Parallel()
+
+	helmChartPath, err := filepath.Abs(chartPath)
+	require.NoError(t, err)
+
+	testCases := []struct {
+		name     string
+		values   map[string]string
+		expected func(t *testing.T, config any)
+	}{
+		{
+			"defaultValues",
+			nil,
+			func(t *testing.T, config any) {
+				require.Len(t, config, 4, "should have four items")
+				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "modern_ebpf", kind)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.False(t, dropFailedExit)
+			},
+		},
+		{
+			"kind=kmod",
+			map[string]string{
+				"driver.kind": "kmod",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "kmod", kind)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.False(t, dropFailedExit)
+			},
+		},
+		{
+			"kind=module(alias)",
+			map[string]string{
+				"driver.kind": "module",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "kmod", kind)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.False(t, dropFailedExit)
+			},
+		},
+		{
+			"kmod=config",
+			map[string]string{
+				"driver.kmod.bufSizePreset":  "6",
+				"driver.kmod.dropFailedExit": "true",
+				"driver.kind":                "module",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "kmod", kind)
+				require.Equal(t, float64(6), bufSizePreset)
+				require.True(t, dropFailedExit)
+			},
+		},
+		{
+			"ebpf=config",
+			map[string]string{
+				"driver.kind":                "ebpf",
+				"driver.ebpf.bufSizePreset":  "6",
+				"driver.ebpf.dropFailedExit": "true",
+				"driver.ebpf.path":           "testing/Path/ebpf",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, path, bufSizePreset, dropFailedExit, err := getEbpfConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "ebpf", kind)
+				require.Equal(t, "testing/Path/ebpf", path)
+				require.Equal(t, float64(6), bufSizePreset)
+				require.True(t, dropFailedExit)
+			},
+		},
+		{
+			"kind=ebpf",
+			map[string]string{
+				"driver.kind": "ebpf",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, path, bufSizePreset, dropFailedExit, err := getEbpfConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "ebpf", kind)
+				require.Equal(t, "${HOME}/.falco/falco-bpf.o", path)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.False(t, dropFailedExit)
+			},
+		},
+		{
+			"kind=modern_ebpf",
+			map[string]string{
+				"driver.kind": "modern_ebpf",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "modern_ebpf", kind)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.Equal(t, float64(2), cpusForEachBuffer)
+				require.False(t, dropFailedExit)
+			},
+		},
+		{
+			"kind=modern-bpf(alias)",
+			map[string]string{
+				"driver.kind": "modern-bpf",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "modern_ebpf", kind)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.Equal(t, float64(2), cpusForEachBuffer)
+				require.False(t, dropFailedExit)
+			},
+		},
+		{
+			"modernEbpf=config",
+			map[string]string{
+				"driver.kind":                        "modern-bpf",
+				"driver.modernEbpf.bufSizePreset":    "6",
+				"driver.modernEbpf.dropFailedExit":   "true",
+				"driver.modernEbpf.cpusForEachBuffer": "8",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "modern_ebpf", kind)
+				require.Equal(t, float64(6), bufSizePreset)
+				require.Equal(t, float64(8), cpusForEachBuffer)
+				require.True(t, dropFailedExit)
+			},
+		},
+		{
+			"kind=gvisor",
+			map[string]string{
+				"driver.kind": "gvisor",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, config, root, err := getGvisorConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "gvisor", kind)
+				require.Equal(t, "/gvisor-config/pod-init.json", config)
+				require.Equal(t, "/host/run/containerd/runsc/k8s.io", root)
+			},
+		},
+		{
+			"gvisor=config",
+			map[string]string{
+				"driver.kind":             "gvisor",
+				"driver.gvisor.runsc.root": "/my/root/test",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 2, "should have only two items")
+				kind, config, root, err := getGvisorConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "gvisor", kind)
+				require.Equal(t, "/gvisor-config/pod-init.json", config)
+				require.Equal(t, "/host/my/root/test/k8s.io", root)
+			},
+		},
+		{
+			"kind=auto",
+			map[string]string{
+				"driver.kind": "auto",
+			},
+			func(t *testing.T, config any) {
+				require.Len(t, config, 4, "should have four items")
+				// Check that configuration for kmod has been set.
+				kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "modern_ebpf", kind)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.False(t, dropFailedExit)
+				// Check that configuration for ebpf has been set.
+				kind, path, bufSizePreset, dropFailedExit, err := getEbpfConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "modern_ebpf", kind)
+				require.Equal(t, "${HOME}/.falco/falco-bpf.o", path)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.False(t, dropFailedExit)
+				// Check that configuration for modern_ebpf has been set.
+				kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
+				require.NoError(t, err)
+				require.Equal(t, "modern_ebpf", kind)
+				require.Equal(t, float64(4), bufSizePreset)
+				require.Equal(t, float64(2), cpusForEachBuffer)
+				require.False(t, dropFailedExit)
+			},
+		},
+	}
+
+	for _, testCase := range testCases {
+		testCase := testCase
+
+		t.Run(testCase.name, func(t *testing.T) {
+			t.Parallel()
+
+			options := &helm.Options{SetValues: testCase.values}
+			output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
+
+			var cm corev1.ConfigMap
+			helm.UnmarshalK8SYaml(t, output, &cm)
+			var config map[string]interface{}
+
+			helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
+			engine := config["engine"]
+			testCase.expected(t, engine)
+		})
+	}
+}
+
+func TestDriverConfigWithUnsupportedDriver(t *testing.T) {
+	t.Parallel()
+
+	helmChartPath, err := filepath.Abs(chartPath)
+	require.NoError(t, err)
+
+	values := map[string]string{
+		"driver.kind": "notExisting",
+	}
+	options := &helm.Options{SetValues: values}
+	_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
+	require.Error(t, err)
+	require.True(t, strings.Contains(err.Error(),
+		"unsupported driver kind: \"notExisting\". Supported drivers [kmod ebpf modern_ebpf gvisor auto], alias [module modern-bpf]"))
+}
+
+func getKmodConfig(config interface{}) (kind string, bufSizePreset float64, dropFailedExit bool, err error) {
+	configMap, ok := config.(map[string]interface{})
+	if !ok {
+		err = fmt.Errorf("can't assert type of config")
+		return
+	}
+
+	kind = configMap["kind"].(string)
+	kmod := configMap["kmod"].(map[string]interface{})
+	bufSizePreset = kmod["buf_size_preset"].(float64)
+	dropFailedExit = kmod["drop_failed_exit"].(bool)
+
+	return
+}
+
+func getEbpfConfig(config interface{}) (kind, path string, bufSizePreset float64, dropFailedExit bool, err error) {
+	configMap, ok := config.(map[string]interface{})
+	if !ok {
+		err = fmt.Errorf("can't assert type of config")
+		return
+	}
+
+	kind = configMap["kind"].(string)
+	ebpf := configMap["ebpf"].(map[string]interface{})
+	bufSizePreset = ebpf["buf_size_preset"].(float64)
+	dropFailedExit = ebpf["drop_failed_exit"].(bool)
+	path = ebpf["probe"].(string)
+
+	return
+}
+
+func getModernEbpfConfig(config interface{}) (kind string, bufSizePreset, cpusForEachBuffer float64, dropFailedExit bool, err error) {
+	configMap, ok := config.(map[string]interface{})
+	if !ok {
+		err = fmt.Errorf("can't assert type of config")
+		return
+	}
+
+	kind = configMap["kind"].(string)
+	modernEbpf := configMap["modern_ebpf"].(map[string]interface{})
+	bufSizePreset = modernEbpf["buf_size_preset"].(float64)
+	dropFailedExit = modernEbpf["drop_failed_exit"].(bool)
+	cpusForEachBuffer = modernEbpf["cpus_for_each_buffer"].(float64)
+
+	return
+}
+
+func getGvisorConfig(cfg interface{}) (kind, config, root string, err error) {
+	configMap, ok := cfg.(map[string]interface{})
+	if !ok {
+		err = fmt.Errorf("can't assert type of config")
+		return
+	}
+
+	kind = configMap["kind"].(string)
+	gvisor := configMap["gvisor"].(map[string]interface{})
+	config = gvisor["config"].(string)
+	root = gvisor["root"].(string)
+
+	return
+}
diff --git a/charts/falco/falco/charts/falco/tests/unit/driverLoader_test.go b/charts/falco/falco/charts/falco/tests/unit/driverLoader_test.go
new file mode 100644
index 000000000..6e4fe4273
--- /dev/null
+++ b/charts/falco/falco/charts/falco/tests/unit/driverLoader_test.go
@@ -0,0 +1,265 @@
+// SPDX-License-Identifier: Apache-2.0
+// Copyright 2024 The Falco Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package unit
+
+import (
+	"path/filepath"
+	"testing"
+
+	v1 "k8s.io/api/core/v1"
+
+	"github.com/gruntwork-io/terratest/modules/helm"
+	"github.com/stretchr/testify/require"
+	appsv1 "k8s.io/api/apps/v1"
+)
+
+var (
+	namespaceEnvVar = v1.EnvVar{
+		Name: "FALCOCTL_DRIVER_CONFIG_NAMESPACE",
+		ValueFrom: &v1.EnvVarSource{
+			FieldRef: &v1.ObjectFieldSelector{
+				APIVersion: "",
+				FieldPath:  "metadata.namespace",
+			},
+		}}
+
+	configmapEnvVar = v1.EnvVar{
+		Name:  "FALCOCTL_DRIVER_CONFIG_CONFIGMAP",
+		Value: releaseName + "-falco",
+	}
+
+	updateConfigMapEnvVar = v1.EnvVar{
+		Name:  "FALCOCTL_DRIVER_CONFIG_UPDATE_FALCO",
+		Value: "false",
+	}
+)
+
+// TestDriverLoaderEnabled tests the helper that enables the driver loader based on the configuration.
+func TestDriverLoaderEnabled(t *testing.T) {
+	t.Parallel()
+
+	helmChartPath, err := filepath.Abs(chartPath)
+	require.NoError(t, err)
+
+	testCases := []struct {
+		name     string
+		values   map[string]string
+		expected func(t *testing.T, initContainer any)
+	}{
+		{
+			"defaultValues",
+			nil,
+			func(t *testing.T, initContainer any) {
+				container, ok := initContainer.(v1.Container)
+				require.True(t, ok)
+
+				require.Contains(t, container.Args, "auto")
+				require.True(t, *container.SecurityContext.Privileged)
+				require.Contains(t, container.Env, namespaceEnvVar)
+				require.Contains(t, container.Env, configmapEnvVar)
+				require.NotContains(t, container.Env, updateConfigMapEnvVar)
+
+				// Check that the expected volumes are there.
+				volumeMounts(t, container.VolumeMounts)
+			},
+		},
+		{
+			"driver.kind=modern-bpf",
+			map[string]string{
+				"driver.kind": "modern-bpf",
+			},
+			func(t *testing.T, initContainer any) {
+				require.Equal(t, initContainer, nil)
+			},
+		},
+		{
+			"driver.kind=modern_ebpf",
+			map[string]string{
+				"driver.kind": "modern_ebpf",
+			},
+			func(t *testing.T, initContainer any) {
+				require.Equal(t, initContainer, nil)
+			},
+		},
+		{
+			"driver.kind=gvisor",
+			map[string]string{
+				"driver.kind": "gvisor",
+			},
+			func(t *testing.T, initContainer any) {
+				require.Equal(t, initContainer, nil)
+			},
+		},
+		{
+			"driver.disabled",
+			map[string]string{
+				"driver.enabled": "false",
+			},
+			func(t *testing.T, initContainer any) {
+				require.Equal(t, initContainer, nil)
+			},
+		},
+		{
+			"driver.loader.disabled",
+			map[string]string{
+				"driver.loader.enabled": "false",
+			},
+			func(t *testing.T, initContainer any) {
+				require.Equal(t, initContainer, nil)
+			},
+		},
+		{
+			"driver.kind=kmod",
+			map[string]string{
+				"driver.kind": "kmod",
+			},
+			func(t *testing.T, initContainer any) {
+				container, ok := initContainer.(v1.Container)
+				require.True(t, ok)
+
+				require.Contains(t, container.Args, "kmod")
+				require.True(t, *container.SecurityContext.Privileged)
+				require.NotContains(t, container.Env, namespaceEnvVar)
+				require.NotContains(t, container.Env, configmapEnvVar)
+				require.Contains(t, container.Env, updateConfigMapEnvVar)
+
+				// Check that the expected volumes are there.
+				volumeMounts(t, container.VolumeMounts)
+			},
+		},
+		{
+			"driver.kind=module",
+			map[string]string{
+				"driver.kind": "module",
+			},
+			func(t *testing.T, initContainer any) {
+				container, ok := initContainer.(v1.Container)
+				require.True(t, ok)
+
+				require.Contains(t, container.Args, "kmod")
+				require.True(t, *container.SecurityContext.Privileged)
+				require.NotContains(t, container.Env, namespaceEnvVar)
+				require.NotContains(t, container.Env, configmapEnvVar)
+				require.Contains(t, container.Env, updateConfigMapEnvVar)
+
+				// Check that the expected volumes are there.
+				volumeMounts(t, container.VolumeMounts)
+			},
+		},
+		{
+			"driver.kind=ebpf",
+			map[string]string{
+				"driver.kind": "ebpf",
+			},
+			func(t *testing.T, initContainer any) {
+				container, ok := initContainer.(v1.Container)
+				require.True(t, ok)
+
+				require.Contains(t, container.Args, "ebpf")
+				require.Nil(t, container.SecurityContext)
+				require.NotContains(t, container.Env, namespaceEnvVar)
+				require.Contains(t, container.Env, updateConfigMapEnvVar)
+				require.NotContains(t, container.Env, configmapEnvVar)
+
+				// Check that the expected volumes are there.
+				volumeMounts(t, container.VolumeMounts)
+			},
+		},
+		{
+			"driver.kind=kmod&driver.loader.disabled",
+			map[string]string{
+				"driver.kind":           "kmod",
+				"driver.loader.enabled": "false",
+			},
+			func(t *testing.T, initContainer any) {
+				require.Equal(t, initContainer, nil)
+			},
+		},
+	}
+
+	for _, testCase := range testCases {
+		testCase := testCase
+
+		t.Run(testCase.name, func(t *testing.T) {
+			t.Parallel()
+
+			options := &helm.Options{SetValues: testCase.values}
+			output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/daemonset.yaml"})
+
+			var ds appsv1.DaemonSet
+			helm.UnmarshalK8SYaml(t, output, &ds)
+			for i := range ds.Spec.Template.Spec.InitContainers {
+				if ds.Spec.Template.Spec.InitContainers[i].Name == "falco-driver-loader" {
+					testCase.expected(t, ds.Spec.Template.Spec.InitContainers[i])
+					return
+				}
+			}
+			testCase.expected(t, nil)
+		})
+	}
+}
+
+// volumenMounts checks that the expected volume mounts have been configured.
+func volumeMounts(t *testing.T, volumeMounts []v1.VolumeMount) {
+	rootFalcoFS := v1.VolumeMount{
+		Name:      "root-falco-fs",
+		ReadOnly:  false,
+		MountPath: "/root/.falco",
+	}
+	require.Contains(t, volumeMounts, rootFalcoFS)
+
+	procFS := v1.VolumeMount{
+		Name:      "proc-fs",
+		ReadOnly:  true,
+		MountPath: "/host/proc",
+	}
+	require.Contains(t, volumeMounts, procFS)
+
+	bootFS := v1.VolumeMount{
+		Name:      "boot-fs",
+		ReadOnly:  true,
+		MountPath: "/host/boot",
+	}
+	require.Contains(t, volumeMounts, bootFS)
+
+	libModulesFS := v1.VolumeMount{
+		Name:      "lib-modules",
+		ReadOnly:  false,
+		MountPath: "/host/lib/modules",
+	}
+	require.Contains(t, volumeMounts, libModulesFS)
+
+	usrFS := v1.VolumeMount{
+		Name:      "usr-fs",
+		ReadOnly:  true,
+		MountPath: "/host/usr",
+	}
+	require.Contains(t, volumeMounts, usrFS)
+
+	etcFS := v1.VolumeMount{
+		Name:      "etc-fs",
+		ReadOnly:  true,
+		MountPath: "/host/etc",
+	}
+	require.Contains(t, volumeMounts, etcFS)
+
+	specializedFalcoConfigs := v1.VolumeMount{
+		Name:      "specialized-falco-configs",
+		ReadOnly:  false,
+		MountPath: "/etc/falco/config.d",
+	}
+	require.Contains(t, volumeMounts, specializedFalcoConfigs)
+}
diff --git a/charts/falco/falco/charts/falco/tests/unit/k8smetacollectorDependency_test.go b/charts/falco/falco/charts/falco/tests/unit/k8smetacollectorDependency_test.go
new file mode 100644
index 000000000..88e3954f7
--- /dev/null
+++ b/charts/falco/falco/charts/falco/tests/unit/k8smetacollectorDependency_test.go
@@ -0,0 +1,520 @@
+// SPDX-License-Identifier: Apache-2.0
+// Copyright 2024 The Falco Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package unit
+
+import (
+	"encoding/json"
+	"fmt"
+	"path/filepath"
+	"regexp"
+	"strings"
+	"testing"
+
+	"github.com/gruntwork-io/terratest/modules/helm"
+	"github.com/stretchr/testify/require"
+	corev1 "k8s.io/api/core/v1"
+	"slices"
+)
+
+const chartPath = "../../"
+
+// Using the default values we want to test that all the expected resources for the k8s-metacollector are rendered.
+func TestRenderedResourcesWithDefaultValues(t *testing.T) {
+	t.Parallel()
+
+	helmChartPath, err := filepath.Abs(chartPath)
+	require.NoError(t, err)
+
+	options := &helm.Options{}
+	// Template the chart using the default values.yaml file.
+	output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, nil)
+	require.NoError(t, err)
+
+	// Extract all rendered files from the output.
+	re := regexp.MustCompile(patternK8sMetacollectorFiles)
+	matches := re.FindAllStringSubmatch(output, -1)
+	require.Len(t, matches, 0)
+
+}
+
+func TestRenderedResourcesWhenNotEnabled(t *testing.T) {
+	t.Parallel()
+
+	helmChartPath, err := filepath.Abs(chartPath)
+	require.NoError(t, err)
+
+	// Template files that we expect to be rendered.
+	templateFiles := []string{
+		"clusterrole.yaml",
+		"clusterrolebinding.yaml",
+		"deployment.yaml",
+		"service.yaml",
+		"serviceaccount.yaml",
+	}
+
+	require.NoError(t, err)
+
+	options := &helm.Options{SetValues: map[string]string{
+		"collectors.kubernetes.enabled": "true",
+	}}
+
+	// Template the chart using the default values.yaml file.
+	output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, nil)
+	require.NoError(t, err)
+
+	// Extract all rendered files from the output.
+	re := regexp.MustCompile(patternK8sMetacollectorFiles)
+	matches := re.FindAllStringSubmatch(output, -1)
+
+	var renderedTemplates []string
+	for _, match := range matches {
+		// Filter out test templates.
+		if !strings.Contains(match[1], "test-") {
+			renderedTemplates = append(renderedTemplates, match[1])
+		}
+	}
+
+	// Assert that the rendered resources are equal tho the expected ones.
+	require.Equal(t, len(renderedTemplates), len(templateFiles), "should be equal")
+
+	for _, rendered := range renderedTemplates {
+		require.True(t, slices.Contains(templateFiles, rendered), "template files should contain all the rendered files")
+	}
+}
+
+func TestPluginConfigurationInFalcoConfig(t *testing.T) {
+	t.Parallel()
+
+	helmChartPath, err := filepath.Abs(chartPath)
+	require.NoError(t, err)
+
+	testCases := []struct {
+		name     string
+		values   map[string]string
+		expected func(t *testing.T, config any)
+	}{
+		{
+			"defaultValues",
+			nil,
+			func(t *testing.T, config any) {
+				plugin := config.(map[string]interface{})
+				// Get init config.
+				initConfig, ok := plugin["init_config"]
+				require.True(t, ok)
+				initConfigMap := initConfig.(map[string]interface{})
+				// Check that the collector port is correctly set.
+				port := initConfigMap["collectorPort"]
+				require.Equal(t, float64(45000), port.(float64))
+				// Check that the collector nodeName is correctly set.
+				nodeName := initConfigMap["nodeName"]
+				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
+				// Check that the collector hostname is correctly set.
+				hostName := initConfigMap["collectorHostname"]
+				require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", releaseName), hostName.(string))
+
+				// Check that the library path is set.
+				libPath := plugin["library_path"]
+				require.Equal(t, "libk8smeta.so", libPath)
+			},
+		},
+		{
+			"overrideK8s-metacollectorNamespace",
+			map[string]string{
+				"k8s-metacollector.namespaceOverride": "test",
+			},
+			func(t *testing.T, config any) {
+				plugin := config.(map[string]interface{})
+				// Get init config.
+				initConfig, ok := plugin["init_config"]
+				require.True(t, ok)
+				initConfigMap := initConfig.(map[string]interface{})
+				// Check that the collector port is correctly set.
+				port := initConfigMap["collectorPort"]
+				require.Equal(t, float64(45000), port.(float64))
+				// Check that the collector nodeName is correctly set.
+				nodeName := initConfigMap["nodeName"]
+				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
+				// Check that the collector hostname is correctly set.
+				hostName := initConfigMap["collectorHostname"]
+				require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.test.svc", releaseName), hostName.(string))
+
+				// Check that the library path is set.
+				libPath := plugin["library_path"]
+				require.Equal(t, "libk8smeta.so", libPath)
+			},
+		},
+		{
+			"overrideK8s-metacollectorName",
+			map[string]string{
+				"k8s-metacollector.fullnameOverride": "collector",
+			},
+			func(t *testing.T, config any) {
+				plugin := config.(map[string]interface{})
+				// Get init config.
+				initConfig, ok := plugin["init_config"]
+				require.True(t, ok)
+				initConfigMap := initConfig.(map[string]interface{})
+				// Check that the collector port is correctly set.
+				port := initConfigMap["collectorPort"]
+				require.Equal(t, float64(45000), port.(float64))
+				// Check that the collector nodeName is correctly set.
+				nodeName := initConfigMap["nodeName"]
+				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
+				// Check that the collector hostname is correctly set.
+				hostName := initConfigMap["collectorHostname"]
+				require.Equal(t, "collector.default.svc", hostName.(string))
+
+				// Check that the library path is set.
+				libPath := plugin["library_path"]
+				require.Equal(t, "libk8smeta.so", libPath)
+			},
+		},
+
+		{
+			"overrideK8s-metacollectorNamespaceAndName",
+			map[string]string{
+				"k8s-metacollector.namespaceOverride": "test",
+				"k8s-metacollector.fullnameOverride":  "collector",
+			},
+			func(t *testing.T, config any) {
+				plugin := config.(map[string]interface{})
+				// Get init config.
+				initConfig, ok := plugin["init_config"]
+				require.True(t, ok)
+				initConfigMap := initConfig.(map[string]interface{})
+				// Check that the collector port is correctly set.
+				port := initConfigMap["collectorPort"]
+				require.Equal(t, float64(45000), port.(float64))
+				// Check that the collector nodeName is correctly set.
+				nodeName := initConfigMap["nodeName"]
+				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
+				// Check that the collector hostname is correctly set.
+				hostName := initConfigMap["collectorHostname"]
+				require.Equal(t, "collector.test.svc", hostName.(string))
+
+				// Check that the library path is set.
+				libPath := plugin["library_path"]
+				require.Equal(t, "libk8smeta.so", libPath)
+			},
+		},
+		{
+			"set CollectorHostname",
+			map[string]string{
+				"collectors.kubernetes.collectorHostname": "test",
+			},
+			func(t *testing.T, config any) {
+				plugin := config.(map[string]interface{})
+				// Get init config.
+				initConfig, ok := plugin["init_config"]
+				require.True(t, ok)
+				initConfigMap := initConfig.(map[string]interface{})
+				// Check that the collector port is correctly set.
+				port := initConfigMap["collectorPort"]
+				require.Equal(t, float64(45000), port.(float64))
+				// Check that the collector nodeName is correctly set.
+				nodeName := initConfigMap["nodeName"]
+				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
+				// Check that the collector hostname is correctly set.
+				hostName := initConfigMap["collectorHostname"]
+				require.Equal(t, "test", hostName.(string))
+
+				// Check that the library path is set.
+				libPath := plugin["library_path"]
+				require.Equal(t, "libk8smeta.so", libPath)
+			},
+		},
+
+		{
+			"set CollectorHostname and namespace name",
+			map[string]string{
+				"collectors.kubernetes.collectorHostname": "test-with-override",
+				"k8s-metacollector.namespaceOverride":     "test",
+				"k8s-metacollector.fullnameOverride":      "collector",
+			},
+			func(t *testing.T, config any) {
+				plugin := config.(map[string]interface{})
+				// Get init config.
+				initConfig, ok := plugin["init_config"]
+				require.True(t, ok)
+				initConfigMap := initConfig.(map[string]interface{})
+				// Check that the collector port is correctly set.
+				port := initConfigMap["collectorPort"]
+				require.Equal(t, float64(45000), port.(float64))
+				// Check that the collector nodeName is correctly set.
+				nodeName := initConfigMap["nodeName"]
+				require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
+				// Check that the collector hostname is correctly set.
+				hostName := initConfigMap["collectorHostname"]
+				require.Equal(t, "test-with-override", hostName.(string))
+
+				// Check that the library path is set.
+				libPath := plugin["library_path"]
+				require.Equal(t, "libk8smeta.so", libPath)
+			},
+		},
+
+		{
+			"set collectorPort",
+			map[string]string{
+				"collectors.kubernetes.collectorPort": "8888",
+			},
+			func(t *testing.T, config any) {
+				plugin := config.(map[string]interface{})
+				// Get init config.
+ initConfig, ok := plugin["init_config"]
+ require.True(t, ok)
+ initConfigMap := initConfig.(map[string]interface{})
+ // Check that the collector port is correctly set.
+ port := initConfigMap["collectorPort"]
+ require.Equal(t, float64(8888), port.(float64))
+ // Check that the collector nodeName is correctly set.
+ nodeName := initConfigMap["nodeName"]
+ require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
+ // Check that the collector hostname is correctly set.
+ hostName := initConfigMap["collectorHostname"]
+ require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", releaseName), hostName.(string))
+
+ // Check that the library path is set.
+ libPath := plugin["library_path"]
+ require.Equal(t, "libk8smeta.so", libPath)
+ },
+ },
+ {
+ "driver disabled",
+ map[string]string{
+ "driver.enabled": "false",
+ },
+ func(t *testing.T, config any) {
+ require.Nil(t, config)
+ },
+ },
+ }
+
+ for _, testCase := range testCases {
+ testCase := testCase
+
+ t.Run(testCase.name, func(t *testing.T) {
+ t.Parallel()
+
+ // Enable the collector.
+ if testCase.values != nil {
+ testCase.values["collectors.kubernetes.enabled"] = "true"
+ } else {
+ testCase.values = map[string]string{"collectors.kubernetes.enabled": "true"}
+ }
+
+ options := &helm.Options{SetValues: testCase.values}
+ output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
+
+ var cm corev1.ConfigMap
+ helm.UnmarshalK8SYaml(t, output, &cm)
+ var config map[string]interface{}
+
+ helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
+ plugins := config["plugins"]
+ pluginsArray := plugins.([]interface{})
+ found := false
+ // Find the k8smeta plugin configuration.
+ for _, plugin := range pluginsArray {
+ if name, ok := plugin.(map[string]interface{})["name"]; ok && name == k8sMetaPluginName {
+ testCase.expected(t, plugin)
+ found = true
+ }
+ }
+ if found {
+ // Check that the plugin has been added to the ones that need to be loaded. 
+ loadplugins := config["load_plugins"] + require.True(t, slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName)) + } else { + testCase.expected(t, nil) + loadplugins := config["load_plugins"] + require.True(t, !slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName)) + } + }) + } +} + +// Test that the helper does not overwrite user's configuration. +func TestPluginConfigurationUniqueEntries(t *testing.T) { + t.Parallel() + + pluginsJSON := `[ + { + "init_config": null, + "library_path": "libk8saudit.so", + "name": "k8saudit", + "open_params": "http://:9765/k8s-audit" + }, + { + "library_path": "libcloudtrail.so", + "name": "cloudtrail" + }, + { + "init_config": "", + "library_path": "libjson.so", + "name": "json" + }, + { + "init_config": { + "collectorHostname": "rendered-resources-k8s-metacollector.default.svc", + "collectorPort": 45000, + "nodeName": "${FALCO_K8S_NODE_NAME}" + }, + "library_path": "libk8smeta.so", + "name": "k8smeta" + } +]` + + loadPluginsJSON := `[ + "k8smeta", + "k8saudit" +]` + helmChartPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + options := &helm.Options{SetJsonValues: map[string]string{ + "falco.plugins": pluginsJSON, + "falco.load_plugins": loadPluginsJSON, + }, SetValues: map[string]string{"collectors.kubernetes.enabled": "true"}} + output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"}) + + var cm corev1.ConfigMap + helm.UnmarshalK8SYaml(t, output, &cm) + var config map[string]interface{} + + helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config) + plugins := config["plugins"] + + out, err := json.MarshalIndent(plugins, "", " ") + require.NoError(t, err) + require.Equal(t, pluginsJSON, string(out)) + pluginsArray := plugins.([]interface{}) + // Find the k8smeta plugin configuration. 
+ numConfigK8smeta := 0
+ for _, plugin := range pluginsArray {
+ if name, ok := plugin.(map[string]interface{})["name"]; ok && name == k8sMetaPluginName {
+ numConfigK8smeta++
+ }
+ }
+
+ require.Equal(t, 1, numConfigK8smeta)
+
+ // Check that the plugin has been added to the ones that need to be loaded.
+ loadplugins := config["load_plugins"]
+ require.Len(t, loadplugins.([]interface{}), 2)
+ require.True(t, slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName))
+}
+
+// Test the artifact refs rendered in the falcoctl configuration.
+func TestFalcoctlRefs(t *testing.T) {
+ t.Parallel()
+
+ pluginsJSON := `[
+ {
+ "init_config": null,
+ "library_path": "libk8saudit.so",
+ "name": "k8saudit",
+ "open_params": "http://:9765/k8s-audit"
+ },
+ {
+ "library_path": "libcloudtrail.so",
+ "name": "cloudtrail"
+ },
+ {
+ "init_config": "",
+ "library_path": "libjson.so",
+ "name": "json"
+ },
+ {
+ "init_config": {
+ "collectorHostname": "rendered-resources-k8s-metacollector.default.svc",
+ "collectorPort": 45000,
+ "nodeName": "${FALCO_K8S_NODE_NAME}"
+ },
+ "library_path": "libk8smeta.so",
+ "name": "k8smeta"
+ }
+ ]`
+
+ testFunc := func(t *testing.T, config any) {
+ // Get artifact configuration map.
+ configMap := config.(map[string]interface{})
+ artifactConfig := (configMap["artifact"]).(map[string]interface{})
+ // Test allowed types.
+ allowedTypes := artifactConfig["allowedTypes"]
+ require.Len(t, allowedTypes, 2)
+ require.True(t, slices.Contains(allowedTypes.([]interface{}), "plugin"))
+ require.True(t, slices.Contains(allowedTypes.([]interface{}), "rulesfile"))
+ // Test plugin reference. 
+ refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{}) + require.Len(t, refs, 2) + require.True(t, slices.Contains(refs, "falco-rules:3")) + require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.0")) + } + + testCases := []struct { + name string + valuesJSON map[string]string + expected func(t *testing.T, config any) + }{ + { + "defaultValues", + nil, + testFunc, + }, + { + "setPluginConfiguration", + map[string]string{ + "falco.plugins": pluginsJSON, + }, + testFunc, + }, + { + "driver disabled", + map[string]string{ + "driver.enabled": "false", + }, + func(t *testing.T, config any) { + // Get artifact configuration map. + configMap := config.(map[string]interface{}) + artifactConfig := (configMap["artifact"]).(map[string]interface{}) + // Test plugin reference. + refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{}) + require.True(t, !slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0")) + }, + }, + } + + helmChartPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + for _, testCase := range testCases { + testCase := testCase + + t.Run(testCase.name, func(t *testing.T) { + t.Parallel() + + options := &helm.Options{SetJsonValues: testCase.valuesJSON, SetValues: map[string]string{"collectors.kubernetes.enabled": "true"}} + output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/falcoctl-configmap.yaml"}) + + var cm corev1.ConfigMap + helm.UnmarshalK8SYaml(t, output, &cm) + var config map[string]interface{} + helm.UnmarshalK8SYaml(t, cm.Data["falcoctl.yaml"], &config) + testCase.expected(t, config) + }) + } +} diff --git a/charts/falco/falco/charts/falco/tests/unit/metricsConfig_test.go b/charts/falco/falco/charts/falco/tests/unit/metricsConfig_test.go new file mode 100644 index 000000000..2d0cc33da --- /dev/null +++ b/charts/falco/falco/charts/falco/tests/unit/metricsConfig_test.go @@ -0,0 +1,204 @@ +// 
SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package unit + +import ( + "path/filepath" + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + "github.com/stretchr/testify/require" + "gopkg.in/yaml.v3" + corev1 "k8s.io/api/core/v1" +) + +type metricsConfig struct { + Enabled bool `yaml:"enabled"` + ConvertMemoryToMB bool `yaml:"convert_memory_to_mb"` + IncludeEmptyValues bool `yaml:"include_empty_values"` + KernelEventCountersEnabled bool `yaml:"kernel_event_counters_enabled"` + ResourceUtilizationEnabled bool `yaml:"resource_utilization_enabled"` + RulesCountersEnabled bool `yaml:"rules_counters_enabled"` + LibbpfStatsEnabled bool `yaml:"libbpf_stats_enabled"` + OutputRule bool `yaml:"output_rule"` + StateCountersEnabled bool `yaml:"state_counters_enabled"` + Interval string `yaml:"interval"` +} + +type webServerConfig struct { + Enabled bool `yaml:"enabled"` + K8sHealthzEndpoint string `yaml:"k8s_healthz_endpoint"` + ListenPort string `yaml:"listen_port"` + PrometheusMetricsEnabled bool `yaml:"prometheus_metrics_enabled"` + SSLCertificate string `yaml:"ssl_certificate"` + SSLEnabled bool `yaml:"ssl_enabled"` + Threadiness int `yaml:"threadiness"` +} + +func TestMetricsConfigInFalcoConfig(t *testing.T) { + t.Parallel() + + helmChartPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + testCases := []struct { + name string + values map[string]string + 
expected func(t *testing.T, metricsConfig, webServerConfig any) + }{ + { + "defaultValues", + nil, + func(t *testing.T, metricsConfig, webServerConfig any) { + require.Len(t, metricsConfig, 10, "should have ten items") + + metrics, err := getMetricsConfig(metricsConfig) + require.NoError(t, err) + require.NotNil(t, metrics) + require.True(t, metrics.ConvertMemoryToMB) + require.False(t, metrics.Enabled) + require.False(t, metrics.IncludeEmptyValues) + require.True(t, metrics.KernelEventCountersEnabled) + require.True(t, metrics.ResourceUtilizationEnabled) + require.True(t, metrics.RulesCountersEnabled) + require.Equal(t, "1h", metrics.Interval) + require.True(t, metrics.LibbpfStatsEnabled) + require.True(t, metrics.OutputRule) + require.True(t, metrics.StateCountersEnabled) + + webServer, err := getWebServerConfig(webServerConfig) + require.NoError(t, err) + require.NotNil(t, webServer) + require.True(t, webServer.Enabled) + require.False(t, webServer.PrometheusMetricsEnabled) + }, + }, + { + "metricsEnabled", + map[string]string{ + "metrics.enabled": "true", + }, + func(t *testing.T, metricsConfig, webServerConfig any) { + require.Len(t, metricsConfig, 10, "should have ten items") + + metrics, err := getMetricsConfig(metricsConfig) + require.NoError(t, err) + require.NotNil(t, metrics) + require.True(t, metrics.ConvertMemoryToMB) + require.True(t, metrics.Enabled) + require.False(t, metrics.IncludeEmptyValues) + require.True(t, metrics.KernelEventCountersEnabled) + require.True(t, metrics.ResourceUtilizationEnabled) + require.True(t, metrics.RulesCountersEnabled) + require.Equal(t, "1h", metrics.Interval) + require.True(t, metrics.LibbpfStatsEnabled) + require.False(t, metrics.OutputRule) + require.True(t, metrics.StateCountersEnabled) + + webServer, err := getWebServerConfig(webServerConfig) + require.NoError(t, err) + require.NotNil(t, webServer) + require.True(t, webServer.Enabled) + require.True(t, webServer.PrometheusMetricsEnabled) + }, + }, + { + 
"Flip/Change Values", + map[string]string{ + "metrics.enabled": "true", + "metrics.convertMemoryToMB": "false", + "metrics.includeEmptyValues": "true", + "metrics.kernelEventCountersEnabled": "false", + "metrics.resourceUtilizationEnabled": "false", + "metrics.rulesCountersEnabled": "false", + "metrics.libbpfStatsEnabled": "false", + "metrics.outputRule": "false", + "metrics.stateCountersEnabled": "false", + "metrics.interval": "1s", + }, + func(t *testing.T, metricsConfig, webServerConfig any) { + require.Len(t, metricsConfig, 10, "should have ten items") + + metrics, err := getMetricsConfig(metricsConfig) + require.NoError(t, err) + require.NotNil(t, metrics) + require.False(t, metrics.ConvertMemoryToMB) + require.True(t, metrics.Enabled) + require.True(t, metrics.IncludeEmptyValues) + require.False(t, metrics.KernelEventCountersEnabled) + require.False(t, metrics.ResourceUtilizationEnabled) + require.False(t, metrics.RulesCountersEnabled) + require.Equal(t, "1s", metrics.Interval) + require.False(t, metrics.LibbpfStatsEnabled) + require.False(t, metrics.OutputRule) + require.False(t, metrics.StateCountersEnabled) + + webServer, err := getWebServerConfig(webServerConfig) + require.NoError(t, err) + require.NotNil(t, webServer) + require.True(t, webServer.Enabled) + require.True(t, webServer.PrometheusMetricsEnabled) + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + t.Run(testCase.name, func(t *testing.T) { + t.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"}) + + var cm corev1.ConfigMap + helm.UnmarshalK8SYaml(t, output, &cm) + var config map[string]interface{} + + helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config) + metrics := config["metrics"] + webServer := config["webserver"] + testCase.expected(t, metrics, webServer) + }) + } +} + +func getMetricsConfig(config any) (*metricsConfig, error) { + 
var metrics metricsConfig + + metricsByte, err := yaml.Marshal(config) + if err != nil { + return nil, err + } + + if err := yaml.Unmarshal(metricsByte, &metrics); err != nil { + return nil, err + } + + return &metrics, nil +} + +func getWebServerConfig(config any) (*webServerConfig, error) { + var webServer webServerConfig + webServerByte, err := yaml.Marshal(config) + if err != nil { + return nil, err + } + if err := yaml.Unmarshal(webServerByte, &webServer); err != nil { + return nil, err + } + return &webServer, nil +} diff --git a/charts/falco/falco/charts/falco/tests/unit/serviceAccount_test.go b/charts/falco/falco/charts/falco/tests/unit/serviceAccount_test.go new file mode 100644 index 000000000..277325396 --- /dev/null +++ b/charts/falco/falco/charts/falco/tests/unit/serviceAccount_test.go @@ -0,0 +1,59 @@ +package unit + +import ( + "github.com/gruntwork-io/terratest/modules/helm" + "github.com/stretchr/testify/require" + corev1 "k8s.io/api/core/v1" + "path/filepath" + "strings" + "testing" +) + +func TestServiceAccount(t *testing.T) { + t.Parallel() + + helmChartPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + testCases := []struct { + name string + values map[string]string + expected func(t *testing.T, sa *corev1.ServiceAccount) + }{ + { + "defaultValues", + nil, + func(t *testing.T, sa *corev1.ServiceAccount) { + require.Equal(t, sa.Name, "rendered-resources-falco") + }, + }, + { + "kind=auto", + map[string]string{ + "serviceAccount.create": "false", + }, + func(t *testing.T, sa *corev1.ServiceAccount) { + require.Equal(t, sa.Name, "") + }, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + t.Run(testCase.name, func(t *testing.T) { + t.Parallel() + + options := &helm.Options{SetValues: testCase.values} + output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"}) + if err != nil { + require.True(t, strings.Contains(err.Error(), "Error: could not find 
template templates/serviceaccount.yaml in chart")) + } + + var sa corev1.ServiceAccount + helm.UnmarshalK8SYaml(t, output, &sa) + + testCase.expected(t, &sa) + }) + } +} diff --git a/charts/falco/falco/charts/falco/tests/unit/serviceMonitorTemplate_test.go b/charts/falco/falco/charts/falco/tests/unit/serviceMonitorTemplate_test.go new file mode 100644 index 000000000..b2fcb3745 --- /dev/null +++ b/charts/falco/falco/charts/falco/tests/unit/serviceMonitorTemplate_test.go @@ -0,0 +1,93 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright 2024 The Falco Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package unit + +import ( + "encoding/json" + "path/filepath" + "reflect" + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" +) + +type serviceMonitorTemplateTest struct { + suite.Suite + chartPath string + releaseName string + namespace string + templates []string +} + +func TestServiceMonitorTemplate(t *testing.T) { + t.Parallel() + + chartFullPath, err := filepath.Abs(chartPath) + require.NoError(t, err) + + suite.Run(t, &serviceMonitorTemplateTest{ + Suite: suite.Suite{}, + chartPath: chartFullPath, + releaseName: "falco-test", + namespace: "falco-namespace-test", + templates: []string{"templates/serviceMonitor.yaml"}, + }) +} + +func (s *serviceMonitorTemplateTest) TestCreationDefaultValues() { + // Render the servicemonitor and check that it has not been rendered. + _, err := helm.RenderTemplateE(s.T(), &helm.Options{}, s.chartPath, s.releaseName, s.templates) + s.Error(err, "should error") + s.Equal("error while running command: exit status 1; Error: could not find template templates/serviceMonitor.yaml in chart", err.Error()) +} + +func (s *serviceMonitorTemplateTest) TestEndpoint() { + defaultEndpointsJSON := `[ + { + "port": "metrics", + "interval": "15s", + "scrapeTimeout": "10s", + "honorLabels": true, + "path": "/metrics", + "scheme": "http" + } +]` + var defaultEndpoints []monitoringv1.Endpoint + err := json.Unmarshal([]byte(defaultEndpointsJSON), &defaultEndpoints) + s.NoError(err) + + options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}} + output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates) + + var svcMonitor monitoringv1.ServiceMonitor + helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor) + + s.Len(svcMonitor.Spec.Endpoints, 1, "should have only one endpoint") + 
s.True(reflect.DeepEqual(svcMonitor.Spec.Endpoints[0], defaultEndpoints[0]))
+}
+
+func (s *serviceMonitorTemplateTest) TestNamespaceSelector() {
+ options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}}
+ output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates)
+
+ var svcMonitor monitoringv1.ServiceMonitor
+ helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor)
+ s.Len(svcMonitor.Spec.NamespaceSelector.MatchNames, 1)
+ s.Equal("default", svcMonitor.Spec.NamespaceSelector.MatchNames[0])
+}
diff --git a/charts/falco/falco/charts/falco/values-gvisor-gke.yaml b/charts/falco/falco/charts/falco/values-gvisor-gke.yaml
new file mode 100644
index 000000000..d38f7621c
--- /dev/null
+++ b/charts/falco/falco/charts/falco/values-gvisor-gke.yaml
@@ -0,0 +1,63 @@
+# Default values to deploy Falco on GKE with gVisor.
+
+# Affinity constraint for pods' scheduling.
+# Needed to deploy Falco on the gVisor-enabled nodes.
+affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: sandbox.gke.io/runtime
+ operator: In
+ values:
+ - gvisor
+
+# Tolerations to allow Falco to run on Kubernetes masters.
+# Adds the necessary tolerations to allow Falco pods to be scheduled on the gVisor-enabled nodes.
+tolerations:
+ - effect: NoSchedule
+ key: node-role.kubernetes.io/master
+ - effect: NoSchedule
+ key: sandbox.gke.io/runtime
+ operator: Equal
+ value: gvisor
+
+# Enable gVisor and set the appropriate paths.
+driver:
+ enabled: true
+ kind: gvisor
+ gvisor:
+ runsc:
+ path: /home/containerd/usr/local/sbin
+ root: /run/containerd/runsc
+ config: /run/containerd/runsc/config.toml
+
+# Enable the containerd collector to enrich the syscall events with metadata.
+collectors:
+ enabled: true
+ containerd:
+ enabled: true
+ socket: /run/containerd/containerd.sock
+
+falcoctl:
+ artifact:
+ install:
+ # -- Enable the init container. 
We do not recommend installing plugins for security reasons since they are executable objects.
+ # We install only "rulesfiles".
+ enabled: true
+ follow:
+ # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feeds such as the k8saudit-rules.
+ enabled: true
+ config:
+ artifact:
+ install:
+ # -- List of artifacts to be installed by the falcoctl init container.
+ # We do not recommend installing (or following) plugins for security reasons since they are executable objects.
+ refs: [falco-rules:3]
+ follow:
+ # -- List of artifacts to be followed by the falcoctl sidecar container.
+ # We do not recommend installing (or following) plugins for security reasons since they are executable objects.
+ refs: [falco-rules:3]
+
+# Set this to true to force Falco to output the logs as soon as they are emitted.
+tty: false
diff --git a/charts/falco/falco/charts/falco/values-k8saudit.yaml b/charts/falco/falco/charts/falco/values-k8saudit.yaml
index 39b2db786..1bc9953fe 100644
--- a/charts/falco/falco/charts/falco/values-k8saudit.yaml
+++ b/charts/falco/falco/charts/falco/values-k8saudit.yaml
@@ -1,11 +1,36 @@
+# -- Disable the drivers since we want to deploy only the k8saudit plugin.
driver:
enabled: false
+# -- Disable the collectors, no syscall events to enrich with metadata.
collectors:
enabled: false
+# -- Deploy Falco as a deployment. One instance of Falco is enough. However, the number of replicas is configurable.
controller:
kind: deployment
+ deployment:
+ # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
+ # For more info check the section on Plugins in the README.md file.
+ replicas: 1
+
+
+falcoctl:
+ artifact:
+ install:
+ # -- Enable the init container.
+ enabled: true
+ follow:
+ # -- Enable the sidecar container.
+ enabled: true
+ config:
+ artifact:
+ install:
+ # -- List of artifacts to be installed by the falcoctl init container. 
+ refs: [k8saudit-rules:0.7]
+ follow:
+ # -- List of artifacts to be followed by the falcoctl sidecar container.
+ refs: [k8saudit-rules:0.7]

services:
- name: k8saudit-webhook
@@ -16,7 +41,7 @@ services:
protocol: TCP

falco:
- rules_file:
+ rules_files:
- /etc/falco/k8s_audit_rules.yaml
- /etc/falco/rules.d
plugins:
@@ -30,4 +55,5 @@ falco:
- name: json
library_path: libjson.so
init_config: ""
+ # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
load_plugins: [k8saudit, json]
diff --git a/charts/falco/falco/charts/falco/values-syscall-k8saudit.yaml b/charts/falco/falco/charts/falco/values-syscall-k8saudit.yaml
new file mode 100644
index 000000000..1dc91145e
--- /dev/null
+++ b/charts/falco/falco/charts/falco/values-syscall-k8saudit.yaml
@@ -0,0 +1,62 @@
+# Enable the driver, and choose between the kernel module or the ebpf probe.
+# Default value: kernel module.
+driver:
+ enabled: true
+ kind: module
+
+# Enable the collectors used to enrich the events with metadata.
+# Check the values.yaml file for fine-grained options.
+collectors:
+ enabled: true
+
+# We set the controller to daemonset since we have the syscalls source enabled.
+# It will ensure that every node on our cluster will be monitored by Falco.
+# Please note that the api-server will use the "k8saudit-webhook" service to send
+# audit logs to the Falco instances. That means that when we have multiple instances of Falco
+# we cannot predict to which instance the audit logs will be sent. When testing please check all
+# the Falco instances to make sure that at least one of them has received the audit logs.
+controller:
+ kind: daemonset
+
+falcoctl:
+ artifact:
+ install:
+ # -- Enable the init container.
+ enabled: true
+ follow:
+ # -- Enable the sidecar container.
+ enabled: true
+ config:
+ artifact:
+ install:
+ # -- List of artifacts to be installed by the falcoctl init container. 
+ refs: [falco-rules:3, k8saudit-rules:0.7] + follow: + # -- List of artifacts to be followed by the falcoctl sidecar container. + refs: [falco-rules:3, k8saudit-rules:0.7] + +services: + - name: k8saudit-webhook + type: NodePort + ports: + - port: 9765 # See plugin open_params + nodePort: 30007 + protocol: TCP + +falco: + rules_files: + - /etc/falco/falco_rules.yaml + - /etc/falco/k8s_audit_rules.yaml + - /etc/falco/rules.d + plugins: + - name: k8saudit + library_path: libk8saudit.so + init_config: + "" + # maxEventBytes: 1048576 + # sslCertificate: /etc/falco/falco.pem + open_params: "http://:9765/k8s-audit" + - name: json + library_path: libjson.so + init_config: "" + load_plugins: [k8saudit, json] diff --git a/charts/falco/falco/charts/falco/values.yaml b/charts/falco/falco/charts/falco/values.yaml index b8d76f327..bd8f2a61a 100644 --- a/charts/falco/falco/charts/falco/values.yaml +++ b/charts/falco/falco/charts/falco/values.yaml @@ -20,10 +20,11 @@ imagePullSecrets: [] nameOverride: "" # -- Same as nameOverride but for the fullname. fullnameOverride: "" +# -- Override the deployment namespace +namespaceOverride: "" -rbac: - # Create and use rbac resources when set to true. Needed to fetch k8s metadata from the api-server. - create: true +# -- Add additional pod annotations +podAnnotations: {} serviceAccount: # -- Specifies whether a service account should be created. @@ -34,8 +35,9 @@ serviceAccount: # If not set and create is true, a name is generated using the fullname template name: "" -# -- Add additional pod annotations -podAnnotations: {} +rbac: + # Create and use rbac resources when set to true. Needed to list and update configmaps in Falco's namespace. 
+ create: true

# -- Add additional pod labels
podLabels: {}
@@ -56,7 +58,7 @@ podSecurityContext: {}
# 1) driver.enabled = false:
# securityContext: {}
#
-# 2) driver.enabled = true and driver.kind = module:
+# 2) driver.enabled = true and (driver.kind = module || driver.kind = modern-bpf):
# securityContext:
# privileged: true
#
@@ -88,6 +90,8 @@ resources:
cpu: 100m
memory: 512Mi
# -- Maximum amount of resources that Falco container could get.
+ # If you are enabling more than one source in Falco, then consider increasing
+ # the CPU limits.
limits:
cpu: 1000m
memory: 1024Mi
@@ -97,10 +101,12 @@ nodeSelector: {}
# -- Affinity constraint for pods' scheduling.
affinity: {}

-# -- Tolerations to allow Falco to run on Kubernetes 1.6 masters.
+# -- Tolerations to allow Falco to run on Kubernetes masters.
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
+ - effect: NoSchedule
+ key: node-role.kubernetes.io/control-plane

# -- Parameters used
healthChecks:
@@ -131,6 +137,10 @@ tty: false
controller:
# Available options: deployment, daemonset.
kind: daemonset
+ # Annotations to add to the daemonset or deployment
+ annotations: {}
+ # -- Extra labels to add to the daemonset or deployment
+ labels: {}
daemonset:
updateStrategy:
# You can also customize maxUnavailable or minReadySeconds if you
@@ -142,6 +152,8 @@ controller:
# -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
# For more info check the section on Plugins in the README.md file.
replicas: 1
+ # -- Number of old history to retain to allow rollback (If not set, default Kubernetes value is set to 10)
+ # revisionHistoryLimit: 1

# -- Network services configuration (scenario requirement)
# Add here your services to be deployed together with Falco.
@@ -154,6 +166,99 @@ services:
# nodePort: 30007
# protocol: TCP

+# -- metrics configures Falco to enable and expose the metrics. 
+metrics:
+  # -- enabled specifies whether the metrics should be enabled.
+  enabled: false
+  # -- interval is the stats interval. Falco follows the time duration definitions
+  # used by Prometheus.
+  # https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations
+  # Time durations are specified as a number, followed immediately by one of the
+  # following units:
+  # ms - millisecond
+  # s - second
+  # m - minute
+  # h - hour
+  # d - day - assuming a day has always 24h
+  # w - week - assuming a week has always 7d
+  # y - year - assuming a year has always 365d
+  # Example of a valid time duration: 1h30m20s10ms
+  # A minimum interval of 100ms is enforced for metric collection. However, for
+  # production environments, we recommend selecting one of the following intervals
+  # for optimal monitoring:
+  # 15m
+  # 30m
+  # 1h
+  # 4h
+  # 6h
+  interval: 1h
+  # -- outputRule enables seamless metrics and performance monitoring; we
+  # recommend emitting metrics as the rule "Falco internal: metrics snapshot".
+  # This option is particularly useful when Falco logs are preserved in a data
+  # lake. Please note that to use this option, the Falco rules config `priority`
+  # must be set to `info` at a minimum.
+  outputRule: false
+  # -- rulesCountersEnabled specifies whether the counts for each rule should be emitted.
+  rulesCountersEnabled: true
+  # -- resourceUtilizationEnabled: Emit CPU and memory usage metrics. CPU usage
+  # is reported as a percentage of one CPU and can be normalized to the total
+  # number of CPUs to determine overall usage. Memory metrics are provided in raw
+  # units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`)
+  # and can be uniformly converted to megabytes (MB) using the
+  # `convert_memory_to_mb` functionality. In environments such as Kubernetes when
+  # deployed as a daemonset, it is crucial to track Falco's container memory usage.
+ # To customize the path of the memory metric file, you can create an environment + # variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. By + # default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to + # monitor container memory usage, which aligns with Kubernetes' + # `container_memory_working_set_bytes` metric. Finally, we emit the overall host + # CPU and memory usages, along with the total number of processes and open file + # descriptors (fds) on the host, obtained from the proc file system unrelated to + # Falco's monitoring. These metrics help assess Falco's usage in relation to the + # server's workload intensity. + resourceUtilizationEnabled: true + # stateCountersEnabled emits counters related to Falco's state engine, including + # added, removed threads or file descriptors (fds), and failed lookup, store, or + # retrieve actions in relation to Falco's underlying process cache table (threadtable). + # We also log the number of currently cached containers if applicable. + stateCountersEnabled: true + # kernelEventCountersEnabled emits kernel side event and drop counters, as + # an alternative to `syscall_event_drops`, but with some differences. These + # counters reflect monotonic values since Falco's start and are exported at a + # constant stats interval. + kernelEventCountersEnabled: true + # -- libbpfStatsEnabled exposes statistics similar to `bpftool prog show`, + # providing information such as the number of invocations of each BPF program + # attached by Falco and the time spent in each program measured in nanoseconds. + # To enable this feature, the kernel must be >= 5.1, and the kernel + # configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, + # or an equivalent statistics feature, is not available for non `*bpf*` drivers. + # Additionally, please be aware that the current implementation of `libbpf` does + # not support granularity of statistics at the bpf tail call level. 
+  libbpfStatsEnabled: true
+  # -- convertMemoryToMB specifies whether the memory should be converted to MB.
+  convertMemoryToMB: true
+  # -- includeEmptyValues specifies whether the empty values should be included in the metrics.
+  includeEmptyValues: false
+  # -- service exposes the metrics service to be accessed from within the cluster.
+  # ref: https://kubernetes.io/docs/concepts/services-networking/service/
+  service:
+    # -- create specifies whether a service should be created.
+    create: true
+    # -- type denotes the service type. Setting it to "ClusterIP" ensures that the metrics
+    # are only accessible from within the cluster.
+    type: ClusterIP
+    # -- ports denotes all the ports on which the Service will listen.
+    ports:
+      # -- metrics denotes a listening service named "metrics".
+      metrics:
+        # -- port is the port on which the Service will listen.
+        port: 8765
+        # -- targetPort is the port on which the Pod is listening.
+        targetPort: 8765
+        # -- protocol specifies the network protocol that the Service should use for the associated port.
+        protocol: "TCP"
+
 # File access configuration (scenario requirement)
 mounts:
   # -- A list of volumes you want to add to the Falco pods.
@@ -168,27 +273,66 @@ driver:
   # -- Set it to false if you want to deploy Falco without the drivers.
   # Always set it to false when using Falco with plugins.
   enabled: true
-  # -- Tell Falco which driver to use. Available options: module (kernel driver) and ebpf (eBPF probe).
-  kind: module
+  # -- kind tells Falco which driver to use. Available options: kmod (kernel driver), ebpf (eBPF probe), modern_ebpf (modern eBPF probe).
+  kind: auto
+  # -- kmod holds the configuration for the kernel module.
+  kmod:
+    # -- bufSizePreset determines the size of the shared space between Falco and its drivers.
+    # This shared space serves as a temporary storage for syscall events.
+    bufSizePreset: 4
+    # -- dropFailedExit if set true drops failed system call exit events before pushing them to userspace.
+    dropFailedExit: false
   # -- Configuration section for ebpf driver.
   ebpf:
-    # -- Path where the eBPF probe is located. It comes handy when the probe have been installed in the nodes using tools other than the init
+    # -- path where the eBPF probe is located. It comes in handy when the probe has been installed in the nodes using tools other than the init
     # container deployed with the chart.
-    path:
+    path: "${HOME}/.falco/falco-bpf.o"
     # -- Needed to enable eBPF JIT at runtime for performance reasons.
     # Can be skipped if eBPF JIT is enabled from outside the container
     hostNetwork: false
     # -- Constrain Falco with capabilities instead of running a privileged container.
-    # This option is only supported with the eBPF driver and a kernel >= 5.8.
     # Ensure the eBPF driver is enabled (i.e., setting the `driver.kind` option to `ebpf`).
+    # Capabilities used: {CAP_SYS_RESOURCE, CAP_SYS_ADMIN, CAP_SYS_PTRACE}.
+    # On kernel versions >= 5.8 'CAP_PERFMON' and 'CAP_BPF' could replace 'CAP_SYS_ADMIN' but please pay attention to the 'kernel.perf_event_paranoid' value on your system.
+    # Usually 'kernel.perf_event_paranoid>2' means that you cannot use 'CAP_PERFMON' and you should fall back to 'CAP_SYS_ADMIN', but the behavior changes across different distros.
+    # Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-1
     leastPrivileged: false
+    # -- bufSizePreset determines the size of the shared space between Falco and its drivers.
+    # This shared space serves as a temporary storage for syscall events.
+    bufSizePreset: 4
+    # -- dropFailedExit if set true drops failed system call exit events before pushing them to userspace.
+    dropFailedExit: false
+  modernEbpf:
+    # -- Constrain Falco with capabilities instead of running a privileged container.
+    # Ensure the modern bpf driver is enabled (i.e., setting the `driver.kind` option to `modern-bpf`).
+    # Capabilities used: {CAP_SYS_RESOURCE, CAP_BPF, CAP_PERFMON, CAP_SYS_PTRACE}.
+    # Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-2
+    leastPrivileged: false
+    # -- bufSizePreset determines the size of the shared space between Falco and its drivers.
+    # This shared space serves as a temporary storage for syscall events.
+    bufSizePreset: 4
+    # -- dropFailedExit if set true drops failed system call exit events before pushing them to userspace.
+    dropFailedExit: false
+    # -- cpusForEachBuffer is the index that controls how many CPUs to assign to a single syscall buffer.
+    cpusForEachBuffer: 2
+
+  # -- gVisor configuration. Based on your system you need to set the appropriate values.
+  # Please, remember to add pod tolerations and affinities in order to schedule the Falco pods on the gVisor-enabled nodes.
+  gvisor:
+    # -- Runsc container runtime configuration. Falco needs to interact with it in order to intercept the activity of the sandboxed pods.
+    runsc:
+      # -- Absolute path of the `runsc` binary in the k8s nodes.
+      path: /home/containerd/usr/local/sbin
+      # -- Absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores there the information of the workloads handled by it;
+      root: /run/containerd/runsc
+      # -- Absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence.
+      config: /run/containerd/runsc/config.toml
+
   # -- Configuration for the Falco init container.
   loader:
     # -- Enable/disable the init container.
     enabled: true
     initContainer:
-      # -- Enable/disable the init container.
-      enabled: true
       image:
         # -- The image pull policy.
         pullPolicy: IfNotPresent
@@ -199,12 +343,12 @@ driver:
       # -- Overrides the image tag whose default is the chart appVersion.
       tag: ""
       # -- Extra environment variables that will be pass onto Falco driver loader init container.
-      env: {}
+      env: []
       # -- Arguments to pass to the Falco driver loader init container.
args: [] # -- Resources requests and limits for the Falco driver loader init container. resources: {} - # -- Security context for the Falco driver loader init container. Overrides the default security context. If driver.mode == "module" you must at least set `privileged: true`. + # -- Security context for the Falco driver loader init container. Overrides the default security context. If driver.kind == "module" you must at least set `privileged: true`. securityContext: {} # Collectors for data enrichment (scenario requirement) @@ -230,30 +374,31 @@ collectors: # -- The path of the CRI-O socket. socket: /run/crio/crio.sock + # -- kubernetes holds the configuration for the kubernetes collector. Starting from version 0.37.0 of Falco, the legacy + # kubernetes client has been removed. A new standalone component named k8s-metacollector and a Falco plugin have been developed + # to solve the issues that were present in the old implementation. More info here: https://github.com/falcosecurity/falco/issues/2973 kubernetes: - # -- Enable Kubernetes meta data collection via a connection to the Kubernetes API server. - # When this option is disabled, Falco falls back to the container annotations to grap the meta data. + # -- enabled specifies whether the Kubernetes metadata should be collected using the k8smeta plugin and the k8s-metacollector component. + # It will deploy the k8s-metacollector external component that fetches Kubernetes metadata and pushes them to Falco instances. + # For more info see: + # https://github.com/falcosecurity/k8s-metacollector + # https://github.com/falcosecurity/charts/tree/master/charts/k8s-metacollector + # When this option is disabled, Falco falls back to the container annotations to grab the metadata. # In such a case, only the ID, name, namespace, labels of the pod will be available. - enabled: true - # -- The apiAuth value is to provide the authentication method Falco should use to connect to the Kubernetes API. 
-  # The argument's documentation from Falco is provided here for reference:
-  #
-  #    <bt_file> | <cert_file>:<key_file[#password]>[:<ca_cert_file>], --k8s-api-cert <bt_file> | <cert_file>:<key_file[#password]>[:<ca_cert_file>]
-  #     Use the provided files names to authenticate user and (optionally) verify the K8S API server identity.
-  #     Each entry must specify full (absolute, or relative to the current directory) path to the respective file.
-  #     Private key password is optional (needed only if key is password protected).
-  #     CA certificate is optional. For all files, only PEM file format is supported.
-  #     Specifying CA certificate only is obsoleted - when single entry is provided
-  #     for this option, it will be interpreted as the name of a file containing bearer token.
-  #     Note that the format of this command-line option prohibits use of files whose names contain
-  #     ':' or '#' characters in the file name.
-  # -- Provide the authentication method Falco should use to connect to the Kubernetes API.
-  apiAuth: /var/run/secrets/kubernetes.io/serviceaccount/token
-  ## -- Provide the URL Falco should use to connect to the Kubernetes API.
-  apiUrl: "https://$(KUBERNETES_SERVICE_HOST)"
-  # -- If true, only the current node (on which Falco is running) will be considered when requesting metadata of pods
-  # to the API server. Disabling this option may have a performance penalty on large clusters.
-  enableNodeFilter: true
+    enabled: false
+    # -- pluginRef is the OCI reference for the k8smeta plugin. It could be a full reference such as:
+    # "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0". Or just name + tag: k8smeta:0.1.0.
+    pluginRef: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.0"
+    # -- collectorHostname is the address of the k8s-metacollector. When not specified it will be set to match
+    # the k8s-metacollector service, e.g. falco-k8smetacollector.falco.svc. If for any reason you need to override
+    # it, make sure to set here the address of the k8s-metacollector.
+    # It is used by the k8smeta plugin to connect to the k8s-metacollector.
+    collectorHostname: ""
+    # -- collectorPort designates the port on which the k8s-metacollector gRPC service listens. If not specified
+    # the value of the port named `broker-grpc` in k8s-metacollector.service.ports is used. The default value is 45000.
+    # It is used by the k8smeta plugin to connect to the k8s-metacollector.
+    collectorPort: ""
+
 ###########################
 # Extras and customization #
 ############################

 extra:
   # -- Extra environment variables that will be pass onto Falco containers.
-  env: {}
+  env: []
   # -- Extra command-line arguments.
   args: []
   # -- Additional initContainers for Falco pods.
@@ -281,6 +426,13 @@ certs:
   ca:
     # -- CA certificate used by gRPC, webserver and AuditSink validation.
     crt: ""
+  existingClientSecret: ""
+  client:
+    # -- Key used by http mTLS client.
+    key: ""
+    # -- Certificate used by http mTLS client.
+    crt: ""
+
 # -- Third party rules enabled for Falco. More info on the dedicated section in README.md file.
 customRules: {}
@@ -298,7 +450,7 @@ customRules:
 # Falco integrations #
 ########################

-# -- For configuration values, see https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml
+# -- For configuration values, see https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml
 falcosidekick:
   # -- Enable falcosidekick deployment.
   enabled: false
@@ -307,40 +459,290 @@ falcosidekick:
   # -- Listen port. Default value: 2801
   listenPort: ""

+####################
+# falcoctl config  #
+####################
+falcoctl:
+  image:
+    # -- The image pull policy.
+    pullPolicy: IfNotPresent
+    # -- The image registry to pull from.
+    registry: docker.io
+    # -- The image repository to pull from.
+    repository: falcosecurity/falcoctl
+    # -- The image tag to pull.
+    tag: "0.9.0"
+  artifact:
+    # -- Runs "falcoctl artifact install" command as an init container. It is used to install artifacts before
+    # Falco starts. It provides them to Falco by using an emptyDir volume.
+    install:
+      enabled: true
+      # -- Extra environment variables that will be passed onto the falcoctl-artifact-install init container.
+      env: []
+      # -- Arguments to pass to the falcoctl-artifact-install init container.
+      args: ["--log-format=json"]
+      # -- Resources requests and limits for the falcoctl-artifact-install init container.
+      resources: {}
+      # -- Security context for the falcoctl init container.
+      securityContext: {}
+      # -- A list of volume mounts you want to add to the falcoctl-artifact-install init container.
+      mounts:
+        volumeMounts: []
+    # -- Runs "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for
+    # updates given a list of artifacts. If an update is found it downloads and installs it in a shared folder (emptyDir)
+    # that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the
+    # correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks if it is compatible
+    # with the running version of Falco before installing it.
+    follow:
+      enabled: true
+      # -- Extra environment variables that will be passed onto the falcoctl-artifact-follow sidecar container.
+      env: []
+      # -- Arguments to pass to the falcoctl-artifact-follow sidecar container.
+      args: ["--log-format=json"]
+      # -- Resources requests and limits for the falcoctl-artifact-follow sidecar container.
+      resources: {}
+      # -- Security context for the falcoctl-artifact-follow sidecar container.
+      securityContext: {}
+      # -- A list of volume mounts you want to add to the falcoctl-artifact-follow sidecar container.
+      mounts:
+        volumeMounts: []
+  # -- Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers.
+  config:
+    # -- List of indexes that falcoctl downloads and uses to locate and download artifacts.
For more info see:
+    # https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview
+    indexes:
+    - name: falcosecurity
+      url: https://falcosecurity.github.io/falcoctl/index.yaml
+    # -- Configuration used by the artifact commands.
+    artifact:
+      # -- List of artifact types that falcoctl will handle. If the configured refs resolve to an artifact whose type is not contained
+      # in the list it will refuse to download and install that artifact.
+      allowedTypes:
+        - rulesfile
+        - plugin
+      install:
+        # -- Resolve the dependencies for artifacts.
+        resolveDeps: true
+        # -- List of artifacts to be installed by the falcoctl init container.
+        refs: [falco-rules:3]
+        # -- Directory where the rulesfiles are saved. The path is relative to the container, which in this case is an emptyDir
+        # mounted also by the Falco pod.
+        rulesfilesDir: /rulesfiles
+        # -- Same as the one above but for the plugins.
+        pluginsDir: /plugins
+      follow:
+        # -- List of artifacts to be followed by the falcoctl sidecar container.
+        refs: [falco-rules:3]
+        # -- How often the tool checks for new versions of the followed artifacts.
+        every: 6h
+        # -- HTTP endpoint that serves the API versions of the Falco instance. It is used to check if the new versions are compatible
+        # with the running Falco instance.
+        falcoversions: http://localhost:8765/versions
+        # -- See the fields of the artifact.install section.
+        rulesfilesDir: /rulesfiles
+        # -- See the fields of the artifact.install section.
+        pluginsDir: /plugins
+
+# -- serviceMonitor holds the configuration for the ServiceMonitor CRD.
+# A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should
+# discover and scrape metrics from the Falco service.
+serviceMonitor:
+  # -- create specifies whether a ServiceMonitor CRD should be created for a Prometheus Operator.
+ # https://github.com/coreos/prometheus-operator + # Enable it only if the ServiceMonitor CRD is installed in your cluster. + create: false + # -- path at which the metrics are exposed by Falco. + path: /metrics + # -- labels set of labels to be applied to the ServiceMonitor resource. + # If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right + # label here in order for the ServiceMonitor to be selected for target discovery. + labels: {} + # -- selector set of labels that should match the labels on the Service targeted by the current serviceMonitor. + selector: {} + # -- interval specifies the time interval at which Prometheus should scrape metrics from the service. + interval: 15s + # -- scheme specifies network protocol used by the metrics endpoint. In this case HTTP. + scheme: http + # -- tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when + # scraping metrics from a service. It allows you to define the details of the TLS connection, such as + # CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support + # TLS configuration for the metrics endpoint. + tlsConfig: {} + # insecureSkipVerify: false + # caFile: /path/to/ca.crt + # certFile: /path/to/client.crt + # keyFile: /path/to/client.key + # -- scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. + # If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for + # that target. + scrapeTimeout: 10s + # -- relabelings configures the relabeling rules to apply the target’s metadata labels. + relabelings: [] + # -- targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics. + targetLabels: [] + # -- endpointPort is the port in the Falco service that exposes the metrics service. 
Change the value if you deploy a custom service + # for Falco's metrics. + endpointPort: "metrics" + + ###################### # falco.yaml config # ###################### falco: - # File(s) or Directories containing Falco rules, loaded at startup. - # The name "rules_file" is only for backwards compatibility. - # If the entry is a file, it will be read directly. If the entry is a directory, - # every file in that directory will be read, in alphabetical order. - # - # falco_rules.yaml ships with the falco package and is overridden with - # every new software version. falco_rules.local.yaml is only created - # if it doesn't exist. If you want to customize the set of rules, add - # your customizations to falco_rules.local.yaml. - # - # The files will be read in the order presented here, so make sure if - # you have overrides they appear in later files. + ##################### + # Falco rules files # + ##################### + + # [Stable] `rules_file` + # + # Falco rules can be specified using files or directories, which are loaded at + # startup. The name "rules_file" is maintained for backwards compatibility. If + # the entry is a file, it will be read directly. If the entry is a directory, + # all files within that directory will be read in alphabetical order. + # + # The falco_rules.yaml file ships with the Falco package and is overridden with + # every new software version. falco_rules.local.yaml is only created if it + # doesn't already exist. + # + # To customize the set of rules, you can add your modifications to any file. + # It's important to note that the files or directories are read in the order + # specified here. In addition, rules are loaded by Falco in the order they + # appear within each rule file. + # + # If you have any customizations intended to override a previous configuration, + # make sure they appear in later files to take precedence. 
On the other hand, if + # the conditions of rules with the same event type(s) have the potential to + # overshadow each other, ensure that the more important rule appears first. This + # is because rules are evaluated on a "first match wins" basis, where the first + # rule that matches the conditions will be applied, and subsequent rules will + # not be evaluated for the same event type. + # + # By arranging the order of files and rules thoughtfully, you can ensure that + # desired customizations and rule behaviors are prioritized and applied as + # intended. # -- The location of the rules files that will be consumed by Falco. - rules_file: + rules_files: - /etc/falco/falco_rules.yaml - /etc/falco/falco_rules.local.yaml - /etc/falco/rules.d + # [Incubating] `rules` # - # Plugins that are available for use. These plugins are not loaded by - # default, as they require explicit configuration to point to - # cloudtrail log files. + # --- [Description] + # + # Falco rules can be enabled or disabled by name (with wildcards *) and/or by tag. + # + # This configuration is applied after all rules files have been loaded, including + # their overrides, and will take precedence over the enabled/disabled configuration + # specified or overridden in the rules files. + # + # The ordering matters and selections are evaluated in order. For instance, if you + # need to only enable a rule you would first disable all of them and then only + # enable what you need, regardless of the enabled status in the files. 
+  #
+  # --- [Examples]
+  #
+  # Only enable two rules:
+  #
+  #   rules:
+  #     - disable:
+  #         rule: "*"
+  #     - enable:
+  #         rule: Netcat Remote Code Execution in Container
+  #     - enable:
+  #         rule: Delete or rename shell history
+  #
+  # Disable all rules with a specific tag:
+  #
+  #   rules:
+  #     - disable:
+  #         tag: network
   #
-  # To learn more about the supported formats for
-  # init_config/open_params for the cloudtrail plugin, see the README at
-  # https://github.com/falcosecurity/plugins/blob/master/plugins/cloudtrail/README.md.
-  # -- Plugins configuration. Add here all plugins and their configuration. Please
+  # [Incubating] `rule_matching`
+  #
+  # - Falco has to be performant when evaluating rules against events. To quickly
+  #   understand which rules could trigger on a specific event, Falco maintains
+  #   buckets of rules sharing the same event type in a map. Then, the lookup
+  #   in each bucket is performed through linear search. The `rule_matching`
+  #   configuration key's values are:
+  #   - "first": when evaluating conditions of rules in a bucket, Falco will stop
+  #     evaluating rules as soon as it finds a matching rule. Since rules are stored
+  #     in buckets in the order they are defined in the rules files, this option
+  #     could prevent other rules from triggering even if their condition is met, causing
+  #     a shadowing problem.
+  #   - "all": with this value Falco will continue evaluating all the rules
+  #     stored in the bucket, so that multiple rules could be triggered upon one
+  #     event.
+
+  rule_matching: first
+
+
+  # [Incubating] `outputs_queue`
+  #
+  # -- Falco utilizes tbb::concurrent_bounded_queue for handling outputs, and this parameter
+  # allows you to customize the queue capacity. Please refer to the official documentation:
+  # https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Concurrent_Queue_Classes.html.
+  # On a healthy system with optimized Falco rules, the queue should not fill up.
+  # If it does, it is most likely happening due to the entire event flow being too slow,
+  # indicating that the server is under heavy load.
+  #
+  # `capacity`: the maximum number of items allowed in the queue is determined by this value.
+  # Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded.
+  # In other words, when this configuration is set to 0, the number of allowed items is
+  # effectively set to the largest possible long value, disabling this setting.
+  #
+  # In the case of an unbounded queue, if the available memory on the system is consumed,
+  # the Falco process would be OOM killed. When using this option and setting the capacity,
+  # the current event would be dropped, and the event loop would continue. This behavior mirrors
+  # kernel-side event drops when the buffer between kernel space and user space is full.
+  outputs_queue:
+    capacity: 0
+
+
+  #################
+  # Falco plugins #
+  #################
+
+  # [Stable] `load_plugins` and `plugins`
+  #
+  # --- [Description]
+  #
+  # Falco plugins enable integration with other services in your ecosystem.
+  # They allow Falco to extend its functionality and leverage data sources such as
+  # Kubernetes audit logs or AWS CloudTrail logs. This enables Falco to perform
+  # fast on-host detections beyond syscalls and container events. The plugin
+  # system will continue to evolve with more specialized functionality in future
+  # releases.
+  #
+  # Please refer to the plugins repo at
+  # https://github.com/falcosecurity/plugins/blob/master/plugins/ for detailed
+  # documentation on the available plugins. This repository provides comprehensive
+  # information about each plugin and how to utilize them with Falco.
+  #
+  # Please note that if your intention is to enrich Falco syscall logs with fields
+  # such as `k8s.ns.name`, `k8s.pod.name`, and `k8s.pod.*`, you do not need to use
+  # the `k8saudit` plugin.
This information is automatically extracted from the
+  # container runtime socket. The `k8saudit` plugin is specifically designed to
+  # integrate with Kubernetes audit logs and is not required for basic enrichment
+  # of syscall logs with Kubernetes-related fields.
+  #
+  # --- [Usage]
+  #
+  # Disabled by default, indicated by an empty `load_plugins` list. Each plugin meant
+  # to be enabled needs to be listed as an explicit list item.
+  #
+  # For example, if you want to use the `k8saudit` plugin,
+  # ensure it is configured appropriately and then change this to:
+  # load_plugins: [k8saudit, json]
+  # -- Add here all plugins and their configuration. Please
   # consult the plugins documentation for more info. Remember to add the plugins name in
   # "load_plugins: []" in order to load them in Falco.
+  load_plugins: []
+
+  # -- Customize subsettings for each enabled plugin. These settings will only be
+  # applied when the corresponding plugin is enabled using the `load_plugins`
+  # option.
   plugins:
     - name: k8saudit
       library_path: libk8saudit.so
@@ -357,213 +759,181 @@ falco:
       library_path: libjson.so
       init_config: ""

-  # Setting this list to empty ensures that the above plugins are *not*
-  # loaded and enabled by default. If you want to use the above plugins,
-  # set a meaningful init_config/open_params for the cloudtrail plugin
-  # and then change this to:
-  # load_plugins: [cloudtrail, json]
-  # -- Add here the names of the plugins that you want to be loaded by Falco. Please make sure that
-  # plugins have ben configured under the "plugins" section before adding them here.
-  load_plugins: []
+  ######################
+  # Falco config files #
+  ######################
+
+  # [Stable] `config_files`
+  #
+  # Falco will load the additional config files specified here.
+  # They are loaded *after* the main config file has been processed,
+  # exactly in the order they are specified.
+  # Therefore, loaded config files *can* override values from the main config file.
+  # Also, nested includes are not allowed, i.e., included config files won't be able to include other config files.
+  #
+  # Like for 'rules_files', specifying a folder will load all the config files present in it in lexicographical order.
+  config_files:
+    - /etc/falco/config.d
+
+  # [Stable] `watch_config_files`
+  #
+  # Falco monitors configuration and rule files for changes and automatically
+  # reloads itself to apply the updated configuration when any modifications are
+  # detected. This feature is particularly useful when you want to make real-time
+  # changes to the configuration or rules of Falco without interrupting its
+  # operation or losing its state. For more information about Falco's state
+  # engine, please refer to the `base_syscalls` section.
   # -- Watch config file and rules files for modification.
   # When a file is modified, Falco will propagate new config,
   # by reloading itself.
   watch_config_files: true

-  # -- If true, the times displayed in log messages and output messages
-  # will be in ISO 8601. By default, times are displayed in the local
-  # time zone, as governed by /etc/localtime.
+  ##########################
+  # Falco outputs settings #
+  ##########################
+
+  # [Stable] `time_format_iso_8601`
+  #
+  # -- When enabled, Falco will display log and output messages with times in the ISO
+  # 8601 format. By default, times are shown in the local time zone determined by
+  # the /etc/localtime configuration.
   time_format_iso_8601: false

-  # -- Whether to output events in json or text.
+  # [Stable] `priority`
+  #
+  # -- Any rule with a priority level more severe than or equal to the specified
+  # minimum level will be loaded and run by Falco. This allows you to filter and
+  # control the rules based on their severity, ensuring that only rules of a
+  # certain priority or higher are active and evaluated by Falco.
Supported + # levels: "emergency", "alert", "critical", "error", "warning", "notice", + # "info", "debug" + priority: debug + + # [Stable] `json_output` + # + # -- When enabled, Falco will output alert messages and rules file + # loading/validation results in JSON format, making it easier for downstream + # programs to process and consume the data. By default, this option is disabled. json_output: false - # -- When using json output, whether or not to include the "output" property - # itself (e.g. "File below a known binary directory opened for writing - # (user=root ....") in the json output. + # [Stable] `json_include_output_property` + # + # -- When using JSON output in Falco, you have the option to include the "output" + # property itself in the generated JSON output. The "output" property provides + # additional information about the purpose of the rule. To reduce the logging + # volume, it is recommended to turn it off if it's not necessary for your use + # case. json_include_output_property: true - # -- When using json output, whether or not to include the "tags" property - # itself in the json output. If set to true, outputs caused by rules - # with no tags will have a "tags" field set to an empty array. If set to - # false, the "tags" field will not be included in the json output at all. + # [Stable] `json_include_tags_property` + # + # -- When using JSON output in Falco, you have the option to include the "tags" + # field of the rules in the generated JSON output. The "tags" field provides + # additional metadata associated with the rule. To reduce the logging volume, + # if the tags associated with the rule are not needed for your use case or can + # be added at a later stage, it is recommended to turn it off. json_include_tags_property: true - # -- Send information logs to syslog. Note these are *not* security - # notification logs! These are just Falco lifecycle (and possibly error) logs. - log_stderr: true - # -- Send information logs to stderr. 
Note these are *not* security - # notification logs! These are just Falco lifecycle (and possibly error) logs. - log_syslog: true - - # -- Minimum log level to include in logs. Note: these levels are - # separate from the priority field of rules. This refers only to the - # log level of falco's internal logging. Can be one of "emergency", - # "alert", "critical", "error", "warning", "notice", "info", "debug". - log_level: info - - # Falco is capable of managing the logs coming from libs. If enabled, - # the libs logger send its log records the same outputs supported by - # Falco (stderr and syslog). Disabled by default. - libs_logger: - # -- Enable the libs logger. - enabled: false - # -- Minimum log severity to include in the libs logs. Note: this value is - # separate from the log level of the Falco logger and does not affect it. - # Can be one of "fatal", "critical", "error", "warning", "notice", - # "info", "debug", "trace". - severity: debug - - # -- Minimum rule priority level to load and run. All rules having a - # priority more severe than this level will be loaded/run. Can be one - # of "emergency", "alert", "critical", "error", "warning", "notice", - # "informational", "debug". - priority: debug - - # -- Whether or not output to any of the output channels below is - # buffered. Defaults to false + # [Stable] `buffered_outputs` + # + # -- Enabling buffering for the output queue can offer performance optimization, + # efficient resource usage, and smoother data flow, resulting in a more reliable + # output mechanism. By default, buffering is disabled (false). buffered_outputs: false - # Falco uses a shared buffer between the kernel and userspace to pass - # system call information. 
When Falco detects that this buffer is - # full and system calls have been dropped, it can take one or more of - # the following actions: - # - ignore: do nothing (default when list of actions is empty) - # - log: log a DEBUG message noting that the buffer was full - # - alert: emit a Falco alert noting that the buffer was full - # - exit: exit Falco with a non-zero rc - # - # Notice it is not possible to ignore and log/alert messages at the same time. - # - # The rate at which log/alert messages are emitted is governed by a - # token bucket. The rate corresponds to one message every 30 seconds - # with a burst of one message (by default). - # - # The messages are emitted when the percentage of dropped system calls - # with respect the number of events in the last second - # is greater than the given threshold (a double in the range [0, 1]). + # [Stable] `outputs` # - # For debugging/testing it is possible to simulate the drops using - # the `simulate_drops: true`. In this case the threshold does not apply. - - syscall_event_drops: - # -- The messages are emitted when the percentage of dropped system calls - # with respect the number of events in the last second - # is greater than the given threshold (a double in the range [0, 1]). - threshold: .1 - # -- Actions to be taken when system calls were dropped from the circular buffer. - actions: - - log - - alert - # -- Rate at which log/alert messages are emitted. - rate: .03333 - # -- Max burst of messages emitted. - max_burst: 1 - - # Falco uses a shared buffer between the kernel and userspace to receive - # the events (eg., system call information) in userspace. + # -- A throttling mechanism, implemented as a token bucket, can be used to control + # the rate of Falco outputs. Each event source has its own rate limiter, + # ensuring that alerts from one source do not affect the throttling of others. + # The following options control the mechanism: + # - rate: the number of tokens (i.e. 
right to send a notification) gained per + # second. When 0, the throttling mechanism is disabled. Defaults to 0. + # - max_burst: the maximum number of tokens outstanding. Defaults to 1000. # - # Anyways, the underlying libraries can also timeout for various reasons. - # For example, there could have been issues while reading an event. - # Or the particular event needs to be skipped. - # Normally, it's very unlikely that Falco does not receive events consecutively. + # For example, setting the rate to 1 allows Falco to send up to 1000 + # notifications initially, followed by 1 notification per second. The burst + # capacity is fully restored after 1000 seconds of no activity. # - # Falco is able to detect such uncommon situation. + # Throttling can be useful in various scenarios, such as preventing notification + # floods, managing system load, controlling event processing, or complying with + # rate limits imposed by external systems or APIs. It allows for better resource + # utilization, avoids overwhelming downstream systems, and helps maintain a + # balanced and controlled flow of notifications. # - # Here you can configure the maximum number of consecutive timeouts without an event - # after which you want Falco to alert. - # By default this value is set to 1000 consecutive timeouts without an event at all. - # How this value maps to a time interval depends on the CPU frequency. + # With the default settings, the throttling mechanism is disabled. + outputs: + rate: 0 + max_burst: 1000 - syscall_event_timeouts: - # -- Maximum number of consecutive timeouts without an event - # after which you want Falco to alert. - max_consecutives: 1000 + ########################## + # Falco outputs channels # + ########################## - # Falco continuously monitors outputs performance. When an output channel does not allow - # to deliver an alert within a given deadline, an error is reported indicating - # which output is blocking notifications. 
- # The timeout error will be reported to the log according to the above log_* settings. - # Note that the notification will not be discarded from the output queue; thus, - # output channels may indefinitely remain blocked. - # An output timeout error indeed indicate a misconfiguration issue or I/O problems - # that cannot be recovered by Falco and should be fixed by the user. - # - # The "output_timeout" value specifies the duration in milliseconds to wait before - # considering the deadline exceed. - # - # With a 2000ms default, the notification consumer can block the Falco output - # for up to 2 seconds without reaching the timeout. - # -- Duration in milliseconds to wait before considering the output timeout deadline exceed. - output_timeout: 2000 + # Falco supports various output channels, such as syslog, stdout, file, gRPC, + # webhook, and more. You can enable or disable these channels as needed to + # control where Falco alerts and log messages are directed. This flexibility + # allows seamless integration with your preferred logging and alerting systems. + # Multiple outputs can be enabled simultaneously. - # A throttling mechanism implemented as a token bucket limits the - # rate of falco notifications. This throttling is controlled by the following configuration - # options: - # - rate: the number of tokens (i.e. right to send a notification) - # gained per second. Defaults to 1. - # - max_burst: the maximum number of tokens outstanding. Defaults to 1000. + # [Stable] `stdout_output` # - # With these defaults, falco could send up to 1000 notifications after - # an initial quiet period, and then up to 1 notification per second - # afterward. It would gain the full burst back after 1000 seconds of - # no activity. - - outputs: - # -- Number of tokens gained per second. - rate: 1 - # -- Maximum number of tokens outstanding. - max_burst: 1000 - - # Where security notifications should go. - # Multiple outputs can be enabled. 
+ # -- Redirect logs to standard output. + stdout_output: + enabled: true + # [Stable] `syslog_output` + # + # -- Send logs to syslog. syslog_output: - # -- Enable syslog output for security notifications. enabled: true - # If keep_alive is set to true, the file will be opened once and - # continuously written to, with each output message on its own - # line. If keep_alive is set to false, the file will be re-opened - # for each output message. + # [Stable] `file_output` # - # Also, the file will be closed and reopened if falco is signaled with - # SIGUSR1. - + # -- When appending Falco alerts to a file, each new alert will be added to a new + # line. It's important to note that Falco does not perform log rotation for this + # file. If the `keep_alive` option is set to `true`, the file will be opened once + # and continuously written to, else the file will be reopened for each output + # message. Furthermore, the file will be closed and reopened if Falco receives + # the SIGUSR1 signal. file_output: - # -- Enable file output for security notifications. enabled: false - # -- Open file once or every time a new notification arrives. keep_alive: false - # -- The filename for logging notifications. filename: ./events.txt - stdout_output: - # -- Enable stdout output for security notifications. - enabled: true + # [Stable] `http_output` + # + # -- Send logs to an HTTP endpoint or webhook. + http_output: + enabled: false + url: "" + user_agent: "falcosecurity/falco" + # -- Tell Falco to not verify the remote server. + insecure: false + # -- Path to the CA certificate that can verify the remote server. + ca_cert: "" + # -- Path to a specific file that will be used as the CA certificate store. + ca_bundle: "" + # -- Path to a folder that will be used as the CA certificate store. CA certificates need to be + # stored as individual PEM files in this directory. + ca_path: "/etc/falco/certs/" + # -- Tell Falco to use mTLS + mtls: false + # -- Path to the client cert.
+ client_cert: "/etc/falco/certs/client/client.crt" + # -- Path to the client key. + client_key: "/etc/falco/certs/client/client.key" + # -- Whether to echo server answers to stdout. + echo: false + # -- Whether to compress data sent to the HTTP endpoint. + compress_uploads: false + # -- Whether to keep the connection alive. + keep_alive: false - # Falco contains an embedded webserver that exposes a healthy endpoint that can be used to check if Falco is up and running. - # By default the endpoint is /healthz # - # The ssl_certificate is a combination SSL Certificate and corresponding - # key contained in a single file. You can generate a key/cert as follows: # - # $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem - # $ cat certificate.pem key.pem > falco.pem - # $ sudo cp falco.pem /etc/falco/falco.pem - webserver: - # -- Enable Falco embedded webserver. - enabled: true - # -- Port where Falco embedded webserver listen to connections. - listen_port: 8765 - # -- Endpoint where Falco exposes the health status. - k8s_healthz_endpoint: /healthz - # -- Enable SSL on Falco embedded webserver. - ssl_enabled: false - # -- Certificate bundle path for the Falco embedded webserver. - ssl_certificate: /etc/falco/falco.pem - # Possible additional things you might want to do with program output: # - send to a slack webhook: # program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" @@ -571,73 +941,589 @@ falco: # program: logger -t falco-test # - send over a network connection: # program: nc host.example.com 80 -
- # - # Also, the program will be closed and reopened if falco is signaled with - # SIGUSR1. + # If `keep_alive` is set to `true`, the program will be started once and + # continuously written to, with each output message on its own line. If + # `keep_alive` is set to `false`, the program will be re-spawned for each output + # message. Furthermore, the program will be re-spawned if Falco receives + # the SIGUSR1 signal. program_output: - # -- Enable program output for security notifications. enabled: false - # -- Start the program once or re-spawn when a notification arrives. keep_alive: false - # -- Command to execute for program output. program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" - http_output: - # -- Enable http output for security notifications. + # [Stable] `grpc_output` + # + # -- Use gRPC as an output service. + # + # gRPC is a modern and high-performance framework for remote procedure calls + # (RPC). It utilizes protocol buffers for efficient data serialization. The gRPC + # output in Falco provides a modern and efficient way to integrate with other + # systems. By default the setting is turned off. Enabling this option stores + # output events in memory until they are consumed by a gRPC client. Ensure that + # you have a consumer for the output events or leave it disabled. + grpc_output: enabled: false - # -- When set, this will override an auto-generated URL which matches the falcosidekick Service. - # -- When including Falco inside a parent helm chart, you must set this since the auto-generated URL won't match (#280). - url: "" - user_agent: "falcosecurity/falco" - # Falco supports running a gRPC server with two main binding types - # 1. Over the network with mandatory mutual TLS authentication (mTLS) - # 2. 
Over a local unix socket with no authentication - # By default, the gRPC server is disabled, with no enabled services (see grpc_output) - # please comment/uncomment and change accordingly the options below to configure it. - # Important note: if Falco has any troubles creating the gRPC server - # this information will be logged, however the main Falco daemon will not be stopped. - # gRPC server over network with (mandatory) mutual TLS configuration. - # This gRPC server is secure by default so you need to generate certificates and update their paths here. - # By default the gRPC server is off. - # You can configure the address to bind and expose it. - # By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use. + ########################## + # Falco exposed services # + ########################## + + # [Stable] `grpc` + # + # Falco provides support for running a gRPC server using two main binding types: + # 1. Over the network with mandatory mutual TLS authentication (mTLS), which + # ensures secure communication. + # 2. Local Unix socket binding with no authentication. By default, the + # gRPC server in Falco is turned off with no enabled services (see + # `grpc_output` setting). + # + # To configure the gRPC server in Falco, you can make the following changes to + # the options: + # + # - Uncomment the relevant configuration options related to the gRPC server. + # - Update the paths of the generated certificates for mutual TLS authentication + # if you choose to use mTLS. + # - Specify the address to bind and expose the gRPC server. + # - Adjust the threadiness configuration to control the number of threads and + # contexts used by the server. + # + # Keep in mind that if any issues arise while creating the gRPC server, the + # information will be logged, but it will not stop the main Falco daemon.
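+ # As a minimal sketch (not an upstream default), a deployment that feeds
+ # alerts to a local gRPC client would enable both the unix-socket server
+ # below and the `grpc_output` service described above:
+ #
+ # grpc:
+ #   enabled: true
+ #   bind_address: "unix:///run/falco/falco.sock"
+ # grpc_output:
+ #   enabled: true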
+ + # gRPC server using mTLS # grpc: # enabled: true # bind_address: "0.0.0.0:5060" - # # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores + # # When the `threadiness` value is set to 0, Falco will automatically determine + # # the appropriate number of threads based on the number of online cores in the system. # threadiness: 0 # private_key: "/etc/falco/certs/server.key" # cert_chain: "/etc/falco/certs/server.crt" # root_certs: "/etc/falco/certs/ca.crt" - # -- gRPC server using an unix socket + # -- gRPC server using a local unix socket grpc: - # -- Enable the Falco gRPC server. enabled: false - # -- Bind address for the grpc server. - bind_address: "unix:///var/run/falco/falco.sock" - # -- Number of threads (and context) the gRPC server will use, 0 by default, which means "auto". + bind_address: "unix:///run/falco/falco.sock" + # -- When the `threadiness` value is set to 0, Falco will automatically determine + # the appropriate number of threads based on the number of online cores in the system. threadiness: 0 - # gRPC output service. - # By default it is off. - # By enabling this all the output events will be kept in memory until you read them with a gRPC client. - # Make sure to have a consumer for them or leave this disabled. - grpc_output: - # -- Enable the gRPC output and events will be kept in memory until you read them with a gRPC client. + # [Stable] `webserver` + # + # -- Falco supports an embedded webserver that runs within the Falco process, + # providing a lightweight and efficient way to expose web-based functionalities + # without the need for an external web server. The following endpoints are + # exposed: + # - /healthz: designed to be used for checking the health and availability of + # the Falco application (the name of the endpoint is configurable). 
+ # - /versions: responds with a JSON object containing the version numbers of the + # internal Falco components (similar output as `falco --version -o + # json_output=true`). + # + # Please note that the /versions endpoint is particularly useful for other Falco + # services, such as `falcoctl`, to retrieve information about a running Falco + # instance. If you plan to use `falcoctl` locally or with Kubernetes, make sure + # the Falco webserver is enabled. + # + # The behavior of the webserver can be controlled with the following options, + # which are enabled by default: + # + # The `ssl_certificate` option specifies a combined SSL certificate and + # corresponding key that are contained in a single file. You can generate a + # key/cert as follows: + # + # $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem + # $ cat certificate.pem key.pem > falco.pem + # $ sudo cp falco.pem /etc/falco/falco.pem + webserver: + enabled: true + # When the `threadiness` value is set to 0, Falco will automatically determine + # the appropriate number of threads based on the number of online cores in the system. + threadiness: 0 + listen_port: 8765 + k8s_healthz_endpoint: /healthz + # [Incubating] `prometheus_metrics_enabled` + # + # Enable the metrics endpoint providing Prometheus values. + # It will only have an effect if metrics.enabled is set to true as well.
+ prometheus_metrics_enabled: false + ssl_enabled: false + ssl_certificate: /etc/falco/falco.pem + + ############################################################################## + # Falco logging / alerting / metrics related to software functioning (basic) # + ############################################################################## + + # [Stable] `log_stderr` and `log_syslog` + # + # Falco's logs related to the functioning of the software, which are not related + # to Falco alert outputs but rather its lifecycle, settings and potential + # errors, can be directed to stderr and/or syslog. + # -- Send information logs to stderr. Note these are *not* security + # notification logs! These are just Falco lifecycle (and possibly error) logs. + log_stderr: true + # -- Send information logs to syslog. Note these are *not* security + # notification logs! These are just Falco lifecycle (and possibly error) logs. + log_syslog: true + + # [Stable] `log_level` + # + # -- The `log_level` setting determines the minimum log level to include in Falco's + # logs related to the functioning of the software. This setting is separate from + # the `priority` field of rules and specifically controls the log level of + # Falco's operational logging. By specifying a log level, you can control the + # verbosity of Falco's operational logs. Only logs of a certain severity level + # or higher will be emitted. Supported levels: "emergency", "alert", "critical", + # "error", "warning", "notice", "info", "debug". + log_level: info + + # [Stable] `libs_logger` + # + # -- The `libs_logger` setting in Falco determines the minimum log level to include + # in the logs related to the functioning of the software of the underlying + # `libs` library, which Falco utilizes. This setting is independent of the + # `priority` field of rules and the `log_level` setting that controls Falco's + # operational logs. 
It allows you to specify the desired log level for the `libs` + # library specifically, providing more granular control over the logging + # behavior of the underlying components used by Falco. Only logs of a certain + # severity level or higher will be emitted. Supported levels: "emergency", + # "alert", "critical", "error", "warning", "notice", "info", "debug". It is not + # recommended for production use. + libs_logger: enabled: false + severity: debug - # Container orchestrator metadata fetching params - metadata_download: - # -- Max allowed response size (in Mb) when fetching metadata from Kubernetes. - max_mb: 100 - # -- Sleep time (in μs) for each download chunck when fetching metadata from Kubernetes. - chunk_wait_us: 1000 - # -- Watch frequency (in seconds) when fetching metadata from Kubernetes. - watch_freq_sec: 1 + ################################################################################# + # Falco logging / alerting / metrics related to software functioning (advanced) # + ################################################################################# + + # [Stable] `output_timeout` + # + # Generates Falco operational logs when `log_level=notice` at minimum + # + # A timeout error occurs when a process or operation takes longer to complete + # than the allowed or expected time limit. In the context of Falco, an output + # timeout error refers to the situation where an output channel fails to deliver + # an alert within a specified deadline. Various reasons, such as network issues, + # resource constraints, or performance bottlenecks can cause timeouts. + # + # -- The `output_timeout` parameter specifies the duration, in milliseconds, to + # wait before considering the deadline exceeded. By default, the timeout is set + # to 2000ms (2 seconds), meaning that the consumer of Falco outputs can block + # the Falco output channel for up to 2 seconds without triggering a timeout + # error. 
+ # + # Falco actively monitors the performance of output channels. With this setting + # the timeout error can be logged, but please note that this requires setting + # Falco's operational logs `log_level` to a minimum of `notice`. + # + # It's important to note that Falco outputs will not be discarded from the + # output queue. This means that if an output channel becomes blocked + # indefinitely, it indicates a potential issue that needs to be addressed by the + # user. + output_timeout: 2000 + + # [Stable] `syscall_event_timeouts` + # + # -- Generates Falco operational logs when `log_level=notice` at minimum + # + # Falco utilizes a shared buffer between the kernel and userspace to receive + # events, such as system call information, in userspace. However, there may be + # cases where timeouts occur in the underlying libraries due to issues in + # reading events or the need to skip a particular event. While it is uncommon + # for Falco to experience consecutive event timeouts, it has the capability to + # detect such situations. You can configure the maximum number of consecutive + # timeouts without an event after which Falco will generate an alert, but please + # note that this requires setting Falco's operational logs `log_level` to a + # minimum of `notice`. The default value is set to 1000 consecutive timeouts + # without receiving any events. The mapping of this value to a time interval + # depends on the CPU frequency. + syscall_event_timeouts: + max_consecutives: 1000 + + # [Stable] `syscall_event_drops` + # + # Generates "Falco internal: syscall event drop" rule output when `priority=debug` at minimum + # + # --- [Description] + # + # Falco uses a shared buffer between the kernel and userspace to pass system + # call information. 
When Falco detects that this buffer is full and system calls + # have been dropped, it can take one or more of the following actions: + # - ignore: do nothing (default when list of actions is empty) + # - log: log a DEBUG message noting that the buffer was full + # - alert: emit a Falco alert noting that the buffer was full + # - exit: exit Falco with a non-zero rc + # + # Notice it is not possible to ignore and log/alert messages at the same time. + # + # The rate at which log/alert messages are emitted is governed by a token + # bucket. The rate corresponds to one message every 30 seconds with a burst of + # one message (by default). + # + # The messages are emitted when the percentage of dropped system calls with + # respect the number of events in the last second is greater than the given + # threshold (a double in the range [0, 1]). If you want to be alerted on any + # drops, set the threshold to 0. + # + # For debugging/testing it is possible to simulate the drops using the + # `simulate_drops: true`. In this case the threshold does not apply. + # + # --- [Usage] + # + # Enabled by default, but requires Falco rules config `priority` set to `debug`. + # Emits a Falco rule named "Falco internal: syscall event drop" as many times in + # a given time period as dictated by the settings. Statistics here reflect the + # delta in a 1s time period. + # + # If instead you prefer periodic metrics of monotonic counters at a regular + # interval, which include syscall drop statistics and additional metrics, + # explore the `metrics` configuration option. + # -- For debugging/testing it is possible to simulate the drops using + # the `simulate_drops: true`. In this case the threshold does not apply. + syscall_event_drops: + # -- The messages are emitted when the percentage of dropped system calls + # with respect the number of events in the last second + # is greater than the given threshold (a double in the range [0, 1]). 
+ threshold: .1 + # -- Actions to be taken when system calls were dropped from the circular buffer. + actions: + - log + - alert + # -- Rate at which log/alert messages are emitted. + rate: .03333 + # -- Max burst of messages emitted. + max_burst: 1 + # -- Flag to enable drops for debug purposes. + simulate_drops: false + + # [Experimental] `metrics` + # + # -- Generates "Falco internal: metrics snapshot" rule output when `priority=info` at minimum + # + # periodic metric snapshots (including stats and resource utilization) captured + # at regular intervals + # + # --- [Description] + # + # Consider these key points about the `metrics` feature in Falco: + # + # - It introduces a redesigned stats/metrics system. + # - Native support for resource utilization metrics and specialized performance + # metrics. + # - Metrics are emitted as monotonic counters at predefined intervals + # (snapshots). + # - All metrics are consolidated into a single log message, adhering to the + # established rules schema and naming conventions. + # - Additional info fields complement the metrics and facilitate customized + # statistical analyses and correlations. + # - The metrics framework is designed for easy future extension. + # + # The `metrics` feature follows a specific schema and field naming convention. + # All metrics are collected as subfields under the `output_fields` key, similar + # to regular Falco rules. Each metric field name adheres to the grammar used in + # Falco rules. There are two new field classes introduced: `falco.` and `scap.`. + # The `falco.` class represents userspace counters, statistics, resource + # utilization, or useful information fields. The `scap.` class represents + # counters and statistics mostly obtained from Falco's kernel instrumentation + # before events are sent to userspace, but can include scap userspace stats as + # well. 
+ # + # It's important to note that the output fields and their names can be subject + # to change until the metrics feature reaches a stable release. + # + # To customize the hostname in Falco, you can set the environment variable + # `FALCO_HOSTNAME` to your desired hostname. This is particularly useful in + # Kubernetes deployments where the hostname can be set to the pod name. + # + # --- [Usage] + # + # `enabled`: Disabled by default. + # + # `interval`: The stats interval in Falco follows the time duration definitions + # used by Prometheus. + # https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations + # + # Time durations are specified as a number, followed immediately by one of the + # following units: + # + # ms - millisecond + # s - second + # m - minute + # h - hour + # d - day - assuming a day has always 24h + # w - week - assuming a week has always 7d + # y - year - assuming a year has always 365d + # + # Example of a valid time duration: 1h30m20s10ms + # + # A minimum interval of 100ms is enforced for metric collection. However, for + # production environments, we recommend selecting one of the following intervals + # for optimal monitoring: + # + # 15m + # 30m + # 1h + # 4h + # 6h + # + # `output_rule`: To enable seamless metrics and performance monitoring, we + # recommend emitting metrics as the rule "Falco internal: metrics snapshot". + # This option is particularly useful when Falco logs are preserved in a data + # lake. Please note that to use this option, the Falco rules config `priority` + # must be set to `info` at a minimum. + # + # `output_file`: Append stats to a `jsonl` file. Use with caution in production + # as Falco does not automatically rotate the file. + # + # `resource_utilization_enabled`: Emit CPU and memory usage metrics. CPU usage + # is reported as a percentage of one CPU and can be normalized to the total + # number of CPUs to determine overall usage. 
Memory metrics are provided in raw + # units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) + # and can be uniformly converted to megabytes (MB) using the + # `convert_memory_to_mb` functionality. In environments such as Kubernetes when + # deployed as daemonset, it is crucial to track Falco's container memory usage. + # To customize the path of the memory metric file, you can create an environment + # variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. By + # default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to + # monitor container memory usage, which aligns with Kubernetes' + # `container_memory_working_set_bytes` metric. Finally, we emit the overall host + # CPU and memory usages, along with the total number of processes and open file + # descriptors (fds) on the host, obtained from the proc file system unrelated to + # Falco's monitoring. These metrics help assess Falco's usage in relation to the + # server's workload intensity. + # + # `rules_counters_enabled`: Emit counts for each rule. + # + # `state_counters_enabled`: Emit counters related to Falco's state engine, including + # added, removed threads or file descriptors (fds), and failed lookup, store, or + # retrieve actions in relation to Falco's underlying process cache table (threadtable). + # We also log the number of currently cached containers if applicable. + # + # `kernel_event_counters_enabled`: Emit kernel side event and drop counters, as + # an alternative to `syscall_event_drops`, but with some differences. These + # counters reflect monotonic values since Falco's start and are exported at a + # constant stats interval. + # + # `libbpf_stats_enabled`: Exposes statistics similar to `bpftool prog show`, + # providing information such as the number of invocations of each BPF program + # attached by Falco and the time spent in each program measured in nanoseconds. + # To enable this feature, the kernel must be >= 5.1, and the kernel + # configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, + # or an equivalent statistics feature, is not available for non `*bpf*` drivers. + # Additionally, please be aware that the current implementation of `libbpf` does + # not support granularity of statistics at the bpf tail call level. + # + # `include_empty_values`: When the option is set to true, fields with an empty + # numeric value will be included in the output.
However, this rule does not
+ # apply to high-level fields such as `n_evts` or `n_drops`; they will always be
+ # included in the output even if their value is empty. This option can be
+ # beneficial for exploring the data schema and ensuring that fields with empty
+ # values are included in the output.
+ # todo: prometheus export option
+ # todo: syscall_counters_enabled option
+ metrics:
+ enabled: false
+ interval: 1h
+ output_rule: true
+ # output_file: /tmp/falco_stats.jsonl
+ rules_counters_enabled: true
+ resource_utilization_enabled: true
+ state_counters_enabled: true
+ kernel_event_counters_enabled: true
+ libbpf_stats_enabled: true
+ convert_memory_to_mb: true
+ include_empty_values: false
+
+
+ #######################################
+ # Falco performance tuning (advanced) #
+ #######################################
+
+ # [Experimental] `base_syscalls`, use with caution, read carefully
+ #
+ # --- [Description]
+ #
+ # -- This option configures the set of syscalls that Falco traces.
+ #
+ # --- [Falco's State Engine]
+ #
+ # Falco requires a set of syscalls to build up state in userspace. For example,
+ # when spawning a new process or network connection, multiple syscalls are
+ # involved. Furthermore, properties of a process during its lifetime can be
+ # modified by syscalls. Falco accounts for this by enabling the collection of
+ # additional syscalls beyond the ones defined in the rules and by managing a smart
+ # process cache table in userspace. Processes are purged from this table when a
+ # process exits.
+ #
+ # By default, with
+ # ```
+ # base_syscalls.custom_set = []
+ # base_syscalls.repair = false
+ # ```
+ # Falco enables tracing for a syscall set gathered: (1) from (enabled) Falco
+ # rules (2) from a static, more verbose set defined in
+ # `libsinsp::events::sinsp_state_sc_set` in
+ # libs/userspace/libsinsp/events/sinsp_events_ppm_sc.cpp. This allows Falco to
+ # successfully build up its state engine and life-cycle management.
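Since this values.yaml nests the upstream falco.yaml settings under the `falco:` key, the defaults above can be overridden from a values file at install time. A minimal sketch (the key nesting is assumed from this chart's layout):

```yaml
# values-override.yaml -- sketch; mirrors the base_syscalls defaults above
falco:
  base_syscalls:
    custom_set: []   # no extra syscalls beyond those needed by enabled rules
    repair: false    # keep the static, more verbose state-engine set
```

Applied with `helm upgrade --install falco falcosecurity/falco -n falco -f values-override.yaml`.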
+ #
+ # If the default behavior described above does not fit the user's use case for
+ # Falco, the `base_syscalls` option allows for finer end-user control of
+ # syscalls traced by Falco.
+ #
+ # --- [base_syscalls.custom_set]
+ #
+ # CAUTION: Misconfiguration of this setting may result in incomplete Falco event
+ # logs or Falco being unable to trace events entirely.
+ #
+ # `base_syscalls.custom_set` allows the user to explicitly define an additional
+ # set of syscalls to be traced in addition to the syscalls from each enabled
+ # Falco rule.
+ #
+ # This is useful in lowering CPU utilization and further tailoring Falco to
+ # specific environments according to your threat model and budget constraints.
+ #
+ # --- [base_syscalls.repair]
+ #
+ # `base_syscalls.repair` is an alternative to Falco's default state engine
+ # enforcement. When enabled, this option is designed to (1) ensure that Falco's
+ # state engine is correctly and successfully built up and (2) be the most system
+ # resource-friendly by activating the least number of additional syscalls
+ # (outside of those enabled for enabled rules).
+ #
+ # Setting `base_syscalls.repair` to `true` allows Falco to automatically
+ # configure what is described in the [Suggestions] section below.
+ #
+ # `base_syscalls.repair` can be enabled with an empty custom set, meaning with
+ # the following,
+ # ```
+ # base_syscalls.custom_set = []
+ # base_syscalls.repair = true
+ # ```
+ # Falco enables tracing for a syscall set gathered: (1) from (enabled) Falco
+ # rules and (2) from a minimal set of additional syscalls needed to "repair" the
+ # state engine and properly log event conditions specified in enabled Falco
+ # rules.
+ #
+ # --- [Usage]
+ #
+ # List of system call names (<syscall_name>), negative ("!")
+ # notation supported.
+ #
+ # Example: base_syscalls.custom_set: [<syscall_name>, <syscall_name>,
+ # "!<syscall_name>"] base_syscalls.repair: <true|false>
+ #
+ # We recommend only excluding syscalls, e.g.
"!mprotect" if you need a fast + # deployment update (overriding rules), else remove unwanted syscalls from the + # Falco rules. + # + # Passing `-o "log_level=debug" -o "log_stderr=true" --dry-run` to Falco's cmd + # args will print the final set of syscalls to STDOUT. + # + # --- [Suggestions] + # + # NOTE: setting `base_syscalls.repair: true` automates the following suggestions + # for you. + # + # These suggestions are subject to change as Falco and its state engine evolve. + # + # For execve* events: Some Falco fields for an execve* syscall are retrieved + # from the associated `clone`, `clone3`, `fork`, `vfork` syscalls when spawning + # a new process. The `close` syscall is used to purge file descriptors from + # Falco's internal thread / process cache table and is necessary for rules + # relating to file descriptors (e.g. open, openat, openat2, socket, connect, + # accept, accept4 ... and many more) + # + # Consider enabling the following syscalls in `base_syscalls.custom_set` for + # process rules: [clone, clone3, fork, vfork, execve, execveat, close] + # + # For networking related events: While you can log `connect` or `accept*` + # syscalls without the socket syscall, the log will not contain the ip tuples. + # Additionally, for `listen` and `accept*` syscalls, the `bind` syscall is also + # necessary. + # + # We recommend the following as the minimum set for networking-related rules: + # [clone, clone3, fork, vfork, execve, execveat, close, socket, bind, + # getsockopt] + # + # Lastly, for tracking the correct `uid`, `gid` or `sid`, `pgid` of a process + # when the running process opens a file or makes a network connection, consider + # adding the following to the above recommended syscall sets: ... setresuid, + # setsid, setuid, setgid, setpgid, setresgid, setsid, capset, chdir, chroot, + # fchdir ... 
+ base_syscalls: + custom_set: [] + repair: false + + + ############## + # Falco libs # + ############## + + # [Experimental] `falco_libs` - Potentially subject to more frequent changes + # + # `thread_table_size` + # + # Set the maximum number of entries (the absolute maximum value can only be MAX UINT32) + # for Falco's internal threadtable (process cache). Please note that Falco operates at a + # granular level, focusing on individual threads. Falco rules reference the thread leader + # as the process. The size of the threadtable should typically be much higher than the + # number of currently alive processes. The default value should work well on modern + # infrastructures and be sufficient to absorb bursts. + # + # Reducing its size can help in better memory management, but as a consequence, your + # process tree may be more frequently disrupted due to missing threads. You can explore + # `metrics.state_counters_enabled` to measure how the internal state handling is performing, + # and the fields called `n_drops_full_threadtable` or `n_store_evts_drops` will inform you + # if you should increase this value for optimal performance. + falco_libs: + thread_table_size: 262144 + + # [Stable] Guidance for Kubernetes container engine command-line args settings + # + # Modern cloud environments, particularly Kubernetes, heavily rely on + # containerized workload deployments. When capturing events with Falco, it + # becomes essential to identify the owner of the workload for which events are + # being captured, such as syscall events. Falco integrates with the container + # runtime to enrich its events with container information, including fields like + # `container.image.repository`, `container.image.tag`, ... , `k8s.ns.name`, + # `k8s.pod.name`, `k8s.pod.*` in the Falco output (Falco retrieves Kubernetes + # namespace and pod name directly from the container runtime, see + # https://falco.org/docs/reference/rules/supported-fields/#field-class-container). 
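The enrichment described above needs Falco to reach the container runtime's socket. With this chart, the socket locations live under `falco.collectors`; a sketch for a k3s-style containerd socket (the path is illustrative, and the `collectors.containerd` key is assumed by analogy with the chart's CRI-O collector settings):

```yaml
# values-override.yaml -- sketch; socket path is illustrative
falco:
  collectors:
    containerd:
      enabled: true
      socket: /run/k3s/containerd/containerd.sock
```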
+ #
+ # Furthermore, Falco exposes container events themselves as a data source for
+ # alerting. To achieve this integration with the container runtime, Falco
+ # requires access to the runtime socket. By default, for Kubernetes, Falco
+ # attempts to connect to the following sockets:
+ # "/run/containerd/containerd.sock", "/run/crio/crio.sock",
+ # "/run/k3s/containerd/containerd.sock". If you have a custom path, you can use
+ # the `--cri` option to specify the correct location.
+ #
+ # In some cases, you may encounter empty fields for container metadata. To
+ # address this, you can explore the `--disable-cri-async` option, which disables
+ # asynchronous fetching if the fetch operation is not completing quickly enough.
+ #
+ # To get more information on these command-line arguments, you can run `falco
+ # --help` in your terminal to view their current descriptions.
+ #
+ # !!! The options mentioned here are not available in the falco.yaml
+ # configuration file. Instead, they can be used as a command-line argument
+ # when running the Falco command.
diff --git a/charts/falco/falco/values.yaml b/charts/falco/falco/values.yaml
index 5b0b0cd3b..7aca156f6 100644
--- a/charts/falco/falco/values.yaml
+++ b/charts/falco/falco/values.yaml
@@ -13,16 +13,17 @@ falco:
 # -- The image repository to pull from
 repository: falcosecurity/falco-no-driver
 # -- The image tag to pull. Overrides the image tag whose default is the chart appVersion.
- tag: 0.32.2
+ tag: 0.38.2
 # -- Secrets containing credentials when pulling from private/secure registries.
 imagePullSecrets: []
 # -- Put here the new name if you want to override the release name used for Falco components.
 nameOverride: ""
 # -- Same as nameOverride but for the fullname.
 fullnameOverride: ""
- rbac:
- # Create and use rbac resources when set to true. Needed to fetch k8s metadata from the api-server.
- create: true
+ # -- Override the deployment namespace
+ namespaceOverride: ""
+ # -- Add additional pod annotations
+ podAnnotations: {}
 serviceAccount:
 # -- Specifies whether a service account should be created.
 create: true
@@ -31,8 +32,9 @@ falco:
 # -- The name of the service account to use.
 # If not set and create is true, a name is generated using the fullname template
 name: ""
- # -- Add additional pod annotations
- podAnnotations: {}
+ rbac:
+ # Create and use rbac resources when set to true. Needed to list and update configmaps in Falco's namespace.
+ create: true
 # -- Add additional pod labels
 podLabels: {}
 # -- Set pod priorityClassName
@@ -49,7 +51,7 @@ falco:
 # 1) driver.enabled = false:
 # securityContext: {}
 #
- # 2) driver.enabled = true and driver.kind = module:
+ # 2) driver.enabled = true and (driver.kind = module || driver.kind = modern-bpf):
 # securityContext:
 # privileged: true
 #
@@ -79,6 +81,8 @@ falco:
 cpu: 100m
 memory: 512Mi
 # -- Maximum amount of resources that Falco container could get.
+ # If you are enabling more than one source in falco, then consider increasing
+ # the cpu limits.
 limits:
 cpu: 1000m
 memory: 1024Mi
@@ -86,10 +90,12 @@ falco:
 nodeSelector: {}
 # -- Affinity constraint for pods' scheduling.
 affinity: {}
- # -- Tolerations to allow Falco to run on Kubernetes 1.6 masters.
+ # -- Tolerations to allow Falco to run on Kubernetes masters.
 tolerations:
 - effect: NoSchedule
 key: node-role.kubernetes.io/master
+ - effect: NoSchedule
+ key: node-role.kubernetes.io/control-plane
 # -- Parameters used
 healthChecks:
 livenessProbe:
@@ -117,6 +123,10 @@ falco:
 controller:
 # Available options: deployment, daemonset.
kind: daemonset
+ # Annotations to add to the daemonset or deployment
+ annotations: {}
+ # -- Extra labels to add to the daemonset or deployment
+ labels: {}
 daemonset:
 updateStrategy:
 # You can also customize maxUnavailable or minReadySeconds if you
@@ -128,6 +138,8 @@ falco:
 # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
 # For more info check the section on Plugins in the README.md file.
 replicas: 1
+ # -- Number of old revisions to retain to allow rollback (If not set, default Kubernetes value is set to 10)
+ # revisionHistoryLimit: 1
 # -- Network services configuration (scenario requirement)
 # Add here your services to be deployed together with Falco.
 services:
@@ -139,6 +151,98 @@ falco:
 # nodePort: 30007
 # protocol: TCP

+ # -- metrics configures Falco to enable and expose the metrics.
+ metrics:
+ # -- enabled specifies whether the metrics should be enabled.
+ enabled: false
+ # -- interval is the stats interval; Falco follows the time duration definitions
+ # used by Prometheus.
+ # https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations
+ # Time durations are specified as a number, followed immediately by one of the
+ # following units:
+ # ms - millisecond
+ # s - second
+ # m - minute
+ # h - hour
+ # d - day - assuming a day has always 24h
+ # w - week - assuming a week has always 7d
+ # y - year - assuming a year has always 365d
+ # Example of a valid time duration: 1h30m20s10ms
+ # A minimum interval of 100ms is enforced for metric collection. However, for
+ # production environments, we recommend selecting one of the following intervals
+ # for optimal monitoring:
+ # 15m
+ # 30m
+ # 1h
+ # 4h
+ # 6h
+ interval: 1h
+ # -- outputRule enables seamless metrics and performance monitoring; we
+ # recommend emitting metrics as the rule "Falco internal: metrics snapshot".
+ # This option is particularly useful when Falco logs are preserved in a data
+ # lake.
Please note that to use this option, the Falco rules config `priority` + # must be set to `info` at a minimum. + outputRule: false + # -- rulesCountersEnabled specifies whether the counts for each rule should be emitted. + rulesCountersEnabled: true + # -- resourceUtilizationEnabled`: Emit CPU and memory usage metrics. CPU usage + # is reported as a percentage of one CPU and can be normalized to the total + # number of CPUs to determine overall usage. Memory metrics are provided in raw + # units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) + # and can be uniformly converted to megabytes (MB) using the + # `convert_memory_to_mb` functionality. In environments such as Kubernetes when + # deployed as daemonset, it is crucial to track Falco's container memory usage. + # To customize the path of the memory metric file, you can create an environment + # variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. By + # default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to + # monitor container memory usage, which aligns with Kubernetes' + # `container_memory_working_set_bytes` metric. Finally, we emit the overall host + # CPU and memory usages, along with the total number of processes and open file + # descriptors (fds) on the host, obtained from the proc file system unrelated to + # Falco's monitoring. These metrics help assess Falco's usage in relation to the + # server's workload intensity. + resourceUtilizationEnabled: true + # stateCountersEnabled emits counters related to Falco's state engine, including + # added, removed threads or file descriptors (fds), and failed lookup, store, or + # retrieve actions in relation to Falco's underlying process cache table (threadtable). + # We also log the number of currently cached containers if applicable. 
+ stateCountersEnabled: true
+ # kernelEventCountersEnabled emits kernel side event and drop counters, as
+ # an alternative to `syscall_event_drops`, but with some differences. These
+ # counters reflect monotonic values since Falco's start and are exported at a
+ # constant stats interval.
+ kernelEventCountersEnabled: true
+ # -- libbpfStatsEnabled exposes statistics similar to `bpftool prog show`,
+ # providing information such as the number of invocations of each BPF program
+ # attached by Falco and the time spent in each program measured in nanoseconds.
+ # To enable this feature, the kernel must be >= 5.1, and the kernel
+ # configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option,
+ # or an equivalent statistics feature, is not available for non `*bpf*` drivers.
+ # Additionally, please be aware that the current implementation of `libbpf` does
+ # not support granularity of statistics at the bpf tail call level.
+ libbpfStatsEnabled: true
+ # -- convertMemoryToMB specifies whether the memory should be converted to mb.
+ convertMemoryToMB: true
+ # -- includeEmptyValues specifies whether the empty values should be included in the metrics.
+ includeEmptyValues: false
+ # -- service exposes the metrics service to be accessed from within the cluster.
+ # ref: https://kubernetes.io/docs/concepts/services-networking/service/
+ service:
+ # -- create specifies whether a service should be created.
+ create: true
+ # -- type denotes the service type. Setting it to "ClusterIP" ensures that the metrics are accessible
+ # only from within the cluster.
+ type: ClusterIP
+ # -- ports denotes all the ports on which the Service will listen.
+ ports:
+ # -- metrics denotes a listening service named "metrics".
+ metrics:
+ # -- port is the port on which the Service will listen.
+ port: 8765
+ # -- targetPort is the port on which the Pod is listening.
+ targetPort: 8765
+ # -- protocol specifies the network protocol that the Service should use for the associated port.
+ protocol: "TCP"
 # File access configuration (scenario requirement)
 mounts:
 # -- A list of volumes you want to add to the Falco pods.
@@ -152,27 +256,64 @@ falco:
 # -- Set it to false if you want to deploy Falco without the drivers.
 # Always set it to false when using Falco with plugins.
 enabled: true
- # -- Tell Falco which driver to use. Available options: module (kernel driver) and ebpf (eBPF probe).
- kind: module
+ # -- kind tells Falco which driver to use. Available options: kmod (kernel driver), ebpf (eBPF probe), modern_ebpf (modern eBPF probe).
+ kind: auto
+ # -- kmod holds the configuration for the kernel module.
+ kmod:
+ # -- bufSizePreset determines the size of the shared space between Falco and its drivers.
+ # This shared space serves as a temporary storage for syscall events.
+ bufSizePreset: 4
+ # -- dropFailedExit if set true drops failed system call exit events before pushing them to userspace.
+ dropFailedExit: false
 # -- Configuration section for ebpf driver.
 ebpf:
- # -- Path where the eBPF probe is located. It comes handy when the probe have been installed in the nodes using tools other than the init
+ # -- path where the eBPF probe is located. It comes in handy when the probe has been installed in the nodes using tools other than the init
 # container deployed with the chart.
- path:
+ path: "${HOME}/.falco/falco-bpf.o"
 # -- Needed to enable eBPF JIT at runtime for performance reasons.
 # Can be skipped if eBPF JIT is enabled from outside the container
 hostNetwork: false
 # -- Constrain Falco with capabilities instead of running a privileged container.
- # This option is only supported with the eBPF driver and a kernel >= 5.8.
 # Ensure the eBPF driver is enabled (i.e., setting the `driver.kind` option to `ebpf`).
+ # Capabilities used: {CAP_SYS_RESOURCE, CAP_SYS_ADMIN, CAP_SYS_PTRACE}.
+ # On kernel versions >= 5.8 'CAP_PERFMON' and 'CAP_BPF' could replace 'CAP_SYS_ADMIN' but please pay attention to the 'kernel.perf_event_paranoid' value on your system. + # Usually 'kernel.perf_event_paranoid>2' means that you cannot use 'CAP_PERFMON' and you should fallback to 'CAP_SYS_ADMIN', but the behavior changes across different distros. + # Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-1 + leastPrivileged: false + # -- bufSizePreset determines the size of the shared space between Falco and its drivers. + # This shared space serves as a temporary storage for syscall events. + bufSizePreset: 4 + # -- dropFailedExit if set true drops failed system call exit events before pushing them to userspace. + dropFailedExit: false + modernEbpf: + # -- Constrain Falco with capabilities instead of running a privileged container. + # Ensure the modern bpf driver is enabled (i.e., setting the `driver.kind` option to `modern-bpf`). + # Capabilities used: {CAP_SYS_RESOURCE, CAP_BPF, CAP_PERFMON, CAP_SYS_PTRACE}. + # Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-2 leastPrivileged: false + # -- bufSizePreset determines the size of the shared space between Falco and its drivers. + # This shared space serves as a temporary storage for syscall events. + bufSizePreset: 4 + # -- dropFailedExit if set true drops failed system call exit events before pushing them to userspace. + dropFailedExit: false + # -- cpusForEachBuffer is the index that controls how many CPUs to assign to a single syscall buffer. + cpusForEachBuffer: 2 + # -- Gvisor configuration. Based on your system you need to set the appropriate values. + # Please, remember to add pod tolerations and affinities in order to schedule the Falco pods in the gVisor enabled nodes. + gvisor: + # -- Runsc container runtime configuration. Falco needs to interact with it in order to intercept the activity of the sandboxed pods. 
+ runsc:
+ # -- Absolute path of the `runsc` binary in the k8s nodes.
+ path: /home/containerd/usr/local/sbin
+ # -- Absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco, since it is where `runsc` stores the information about the workloads it handles;
+ root: /run/containerd/runsc
+ # -- Absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence.
+ config: /run/containerd/runsc/config.toml
 # -- Configuration for the Falco init container.
 loader:
 # -- Enable/disable the init container.
 enabled: true
 initContainer:
- # -- Enable/disable the init container.
- enabled: true
 image:
 # -- The image pull policy.
 pullPolicy: IfNotPresent
@@ -181,14 +322,14 @@ falco:
 # -- The image repository to pull from.
 repository: falcosecurity/falco-driver-loader
 # -- Overrides the image tag whose default is the chart appVersion.
- tag: 0.32.2
+ tag: 0.38.2
 # -- Extra environment variables that will be pass onto Falco driver loader init container.
- env: {}
+ env: []
 # -- Arguments to pass to the Falco driver loader init container.
 args: []
 # -- Resources requests and limits for the Falco driver loader init container.
 resources: {}
- # -- Security context for the Falco driver loader init container. Overrides the default security context. If driver.mode == "module" you must at least set `privileged: true`.
+ # -- Security context for the Falco driver loader init container. Overrides the default security context. If driver.kind == "kmod" you must at least set `privileged: true`.
 securityContext: {}
 # Collectors for data enrichment (scenario requirement)
 collectors:
@@ -209,36 +350,36 @@ falco:
 enabled: true
 # -- The path of the CRI-O socket.
 socket: /run/crio/crio.sock
+ # -- kubernetes holds the configuration for the kubernetes collector. Starting from version 0.37.0 of Falco, the legacy
+ # kubernetes client has been removed.
A new standalone component named k8s-metacollector and a Falco plugin have been developed + # to solve the issues that were present in the old implementation. More info here: https://github.com/falcosecurity/falco/issues/2973 kubernetes: - # -- Enable Kubernetes meta data collection via a connection to the Kubernetes API server. - # When this option is disabled, Falco falls back to the container annotations to grap the meta data. + # -- enabled specifies whether the Kubernetes metadata should be collected using the k8smeta plugin and the k8s-metacollector component. + # It will deploy the k8s-metacollector external component that fetches Kubernetes metadata and pushes them to Falco instances. + # For more info see: + # https://github.com/falcosecurity/k8s-metacollector + # https://github.com/falcosecurity/charts/tree/master/charts/k8s-metacollector + # When this option is disabled, Falco falls back to the container annotations to grab the metadata. # In such a case, only the ID, name, namespace, labels of the pod will be available. - enabled: true - # -- The apiAuth value is to provide the authentication method Falco should use to connect to the Kubernetes API. - # The argument's documentation from Falco is provided here for reference: - # - # | :[:], --k8s-api-cert | :[:] - # Use the provided files names to authenticate user and (optionally) verify the K8S API server identity. - # Each entry must specify full (absolute, or relative to the current directory) path to the respective file. - # Private key password is optional (needed only if key is password protected). - # CA certificate is optional. For all files, only PEM file format is supported. - # Specifying CA certificate only is obsoleted - when single entry is provided - # for this option, it will be interpreted as the name of a file containing bearer token. - # Note that the format of this command-line option prohibits use of files whose names contain - # ':' or '#' characters in the file name. 
- # -- Provide the authentication method Falco should use to connect to the Kubernetes API.
- apiAuth: /var/run/secrets/kubernetes.io/serviceaccount/token
- ## -- Provide the URL Falco should use to connect to the Kubernetes API.
- apiUrl: "https://$(KUBERNETES_SERVICE_HOST)"
- # -- If true, only the current node (on which Falco is running) will be considered when requesting metadata of pods
- # to the API server. Disabling this option may have a performance penalty on large clusters.
- enableNodeFilter: true
+ enabled: false
+ # -- pluginRef is the OCI reference for the k8smeta plugin. It could be a full reference such as:
+ # "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0". Or just name + tag: k8smeta:0.2.0.
+ pluginRef: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.0"
+ # -- collectorHostname is the address of the k8s-metacollector. When not specified it will be set to match
+ # k8s-metacollector service. e.g. falco-k8smetacollector.falco.svc. If for any reason you need to override
+ # it, make sure to set here the address of the k8s-metacollector.
+ # It is used by the k8smeta plugin to connect to the k8s-metacollector.
+ collectorHostname: ""
+ # -- collectorPort designates the port on which the k8s-metacollector gRPC service listens. If not specified
+ # the value of the port named `broker-grpc` in k8s-metacollector.service.ports is used. The default value is 45000.
+ # It is used by the k8smeta plugin to connect to the k8s-metacollector.
+ collectorPort: ""
 ###########################
 # Extras and customization #
 ############################
 extra:
 # -- Extra environment variables that will be pass onto Falco containers.
- env: {}
+ env: []
 # -- Extra command-line arguments.
 args: []
 # -- Additional initContainers for Falco pods.
@@ -257,6 +398,12 @@ falco:
 ca:
 # -- CA certificate used by gRPC, webserver and AuditSink validation.
 crt: ""
+ existingClientSecret: ""
+ client:
+ # -- Key used by http mTLS client.
+ key: ""
+ # -- Certificate used by http mTLS client.
+ crt: ""
 # -- Third party rules enabled for Falco. More info on the dedicated section in README.md file.
 customRules: {}
 # Although Falco comes with a nice default rule set for detecting weird
@@ -273,7 +420,7 @@ falco:
 # Falco integrations #
 ########################
- # -- For configuration values, see https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml
+ # -- For configuration values, see https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml
 falcosidekick:
 # -- Enable falcosidekick deployment.
 enabled: false
@@ -284,40 +431,281 @@ falco:
 image:
 registry: docker.m.daocloud.io
 repository: falcosecurity/falcosidekick
- tag: 2.26.0
+ tag: 2.29.0
+ ####################
+ # falcoctl config #
+ ####################
+ falcoctl:
+ image:
+ # -- The image pull policy.
+ pullPolicy: IfNotPresent
+ # -- The image registry to pull from.
+ registry: docker.io
+ # -- The image repository to pull from.
+ repository: falcosecurity/falcoctl
+ # -- The image tag to pull.
+ tag: 0.38.2
+ artifact:
+ # -- Runs "falcoctl artifact install" command as an init container. It is used to install artifacts before
+ # Falco starts. It provides them to Falco by using an emptyDir volume.
+ install:
+ enabled: true
+ # -- Extra environment variables that will be passed onto falcoctl-artifact-install init container.
+ env: []
+ # -- Arguments to pass to the falcoctl-artifact-install init container.
+ args: ["--log-format=json"]
+ # -- Resources requests and limits for the falcoctl-artifact-install init container.
+ resources: {}
+ # -- Security context for the falcoctl init container.
+ securityContext: {}
+ # -- A list of volume mounts you want to add to the falcoctl-artifact-install init container.
+ mounts:
+ volumeMounts: []
+ # -- Runs "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for
+ # updates given a list of artifacts.
If an update is found it downloads and installs it in a shared folder (emptyDir)
+ # that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the
+ # correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks if it is compatible
+ # with the running version of Falco before installing it.
+ follow:
+ enabled: true
+ # -- Extra environment variables that will be passed onto falcoctl-artifact-follow sidecar container.
+ env: []
+ # -- Arguments to pass to the falcoctl-artifact-follow sidecar container.
+ args: ["--log-format=json"]
+ # -- Resources requests and limits for the falcoctl-artifact-follow sidecar container.
+ resources: {}
+ # -- Security context for the falcoctl-artifact-follow sidecar container.
+ securityContext: {}
+ # -- A list of volume mounts you want to add to the falcoctl-artifact-follow sidecar container.
+ mounts:
+ volumeMounts: []
+ # -- Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers.
+ config:
+ # -- List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see:
+ # https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview
+ indexes:
+ - name: falcosecurity
+ url: https://falcosecurity.github.io/falcoctl/index.yaml
+ # -- Configuration used by the artifact commands.
+ artifact:
+ # -- List of artifact types that falcoctl will handle. If the configured refs resolve to an artifact whose type is not contained
+ # in the list it will refuse to download and install that artifact.
+ allowedTypes:
+ - rulesfile
+ - plugin
+ install:
+ # -- Resolve the dependencies for artifacts.
+ resolveDeps: true
+ # -- List of artifacts to be installed by the falcoctl init container.
+ refs: ['falco-rules:3']
+ # -- Directory where the rulesfiles are saved.
The path is relative to the container, which in this case is an emptyDir + # mounted also by the Falco pod. + rulesfilesDir: /rulesfiles + # -- Same as the one above but for the artifacts. + pluginsDir: /plugins + follow: + # -- List of artifacts to be followed by the falcoctl sidecar container. + refs: ['falco-rules:3'] + # -- How often the tool checks for new versions of the followed artifacts. + every: 6h + # -- HTTP endpoint that serves the api versions of the Falco instance. It is used to check if the new versions are compatible + # with the running Falco instance. + falcoversions: http://localhost:8765/versions + # -- See the fields of the artifact.install section. + rulesfilesDir: /rulesfiles + # -- See the fields of the artifact.install section. + pluginsDir: /plugins + # -- serviceMonitor holds the configuration for the ServiceMonitor CRD. + # A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should + # discover and scrape metrics from the Falco service. + serviceMonitor: + # -- create specifies whether a ServiceMonitor CRD should be created for a prometheus operator. + # https://github.com/coreos/prometheus-operator + # Enable it only if the ServiceMonitor CRD is installed in your cluster. + create: false + # -- path at which the metrics are exposed by Falco. + path: /metrics + # -- labels set of labels to be applied to the ServiceMonitor resource. + # If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right + # label here in order for the ServiceMonitor to be selected for target discovery. + labels: {} + # -- selector set of labels that should match the labels on the Service targeted by the current serviceMonitor. + selector: {} + # -- interval specifies the time interval at which Prometheus should scrape metrics from the service. + interval: 15s + # -- scheme specifies network protocol used by the metrics endpoint. In this case HTTP. 
+ scheme: http + # -- tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when + # scraping metrics from a service. It allows you to define the details of the TLS connection, such as + # CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support + # TLS configuration for the metrics endpoint. + tlsConfig: {} + # insecureSkipVerify: false + # caFile: /path/to/ca.crt + # certFile: /path/to/client.crt + # keyFile: /path/to/client.key + # -- scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. + # If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for + # that target. + scrapeTimeout: 10s + # -- relabelings configures the relabeling rules to apply the target’s metadata labels. + relabelings: [] + # -- targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics. + targetLabels: [] + # -- endpointPort is the port in the Falco service that exposes the metrics service. Change the value if you deploy a custom service + # for Falco's metrics. + endpointPort: "metrics" ###################### # falco.yaml config # ###################### falco: - # File(s) or Directories containing Falco rules, loaded at startup. - # The name "rules_file" is only for backwards compatibility. - # If the entry is a file, it will be read directly. If the entry is a directory, - # every file in that directory will be read, in alphabetical order. - # - # falco_rules.yaml ships with the falco package and is overridden with - # every new software version. falco_rules.local.yaml is only created - # if it doesn't exist. If you want to customize the set of rules, add - # your customizations to falco_rules.local.yaml. - # - # The files will be read in the order presented here, so make sure if - # you have overrides they appear in later files. 
+ ##################### + # Falco rules files # + ##################### + + # [Stable] `rules_file` + # + # Falco rules can be specified using files or directories, which are loaded at + # startup. The name "rules_file" is maintained for backwards compatibility. If + # the entry is a file, it will be read directly. If the entry is a directory, + # all files within that directory will be read in alphabetical order. + # + # The falco_rules.yaml file ships with the Falco package and is overridden with + # every new software version. falco_rules.local.yaml is only created if it + # doesn't already exist. + # + # To customize the set of rules, you can add your modifications to any file. + # It's important to note that the files or directories are read in the order + # specified here. In addition, rules are loaded by Falco in the order they + # appear within each rule file. + # + # If you have any customizations intended to override a previous configuration, + # make sure they appear in later files to take precedence. On the other hand, if + # the conditions of rules with the same event type(s) have the potential to + # overshadow each other, ensure that the more important rule appears first. This + # is because rules are evaluated on a "first match wins" basis, where the first + # rule that matches the conditions will be applied, and subsequent rules will + # not be evaluated for the same event type. + # + # By arranging the order of files and rules thoughtfully, you can ensure that + # desired customizations and rule behaviors are prioritized and applied as + # intended. # -- The location of the rules files that will be consumed by Falco. - rules_file: + rules_files: - /etc/falco/falco_rules.yaml - /etc/falco/falco_rules.local.yaml - /etc/falco/rules.d + # [Incubating] `rules` + # + # --- [Description] # - # Plugins that are available for use. These plugins are not loaded by - # default, as they require explicit configuration to point to - # cloudtrail log files. 
+ # Falco rules can be enabled or disabled by name (with wildcards *) and/or by tag. + # + # This configuration is applied after all rules files have been loaded, including + # their overrides, and will take precedence over the enabled/disabled configuration + # specified or overridden in the rules files. + # + # The ordering matters and selections are evaluated in order. For instance, if you + # need to only enable a rule you would first disable all of them and then only + # enable what you need, regardless of the enabled status in the files. + # + # --- [Examples] + # + # Only enable two rules: + # + # rules: + # - disable: + # rule: "*" + # - enable: + # rule: Netcat Remote Code Execution in Container + # - enable: + # rule: Delete or rename shell history + # + # Disable all rules with a specific tag: + # + # rules: + # - disable: + # tag: network # - # To learn more about the supported formats for - # init_config/open_params for the cloudtrail plugin, see the README at - # https://github.com/falcosecurity/plugins/blob/master/plugins/cloudtrail/README.md. - # -- Plugins configuration. Add here all plugins and their configuration. Please + # [Incubating] `rule_matching` + # + # - Falco has to be performant when evaluating rules against events. To quickly + # understand which rules could trigger on a specific event, Falco maintains + # buckets of rules sharing the same event type in a map. Then, the lookup + # in each bucket is performed through linear search. The `rule_matching` + # configuration key's values are: + # - "first": when evaluating conditions of rules in a bucket, Falco will stop + # evaluating rules once it finds a matching rule. Since rules are stored + # in buckets in the order they are defined in the rules files, this option + # could prevent other rules from triggering even if their condition is met, causing + # a shadowing problem.
+ # - "all": with this value Falco will continue evaluating all the rules + # stored in the bucket, so that multiple rules could be triggered upon one + # event. + rule_matching: first + # [Incubating] `outputs_queue` + # + # -- Falco utilizes tbb::concurrent_bounded_queue for handling outputs, and this parameter + # allows you to customize the queue capacity. Please refer to the official documentation: + # https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Concurrent_Queue_Classes.html. + # On a healthy system with optimized Falco rules, the queue should not fill up. + # If it does, it is most likely happening due to the entire event flow being too slow, + # indicating that the server is under heavy load. + # + # `capacity`: the maximum number of items allowed in the queue is determined by this value. + # Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded. + # In other words, when this configuration is set to 0, the number of allowed items is + # effectively set to the largest possible long value, disabling this setting. + # + # In the case of an unbounded queue, if the available memory on the system is consumed, + # the Falco process would be OOM killed. When using this option and setting the capacity, + # the current event would be dropped, and the event loop would continue. This behavior mirrors + # kernel-side event drops when the buffer between kernel space and user space is full. + outputs_queue: + capacity: 0 + ################# + # Falco plugins # + ################# + + # [Stable] `load_plugins` and `plugins` + # + # --- [Description] + # + # Falco plugins enable integration with other services in the your ecosystem. + # They allow Falco to extend its functionality and leverage data sources such as + # Kubernetes audit logs or AWS CloudTrail logs. This enables Falco to perform + # fast on-host detections beyond syscalls and container events. 
The plugin + # system will continue to evolve with more specialized functionality in future + # releases. + # + # Please refer to the plugins repo at + # https://github.com/falcosecurity/plugins/blob/master/plugins/ for detailed + # documentation on the available plugins. This repository provides comprehensive + # information about each plugin and how to utilize them with Falco. + # + # Please note that if your intention is to enrich Falco syscall logs with fields + # such as `k8s.ns.name`, `k8s.pod.name`, and `k8s.pod.*`, you do not need to use + # the `k8saudit` plugin. This information is automatically extracted from the + # container runtime socket. The `k8saudit` plugin is specifically designed to + # integrate with Kubernetes audit logs and is not required for basic enrichment + # of syscall logs with Kubernetes-related fields. + # + # --- [Usage] + # + # Disabled by default, indicated by an empty `load_plugins` list. Each plugin meant + # to be enabled needs to be listed as an explicit list item. + # + # For example, if you want to use the `k8saudit` plugin, + # ensure it is configured appropriately and then change this to: + # load_plugins: [k8saudit, json] + # -- Add here all plugins and their configuration. Please # consult the plugins documentation for more info. Remember to add the plugins name in # "load_plugins: []" in order to load them in Falco. + load_plugins: [] + # -- Customize subsettings for each enabled plugin. These settings will only be + # applied when the corresponding plugin is enabled using the `load_plugins` + # option. plugins: - name: k8saudit library_path: libk8saudit.so @@ -333,189 +721,168 @@ falco: - name: json library_path: libjson.so init_config: "" - # Setting this list to empty ensures that the above plugins are *not* - # loaded and enabled by default.
If you want to use the above plugins, - # set a meaningful init_config/open_params for the cloudtrail plugin - # and then change this to: - # load_plugins: [cloudtrail, json] - # -- Add here the names of the plugins that you want to be loaded by Falco. Please make sure that - # plugins have ben configured under the "plugins" section before adding them here. - load_plugins: [] + ###################### + # Falco config files # + ###################### + + # [Stable] `config_files` + # + # Falco will load additional config files specified here. + # Their loading is assumed to be made *after* the main config file has been processed, + # exactly in the order they are specified. + # Therefore, loaded config files *can* override values from the main config file. + # Also, nested include is not allowed, i.e. included config files won't be able to include other config files. + # + # Like for 'rules_files', specifying a folder will load all the config files present in it in lexicographical order. + config_files: + - /etc/falco/config.d + # [Stable] `watch_config_files` + # + # Falco monitors configuration and rule files for changes and automatically + # reloads itself to apply the updated configuration when any modifications are + # detected. This feature is particularly useful when you want to make real-time + # changes to the configuration or rules of Falco without interrupting its + # operation or losing its state. For more information about Falco's state + # engine, please refer to the `base_syscalls` section. # -- Watch config file and rules files for modification. # When a file is modified, Falco will propagate new config, # by reloading itself. watch_config_files: true - # -- If true, the times displayed in log messages and output messages - # will be in ISO 8601. By default, times are displayed in the local - # time zone, as governed by /etc/localtime.
+ ########################## + # Falco outputs settings # + ########################## + + # [Stable] `time_format_iso_8601` + # + # -- When enabled, Falco will display log and output messages with times in the ISO + # 8601 format. By default, times are shown in the local time zone determined by + # the /etc/localtime configuration. time_format_iso_8601: false - # -- Whether to output events in json or text. - json_output: true - # -- When using json output, whether or not to include the "output" property - # itself (e.g. "File below a known binary directory opened for writing - # (user=root ....") in the json output. - json_include_output_property: true - # -- When using json output, whether or not to include the "tags" property - # itself in the json output. If set to true, outputs caused by rules - # with no tags will have a "tags" field set to an empty array. If set to - # false, the "tags" field will not be included in the json output at all. - json_include_tags_property: true - # -- Send information logs to syslog. Note these are *not* security - # notification logs! These are just Falco lifecycle (and possibly error) logs. - log_stderr: true - # -- Send information logs to stderr. Note these are *not* security - # notification logs! These are just Falco lifecycle (and possibly error) logs. - log_syslog: true - # -- Minimum log level to include in logs. Note: these levels are - # separate from the priority field of rules. This refers only to the - # log level of falco's internal logging. Can be one of "emergency", - # "alert", "critical", "error", "warning", "notice", "info", "debug". - log_level: info - # Falco is capable of managing the logs coming from libs. If enabled, - # the libs logger send its log records the same outputs supported by - # Falco (stderr and syslog). Disabled by default. - libs_logger: - # -- Enable the libs logger. - enabled: false - # -- Minimum log severity to include in the libs logs. 
Note: this value is - # separate from the log level of the Falco logger and does not affect it. - # Can be one of "fatal", "critical", "error", "warning", "notice", - # "info", "debug", "trace". - severity: debug - # -- Minimum rule priority level to load and run. All rules having a - # priority more severe than this level will be loaded/run. Can be one - # of "emergency", "alert", "critical", "error", "warning", "notice", - # "informational", "debug". + # [Stable] `priority` + # + # -- Any rule with a priority level more severe than or equal to the specified + # minimum level will be loaded and run by Falco. This allows you to filter and + # control the rules based on their severity, ensuring that only rules of a + # certain priority or higher are active and evaluated by Falco. Supported + # levels: "emergency", "alert", "critical", "error", "warning", "notice", + # "info", "debug" priority: debug - # -- Whether or not output to any of the output channels below is - # buffered. Defaults to false - buffered_outputs: false - # Falco uses a shared buffer between the kernel and userspace to pass - # system call information. When Falco detects that this buffer is - # full and system calls have been dropped, it can take one or more of - # the following actions: - # - ignore: do nothing (default when list of actions is empty) - # - log: log a DEBUG message noting that the buffer was full - # - alert: emit a Falco alert noting that the buffer was full - # - exit: exit Falco with a non-zero rc + # [Stable] `json_output` # - # Notice it is not possible to ignore and log/alert messages at the same time. + # -- When enabled, Falco will output alert messages and rules file + # loading/validation results in JSON format, making it easier for downstream + # programs to process and consume the data. By default, this option is disabled. 
+ json_output: true + # [Stable] `json_include_output_property` # - # The rate at which log/alert messages are emitted is governed by a - # token bucket. The rate corresponds to one message every 30 seconds - # with a burst of one message (by default). + # -- When using JSON output in Falco, you have the option to include the "output" + # property itself in the generated JSON output. The "output" property provides + # additional information about the purpose of the rule. To reduce the logging + # volume, it is recommended to turn it off if it's not necessary for your use + # case. + json_include_output_property: true + # [Stable] `json_include_tags_property` # - # The messages are emitted when the percentage of dropped system calls - # with respect the number of events in the last second - # is greater than the given threshold (a double in the range [0, 1]). + # -- When using JSON output in Falco, you have the option to include the "tags" + # field of the rules in the generated JSON output. The "tags" field provides + # additional metadata associated with the rule. To reduce the logging volume, + # if the tags associated with the rule are not needed for your use case or can + # be added at a later stage, it is recommended to turn it off. + json_include_tags_property: true + # [Stable] `buffered_outputs` # - # For debugging/testing it is possible to simulate the drops using - # the `simulate_drops: true`. In this case the threshold does not apply. - syscall_event_drops: - # -- The messages are emitted when the percentage of dropped system calls - # with respect the number of events in the last second - # is greater than the given threshold (a double in the range [0, 1]). - threshold: .1 - # -- Actions to be taken when system calls were dropped from the circular buffer. - actions: - - log - - alert - # -- Rate at which log/alert messages are emitted. - rate: .03333 - # -- Max burst of messages emitted. 
- max_burst: 1 - # Falco uses a shared buffer between the kernel and userspace to receive - # the events (eg., system call information) in userspace. + # -- Enabling buffering for the output queue can offer performance optimization, + # efficient resource usage, and smoother data flow, resulting in a more reliable + # output mechanism. By default, buffering is disabled (false). + buffered_outputs: false + # [Stable] `outputs` # - # Anyways, the underlying libraries can also timeout for various reasons. - # For example, there could have been issues while reading an event. - # Or the particular event needs to be skipped. - # Normally, it's very unlikely that Falco does not receive events consecutively. + # -- A throttling mechanism, implemented as a token bucket, can be used to control + # the rate of Falco outputs. Each event source has its own rate limiter, + # ensuring that alerts from one source do not affect the throttling of others. + # The following options control the mechanism: + # - rate: the number of tokens (i.e. right to send a notification) gained per + # second. When 0, the throttling mechanism is disabled. Defaults to 0. + # - max_burst: the maximum number of tokens outstanding. Defaults to 1000. # - # Falco is able to detect such uncommon situation. + # For example, setting the rate to 1 allows Falco to send up to 1000 + # notifications initially, followed by 1 notification per second. The burst + # capacity is fully restored after 1000 seconds of no activity. # - # Here you can configure the maximum number of consecutive timeouts without an event - # after which you want Falco to alert. - # By default this value is set to 1000 consecutive timeouts without an event at all. - # How this value maps to a time interval depends on the CPU frequency. - syscall_event_timeouts: - # -- Maximum number of consecutive timeouts without an event - # after which you want Falco to alert. - max_consecutives: 1000 - # Falco continuously monitors outputs performance. 
When an output channel does not allow - # to deliver an alert within a given deadline, an error is reported indicating - # which output is blocking notifications. - # The timeout error will be reported to the log according to the above log_* settings. - # Note that the notification will not be discarded from the output queue; thus, - # output channels may indefinitely remain blocked. - # An output timeout error indeed indicate a misconfiguration issue or I/O problems - # that cannot be recovered by Falco and should be fixed by the user. - # - # The "output_timeout" value specifies the duration in milliseconds to wait before - # considering the deadline exceed. - # - # With a 2000ms default, the notification consumer can block the Falco output - # for up to 2 seconds without reaching the timeout. - # -- Duration in milliseconds to wait before considering the output timeout deadline exceed. - output_timeout: 2000 - # A throttling mechanism implemented as a token bucket limits the - # rate of falco notifications. This throttling is controlled by the following configuration - # options: - # - rate: the number of tokens (i.e. right to send a notification) - # gained per second. Defaults to 1. - # - max_burst: the maximum number of tokens outstanding. Defaults to 1000. + # Throttling can be useful in various scenarios, such as preventing notification + # floods, managing system load, controlling event processing, or complying with + # rate limits imposed by external systems or APIs. It allows for better resource + # utilization, avoids overwhelming downstream systems, and helps maintain a + # balanced and controlled flow of notifications. # - # With these defaults, falco could send up to 1000 notifications after - # an initial quiet period, and then up to 1 notification per second - # afterward. It would gain the full burst back after 1000 seconds of - # no activity. + # With the default settings, the throttling mechanism is disabled. 
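The token-bucket throttling described above (`rate` tokens gained per second, at most `max_burst` tokens outstanding, one token spent per notification) can be sketched as follows. This is a simplified model of the semantics the comment explains, not Falco's implementation; the class and method names are hypothetical:

```python
class TokenBucket:
    """Sketch of token-bucket throttling: `rate` tokens gained per
    second, at most `max_burst` tokens outstanding at any time."""

    def __init__(self, rate: float, max_burst: int):
        self.rate = rate
        self.max_burst = max_burst
        self.tokens = float(max_burst)  # start with a full burst available
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at max_burst.
        self.tokens = min(self.max_burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # spend one token to send this notification
            return True
        return False  # throttled

# With rate=1 and max_burst=1000, an initial burst of 1000 notifications
# passes and the 1001st is throttled, matching the example in the text.
bucket = TokenBucket(rate=1.0, max_burst=1000)
sent = sum(bucket.allow(now=0.0) for _ in range(1001))
print(sent)  # 1000
```

After one second of quiet, one token has been regained and a single further notification would pass; the full burst returns only after 1000 idle seconds. With the chart's default `rate: 0`, no tokens are ever regained, which is how the mechanism is disabled.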
outputs: - # -- Number of tokens gained per second. - rate: 1 - # -- Maximum number of tokens outstanding. + rate: 0 max_burst: 1000 - # Where security notifications should go. - # Multiple outputs can be enabled. + ########################## + # Falco outputs channels # + ########################## + + # Falco supports various output channels, such as syslog, stdout, file, gRPC, + # webhook, and more. You can enable or disable these channels as needed to + # control where Falco alerts and log messages are directed. This flexibility + # allows seamless integration with your preferred logging and alerting systems. + # Multiple outputs can be enabled simultaneously. + + # [Stable] `stdout_output` + # + # -- Redirect logs to standard output. + stdout_output: + enabled: true + # [Stable] `syslog_output` + # + # -- Send logs to syslog. syslog_output: - # -- Enable syslog output for security notifications. enabled: true - # If keep_alive is set to true, the file will be opened once and - # continuously written to, with each output message on its own - # line. If keep_alive is set to false, the file will be re-opened - # for each output message. + # [Stable] `file_output` # - # Also, the file will be closed and reopened if falco is signaled with - # SIGUSR1. + # -- When appending Falco alerts to a file, each new alert will be added to a new + # line. It's important to note that Falco does not perform log rotation for this + # file. If the `keep_alive` option is set to `true`, the file will be opened once + # and continuously written to, else the file will be reopened for each output + # message. Furthermore, the file will be closed and reopened if Falco receives + # the SIGUSR1 signal. file_output: - # -- Enable file output for security notifications. enabled: false - # -- Open file once or every time a new notification arrives. keep_alive: false - # -- The filename for logging notifications. 
filename: ./events.txt - stdout_output: - # -- Enable stdout output for security notifications. - enabled: true - # Falco contains an embedded webserver that exposes a healthy endpoint that can be used to check if Falco is up and running. - # By default the endpoint is /healthz + # [Stable] `http_output` + # + # -- Send logs to an HTTP endpoint or webhook. + http_output: + enabled: false + url: "" + user_agent: "falcosecurity/falco" + # -- Tell Falco to not verify the remote server. + insecure: false + # -- Path to the CA certificate that can verify the remote server. + ca_cert: "" + # -- Path to a specific file that will be used as the CA certificate store. + ca_bundle: "" + # -- Path to a folder that will be used as the CA certificate store. CA certificates need to be + # stored as individual PEM files in this directory. + ca_path: "/etc/falco/certs/" + # -- Tell Falco to use mTLS + mtls: false + # -- Path to the client cert. + client_cert: "/etc/falco/certs/client/client.crt" + # -- Path to the client key. + client_key: "/etc/falco/certs/client/client.key" + # -- Whether to echo server answers to stdout + echo: false + # -- compress_uploads whether to compress data sent to the HTTP endpoint. + compress_uploads: false + # -- keep_alive whether to keep the connection alive. + keep_alive: false + # [Stable] `program_output` + # - # The ssl_certificate is a combination SSL Certificate and corresponding - # key contained in a single file. You can generate a key/cert as follows: + # -- Redirect the output to another program or command. # - # $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem - # $ cat certificate.pem key.pem > falco.pem - # $ sudo cp falco.pem /etc/falco/falco.pem - webserver: - # -- Enable Falco embedded webserver. - enabled: true - # -- Port where Falco embedded webserver listen to connections. - listen_port: 8765 - # -- Endpoint where Falco exposes the health status.
- k8s_healthz_endpoint: /healthz - # -- Enable SSL on Falco embedded webserver. - ssl_enabled: false - # -- Certificate bundle path for the Falco embedded webserver. - ssl_certificate: /etc/falco/falco.pem # Possible additional things you might want to do with program output: # - send to a slack webhook: # program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" @@ -523,69 +890,574 @@ falco: # program: logger -t falco-test # - send over a network connection: # program: nc host.example.com 80 - - # If keep_alive is set to true, the program will be started once and - # continuously written to, with each output message on its own - # line. If keep_alive is set to false, the program will be re-spawned - # for each output message. - # - # Also, the program will be closed and reopened if falco is signaled with - # SIGUSR1. + # If `keep_alive` is set to `true`, the program will be started once and + # continuously written to, with each output message on its own line. If + # `keep_alive` is set to `false`, the program will be re-spawned for each output + # message. Furthermore, the program will be re-spawned if Falco receives + # the SIGUSR1 signal. program_output: - # -- Enable program output for security notifications. enabled: false - # -- Start the program once or re-spawn when a notification arrives. keep_alive: false - # -- Command to execute for program output. program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" - http_output: - # -- Enable http output for security notifications. + # [Stable] `grpc_output` + # + # -- Use gRPC as an output service. + # + # gRPC is a modern and high-performance framework for remote procedure calls + # (RPC). It utilizes protocol buffers for efficient data serialization. The gRPC + # output in Falco provides a modern and efficient way to integrate with other + # systems. By default the setting is turned off. 
Enabling this option stores + # output events in memory until they are consumed by a gRPC client. Ensure that + # you have a consumer for the output events or leave it disabled. + grpc_output: enabled: false - # -- When set, this will override an auto-generated URL which matches the falcosidekick Service. - # -- When including Falco inside a parent helm chart, you must set this since the auto-generated URL won't match (#280). - url: "" - user_agent: "falcosecurity/falco" - # Falco supports running a gRPC server with two main binding types - # 1. Over the network with mandatory mutual TLS authentication (mTLS) - # 2. Over a local unix socket with no authentication - # By default, the gRPC server is disabled, with no enabled services (see grpc_output) - # please comment/uncomment and change accordingly the options below to configure it. - # Important note: if Falco has any troubles creating the gRPC server - # this information will be logged, however the main Falco daemon will not be stopped. - # gRPC server over network with (mandatory) mutual TLS configuration. - # This gRPC server is secure by default so you need to generate certificates and update their paths here. - # By default the gRPC server is off. - # You can configure the address to bind and expose it. - # By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use. + ########################## + # Falco exposed services # + ########################## + + # [Stable] `grpc` + # + # Falco provides support for running a gRPC server using two main binding types: + # 1. Over the network with mandatory mutual TLS authentication (mTLS), which + # ensures secure communication + # 2. Local Unix socket binding with no authentication. By default, the + # gRPC server in Falco is turned off with no enabled services (see + # `grpc_output` setting).
+ # + # To configure the gRPC server in Falco, you can make the following changes to + # the options: + # + # - Uncomment the relevant configuration options related to the gRPC server. + # - Update the paths of the generated certificates for mutual TLS authentication + # if you choose to use mTLS. + # - Specify the address to bind and expose the gRPC server. + # - Adjust the threadiness configuration to control the number of threads and + # contexts used by the server. + # + # Keep in mind that if any issues arise while creating the gRPC server, the + # information will be logged, but it will not stop the main Falco daemon. + + # gRPC server using mTLS # grpc: # enabled: true # bind_address: "0.0.0.0:5060" - # # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores + # # When the `threadiness` value is set to 0, Falco will automatically determine + # # the appropriate number of threads based on the number of online cores in the system. # threadiness: 0 # private_key: "/etc/falco/certs/server.key" # cert_chain: "/etc/falco/certs/server.crt" # root_certs: "/etc/falco/certs/ca.crt" - # -- gRPC server using an unix socket + # -- gRPC server using a local unix socket grpc: - # -- Enable the Falco gRPC server. enabled: false - # -- Bind address for the grpc server. - bind_address: "unix:///var/run/falco/falco.sock" - # -- Number of threads (and context) the gRPC server will use, 0 by default, which means "auto". + bind_address: "unix:///run/falco/falco.sock" + # -- When the `threadiness` value is set to 0, Falco will automatically determine + # the appropriate number of threads based on the number of online cores in the system. threadiness: 0 - # gRPC output service. - # By default it is off. - # By enabling this all the output events will be kept in memory until you read them with a gRPC client. - # Make sure to have a consumer for them or leave this disabled. 
- grpc_output: - # -- Enable the gRPC output and events will be kept in memory until you read them with a gRPC client. + # [Stable] `webserver` + # + # -- Falco supports an embedded webserver that runs within the Falco process, + # providing a lightweight and efficient way to expose web-based functionalities + # without the need for an external web server. The following endpoints are + # exposed: + # - /healthz: designed to be used for checking the health and availability of + # the Falco application (the name of the endpoint is configurable). + # - /versions: responds with a JSON object containing the version numbers of the + # internal Falco components (similar output to `falco --version -o + # json_output=true`). + # + # Please note that the /versions endpoint is particularly useful for other Falco + # services, such as `falcoctl`, to retrieve information about a running Falco + # instance. If you plan to use `falcoctl` locally or with Kubernetes, make sure + # the Falco webserver is enabled. + # + # The behavior of the webserver can be controlled with the following options, + # which are enabled by default: + # + # The `ssl_certificate` option specifies a combined SSL certificate and + # corresponding key that are contained in a single file. You can generate a + # key/cert as follows: + # + # $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem + # $ cat certificate.pem key.pem > falco.pem + # $ sudo cp falco.pem /etc/falco/falco.pem + webserver: + enabled: true + # When the `threadiness` value is set to 0, Falco will automatically determine + # the appropriate number of threads based on the number of online cores in the system. + threadiness: 0 + listen_port: 8765 + k8s_healthz_endpoint: /healthz + # [Incubating] `prometheus_metrics_enabled` + # + # Enable the metrics endpoint providing Prometheus values. + # It will only have an effect if metrics.enabled is set to true as well.
+ prometheus_metrics_enabled: false + ssl_enabled: false + ssl_certificate: /etc/falco/falco.pem + ############################################################################## + # Falco logging / alerting / metrics related to software functioning (basic) # + ############################################################################## + + # [Stable] `log_stderr` and `log_syslog` + # + # Falco's logs related to the functioning of the software, which are not related + # to Falco alert outputs but rather its lifecycle, settings and potential + # errors, can be directed to stderr and/or syslog. + # -- Send information logs to stderr. Note these are *not* security + # notification logs! These are just Falco lifecycle (and possibly error) logs. + log_stderr: true + # -- Send information logs to syslog. Note these are *not* security + # notification logs! These are just Falco lifecycle (and possibly error) logs. + log_syslog: true + # [Stable] `log_level` + # + # -- The `log_level` setting determines the minimum log level to include in Falco's + # logs related to the functioning of the software. This setting is separate from + # the `priority` field of rules and specifically controls the log level of + # Falco's operational logging. By specifying a log level, you can control the + # verbosity of Falco's operational logs. Only logs of a certain severity level + # or higher will be emitted. Supported levels: "emergency", "alert", "critical", + # "error", "warning", "notice", "info", "debug". + log_level: info + # [Stable] `libs_logger` + # + # -- The `libs_logger` setting in Falco determines the minimum log level to include + # in the logs related to the functioning of the software of the underlying + # `libs` library, which Falco utilizes. This setting is independent of the + # `priority` field of rules and the `log_level` setting that controls Falco's + # operational logs. 
It allows you to specify the desired log level for the `libs` + # library specifically, providing more granular control over the logging + # behavior of the underlying components used by Falco. Only logs of a certain + # severity level or higher will be emitted. Supported levels: "emergency", + # "alert", "critical", "error", "warning", "notice", "info", "debug". It is not + # recommended for production use. + libs_logger: enabled: false - # Container orchestrator metadata fetching params - metadata_download: - # -- Max allowed response size (in Mb) when fetching metadata from Kubernetes. - max_mb: 100 - # -- Sleep time (in μs) for each download chunck when fetching metadata from Kubernetes. - chunk_wait_us: 1000 - # -- Watch frequency (in seconds) when fetching metadata from Kubernetes. - watch_freq_sec: 1 + severity: debug + ################################################################################# + # Falco logging / alerting / metrics related to software functioning (advanced) # + ################################################################################# + + # [Stable] `output_timeout` + # + # Generates Falco operational logs when `log_level=notice` at minimum + # + # A timeout error occurs when a process or operation takes longer to complete + # than the allowed or expected time limit. In the context of Falco, an output + # timeout error refers to the situation where an output channel fails to deliver + # an alert within a specified deadline. Various reasons, such as network issues, + # resource constraints, or performance bottlenecks can cause timeouts. + # + # -- The `output_timeout` parameter specifies the duration, in milliseconds, to + # wait before considering the deadline exceeded. By default, the timeout is set + # to 2000ms (2 seconds), meaning that the consumer of Falco outputs can block + # the Falco output channel for up to 2 seconds without triggering a timeout + # error. 
+ # + # Falco actively monitors the performance of output channels. With this setting + # the timeout error can be logged, but please note that this requires setting + # Falco's operational logs `log_level` to a minimum of `notice`. + # + # It's important to note that Falco outputs will not be discarded from the + # output queue. This means that if an output channel becomes blocked + # indefinitely, it indicates a potential issue that needs to be addressed by the + # user. + output_timeout: 2000 + # [Stable] `syscall_event_timeouts` + # + # -- Generates Falco operational logs when `log_level=notice` at minimum + # + # Falco utilizes a shared buffer between the kernel and userspace to receive + # events, such as system call information, in userspace. However, there may be + # cases where timeouts occur in the underlying libraries due to issues in + # reading events or the need to skip a particular event. While it is uncommon + # for Falco to experience consecutive event timeouts, it has the capability to + # detect such situations. You can configure the maximum number of consecutive + # timeouts without an event after which Falco will generate an alert, but please + # note that this requires setting Falco's operational logs `log_level` to a + # minimum of `notice`. The default value is set to 1000 consecutive timeouts + # without receiving any events. The mapping of this value to a time interval + # depends on the CPU frequency. + syscall_event_timeouts: + max_consecutives: 1000 + # [Stable] `syscall_event_drops` + # + # Generates "Falco internal: syscall event drop" rule output when `priority=debug` at minimum + # + # --- [Description] + # + # Falco uses a shared buffer between the kernel and userspace to pass system + # call information. 
When Falco detects that this buffer is full and system calls + # have been dropped, it can take one or more of the following actions: + # - ignore: do nothing (default when list of actions is empty) + # - log: log a DEBUG message noting that the buffer was full + # - alert: emit a Falco alert noting that the buffer was full + # - exit: exit Falco with a non-zero rc + # + # Notice it is not possible to ignore and log/alert messages at the same time. + # + # The rate at which log/alert messages are emitted is governed by a token + # bucket. The rate corresponds to one message every 30 seconds with a burst of + # one message (by default). + # + # The messages are emitted when the percentage of dropped system calls with + # respect to the number of events in the last second is greater than the given + # threshold (a double in the range [0, 1]). If you want to be alerted on any + # drops, set the threshold to 0. + # + # For debugging/testing it is possible to simulate the drops using the + # `simulate_drops: true`. In this case the threshold does not apply. + # + # --- [Usage] + # + # Enabled by default, but requires Falco rules config `priority` set to `debug`. + # Emits a Falco rule named "Falco internal: syscall event drop" as many times in + # a given time period as dictated by the settings. Statistics here reflect the + # delta in a 1s time period. + # + # If instead you prefer periodic metrics of monotonic counters at a regular + # interval, which include syscall drop statistics and additional metrics, + # explore the `metrics` configuration option. + # -- For debugging/testing it is possible to simulate the drops using + # the `simulate_drops: true`. In this case the threshold does not apply. + syscall_event_drops: + # -- The messages are emitted when the percentage of dropped system calls + # with respect to the number of events in the last second + # is greater than the given threshold (a double in the range [0, 1]).
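The drop-ratio check and token-bucket throttling described above can be sketched in a few lines. This is a simplified illustrative model, not Falco's actual implementation; `should_act` and `TokenBucket` are names of our own:

```python
# Simplified model (not Falco's code) of syscall_event_drops behavior: an
# action fires when the drop ratio over the last second exceeds the
# threshold, and emission is throttled by a token bucket.

def should_act(n_drops: int, n_evts: int, threshold: float) -> bool:
    # Percentage of dropped syscalls with respect to events in the last second.
    if n_evts == 0:
        return False
    return n_drops / n_evts > threshold

class TokenBucket:
    """rate=1/30 and max_burst=1 mirror the defaults described above."""

    def __init__(self, rate: float = 1 / 30, max_burst: float = 1.0):
        self.rate = rate
        self.max_burst = max_burst
        self.tokens = max_burst  # start with a full burst available

    def try_emit(self, elapsed_s: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.max_burst, self.tokens + elapsed_s * self.rate)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With the defaults, one message can be emitted immediately and then roughly one every 30 seconds, matching `rate: .03333` and `max_burst: 1` below.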
+ threshold: .1 + # -- Actions to be taken when system calls were dropped from the circular buffer. + actions: + - log + - alert + # -- Rate at which log/alert messages are emitted. + rate: .03333 + # -- Max burst of messages emitted. + max_burst: 1 + # -- Flag to enable drops for debug purposes. + simulate_drops: false + # [Experimental] `metrics` + # + # -- Generates "Falco internal: metrics snapshot" rule output when `priority=info` at minimum + # + # periodic metric snapshots (including stats and resource utilization) captured + # at regular intervals + # + # --- [Description] + # + # Consider these key points about the `metrics` feature in Falco: + # + # - It introduces a redesigned stats/metrics system. + # - Native support for resource utilization metrics and specialized performance + # metrics. + # - Metrics are emitted as monotonic counters at predefined intervals + # (snapshots). + # - All metrics are consolidated into a single log message, adhering to the + # established rules schema and naming conventions. + # - Additional info fields complement the metrics and facilitate customized + # statistical analyses and correlations. + # - The metrics framework is designed for easy future extension. + # + # The `metrics` feature follows a specific schema and field naming convention. + # All metrics are collected as subfields under the `output_fields` key, similar + # to regular Falco rules. Each metric field name adheres to the grammar used in + # Falco rules. There are two new field classes introduced: `falco.` and `scap.`. + # The `falco.` class represents userspace counters, statistics, resource + # utilization, or useful information fields. The `scap.` class represents + # counters and statistics mostly obtained from Falco's kernel instrumentation + # before events are sent to userspace, but can include scap userspace stats as + # well. 
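As a rough sketch of the schema described above, a metrics snapshot carries `falco.*` and `scap.*` fields under `output_fields`, just like any other rule output. The concrete field names and values below are hypothetical examples, not an authoritative list:

```python
import json

# Hypothetical shape of a "Falco internal: metrics snapshot" event; field
# names follow the falco.* (userspace) / scap.* (kernel-side) convention
# described above but are illustrative only.
snapshot = {
    "rule": "Falco internal: metrics snapshot",
    "output_fields": {
        "falco.version": "0.38.2",       # userspace info field
        "falco.memory_rss_mb": 48.2,     # userspace resource utilization
        "scap.n_evts": 123456,           # kernel-side event counter
        "scap.n_drops": 0,               # kernel-side drop counter
    },
}

print(json.dumps(snapshot, indent=2))
```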
+ # + # It's important to note that the output fields and their names can be subject + # to change until the metrics feature reaches a stable release. + # + # To customize the hostname in Falco, you can set the environment variable + # `FALCO_HOSTNAME` to your desired hostname. This is particularly useful in + # Kubernetes deployments where the hostname can be set to the pod name. + # + # --- [Usage] + # + # `enabled`: Disabled by default. + # + # `interval`: The stats interval in Falco follows the time duration definitions + # used by Prometheus. + # https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations + # + # Time durations are specified as a number, followed immediately by one of the + # following units: + # + # ms - millisecond + # s - second + # m - minute + # h - hour + # d - day - assuming a day always has 24h + # w - week - assuming a week always has 7d + # y - year - assuming a year always has 365d + # + # Example of a valid time duration: 1h30m20s10ms + # + # A minimum interval of 100ms is enforced for metric collection. However, for + # production environments, we recommend selecting one of the following intervals + # for optimal monitoring: + # + # 15m + # 30m + # 1h + # 4h + # 6h + # + # `output_rule`: To enable seamless metrics and performance monitoring, we + # recommend emitting metrics as the rule "Falco internal: metrics snapshot". + # This option is particularly useful when Falco logs are preserved in a data + # lake. Please note that to use this option, the Falco rules config `priority` + # must be set to `info` at a minimum. + # + # `output_file`: Append stats to a `jsonl` file. Use with caution in production + # as Falco does not automatically rotate the file. + # + # `resource_utilization_enabled`: Emit CPU and memory usage metrics. CPU usage + # is reported as a percentage of one CPU and can be normalized to the total + # number of CPUs to determine overall usage.
Memory metrics are provided in raw + # units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) + # and can be uniformly converted to megabytes (MB) using the + # `convert_memory_to_mb` functionality. In environments such as Kubernetes when + # deployed as daemonset, it is crucial to track Falco's container memory usage. + # To customize the path of the memory metric file, you can create an environment + # variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. + # By default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to + # monitor container memory usage, which aligns with Kubernetes' + # `container_memory_working_set_bytes` metric. Finally, we emit the overall host + # CPU and memory usages, along with the total number of processes and open file + # descriptors (fds) on the host, obtained from the proc file system unrelated to + # Falco's monitoring. These metrics help assess Falco's usage in relation to the + # server's workload intensity. + # + # `rules_counters_enabled`: Emit counts for each rule. + # + # `state_counters_enabled`: Emit counters related to Falco's state engine, including + # added, removed threads or file descriptors (fds), and failed lookup, store, or + # retrieve actions in relation to Falco's underlying process cache table (threadtable). + # We also log the number of currently cached containers if applicable. + # + # `kernel_event_counters_enabled`: Emit kernel side event and drop counters, as + # an alternative to `syscall_event_drops`, but with some differences. These + # counters reflect monotonic values since Falco's start and are exported at a + # constant stats interval. + # + # `libbpf_stats_enabled`: Exposes statistics similar to `bpftool prog show`, + # providing information such as the number of invocations of each BPF program + # attached by Falco and the time spent in each program measured in nanoseconds. + # To enable this feature, the kernel must be >= 5.1, and the kernel + # configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, + # or an equivalent statistics feature, is not available for non `*bpf*` drivers. + # Additionally, please be aware that the current implementation of `libbpf` does + # not support granularity of statistics at the bpf tail call level. + # + # `include_empty_values`: When the option is set to true, fields with an empty + # numeric value will be included in the output.
However, this rule does not + # apply to high-level fields such as `n_evts` or `n_drops`; they will always be + # included in the output even if their value is empty. This option can be + # beneficial for exploring the data schema and ensuring that fields with empty + # values are included in the output. + # todo: prometheus export option + # todo: syscall_counters_enabled option + metrics: + enabled: false + interval: 1h + output_rule: true + # output_file: /tmp/falco_stats.jsonl + rules_counters_enabled: true + resource_utilization_enabled: true + state_counters_enabled: true + kernel_event_counters_enabled: true + libbpf_stats_enabled: true + convert_memory_to_mb: true + include_empty_values: false + ####################################### + # Falco performance tuning (advanced) # + ####################################### + + # [Experimental] `base_syscalls`, use with caution, read carefully + # + # --- [Description] + # + # -- This option configures the set of syscalls that Falco traces. + # + # --- [Falco's State Engine] + # + # Falco requires a set of syscalls to build up state in userspace. For example, + # when spawning a new process or network connection, multiple syscalls are + # involved. Furthermore, properties of a process during its lifetime can be + # modified by syscalls. Falco accounts for this by enabling the collection of + # additional syscalls beyond the ones defined in the rules and by managing a smart + # process cache table in userspace. Processes are purged from this table when a + # process exits. + # + # By default, with + # ``` + # base_syscalls.custom_set = [] + # base_syscalls.repair = false + # ``` + # Falco enables tracing for a syscall set gathered: (1) from (enabled) Falco + # rules (2) from a static, more verbose set defined in + # `libsinsp::events::sinsp_state_sc_set` in + # libs/userspace/libsinsp/events/sinsp_events_ppm_sc.cpp. This allows Falco to + # successfully build up its state engine and life-cycle management.
+ # + # If the default behavior described above does not fit the user's use case for + # Falco, the `base_syscalls` option allows for finer end-user control of + # syscalls traced by Falco. + # + # --- [base_syscalls.custom_set] + # + # CAUTION: Misconfiguration of this setting may result in incomplete Falco event + # logs or Falco being unable to trace events entirely. + # + # `base_syscalls.custom_set` allows the user to explicitly define an additional + # set of syscalls to be traced in addition to the syscalls from each enabled + # Falco rule. + # + # This is useful in lowering CPU utilization and further tailoring Falco to + # specific environments according to your threat model and budget constraints. + # + # --- [base_syscalls.repair] + # + # `base_syscalls.repair` is an alternative to Falco's default state engine + # enforcement. When enabled, this option is designed to (1) ensure that Falco's + # state engine is correctly and successfully built-up (2) be the most system + # resource-friendly by activating the least number of additional syscalls + # (outside of those enabled for enabled rules) + # + # Setting `base_syscalls.repair` to `true` allows Falco to automatically + # configure what is described in the [Suggestions] section below. + # + # `base_syscalls.repair` can be enabled with an empty custom set, meaning with + # the following, + # ``` + # base_syscalls.custom_set = [] + # base_syscalls.repair = true + # ``` + # Falco enables tracing for a syscall set gathered: (1) from (enabled) Falco + # rules (2) from a minimal set of additional syscalls needed to "repair" the + # state engine and properly log event conditions specified in enabled Falco + # rules + # + # --- [Usage] + # + # List of system call names (`<syscall_name>`), negative ("!<syscall_name>") + # notation supported. + # + # Example: base_syscalls.custom_set: [<syscall_name1>, <syscall_name2>, + # "!<syscall_name3>"] base_syscalls.repair: <true|false> + # + # We recommend only excluding syscalls, e.g.
"!mprotect" if you need a fast + # deployment update (overriding rules), else remove unwanted syscalls from the + # Falco rules. + # + # Passing `-o "log_level=debug" -o "log_stderr=true" --dry-run` to Falco's cmd + # args will print the final set of syscalls to STDOUT. + # + # --- [Suggestions] + # + # NOTE: setting `base_syscalls.repair: true` automates the following suggestions + # for you. + # + # These suggestions are subject to change as Falco and its state engine evolve. + # + # For execve* events: Some Falco fields for an execve* syscall are retrieved + # from the associated `clone`, `clone3`, `fork`, `vfork` syscalls when spawning + # a new process. The `close` syscall is used to purge file descriptors from + # Falco's internal thread / process cache table and is necessary for rules + # relating to file descriptors (e.g. open, openat, openat2, socket, connect, + # accept, accept4 ... and many more) + # + # Consider enabling the following syscalls in `base_syscalls.custom_set` for + # process rules: [clone, clone3, fork, vfork, execve, execveat, close] + # + # For networking related events: While you can log `connect` or `accept*` + # syscalls without the socket syscall, the log will not contain the ip tuples. + # Additionally, for `listen` and `accept*` syscalls, the `bind` syscall is also + # necessary. + # + # We recommend the following as the minimum set for networking-related rules: + # [clone, clone3, fork, vfork, execve, execveat, close, socket, bind, + # getsockopt] + # + # Lastly, for tracking the correct `uid`, `gid` or `sid`, `pgid` of a process + # when the running process opens a file or makes a network connection, consider + # adding the following to the above recommended syscall sets: ... setresuid, + # setsid, setuid, setgid, setpgid, setresgid, setsid, capset, chdir, chroot, + # fchdir ... 
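The way rule-derived syscalls, the static state set, `custom_set`, and the "!" negation notation combine can be modeled roughly as follows. This is an illustrative sketch under the semantics described above, not the actual libsinsp logic; the function name and example syscall sets are ours:

```python
# Illustrative model (not libsinsp's implementation) of the final traced
# syscall set: rules always contribute; a non-empty custom_set adds syscalls
# on top of the rules, otherwise the static state-engine set is used; "!"
# negations are applied last and can prune even rule-derived syscalls.

def effective_syscalls(rule_syscalls, state_set, custom_set):
    additions = {s for s in custom_set if not s.startswith("!")}
    removals = {s[1:] for s in custom_set if s.startswith("!")}
    if additions:
        base = set(rule_syscalls) | additions
    else:
        # Empty (or negation-only) custom_set: fall back to the default
        # rules + static state-engine set described above.
        base = set(rule_syscalls) | set(state_set)
    return base - removals
```

For example, a negation-only set such as `["!mprotect"]` keeps the default behavior but drops `mprotect`, matching the "fast deployment update" use case mentioned above.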
+ base_syscalls: + custom_set: [] + repair: false + ############## + # Falco libs # + ############## + + # [Experimental] `falco_libs` - Potentially subject to more frequent changes + # + # `thread_table_size` + # + # Set the maximum number of entries (the absolute maximum value can only be MAX UINT32) + # for Falco's internal threadtable (process cache). Please note that Falco operates at a + # granular level, focusing on individual threads. Falco rules reference the thread leader + # as the process. The size of the threadtable should typically be much higher than the + # number of currently alive processes. The default value should work well on modern + # infrastructures and be sufficient to absorb bursts. + # + # Reducing its size can help in better memory management, but as a consequence, your + # process tree may be more frequently disrupted due to missing threads. You can explore + # `metrics.state_counters_enabled` to measure how the internal state handling is performing, + # and the fields called `n_drops_full_threadtable` or `n_store_evts_drops` will inform you + # if you should increase this value for optimal performance. + falco_libs: + thread_table_size: 262144 +# [Stable] Guidance for Kubernetes container engine command-line args settings +# +# Modern cloud environments, particularly Kubernetes, heavily rely on +# containerized workload deployments. When capturing events with Falco, it +# becomes essential to identify the owner of the workload for which events are +# being captured, such as syscall events. Falco integrates with the container +# runtime to enrich its events with container information, including fields like +# `container.image.repository`, `container.image.tag`, ... , `k8s.ns.name`, +# `k8s.pod.name`, `k8s.pod.*` in the Falco output (Falco retrieves Kubernetes +# namespace and pod name directly from the container runtime, see +# https://falco.org/docs/reference/rules/supported-fields/#field-class-container). 
+ # + # Furthermore, Falco exposes container events themselves as a data source for + # alerting. To achieve this integration with the container runtime, Falco + # requires access to the runtime socket. By default, for Kubernetes, Falco + # attempts to connect to the following sockets: + # "/run/containerd/containerd.sock", "/run/crio/crio.sock", + # "/run/k3s/containerd/containerd.sock". If you have a custom path, you can use + # the `--cri` option to specify the correct location. + # + # In some cases, you may encounter empty fields for container metadata. To + # address this, you can explore the `--disable-cri-async` option, which disables + # asynchronous fetching if the fetch operation is not completing quickly enough. + # + # To get more information on these command-line arguments, you can run `falco + # --help` in your terminal to view their current descriptions. + # + # !!! The options mentioned here are not available in the falco.yaml + # configuration file. Instead, they can be used as command-line arguments + # when running the Falco command.
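The default socket lookup described above can be illustrated with a small sketch. This is our own illustrative code, not Falco's; `resolve_cri_socket` is a hypothetical helper:

```python
# Illustrative sketch (not Falco's code): pick the first container runtime
# socket that exists on disk, mirroring the default lookup order described
# above; an explicit --cri override takes precedence.
import os

DEFAULT_CRI_SOCKETS = [
    "/run/containerd/containerd.sock",
    "/run/crio/crio.sock",
    "/run/k3s/containerd/containerd.sock",
]

def resolve_cri_socket(candidates=DEFAULT_CRI_SOCKETS, override=None):
    # An explicit --cri path wins over the default lookup order.
    if override is not None:
        return override
    for path in candidates:
        if os.path.exists(path):
            return path
    # No runtime socket found: container metadata fields would stay empty.
    return None
```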