From 814922c435118f6c6b0555d9cbd0503e8114c842 Mon Sep 17 00:00:00 2001 From: Alexa Kreizinger Date: Wed, 23 Jul 2025 13:09:09 -0700 Subject: [PATCH 1/2] pipeline: outputs: stackdriver: general cleanup Signed-off-by: Alexa Kreizinger --- pipeline/outputs/stackdriver.md | 146 +++++++++++++++----------------- 1 file changed, 69 insertions(+), 77 deletions(-) diff --git a/pipeline/outputs/stackdriver.md b/pipeline/outputs/stackdriver.md index a4608f3d4..80b6bcc7d 100644 --- a/pipeline/outputs/stackdriver.md +++ b/pipeline/outputs/stackdriver.md @@ -1,54 +1,45 @@ # Stackdriver -Stackdriver output plugin allows to ingest your records into the [Google Cloud Stackdriver Logging](https://cloud.google.com/logging/) service. - -Before getting started with the plugin configuration, make sure to obtain the proper credentials to get access to the service. We strongly recommend using a common JSON credentials file, reference link: - -* [Creating a Google Service Account for Stackdriver](https://cloud.google.com/logging/docs/agent/authorization#create-service-account) - -{% hint style="info" %} - -Your goal is to obtain a credentials JSON file that will be used later by Fluent Bit Stackdriver output plugin. - -{% endhint %} - -Refer to the [Google Cloud `LogEntry` API documentation](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) for information on the meaning of some of the parameters below. 
- -## Configuration Parameters - -| Key | Description | default | -|:-------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| google\_service\_credentials | Absolute path to a Google Cloud credentials JSON file | Value of environment variable `$GOOGLE_APPLICATION_CREDENTIALS` | -| service\_account\_email | Account email associated to the service. Only available if **no credentials file** has been provided. | Value of environment variable `$SERVICE_ACCOUNT_EMAIL` | -| service\_account\_secret | Private key content associated with the service account. Only available if **no credentials file** has been provided. | Value of environment variable `$SERVICE_ACCOUNT_SECRET` | -| metadata\_server | Prefix for a metadata server. | Value of environment variable `$METADATA_SERVER`, or http://metadata.google.internal if unset. | -| location | The GCP or AWS region in which to store data about the resource. If the resource type is one of the _generic\_node_ or _generic\_task_, then this field is required. | | -| namespace | A namespace identifier, such as a cluster name or environment. If the resource type is one of the _generic\_node_ or _generic\_task_, then this field is required. 
| | -| node\_id | A unique identifier for the node within the namespace, such as hostname or IP address. If the resource type is _generic\_node_, then this field is required. | | -| job | An identifier for a grouping of related task, such as the name of a microservice or distributed batch. If the resource type is _generic\_task_, then this field is required. | | -| task\_id | A unique identifier for the task within the namespace and job, such as a replica index identifying the task within the job. If the resource type is _generic\_task_, then this field is required. | | -| export\_to\_project\_id | The GCP project that should receive these logs. | The `project_id` in the google\_service\_credentials file, or the `project_id` from Google's metadata.google.internal server. | -| resource | Set resource type of data. Supported resource types: _k8s\_container_, _k8s\_node_, _k8s\_pod_, _global_, _generic\_node_, _generic\_task_, and _gce\_instance_. | global, gce\_instance | -| k8s\_cluster\_name | The name of the cluster that the container \(node or pod based on the resource type\) is running in. If the resource type is one of the _k8s\_container_, _k8s\_node_ or _k8s\_pod_, then this field is required. | | -| k8s\_cluster\_location | The physical location of the cluster that contains \(node or pod based on the resource type\) the container. If the resource type is one of the _k8s\_container_, _k8s\_node_ or _k8s\_pod_, then this field is required. | | -| labels\_key | The name of the key from the original record that contains the LogEntry's `labels`. | logging.googleapis.com/labels | -| labels | Optional list of comma-separated of strings specifying `key=value` pairs. The resulting labels will be combined with the elements obtained from `labels_key` to set the LogEntry Labels. Elements from `labels` will override duplicate values from `labels_key`. | | -| log\_name\_key | The name of the key from the original record that contains the logName value. 
| logging.googleapis.com/logName | -| tag\_prefix | Set the tag\_prefix used to validate the tag of logs with k8s resource type. Without this option, the tag of the log must be in format of k8s\_container\(pod/node\).\* in order to use the k8s\_container resource type. Now the tag prefix is configurable by this option \(note the ending dot\). | k8s\_container., k8s\_pod., k8s\_node. | -| severity\_key | The name of the key from the original record that contains the severity. | logging.googleapis.com/severity | -| project_id_key | The value of this field is used by the Stackdriver output plugin to find the gcp project id from jsonPayload and then extract the value of it to set the PROJECT_ID within LogEntry logName, which controls the gcp project that should receive these logs. | `logging.googleapis.com/projectId`. See [Stackdriver Special Fields][StackdriverSpecialFields] for more info. | -| autoformat\_stackdriver\_trace | Rewrite the _trace_ field to include the projectID and format it for use with Cloud Trace. When this flag is enabled, the user can get the correct result by printing only the traceID (usually 32 characters). | false | -| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` | -| custom\_k8s\_regex | Set a custom regex to extract field like pod\_name, namespace\_name, container\_name and docker\_id from the local\_resource\_id in logs. This is helpful if the value of pod or node name contains dots. | `(?[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?[^_]+)_(?.+)-(?[a-z0-9]{64})\.log$` | -| resource_labels | An optional list of comma-separated strings specifying resource label plaintext assignments (`new=value`) and/or mappings from an original field in the log entry to a destination field (`destination=$original`). 
Nested fields and environment variables are also supported using the [record accessor syntax](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/record-accessor). If configured, *all* resource labels will be assigned using this API only, with the exception of `project_id`. See [Resource Labels](#resource-labels) for more details. | | -| http_request_key | The name of the key from the original record that contains the LogEntry's `httpRequest`. Note that the default value erroneously uses an underscore; users will likely need to set this to `logging.googleapis.com/httpRequest`. | logging.googleapis.com/http_request | -| compress | Set payload compression mechanism. The only available option is `gzip`. Default = "", which means no compression. | | -| cloud\_logging\_base\_url | Set the base Cloud Logging API URL to use for the `/v2/entries:write` API request. | https://logging.googleapis.com | - - -### Configuration File - -If you are using a `Google Cloud Credentials File`, the following configuration is enough to get started: +The _Stackdriver_ output plugin lets you ingest your records into the [Google Cloud Stackdriver Logging](https://cloud.google.com/logging/) service. + +Before getting started with the plugin configuration, make sure to obtain the [proper credentials](https://cloud.google.com/logging/docs/agent/logging/authorization#create-service-account) to get access to the service. For best results, use a common JSON credentials file that can be referenced by the Stackdriver plugin. + +## Configuration parameters + +This plugin uses the following configuration parameters. For more details about Stackdriver-specific parameters, see the [Google Cloud `LogEntry` API documentation](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry). 
+ +| Key | Description | Default | +| --- | ----------- | ------- | +| `google_service_credentials` | Absolute path to a Google Cloud credentials JSON file | Value of environment variable `$GOOGLE_APPLICATION_CREDENTIALS` | +| `service_account_email` | Account email associated with the service. Only available if no credentials file has been provided. | Value of environment variable `$SERVICE_ACCOUNT_EMAIL` | +| `service_account_secret` | Private key content associated with the service account. Only available if no credentials file has been provided. | Value of environment variable `$SERVICE_ACCOUNT_SECRET` | +| `metadata_server` | Prefix for a metadata server. | Value of environment variable `$METADATA_SERVER`, or `http://metadata.google.internal` if unset. | +| `location` | The GCP or AWS region in which to store data about the resource. If the resource type is either `generic_node` or `generic_task`, this field is required. | _none_ | +| `namespace` | A namespace identifier, such as a cluster name or environment. If the resource type is either `generic_node` or `generic_task`, this field is required. | _none_ | +| `node_id` | A unique identifier for the node within the namespace, such as a hostname or IP address. If the resource type is `generic_node`, this field is required. | _none_ | +| `job` | An identifier for a grouping of related tasks, such as the name of a microservice or distributed batch. If the resource type is `generic_task`, this field is required. | _none_ | +| `task_id` | A unique identifier for the task within the namespace and job, such as a replica index identifying the task within the job. If the resource type is `generic_task`, this field is required. | _none_ | +| `export_to_project_id` | The GCP project that should receive these logs. | The `project_id` in the `google_service_credentials` file, or the `project_id` from Google's `metadata.google.internal` server. | +| `resource` | Set the resource type of data.
Supported resource types: `k8s_container`, `k8s_node`, `k8s_pod`, `global`, `generic_node`, `generic_task`, and `gce_instance`. | `global`, `gce_instance` | +| `k8s_cluster_name` | The name of the cluster that the container (node or pod based on the resource type) is running in. If the resource type is `k8s_container`, `k8s_node`, or `k8s_pod`, this field is required. | _none_ | +| `k8s_cluster_location` | The physical location of the cluster that contains (node or pod based on the resource type) the container. If the resource type is `k8s_container`, `k8s_node`, or `k8s_pod`, this field is required. | _none_ | +| `labels_key` | The name of the key from the original record that contains the LogEntry's `labels`. | `logging.googleapis.com/labels` | +| `labels` | Optional list of comma-separated strings specifying `key=value` pairs. The resulting labels will be combined with the elements obtained from `labels_key` to set the LogEntry labels. Elements from `labels` will override duplicate values from `labels_key`. | _none_ | +| `log_name_key` | The name of the key from the original record that contains the `logName` value. | `logging.googleapis.com/logName` | +| `tag_prefix` | Set the `tag_prefix` used to validate the tag of logs with a Kubernetes resource type. Without this option, the tag of the log must be in the format `k8s_container(pod/node).*` to use the `k8s_container` resource type. This option makes the tag prefix configurable (note the ending dot). | `k8s_container.`, `k8s_pod.`, `k8s_node.` | +| `severity_key` | The name of the key from the original record that contains the severity. | `logging.googleapis.com/severity` | +| `project_id_key` | The value of this field is used by the Stackdriver output plugin to find the GCP project ID in `jsonPayload` and extract its value to set the `PROJECT_ID` within the LogEntry `logName`, which controls the GCP project that should receive these logs. | `logging.googleapis.com/projectId`.
See [Stackdriver Special Fields](https://github.com/fluent/fluent-bit-docs/blob/master/pipeline/outputs/stackdriver_special_fields.md#log-entry-fields) for more info. | +| `autoformat_stackdriver_trace` | Rewrite the _trace_ field to include the `projectID` and format it for use with Cloud Trace. When this flag is enabled, the user can get the correct result by printing only the `traceID` (usually 32 characters). | `false` | +| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` | +| `custom_k8s_regex` | Set a custom regular expression to extract fields like `pod_name`, `namespace_name`, `container_name`, and `docker_id` from the `local_resource_id` in logs. This is helpful if the value of the pod or node name contains dots. | `(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$` | +| `resource_labels` | An optional list of comma-separated strings specifying resource label plain text assignments (`new=value`) and mappings from an original field in the log entry to a destination field (`destination=$original`). Nested fields and environment variables are also supported using the [record accessor syntax](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/record-accessor). If configured, all resource labels will be assigned using this API only, with the exception of `project_id`. See [Resource Labels](#resource-labels) for more details. | _none_ | +| `http_request_key` | The name of the key from the original record that contains the LogEntry's `httpRequest`. Be aware that the default value erroneously uses an underscore; users will likely need to set this to `logging.googleapis.com/httpRequest`. | `logging.googleapis.com/http_request` | +| `compress` | Set the payload compression mechanism. The only available option is `gzip`. If no value is specified, no compression is applied.
| _none_ | +| `cloud_logging_base_url` | Set the base Cloud Logging API URL to use for the `/v2/entries:write` API request. | `https://logging.googleapis.com` | + +## Configuration file + +If you are using a Google Cloud credentials file, the following configuration is enough to get started: {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -80,17 +71,17 @@ pipeline: {% endtab %} {% endtabs %} -#### Example configuration file for k8s resource type: +### Example configuration file for Kubernetes resource types -`local_resource_id` is used by the Stackdriver output plugin to set the labels field for different k8s resource types. Stackdriver plugin will try to find the `local_resource_id` field in the log entry. If there is no field `logging.googleapis.com/local_resource_id` in the log, the plugin will then construct it by using the tag value of the log. +The `local_resource_id` value is used by the Stackdriver output plugin to set the `labels` field for different Kubernetes resource types. The plugin will try to find the `local_resource_id` field in the log entry. If there is no `logging.googleapis.com/local_resource_id` field in the log, the plugin will construct it from the tag value of the log. -The local_resource_id should be in format: +The `local_resource_id` should be in the format: -* `k8s_container.<namespace_name>.<pod_name>.<container_name>` -* `k8s_node.<node_name>` -* `k8s_pod.<namespace_name>.<pod_name>` +- `k8s_container.<namespace_name>.<pod_name>.<container_name>` +- `k8s_node.<node_name>` +- `k8s_pod.<namespace_name>.<pod_name>` -This implies that if there is no local_resource_id in the log entry then the tag of logs should match this format. Note that we have an option tag_prefix so it is not mandatory to use k8s_container(node/pod) as the prefix for tag. +This implies that if there is no `local_resource_id` in the log entry, the tag of the log should match this format. Be aware that there is a `tag_prefix` option, so it's not mandatory to use `k8s_container(node/pod)` as the tag prefix.
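+For example, a log tagged `k8s_container.mynamespace.mypod.mycontainer` (the namespace, pod, and container names here are hypothetical) carries the equivalent `local_resource_id`, from which the plugin derives the Kubernetes resource labels. A sketch of that mapping:
+
+```text
+tag / local_resource_id:  k8s_container.mynamespace.mypod.mycontainer
+
+namespace_name:  mynamespace
+pod_name:        mypod
+container_name:  mycontainer
+```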
{% tabs %} {% tab title="fluent-bit.yaml" %} @@ -138,22 +129,26 @@ pipeline: {% endtab %} {% endtabs %} -## Resource Labels +## Resource labels -Currently, there are four ways which fluent-bit uses to assign fields into the resource/labels section. +Fluent Bit uses the following methods to assign fields to the resource/labels section: 1. Resource Labels API 2. Monitored Resource API 3. Local Resource ID 4. Credentials / Config Parameters -If `resource_labels` is correctly configured, then fluent-bit will attempt to populate all resource/labels using the entries specified. Otherwise, fluent-bit will attempt to use the monitored resource API. Similarly, if the monitored resource API cannot be used, then fluent-bit will attempt to populate resource/labels using configuration parameters and/or credentials specific to the resource type. As mentioned in the [Configuration File](#configuration-file) section, fluent-bit will attempt to use or construct a local resource ID for a K8s resource type which does not use the resource labels or monitored resource API. +If `resource_labels` is correctly configured, then Fluent Bit will attempt to populate all resource labels using the entries specified. Otherwise, Fluent Bit will attempt to use the monitored resource API. Similarly, if the monitored resource API can't be used, then Fluent Bit will attempt to populate resource/labels using configuration parameters and credentials specific to the resource type. As mentioned in the [configuration file](#configuration-file) section, Fluent Bit will attempt to use or construct a local resource ID for a Kubernetes resource type that doesn't use the resource labels or monitored resource API. + +{% hint style="info" %} -Note that the `project_id` resource label will always be set from the service credentials or fetched from the metadata server and cannot be overridden. +The `project_id` resource label will always be set from the service credentials or fetched from the metadata server. 
It can't be overridden. -### Using the resource_labels parameter +{% endhint %} -The `resource_labels` configuration parameter offers an alternative API for assigning the resource labels. To use, input a list of comma separated strings specifying resource labels plaintext assignments (`new=value`), mappings from an original field in the log entry to a destination field (`destination=$original`) and/or environment variable assignments (`new=${var}`). +### Use the `resource_labels` parameter + +The `resource_labels` configuration parameter offers an alternative API for assigning the resource labels. To use, input a list of comma separated strings specifying resource labels plain text assignments (`new=value`), mappings from an original field in the log entry to a destination field (`destination=$original`) and environment variable assignments (`new=${var}`). For instance, consider the following log entry: @@ -221,7 +216,7 @@ This will produce the following log: This makes the `resource_labels` API the recommended choice for supporting new or existing resource types that have all resource labels known before runtime or available on the payload during runtime. -For instance, for a K8s resource type, `resource_labels` can be used in tandem with the [Kubernetes filter](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) to pack all six resource labels. Below is an example of what this could look like for a `k8s_container` resource: +For instance, for a Kubernetes resource type, `resource_labels` can be used in tandem with the [Kubernetes filter](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) to pack all six resource labels. The following example shows what this could look like for a `k8s_container` resource: {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -250,9 +245,9 @@ pipeline: {% endtab %} {% endtabs %} -`resource_labels` also supports validation for required labels based on the input resource type. 
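+As a sketch, the assignment forms can also be combined in a single comma-separated list. The values here are hypothetical (a plain text assignment, a mapping from a record field named `environment`, and an environment variable assignment), not defaults:
+
+```yaml
+pipeline:
+  outputs:
+    - name: stackdriver
+      match: '*'
+      resource: generic_node
+      # Hypothetical values: plain text (us-east1), record field
+      # mapping ($environment), and environment variable (${NODE_NAME}).
+      resource_labels: location=us-east1,namespace=$environment,node_id=${NODE_NAME}
+```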
This allows fluent-bit to check if all specified labels are present for a given configuration before runtime. If validation is not currently supported for a resource type that you would like to use this API with, we encourage you to open a pull request for it. Adding validation for a new resource type is simple - all that is needed is to specify the resources associated with the type alongside the required labels [here](https://github.com/fluent/fluent-bit/blob/master/plugins/out_stackdriver/stackdriver_resource_types.c#L27). +The `resource_labels` parameter also supports validation for required labels based on the input resource type. This allows Fluent Bit to check if all specified labels are present for a given configuration before runtime. If validation isn't currently supported for a resource type that you would like to use this API with, you can open a pull request for it. Adding validation for a new resource type involves specifying the resources associated with the type alongside the [required labels](https://github.com/fluent/fluent-bit/blob/master/plugins/out_stackdriver/stackdriver_resource_types.c#L27). -## Log Name +## Log names By default, the plugin will write to the following log name: ```text /projects/<project_id>/logs/<tag> ``` -You may be in a scenario where being more specific about the log name is important (for example [integration with Log Router rules](https://cloud.google.com/logging/docs/routing/overview) or [controlling cardinality of log based metrics]((https://cloud.google.com/logging/docs/logs-based-metrics/troubleshooting#too-many-time-series))). You can control the log name directly on a per-log basis by using the [`logging.googleapis.com/logName` special field][StackdriverSpecialFields]. You can configure a `log_name_key` if you'd like to use something different from `logging.googleapis.com/logName`, i.e.
if the `log_name_key` is set to `mylognamefield` will extract the log name from `mylognamefield` in the log. +You might be in a scenario where being more specific about the log name is important (for example [integration with Log Router rules](https://cloud.google.com/logging/docs/routing/overview) or [controlling cardinality of log based metrics](https://cloud.google.com/logging/docs/logs-based-metrics/troubleshooting#too-many-time-series)). You can control the log name directly on a per-log basis by using the [`logging.googleapis.com/logName` special field](https://github.com/fluent/fluent-bit-docs/blob/master/pipeline/outputs/stackdriver_special_fields.md#log-entry-fields). You can configure a `log_name_key` if you'd like to use something different from `logging.googleapis.com/logName`. For example, if the `log_name_key` is set to `mylognamefield` will extract the log name from `mylognamefield` in the log. -## Troubleshooting Notes +## Troubleshooting ### Upstream connection error -An upstream connection error means Fluent Bit was not able to reach Google services, the error looks like this: +An upstream connection error means Fluent Bit wasn't able to reach Google services, the error looks like this: ```text [2019/01/07 23:24:09] [error] [oauth2] could not get an upstream connection @@ -274,9 +269,8 @@ An upstream connection error means Fluent Bit was not able to reach Google servi This is due to a network issue in the environment where Fluent Bit is running. 
Make sure that the Host, Container or Pod can reach the following Google end-points: -* [https://www.googleapis.com](https://www.googleapis.com) -* [https://logging.googleapis.com](https://logging.googleapis.com) - +- [https://www.googleapis.com](https://www.googleapis.com) +- [https://logging.googleapis.com](https://logging.googleapis.com) {% hint style="warning" %} For more details, see GitHub reference: [#761](https://github.com/fluent/fluent-bit/issues/761) {% endhint %} -### Fail to process local_resource_id +### Fail to process `local_resource_id` The error looks like this: ``` ``` Check the following: -* If the log entry does not contain the local_resource_id field, does the tag of the log match for format? -* If tag_prefix is configured, does the prefix of tag specified in the input plugin match the tag_prefix? +- If the log entry doesn't contain the `local_resource_id` field, does the tag of the log match the format? +- If `tag_prefix` is configured, does the tag prefix specified in the input plugin match the `tag_prefix`? ## Other Implementations Stackdriver officially supports a [logging agent based on Fluentd](https://cloud.google.com/logging/docs/agent). -We plan to support some [special fields in structured payloads](https://cloud.google.com/logging/docs/agent/configuration#special-fields). Use cases of special fields is [here](./stackdriver_special_fields.md). - -[StackdriverSpecialFields]: ./stackdriver_special_fields.md#log-entry-fields \ No newline at end of file +Fluent Bit plans to support some [special fields in structured payloads](https://cloud.google.com/logging/docs/agent/configuration#special-fields). For more information, see the documentation about [special fields](https://github.com/fluent/fluent-bit-docs/blob/master/pipeline/outputs/stackdriver_special_fields.md#log-entry-fields).
From 8eca6c49a66c6db7b67ead70d5f71f22f63e7785 Mon Sep 17 00:00:00 2001 From: Alexa Kreizinger Date: Wed, 23 Jul 2025 13:46:17 -0700 Subject: [PATCH 2/2] Apply suggestions from code review Co-authored-by: Craig Norris <112565517+cnorris-cs@users.noreply.github.com> Signed-off-by: Alexa Kreizinger --- pipeline/outputs/stackdriver.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pipeline/outputs/stackdriver.md b/pipeline/outputs/stackdriver.md index 80b6bcc7d..20a070f97 100644 --- a/pipeline/outputs/stackdriver.md +++ b/pipeline/outputs/stackdriver.md @@ -2,7 +2,7 @@ The _Stackdriver_ output plugin lets you ingest your records into the [Google Cloud Stackdriver Logging](https://cloud.google.com/logging/) service. -Before getting started with the plugin configuration, make sure to obtain the [proper credentials](https://cloud.google.com/logging/docs/agent/logging/authorization#create-service-account) to get access to the service. For best results, use a common JSON credentials file that can be referenced by the Stackdriver plugin. +Before getting started with the plugin configuration, be sure to obtain the [proper credentials](https://cloud.google.com/logging/docs/agent/logging/authorization#create-service-account) to get access to the service. For best results, use a common JSON credentials file that can be referenced by the Stackdriver plugin. ## Configuration parameters @@ -10,7 +10,7 @@ This plugin uses the following configuration parameters. For more details about | Key | Description | Default | | --- | ----------- | ------- | -| `google_service_credentials` | Absolute path to a Google Cloud credentials JSON file | Value of environment variable `$GOOGLE_APPLICATION_CREDENTIALS` | +| `google_service_credentials` | Absolute path to a Google Cloud credentials JSON file. | Value of environment variable `$GOOGLE_APPLICATION_CREDENTIALS` | | `service_account_email` | Account email associated with the service. 
Only available if no credentials file has been provided. | Value of environment variable `$SERVICE_ACCOUNT_EMAIL` | | `service_account_secret` | Private key content associated with the service account. Only available if no credentials file has been provided. | Value of environment variable `$SERVICE_ACCOUNT_SECRET` | | `metadata_server` | Prefix for a metadata server. | Value of environment variable `$METADATA_SERVER`, or `http://metadata.google.internal` if unset. | @@ -148,7 +148,7 @@ The `project_id` resource label will always be set from the service credentials ### Use the `resource_labels` parameter -The `resource_labels` configuration parameter offers an alternative API for assigning the resource labels. To use, input a list of comma separated strings specifying resource labels plain text assignments (`new=value`), mappings from an original field in the log entry to a destination field (`destination=$original`) and environment variable assignments (`new=${var}`). +The `resource_labels` configuration parameter offers an alternative API for assigning the resource labels. To use, input a list of comma-separated strings specifying resource labels plain text assignments (`new=value`), mappings from an original field in the log entry to a destination field (`destination=$original`) and environment variable assignments (`new=${var}`). For instance, consider the following log entry: @@ -245,7 +245,7 @@ pipeline: {% endtab %} {% endtabs %} -The `resource_labels` parameter also supports validation for required labels based on the input resource type. This allows Fluent Bit to check if all specified labels are present for a given configuration before runtime. If validation isn't currently supported for a resource type that you would like to use this API with, you can open a pull request for it. 
Adding validation for a new resource type involves specifying the resources associated with the type alongside the [required labels](https://github.com/fluent/fluent-bit/blob/master/plugins/out_stackdriver/stackdriver_resource_types.c#L27). +The `resource_labels` parameter also supports validation for required labels based on the input resource type. This allows Fluent Bit to check if all specified labels are present for a given configuration before runtime. If validation isn't supported for a resource type that you want to use this API with, open a pull request for it. Adding validation for a new resource type involves specifying the resources associated with the type alongside the [required labels](https://github.com/fluent/fluent-bit/blob/master/plugins/out_stackdriver/stackdriver_resource_types.c#L27). ## Log names @@ -255,13 +255,13 @@ By default, the plugin will write to the following log name: /projects/<project_id>/logs/<tag> ``` -You might be in a scenario where being more specific about the log name is important (for example [integration with Log Router rules](https://cloud.google.com/logging/docs/routing/overview) or [controlling cardinality of log based metrics](https://cloud.google.com/logging/docs/logs-based-metrics/troubleshooting#too-many-time-series)). You can control the log name directly on a per-log basis by using the [`logging.googleapis.com/logName` special field](https://github.com/fluent/fluent-bit-docs/blob/master/pipeline/outputs/stackdriver_special_fields.md#log-entry-fields). You can configure a `log_name_key` if you'd like to use something different from `logging.googleapis.com/logName`. For example, if the `log_name_key` is set to `mylognamefield` will extract the log name from `mylognamefield` in the log.
+You might be in a scenario where being more specific about the log name is important (for example, [integration with Log Router rules](https://cloud.google.com/logging/docs/routing/overview) or [controlling cardinality of log based metrics](https://cloud.google.com/logging/docs/logs-based-metrics/troubleshooting#too-many-time-series)). You can control the log name directly on a per-log basis by using the [`logging.googleapis.com/logName` special field](https://github.com/fluent/fluent-bit-docs/blob/master/pipeline/outputs/stackdriver_special_fields.md#log-entry-fields). You can configure a `log_name_key` if you'd like to use something different from `logging.googleapis.com/logName`. For example, if `log_name_key` is set to `mylognamefield`, the plugin will extract the log name from the `mylognamefield` field in the log. ## Troubleshooting ### Upstream connection error -An upstream connection error means Fluent Bit wasn't able to reach Google services, the error looks like this: +An upstream connection error means Fluent Bit wasn't able to reach Google services. In that case, the error message looks like this: ```text [2019/01/07 23:24:09] [error] [oauth2] could not get an upstream connection