fix: fix broken markdown links
miguelsorianod committed Apr 17, 2023
1 parent c54e497 commit 7c8b7c6
Showing 9 changed files with 38 additions and 35 deletions.
29 changes: 16 additions & 13 deletions CONTRIBUTING.md
@@ -26,7 +26,7 @@ file in the root directory of the repository.
* Changes have been verified by one additional reviewer against:
* each required environment
* each supported upgrade path
* If the changes could have an impact on the clients (either UI or CLI), a JIRA should be created for making the required changes on the client side and acknowledged by one of the client side team members.
* PR has been merged


@@ -55,7 +55,7 @@ $GOPATH
```

## Set up Git Hooks
Run the following command to set up git hooks for the project.

```
make setup/git/hooks
@@ -87,7 +87,10 @@ Set the following configuration in your **Launch.json** file.
}
```
## Modifying the API definition
-The services' OpenAPI specification is located in `openapi/kas-fleet-manager.yaml`. It can be modified using Apicurio Studio, Swagger or manually. The OpenAPI files follows the [RHOAS API standards](https://api.appservices.tech/docs/api-standards.html), each modification has to adhere to it.
+The services' OpenAPI specification is located in `openapi/kas-fleet-manager.yaml`.
+It can be modified using Apicurio Studio, Swagger or manually. The OpenAPI
+files follow the [RHOAS API standards](https://github.com/redhat-developer/app-services-api-guidelines/blob/main/docs/api-standards.md),
+each modification has to adhere to it.

Once you've made your changes, the second step is to validate them:

@@ -101,16 +104,16 @@ Once the schema is valid, the remaining step is to generate the openapi modules
make openapi/generate
```
## Adding a new ConfigModule
-See the [Adding a new Config Module](./docs/adding-a-new-ConfigModule.md) documentation for more information.
+See the [Adding a new Config Module](docs/adding-a-new-config-module.md) documentation for more information.

## Adding a new endpoint
-See the [adding-a-new-endpoint](./docs/adding-a-new-endpoint.md) documentation.
+See the [adding-a-new-endpoint](docs/adding-a-new-endpoint.md) documentation.

## Adding New Serve Command Flags
-See the [Adding Flags to KAS Fleet Manager](./docs/adding-new-flags.md) documentation for more information.
+See the [Adding Flags to KAS Fleet Manager](docs/adding-new-flags.md) documentation for more information.

## Testing
-See the [automated testing](./docs/automated-testing.md) documentation.
+See the [automated testing](docs/automated-testing.md) documentation.

## Logging Standards & Best Practices
* Log only actionable information, which will be read by a human or a machine for auditing or debugging purposes
@@ -131,7 +134,7 @@ Log to this level any error based information that might want to be brought to s

#### Error
Log to this level any error that is fatal to the given transaction and affects expected user operation. This may or may not include failed connections, missing expected data, or other unrecoverable outcomes.
-Error handling should be implemented by following these best practices as laid out in [this guide](docs/error-handing.md).
+Error handling should be implemented by following these best practices as laid out in [this guide](docs/best-practices/error-handling.md).

#### Fatal
Log to this level any error that is fatal to the service and requires the service to be immediately shutdown in order to prevent data loss or other unrecoverable states. This should be limited to scripts and fail-fast scenarios in service startup *only* and *never* because of a user operation in an otherwise healthy servce.
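For illustration, a minimal sketch of emitting logs at the levels described above with glog (the verbosity snippet further down already uses this package); the messages and the verbosity value are illustrative only:

```go
package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	flag.Parse() // glog registers its flags (e.g. -v) on the standard flag set

	glog.Info("kafka request accepted")            // informational, actionable detail
	glog.Warning("transient OCM error, retrying")  // error-based information worth attention
	glog.Error("failed to persist kafka request")  // fatal to the transaction, not the service
	glog.V(10).Info("verbose debugging detail")    // only emitted when run with -v=10 or higher
	// glog.Fatal("unrecoverable startup failure") // logs, then terminates the process
}
```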
@@ -160,17 +163,17 @@ glog.V(10).Info("biz")

### Sentry Logging
Sentry monitors errors/exceptions in a real-time environment. It provides detailed information about captured errors. See [sentry](https://sentry.io/welcome/) for more details.

Logging can be enabled by importing the sentry-go package: "github.com/getsentry/sentry-go".

Following are possible ways of logging events via Sentry:

```go
sentry.CaptureMessage(message) // for logging message
sentry.CaptureEvent(event) // capture the events
sentry.CaptureException(error) // capture the exception
```
Example:
```go
func check(err error, msg string) {
if err != nil && err != http.ErrServerClosed {
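		// The diff truncates the original example at this point; a plausible
		// completion, assuming the intent is to report the error to Sentry
		// before terminating:
		sentry.CaptureException(err)
		glog.Fatalf("%s: %s", msg, err)
	}
}
```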
@@ -200,4 +203,4 @@ make code/fix

## Writing Docs

-Please see the [README](./docs/README.md) in `docs` directory.
+Please see the [README](docs/README.md) in `docs` directory.
10 changes: 5 additions & 5 deletions docs/adding-a-new-config-module.md
@@ -1,22 +1,22 @@
# Adding a new Config Module
This document covers the steps on how to add a new config module to the service.

-1. Add an example configuration file in the /config folder. Ensure that the parameters are documented, see [provider-configuration.yaml](/config/provider-configuration.yaml) as example.
+1. Add an example configuration file in the /config folder. Ensure that the parameters are documented, see [provider-configuration.yaml](../config/provider-configuration.yaml) as example.

2. Create a new config file with a filename format of `<config>.go` (see [kafka.go](../internal/kafka/internal/config/kafka.go) as an example).
-> **NOTE**: If this configuration is an extension of an already existing configuration, please prefix the filename with the name of the config module its extending i.e. [kafka_supported_sizes.go](../internal/kafka/internal/config/kafka_supported_sizes.go) is an extension of the [kafka.go](../internal/kafka/internal/config/kafka.go) config.
+> **NOTE**: If this configuration is an extension of an already existing configuration, please prefix the filename with the name of the config module its extending i.e. [kafka_supported_instance_types.go](../internal/kafka/internal/config/kafka_supported_instance_types.go) is an extension of the [kafka.go](../internal/kafka/internal/config/kafka.go) config.
These files should be created in the following places based on usage :
- `/pkg/...` - If it's a common configuration that can be shared across all services
- `/internal/kafka/internal/config` - for any Kafka specific configuration
- `/internal/connector/internal/config` - for any Connector specific configuration

-3. The new config file has to implement the `ConfigModule` [interface](/pkg/environments/interfaces.go). This should be added into one of the following providers file:
+3. The new config file has to implement the `ConfigModule` [interface](../pkg/environments/interfaces.go). This should be added into one of the following providers file:
- `CoreConfigProviders()` inside the [core providers file](../pkg/providers/core.go): For any global configuration that is not specific to a particular service (i.e. kafka or connector).
- `ConfigProviders()` inside [kafka providers](../internal/kafka/providers.go): For any kafka specific configuration.
- `ConfigProviders()` inside [connector providers](../internal/connector/providers.go): For any connector specific configuration.
-> **NOTE**: If your ConfigModule also implements the ServiceValidator [interface](/pkg/environments/interfaces.go), please ensure to also specify `di.As(new(environments2.ServiceValidator))` when providing the dependency in one of the ConfigProviders listed above. Otherwise, the validation for your configuration will not be called.
+> **NOTE**: If your ConfigModule also implements the ServiceValidator [interface](../pkg/environments/interfaces.go), please ensure to also specify `di.As(new(environments2.ServiceValidator))` when providing the dependency in one of the ConfigProviders listed above. Otherwise, the validation for your configuration will not be called.
4. Create/edit tests for the configuration file if needed with a filename format of `<config_test>.go` in the same directory the config file was created.

5. Ensure the [service-template](../templates/service-template.yml) is updated. See this [pr](https://github.com/bf2fc6cc711aee1a0c2a/kas-fleet-manager/pull/817) as an example.
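
For orientation, a minimal sketch of what a new config module might look like; the `ConfigModule` method set assumed here (`AddFlags`/`ReadFiles`) is taken from the interface file referenced in step 3, and all names and file paths are hypothetical:

```go
package config

import (
	"os"

	"github.com/spf13/pflag"
	"gopkg.in/yaml.v2"
)

// MyFeatureConfig is a hypothetical config module.
type MyFeatureConfig struct {
	Enabled        bool   `yaml:"enabled"`
	ConfigFilePath string `yaml:"-"`
}

func NewMyFeatureConfig() *MyFeatureConfig {
	return &MyFeatureConfig{
		// hypothetical example file, added in step 1
		ConfigFilePath: "config/my-feature-configuration.yaml",
	}
}

// AddFlags registers the module's command line flags.
func (c *MyFeatureConfig) AddFlags(fs *pflag.FlagSet) {
	fs.BoolVar(&c.Enabled, "enable-my-feature", c.Enabled, "Enable the hypothetical feature")
	fs.StringVar(&c.ConfigFilePath, "my-feature-config-file", c.ConfigFilePath, "Path to the feature configuration file")
}

// ReadFiles loads the configuration file into the struct.
func (c *MyFeatureConfig) ReadFiles() error {
	data, err := os.ReadFile(c.ConfigFilePath)
	if err != nil {
		return err
	}
	return yaml.Unmarshal(data, c)
}
```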
2 changes: 1 addition & 1 deletion docs/adding-new-flags.md
@@ -48,7 +48,7 @@ func (c *Config) AddFlags(fs *pflag.FlagSet) {
### Adding a New Config File
If the new configuration flag doesn't fit in any of the existing config file, a new one should be created.

-1. See the [Adding a New Config File](/docs/adding-a-new-config-module.md) documentation.
+1. See the [Adding a New Config File](adding-a-new-config-module.md) documentation.
2. Define any new flags in the `AddFlags()` function.

### Verify Addition of New Flags
@@ -12,7 +12,7 @@ Data Plane OSD Cluster Dynamic Scaling functionality currently deals with:

## Autoscaling of worker nodes of an OSD cluster

-Autoscaling of worker nodes of an OSD cluster is done by leveraging the [Cluster Autoscaler](https://docs.openshift.com/container-platform/4.9/machine_management/applying-autoscaling.html) as described in [AP-15: Dynamic Scaling of Data Plane Clusters](https://architecture.appservices.tech/ap/15/#autoscaling-of-nodes). For Manual clusters, this has to be enabled manually. Worker node autoscaling is enabled by default for all clusters that are created dynamically by the Fleet manager
+Autoscaling of worker nodes of an OSD cluster is done by leveraging the [Cluster Autoscaler](https://docs.openshift.com/container-platform/4.9/machine_management/applying-autoscaling.html) as described in [AP-15: Dynamic Scaling of Data Plane Clusters](https://architecture.bf2.dev/ap/15/). For Manual clusters, this has to be enabled manually. Worker node autoscaling is enabled by default for all clusters that are created dynamically by the Fleet manager

## Prewarming of worker nodes

16 changes: 8 additions & 8 deletions docs/automated-testing.md
@@ -29,7 +29,7 @@ A new `InterfaceNameMock` struct will be generated and can be used in tests.
For more information on using `moq`, see:

- [The moq repository](https://github.com/matryer/moq)
-- The IDGenerator [interface](../pkg/client/ocm/id.go) and [mock](../pkg/client/ocm/idgenerator_moq_test.go) in this repository
+- The IDGenerator [interface](../pkg/client/ocm/id.go) and [mock](../pkg/client/ocm/idgenerator_moq.go) in this repository
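
To make the pattern concrete, a hedged sketch of using a moq-generated mock in a test; the `Generate() string` method on `IDGenerator` is an assumption here:

```go
// Each interface method gets a corresponding XxxFunc field on the mock,
// and moq records invocations so they can be asserted on afterwards.
idGenerator := &ocm.IDGeneratorMock{
	GenerateFunc: func() string {
		return "fixed-test-id" // deterministic value for the test
	},
}

id := idGenerator.Generate()                 // returns "fixed-test-id"
numCalls := len(idGenerator.GenerateCalls()) // 1 — moq tracks each call
```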

For mocking the OCM client, use [this wrapper interface](../pkg/client/ocm/client.go). If using the OCM SDK in
any component of the system, please ensure the OCM SDK logic is placed inside this mock-able
@@ -60,7 +60,7 @@ defer teardown()

See [TestKafkaPost integration test](../internal/kafka/test/integration/kafkas_test.go) as an example of this.

>NOTE: the `teardown` function is responsible for performing post-test cleanups, e.g. of service accounts that are provisioned
for the Fleetshard authentication or Kafka canary service account. Ensure that if the integration test of a new feature provisions external resources, these are properly cleaned up.

Integration tests in this service can take advantage of running in an "emulated OCM API". This
@@ -129,15 +129,15 @@ import (
func MyTestFunction(t *testing.T) {
	err := utils.NewPollerBuilder().
		// test output should go to `t.Logf` instead of `fmt.Printf`
		OutputFunction(t.Logf).
		// sets number of retries and interval between each retry
		IntervalAndRetries(10*time.Second, 10).
		OnRetry(func(attempt int, maxAttempts int) (done bool, err error) { // function executed on each retry
			// put your logic here
			return true, nil
		}).
		Build().Poll()
	if err != nil {
		// ...
	}
	// ...
@@ -148,9 +148,9 @@
### Mock KAS Fleetshard Sync
The mock [KAS Fleetshard Sync](../internal/kafka/test/mocks/kasfleetshardsync/kas-fleetshard-sync.go) is used to reconcile data plane and Kafka cluster status during integration tests. This also needs to be used even when running integration tests against a real OCM environment as the KAS Fleetshard Sync in the data plane cluster cannot
communicate to the KAS Fleet Manager during integration test runs.

The mock KAS Fleetshard Sync needs to be started at the start of a test that requires updates to a data plane or Kafka cluster status.

**Example Usage:**
```go
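// The diff elides the original example body here; a hypothetical sketch of
// the usual start/stop pattern (builder and method names are assumptions,
// not verified API):
mockKasFleetshardSyncBuilder := kasfleetshardsync.NewMockKasFleetshardSyncBuilder(h, t)
mockKasFleetshardSync := mockKasFleetshardSyncBuilder.Build()
mockKasFleetshardSync.Start()
defer mockKasFleetshardSync.Stop()
```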
@@ -233,4 +233,4 @@ Some endpoints will act differently depending on privileges of the entity calling

### Avoid setting up Kafka_SRE Identity Provider for Dataplane clusters

The KafkaSRE identity provider is automatically set up for each cluster in `cluster_provisioned` state and it is reconciled every time for all the clusters in `ready` state. This step is skipped if the cluster IDP has already been configured, i.e. the `identity_provider_id` column is set. When it is not required to set up the IDP, just make sure that the dataplane cluster under test has the `IdentityProviderID` field / `identity_provider_id` column set to a dummy value.
6 changes: 3 additions & 3 deletions docs/best-practices/error-handling.md
@@ -54,7 +54,7 @@ err := fmt.Errorf("wrap error: %w", err)

### Capture the original error when creating a new ServiceError

-When a new [ServiceError](../pkg/errors/errors.go) is created and it is caused by the another error, use the `NewWithCause` function to create the new one, but retain the original error value.
+When a new [ServiceError](../../pkg/errors/errors.go) is created and it is caused by the another error, use the `NewWithCause` function to create the new one, but retain the original error value.
Make sure the message for the `Reason` field of the new `ServiceError` does not leak any internal information.

#### Do
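
The diff elides the example body here; a minimal sketch, assuming a `NewWithCause(code, cause, reason)` style constructor as described above (the error code is illustrative):

```go
if err := someFunc(); err != nil {
	// keep the original error as the cause, but expose a Reason
	// that does not leak internal details to the caller
	return errors.NewWithCause(errors.ErrorGeneral, err, "failed to create kafka request")
}
```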
@@ -82,8 +82,8 @@ if err := somefunc(); err != nil {
There is no need to log an error or forward it to Sentry at the place where it occurs. We have central places to handle errors.
The main places where errors are handled are:

-1. The [handError](../pkg/handlers/framework.go#L42) function. All errors for HTTP requested should be handled here, and it will log the error to logs and forward the error to Sentry.
-2. The [runReconcile](../pkg/workers/reconciler.go#L87) function. All errors occur in the background workers should be handled here.
+1. The [errorHandler](../../pkg/handlers/framework.go) function. All errors for HTTP requested should be handled here, and it will log the error to logs and forward the error to Sentry.
+2. The [runReconcile](../../pkg/workers/reconciler.go) function. All errors occur in the background workers should be handled here.
3. If the error is not returned to the caller, we should use an instance of the `UHCLogger` to log the error which will make sure it is captured by Sentry as well.

#### Do
2 changes: 1 addition & 1 deletion docs/feature-flags.md
@@ -44,7 +44,7 @@ This lists the feature flags and their sub-configurations to enable/disable and
- `kafka-tls-key-file` [Required]: The path to the file containing the Kafka TLS private key (default: `'secrets/kafka-tls.key'`).
- **enable-developer-instance**: Enable the creation of one kafka developer instances per user
- **quota-type**: Sets the quota service to be used for access control when requesting Kafka instances (options: `ams` or `quota-management-list`, default: `quota-management-list`).
-> For more information on the quota service implementation, see the [quota service architecture](./architecture/quota-service-implementation) architecture documentation.
+> For more information on the quota service implementation, see the [quota service architecture](./architecture/quota-service-implementation.md) architecture documentation.
- If this is set to `quota-management-list`, quotas will be managed via the quota management list configuration.
> See [quota control](./quota-management-list-configuration.md) documentation for more information about the quota management list.
- `enable-instance-limit-control` [Required]: Enables enforcement of limits on how much Kafka instances a user can create (default: `false`).
2 changes: 1 addition & 1 deletion docs/utilities/arrays.md
@@ -13,7 +13,7 @@ Returned values are:
* The index of the value into the list of passed-in values (-1 if not found)
* The found value or the _zero value_ for the `T` type

-Some example usage can be found [here](../pkg/shared/utils/arrays/generic_array_utils_test.go)
+Some example usage can be found [here](../../pkg/shared/utils/arrays/generic_array_utils_test.go)
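
For a quick feel of the call shape, a hedged sketch; the exact generic signature is an assumption based on the description above (predicate first, values variadic):

```go
// idx == 2, val == 7 — the first element matching the predicate
idx, val := arrays.FindFirst(func(x int) bool { return x > 5 }, 1, 3, 7, 9)

// idx == -1, val == 0 — not found, so the zero value for int is returned
idx, val = arrays.FindFirst(func(x int) bool { return x > 100 }, 1, 3, 7, 9)
```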

### Filter

4 changes: 2 additions & 2 deletions pkg/db/README.md
@@ -2,15 +2,15 @@

Database migrations are handled by this package. All migrations should be created in separate files, following a standard naming convention.

-The `migrations.go` file defines an array of migrate functions that are called by the [gormigrate](https://github.com/go-gormigrate/gormigrate/v2) helper. Each migration function should perform a specific migration.
+The `migrations.go` file defines an array of migrate functions that are called by the [Gormigrate](https://github.com/go-gormigrate/gormigrate) helper. Each migration function should perform a specific migration.

## Creating a new migration

Create a migration ID based on the time using the YYYYMMDDHHMM format. Example: `August 21 2018 at 2:54pm` would be `201808211454`.

Your migration's name should be used in the file name and in the function name and should adequately represent the actions your migration is taking. If your migration is doing too much to fit in a name, you should consider creating multiple migrations.

-Create a separate file in `pkg/db/` following the naming schema in place: `<migration_id>_<migration_name>.go`. In the file, you'll create a function that returns a [gormmigrate.Migration](https://github.com/go-gormigrate/gormigrate/v2/blob/master/gormigrate.go#L37) object with `gormigrate.Migrate` and `gormigrate.Rollback` functions defined.
+Create a separate file in `pkg/db/` following the naming schema in place: `<migration_id>_<migration_name>.go`. In the file, you'll create a function that returns a [gormmigrate.Migration](https://github.com/go-gormigrate/gormigrate/blob/master/gormigrate.go) object with `gormigrate.Migrate` and `gormigrate.Rollback` functions defined.

Add the function you created in the separate file to the `migrations` list in `pkg/db/migrations.go`.
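
A minimal sketch of such a migration file, using the Gormigrate v2 API linked above; the migration name and model are illustrative:

```go
package db

import (
	gormigrate "github.com/go-gormigrate/gormigrate/v2"
	"gorm.io/gorm"
)

// addExampleTable follows the <migration_id>_<migration_name>.go convention,
// e.g. a file named 201808211454_add_example_table.go.
func addExampleTable() *gormigrate.Migration {
	// model used only by this migration
	type Example struct {
		ID   string `gorm:"primaryKey"`
		Name string
	}

	return &gormigrate.Migration{
		ID: "201808211454", // YYYYMMDDHHMM, as described above
		Migrate: func(tx *gorm.DB) error {
			return tx.AutoMigrate(&Example{})
		},
		Rollback: func(tx *gorm.DB) error {
			return tx.Migrator().DropTable(&Example{})
		},
	}
}
```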

