- requirements:
  - define the following env vars:
    - COS_BASE_PATH → base URL for the managed connector service control plane
    - KAS_BASE_PATH → base URL for the managed kafka service control plane

Tip: I use direnv with the following set-up:

export OCM_CONFIG=$PWD/.ocm.json
export KUBECONFIG=$PWD/.kube/config
export COS_BASE_PATH=https://cos-fleet-manager-managed-connectors-dev.rhoc-dev-153f1de160110098c1928a6c05e19444-0000.eu-de.containers.appdomain.cloud
export KAS_BASE_PATH=https://api.openshift.com
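After creating or changing the .envrc file that holds these exports, direnv has to be told to load it (standard direnv usage, nothing project-specific):

direnv allow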
  - retrieve your ocm-offline-token from https://qaprodauth.cloud.redhat.com/openshift/token using the _kafka_supporting account
  - follow the steps on that page to download and install the ocm command-line tool, then run the ocm login with the provided token
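The login typically looks like the following; the token value is a placeholder, and you can check ocm login --help for the exact options of your ocm version:

ocm login --token="<your ocm-offline-token>"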
Note: This is an example installation that consists of:
- set-up minikube
./etc/scripts/start_minikube.sh
- install images
eval $(minikube --profile cos docker-env)
./mvnw clean install -DskipTests=true -Pcontainer-build
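To double-check that the images landed in the minikube docker daemon, you can list them; the grep filter below is only an assumption about the image names produced by the build, adjust it as needed:

eval $(minikube --profile cos docker-env)
docker images | grep cos-fleetshard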
- install the service
kubectl apply -k etc/kubernetes/manifests/overlays/local --server-side --force-conflicts

If the previous command ends with an error like:

error: unable to recognize "etc/kubernetes/manifests/overlays/local": no matches for kind "IntegrationPlatform" in version "camel.apache.org/v1"

then run the following:

kubectl apply -f etc/kubernetes/manifests/overlays/local/camel-k/integration-platform.yaml --server-side --force-conflicts
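The error simply means the IntegrationPlatform custom resource definition was not registered yet when the overlay was applied. You can verify that the CRD is present before re-applying the overlay (the CRD name below is inferred from the error message):

kubectl get crd integrationplatforms.camel.apache.org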
At this point, the operators and the sync are deployed in the redhat-openshift-connectors namespace, but they are not running: the replica count is set to 0 by default because some resources still have to be configured.
Point to the cos namespace
kubectl config set-context --current --namespace=redhat-openshift-connectors
➜ kubectl get deployments -n redhat-openshift-connectors
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
camel-k-operator                   1/1     1            1           2d3h
cos-fleetshard-operator-camel      0/0     0            0           6s
cos-fleetshard-operator-debezium   0/0     0            0           5s
cos-fleetshard-sync                0/0     0            0           4s
strimzi-cluster-operator           1/1     1            1           2d3h
- configure pull secret
In order to use private images on quay, a pull secret needs to be created:
  - copy the content of rhoas-pull-docker to a local file
  - create a pull secret:
kubectl create secret generic addon-pullsecret \
    --from-file=.dockerconfigjson=${path of rhoas-pull-docker} \
    --type=kubernetes.io/dockerconfigjson
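To confirm that the secret was created in the expected namespace, a plain kubectl lookup is enough (assuming the current context still points at redhat-openshift-connectors):

kubectl get secret addon-pullsecret -n redhat-openshift-connectors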
- create cluster and configure secrets
This section expects you to have cos-tools/bin in your PATH, but you may also just run the scripts from inside the bin directory.
Note: This creates a new cluster for you on the fleet manager; remember to delete it once done.

SUFFIX=$(uuidgen | tr -d '-')
create-cluster-secret $(create-cluster "$USER-$SUFFIX" | jq -r '.id')
When you’re done, you may query for created clusters with get-clusters and delete it with delete-clusters <cluster id>.
- scale deployments
kubectl scale deployment -l "app.kubernetes.io/part-of=cos" --replicas=1
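After scaling up, the deployments labelled as part of cos should report 1/1 ready; a plain status check, assuming the current namespace is still redhat-openshift-connectors:

kubectl get deployments -l "app.kubernetes.io/part-of=cos"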
- uninstall the service
kubectl delete -k etc/kubernetes/manifests/overlays/local
By default, all the operators are configured to print INFO logs, which might not be enough to track down an issue. Follow the instructions below to enable DEBUG logs on the different operators:
- Camel K Operator
To enable DEBUG log level for the Camel K Operator, follow these steps:
  - Create a config map named "camel-k-override-config" with a property "log.level=debug" in the same namespace as the Camel K operator.
  - Restart the Camel K operator pod to load the created config map.

Remove the created config map from the namespace and restart the Camel K operator pod to restore the default behavior.
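For example, assuming the operator runs in the redhat-openshift-connectors namespace and reads the property as a plain config map key, this could look like:

kubectl create configmap camel-k-override-config \
    --from-literal=log.level=debug \
    -n redhat-openshift-connectors
kubectl rollout restart deployment camel-k-operator -n redhat-openshift-connectors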
- Strimzi Operator
To enable DEBUG log level for the Strimzi Operator, follow these steps:
  - Create a config map named "strimzi-override-config" with a property "log.level=debug" in the same namespace as the Strimzi operator.
  - Restart the Strimzi operator pod to load the created config map.

Remove the created config map from the namespace and restart the Strimzi operator pod to restore the default behavior.
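The same pattern applies here, again assuming the redhat-openshift-connectors namespace and a plain config map key:

kubectl create configmap strimzi-override-config \
    --from-literal=log.level=debug \
    -n redhat-openshift-connectors
kubectl rollout restart deployment strimzi-cluster-operator -n redhat-openshift-connectors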
You may want to use quarkus:dev for local development, or to run the service connected to a local fleet-manager. In this case, do the following (a combined example is sketched after the list):
- stop running containers
Scale to 0 the components you want to run locally. For example, if you want to run the sync:
kubectl scale deployment cos-fleetshard-sync --replicas=0
- check env vars
If you are also running the fleet-manager locally, you need to change COS_BASE_PATH accordingly:

export COS_BASE_PATH=<your_fleet_manager>
- make sure secret is correct
If you change the COS_BASE_PATH variable, you need to recreate the cluster secret:

create-cluster-secret <your-cluster-id>
- run using local profile
Use the local maven profile to run the application. It will use quarkus:dev under the hood, use an available port, set our namespace, and also import property values from kubernetes secrets and configmaps:

mvn -Dlocal
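Putting those steps together, a local session for the sync might look like this; the module directory cos-fleetshard-sync and the local fleet-manager URL are assumptions, adjust them to your setup:

# stop the in-cluster sync so the local instance can take over
kubectl scale deployment cos-fleetshard-sync --replicas=0

# point to the locally running fleet-manager and refresh the cluster secret
export COS_BASE_PATH=http://localhost:8000
create-cluster-secret <your-cluster-id>

# run the component with the local profile
cd cos-fleetshard-sync
mvn -Dlocal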
Code style
The build process automatically formats the code according to the following Eclipse code formatter configuration files:
- cos-build-tools/src/main/resources/eclipse-format.xml
- cos-build-tools/src/main/resources/eclipse.importorder
Tip: IntelliJ users can install the Adapter for Eclipse Code Formatter plugin to use the Eclipse code formatter directly from IntelliJ.
Note: Although this section expects you to use a completely new kubernetes cluster, you may also just stop …
- set-up minikube
# you may need to tune this command
./etc/scripts/start_minikube.sh
- install CRDs
# setup test resources
kubectl apply -k ./etc/kubernetes/manifests/overlays/it --server-side --force-conflicts
- run tests
./mvnw clean install
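If you only want to iterate on a single module, Maven can restrict the build to it with the -pl flag; the module name below is an example, not necessarily the exact artifact id:

./mvnw clean install -pl :cos-fleetshard-sync -am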