Contributing encompasses repository-specific requirements.
To review the ODH requirements, please refer to the dev setup documentation.
Before beginning development on an issue, please refer to our Definition of Ready.
Development for the "frontend" only can target a backend service running on an OpenShift cluster. This method requires you to first log in to the OpenShift cluster, and it is the recommended approach unless you are developing backend changes.
```bash
cd frontend
oc login ...
npm run start:dev:ext
```
Development for both "frontend" and "backend" can be done while running:
```bash
npm run dev
```
However, the recommended flow for development is to have two sessions, one for the "frontend":
```bash
cd frontend
npm run start:dev
```
And one for the "backend":
```bash
cd backend
npm run start:dev
```
Once you have either method running, you can open the dashboard locally at http://localhost:4010. The dev server will reload automatically when you make changes.
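If you want a quick sanity check from the command line before opening a browser, something like the following works (a generic check, not part of this repo's scripts):

```bash
# Expect an HTTP response once the dev server has finished starting
curl -I http://localhost:4010
```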
If running a local backend, some requests from the frontend need to reach services running on the cluster for which no external routes are exposed. This can be achieved using `oc port-forward`. Run the following command in a separate terminal to start the port-forwarding processes. Note that this limits developers to working within a single namespace and must be restarted if switching to a new namespace.
```bash
NAMESPACE=my-example make port-forward
```
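If you only need to forward a single service by hand instead of using the make target, plain `oc port-forward` works as well; the service name and ports below are placeholders, not values from this repository:

```bash
# Forward local port 8443 to port 8443 of a service in the target namespace
oc port-forward -n my-example svc/my-backend-service 8443:8443
```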
To give your dev environment access to the ODH configuration, log in to the OpenShift cluster and set the project to the location of the ODH installation:
```bash
oc login https://api.my-openshift-cluster.com:6443 -u <username> -p <password>
```
or log in using the makefile and `.env.local` settings:
```bash
OC_URL=https://specify.in.env:6443
OC_PROJECT=my-project
OC_USER=kubeadmin
OC_PASSWORD=my-password
```

```bash
make login
```
or
```bash
npm run make:login
```
Note: You'll need to reauthenticate using one of the above steps and restart `backend` each day.
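A quick way to check whether your session has expired (a general `oc` usage tip, not something this repository prescribes):

```bash
# Prints your user name while the login token is still valid; errors once it has expired
oc whoami
```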
See frontend testing guidelines for more information.
Jest unit tests cover all utility and hook functions.
```bash
npm run test:unit
```
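If you only want to run the tests for the file you are working on, npm can forward extra arguments to the underlying Jest call; whether the script accepts them this way, and the path shown, are assumptions for illustration:

```bash
# Everything after -- is passed to Jest as a test path pattern
npm run test:unit -- src/utilities/myUtil.test.ts
```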
Cypress tests use a production instance of the dashboard frontend to test the full application.
```bash
cd ./frontend

# Build and start the server
npm run cypress:server:build
npm run cypress:server

# Run cypress in a separate terminal
npm run cypress:run:mock
```
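To iterate on a single spec rather than the whole suite, Cypress supports a `--spec` filter; whether the npm script forwards it unchanged, and the spec path itself, are assumptions here:

```bash
# Arguments after -- are forwarded to the underlying cypress run command
npm run cypress:run:mock -- --spec "**/myFeature.cy.ts"
```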
```bash
cd ./frontend && npm run test:lint
```
You can apply lint auto-fixes with:

```bash
npm run test:fix
```
The CI will run the command `npm run test`, which will run tests for both backend and frontend.
The current build leverages `dotenv`, or `.env*`, files to apply environment build configuration.
dotenv files applied to the root of this project:

- `.env`, basic settings, utilized by both "frontend" and "backend"
- `.env.local`, gitignored settings, utilized by both "frontend" and "backend"
- `.env.development`, utilized by both "frontend" and "backend". Its use can be seen with the NPM script `$ npm run dev`
- `.env.development.local`, utilized by both "frontend" and "backend". Its use can be seen with the NPM script `$ npm run dev`
- `.env.production`, primarily used by the "frontend", minimally used by the "backend". Its use can be seen with the NPM script `$ npm run start`
- `.env.production.local`, primarily used by the "frontend", minimally used by the "backend". Its use can be seen with the NPM script `$ npm run start`
- `.env.test`, primarily used by the "frontend", minimally used by the "backend" during testing
- `.env.test.local`, primarily used by the "frontend", minimally used by the "backend" during testing
There are build processes in place that leverage the `.env*.local` files; these files are listed in our `.gitignore` in order to avoid build conflicts. They should remain ignored and should not be added to the repository.
The dotenv files have access to default settings grouped by facet: frontend, backend, build...
For testing purposes, we recommend deploying a new version of the dashboard in your cluster following the steps below.
- Make sure you have the `oc` command line tool installed and configured to access your cluster (see the quick checks after this list).
- Make sure you have the Open Data Hub Operator installed in your cluster.
- Remove the `dashboard` component from your `KfDef` CR if already deployed.
- You can remove previous dashboard deployments by running `make undeploy` or `npm run make:undeploy` in the root of this repository.
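The commands below are one way to verify the first two prerequisites; they are generic `oc` calls and assume the `KfDef` CRD has been installed by the operator, not steps mandated by this guide:

```bash
# Confirm the oc client is available and you are logged in to the intended cluster
oc version --client
oc whoami --show-server

# List KfDef resources across namespaces to see where ODH is installed
oc get kfdef --all-namespaces
```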
We use `IMAGE_REPOSITORY` as the environment variable to specify the image to use for the dashboard. You can set it in the `.env.local` file in the root of this repository. This environment variable is used in the `Makefile` to build and deploy the dashboard image, and can be set to a new image tag to build or to a pre-built image to deploy.
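For example, a `.env.local` entry might look like the following; the registry, organization, and tag are illustrative placeholders rather than defaults from this repository:

```bash
# Image that make build will produce and make deploy will roll out
IMAGE_REPOSITORY=quay.io/my-org/odh-dashboard:my-feature-tag
```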
To deploy a new image, you can either build it locally or use the one built by the CI.
You can build your image by running `make build` or `npm run make:build` in the root of this repository.
By default, we use podman as the container tool, but you can change it by:

- setting the `CONTAINER_BUILDER` environment variable to `docker` (see the sketch after this list)
- passing it as an environment override when using `make build -e CONTAINER_BUILDER=docker`
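In practice, the first option could be an exported shell variable or an `.env.local` entry; exactly which the `Makefile` reads is not spelled out here, so treat this as a sketch. The second option is the override already shown above:

```bash
# Option 1: set the variable in the environment before building
export CONTAINER_BUILDER=docker
make build

# Option 2: pass it as a one-off override on the make command line
make build -e CONTAINER_BUILDER=docker
```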
After building the image, you need to push it to a container registry accessible by your cluster. You can do that by running `make push` or `npm run make:push` in the root of this repository.
All pull requests will have an associated `pr-<PULL REQUEST NUMBER>` image built and pushed to quay.io for use in testing and verifying code changes as part of the PR code review. Any updates to the PR code will automatically trigger a new PR image build, replacing the previous hash that was referenced by `pr-<PULL REQUEST NUMBER>`.
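To test a PR image instead of a locally built one, you can point `IMAGE_REPOSITORY` at that tag; the quay.io organization and repository name below are assumptions, so substitute the location the CI actually pushes to:

```bash
# Deploy the image built for a given pull request (keep the tag published by CI)
IMAGE_REPOSITORY=quay.io/my-org/odh-dashboard:pr-<PULL REQUEST NUMBER>
```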
To deploy your image, you just need to run the following command in the root of this repository:

```bash
make deploy
```

or

```bash
npm run make:deploy
```

This will deploy all the resources located in the `manifests` folder alongside the image you selected in the previous step.
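Once the deploy finishes, a generic way to confirm the rollout (the namespace placeholder is an assumption about your install, not a value from this guide) is:

```bash
# Watch the dashboard pods come up in the namespace where ODH is installed
oc get pods -n <odh-namespace> -w
```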
Once the elements defined in the Definition of Done are complete, the feature, bug or story being developed will be considered ready for release.