WIP Proxy to devconsole api #86
base: master-next
Conversation
build and deploy
Your build is in progress: https://console-openshift-console.apps.rohit13.devcluster.openshift.com/k8s/ns/openshift-console/builds. The PR will be live at https://pr-86-openshift-console.apps.rohit13.devcluster.openshift.com
cmd/bridge/main.go
Outdated
@@ -281,6 +284,13 @@ func main() {
        HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
        Endpoint:        k8sEndpoint,
    }
    srv.DevConsoleAppServiceProxyConfig = &proxy.Config{
Should we move this to in-cluster? Or are we doing this at a later time?
If you check https://pr-86-openshift-console.apps.rohit13.devcluster.openshift.com/k8s/ns/openshift-console/deploymentconfigs/pr-86/yaml, the console is started with
command:
  - /opt/bridge/bin/bridge
  - '--public-dir=/opt/bridge/static'
  - '--config=/var/console-config/console-config.yaml'
  - '--service-ca-file=/var/service-ca/service-ca.crt'
and the config looks like
kind: ConsoleConfig
apiVersion: console.openshift.io/v1beta1
auth:
  clientID: pr-86
  clientSecretFile: /var/oauth-config/clientSecret
  logoutRedirect: ""
  oauthEndpointCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
clusterInfo:
  consoleBaseAddress: https://pr-86-openshift-console.apps.rohit13.devcluster.openshift.com
  consoleBasePath: ""
  masterPublicURL: https://https://api.rohit13.devcluster.openshift.com:6443
customization:
  branding: ocp
  documentationBaseURL: https://docs.openshift.com/container-platform/4.0/
servingInfo:
  bindAddress: https://0.0.0.0:8443
  certFile: /var/serving-cert/tls.crt
  keyFile: /var/serving-cert/tls.key
So it probably uses the in-cluster config; let me put the code in the off-cluster path as well.
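As a rough illustration of that off-cluster wiring, the bridge could take the devconsole service host from a flag in the style of the existing k8s-mode-off-cluster-* options; the flag name and default below are hypothetical, not part of this PR:

```go
// Hypothetical flag for the devconsole app service host when running the
// bridge off-cluster; the name and default value are assumptions made for
// illustration, following the existing k8s-mode-off-cluster-* flag style.
fK8sModeOffClusterDevConsoleAppService := fs.String(
    "k8s-mode-off-cluster-devconsole-app-service",
    "localhost:8080",
    "Host of the devconsole app service that /api/devconsole/ proxies to when running off-cluster.",
)

// The parsed value would then back openshiftDevConsoleAppServiceHost, which the
// off-cluster proxy endpoint in the next hunk points at.
openshiftDevConsoleAppServiceHost := *fK8sModeOffClusterDevConsoleAppService
```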
cmd/bridge/main.go
Outdated
        InsecureSkipVerify: *fK8sModeOffClusterSkipVerifyTLS,
    },
    HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
    Endpoint:        &url.URL{Scheme: "http", Host: openshiftDevConsoleAppServiceHost, Path: ""},
I assume this service will be available in-cluster? If so, we should use https and the service-ca certificate, like we do for Prometheus.
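A rough sketch of what that could look like, reusing the service CA that the deployment above mounts at /var/service-ca/service-ca.crt; the surrounding wiring is an assumption, not the Prometheus proxy's exact code:

```go
// Sketch: trust the mounted service CA and switch the endpoint to https,
// mirroring the approach used for the Prometheus proxy. The CA file path comes
// from the deployment quoted above; everything else here is assumed.
serviceCA, err := ioutil.ReadFile("/var/service-ca/service-ca.crt")
if err != nil {
    log.Fatalf("failed to read service CA file: %v", err)
}
serviceCAPool := x509.NewCertPool()
if !serviceCAPool.AppendCertsFromPEM(serviceCA) {
    log.Fatal("no CA certificates could be parsed from service-ca.crt")
}

srv.DevConsoleAppServiceProxyConfig = &proxy.Config{
    TLSClientConfig: &tls.Config{RootCAs: serviceCAPool},
    HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
    Endpoint:        &url.URL{Scheme: "https", Host: openshiftDevConsoleAppServiceHost},
}
```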
Sam, right! Will do that.
Assuming the devconsole API service, which would house the topology backend, is deployed by the operator (https://github.com/redhat-developer/devconsole-operator/pull/195/files), this change exposes a console API endpoint,
/api/devconsole/
which proxies to that REST service with authorization headers. The Authorization header contains the user's OpenShift token.
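A minimal sketch of how the handler side of that might look, following the bridge's usual pattern of an authenticated handler that injects the user's token before delegating to the proxy; the handler, field, and helper names here are illustrative, not the PR's actual identifiers:

```go
// Illustrative sketch only: names below are assumptions about how the bridge
// typically wires authenticated proxy handlers, not this PR's exact code.
func (s *Server) handleDevConsoleAppServiceProxy(user *auth.User, w http.ResponseWriter, r *http.Request) {
    // Strip the console prefix so the devconsole backend sees its own paths.
    r.URL.Path = strings.TrimPrefix(r.URL.Path, "/api/devconsole")
    // Forward the logged-in user's OpenShift token; the Cookie header itself is
    // dropped by the proxy's HeaderBlacklist.
    r.Header.Set("Authorization", fmt.Sprintf("Bearer %s", user.Token))
    s.devConsoleAppServiceProxy.ServeHTTP(w, r)
}
```

From the frontend, a plain request to /api/devconsole/&lt;path&gt; within the console session would then reach the devconsole service with the user's token attached.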