diff --git a/README.md b/README.md index 6d21a88..4a3c9bf 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,7 @@ Compose-Services === -Docker-compose setup for experimental commons, small commons, or local development of the Gen3 stack. Production use should use [cloud-automation](https://github.com/uc-cdis/cloud-automation). +Docker-compose setup for experimental commons, small commons, or local development of the Gen3 stack. Production use should use [cloud-automation](https://github.com/uc-cdis/cloud-automation). * [Introduction](#Introduction) * [Setup](#Setup) @@ -48,7 +48,7 @@ Database setup only has to occur the very first time you setup your local gen3 D Configure the postgres database container to publish the db service port to the host machine by un-commenting the `ports` block under the `postgres` service in `docker-compose.yml`, then running `docker-compose up -d postgres`: ``` - # + # # uncomment this to make postgres available from the container host - ex: # psql -h localhost -d fence -U fence_user ports: @@ -65,7 +65,7 @@ psql -h localhost -U fence_user -d fence_db - Docker and Docker Compose ### Docker Setup -The official Docker installation page can be found [here](https://docs.docker.com/install/#supported-platforms). If you've never used Docker before, it may be helpful to read some of the Docker documentation to familiarize yourself with containers. +The official Docker installation page can be found [here](https://docs.docker.com/install/#supported-platforms). If you've never used Docker before, it may be helpful to read some of the Docker documentation to familiarize yourself with containers. ### Docker Compose Setup If you are using Linux, then the official Docker installation does not come with Docker Compose. The official Docker Compose installation page can be found [here](https://docs.docker.com/compose/install/#prerequisites). 
You can also read an overview of what Docker Compose is [here](https://docs.docker.com/compose/overview/) if you want some extra background information. Go through the steps of installing Docker Compose for your platform, then proceed to setting up credentials. @@ -108,11 +108,11 @@ Now that you are done with the setup, all Docker Compose features should be avai The basic command of Docker Compose is ``` docker-compose up -``` -which can be useful for debugging errors. To detach output from the containers, run +``` +which can be useful for debugging errors. To detach output from the containers, run ``` docker-compose up -d -``` +``` When doing this, the logs for each service can be accessed using ``` docker logs @@ -133,7 +133,7 @@ so it may take several minutes for the portal to finally come up at https://loca Following the portal logs is one way to monitor its startup progress: ``` docker logs -f portal-service -``` +``` ## Dev Tips @@ -198,7 +198,7 @@ Refer to [Setting up Users](#Setting-Up-Users) to review how to apply the change ### Generating Test Metadata The `gen3` stack requires metadata submitted to the system to conform -to a schema defined by the system's dictionary. The `gen3` developers +to a schema defined by the system's dictionary. The `gen3` developers use a tool to generate test data that conforms to a particular dictionary. For example - the following commands generate data files suitable to submit to a `gen3` stack running the default genomic dictionary at https://s3.amazonaws.com/dictionary-artifacts/datadictionary/develop/schema.json @@ -220,3 +220,24 @@ The data dictionary the commons uses is dictated by either the `DICTIONARY_URL` In addition to changing the `DICTIONARY_URL` or `PATH_TO_SCHEMA_DIR` field, it may also be necesary to change the `APP` environment variable in data-portal. This will only be the case if the alternate dictionary deviates too much from the default dev dictionary. 
As this is a change to the Docker Compose configuration, you will need to restart the Docker Compose to apply the changes. + +### Enabling data upload to S3 + +The `templates/user.yaml` file has been configured to grant data_upload privileges to the `yourlogin@gmail.com` user. Connect it to your S3 bucket by configuring access keys and bucket name in `fence-config.yaml` (the `diff`-style block below compares an edited config against the stock template, so `<` lines show the values you fill in and `>` lines show the template defaults): + +``` +289,290c289,290 +< aws_access_key_id: 'your-key' +< aws_secret_access_key: 'your-key' +--- +> aws_access_key_id: '' +> aws_secret_access_key: '' +296c296 +< your-bucket: +--- +> bucket1: +309c309 +< DATA_UPLOAD_BUCKET: 'your-bucket' +--- +> DATA_UPLOAD_BUCKET: 'bucket1' +``` diff --git a/scripts/fence_setup.sh b/scripts/fence_setup.sh index 3345dd0..d1307e5 100644 --- a/scripts/fence_setup.sh +++ b/scripts/fence_setup.sh @@ -9,8 +9,8 @@ done echo "postgres is ready" -update-ca-certificates +update-ca-certificates -fence-create sync --yaml user.yaml +fence-create sync --yaml user.yaml --arborist http://arborist-service -rm -f /var/run/apache2/apache2.pid && /usr/sbin/apache2ctl -D FOREGROUND \ No newline at end of file +rm -f /var/run/apache2/apache2.pid && /usr/sbin/apache2ctl -D FOREGROUND diff --git a/templates/fence-config.yaml b/templates/fence-config.yaml index 8b1a8c2..e36156c 100644 --- a/templates/fence-config.yaml +++ b/templates/fence-config.yaml @@ -333,7 +333,7 @@ INDEXD_USERNAME: 'indexd_client' INDEXD_PASSWORD: 'indexd_client_pass' # url where role-based access control microservice is running -ARBORIST: null +ARBORIST: http://arborist-service # ////////////////////////////////////////////////////////////////////////////////////// # CLOUD API LIBRARY (CIRRUS) CONFIGURATION diff --git a/templates/peregrine_settings.py b/templates/peregrine_settings.py index ab2fcfb..c362bd8 100644 --- a/templates/peregrine_settings.py +++ b/templates/peregrine_settings.py @@ -13,11 +13,11 @@ def load_json(file_name): config["AUTH_ADMIN_CREDS"] = None config["INTERNAL_AUTH"] = None -#
Signpost - coordinate auth with values in indexd_setup.sh config['SIGNPOST'] = { 'host': environ.get('SIGNPOST_HOST', 'http://indexd-service'), 'version': 'v0', - 'auth': ('gdcapi', conf_data.get( 'indexd_password', '{{indexd_password}}')), + 'auth': ('indexd_client', conf_data.get( 'indexd_password', '{{indexd_password}}')), } config["FAKE_AUTH"] = False config["PSQLGRAPH"] = { diff --git a/templates/sheepdog_settings.py b/templates/sheepdog_settings.py index 6b2a614..ccce334 100644 --- a/templates/sheepdog_settings.py +++ b/templates/sheepdog_settings.py @@ -13,11 +13,11 @@ def load_json(file_name): config["AUTH_ADMIN_CREDS"] = None config["INTERNAL_AUTH"] = None -# Signpost +# Signpost - coordinate auth with values in indexd_setup.sh config['SIGNPOST'] = { 'host': environ.get('SIGNPOST_HOST', 'http://indexd-service'), 'version': 'v0', - 'auth': ('gdcapi', conf_data.get('indexd_password', '{{indexd_password}}')), + 'auth': ('indexd_client', conf_data.get('indexd_password', '{{indexd_password}}')), } config["FAKE_AUTH"] = False config["PSQLGRAPH"] = { diff --git a/templates/user.yaml b/templates/user.yaml index 24d4365..e842647 100644 --- a/templates/user.yaml +++ b/templates/user.yaml @@ -7,8 +7,8 @@ users: admin: True # # Give yourself permissions to DEV, QA, and jnkns (JENKINS - for gen3 qa tests) - # programs - these wont exist until you create them - # + # programs - these won't exist until you create them. 
+ # Also, grant yourself the data_upload policy for DEV yourlogin@gmail.com: admin: True projects: @@ -17,6 +17,7 @@ users: - auth_id: jenkins privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage'] - auth_id: DEV + resource: /programs/DEV privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage'] - auth_id: project1 privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage'] @@ -24,6 +25,8 @@ users: privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage'] - auth_id: project1 privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage'] + policies: ['data_upload'] + # # The integration test suite assumes this user exists - you can # delete it if you dont want to run the test suite: @@ -44,3 +47,28 @@ users: privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage'] - auth_id: project1 privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage'] + +# Define information used for role-based access control. +# The information in the `rbac` section is sent to arborist to populate its +# access control model. +# see https://github.com/uc-cdis/fence/blob/88327870a5c35b13154a11408548cecb60a4945c/ua.yaml#L25 +rbac: + policies: + - id: 'data_upload' + description: 'upload raw data files to S3' + role_ids: ['file_uploader'] + resource_paths: ['/data_file'] + resources: + - name: 'data_file' + description: 'data files, stored in S3' + - name: 'programs' + subresources: + - name: 'DEV' + roles: + - id: 'file_uploader' + description: 'can upload data files' + permissions: + - id: 'file_upload' + action: + service: 'fence' + method: 'file_upload'
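
---

Note on the S3 upload configuration above: a sketch of what the edited `templates/fence-config.yaml` sections might look like once filled in. The surrounding key names (`AWS_CREDENTIALS`, `CRED1`, `S3_BUCKETS`) are assumed from fence's configuration template and are not shown in this patch; `your-key` and `your-bucket` are placeholders for your own credentials and bucket name:

```
# Hypothetical filled-in fragment of templates/fence-config.yaml.
# Surrounding key names are assumed from fence's config template.
AWS_CREDENTIALS:
  'CRED1':
    aws_access_key_id: 'your-key'        # lines 289-290 in the diff block above
    aws_secret_access_key: 'your-key'

S3_BUCKETS:
  your-bucket:                           # line 296: replace bucket1 with your bucket
    cred: 'CRED1'

DATA_UPLOAD_BUCKET: 'your-bucket'        # line 309
```

As with the dictionary settings, restart the Docker Compose stack after changing these values so fence picks up the new configuration.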