ohsu feedback (#24)
* adds s3 bucket config instructions
* adds arborist rbac
bwalsh authored and zflamig committed Mar 18, 2019
1 parent da737b2 commit 1ff3d52
Showing 6 changed files with 67 additions and 18 deletions.
37 changes: 29 additions & 8 deletions README.md
@@ -1,7 +1,7 @@
Compose-Services
===

Docker-compose setup for experimental commons, small commons, or local development of the Gen3 stack. Production use should use [cloud-automation](https://github.com/uc-cdis/cloud-automation).

* [Introduction](#Introduction)
* [Setup](#Setup)
@@ -48,7 +48,7 @@ Database setup only has to occur the very first time you setup your local gen3 D

Configure the postgres database container to publish the db service port to the host machine by un-commenting the `ports` block under the `postgres` service in `docker-compose.yml`, then running `docker-compose up -d postgres`:
```
#
# uncomment this to make postgres available from the container host - ex:
# psql -h localhost -d fence -U fence_user
ports:
@@ -65,7 +65,7 @@ psql -h localhost -U fence_user -d fence_db
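
For reference, with the `ports` block un-commented the `postgres` service section of `docker-compose.yml` might look roughly like this (a sketch only; the image tag and environment values here are assumptions, not taken from the repository):
```
  postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=postgres
    # publish the database port to the host so psql can connect directly
    ports:
      - 5432:5432
```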
- Docker and Docker Compose

### Docker Setup
The official Docker installation page can be found [here](https://docs.docker.com/install/#supported-platforms). If you've never used Docker before, it may be helpful to read some of the Docker documentation to familiarize yourself with containers.

### Docker Compose Setup
If you are using Linux, then the official Docker installation does not come with Docker Compose. The official Docker Compose installation page can be found [here](https://docs.docker.com/compose/install/#prerequisites). You can also read an overview of what Docker Compose is [here](https://docs.docker.com/compose/overview/) if you want some extra background information. Go through the steps of installing Docker Compose for your platform, then proceed to setting up credentials.
@@ -108,11 +108,11 @@ Now that you are done with the setup, all Docker Compose features should be avai
The basic command of Docker Compose is
```
docker-compose up
```
which can be useful for debugging errors. To detach output from the containers, run
```
docker-compose up -d
```
When doing this, the logs for each service can be accessed using
```
docker logs
@@ -133,7 +133,7 @@ so it may take several minutes for the portal to finally come up at https://loca
Following the portal logs is one way to monitor its startup progress:
```
docker logs -f portal-service
```

## Dev Tips

@@ -198,7 +198,7 @@ Refer to [Setting up Users](#Setting-Up-Users) to review how to apply the change
### Generating Test Metadata

The `gen3` stack requires metadata submitted to the system to conform
to a schema defined by the system's dictionary. The `gen3` developers
use a tool to generate test data that conforms to a particular dictionary.
For example - the following commands generate data files suitable to submit
to a `gen3` stack running the default genomic dictionary at https://s3.amazonaws.com/dictionary-artifacts/datadictionary/develop/schema.json
@@ -220,3 +220,24 @@ The data dictionary the commons uses is dictated by either the `DICTIONARY_URL`
In addition to changing the `DICTIONARY_URL` or `PATH_TO_SCHEMA_DIR` field, it may also be necessary to change the `APP` environment variable in data-portal. This will only be the case if the alternate dictionary deviates too much from the default dev dictionary.

As this is a change to the Docker Compose configuration, you will need to restart the Docker Compose to apply the changes.
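
For example, pointing the stack at an alternate dictionary could look like this in `docker-compose.yml` (the service name shown is an assumption; adjust it to the services defined in your copy of the file):
```
  peregrine-service:
    environment:
      - DICTIONARY_URL=https://s3.amazonaws.com/dictionary-artifacts/datadictionary/develop/schema.json
```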

### Enabling data upload to S3

The `templates/user.yaml` file has been configured to grant `data_upload` privileges to the `[email protected]` user. Connect it to your S3 bucket by configuring access keys and a bucket name in `fence-config.yaml`.

```
289,290c289,290
< aws_access_key_id: 'your-key'
< aws_secret_access_key: 'your-key'
---
> aws_access_key_id: ''
> aws_secret_access_key: ''
296c296
< your-bucket:
---
> bucket1:
309c309
< DATA_UPLOAD_BUCKET: 'your-bucket'
---
> DATA_UPLOAD_BUCKET: 'bucket1'
```
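
Put together, the relevant `fence-config.yaml` entries might end up looking like this (a sketch: the key layout follows fence's sample config, and `CRED1` is a placeholder credential name):
```
AWS_CREDENTIALS:
  'CRED1':
    aws_access_key_id: 'your-key'
    aws_secret_access_key: 'your-key'

S3_BUCKETS:
  your-bucket:
    cred: 'CRED1'

DATA_UPLOAD_BUCKET: 'your-bucket'
```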
6 changes: 3 additions & 3 deletions scripts/fence_setup.sh
@@ -9,8 +9,8 @@ done

echo "postgres is ready"

update-ca-certificates

-fence-create sync --yaml user.yaml
+fence-create sync --yaml user.yaml --arborist http://arborist-service

rm -f /var/run/apache2/apache2.pid && /usr/sbin/apache2ctl -D FOREGROUND
2 changes: 1 addition & 1 deletion templates/fence-config.yaml
@@ -333,7 +333,7 @@ INDEXD_USERNAME: 'indexd_client'
INDEXD_PASSWORD: 'indexd_client_pass'

# url where role-based access control microservice is running
-ARBORIST: null
+ARBORIST: http://arborist-service

# //////////////////////////////////////////////////////////////////////////////////////
# CLOUD API LIBRARY (CIRRUS) CONFIGURATION
4 changes: 2 additions & 2 deletions templates/peregrine_settings.py
@@ -13,11 +13,11 @@ def load_json(file_name):
config["AUTH_ADMIN_CREDS"] = None
config["INTERNAL_AUTH"] = None

-# Signpost
+# Signpost - coordinate auth with values in indexd_setup.sh
config['SIGNPOST'] = {
'host': environ.get('SIGNPOST_HOST', 'http://indexd-service'),
'version': 'v0',
-'auth': ('gdcapi', conf_data.get( 'indexd_password', '{{indexd_password}}')),
+'auth': ('indexd_client', conf_data.get( 'indexd_password', '{{indexd_password}}')),
}
config["FAKE_AUTH"] = False
config["PSQLGRAPH"] = {
4 changes: 2 additions & 2 deletions templates/sheepdog_settings.py
@@ -13,11 +13,11 @@ def load_json(file_name):
config["AUTH_ADMIN_CREDS"] = None
config["INTERNAL_AUTH"] = None

-# Signpost
+# Signpost - coordinate auth with values in indexd_setup.sh
config['SIGNPOST'] = {
'host': environ.get('SIGNPOST_HOST', 'http://indexd-service'),
'version': 'v0',
-'auth': ('gdcapi', conf_data.get('indexd_password', '{{indexd_password}}')),
+'auth': ('indexd_client', conf_data.get('indexd_password', '{{indexd_password}}')),
}
config["FAKE_AUTH"] = False
config["PSQLGRAPH"] = {
32 changes: 30 additions & 2 deletions templates/user.yaml
@@ -7,8 +7,8 @@ users:
admin: True
#
# Give yourself permissions to DEV, QA, and jnkns (JENKINS - for gen3 qa tests)
-# programs - these wont exist until you create them
-#
+# programs - these wont exist until you create them.
+# Also, grant yourself the data_upload policy for DEV
[email protected]:
admin: True
projects:
@@ -17,13 +17,16 @@ users:
- auth_id: jenkins
privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage']
- auth_id: DEV
+ resource: /programs/DEV
privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage']
- auth_id: project1
privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage']
- auth_id: QA
privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage']
- auth_id: project1
privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage']
+ policies: ['data_upload']

#
# The integration test suite assumes this user exists - you can
# delete it if you don't want to run the test suite:
@@ -44,3 +47,28 @@ users:
privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage']
- auth_id: project1
privilege: ['create', 'read', 'update', 'delete', 'upload', 'read-storage']

# Define information used for role-based access control.
# The information in the `rbac` section is sent to arborist to populate its
# access control model.
# see https://github.com/uc-cdis/fence/blob/88327870a5c35b13154a11408548cecb60a4945c/ua.yaml#L25
rbac:
policies:
- id: 'data_upload'
description: 'upload raw data files to S3'
role_ids: ['file_uploader']
resource_paths: ['/data_file']
resources:
- name: 'data_file'
description: 'data files, stored in S3'
- name: 'programs'
subresources:
- name: 'DEV'
roles:
- id: 'file_uploader'
description: 'can upload data files'
permissions:
- id: 'file_upload'
action:
service: 'fence'
method: 'file_upload'
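
The model above ties a policy (`data_upload`) to roles and resource paths, and roles to service/method permissions. A minimal Python sketch of how such a lookup could resolve (illustrative only, not arborist's actual implementation):
```
# Illustrative sketch of an arborist-style check for the data_upload policy:
# policy -> roles + resource paths -> permissions. NOT arborist's real code.
policies = {
    "data_upload": {"role_ids": ["file_uploader"], "resource_paths": ["/data_file"]},
}
roles = {
    "file_uploader": [{"service": "fence", "method": "file_upload"}],
}
user_policies = ["data_upload"]  # granted to [email protected] in user.yaml

def allowed(service, method, resource):
    """Return True if any of the user's policies grants (service, method) on resource."""
    for policy_id in user_policies:
        policy = policies[policy_id]
        # the resource must fall under one of the policy's resource paths
        if not any(resource.startswith(path) for path in policy["resource_paths"]):
            continue
        for role_id in policy["role_ids"]:
            if any(p["service"] == service and p["method"] == method
                   for p in roles[role_id]):
                return True
    return False

print(allowed("fence", "file_upload", "/data_file"))  # True
```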
