Merge pull request #450 from xchem/m2ms-1055-cset-upload-fix

Initial Computed/CompoundSet upload logic

alanbchristie authored Nov 24, 2023
2 parents 2170020 + 92c5e46 commit da7e5a7
Showing 10 changed files with 291 additions and 208 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml

```diff
@@ -42,7 +42,7 @@ repos:
 # the user has done 90% of the lint checks before the code
 # hits the server.
 - repo: https://github.com/pycqa/pylint
-  rev: v2.17.0
+  rev: v3.0.2
   hooks:
   - id: pylint
     additional_dependencies:
```
19 changes: 10 additions & 9 deletions README.md

```diff
@@ -201,29 +201,30 @@ at `/code/logs`.

 ## Database migrations
 The best approach is to spin-up the development backend (locally) using
-`docker-compose` and then shell into Django. For example,
-to make new migrations called "add_job_request_start_and_finish_times"
+`docker-compose` with the custom *migration* compose file and then shell into Django.
+For example, to make new migrations called "add_job_request_start_and_finish_times"
 for the viewer's model run the following: -

 > Before starting postgres, if you need to, remove any pre-existing local database
 (if one exists) with `rm -rf ./data/postgresl`

-    docker-compose up -d
+    docker-compose -f docker-compose-migrate.yml up -d

-# Then enter the backend container with: -
+Then from within the backend container make the migrations
+(in this case for the `viewer`)...

-    docker-compose exec backend bash
+    docker-compose -f docker-compose-migrate.yml exec backend bash

     python manage.py makemigrations viewer --name "add_job_request_start_and_finish_times"

 Exit the container and tear-down the deployment: -

-    docker-compose down
+    docker-compose -f docker-compose-migrate.yml down

-> The migrations will be written to your clone's filesystem as the clone directory
-is mapped into the container as a volume. You just need to commit the
-migrations that have been written to the local directory to Git.
+> The migrations will be written to your clone's filesystem as the project directory
+is mapped into the container as a volume at `/code`. You just need to commit the
+migrations that have been written to the corresponding migrations directory.

 ## Sentry error logging
 [Sentry] can be used to log errors in the backend container image.
```
77 changes: 77 additions & 0 deletions docker-compose-migrate.yml

```yaml
---

# You typically create .env file to populate the
# sensitive variables for the backend deployment.
# Then bring the containers up with: -
#   docker-compose -f docker-compose-migrate.yml up -d
# Then enter the backend container with: -
#   docker-compose exec backend bash
# Then run the migrations with: -
#   python manage.py makemigrations viewer --name "add_job_request_start_and_finish_times"

version: '3'

services:

  # The database
  database:
    image: postgres:12.16-alpine3.18
    container_name: database
    volumes:
    - ./data/postgresql/data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: fragalysis
      POSTGRES_DB: frag
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
    - "5432:5432"
    healthcheck:
      test: pg_isready -U postgres -d frag
      interval: 10s
      timeout: 2s
      retries: 5
      start_period: 10s

  # The stack backend
  backend:
    image: ${BE_NAMESPACE:-xchem}/fragalysis-backend:${BE_IMAGE_TAG:-latest}
    container_name: backend
    build:
      context: .
      dockerfile: Dockerfile
    command: /bin/bash /code/launch-stack.sh
    volumes:
    - ./data/logs:/code/logs/
    - ./data/media:/code/media/
    - .:/code/
    environment:
      AUTHENTICATE_UPLOAD: ${AUTHENTICATE_UPLOAD:-True}
      POSTGRESQL_USER: postgres
      # Celery tasks need to run synchronously
      CELERY_TASK_ALWAYS_EAGER: 'True'
      # Error reporting and default/root log-level
      FRAGALYSIS_BACKEND_SENTRY_DNS: ${FRAGALYSIS_BACKEND_SENTRY_DNS}
      LOGGING_FRAMEWORK_ROOT_LEVEL: ${LOGGING_FRAMEWORK_ROOT_LEVEL:-INFO}
      # Keycloak configuration
      OIDC_KEYCLOAK_REALM: ${OIDC_KEYCLOAK_REALM}
      OIDC_RP_CLIENT_ID: ${OIDC_RP_CLIENT_ID:-fragalysis-local}
      OIDC_RP_CLIENT_SECRET: ${OIDC_RP_CLIENT_SECRET}
      OIDC_AS_CLIENT_ID: ${OIDC_AS_CLIENT_ID:-account-server-api}
      OIDC_DM_CLIENT_ID: ${OIDC_DM_CLIENT_ID:-data-manager-api}
      OIDC_RENEW_ID_TOKEN_EXPIRY_MINUTES: '210'
      # Squonk configuration
      SQUONK2_VERIFY_CERTIFICATES: 'No'
      SQUONK2_UNIT_BILLING_DAY: 3
      SQUONK2_PRODUCT_FLAVOUR: BRONZE
      SQUONK2_SLUG: fs-local
      SQUONK2_ORG_OWNER: ${SQUONK2_ORG_OWNER}
      SQUONK2_ORG_OWNER_PASSWORD: ${SQUONK2_ORG_OWNER_PASSWORD}
      SQUONK2_ORG_UUID: ${SQUONK2_ORG_UUID}
      SQUONK2_UI_URL: ${SQUONK2_UI_URL}
      SQUONK2_DMAPI_URL: ${SQUONK2_DMAPI_URL}
      SQUONK2_ASAPI_URL: ${SQUONK2_ASAPI_URL}
    ports:
    - "8080:80"
    depends_on:
      database:
        condition: service_healthy
```
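Compose only starts the `backend` container once the database's healthcheck passes (`depends_on` with `condition: service_healthy`). A minimal Python sketch of the retry policy encoded above (probe every 10s, up to 5 attempts) — `probe` is a hypothetical stand-in for the `pg_isready` test, not code from this commit:

```python
import time

def wait_healthy(probe, interval=10.0, retries=5, sleep=time.sleep):
    """Return True once `probe` succeeds, probing at most `retries` times.

    Mirrors the Compose healthcheck above: the container is 'healthy' as
    soon as one probe passes, 'unhealthy' after `retries` failed probes.
    """
    for _attempt in range(retries):
        if probe():
            return True
        sleep(interval)
    return False

# A fake pg_isready that only succeeds on its third invocation.
calls = {"n": 0}
def fake_pg_isready():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_healthy(fake_pg_isready, sleep=lambda _s: None))  # → True
```

The `sleep` parameter is injected purely so the sketch (and its tests) can skip the real 10-second waits.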
4 changes: 4 additions & 0 deletions fragalysis/settings.py

```diff
@@ -354,6 +354,10 @@
 # dedicated Discourse server.
 DISCOURSE_DEV_POST_SUFFIX = os.environ.get("DISCOURSE_DEV_POST_SUFFIX", '')

+# Where all the computed set SDF files are hosted (relative to the MEDIA_ROOT).
+# Used primarily by the Computed-Set upload logic.
+COMPUTED_SET_SDF_ROOT = "computed_set_sdfs/"
+
 # An optional URL that identifies the URL to a prior stack.
 # If set, it's typically something like "https://fragalysis.diamond.ac.uk".
 # It can be blank, indicating there is no legacy service.
```
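The new setting is a path fragment relative to Django's `MEDIA_ROOT` (which the compose file bind-mounts at `/code/media`). A minimal sketch of how upload code might resolve it — the `MEDIA_ROOT` value and the `resolve_sdf_path` helper are illustrative assumptions, not code from this commit:

```python
from pathlib import PurePosixPath

# MEDIA_ROOT here is an assumption; COMPUTED_SET_SDF_ROOT matches the
# setting added in this commit.
MEDIA_ROOT = "/code/media"
COMPUTED_SET_SDF_ROOT = "computed_set_sdfs/"

def resolve_sdf_path(filename: str) -> str:
    """Hypothetical helper: absolute path of an uploaded computed-set SDF."""
    return str(PurePosixPath(MEDIA_ROOT) / COMPUTED_SET_SDF_ROOT / filename)

print(resolve_sdf_path("my_set.sdf"))  # → /code/media/computed_set_sdfs/my_set.sdf
```

Keeping the root media-relative means the same setting works wherever `MEDIA_ROOT` is mounted.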