Releases: sebadob/rauthy
v0.28.0
Breaking
Environment Variable Only Config
If you configured Rauthy via environment variables only, you might have breaking changes with this update.
If the configuration was done purely via env vars, and a proper `rauthy.cfg` (at least an empty file) was not created and mounted inside the container, the application would actually use demo values, as long as they were not overwritten by env vars manually.
To improve the out-of-the-box security, the container setup has been changed and the demo config now has a separate filename, which will only be parsed when `LOCAL_TEST=true` is passed in as an env var before app startup. Setting this value inside the usual `rauthy.cfg` has no effect.
The insecure local testing values that were set before (again, with an env-vars-only setup) can be found at https://github.com/sebadob/rauthy/blob/v0.27.3/rauthy.deploy.cfg for reference, so you can check whether you would have breaking changes.
If no `rauthy.cfg` is ever created, default values will be used, and you can safely configure the application with env vars only. If you decide to use both, env vars will keep having higher priority than values set inside the config file, just as before.
Changed header names for session CSRF and password reset tokens
This may concern you if you have built custom UI parts in front of Rauthy.
The header names for the session and password reset CSRF tokens have been changed and now contain a leading `x-`. This makes the API cleaner, since custom headers should be marked with a leading `x-`.
- `csrf-token` -> `x-csrf-token`
- `pwd-csrf-token` -> `x-pwd-csrf-token`
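If you have a custom UI in front of Rauthy, the only change needed is the header name; the token value and flow stay the same. A minimal, hypothetical sketch (the route and the way your UI stores the token are placeholders, not Rauthy specifics):

```typescript
// Hypothetical sketch: a custom frontend sending the renamed CSRF header.
// The endpoint path and the token storage are placeholders for illustration.
const csrfToken = sessionStorage.getItem('csrfToken') ?? '';

const res = await fetch('/auth/v1/some_protected_route', {
    method: 'POST',
    headers: {
        // before v0.28.0, this header was named `csrf-token`
        'x-csrf-token': csrfToken,
        'Content-Type': 'application/json',
    },
    body: JSON.stringify({}),
});
console.log(res.status);
```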
Custom Client Branding
With the migration to Svelte 5 (mentioned below), the way theming works has been changed from the ground up, in such a way that it is not possible to migrate an existing custom client branding. This means that any existing custom branding will be lost and has to be re-created with this version.
Paginated Users / Sessions
This may only concern you if you are doing direct API calls to GET users or sessions on a very big Rauthy instance in combination with server-side pagination. Previously, when you added `backwards=true`, the offset of a single page was added automatically in the backend. This is not the case anymore, to provide more flexibility with this API. You now need to add the `offset` yourself while going backwards, as in the sketch below.
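As a rough sketch of what that means for an API client (only `backwards` and `offset` come from this change; the route, the `page_size` param, and the auth header are illustrative assumptions):

```typescript
// Hedged sketch: paging backwards through users with server-side pagination.
async function fetchPreviousPage(token: string, currentOffset: number, pageSize = 50) {
    const params = new URLSearchParams({
        page_size: String(pageSize),
        backwards: 'true',
        // the backend no longer adds the page offset for you:
        offset: String(Math.max(currentOffset - pageSize, 0)),
    });
    const res = await fetch(`/auth/v1/users?${params}`, {
        headers: { Authorization: `Bearer ${token}` },
    });
    return res.json();
}
```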
Security
CVE-2025-24898
Even though the vulnerable code blocks have not been used directly, the `openssl` and `openssl-sys` dependencies have been bumped to fix CVE-2025-24898.
GHSA-67mh-4wv8-2f99
This could only have affected dev environments, not any production build, but GHSA-67mh-4wv8-2f99 has been fixed by bumping frontend dependencies.
Changes
Svelte 5 Migration
The whole UI has been migrated to Svelte 5 + TypeScript. Many parts and components have been re-written from the ground up to provide better DX and maintainability in the future.
This migration comes with a lot of changes, most of them under the hood regarding performance and efficiency. There are so many changes that it does not make much sense to list them all here, but the TL;DR is:
- The whole UI is now based on Svelte 5 + TS with improved performance.
- The DX and UX have been improved a lot.
- Accessibility has been improved by a huge margin.
- Rauthy now comes with a light and a dark mode, even for the custom client branding login site.
- We have a new logo, which makes it a lot easier to identify Rauthy in a tab overview and so on.
- The whole UI is now fully responsive and usable even down to mobile devices.
- The whole design of the UI has been changed in a way that most components and payloads can now be cached infinitely.
- The engine for server-side rendering of the static HTML content has been migrated from askama to rinja (based on askama, with lots of improvements).
- The backend now comes with caching and dynamic pre-compression of all dynamic SSR HTML content.
- The way i18n is done has been changed a lot and moved from the backend into a type-checked frontend file, to make it a bit easier to get into and to provide caching again.
- The Admin UI can now be translated as well. The i18n for common user sites and the Admin UI is split, for reduced payloads for most users. Currently, only `en` and `de` exist for the Admin UI, but these can be extended easily in the future, as soon as someone provides a PR. They are also independent, with the only requirement that a common i18n must exist before an admin i18n. (Translations for e-mails are still in the backend, of course.)
- Part of the state for the Admin UI has been moved into the URL, which makes it possible to copy & paste most links and actually end up where you were before.
NOTICE: Since the whole UI has basically been re-written, or at least almost every single line has been touched, the new UI should be considered to be in a beta state. If you have any problems with it, please open an issue.
User Pictures / Avatars
It is now possible to upload an avatar / picture for each user. This can be done via the account dashboard.
Rauthy uses the term picture to match the OIDC spec. If the `scope` during login includes `profile` and the user has a picture, the `picture` claim will be included in the `id_token` and will contain the URL where the user picture can be found.
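As a small sketch of what a client sees after such a login (standard JWT payload decoding; signature verification is omitted here, which a real client must not skip):

```typescript
// Sketch: reading the `picture` claim from an id_token payload.
function pictureUrlFromIdToken(idToken: string): string | undefined {
    const payload = idToken.split('.')[1];
    const json = atob(payload.replace(/-/g, '+').replace(/_/g, '/'));
    const claims = JSON.parse(json) as { picture?: string };
    // only present if `profile` was in the scope and the user has a picture
    return claims.picture;
}
```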
User picture URLs are "safe" to be used publicly, since they contain 2 cryptographically secure, random IDs. This makes it possible to even make them available without authentication, for ease of use. By default, a session / API key / token is required to fetch them, but you can opt out of that.
For storage options, the default is the database. This is not ideal and should only be done for small instances with maybe a few hundred users. Pictures can fill up the database pretty quickly: even though images are optimized after upload, they will end up somewhere in the range of ~25 - 40 kB each.
For single-instance deployments, you can use local file storage, while for HA deployments, you should probably use an S3 bucket.
Uploading user pictures can be disabled completely by setting `PICTURE_STORAGE_TYPE=disabled`.
The following new config variables are available:
#####################################
########## User Pictures ############
#####################################
# The storage type for user pictures.
# By default, they are saved inside the Database, which is not ideal.
# If you only have a couple hundred users, this will be fine, but
# anything larger should use an S3 bucket if available. For single
# instance deployments, file storage to local disk is available
# as well, but this must not be used with multi replica / HA
# deployments.
# Images will be reduced in size to max 192px on the longest side.
# They most often end up between 25 - 40kB in size.
#
# Available options: db file s3 disabled
# Default: db
#PICTURE_STORAGE_TYPE=db
# If `PICTURE_STORAGE_TYPE=file`, the path where pictures will be
# saved can be changed with this value.
# default: ./pictures
#PICTURE_PATH="./pictures"
# Access values for the S3 bucket if `PICTURE_STORAGE_TYPE=s3`.
# Not needed otherwise.
#PIC_S3_URL=https://s3.example.com
#PIC_S3_BUCKET=my_bucket
#PIC_S3_REGION=example
#PIC_S3_KEY=s3_key
#PIC_S3_SECRET=s3_secret
# default: true
#PIC_S3_PATH_STYLE=true
# Set the upload limit for user picture uploads in MB.
# default: 10
#PICTURE_UPLOAD_LIMIT_MB=10
# By default, user pictures can only be fetched with a valid
# session, an API Key with access to Users + Read, or with a
# valid token for this user. However, depending on where and
# how you are using Rauthy for your user management, you may
# want to make user pictures available publicly without any
# authentication.
#
# User Picture URLs are "safe" in a way that you cannot guess
# a valid URL. You will need to know the User ID + the Picture
# ID. Both values are generated cryptographically secure in the
# backend during creation. The Picture ID will also change
# with every new upload.
#
# default: false
#PICTURE_PUBLIC=false
Static HTML + prepared queries added to version control
To make it possible to build Rauthy from source in environments like FreeBSD, all pre-built static HTML files have been added to version control, even though they are built dynamically each time in the release pipelines. Additionally, all DB queries used by `sqlx` have been added to version control as well.
The reason is that the UI cannot be built in certain environments. With these files checked in, you can build from source with just `cargo build --release`, as long as Rust is available. You don't need to build the UI or have a Postgres running anymore, if you only care about building from source.
I18n - Korean
Korean has been added to the translations for all user-facing UI parts.
#670
Filter I18n UI Languages
Since it is likely that the available translations will expand in the future, and you may not need or want to show all options to your users, because you maybe only have a local / regional deployment, you can now apply a filter to the Languages that are shown i...
v0.27.3
Changes
Upstream Identity Providers
To provide additional compatibility for some upstream providers like Active Directory Federation Services, some changes have been applied to Rauthy's behavior.
The first change is that the HTTP client used for upstream logins does not force TLS v1.3 anymore, but also allows TLS v1.2. Both v1.2 and v1.3 are considered secure by current standards. This is necessary because some OSes, like Windows Server 2019, do not support TLS 1.3.
The second change concerns the way upstream providers are configured. The behavior until now was that Rauthy added the client credentials both as Basic Authentication in the headers and in the body, for maximum compatibility. However, some IdPs (like ADFS, for instance) complain about this and only expect the credentials in one place. To make this possible, there are 2 new fields for the upstream IdP configuration:
- `client_secret_basic: bool`
- `client_secret_post: bool`

These are available as switches in the Admin UI for each upstream provider. To not introduce breaking changes, all possibly existing configurations will have both options enabled, just like it has been up until now.
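For reference, the two switches correspond to the two standard OAuth2 client authentication styles (RFC 6749, section 2.3.1). A hedged sketch of what a token request looks like in each case; all URLs and values are placeholders:

```typescript
// Sketch of the two token-request styles the new switches select between.
const tokenUrl = 'https://idp.example.com/oauth2/token';
const clientId = 'my-client';
const clientSecret = 'super-secret';
const form = new URLSearchParams({
    grant_type: 'authorization_code',
    code: 'code-from-the-callback',
    redirect_uri: 'https://rauthy.example.com/callback', // placeholder
});

// client_secret_basic: credentials in the Authorization header
const basicRes = await fetch(tokenUrl, {
    method: 'POST',
    headers: { Authorization: `Basic ${btoa(`${clientId}:${clientSecret}`)}` },
    body: form,
});

// client_secret_post: credentials in the form body instead
form.set('client_id', clientId);
form.set('client_secret', clientSecret);
const postRes = await fetch(tokenUrl, { method: 'POST', body: form });
console.log(basicRes.status, postRes.status);
```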
Note
Even though this changes the request and response objects on the API, this change is NOT being handled as a breaking change. API clients are forbidden to modify upstream IdPs for security reasons, which means this change should only affect the Rauthy Admin UI.
Github as Upstream IdP
Github is special and does its own, annoying thing, which makes it harder to use as an upstream IdP. An issue has been found when someone tries to log in without a publicly visible e-mail address. In this worst-case scenario, you need to do 3 different API requests for a successful login via Github while retrieving all necessary information (an e-mail address is mandatory for Rauthy).
This version also makes it possible to log in via the Github IdP with an account that only has private e-mail addresses. A different `scope` for the login is necessary to make this possible. The template in the UI has been updated, but this will not affect existing Github IdP providers. If you are currently using Github as an upstream IdP, please change the `scope` manually from `read:user` to `user:email`.
Bugfix
- During the deletion of a custom scope that was mapped to only a client's default scopes, but not the free ones, the mapping would be skipped during the client cleanup and end up being left over after the deletion, which required a manual cleanup afterward.
#663
v0.27.2
Changes
Even though not recommended at all, it is now possible to opt out of the `refresh_token` `nbf` claim and disable it. By default, a `refresh_token` will not be valid before `access_token_lifetime - 60 seconds`, but some (bad) client implementations try to refresh `access_token`s while they are still valid for a long time. To opt out, you get a new config variable:
# By default, `refresh_token`s will have an `nbf` claim, making them valid
# at `access_token_lifetime - 60 seconds`. Any usage before this time will
# result in invalidation of not only the token itself, but also all other
# linked sessions and tokens for this user to prevent damage in case a client
# leaked the token by accident.
# However, there are bad / lazy client implementations that do not respect
# either `nbf` in the `refresh_token`, or the `exp` claim in `access_token`
# and will refresh early while the current access_token is still valid.
# This does not only waste resources and time, but also makes it possible
# to have multiple valid `access_token`s at the same time for the same
# session. You should only disable the `nbf` claim if you have a good
# reason to do so.
# If disabled, the `nbf` claim will still exist, but always set to *now*.
# default: false
DISABLE_REFRESH_TOKEN_NBF=false
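Instead of disabling the claim, a well-behaved client can simply respect it. A minimal sketch, assuming the `refresh_token` is a JWT carrying a numeric `nbf` claim in unix seconds:

```typescript
// Sketch: only use the refresh_token once its `nbf` has passed.
function refreshTokenUsable(refreshToken: string): boolean {
    const payload = refreshToken.split('.')[1];
    const claims = JSON.parse(
        atob(payload.replace(/-/g, '+').replace(/_/g, '/')),
    ) as { nbf?: number };
    const now = Math.floor(Date.now() / 1000);
    return claims.nbf === undefined || now >= claims.nbf;
}
```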
Bugfix
The Rauthy deployment could get stuck in Kubernetes when you were running an HA cluster with Postgres as your database of choice. The cache Raft re-join sometimes had an issue because of a race condition, which required a full restart of the cluster. This has been fixed in hiqlite-0.3.2 and the dependency has been bumped.
v0.27.1
Bugfix
With the big migration to Hiqlite under the hood, a bug was introduced with v0.27.0 that made it possible to end up with a `NULL` value for the password policy after an update, which would result in errors further down the road after a restart, because the policy could not be read again.
This version fixes the issue itself and checks at startup whether the database needs a fix for this issue because of an already existing `NULL` value. In this case, the default password policy will be inserted correctly at startup.
EDIT:
Please don't use this release if Postgres is your database of choice:
- With Postgres, you could not get into the `NULL` situation.
- The check for `NULL` at startup does not work with Postgres as your main DB and will cause issues.

A 0.27.2 will come soon which fixes everything for both.
v0.27.0
Breaking
Single Container Image
The different versions have been combined into a single container image. The image with the `-lite` extension does not exist anymore, and all deployments can be done with just the base image. Since Postgres was the default before, you need to change your image name if you do not use Postgres as your database: just remove the `-lite`.
Dropped `sqlx` SQLite in favor of Hiqlite
From this version on, Rauthy will not support a plain SQLite anymore. Instead, it will use Hiqlite, which uses SQLite under the hood again and is another project of mine.
Hiqlite brings lots of advantages. It will use a few more resources than a direct, plain SQLite, but only ~10-15 MB of memory for small instances. In return, you will get higher consistency and never-blocking writes to the database during high traffic. It also reduces the latency for all read statements by a huge margin compared to the solution before. Rauthy always enables the `dashboard` feature for Hiqlite, which will be available over the Hiqlite API port / server.
The biggest feature it brings, though, is the ability to run an HA cluster without any external dependencies. You can use Hiqlite on a single instance and it will "feel" the same as just a SQLite, but you can also spin up 3 or 5 nodes to get High Availability without the need for an external database. It uses the Raft algorithm to sync data while still using just a simple SQLite under the hood. The internal design of Hiqlite has been optimized a lot to provide way higher throughput than you would normally get with just a direct connection to a SQLite file. If you are interested in the internals, take a look at the hiqlite/README.md or hiqlite/ARCHITECTURE.md.
With these features, Hiqlite will always be the preferred database solution for Rauthy. You should really not spin up a dedicated Postgres instance just for Rauthy, because it would just use too many resources, which is not necessary. If you have a Postgres up and running anyway, you can still opt-in to use it.
This was a very big migration, and tens of thousands of lines of code have been changed. All tests are passing and a lot of additional checks have been included. I could not find any leftover issues or errors, but please let me know if you find something.
If you are using Rauthy with Postgres as your database, you don't need to do much. If you use SQLite, however, no worries: Rauthy can handle the migration for you after you adopt a few config variables. Even if you do the auto-migration from an existing SQLite to Hiqlite, Rauthy will keep the original SQLite file in place for additional safety, so you don't need to worry about a backup (as long as you set the config correctly, of course). The next bigger release will maybe do cleanup work once everything has proven to work fine, or you can do it manually.
New / Changed Config Variables
There are quite a few new config variables, and some old ones are gone. What you need to set for the migration is explained below.
#####################################
############## BACKUPS ###############
#####################################
# When the auto-backup task should run.
# Accepts cron syntax:
# "sec min hour day_of_month month day_of_week year"
# default: "0 30 2 * * * *"
HQL_BACKUP_CRON="0 30 2 * * * *"
# Local backups older than the configured days will be cleaned up after
# the backup cron job.
# default: 30
#HQL_BACKUP_KEEP_DAYS=30
# Backups older than the configured days will be cleaned up locally
# after each `Client::backup()` and the cron job `HQL_BACKUP_CRON`.
# default: 3
#HQL_BACKUP_KEEP_DAYS_LOCAL=3
# If you ever need to restore from a backup, the process is simple.
# 1. Have the cluster shut down. This is probably the case anyway, if
# you need to restore from a backup.
# 2. Provide the backup file name on S3 storage with the
# HQL_BACKUP_RESTORE value.
# 3. Start up the cluster again.
# 4. After the restart, make sure to remove the HQL_BACKUP_RESTORE
# env value.
#HQL_BACKUP_RESTORE=
# Access values for the S3 bucket where backups will be pushed to.
#HQL_S3_URL=https://s3.example.com
#HQL_S3_BUCKET=my_bucket
#HQL_S3_REGION=example
#HQL_S3_PATH_STYLE=true
#HQL_S3_KEY=s3_key
#HQL_S3_SECRET=s3_secret
#####################################
############# CLUSTER ###############
#####################################
# Can be set to 'k8s' to try to split off the node id from the hostname
# when Hiqlite is running as a StatefulSet inside Kubernetes.
#HQL_NODE_ID_FROM=k8s
# The node id must exist in the nodes and there must always be
# at least a node with ID 1
# Will be ignored if `HQL_NODE_ID_FROM=k8s`
HQL_NODE_ID=1
# All cluster member nodes.
# To make setting the env var easy, the values are separated by `\s`
# while nodes are separated by `\n`
# in the following format:
#
# id addr_raft addr_api
# id addr_raft addr_api
# id addr_raft addr_api
#
HQL_NODES="
1 localhost:8100 localhost:8200
"
# Sets the limit when the Raft will trigger the creation of a new
# state machine snapshot and purge all logs that are included in
# the snapshot.
# Higher values can achieve more throughput in very write heavy
# situations but will end up in more disk usage and longer
# snapshot creations / log purges.
# default: 10000
#HQL_LOGS_UNTIL_SNAPSHOT=10000
# Secrets for Raft internal authentication as well as for the API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT=SuperSecureSecret1337
HQL_SECRET_API=SuperSecureSecret1337
# You can either parse `ENC_KEYS` and `ENC_KEY_ACTIVE` from the
# environment with setting this value to `env`, or parse them from
# a file on disk with `file:path/to/enc/keys/file`
# default: env
#HQL_ENC_KEYS_FROM=env
#####################################
############ DATABASE ###############
#####################################
# Max DB connections for the Postgres pool.
# Irrelevant for Hiqlite.
# default: 20
#DATABASE_MAX_CONN=20
# If specified, the currently configured Database will be DELETED and
# OVERWRITTEN with a migration from the given database with this variable.
# Can be used to migrate between different databases.
#
# !!! USE WITH CARE !!!
#
#MIGRATE_DB_FROM=sqlite:data/rauthy.db
#MIGRATE_DB_FROM=postgresql://postgres:123SuperSafe@localhost:5432/rauthy
# Hiqlite is the default database for Rauthy.
# You can opt out and use Postgres instead by setting the proper
# `DATABASE_URL=postgresql://...` and `HIQLITE=false`
# default: true
#HIQLITE=true
# The data dir hiqlite will store raft logs and state machine data in.
# default: data
#HQL_DATA_DIR=data
# The file name of the SQLite database in the state machine folder.
# default: hiqlite.db
#HQL_FILENAME_DB=hiqlite.db
# If set to `true`, all SQL statements will be logged for debugging
# purposes.
# default: false
#HQL_LOG_STATEMENTS=false
# The size of the pooled connections for local database reads.
#
# Do not confuse this with a pool size for network databases, as it
# is much more efficient. You can't really translate between them,
# because it depends on many things, but assuming a factor of 10 is
# a good start. This means, if you needed a (read) pool size of 40
# connections for something like a postgres before, you should start
# at a `read_pool_size` of 4.
#
# Keep in mind that this pool is only used for reads and writes will
# travel through the Raft and have their own dedicated connection.
#
# default: 4
#HQL_READ_POOL_SIZE=4
# Enables immediate flush + sync to disk after each Log Store Batch.
# The situations where you would need this are very rare, and you
# should use it with care.
#
# The default is `false`, and a flush + sync will be done in 200ms
# intervals. Even if the application should crash, the OS will take
# care of flushing left-over buffers to disk and no data will get
# lost. If something worse happens, you might lose the last 200ms
# of commits (on that node, not the whole cluster). This is only
# important to know for single instance deployments. HA nodes will
# sync data from other cluster members after a restart anyway.
#
# The only situation where you might want to enable this option is
# when you are on a host that might lose power out of nowhere, and
# it has no backup battery, or when your OS / disk itself is unstable.
#
# `sync_immediate` will greatly reduce the write throughput and put
# a lot more pressure on the disk. If you have lots of writes, it
# can pretty quickly kill your SSD for instance.
#HQL_SYNC_IMMEDIATE=false
# The password for the Hiqlite dashboard as Argon2ID hash.
# '123SuperMegaSafe' in this example
#
# You only need to provide this value if you need to access the
# Hiqlite debugging dashboard for whatever reason. If no password
# hash is given, the dashboard will not be reachable.
#HQL_PASSWORD_DASHBOARD=JGFyZ29uMmlkJHY9MTkkbT0xOTQ1Nix0PTIscD0xJGQ2RlJDYTBtaS9OUnkvL1RubmZNa0EkVzJMeTQrc1dxZ0FGd0RyQjBZKy9iWjBQUlZlOTdUMURwQkk5QUoxeW1wRQ==
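As an illustration of the `HQL_NODES` format above for an actual HA setup, a hedged 3-node example in the same config style (hostnames and ports are placeholders; each node sets its own `HQL_NODE_ID`, or derives it via `HQL_NODE_ID_FROM=k8s`):

HQL_NODE_ID=1
HQL_NODES="
1 rauthy-0:8100 rauthy-0:8200
2 rauthy-1:8100 rauthy-1:8200
3 rauthy-2:8100 rauthy-2:8200
"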
Migration (Postgres)
If you use Rauthy with Postgres and want to keep doing that, the only thing you need to do is to opt-out of Hiqlite.
HIQLITE=false
Migration (SQLite)
If you use Rauthy with SQLite and want to migrate to Hiqlite, you can utilize all the above-mentioned new config variables, but the following ones are mandatory.
Backups
Backups for the internal database work in the same way as before, but bec...
v0.26.2
Bugfix
This patch reverts an unintended change to the `user:group` inside the container images.
This will fix issues with migrations from existing deployments using SQLite with manually managed volume access rights.
v0.26.0 changed the base image for the final deployment from `scratch` to `gcr.io/distroless/cc-debian12:nonroot`. The distroless image however sets a default user of `65532`, while it has always been `10001:10001` before.
The affected versions are
- 0.26.0
- 0.26.1

Starting from this release (0.26.2), the user inside the container will be the same one as before: `10001:10001`.
839724001710cb095f39ff7df6be00708a01801a
Images
Postgres: `ghcr.io/sebadob/rauthy:0.26.2`
SQLite: `ghcr.io/sebadob/rauthy:0.26.2-lite`
v0.26.1
Changes
Upstream Auth Provider Query Params
Some upstream auth providers need custom query params appended to their authorization endpoint URL.
Rauthy will now accept URLs in the auth provider config with pre-defined query params, as long as they
don't interfere with OIDC default params.
Optional JSON Log Format
To make automatic parsing of logs possible (to some extent), you now have the ability to change the logging output from
text to json with the following new config variable:
# You can change the log output format to JSON, if you set:
# `LOG_FMT=json`.
# Keep in mind, that some logs will include escaped values,
# for instance when `Text` already logs a JSON in debug level.
# Some other logs like an Event for instance will be formatted
# as Text anyway. If you need to auto-parse events, please consider
# using an API token and listening to them actively.
# default: text
#LOG_FMT=text
Bugfix
- With the relaxed requirements for password resets for new users, a bug was introduced that would prevent a user from registering a passkey-only account when doing the very first "password reset".
de2cfea
Images
Postgres: `ghcr.io/sebadob/rauthy:0.26.1`
SQLite: `ghcr.io/sebadob/rauthy:0.26.1-lite`
v0.26.0
Breaking
Deprecated API Routes Removal
The following API routes were deprecated in the last version and have now been fully removed:
- `/oidc/tokenInfo`
- `/oidc/rotateJwk`
Base Container Image Change
With this version, Rauthy switches to the rootless version of the distroless images.
If you managed your file permissions inside the container manually (for instance for a SQLite file), you may need to adapt your config. The user ID inside the container is not `10001` anymore, but `65532` instead.
Cache Config
The whole `CACHE` section in the config has been changed:
#####################################
############## CACHE ################
#####################################
# Can be set to 'k8s' to try to split off the node id from the hostname
# when Hiqlite is running as a StatefulSet inside Kubernetes.
#HQL_NODE_ID_FROM=k8s
# The node id must exist in the nodes and there must always be
# at least a node with ID 1
# Will be ignored if `HQL_NODE_ID_FROM=k8s`
HQL_NODE_ID=1
# All cluster member nodes.
# To make setting the env var easy, the values are separated by `\s`
# while nodes are separated by `\n`
# in the following format:
#
# id addr_raft addr_api
# id addr_raft addr_api
# id addr_raft addr_api
#
# 2 nodes must be separated by 2 `\n`
HQL_NODES="
1 localhost:8100 localhost:8200
"
# If set to `true`, all SQL statements will be logged for debugging
# purposes.
# default: false
#HQL_LOG_STATEMENTS=false
# If given, these keys / certificates will be used to establish
# TLS connections between nodes.
#HQL_TLS_RAFT_KEY=tls/key.pem
#HQL_TLS_RAFT_CERT=tls/cert-chain.pem
#HQL_TLS_RAFT_DANGER_TLS_NO_VERIFY=true
#HQL_TLS_API_KEY=tls/key.pem
#HQL_TLS_API_CERT=tls/cert-chain.pem
#HQL_TLS_API_DANGER_TLS_NO_VERIFY=true
# Secrets for Raft internal authentication as well as for the API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT=SuperSecureSecret1337
HQL_SECRET_API=SuperSecureSecret1337
# You can either parse `ENC_KEYS` and `ENC_KEY_ACTIVE` from the
# environment with setting this value to `env`, or parse them from
# a file on disk with `file:path/to/enc/keys/file`
# default: env
#HQL_ENC_KEYS_FROM=env
`/auth/v1/health` Response Change
The response for `/auth/v1/health` has been changed.
If you did not care about the response body, there is nothing to do for you. The body itself returns different values
now:
struct HealthResponse {
db_healthy: bool,
cache_healthy: bool,
}
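If you do parse the body, a minimal sketch of consuming the new shape (field names as in the struct above; the host is a placeholder):

```typescript
// Sketch: checking the new /auth/v1/health response fields.
interface HealthResponse {
    db_healthy: boolean;
    cache_healthy: boolean;
}

const res = await fetch('https://rauthy.example.com/auth/v1/health');
const health = (await res.json()) as HealthResponse;
if (!health.db_healthy || !health.cache_healthy) {
    throw new Error(`Rauthy reports an unhealthy component: ${JSON.stringify(health)}`);
}
```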
Changes
ZH-Hans Translations
Translations for `ZH-Hans` have been added to Rauthy. These exist in all places other than the Admin UI, just like the already existing languages.
Support for deep-linking client apps like Tauri
Up until v0.25, it was not possible to set the Allowed Origin for a client in a way that Rauthy would allow access from, for instance, inside a Tauri app. The reason is that Tauri (and most probably others) do not set an HTTP / HTTPS scheme in the `Origin` header, but something like `tauri://`.
Rauthy now has support for such situations, with adjusted validation for the Origin values and a new config variable to allow specific, additional `Origin` schemes:
# To bring support for applications using deep-linking, you can set custom URL
# schemes to be accepted when present in the `Origin` header. For instance, a
# Tauri app would set `tauri://` instead of `https://`.
#
# Provide the value as a space separated list of Strings, like for instance:
# "tauri myapp"
ADDITIONAL_ALLOWED_ORIGIN_SCHEMES="tauri myapp"
More stable health checks in HA
For HA deployments, the `/health` checks are more stable now.
The quorum is also checked, which will detect network segmentations. To achieve this and still make it possible to use the health check in situations like Kubernetes rollouts, a delay has been added, which will simply always return true after a fresh app start. This initial delay makes it possible to use the endpoint inside Kubernetes and will not prevent the scheduling of the other nodes. This solves a chicken-and-egg problem.
You usually do not need to care about it, but this value can of course be configured:
# Defines the time in seconds after which the `/health` endpoint
# includes HA quorum checks. The initial delay solves problems
# like Kubernetes StatefulSet starts that include the health
# endpoint in the scheduling routine. In these cases, the scheduler
# will not start other Pods if the first does not become healthy.
#
# This is a chicken-and-egg problem which the delay solves.
# There is usually no need to adjust this value.
#
# default: 30
#HEALTH_CHECK_DELAY_SECS=30
Migration to ruma
To send out Matrix notifications, Rauthy has been using the `matrix-sdk` up until now. This crate, however, comes with a huge list of dependencies and at the same time pushes too few updates. I had quite a few issues with it in the past, because it was blocking me from updating other dependencies.
To solve this issue, I decided to drop `matrix-sdk` in favor of `ruma`, which it is using under the hood anyway. With `ruma`, I needed to do a bit more work myself, since it's more low-level, but at the same time I was able to reduce the list of total dependencies Rauthy has by ~90 crates.
This made it possible to finally bump other dependencies and to start the internal switch from redhac to Hiqlite for caching.
IMPORTANT:
If you are using a self-hosted homeserver or anything other than the official matrix.org servers for Matrix event notifications, you must set a newly introduced config variable:
# URL of your Matrix server.
# default: https://matrix.org
#EVENT_MATRIX_SERVER_URL=https://matrix.org
Internal Migration from redhac to hiqlite
The internal cache layer has been migrated from `redhac` to Hiqlite.
A few weeks ago, I started rewriting the whole persistence layer from scratch in a separate project. `redhac` is working fine, but it has some issues I wanted to get rid of:
- its network layer is way too complicated, which makes it very hard to maintain
- there is no "sync from other nodes" functionality, which is not a problem on its own, but leads to the following point
- for security reasons, the whole cache is invalidated when a node has a temporary network issue
- it is very sensitive to even short-term network issues, and leader changes happen too often for my taste

I started the Hiqlite project some time ago to get rid of these things and to have additional features. It has been split out into its own project to make it generally usable in other contexts as well.
This first step will also make it possible to only have a single container image in the future without the need to
decide between Postgres and SQLite via the tag.
Local Development
The way the container images are built, the builder for the images, and the whole `justfile` have been changed quite a bit. This will not concern you if you are not working with the code.
The approach of wrapping and executing everything inside a container, even during local dev, became tedious to maintain, especially for different architectures, and I wanted to get rid of the maintenance burden, because it did not provide that many benefits. Postgres and Mailcrab will of course still run in containers, but the code itself for backend and frontend will be built and executed locally.
The reason I started doing all of this inside containers in the first place was to avoid needing a few additional tools installed locally to make everything work, but the high maintenance was not worth it in the end. This change reduced the size of the Rauthy builder image from 2x ~4.5 GB down to 1x ~1.9 GB, which already is a big improvement. Additionally, you don't even need to download the builder image at all when you are not creating a production build, while beforehand you always needed the builder image in any case.
To cover the necessary dev tools installation and first-time setup, I instead added a new `just` recipe called `setup`, which will do everything necessary, as long as you have the prerequisites available (which you needed before as well anyway, apart from `npm`). This has been updated in the CONTRIBUTING.md.
Bugfix
- The `refresh_token` grant type on the `/token` endpoint did not set the original `auth_time` for the `id_token`, but instead calculated it from `now()` each time.
aa6e07d
Images
Postgres: `ghcr.io/sebadob/rauthy:0.26.0`
SQLite: `ghcr.io/sebadob/rauthy:0.26.0-lite`
v0.25.0
Changes
Token Introspection
The introspection endpoint has been fixed regarding the encoding, as mentioned in the bugfixes. Additionally, authorization has been added to this endpoint. It will now make sure that the request also includes an `Authorization` header with either a valid `Bearer JwtToken` or `Basic ClientId:ClientSecret`, to prevent token scanning.
The way of authorization on this endpoint is not really standardized, so you may run into issues with your client application. If so, you can disable the authentication on this endpoint with:
# Can be set to `true` to disable authorization on `/oidc/introspect`.
# This should usually never be done, but since the auth on that endpoint is not
# really standardized, you may run into issues with your client app. If so,
# please open an issue about it.
# default: false
DANGER_DISABLE_INTROSPECT_AUTH=true
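A short sketch of a conforming introspection request after this change, with both pieces in place: Basic auth and a form-encoded body (see the bugfix below). Host, path prefix, credentials, and token are placeholders:

```typescript
// Sketch: token introspection with Basic auth and form data.
const res = await fetch('https://rauthy.example.com/auth/v1/oidc/introspect', {
    method: 'POST',
    headers: { Authorization: `Basic ${btoa('my-client:super-secret')}` },
    // URLSearchParams sends application/x-www-form-urlencoded, not JSON
    body: new URLSearchParams({ token: 'the-token-to-check' }),
});
console.log(await res.json());
```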
API Routes Normalization
In preparation for a clean v1.0.0, some older API routes have been fixed regarding their casing and naming.
The "current" (old) routes and names will be available for exactly one release and will be phased out afterward, to allow for a smooth migration, just in case someone uses these renamed routes.
- `/oidc/tokenInfo` -> `/oidc/introspect`
- `/oidc/rotateJwk` -> `/oidc/rotate_jwk`

Since I don't like `kebab-case`, most API routes are written in `snake_case`, with 2 exceptions that follow RFC namings:
- `openid-configuration`
- `web-identity`

All the `*info` routes like `userinfo` or `sessioninfo` are deliberately written as a single word, just to match other IdPs and RFCs a bit more.
There is not a single `camelCase` route anymore in the API, to avoid confusion and issues in situations where you could, for instance, mistake an uppercase `I` for a lowercase `l`. The current `camelCase` endpoints only exist for a smoother migration and will be phased out with the next bigger release.
Config Read
The previous behavior of reading in config variables was not working as intended.
Rauthy reads the `rauthy.cfg` as a file first and the environment variables afterward. This makes it possible to configure it in any way you like and even mix and match.
However, the idea was that any existing variables in the environment should overwrite config variables and therefore have the higher priority. This was exactly the other way around up until v0.24.1 and has been fixed now.
How Rauthy now parses config variables correctly:
- read `rauthy.cfg`
- read env vars
- all existing env vars will overwrite existing values from `rauthy.cfg` and therefore have the higher priority

For example, if `rauthy.cfg` contains `LOG_LEVEL=info` while the environment sets `LOG_LEVEL=debug`, Rauthy will now run with `debug`.
Bugfixes
- The token introspection endpoint was only accepting requests with `Json` data, when it should instead have accepted `Form` data.
Images
Postgres: `ghcr.io/sebadob/rauthy:0.25.0`
SQLite: `ghcr.io/sebadob/rauthy:0.25.0-lite`
v0.24.1
The last weeks were mostly spent updating the documentation and including all the new features that came to Rauthy in the last months. Some small things are still missing, but it's almost there.
Apart from that, this is an important update, because it fixes some security issues in external dependencies.
Security
Security issues in external crates have been fixed:
- moderate: `matrix-sdk-crypto`
- moderate: `openssl`
- low: `vodozemac`
Changes
`S3_DANGER_ACCEPT_INVALID_CERTS` renamed
The config var `S3_DANGER_ACCEPT_INVALID_CERTS` has been renamed to `S3_DANGER_ALLOW_INSECURE`. This is not a breaking change right now: Rauthy will accept both versions for the time being, but the deprecated value will be removed after v0.24.
S3 Compatibility
Quite a few internal dependencies have been updated to the latest versions (where it made sense).
One of them was my own cryptr. It was using the `rusty-s3` crate beforehand, which is a nice one when working with S3 storage, but it had 2 issues. One of them is that it uses pre-signed URLs. That is not a flaw in the first place, just a design decision to become network agnostic. The other one was that it signed the URL in a way that would make the request incompatible with Garage. I migrated `cryptr` to my own s3-simple, which solves these issues.
This update brings compatibility with the `garage` S3 storage for Rauthy's S3 backup feature.
Bugfixes
- Fetching the favicon (and possibly other images) was forbidden because of the new CSRF middleware from some weeks ago.
76cd728
- The UI and the backend had a difference in the input validation for `given_name` and `family_name`, which could make some buttons in the UI get stuck. This has been fixed; the validation for these 2 fields is now the same everywhere, and at least 1 character is required.
19d512a
Images
Postgres: `ghcr.io/sebadob/rauthy:0.24.1`
SQLite: `ghcr.io/sebadob/rauthy:0.24.1-lite`