Improving Self-Hosting and Removing 3rd Party dependencies. #4465
base: main
Conversation
@Podginator is attempting to deploy a commit to the omnivore Team on Vercel. A member of the Team first needs to authorize it.
This is awesome, it looks like it's taking shape! I might try it out this weekend. Are there contributions from the community that you can think of that would be helpful for you?
```typescript
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
import type { Readable } from 'stream'

// While this is listed as S3, for self hosting we will use MinIO, which is
```
Can we use Cloudflare R2 as well for self hosting? What was the decision behind using MinIO?
Asking because R2 is also S3 compatible.
I'm not actually familiar with R2 - but anything that is S3 Compatible should work. Let me take a look later to see whether or not the Storage Client I built works with it.
MinIO was chosen because it can be self-hosted along with the rest of the application. There is a docker image, and it can all run on the same server without relying on anything external.
I'm trying to ensure everything here can be run self-contained without any need for external services.
That said, as with some of the email changes, I am looking into ways to simplify parts of it too, and having some external services is ok with me.
To find suitable services, I recommend consulting r/self-hosted.
Love the work so far.
S3 is a nice idea, provides various options, including self hosted ones.
How about local storage? This would reduce the required dependencies by one.
Oh wow, I didn't know MinIO can be self-hosted! That sounds like a good idea.
> S3 is a nice idea, provides various options, including self hosted ones.
> How about local storage? This would reduce the required dependencies by one.

The uploads are done via signed URLs, so while local storage would be feasible, it'd require a bit more development work.
Does it mean I would be able to deploy open-source Omnivore to Vercel, and use this great app even after their shutdown? 🙏
If this gets worked out I'll add a template for easy self-hosting with Coolify.
Are you guys planning to add a docker container to self-host Omnivore?
I also face this issue on my Linux VPS.
There should be no need to change the code. I have recently updated the guide to include the IMAGE_PROXY_URL and CLIENT_URL parameters. Did you make sure that these were also changed, as well as the ones in the docker-compose file for the web? These are the things that write the redirect URL, and so may be causing issues. I run mine on a Linux server, so that shouldn't be causing any issues.
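For illustration, those overrides might be expressed like this in a compose file. The service names and domain are placeholders; only the two variable names come from the guide mentioned above.

```yaml
# Sketch only: service names and domain are assumptions, not from this PR
services:
  api:
    environment:
      - CLIENT_URL=https://read.mydomain.com
      - IMAGE_PROXY_URL=https://read.mydomain.com/images
```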
I managed to get it running with a TLD after rebuilding all the images, but I still face an issue when pointing to https://read.mydomain.com/api, where I get the following error:
I made an NGINX custom location with the following parameters:
Is it because you've pointed it to https but your nginx config specifies http, as per your screenshot?
self-hosting/GUIDE.md (Outdated)
```nginx
server {
    listen 80;
    return 301 https://$host$request_uri
```
Could you sync this example .conf with your updated version in nginx/nginx.conf? For example, you added a semicolon on this line to fix a bug but the readme still has the bug present.
There are quite a few changes. It's probably easier to just link to the file itself vs trying to keep this up to date.
With the nginx config I point via http to the specific API docker container over https. If I change it to http I get no connection at all. At the moment my login and the content fetch are working, but the content has no images, and saving a URL via read.mydomain.com/api/save?={url} is not working. So I assume that the connections to the image-proxy and the API container are not working.
I'll preface this by saying that I changed the NGINX to run with port 80 rather than SSL. The `Cannot GET /api/` makes sense: there is no route at /api, and the fact that you're getting that Express error shows the API is indeed working. Going there myself I get the same response. When changing the URL, are you including /api in the places where you give the API address? That would cause the URL to become /api/api/graphql rather than /api/graphql. If you want to continue using that, you can add the following in the NGINX config, which will convert it back to a single /api.
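The actual snippet did not survive the page scrape; a sketch of what such a rewrite could look like follows. The upstream name and port are assumptions, not values from this PR.

```nginx
location /api/ {
    # Collapse an accidentally doubled prefix,
    # e.g. /api/api/graphql -> /api/graphql (upstream name/port are guesses)
    rewrite ^/api/api/(.*)$ /api/$1 break;
    proxy_pass http://api:4000;
}
```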
As for imageproxy, I'm not sure what's happening there. For me this just worked off the bat, but I did have to include /images in the IMAGE_PROXY_URL environment variable. If you are using subdomains instead, you can modify the nginx config to include this; in the meantime I would just use the path-based setup. Feel free to reach out on Discord too, and we can try further debugging there.
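For reference, a path-based proxy for the image proxy might look like the following. The container name and port here are guesses for illustration, not values from this PR.

```nginx
location /images/ {
    # Forward image requests to the imageproxy container (name/port assumed)
    proxy_pass http://imageproxy:8080;
}
```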
Thanks for the tip with the environment variable, images work now :) On the API issue: I am trying to use a browser extension called "Omnivore List Popup". The standard API URL in its settings is https://api-prod.omnivore.app/api/graphql. I tried using https://read.mydomain.com/api/graphql, which is not working. Any idea why?
Still not working. I should say that I didn't have the /api/api issue you mentioned in the last comment.
I've figured out the extension issue. The extension's manifest contains a list of hosts it is allowed to make cross-site calls to; when you change the URL to your own, the call fails with a CORS error. To fix this you can do one of the following. Either download the source code and add your URL to the array in the manifest file here: https://github.com/herrherrmann/omnivore-list-popup/blob/27d20f951642ccb8c0f578d9ab05681878470af0/src/manifest.chrome.json#L18. Alternatively, download the extension using something like https://chromewebstore.google.com/detail/chrome-extension-source-v/jifpbeccnghkjeaalbbjmodiffmgedin and modify the manifest yourself. After installing the unpacked extension your contents will load. As for the other problem, I think I have an idea of how to fix it, but I am unable to test right now. In your NGINX config add the following:
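The config block itself was lost in the scrape; per the follow-up, the idea is to route /api/save to the frontend rather than the backend. A hedged sketch, with service names and ports assumed rather than taken from this PR:

```nginx
# /api/save is served by the web frontend, not the API backend
# (service name and port are guesses)
location /api/save {
    proxy_pass http://web:3000;
}
```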
Then reload. I think the problem might be that this is a route for the frontend application, but it's being directed to the backend. You can see we had a similar problem in the past here:
Give that a try and let me know. If it's solved with that I will add it to the official one. |
I couldn't try the extension at the moment, but the solution with the nginx location worked. Saving by URL is now working properly!
It has been confirmed that the Android app's clipping function connects only to the official Omnivore URL, even if a self-hosted URL is configured. Search for "ApolloClient.Builder" in the app code, then modify those call sites to use the self-hosted setting, as networker.kt does (e.g. `authenticatedApolloClient() = ApolloClient.Builder().serverUrl(serverUrl)...`). Rebuild and package the APK; the Android app will then be able to add and share links with the self-hosted service.
Hi everyone, very happy to see the level of engagement here! Unfortunately this is all quite technical, so I was wondering if there is a readme or roadmap somewhere that lists the current feasibility of self-hosting, gotchas, maturity, etc., as well as how to import the data we exported from the original server. I surely can't be the only one who has the skills to run a Docker-ready system but lacks the knowledge to follow this (very active) PR closely. Thank you all very much!
Your best bet is probably to join us in the Omnivore discord, specifically the #self-hosting channel. We have some guides there.
No description provided.