Considering making one of our first features a web server and a locally-hosted blogging platform.
The intention would be to join with others in sharing each other's content, to ensure uptime. To do this, a "trusted ring architecture" would be established.
A simple uptime assurance strategy that would require trusting a peer with privileged information (not ideal):
DNS can serve as a useful form of trusted centralization for integrating with existing web technology: it lets visitors connect to the self-hosted IP published in the A record.
Network nodes can periodically check the uptime of other nodes in their network so they can "take over" hosting their content: check the hostname, hash it, then pull the content for that CNAME from the IP currently in the A record.
This relies on sharing your dynamic DNS token with trusted network nodes so they can point your A record at their server if yours goes down.
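For illustration, here's a rough sketch of what that takeover request could look like, assuming a hypothetical dynamic DNS provider that accepts updates over HTTPS with the hostname, new IP, and token as query parameters (the endpoint and parameter names are placeholders, not any real provider's API):

```rust
use reqwest::blocking::Client; // requires reqwest's "blocking" feature

/// Point a peer's A record at our own public IP using their shared
/// dynamic DNS token. Endpoint and parameters are hypothetical.
fn take_over_a_record(
    dyndns_endpoint: &str, // e.g. "https://dyndns.example.com/update" (placeholder)
    token: &str,           // the peer's shared dynamic DNS secret
    hostname: &str,        // the peer's hostname whose A record we rewrite
    our_ip: &str,          // this node's public IP address
) -> Result<(), Box<dyn std::error::Error>> {
    let response = Client::new()
        .get(dyndns_endpoint)
        .query(&[("hostname", hostname), ("myip", our_ip), ("token", token)])
        .send()?;

    if response.status().is_success() {
        Ok(())
    } else {
        Err(format!("dynamic DNS update failed: {}", response.status()).into())
    }
}
```

The obvious catch, as noted above, is that the peer holds your token and could repoint the record at any time, not just during an outage.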
Uptime can be checked in real time by keeping a persistent socket connection open to the other node (say, with ZeroMQ) as a health check on the "trusted ring".
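As a rough sketch of that health check, assuming a simple ZeroMQ REQ/REP ping with a receive timeout (endpoint, message format, and timeout are placeholders):

```rust
use std::time::Duration;

/// Ping a ring peer's health-check socket over ZeroMQ REQ/REP.
/// Returns true only if the peer replies "pong" within the timeout.
fn peer_is_up(peer_endpoint: &str, timeout: Duration) -> bool {
    let ctx = zmq::Context::new();
    let socket = match ctx.socket(zmq::REQ) {
        Ok(s) => s,
        Err(_) => return false,
    };
    // Fail fast instead of blocking forever if the peer is unreachable.
    let _ = socket.set_sndtimeo(timeout.as_millis() as i32);
    let _ = socket.set_rcvtimeo(timeout.as_millis() as i32);

    if socket.connect(peer_endpoint).is_err() || socket.send("ping", 0).is_err() {
        return false;
    }
    matches!(socket.recv_string(0), Ok(Ok(reply)) if reply == "pong")
}

fn main() {
    // Placeholder address for a ring peer's health-check socket.
    let up = peer_is_up("tcp://peer.example.net:5555", Duration::from_secs(2));
    println!("peer up: {up}");
}
```

A peer that misses a few consecutive pings could then trigger the A-record takeover sketched above.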
A weakness of this approach is that the peer network node needs to 1) be trusted with serving the fuzzr frontend, and 2) be able to take over the A record by sending a request to the DNS provider's dynamic DNS endpoint with your dynamic DNS secret.
The fuzzr frontend needs to pull content from a CNAME that corresponds to the hostname, and this CNAME is checked against the hash of the requested content
To keep the ring secure, trust could be anchored in a TXT record that isn't updated via dynamic DNS (assuming a TXT record can't be rewritten like an A record using the dynamic DNS secret key at the domain's DNS provider...).
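The veracity check itself could be as simple as hashing the pulled content and comparing it to the hash published somewhere the peer can't rewrite (e.g., that TXT record). A minimal sketch, assuming SHA-256 and hex encoding, neither of which is settled:

```rust
use sha2::{Digest, Sha256};

/// Verify that fetched content matches the hash published out-of-band
/// (for example, in a TXT record the peer cannot rewrite via dynamic DNS).
fn content_matches(expected_hex: &str, content: &[u8]) -> bool {
    let digest = Sha256::digest(content);
    hex::encode(digest) == expected_hex.to_lowercase()
}

fn main() {
    let content = b"<fuzzr page content>"; // placeholder content
    // In practice `expected` comes from the published record; here we just
    // demonstrate a matching hash by computing it locally.
    let expected = hex::encode(Sha256::digest(content));
    println!("verified: {}", content_matches(&expected, content));
}
```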
Native desktop clients can browse to any ring-hosted site without needing to deal with DNS shenanigans, and can also verify content hashes before rendering.
Ideally content isn't rendered as actual HTML text, since that could result in XSS shenanigans.
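If ring content ever does end up in a browser-rendered page, a conservative fallback is to escape it so it displays as plain text instead of being interpreted as markup. A hypothetical helper (not existing fuzzr code):

```rust
/// Escape HTML-special characters so peer-provided content is shown as
/// plain text rather than interpreted as markup (a basic XSS guard).
fn escape_html(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for c in input.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#x27;"),
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    assert_eq!(
        escape_html("<script>alert(1)</script>"),
        "&lt;script&gt;alert(1)&lt;/script&gt;"
    );
}
```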
0.1
Warp server
Instructions for router static config
Iced WASM frontend
CRUD text content for pages
0.1.1
Add images to pages
List images to add to pages
0.2
Dynamic DNS
Veracity checking
Uptime guarantees
Another problem is keeping the latest version of the site's WASM binary available, but that's not so bad, since it can just be uploaded to peers to deliver... ugh, lots of trust issues all around with this approach, but the binary can then be checked against a hash or signature before being executed by the browser (see the signature-check sketch at the end of this writeup).
Create pages
Link to other pages
List other pages to link on the editor
Page list (for, say, a list of blog articles)
Split pages into rows and columns (let's not use panels for now due to lack of serialization capability)
Link to other sites in the ring
Anyway, comments and thoughts on this idea are welcome. This approach might not require IPFS, remarkably enough. Just ZeroMQ, a Warp server, dynamic DNS requests on the backend, and a Rust-native crypto system capable of hashing, signatures, and encryption, some of which will need to happen in WASM clients.
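To make the hash-or-signature check on the WASM binary concrete, here's a minimal sketch using ed25519-dalek (v2-style API) to verify the binary against the publisher's public key before it's ever served or executed; how the key and detached signature get distributed is left open, and none of these names exist in fuzzr today:

```rust
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

/// Check a downloaded WASM binary against the publisher's Ed25519 signature
/// before serving or executing it. Returns true only if the signature is valid.
fn wasm_binary_is_authentic(
    publisher_key_bytes: &[u8; 32], // publisher public key, obtained out-of-band
    signature_bytes: &[u8; 64],     // detached signature shipped with the binary
    wasm_binary: &[u8],             // the binary itself
) -> bool {
    let Ok(key) = VerifyingKey::from_bytes(publisher_key_bytes) else {
        return false;
    };
    let signature = Signature::from_bytes(signature_bytes);
    key.verify(wasm_binary, &signature).is_ok()
}
```

The same check would need a WASM-compatible implementation on the client side, which is part of why the crypto system mentioned above has to work in WASM as well.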
Honestly, I think this is asking way too much of the users and is overly complex. I would prefer the approach of a program that is dead simple and immediately usable. The idea is to empower publishers to secure their content for people to see, not to require them to understand DNS, a system we ultimately seek to obviate.
I agree, integration with the web (DNS, IP addresses, router config, HTML security issues, etc.) would be very complex for content publishers. So visitors would instead have to be encouraged to download the client. I think that's the approach to build towards first. I'll backlog this issue.