Instead of sending email attachments or transferring files directly, upload files from the command line to your web server via ssh and share the link.
The link prefix is generated from the uploaded file's checksum.
Hence, only people with the correct link can access it.
Comes with a few convenience features:
- Supports expiring links after a set amount of time.
- The link "just works" for non-tech-savvy people, but is still only accessible to people who possess the link.
- Does not require any custom binary to be executed on the web server.
- Optional server-side dependencies (`at`, `sha2`-related hashing tools) are readily available.
- Easily keep track of which files are currently shared.
- Clean files by index, checksum or age.
- Files can optionally be verified after upload.
- Supports aliases at upload, because sometimes `plot_with_specific_parameters.svg` is more descriptive than `plot.svg`, especially a few weeks later.
- And most importantly, of course: Have a name that can be typed with the left hand on home row only.
`asfa` uses a single ssh connection for each invocation, which is convenient if you have confirmations enabled for each ssh-agent usage (see details below).
Alternatively, private key files in OpenSSH or PEM format can be used directly.
Even though they should not be used, plain passwords are accepted as well.
A remote server that
- is accessible via ssh
- has a web server running
- has a folder, writable by your user, that is served by the web server
- (optional) has `sha2`-related hashing tools installed (`sha256sum`/`sha512sum`)
- (optional) has `at` installed to support expiring links (a quick check for both optional dependencies is sketched below)
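Something like the following can be used for that quick check (host alias and paths are illustrative):
$ ssh my-remote-site 'command -v sha256sum sha512sum at'
/usr/bin/sha256sum
/usr/bin/sha512sum
/usr/bin/at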
Note: All commands can be abbreviated:
- `p` → `push`
- `l` → `list`
- `ch` → `check`
- `cl` → `clean`
- `v` → `verify`
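For example, the following two invocations are equivalent (filename illustrative):
$ asfa push my-file.txt
$ asfa p my-file.txt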
Push (upload) a local file to the remote site and print the URL under which it is reachable.
$ asfa push my-file.txt
https://my-domain.eu/asfa/V66lLtli0Ei4hw3tNkCTXOcweBrneNjt/my-file.txt
See the example above. Because the file is identified by its hash, uploading the same file twice will generate the same link.
Push a file to the server under a different name. This is useful if you want to share a logfile or plot with a generic name.
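For a single file, simply pass the desired name via `--alias` (filename and URL are illustrative, mirroring the example above):
$ asfa push my-file.txt --alias my-very-specific-file.txt
https://my-domain.eu/asfa/V66lLtli0Ei4hw3tNkCTXOcweBrneNjt/my-very-specific-file.txt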
Note that if you specify several files to upload with their own aliases, you either need to explicitly assign the arguments or specify all aliases after the list of files:
$ asfa push my-file.txt my-file-2.txt --alias my-very-specific-file.txt my-very-specific-file-2.txt
https://my-domain.eu/asfa/V66lLtli0Ei4hw3tNkCTXOcweBrneNjt/my-very-specific-file.txt
https://my-domain.eu/asfa/HiGdwtoXcXotyhDxQxydu4zqKwFQ-9pY/my-very-specific-file-2.txt
Uploads can be automatically expired after a certain time via `--expire <delay>`.
`<delay>` can be anything from minutes to hours, days or even months.
It requires `at` to be installed and running at the remote site.
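For instance, assuming `<delay>` uses the same `<n>{min,hour,day,week,month}` format as `--newer`/`--older` described further below (filename illustrative):
$ asfa push my-file.txt --expire 30min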
List all files currently available online:
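For example:
$ asfa list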
List all files with metadata via `--details`:
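For example:
$ asfa list --details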
Check if files have already been uploaded (via hash) and print them.
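A minimal sketch, assuming `check` accepts local file paths as arguments (filename illustrative):
$ asfa check my-file.txt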
Remove a file from the remote site via its index (negative indices no longer need to be separated by `--`):
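For instance, to remove the most recently uploaded file (assuming, as in the `rename` example below, that index `-1` refers to the last entry):
$ asfa clean -1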
You can also ensure that a specific file is deleted by specifying `--file`:
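A sketch, assuming `--file` takes the path of the local file that was uploaded (here the file pushed under an alias above):
$ asfa clean --file my-file-2.txt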
Note that the file is deleted even though it was uploaded with an alias.
In case an upload gets canceled early, all files can be checked for validity via `verify`:
$ asfa verify
✓ my-very-specific-file.txt ..... Verified.
✓ my-very-specific-file-2.txt ... Verified.
Since the prefix is the checksum, the check can be performed whether the file exists locally or not.
All commands accept a `--newer`/`--older <n>{min,hour,day,week,month}` argument that can be used to narrow down the number of files.
Cleaning all files older than a month can, for example, be achieved via
$ asfa clean --older 1month
$ asfa clean --older 1M
All files uploaded within the last five minutes can be listed via
$ asfa list --newer 5min
$ asfa list --newer 5m
Uploaded files can be renamed after the fact via the `rename` command (shorthand: `mv`).
The remote file is specified either by its index (as returned from `list`) or by giving the local file whose remote copy should be renamed.
$ asfa rename -1 foobar
┌┤Renaming:├────────────────────────────────────────────────────────────────────────┐
│ my-very-specific-file-2.txt → https://breitwieser.eu/asfa/6x-SVlgRJn39wpsV/foobar │
└───────────────────────────────────────────────────────────────────────────────────┘
Install the latest release from crates.io via cargo:
$ cargo install asfa
The following AUR packages are provided:
- `asfa`: Latest stable release built from source.
- `asfa-bin`: Pre-built binaries for target `x86_64-unknown-linux-gnu`.
- `asfa-git`: Current development snapshot built from source.
Either use your favorite AUR helper or install manually:
$ cd <temporary folder>
$ curl -o PKGBUILD https://aur.archlinux.org/cgit/aur.git/plain/PKGBUILD?h=asfa-git
$ makepkg
[...]
==> Finished making: asfa-git 0.7.2.r16.g763f726-1 (Sun 07 Feb 2021 04:18:12 PM CET)
$ sudo pacman -U asfa-git-0.7.2.r16.g763f726-1-x86_64.pkg.tar.zst
Or build and install from source:
$ git clone https://github.com/obreitwi/asfa.git
$ cargo install --path asfa
Configuration resides in `~/.config/asfa/config.yaml`.
Host-specific configuration can also be split into single files residing under `~/.config/asfa/hosts/<alias>.yaml`.
System-wide configuration can be placed in `/etc/asfa` with the same folder structure.
An example config can be found in `./example-config`.
Here, we assume that your server can be reached at `https://my-domain.eu` and that the folder `/var/www/default/asfa` will be served at `https://my-domain.eu/asfa`.
A fully commented example config can be found here.
# host-specific file, e.g. ~/.config/asfa/hosts/my-remote-site.yaml
hostname: my-hostname.eu # if not specified, will be inferred from your ssh config or the filename
folder: /var/www/default/asfa
url: https://my-domain.eu/asfa
group: www-data

# global config: ~/.config/asfa/config.yaml
default_host: my-remote-site
details: true # optional, acts as if --details is given
prefix_length: 32
verify_via_hash: true
auth:
  interactive: true
  use_agent: true
# alternatively, hosts can be defined inline in config.yaml:
hosts:
  my-remote-site:
    # note: port is optional, will be inferred from ssh config and defaults to 22
    hostname: my-hostname.eu:22
    folder: /var/www/default/asfa
    url: https://my-domain.eu/asfa
    group: www-data
    auth:
      interactive: false
      use_agent: true
      private_key_file: /path/to/private/key/in/pem/format # optional
Whatever web server you are using, you have to make sure the following requirements are met:
- The user as which you upload needs to have write access to your configured `folder` (a minimal setup sketch follows after this list).
- Your web server needs to serve `folder` at `url`.
- In case you do not want your uploaded data to be world-readable, set `group` to the group of your web server.
- Make sure your web server does not serve indexes of `folder`, otherwise any visitor can see all uploaded files rather easily.
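A minimal sketch for preparing the folder on the remote host, assuming the paths and group from the example config above (user name and host alias are illustrative):
$ ssh my-remote-site
$ sudo mkdir -p /var/www/default/asfa
$ sudo chown my-user:www-data /var/www/default/asfa
$ chmod 750 /var/www/default/asfa # writable by you, readable by the web server's group, hidden from others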
Your apache config can be as simple as:
<Directory /var/www/default/asfa>
    Options None
    allow from all
</Directory>
Make sure that `Options` does not contain `Indexes`, otherwise any visitor could very easily access all uploaded files!
For nginx, make sure directory listings stay disabled for the served location:
location /asfa {
    autoindex off;
}
Since I handle my emails mostly via ssh on a remote server (shoutout to neomutt, OfflineIMAP and msmtp), I needed a quick and easy way to "attach" files to emails. As email attachments are rightfully frowned upon, I did not want to simply copy files over to the remote site to attach them. Furthermore, I often need to share generated files (such as plots or logfiles) on our group-internal Mattermost or any other form of text-based communication. Ideally, I want to do this from the folder I am already in on the terminal, and not by navigating back to it from the browser's "file open" menu…
As a small exercise for writing Rust (other than Advent of Code), I ported a small Python script I had been using for a couple of years.
For security reasons I have my `gpg-agent` (acting as `ssh-agent`) set up to confirm each usage upon connecting to remote servers, and the previous hack required three connections (and confirmations) to perform its task.
`asfa` is set up to only use one ssh connection per invocation.
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.