[Feature] - Add 'boot_command' over VNC like Qemu builder #197
Hello @danielfdickinson, Another option, as Vultr supports iPXE booting, is to netboot Alpine with an iPXE script. This should be nicer to work with than your proposed workaround, which, while functional as you've mentioned, does entail more process in managing the image lifecycle with Alpine releases. If you would prefer to continue with this feature request, please note we will need to review our current roadmap/timelines before we can consider it. Pull requests are always welcome!
Hello @Oogy, Thank you for your response. I looked at the iPXE and netboot links you provided, and I have done netboot on a local network before. I think netboot needs more 'moving parts' than a Packer boot_command, so I will look at adding the boot_command capability myself. Hopefully I am able to get to it sooner rather than later and will have a PR for you at some point. Thank you for the suggestion; if I were starting from scratch it would more likely be a worthwhile route for me, and others may benefit from the info.
I decided to take another look at the iPXE option and found a Libvirt iPXE boot guide that showed me there were fewer required moving parts than I thought.
It wasn't clear to me, though, whether the Vultr builder supports this, which would mean I would not be able to use it. If I could specify an iPXE script to boot Alpine Linux, then I wouldn't need a boot_command at all. If it's already possible to use an iPXE script with this builder, please let me know.
Hello @danielfdickinson, I'm glad to see you've revisited the idea; iPXE is quite nice to work with IMO. It appears the description for the builder's startup-script option could be clearer. There are 2 types of scripts supported by Vultr: boot and PXE. So in the case of PXE booting, you set the builder's script option to the ID of a PXE-type startup script that contains your iPXE script. Additionally, as you are PXE booting you do not need to specify an ISO.
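To make that concrete, a PXE-booting Vultr builder configuration might look roughly like the sketch below. The option names (script_id, region_id, plan_id) are assumptions based on memory of the plugin docs and should be checked against the current documentation; all values are placeholders.

```hcl
# Hypothetical sketch only: boot a Vultr instance from a PXE-type
# startup script (which holds the iPXE script) instead of an ISO.
# Option names are assumed, not confirmed in this thread.
source "vultr" "alpine_pxe" {
  api_key   = "YOUR-VULTR-API-KEY"
  region_id = "ewr"                 # placeholder region
  plan_id   = "vc2-1c-1gb"          # placeholder plan
  script_id = "YOUR-PXE-SCRIPT-ID"  # ID of a startup script with type "pxe"
  # No iso_id / iso_url needed when PXE booting.

  ssh_username         = "root"
  snapshot_description = "alpine-from-ipxe"
}

build {
  sources = ["source.vultr.alpine_pxe"]
}
```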
@Oogy iPXE is now working for me 🎉. The description for that option was indeed what tripped me up, so updating it would help. It seems, though, that the kernel and initrd URLs cannot be HTTPS even though iPXE reports as having HTTPS support (HTTPS works for me with iPXE under libvirt, so it's probably a version or build issue, possibly because my instance uses Let's Encrypt for SSL certificates).
@danielfdickinson glad to hear it. On Monday I'll open up an issue for updating the docs as well as look into the HTTPS problem. Last I'd experimented with this I was netbooting Flatcar Linux using HTTPS URLs, so I'm pretty sure that should work. If you could share any errors or console screenshots that'd be a great help.
Would you like the netboot screenshots here, or is there a better place (like a Vultr ticket)? I'll also include the applicable iPXE scripts in the info.
@danielfdickinson here is fine 👍
When the kernel URL is https://, iPXE fails to fetch the kernel and initrd; as mentioned, it works if I change the base URL to http://. The iPXE script I'm using is along the lines of the sketch below.
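A minimal sketch of that kind of script, assuming Alpine's netboot file layout and a placeholder base URL (the real script and URLs differ):

```
#!ipxe
# Hypothetical example, not the actual script from this thread.
# base-url is a placeholder; the question here is whether it may be
# https:// or must fall back to http://.
set base-url http://ipxe-boot.example.org/alpine/netboot

dhcp
kernel ${base-url}/vmlinuz-virt ip=dhcp modloop=${base-url}/modloop-virt alpine_repo=https://dl-cdn.alpinelinux.org/alpine/v3.16/main
initrd ${base-url}/initramfs-virt
boot
```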
Hello @danielfdickinson, I've taken some time to look at this and I think the issue may be that the Common Name in your LE cert is different from the domain in the base-url. I have no proof of this as the iPXE errors are not terribly helpful and we cannot enable debug mode (as this requires a separate build of the iPXE binary), but it is the only notable difference I can see between your cert and my test, which used https://boot.netboot.xyz. The CA certs for your LE cert are cross-signed by the iPXE CA cert, so that should not be the issue. Could you perhaps try using a new LE certificate with the Common Name matching the domain in your base-url?
Hey @Oogy, Thank you for looking at this. Changing the CN didn't solve the issue, but it did get me looking at things like server logs and DNS records, and I realized that I had ipxe-boot.wildtechgarden.ca as a CNAME, and the CNAME target was not the commonName on the cert. I've switched ipxe-boot to A and AAAA records (since I did switch the CN to ipxe-boot...), and once the TTLs clear out I'll give it another go, but I think you gave me the right idea of where to look (CN vs DNS name). Will let you know.
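For reference, one way to check the CN and SAN list that a server actually presents (the hostname is a placeholder; requires OpenSSL 1.1.1+ for the -ext option):

```sh
# Print the subject (CN) and Subject Alternative Names of the
# certificate served for the given hostname.
echo | openssl s_client -connect ipxe-boot.example.org:443 \
      -servername ipxe-boot.example.org 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```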
I have confirmation that it is iPXE rejecting the connection and not the server side -- the lighttpd logs look fine. I wonder if the version of iPXE is too old and it doesn't like the cross-signed certificate (i.e. whether the version of iPXE predates LE dropping the (DigiCert?) cross-sign), but I can't test that theory directly. I found ipxe/ipxe#116, which adds support for fragmented handshakes (e.g. due to large certificate chains). Based on that, I think it is highly probable that the number of SANs is the problem, in that it causes fragmentation. I unfortunately don't have a DNS provider compatible with a DNS challenge to do a wildcard certificate, so unless the workaround described in the PR works I might be out of luck until I dedicate a host to serving the iPXE stuff (or at least keep the SAN list small).
🎆 🥳 Got it! The PR 116 for iPXE mentioned above showed me the way: I needed to use the workaround described there. I also needed to use slightly less secure lighttpd settings than ideal (but which are the current defaults, for compatibility reasons).
Although, since I've removed the higher-security cipher settings, I could probably just omit the explicit TLS cipher configuration entirely. Shall I close this?
@danielfdickinson I'd like some more details, please, as lighttpd has announced plans to change TLS defaults to be stricter in a release some time in Jan 2023. What were the client limitations? Most frequently in my experience, the limitation is the set of ciphers the client supports.
lighttpd supports Let's Encrypt bootstrap using the TLS-ALPN-01 verification challenge.
Yes. I get a failed TLS handshake from iPXE unless the less strict cipher settings are in place.
In addition, the iPXE crypto docs have a table of the cipher suites and digests that iPXE supports. The iPXE GitHub repo hasn't had a release in two years (the last was sometime in 2020), and I don't see any crypto-related changes in the repo in that period of time.
Nice. But that does not quite solve the wildcard certificate issue for me (TLS-ALPN-01 can't validate wildcards).
@danielfdickinson thank you for the details. New releases of lighttpd on or after Jan 2023 will have stricter TLS defaults, and "CipherString" will need to be manually configured in lighttpd.conf to include one or more of the ciphers that iPXE supports in order to keep working with iPXE. Also, you are correct that the TLS-ALPN-01 verification challenge is not available for validating wildcard certs.
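As a rough illustration, a lighttpd.conf fragment along these lines could keep the modern defaults while re-adding suites a stock iPXE build can negotiate. The ssl.openssl.ssl-conf-cmd directive and the specific cipher names are assumptions based on lighttpd's mod_openssl and the iPXE crypto docs, not settings confirmed in this thread.

```
# Hypothetical lighttpd.conf fragment: allow the RSA-key-exchange
# AES-CBC suites that stock iPXE builds are commonly limited to,
# in addition to the stricter defaults. Verify the names against
# your OpenSSL version and your iPXE build.
ssl.openssl.ssl-conf-cmd = (
    "MinProtocol"  => "TLSv1.2",
    "CipherString" => "HIGH:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA"
)
```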
Is your feature request related to a problem? Please describe.
The Alpine Linux ISO does not include cloud-init, which means initial setup has to be done over VNC. With the Packer Qemu builder there is a boot_command option that allows interacting with the instance over VNC in order to do 'just enough' to be able to SSH in. I have utilized this ability to create a QCOW2 image from the Alpine ISO that includes cloud-init, in a public repo.
The documentation for the boot_command capability can be found in the Packer documentation (see the 'Boot Configuration' section). The source code for the qemu builder is at: https://github.com/hashicorp/packer-plugin-qemu/tree/main
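For reference, a boot_command with the QEMU builder looks roughly like the sketch below; the ISO URL, checksum, and the Alpine console commands are illustrative placeholders rather than a tested sequence.

```hcl
packer {
  required_plugins {
    qemu = {
      source  = "github.com/hashicorp/qemu"
      version = ">= 1.0.0"
    }
  }
}

source "qemu" "alpine" {
  iso_url      = "https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-virt-3.16.0-x86_64.iso" # placeholder version
  iso_checksum = "none" # use a real checksum in practice
  memory       = 1024
  boot_wait    = "30s"

  # Keystrokes sent over VNC. The console commands below stand in for
  # whatever is needed to bring up networking, set a root password,
  # and start an SSH daemon on the live ISO; they are placeholders.
  boot_command = [
    "root<enter><wait5>",
    "ifconfig eth0 up && udhcpc -i eth0<enter><wait10>",
    "passwd<enter><wait>packer<enter><wait>packer<enter><wait>",
    "setup-sshd<enter><wait10>"
  ]

  ssh_username     = "root"
  ssh_password     = "packer"
  shutdown_command = "poweroff"
}

build {
  sources = ["source.qemu.alpine"]
}
```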
Describe the solution you'd like
A similar boot_command capability for the Vultr plugin that allows controlling the instance via VNC in order to enable SSH access (after which regular provisioning can be used).
Describe alternatives you've considered
Since the goal is automation, doing this manually as described in the Vultr docs for Alpine Linux doesn't solve the problem, and it would have to be repeated for every new release of Alpine Linux.
Another option would be the ability to upload a QCOW2 or RAW boot image rather than only an ISO (as can be done with OpenStack). AIUI the snapshots option is not just a disk image but a whole VM image, which means uploading a QCOW2 or RAW image generated using the public repo I created, above, is not currently an option with Vultr.
EDIT: I was able to upload (but have not yet tested) a RAW image generated using the repo I mentioned, from a web hosting instance I have (it would be helpful to be able to upload directly from my local machine, but that's a separate issue), so it looks like there may be a workaround for now.
EDIT #2: While it was possible to upload the image, it fails to boot an instance. So currently there is no automation-friendly workaround.
EDIT #3: Mea culpa, the workaround from the first edit works; I had an error in my Packer scripts that didn't use the snapshot properly. So there is a workaround for now. Example repository at https://gitlab.com/danielfdickinson/alpine-two-stage-packer-for-vultr
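For anyone following the same route, the second stage of such a workaround might look roughly like the sketch below with the Vultr builder. The snapshot_id, region_id, and plan_id option names are assumptions taken from memory of the plugin docs, and the values are placeholders.

```hcl
# Hypothetical sketch of the second stage: start an instance from the
# uploaded snapshot and run normal provisioners over SSH.
source "vultr" "alpine_from_snapshot" {
  api_key      = "YOUR-VULTR-API-KEY"
  snapshot_id  = "ID-OF-THE-UPLOADED-RAW-SNAPSHOT"
  region_id    = "ewr"        # placeholder region
  plan_id      = "vc2-1c-1gb" # placeholder plan
  ssh_username = "root"
}

build {
  sources = ["source.vultr.alpine_from_snapshot"]

  provisioner "shell" {
    inline = ["apk update"] # placeholder provisioning step
  }
}
```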
From what I have read of the Alpine docs, wiki, and mailing list, "cloud-init" is considered too heavy and efforts are focused on 'tiny cloud init' instead; I am also new to Alpine (and have not yet posted to the mailing list), so it seems that requesting a 'cloud-init' image be added to the Alpine releases would be a non-starter.
In addition, this would enable more distros to be prepared for use on Vultr from their main distribution ISOs.