Restoring volume snapshots from secondary storage in new CS cluster #12254
If I need to restore all my volume snapshots to a new cluster, what is the optimal way to do that? Say I have a full backup of my secondary storage and all volumes, and I mount it read-only on the new cluster as additional secondary storage. Of course, no volume snapshots from there magically show up in the UI, and since all the filenames are UUIDs I can't just guess which is which.

Would I need to import the snapshot list from the other cluster's database in order to restore or create templates from the backup storage? I'm assuming yes, or is there a better way?

I do see the snapshot copy feature, which apparently lets me copy snapshots between zones. That looks useful. If I copy a snapshot that already exists on the destination, would it just add the record to the new zone's DB? Probably not, right? I'm already replicating the storage on the underlying file system; I just want the template list to be in sync.
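One way to attack the "UUID filenames" problem is to export a `uuid,name` list from the original cluster's database and join it against a listing of the backup mount. This is only a rough sketch: the export step (something like `SELECT uuid, name FROM cloud.snapshots WHERE removed IS NULL;` dumped to CSV) and all the UUIDs and names below are assumptions for illustration, and the schema should be checked against your CloudStack version.

```shell
# Hypothetical snapshots.csv exported from the ORIGINAL cluster's DB,
# one "uuid,name" pair per line (values here are made up).
cat > snapshots.csv <<'EOF'
1f0c9a2e-aaaa-bbbb-cccc-000000000001,web01-root-daily
1f0c9a2e-aaaa-bbbb-cccc-000000000002,db01-data-weekly
EOF

# Simulated listing of the read-only backup mount; in a real setup this
# would come from something like: find /mnt/old-secondary -name '*.qcow2'
files="1f0c9a2e-aaaa-bbbb-cccc-000000000002
1f0c9a2e-aaaa-bbbb-cccc-000000000001"

# Map each UUID-named file back to its human-readable snapshot name.
for f in $files; do
  name=$(grep "^$f," snapshots.csv | cut -d, -f2)
  echo "$f -> ${name:-UNKNOWN}"
done
```

Files with no matching row print `UNKNOWN`, which also flags anything on the backup that the old DB never knew about.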
Replies: 2 comments 1 reply
Looks like if I want to keep both sites mostly separate, I will need to do some scripting to keep the snapshot records in the DB up to date. Otherwise I'd need a full multi-region installation and connect the actual MariaDB machines with replication, etc. Doable, but that raises the odds of database issues, and I'm not sure I want all of that just for copying snapshots to a remote zone.

Update: the process of doing this is quite agonizing. A lot has to be created in the second cluster's DB in order to get any UI functionality for restoring snapshots from a remote system's storage; it even wants VM instance IDs and so on. I realize I can probably just use the qcow2 images to create new VMs, but obviously I was hoping for something a little easier, done through the UI. Perhaps the NAS backup functionality will work better for this type of thing; checking that out...
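The "scripting to keep the snapshot records up to date" idea could start with something as simple as diffing UUID exports from each cluster's DB to find which records a sync script would need to insert. A minimal sketch, assuming each cluster's snapshot UUIDs have been exported to a sorted text file (the file names and placeholder UUIDs are invented for illustration):

```shell
# Placeholder UUID exports; in a real setup each list would come from a
# query against that cluster's cloud.snapshots table.
printf '%s\n' uuid-a uuid-b uuid-c | sort > old-cluster-uuids.txt
printf '%s\n' uuid-a             | sort > new-cluster-uuids.txt

# Lines only in the old cluster's list = snapshot records missing from
# the new cluster, i.e. the ones a sync script would have to create.
comm -23 old-cluster-uuids.txt new-cluster-uuids.txt
```

`comm` requires sorted input, hence the `sort` on each export; the inverse check (`comm -13`) would show records that exist only on the new cluster.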
So... if I have backups from the "NAS Backup" plugin on one cluster and want to restore to another, I can't without modifying the database with the old records from the original cluster? Gross. I was trying to keep these two clusters completely separate and still have a viable restore option if the original cluster is completely lost. Perhaps if I try to restore a backup through Backup and Restore it will still prompt for all the needed information and be able to restore; I guess all I can do is try that.
@Jayd603 Since you’re restoring backups into a totally separate CloudStack environment, it’s going to be tough to get them to show up without some database changes. However, you might be able to bypass the DB issues entirely by using the KVM QCOW2 import feature.

The trick is to register your backup target (the NAS) as Primary Storage instead of Secondary. If you do that, you can use the 'Import QCOW2 image from Shared Storage' option under Tools > Import-Export Instances. I am referring to this feature: https://docs.cloudstack.apache.org/en/4.22.0.0/adminguide/virtual_machines/importing_unmanaging_vms.html#import-instances-from-shared-storage. You can do this from the GUI as well (Tools --> Import-Export Instances --> select KVM --> select "Import QCOW2 image from Shared Storage" under Action).

It’s a bit of a 'hack' because that tool is technically for migrating external KVM VMs, but I think it can work for your use case. It lets you pick the raw .qcow2 files directly from the storage and spin them up as managed instances in your new setup without worrying about the old metadata.

The only catch is that you’ll need a way to map the files back to the right machines. Since the files are named with UUIDs, you'll need to reference your old database (or a file list) to figure out which .qcow2 belongs to which VM before you start the import.
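Before starting the import, it can help to build a quick inventory of the UUID-named images so each file can be matched to a VM. A sketch under assumptions: the mount path and directory layout below are invented (the demo creates empty placeholder files so the listing runs anywhere), and on real data you could additionally run `qemu-img info <file>` on each image to see its virtual size and any backing file, which narrows down which disk belonged to which VM.

```shell
# Assumed mount point; in a real setup this would be something like
# /mnt/old-secondary where the backup is mounted read-only.
mnt=./demo-backup
mkdir -p "$mnt/snapshots"
: > "$mnt/snapshots/1f0c9a2e-aaaa-bbbb-cccc-000000000001.qcow2"
: > "$mnt/snapshots/1f0c9a2e-aaaa-bbbb-cccc-000000000002.qcow2"

# List every qcow2 with its size, mtime, and path (GNU find), sorted by
# path, into a file you can cross-reference against old DB records.
find "$mnt" -name '*.qcow2' -printf '%s\t%TY-%Tm-%Td\t%p\n' | sort -k3 > inventory.txt
cat inventory.txt
```

Size and modification time alone are often enough to tell a small root disk from a large data disk, even before consulting the old database.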