This document describes the compute service available with fog-proxmox.
Proxmox supports management of both virtual machines (QEMU/KVM) and containers (LXC).
You can find more details in the Proxmox VM management wiki page and the Proxmox containers management wiki page.
Launch an interactive Ruby session and require the library:
irb
require 'fog/proxmox'
Create an identity service, with an access ticket:
identity = Fog::Proxmox::Identity.new(
proxmox_url: 'https://localhost:8006/api2/json',
proxmox_auth_method: 'access_ticket',
proxmox_username: 'your_user@your_realm',
proxmox_password: 'your_password',
connection_options: { ... }
)
With an API user token:
identity = Fog::Proxmox::Identity.new(
proxmox_url: 'https://localhost:8006/api2/json',
proxmox_auth_method: 'user_token',
proxmox_userid: 'your_user',
proxmox_tokenid: 'your_tokenid',
proxmox_token: 'your_token',
connection_options: { ... }
)
connection_options is optional and available with both authentication methods.
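All the examples below use a compute service instance, referred to as compute or service. A minimal sketch, assuming the compute service accepts the same credentials and options as the identity service above:

```ruby
compute = Fog::Proxmox::Compute.new(
  proxmox_url: 'https://localhost:8006/api2/json',
  proxmox_auth_method: 'access_ticket',
  proxmox_username: 'your_user@your_realm',
  proxmox_password: 'your_password'
)
service = compute
```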
Fog provides both a model and request abstraction. The request abstraction provides the most efficient interface and the model abstraction wraps the request abstraction to provide a convenient ActiveModel-like interface.
The request abstraction maps directly to the Proxmox VE API. It provides an interface to the Proxmox Compute service.
To see a list of requests supported by the service:
compute.requests
To learn more about Compute request methods, refer to the source files.
To learn more about Excon, refer to the Excon GitHub repo.
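As a hedged illustration, here are a few Excon options commonly passed through connection_options (for instance to accept the self-signed certificate of a default Proxmox installation):

```ruby
connection_options = {
  ssl_verify_peer: false, # accept a self-signed certificate (lab use only)
  connect_timeout: 60,    # seconds to wait while opening the connection
  read_timeout: 60        # seconds to wait for a server response
}
```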
Fog models behave in a manner similar to ActiveModel. Models will generally respond to create, save, persisted?, destroy, reload and attributes methods. Additionally, fog will automatically create attribute accessors.
Here is a summary of common model methods:
| Method | Description |
|---|---|
| create | Accepts a hash of attributes and creates the object. Note: creation is a non-blocking call and you will be required to wait for a valid state before using the resulting object. |
| save | Saves the object. Note: not all objects support updating. |
| persisted? | Returns true if the object has been persisted. |
| destroy | Destroys the object. Note: this is a non-blocking call and object deletion might not be instantaneous. |
| reload | Updates the object with the latest state from the service. |
| ready? | Returns true if the object is in a ready state and able to perform actions. Raises an exception if the object is in an error state. |
| attributes | Returns a hash of the model attributes and values. |
| identity | Returns the identity of the object. Note: this might not always be equal to object.id. |
| wait_for | Periodically reloads the model and yields to the specified block until the block returns true or a timeout occurs. |
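For instance, a typical session combining these methods might look like this sketch (assuming a node and a free nextid, both obtained as shown later in this document):

```ruby
server = node.servers.create(vmid: nextid) # non-blocking creation
server.persisted? # => true
server.reload     # refresh the model from the service
server.attributes # => hash of model attributes
server.identity   # => here, the vmid
server.destroy    # non-blocking deletion
```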
The remainder of this document details the model abstraction.
Proxmox supports cluster management. Each hypervisor in the cluster is called a node.
Proxmox installs a default node in the cluster called pve.
List all nodes:
service.nodes.all
This returns a collection of Fog::Proxmox::Compute::Node models.
Get a node:
node = service.nodes.find_by_id 'pve'
Get statistics (default is data):
node.statistics
Get statistics (as a PNG image):
node.statistics('rrd', { timeframe: 'hour', ds: 'cpu,memused', cf: 'AVERAGE' })
Proxmox supports server management. Servers are also called virtual machines (VMs).
VMs are managed by QEMU/KVM. They are attached to a node.
More details in the Proxmox VM management wiki page.
You need to specify a node before managing VMs. Fog-proxmox enables this by managing VMs from a node.
List all servers:
node.servers.all
This returns a collection of Fog::Proxmox::Compute::Server models.
Before creating a server you can get the next available server id (integer >= 100) in the cluster:
nextid = node.servers.next_id
You can also verify that an id is free or valid:
node.servers.id_valid? nextid
Now that you have a valid id, you can create a server in this node:
node.servers.create({ vmid: nextid })
Get this server:
server = node.servers.get nextid
Add options: boot at startup, OS type (Linux 4.x), French keyboard, no hardware KVM virtualization:
server.update({ onboot: 1, keyboard: 'fr', ostype: 'l26', kvm: 0 })
Add a cdrom volume:
server.update({ ide2: 'none,media=cdrom' })
Add a network interface controller (nic):
server.update({ net0: 'virtio,bridge=vmbr0' })
Get MAC addresses generated by Proxmox:
server.config.mac_adresses
This returns an array of strings.
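For the net0 interface shown in the server configuration below, for example:

```ruby
server.config.mac_adresses # => ["8E:BF:3E:E7:17:0D"]
```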
Get the whole server configuration:
server.config
This returns a Fog::Proxmox::Compute::ServerConfig model:
<Fog::Proxmox::Compute::ServerConfig
smbios1: "uuid=ba2da6bd-0c92-4cfe-8f70-d22cc5b5bba2",
numa: 0,
digest: "348fdc21536f23a29dfb9b3120faa124aaeec742",
ostype: "l26",
cores: 1,
virtio0: "local-lvm:vm-100-disk-1,size=1G",
bootdisk: "virtio0",
scsihw: "virtio-scsi-pci",
sockets: 1,
net0: "virtio=8E:BF:3E:E7:17:0D,bridge=vmbr0",
memory: 512,
name: "test",
ide2: "cdrom,media=cdrom",
server: <Fog::Proxmox::Compute::Server vmid: 100, ...>
>
Get NICs config:
nics = server.config.interfaces
This returns a hash:
nics = {net0: 'virtio=8E:BF:3E:E7:17:0D,bridge=vmbr0'}
Get IDE, SATA, SCSI or VirtIO controllers config:
disks = server.config.disks
This returns a hash:
disks = {ide2: 'cdrom,media=cdrom', virtio0: "local-lvm:vm-100-disk-1,size=1G", sata0: "local-lvm:vm-100-disk-2,size=1G"}
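The values are raw Proxmox option strings. As a purely hypothetical helper (not part of fog-proxmox), you could extract the size option of each disk:

```ruby
sizes = disks.transform_values do |value|
  value.split(',').find { |opt| opt.start_with?('size=') }&.delete_prefix('size=')
end
# => {ide2: nil, virtio0: "1G", sata0: "1G"}
```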
VNC, SPICE and terminal consoles are available.
The server needs to be running and to have a VGA display configured.
The default VGA value std implies a VNC console:
vnc_console = server.start_console(websocket: 1)
server.connect_vnc(vnc_console)
This returns a VNC session hash.
A VGA value of qxl implies a SPICE console:
spice_console = server.start_console(proxy: 'localhost')
This returns a SPICE session hash.
A VGA value of serial0 implies a terminal console:
term_console = server.start_console
This returns a term session hash.
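The session hashes follow the Proxmox API console proxy responses. As an assumption (the field names below come from the Proxmox vncproxy response, not from a fog-proxmox guarantee), a VNC session hash typically carries the data needed to reach the console:

```ruby
vnc_console['port']   # TCP port of the VNC proxy
vnc_console['ticket'] # one-time ticket used as the VNC password
vnc_console['user']   # authenticated user
vnc_console['cert']   # server certificate
```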
Before attaching a hdd volume, you can first fetch the available storages that can hold disk images in this node:
storages = node.storages.list_by_content_type 'images'
storage = storages[0] # local-lvm
Four types of storage controllers emulated by QEMU are available:
- IDE: ide[n], n in [0..3]
- SATA: sata[n], n in [0..5]
- SCSI: scsi[n], n in [0..13]
- VirtIO Block: virtio[n], n in [0..15]
The hdd id is the controller type appended with an integer (n).
More details on the complete configuration options can be found in the Proxmox VE API.
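For illustration, a hypothetical snippet (not a fog-proxmox helper) enumerating the valid hdd ids implied by these ranges:

```ruby
ranges = { 'ide' => 0..3, 'sata' => 0..5, 'scsi' => 0..13, 'virtio' => 0..15 }
hdd_ids = ranges.flat_map { |type, range| range.map { |n| "#{type}#{n}" } }
hdd_ids.first(6) # => ["ide0", "ide1", "ide2", "ide3", "sata0", "sata1"]
```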
Then attach a hdd from this storage:
disk = { id: 'virtio0', storage: storage.storage, size: '1' } # VirtIO block device with 1GB
options = { backup: 0, replicate: 0 } # neither backup nor replication
server.attach(disk, options)
Resize a disk:
server.extend('virtio0','+1G')
Move a disk:
server.move('virtio0','local')
Detach a disk:
server.detach 'virtio0'
Actions on your server:
server.action('start') # start your server
server.wait_for { server.ready? } # wait until it is running
server.ready? # you can check if it is ready (i.e. running)
server.action('suspend') # pause your server
server.wait_for { server.qmpstatus == 'paused' } # wait until it is paused
server.action('resume') # resume your server
server.wait_for { server.ready? } # wait until it is running
server.action('stop') # stop your server
server.wait_for { server.status == 'stopped' } # wait until it is stopped
Fetch server disk_images:
disk_images = server.disk_images
This returns an array of Fog::Proxmox::Compute::Volume instances.
Delete server:
server.destroy
You can back up all of a node's guests or just one guest.
You first need to get a node or a server to manage its backups:
node = compute.nodes.get 'pve'
server = node.servers.get vmid
Then you can back up one server:
options = { compress: 'lzo' }
server.backup options
or back up all servers on a node:
node.backup options
You can restore a server from a backup.
Backups are volumes whose content type is backup and which are owned by a server.
You first fetch the backup volumes of this server:
volumes = server.backups
This returns an array of Fog::Proxmox::Compute::Volume instances.
Then you choose one:
backup = volumes[0] # local:backup/vzdump-qemu-100-2018_05_15-15_18_31.vma.lzo
This returns a Fog::Proxmox::Compute::Volume instance:
<Fog::Proxmox::Compute::Volume
volid="local:backup/vzdump-qemu-100-2018_05_15-15_18_31.vma.lzo",
content="backup",
size=376,
format="vma.lzo",
node=nil,
storage=nil,
vmid="100"
>
Then you can restore it:
options = { compress: 'lzo' }
server.restore backup, options
You can delete it:
backup.delete
More details on the complete backup options configuration hash can be found in the Backup and restore wiki page.
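As a hedged sketch, a richer options hash could combine several vzdump parameters (the names follow the Proxmox vzdump API; see the wiki page above for the authoritative list):

```ruby
options = {
  compress: 'lzo',  # compression: lzo, gzip, ...
  mode: 'snapshot', # snapshot, suspend or stop
  storage: 'local', # target storage for the dump
  remove: 0         # keep older backups
}
server.backup options
```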
You first need to get a server to manage its snapshots:
server = node.servers.get vmid
Then you can create a snapshot on it:
snapname = 'snapshot1' # you define its id
server.snapshots.create snapname
Get a snapshot:
snapshot = server.snapshots.get snapname
Add description:
snapshot.description = 'Snapshot 1'
snapshot.update
Rollback server to this snapshot:
snapshot.rollback
Delete snapshot:
snapshot.destroy
Proxmox supports cloning servers. It creates a new VM as a copy of the server.
You first need to get a server to manage its clones and a valid new VM id:
server = node.servers.get vmid
newid = node.servers.next_id
Then you can clone it:
server.clone(newid)
It creates a new server whose id is newid, so you can manage it as a server.
Destroy the clone:
clone = node.servers.get newid
clone.destroy
Proxmox supports Linux container management.
Containers are managed by LXC. They are attached to a node.
More details in the Proxmox Linux Containers management wiki page.
You need to specify a node before managing containers. Fog-proxmox enables this by managing containers from a node.
List all containers:
node.containers.all
This returns a collection of Fog::Proxmox::Compute::Container models, which inherit from Fog::Proxmox::Compute::Server because they share many features.
Before creating a container you can get the next available container id (integer >= 100) in the cluster:
nextid = node.containers.next_id
You can also verify that an id is free or valid:
node.containers.id_valid? nextid
Now that you have a valid id, you can create a container in this node. Before creating the container, you need an available template uploaded into the cluster. You can define the rootfs volume (1G), a root password and an SSH public key.
ostemplate = 'local:vztmpl/alpine-3.7-default_20171211_amd64.tar.xz'
container_hash = { vmid: nextid, storage: 'local-lvm',
ostemplate: ostemplate, password: 'proxmox01',
rootfs: 'local-lvm:1' }
node.containers.create container_hash
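To also provision the SSH public key mentioned above, a hedged sketch (the ssh-public-keys parameter name comes from the Proxmox API, and the key path is an assumption):

```ruby
# Hypothetical example: adjust the key path to your environment.
public_key = File.read(File.expand_path('~/.ssh/id_rsa.pub'))
node.containers.create container_hash.merge('ssh-public-keys': public_key)
```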
Get this container:
container = node.containers.get nextid
Add options: boot at startup, OS type (alpine):
container.update({ onboot: 1, ostype: 'alpine' })
Add a network interface controller (nic):
container.update({ net0: 'bridge=vmbr0,name=eth0,ip=dhcp,ip6=dhcp' })
Fetch all NICs:
nics = container.config.nics
This returns a hash:
nics = { net0: 'bridge=vmbr0,name=eth0,ip=dhcp,ip6=dhcp' }
Get MAC addresses generated by Proxmox:
container.config.mac_adresses
This returns an array of strings.
Get container configuration:
container.config
This returns a Fog::Proxmox::Compute::ContainerConfig model:
<Fog::Proxmox::Compute::ContainerConfig
memory: 512,
net0: "name=eth0,bridge=vmbr0,hwaddr=BE:3C:A9:3F:4E:39,ip=dhcp,ip6=dhcp,type=veth",
swap: 512,
cores: 1,
rootfs: "local-lvm:vm-100-disk-1,size=1G",
hostname: "CT100",
digest: "e5131befed2f6ff8e11d598c4d8bb6016d5c0901",
ostype: "alpine",
arch: "amd64",
container: <Fog::Proxmox::Compute::Container vmid: 100, ...>
>
Before attaching a volume, you can first fetch the available storages that can hold images in this node:
storages = node.storages.list_by_content_type 'images'
storage = storages[0] # local-lvm
A default and minimal volume is called rootfs.
Additional volumes can be attached to a container and are called mount points:
- Mount points: mp[n], n in [0..9]
The mount point id is mp appended with an integer (n).
More details on the complete configuration options can be found in Proxmox VE Linux Container.
Then attach a volume from this storage:
mp0 = { id: 'mp0', storage: storage.storage, size: '1' }
options = { mp: '/opt/app', backup: 0, replicate: 0, quota: 1 }
container.attach(mp0, options)
Extend a volume:
container.extend('rootfs', '+5M') # add 5Mb to rootfs volume
Move a volume:
container.move('rootfs', 'local-lvm', delete: 1) # move rootfs and delete original
Detach a volume:
container.detach('mp0') # detach mp0 (it becomes an unused volume)
container.detach('unused0') # delete the unused volume
Actions on your container:
container.action('start') # start your container
container.wait_for { container.ready? } # wait until it is running
container.ready? # you can check if it is ready (i.e. running)
container.action('stop') # stop your container
container.wait_for { container.status == 'stopped' } # wait until it is stopped
Resume and suspend actions are not implemented.
Fetch container mount points:
mount_points = container.config.mount_points
This returns a hash:
mount_points = { mp0: "local-lvm:vm-100-disk-2,mp=/opt/app,size=1G" }
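The value is a raw Proxmox option string: the volume id comes first, followed by comma-separated options. A hypothetical parsing helper, for illustration only:

```ruby
volume, *opts = mount_points[:mp0].split(',')
options = opts.to_h { |opt| opt.split('=', 2) }
volume  # => "local-lvm:vm-100-disk-2"
options # => {"mp"=>"/opt/app", "size"=>"1G"}
```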
Delete container:
container.destroy
You can back up all of a node's guests or just one guest.
You first need to get a node or a container to manage its backups:
node = compute.nodes.get 'pve'
container = node.containers.get vmid
Then you can back up one container:
options = { compress: 'lzo' }
container.backup options
or back up all containers and servers on a node:
node.backup options
You can restore a container from a backup.
Backups are volumes whose content type is backup and which are owned by a container.
You first fetch the backup volumes of this container:
volumes = container.backups
This returns an array of Fog::Proxmox::Compute::Volume instances.
Then you choose one:
backup = volumes[0] # local:backup/vzdump-lxc-100-2018_05_15-15_18_31.tar.lzo
This returns a Fog::Proxmox::Compute::Volume instance:
<Fog::Proxmox::Compute::Volume
volid="local:backup/vzdump-lxc-100-2018_05_15-15_18_31.tar.lzo",
content="backup",
size=376,
format="tar.lzo",
node=nil,
storage=nil,
vmid="100"
>
Then you can restore it:
options = { compress: 'lzo' }
container.restore backup, options
You can delete it:
backup.delete
More details on the complete backup options configuration hash can be found in the Backup and restore wiki page.
You first need to get a container to manage its snapshots:
container = node.containers.get vmid
Then you can create a snapshot on it:
snapname = 'snapshot1' # you define its id
container.snapshots.create snapname
Get a snapshot:
snapshot = container.snapshots.get snapname
Add description:
snapshot.description = 'Snapshot 1'
snapshot.update
Rollback container to this snapshot:
snapshot.rollback
Delete snapshot:
snapshot.destroy
Proxmox supports cloning containers. It creates a new container as a copy of the original container.
You first need to get a container to manage its clones and a valid new container id:
container = node.containers.get vmid
newid = node.containers.next_id
Then you can clone it:
container.clone(newid)
It creates a new container whose id is newid, so you can manage it as a container.
Destroy the clone:
clone = node.containers.get newid
clone.destroy
Proxmox supports task management. A task lets you follow any asynchronous action performed on a node: VM creation, start, etc. Indeed, some of these tasks can take a long time to execute.
You first need to get a node to manage its tasks:
node = compute.nodes.find_by_id 'pve'
Search tasks (limit results to 1):
tasks = node.tasks.search(limit: 1)
Get a task by its id. This id can be retrieved as a result of an action:
taskid = snapshot.destroy
task = node.tasks.find_by_id taskid
task.wait_for { succeeded? }
Stop a task:
task.stop
More examples can be seen in examples/compute.rb and spec/compute_spec.rb.