With a view to greater security and data protection, how do we implement additional safeguards for the Server Room without blowing the annual IT budget?
In the event of a serious accident in the Server Room (like the OVH cloud fire in France in 2021, or the GoDaddy fire, also in France, in 2024), are you safe? Have you planned a solution?
For example, we could place the backup system outside the Server Room, in another room or even in the Cloud (since vzdump lets us send only the modified blocks, saving time and bandwidth), or adopt both solutions using 2 Proxmox Backup Server (PBS) instances, one local and one remote, which synchronize via the integrated “sync datastores” functionality between PBS instances.
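As a minimal sketch of that remote synchronization, assuming a second PBS reachable at pbs2.example.com and a dedicated sync user (host, user, password and fingerprint below are placeholders), the remote and a nightly pull job can be defined from the local PBS shell:

#DEFINE THE REMOTE PBS (HOST, USER, PASSWORD AND FINGERPRINT ARE EXAMPLES)
proxmox-backup-manager remote create pbs2 --host pbs2.example.com --auth-id sync@pbs --password 'SECRET' --fingerprint 64:d3:ff:...
#CREATE A SYNC JOB THAT PULLS THE REMOTE DATASTORE INTO THE LOCAL ONE EVERY NIGHT
proxmox-backup-manager sync-job create pbs2-nightly --remote pbs2 --remote-store DATASTORE --store DATASTORE --schedule daily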
So in case of fire, theft, a short circuit that burns every power supply connected to that electrical line, a failure of the air-conditioning system, a flood or any other mishap that could strike the Server Room, my data (backups) are physically safe in another place.
When you implement a backup system, you almost always lose sight of one fundamental thing…
THE TIME!
HOW LONG WILL IT TAKE TO RESTORE EVERYTHING?
Yes, because we have backups, and they are (reasonably) up to date, but WHERE do we restore them? How long will it take to reactivate all the essential services?
Let’s assume we have a beautiful “Hyper-Converged Ceph Cluster” with Proxmox VE 8.X (Proxmox Virtual Environment) and the “High Availability” feature enabled.
We have a highly redundant system thanks to Ceph RBD (Ceph Block Devices), which distributes copies of the “VM” disk blocks across multiple cluster nodes, so that each node holds a copy of our precious data that is updated in real time. “High Availability”, in the event of hardware problems on a node, automatically moves the “VMs” to another working node, using the Live Migration function to move a “VM” from one node to another.
We implement the Proxmox Backup Server 3.X (PBS) as a backup system for the “VMs” in our Ceph Cluster.
Perfectly integrated into the cluster, it allows us to back up the “VMs” through a very simple interface. Proxmox VE uses vzdump with “dirty bitmaps” to send the disk images to PBS. Backing up “VMs” of several TB takes only a few minutes, because the disk images (BLOCKS) are divided into fixed-size 4 MiB chunks, and by keeping track of the changed blocks of an image it is possible to send only the modified blocks instead of the entire disk image, saving a lot of time and bandwidth. Furthermore, using the ZFS (Zettabyte File System) file system on the Proxmox Backup Server 3.X (PBS) allows you to have more than double the effective storage space, almost unlimited when the real storage is large.
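For reference, the same kind of backup can also be triggered from the Proxmox VE shell; a minimal sketch, assuming the PBS storage is named proxmox-backup as later in this article:

#BACK UP VM 100 TO THE PBS STORAGE IN SNAPSHOT MODE; ONLY THE DIRTY 4MiB CHUNKS ARE SENT WHEN THE BITMAP IS STILL VALID
vzdump 100 --storage proxmox-backup --mode snapshot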
With PBS you can also use the “Live Restore” feature: starting a “VM” from a backup before the restore is completed, i.e. running the “VM” while it is being restored (with all the limitations and problems that may arise from that).
In short, this combination is crazy cool, it is Open Source, and it has been tested for years, since it is all based on KVM, QEMU, the Ceph storage platform and ZFS (OpenZFS). Everything is redundant, from the VM data to the UPS, the switches, etc.
The best possible solution would be to replicate the Ceph cluster in another location using the built-in replication functions, but not everyone has the ability to set up another Server Room with connectivity above 1 Gbps.
Moving everything to the Cloud has the same problem: how long will it take to restore the machines? How much bandwidth will our recovery service provider give us? How fast is the hosting storage? How many machines can I restore at once?
Is the backup located in a farm DIFFERENT from the one hosting the main servers?
I found myself looking for a solution to a significant problem: having one or more servers ready to go, with up-to-date data.
If the system had been less complex (a cluster based on ZFS storage), I could simply have used “zfs send” and “zfs receive”, or enabled “VM” “Storage Replication” directly from the Proxmox VE interface (similar to Microsoft’s Hyper-V replication, about which I wrote a configuration article available here), and configured a server located in another room or in the Cloud, using a reliable and fast mechanism native to the system.
On the Proxmox PBS the ZFS filesystem is used, but not in the same way as on Proxmox VE: the data is divided into “chunks”. This works great and allows very quick backups, but it is not possible to use “zfs send” and “zfs receive” directly on the “VM” disk images stored on the PBS to synchronize image changes to a Proxmox VE.
So in this case the only choice available is to restore the backup onto one or more servers.
We install Proxmox VE 8.X on one or more servers, using the ZFS file system as storage for our “VMs”. I recommend not putting them in the same cluster, for various reasons:
1 – We would have to modify the “corosync” voting system for the “HA”, to prevent the additional servers from voting in the cluster, and if we use links on fiber-optic interfaces or internal IP ranges, we would only complicate the management.
2 – Security reasons: having another cluster or server with different users and passwords means that, in case of unauthorized access, an intruder could be prevented from reaching the last potentially valid data we have left.
Let’s configure a specific “user” on the PBS with read-only access, so that the copies can only be restored and not deleted, and add it to the storage available to the new cluster.
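A minimal sketch of this setup, assuming the user backupread@pbs and the datastore DATASTORE used throughout this article (run the first two commands on the PBS, the third on the Proxmox VE node; the fingerprint is a placeholder):

#ON THE PBS: CREATE THE USER AND GRANT READ-ONLY ACCESS TO THE DATASTORE
proxmox-backup-manager user create backupread@pbs --password 'Strongpassword!'
proxmox-backup-manager acl update /datastore/DATASTORE DatastoreReader --auth-id backupread@pbs
#ON THE PROXMOX VE: ADD THE PBS AS A STORAGE
pvesm add pbs proxmox-backup --server 192.168.200.2 --datastore DATASTORE --username backupread@pbs --password 'Strongpassword!' --fingerprint b0:35:62:...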
So at first I created a simple Bash script that would automatically restore the latest available backup from the Proxmox Backup Server (PBS) onto the Proxmox VE server, so as to always have a VM ready to go; but then it evolved a bit… just a little bit.
I will explain the reasons for the most important options; the rest is fully commented and the logic is easy to follow.
The script is also available on my github repository here.
Requirements:
- Proxmox VE 8.X
- “jq”: to install, type in a shell apt-get install jq
- “unzip”: to install, type in a shell apt-get install unzip
- Blocksync, a script to “synchronize (large) files to a local/remote destination using an incremental algorithm”, can be downloaded from https://github.com/guppy/blocksync
- Bdsync, “a fast block device synchronizing tool”, can be downloaded from https://github.com/rolffokkens/bdsync/
Let’s open a shell with “root” permissions and prepare to configure the script.
Blocksync
First of all, we download Blocksync into /root with the command:
wget https://raw.githubusercontent.com/guppy/blocksync/master/blocksync.py
Give it the correct permissions:
chmod 755 /root/blocksync.py
Check that everything works correctly with the command:
./blocksync.py
It will display the list of available options.
Bdsync
Bdsync requires compilation on a machine (in my case Proxmox VE 8.1.X) with the following additional packages installed:
apt-get install build-essential -y
apt-get install libssl-dev pandoc -y
We download the sources to compile with the command:
wget https://github.com/rolffokkens/bdsync/archive/master.zip
Extract the sources:
unzip master.zip
We enter the directory and launch the compilation with:
cd ./bdsync-master
make
Give it the correct permissions:
chmod 755 ./bdsync
Copy the executable to the system folder:
cp ./bdsync /usr/sbin
Of course, once compiled, the executable can be copied to other servers without recompiling every time. I also make the executable already compiled for Proxmox VE 8.1.2 available on my github here.
I have also prepared a script that automatically compiles and installs Bdsync for Proxmox VE 8.X.X available here.
You will find the complete script at the end of the page, or you can download it from my github here.
Remember that it is essential to always have positive or negative feedback on an operation: if the system breaks, we will only notice when we need to activate it, and we will discover that, due to a very small and trivial problem, the system never worked.
Let’s start by modifying the fields for the server description and the email sending parameters.
#SERVER NAME
SERVERNAME="PVE-B"
#DESTINATION EMAIL
TO="your@email.com"
I remind you that I have already written a guide, compatible with Proxmox VE 8.X, on configuring Postfix for sending e-mails, available here.
We insert the list of the VMs we want to restore in:
VM=("100" "101" "102")
Or even just one:
VM=("100")
In “pbsuser” enter the user created on the PBS with read-only access to the backup list:
pbsuser="backupread@pbs"
In “export PBS_PASSWORD” we enter the password of the PBS user:
#PBS_PASSWORD
export PBS_PASSWORD="Strongpassword!"
In “pbsdatastore” we specify the datastore on the PBS where the VMs reside:
#DATASTORE PBS
pbsdatastore="DATASTORE"
In “pbsip”, the IP of the PBS server:
#IP PBS
pbsip="192.168.200.2"
In “pbssource”, the name of the PBS storage added to the Proxmox VE, which can be viewed in /etc/pve/storage.cfg:
# FIND IN /etc/pve/storage.cfg
#SOURCE PBS
pbssource="proxmox-backup"
In “pooldestination”, the name of the destination “zfs” pool on the Proxmox VE where we will store our VMs, also viewable in /etc/pve/storage.cfg:
#LOCAL DESTINATION POOL
# FIND IN /etc/pve/storage.cfg
pooldestination="pool-data"
ATTENTION: TO KEEP TRACK OF THE VERSION OF THE RESTORED VM, I CREATE A DESCRIPTION CONTAINING THE DATE. ANY EXISTING DESCRIPTION OF THE VM WILL BE REPLACED WITH THE DATE OF THE RESTORED BACKUP, IN THIS FORMAT:
2024-03-16T22:50:00Z
Let’s start with the first option, SYNCDISK. If set to 0, every time the script runs it will DELETE the VM and RESTORE it as if we were carrying out the procedure from the GUI; the script will only modify the DESCRIPTION and disable the automatic start of the VM when the Proxmox VE server boots.
#CONTROL VARIABLE FOR SYNC DISK OR RESTORE, 1 ENABLED, 0 DISABLED
SYNCDISK=1
The script compares the current version of the affected “VM” (THE DATE PRESENT IN THE DESCRIPTION) with the latest backup available for that “VM” on the PBS. If the “VM” is already up to date, nothing is executed.
If set to 1 instead, and the “VM” is already present, only the disks will be synchronized.
In this case the script creates a snapshot of the “VM”, naming it after the CURRENT version present in the description, i.e. the date (modified, because a snapshot name must start with a letter and cannot contain certain characters such as “:”):
s2024-03-16T22_50_00Z
This way we can keep MULTIPLE versions of the same machine using the native ZFS snapshot function and, in case of problems with a sync, return to the previous state in a few seconds.
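The name transformation is the same one used inside the script; you can see it in a shell like this:

#TURN THE BACKUP DATE FROM THE DESCRIPTION INTO A VALID SNAPSHOT NAME
backuptimemod=$(echo "2024-03-16T22:50:00Z" | sed 's/:/_/g')
echo "s$backuptimemod"
#PRINTS s2024-03-16T22_50_00Z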
The SYNCDISKSNAP variable sets the number of snapshots to keep; if set to 0, snapshots are disabled. I recommend keeping a high number of snapshots, since they can take up very little space in terms of GB, given how ZFS is designed compared to other filesystems.
##VARIABLE NUMBER SNAPSHOT ZFS TO KEEP COMBINED WITH SYNCDISK, 0 DISABLE
SYNCDISKSNAP=21
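To check how little space the retained snapshots actually use, you can list them with their usage on the destination pool (pool-data is the example pool used later in the script):

#LIST THE ZFS SNAPSHOTS OF THE VM DISKS WITH THE SPACE EACH ONE USES
zfs list -t snapshot -o name,used,creation -r pool-data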
After that, we have 2 ways to synchronize the disks, selected via the SYNCDISKTYPE variable.
Both use a system of direct modification of the disks (BLOCKS) of the “VMs”, trying to optimize the transfer time and the use of the available bandwidth by avoiding transmitting or writing identical or all-zero blocks.
If set to 0, the Python3 script Blocksync.py will be used; if set to 1, the compiled program Bdsync will be used.
The choice between the 2 methods depends on various factors; pick the one that best suits you.
To synchronize the disks we use the “proxmox-backup-client map” command integrated in Proxmox VE: given a disk within the relevant backup present on the PBS, it creates a device (/dev/loopXX) from which the virtual disk can be read. So on our Proxmox VE we end up with (pass me the term) a mount of the VM disk image as a device, and from there on we treat it like any other disk. To view the list of mapped disks (mounts), you can run the command:
proxmox-backup-client unmap
This returns the loop devices created, with the path of the disk on the PBS. Normally none should be present, except while the replication script is running. If the replication script gets stuck and a map remains active, it can be disconnected with the command:
proxmox-backup-client unmap /dev/loop0
where /dev/loop0 is the affected device. Looking inside the replication script, you can see that I implemented a routine which verifies, while a VM replication is running, that a map has not already been created for that “VM”; if there is one, it generates an error message.
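For clarity, a hypothetical manual session with the same commands the script uses (VM 100, the snapshot date and the disk name drive-scsi0.img are examples; PBS_REPOSITORY and PBS_PASSWORD must be set as in the script):

#MAP A DISK FROM THE BACKUP TO A LOOP DEVICE
proxmox-backup-client map vm/100/2024-03-16T22:50:00Z drive-scsi0.img
#USE THE RETURNED /dev/loopXX AS THE READ SOURCE, THEN RELEASE THE MAPPING
proxmox-backup-client unmap /dev/loop0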
BLOCKSYNC
A very versatile Python3 script, useful for synchronizing block devices over the network. Refer to the official documentation.
The variables to complete are:
“BLOCKSYNC”: the full path to the Python3 script.
#SCRIPT SYNC BLOCK LOCATION
BLOCKSYNC="/root/blocksync.py"
“BLOCKSYNCSIZE”: the block size (I recommend running some tests on the size to find the optimal value for your network).
#BLOCK SIZE FOR TRANSFER
#DEFAULT 1MB = 1024 * 1024 = 1048576
BLOCKSYNCSIZE="1048576"
“BLOCKSYNCHASH1” is the HASH algorithm to use. By default it is sha512, but I recommend the more efficient “blake2b”. The available algorithms are those provided by Python3’s hashlib library: https://docs.python.org/3/library/hashlib.html
#TYPE OF HASH TO USE
#DEFAULT sha512
BLOCKSYNCHASH1="blake2b"
Pro = just download the file and, with Python3, it is ready to use.
Cons = slower than Bdsync, since reading and writing happen live; in case of network problems you will need to manually restore the snapshot.
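To give an idea, this is how the script below invokes it for a single disk (the loop device and the zvol path are examples):

#SYNC THE MAPPED BACKUP DISK ONTO THE LOCAL ZVOL: 1MiB BLOCKS, blake2b HASH, FORCE FLAG AS IN THE SCRIPT
python3 /root/blocksync.py /dev/loop0 localhost /dev/zvol/pool-data/vm-100-disk-0 -b 1048576 -1 blake2b -f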
BDSYNC
A very versatile program too, very light and useful for synchronizing block devices over the network. Refer to the official documentation.
Bdsync requires compilation and works in 2 phases:
1 – It reads the differences between the current “VM” disk and the backup disk mapped in /dev/loopXX, and creates a compressed (“zstd”) file of disk differences (a binary patch file). It does not write the changes directly to the device, but simply saves them in a local diff file.
2 – Once the file containing the differences between the two block devices has been saved correctly, it applies (writes) it.
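By hand, the two phases look roughly like this (device paths and file names are examples; this mirrors what the script below does):

#PHASE 1: READ THE DIFFERENCES AND SAVE THEM AS A zstd-COMPRESSED PATCH FILE
bdsync --zeroblocks --hash=blake2b512 --blocksize=1048576 "bdsync --server" /dev/loop0 /dev/zvol/pool-data/vm-100-disk-0 | zstd -z -T0 > /root/bdsynctmp/vm-100-disk-0.zst
#PHASE 2: DECOMPRESS THE PATCH FILE AND APPLY (WRITE) IT TO THE LOCAL DEVICE
zstd -d -T0 < /root/bdsynctmp/vm-100-disk-0.zst | bdsync --patch=/dev/zvol/pool-data/vm-100-disk-0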
The variables to complete are:
“BDSYNCHASH1”, like “BLOCKSYNCHASH1”, indicates the HASH algorithm. Bdsync uses “md5” by default but, as written above, we specify the blake2 algorithm, in this case “blake2b512”.
For the list of available algorithms (Bdsync is compiled against openssl), launch the command “openssl list -digest-algorithms”: the digests listed can be used with Bdsync.
#TYPE OF HASH TO USE
#DEFAULT md5
#LIST openssl list -digest-algorithms
BDSYNCHASH1="blake2b512"
“BDSYNCSIZEBLOCK” specifies the size of the blocks to check; by default it is “4096” (here too, I recommend running some tests based on your network).
Pro = very fast, and it lets us save the diff file to apply later.
Cons = requires compilation.
Let’s focus for a moment on why I used the Blake algorithm and not the classic MD5, SHA-1, SHA-256 or SHA-512. The choice comes down to two factors:
1 – SECURITY: some algorithms have known hash-collision weaknesses, which in this case affects us little, but it is always better to get into the habit of no longer using them. I invite you to read https://en.wikipedia.org/wiki/Cryptographic_hash_function#Attacks_on_cryptographic_hash_algorithms
2 – SPEED: the “blake2” algorithm is fast, very fast, not to mention the benchmarks on blake3. See https://www.blake2.net/ and https://en.wikipedia.org/wiki/BLAKE_(hash_function)
Try playing with the various hashes if you want to clear up any doubts, but know that I have already done it for you 😀
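If you want to run the comparison yourself, openssl ships a digest benchmark; assuming your openssl build includes the blake2b512 digest, something like:

#COMPARE DIGEST THROUGHPUT ON YOUR OWN CPU
openssl speed -evp sha512
openssl speed -evp blake2b512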
Let’s move on to the log variables. At the beginning, as a good habit, log all the steps with their timing, to get a better idea of how long each phase takes.
“ERROREXIT” indicates whether to exit the script immediately in case of problems: “0” continue, “1” exit immediately.
#BLOCK THE REPLICATION PROCESS IN CASE OF ERROR "1" TO EXIT, "0" TO CONTINUE IN CASE OF ERROR
ERROREXIT="0"
“REPLICALOG” indicates whether to log the disk synchronization progress, useful for seeing the execution times the first few times: “0” disabled, “1” included in the log.
#INSERT THE REPLICA PROGRESS IN THE LOG, 1 ENABLED, 0 DISABLED
REPLICALOG="1"
“LOGSIMPLE”, if enabled with “1”, sends a very simple summary email on the status of each “VM”, without additional messages; with “0” a more detailed email is sent. I recommend enabling it once the system is up and running.
#LOG SIMPLE, 1 ENABLED, 0 DISABLED
LOGSIMPLE="0"
Before launching the replication script, we need to connect manually once from the Proxmox VE server to the PBS with the command “proxmox-backup-client snapshot list --repository backupread@pbs@192.168.200.2:DATASTORE” (replace the connection data with your own, the same data inserted in the script). We will be asked for the password and, once it is entered, we will have to ACCEPT THE SERVER CERTIFICATE:
fingerprint: b0:35:62:7d:0b:38:92:46:06:6f:f5:9c:17:bf:3f:3d:ca:2d:7a:11:86:34:b3:42:a3:12:12:c9:98:1c:28:98
Are you sure you want to continue connecting? (y/n):
Once done, we are free to start the script as many times as we wish; otherwise the script will block indefinitely, and the certificate acceptance prompt will appear in the log in “/tmp/vmrestore/restorevm-error.log”.
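Alternatively, for unattended use, proxmox-backup-client also honors the PBS_FINGERPRINT environment variable, so the certificate can be pinned non-interactively (the fingerprint below is the example one above):

#PIN THE PBS CERTIFICATE FINGERPRINT INSTEAD OF ACCEPTING IT INTERACTIVELY
export PBS_FINGERPRINT="b0:35:62:7d:0b:38:92:46:06:6f:f5:9c:17:bf:3f:3d:ca:2d:7a:11:86:34:b3:42:a3:12:12:c9:98:1c:28:98"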
A small note: this script handles “VMs” and not containers (I don’t currently use containers in production); if one day I need that, I will update it for them too.
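To run the replication on a schedule, the script’s own header reminds you to set PATH in crontab; a hypothetical entry (the script path /root/restorevm.sh is an assumption) could be:

#RUN THE REPLICATION EVERY NIGHT AT 03:00 (crontab -e)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 3 * * * /root/restorevm.sh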
Below you can see the complete replication script; you can also download it from my github repository, together with Blocksync.py and the already compiled Bdsync.
If you need help or want to give suggestions, feel free to contact me on my LinkedIn profile https://www.linkedin.com/in/valerio-puglia-332873125/
#!/bin/bash
# Proxmox restore script by Thelogh
#Bash script for automatic restore of VMs from the Proxmox Backup Server (PBS) 3.X to the Proxmox VE 8.X.X
#The script allows the restoration of the "VM" from a backup, the synchronization of the disks, and the use of snapshots on ZFS to maintain previous versions.
#https://www.alldiscoveries.com/prevent-long-disaster-recovery-on-hyper-converged-ceph-cluster-with-proxmox-v8-with-high-availability/
#For all requests write on the blog
#REPOSITORY
#https://github.com/thelogh/proxmox-restore-script
#V.1.0.0
#
#----------------------------------------------------------------------------------#
############################# START CONFIGURATION OPTIONS ##########################
#----------------------------------------------------------------------------------#
#INSERT ACTUAL PATH ON CRONTAB
#echo $PATH
#INSERT THE RESULT ON CRONTAB
#PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
#export -p
#LIST LATEST BACKUP
#proxmox-backup-client snapshot list --repository backupread@pbs@192.168.200.2:DATASTORE
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
#FAIL A PIPELINE IF ANY OF ITS COMMANDS FAILS, SO THE $? CHECKS AFTER THE bdsync/zstd PIPES CATCH ERRORS
set -o pipefail
#SERVER NAME
SERVERNAME="PVE-B"
#DESTINATION EMAIL
TO="your@email.com"
#LIST OF VMs IN ARRAY TO PROCESS=("100" "101" "102")
VM=("100")
#DEFINE PBS
#PBS_BACKUP_DIR
#PBS_BACKUP_NAME
#PBS_NAMESPACE
#USERNAME FOR PBS AUTHENTICATION
pbsuser="backupread@pbs"
#PBS_PASSWORD
export PBS_PASSWORD="Strongpassword!"
#DATASTORE PBS
pbsdatastore="DATASTORE"
#IP PBS
pbsip="192.168.200.2"
#SOURCE PBS
pbssource="proxmox-backup"
#CONTROL VARIABLE FOR SYNC DISK OR RESTORE, 1 ENABLED, 0 DISABLED
SYNCDISK=1
#CONTROL VARIABLE FOR SYNC DISK METHOD = BLOCKSYNC 0, BDSYNC 1
SYNCDISKTYPE=1
##VARIABLE NUMBER SNAPSHOT ZFS TO KEEP COMBINED WITH SYNCDISK, 0 DISABLE
SYNCDISKSNAP=21
#SCRIPT SYNC BLOCK LOCATION
BLOCKSYNC="/root/blocksync.py"
#BLOCK SIZE FOR TRANSFER
#DEFAULT 1MB = 1024 * 1024 = 1048576
#BLOCKSYNCSIZE="4194304"
#BLOCKSYNCSIZE="1048576"
BLOCKSYNCSIZE="1048576"
#TYPE OF HASH TO USE
#DEFAULT sha512
#
BLOCKSYNCHASH1="blake2b"
#TYPE OF HASH TO USE
#DEFAULT md5
#LIST openssl list -digest-algorithms
#blake2b512 SHA3-512 SHA512
BDSYNCHASH1="blake2b512"
#BDSYNCHASH1="SHA512"
#TEMPORARY DIRECTORY FOR SAVING DIFF FILES
BDSYNCTEMPDIR="/root/bdsynctmp"
#DEFAULT BLOCK 4096
BDSYNCSIZEBLOCK="1048576"
#BDSYNCSIZEBLOCK="1048576"
#LOCAL DESTINATION POOL
# FIND IN /etc/pve/storage.cfg
pooldestination="pool-data"
#TYPE
typesource="backup"
#BLOCK THE REPLICATION PROCESS IN CASE OF ERROR "1" TO EXIT, "0" TO CONTINUE IN CASE OF ERROR
ERROREXIT="1"
#INSERT THE REPLICA PROGRESS IN THE LOG, 1 ENABLED, 0 DISABLED
REPLICALOG="1"
#LOG SIMPLE, 1 ENABLED, 0 DISABLED
LOGSIMPLE="0"
#TEMPORARY DESTINATION DIRECTORY LOG
DIRTEMP="/tmp"
#LOG DIRECTORY
LOGDIR="${DIRTEMP}/vmrestore"
#LOG
LOG="${LOGDIR}/restorevm.log"
#ERROLOG
ELOG="${LOGDIR}/restorevm-error.log"
#CONTROL VARIABLE
ERRORCODE="0"
#VM REPLICATION EMAIL MESSAGE SUBJECT
MSG="${SERVERNAME} VM replication report"
MSGERROR="${SERVERNAME} ERROR VM Replication Report"
#PBS_REPOSITORY
export PBS_REPOSITORY="${pbsuser}@${pbsip}:${pbsdatastore}"
#----------------------------------------------------------------------------------#
############################# END CONFIGURATION OPTIONS ##########################
#----------------------------------------------------------------------------------#
function send_mail {
#CHECK THAT THE LOGS TO SEND EXIST
if [[ -f ${LOG} && -f ${ELOG} ]];
then
if [ ${ERRORCODE} -eq 0 ];
then
#NO ERROR
cat ${LOG} ${ELOG} | mail -s "${MSG}" "${TO}"
else
#ERROR IN REPLICATION, I ATTACH THE LOGS
cat ${LOG} ${ELOG} | mail -s "${MSGERROR}" "${TO}"
fi
else
#THERE ARE NO LOGS
if [ ${ERRORCODE} -eq 0 ];
then
#NO ERRORS BUT THERE ARE NO LOGS
echo "${MSGERROR}" | mail -s "${MSGERROR}" "${TO}"
else
#REPLICATION ERRORS AND THERE ARE NO LOGS
echo "${MSGERROR}" | mail -s "${MSGERROR}" "${TO}"
fi
fi
}
#CHECK THE EXIT STATUS OF THE PROGRAM OR COMMAND LAUNCHED
function controlstate {
if [ $? -ne 0 ];
then
echo "Error executing the command"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
fi
#CHECK WHETHER TO ABORT THE RESTORE PROCEDURE IN THE PRESENCE OF ERRORS
if [ ${ERROREXIT} != 0 ];
then
#echo "In case of errors I end the backup"
if [ ${ERRORCODE} != 0 ];
then
echo "There are errors, I'm stopping replication"
send_mail
exit 1
fi
fi
}
#I INCREASE THE ERROR CONTROL VARIABLE
function controlerror {
((ERRORCODE++))
}
#CHECK IF THE DESTINATION DIRECTORY EXISTS
function controldir {
if [ ! -d $1 ];
then
echo "Directory $1 does not exist, I create it"
/bin/mkdir -p $1
controlstate
fi
}
#VERIFY THAT THE JQ PROGRAM IS INSTALLED
function controljq {
#apt-get install jq
if ! jq -V &> /dev/null
then
echo "jq could not be found"
echo "Install with apt-get install jq "
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
}
#VM LOCK FUNCTION
function setqemuvmlock {
vid=$1
qm set $vid --lock backup
if [ $? -ne 0 ];
then
echo "Error set lock mode backup for VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
}
#VM LOCK REMOVE FUNCTION
function remqemuvmlock {
vid=$1
qm unlock $vid
if [ $? -ne 0 ];
then
echo "Error remove lock for VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
}
#VM RESTORE FUNCTION
function restorevm (){
#$bktype $vmid $lastbackuptime
bkt=$1
vid=$2
backuptime=$3
if [ ${LOGSIMPLE} -eq 0 ];
then
echo "Starting Recovery"
fi
if [ ${REPLICALOG} -eq 1 ];
then
#Log Replication Enabled
qmrestore $pbssource:$typesource/$bkt/$vid/$backuptime $vid --storage $pooldestination
controlstate
else
qmrestore $pbssource:$typesource/$bkt/$vid/$backuptime $vid --storage $pooldestination > /dev/null
controlstate
fi
if [ ${LOGSIMPLE} -eq 0 ];
then
#I SET THE DATE IN THE DESCRIPTION TO IDENTIFY THE BACKUP VERSION
qm set $vid --description $backuptime
controlstate
#DISABLE START VM ON BOOT
qm set $vid --onboot 0
controlstate
echo "Restore completed"
else
#I SET THE DATE IN THE DESCRIPTION TO IDENTIFY THE BACKUP VERSION
qm set $vid --description $backuptime > /dev/null
controlstate
#DISABLE START VM ON BOOT
qm set $vid --onboot 0 > /dev/null
controlstate
fi
}
#VM SNAPSHOT CREATION FUNCTION
function takesnap (){
#takesnap $vmid $curdesc $lastbackuptime
vid=$1
desctime=$2
newdesctime=$3
#COUNTER SNAPSHOT FOUND
snapcount=0
#SNAPSHOT LIST
snapshotstate=$(qm listsnapshot $vid)
#SAVE THE RESULT IN AN ARRAY WITH DELIMITER \r
readarray -t snapshotstatelist <<<"$snapshotstate"
#EMPTY VARIABLE FOR OLDER SNAPSHOT DATE
oldersnap=""
#EMPTY VARIABLE FOR OLDER SNAPSHOT
oldersnaptime=""
#EMPTY VARIABLE FOR NEWEST SNAPSHOT DATE
newestsnap=""
#CHECK FOR MORE SNAPSHOT
for snapc in ${!snapshotstatelist[@]}; do
#CLEAR THE SNAPSHOT NAME FROM THE SPACES
listsnap=$(echo ${snapshotstatelist[$snapc]} | sed 's/^[ \t]*//g' )
#IF IT IS NOT THE CURRENT STATUS
if [[ ! "${listsnap}" =~ ^"\`-> current" ]];
then
#SAVE THE OLDEST SNAPSHOT TO DELETE
#EXTRACT THE NAME OF THE SNAPSHOT
snapnametime=$(echo ${listsnap} | awk -F " " '{print $2}' )
#EXTRACT THE DATE FROM THE NAME
snapname=$(echo ${snapnametime} | sed 's/^s//g;s/_/:/g' )
if [[ ${snapc} -eq 0 ]];
then
#SAVE THE OLDEST SNAPSHOT
oldersnap=$snapname
oldersnaptime=$snapnametime
fi
#SAVE THE NEWEST SNAPSHOT
newestsnap=$snapname
((snapcount++))
fi
done
#CHECK THE NUMBER OF SNAPSHOTS PRESENT
if [[ ${snapcount} -gt ${SYNCDISKSNAP} ]];
then
#ERROR, THE NUMBER OF SNAPSHOTS PRESENT EXCEEDS THOSE ALLOWED
echo "Error, The number of snapshots present exceeds those allowed for the VM $vid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#CREATE THE FORMAT FOR THE SNAPSHOT NAME
backuptimemod=$(echo ${desctime} | sed 's/:/_/g' )
newsnapshot="s$backuptimemod"
#CHECK THAT THERE IS AT LEAST ONE SNAPSHOT OTHERWISE I WILL CREATE IT
if [[ -z "$oldersnap" ]] && [[ -z "$oldersnaptime" ]];
#THERE ARE NO PREVIOUS SNAPSHOTS, SO I CREATE ONE
then
#CREATE THE NEW SNAPSHOT
echo "There are no previous snapshots, I create the snapshot $newsnapshot for the VM $vid"
qm snapshot $vid $newsnapshot > /dev/null
if [ $? -ne 0 ];
then
echo "Error creating snapshot for VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
else
#THERE ARE PREVIOUS SNAPSHOTS: DELETE THE OLDEST AND CREATE THE NEW ONE
#MAKE SURE THAT THE NAME OF THE SNAPSHOT DOES NOT COINCIDE WITH THE NAME OF THE BACKUP DATE IN THE NOTES
if [[ "${newestsnap}" == "${desctime}" ]];
then
echo "Error, the snapshot name matches the current backup date for the VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#I CHECK THAT THE NUMBER OF SNAPSHOTS IS LESS OR EQUAL TO THE MAXIMUM NUMBER
if [[ ${snapcount} -le ${SYNCDISKSNAP} ]];
then
#IF IT IS THE SAME I DELETE THE OLDEST SNAPSHOT
if [[ ${snapcount} -eq ${SYNCDISKSNAP} ]];
then
#DELETE THE SNAPSHOT
echo "Deleting old snapshot $oldersnaptime for VM $vid"
qm delsnapshot $vid $oldersnaptime
if [ $? -ne 0 ];
then
echo "Error deleting snapshot for VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
fi
#CREATE THE NEW SNAPSHOT
echo "Creating snapshot $newsnapshot for VM $vid"
qm snapshot $vid $newsnapshot > /dev/null
if [ $? -ne 0 ];
then
echo "Error creating snapshot for VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
fi
fi
}
#VM BDSYNC SYNC DISK FUNCTION
function bdsyncstart (){
#$mapstatedev $curvmdiskconfpool $curvmdiskconfname
srcdev=$1
zfspool=$2
diskdst=$3
controldir ${BDSYNCTEMPDIR}
#START TIME COUNTER FOR DIFF FILE CREATION
starbdsync=`date +%s`
echo "Bdsync Start `date +%Y/%m/%d-%H:%M:%S` creation of diff file for disk $diskdst"
#CHECK WHETHER TO SAVE LOG OUTPUT
if [[ ${REPLICALOG} -eq 1 ]] && [[ ${LOGSIMPLE} -eq 0 ]];
then
bdsync --zeroblocks --progress --hash=$BDSYNCHASH1 --blocksize=$BDSYNCSIZEBLOCK "bdsync --server" $srcdev /dev/zvol/$zfspool/$diskdst | zstd -z -T0 > $BDSYNCTEMPDIR/$diskdst.zst
else
bdsync --zeroblocks --hash=$BDSYNCHASH1 --blocksize=$BDSYNCSIZEBLOCK "bdsync --server" $srcdev /dev/zvol/$zfspool/$diskdst | zstd -z -T0 > $BDSYNCTEMPDIR/$diskdst.zst
fi
if [ $? -ne 0 ];
then
echo "Serious error in Bdsync disk synchronization interrupted for Disk $diskds `date +%Y/%m/%d-%H:%M:%S`"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#STOP TIME COUNTER FOR DIFF FILE CREATION
stopbdsync=`date +%s`
synctime=$( echo "$stopbdsync - $starbdsync" | bc -l )
synctimehours=$((synctime / 3600));
synctimeminutes=$(( (synctime % 3600) / 60 ));
synctimeseconds=$(( (synctime % 3600) % 60 ));
echo "Bdsync End `date +%Y/%m/%d-%H:%M:%S` creation of diff file for disk $diskdst"
echo "Bdsync creation of diff file for $diskdst disk completed in: $synctimehours hours, $synctimeminutes minutes, $synctimeseconds seconds"
#START TIME COUNTER FOR DIFF FILE APPLICATION
starbdsyncrestore=`date +%s`
echo "Bdsync Start `date +%Y/%m/%d-%H:%M:%S` apply diff file for disk $diskdst"
#APPLY THE DIFF FILE TO THE DISK
zstd -d -T0 < $BDSYNCTEMPDIR/$diskdst.zst | bdsync --patch=/dev/zvol/$zfspool/$diskdst
if [ $? -ne 0 ];
then
echo "Serious error in Bdsync apply diff disk synchronization interrupted for Disk $diskds `date +%Y/%m/%d-%H:%M:%S`"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#STOP TIME COUNTER FOR DIFF FILE APPLICATION
stopbdsyncrestore=`date +%s`
syncrestoretime=$( echo "$stopbdsyncrestore - $starbdsyncrestore" | bc -l )
syncrestoretimehours=$((syncrestoretime / 3600));
syncrestoretimeminutes=$(( (syncrestoretime % 3600) / 60 ));
syncrestoretimeseconds=$(( (syncrestoretime % 3600) % 60 ));
echo "Bdsync End `date +%Y/%m/%d-%H:%M:%S` apply diff file for disk $diskdst"
echo "Bdsync application of diff file for $diskdst disk completed in: $syncrestoretimehours hours, $syncrestoretimeminutes minutes, $syncrestoretimeseconds seconds"
#REMOVE DIFF FILE
echo "Remove diff file $BDSYNCTEMPDIR/$diskdst.zst"
rm $BDSYNCTEMPDIR/$diskdst.zst
controlstate
}
#VM SYNC DISK FUNCTION
function syncdiskvm (){
#$bktype $vmid $lastbackuptime
bkt=$1
vid=$2
backuptime=$3
#DISK FOUND CHECK COUNTER
finddisk="0"
#RESTORED DISK VERIFICATION COUNTER
restoredisk="0"
#I PUT THE VM IN LOCK MODE TO SYNCHRONIZE THE DISKS
setqemuvmlock $vid
#SAVE THE LIST OF DISKS IN AN ARRAY
arraydisk=$(proxmox-backup-client list --output-format=json-pretty | jq -r '.[] |select(."backup-id" == "'$vid'" and ."backup-type" == "'$bkt'")."files"'| sed 's/[][]//g;s/\,//g;s/\s//g;s/\"//g')
#CHECK IF THE ARRAYDISK VALUE EXISTS
if [ -z "$arraydisk" ];
then
echo "Attention! Problem recovering the files list for VM $vid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#SAVE THE DISK MAP FROM THE BACKUP CONFIGURATION FILE
unmaparraydiskmap=$(proxmox-backup-client restore $bkt"/"$vid"/"$backuptime qemu-server.conf - | grep "^#qmdump#map" )
#SAVE THE RESULT IN AN ARRAY WITH DELIMITER \r
readarray -t arraydiskmap <<<"$unmaparraydiskmap"
#CHECK IF THE ARRAYDISKMAP VALUE EXISTS
if [ -z "$arraydiskmap" ];
then
echo "Attention! Problem recovering the map files list for VM $vid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#CHECK HOW MANY DISKS ARE AVAILABLE
for diskimg in $arraydisk; do
if [[ "$diskimg" == *.img.fidx ]];
then
#DISK FOUND CHECK COUNTER
((finddisk++))
#MAP THE CONTENT INTO STRING FORMAT
unmapstring=$(proxmox-backup-client unmap 2>&1)
#CHECK THAT THE COMMAND REPORTS THE MAP
if [[ "$unmapstring" != "Nothing mapped." ]];
then
#SAVE THE RESULT IN AN ARRAY WITH DELIMITER \r
readarray -t unmaparray <<<"$unmapstring"
#START LOOP LIST OF MOUNTED DISK
for unmapdev in ${!unmaparray[@]}; do
#SAVE THE MOUNTED "DEVICE".
devdisk=$(echo ${unmaparray[$unmapdev]} | awk -F " " "/$pbsdatastore/{print \$1}" |sed 's/://g')
#SAVE THE MOUNT PATH
diskmountpoint=$(echo ${unmaparray[$unmapdev]} | awk -F " " "/$pbsdatastore/{print \$2}")
#CHECK VM ID
mountid=$(echo $diskmountpoint | grep -oE ":$bkt/.{,3}" | cut -c5-7)
#CHECK THAT THE DISK ALREADY "MAP" IS NOT THE ONE OF THE VM TO SYNCHRONIZE
if [ "$mountid" == "$vid" ];
then
echo "Attention! Problem there are already disks mounted for this VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
done
fi
#THERE ARE NO DISKS MOUNTED I WILL CONTINUE
#CLEAR THE NAME OF THE DISK
diskimgmnt=$(echo $diskimg | sed 's/.fidx//g')
#MAP THE VIRTUAL DISK
mapstate=$(proxmox-backup-client map $bkt"/"$vid"/"$backuptime $diskimgmnt 2>&1)
#CHECK THE STATUS OF THE COMMAND MAP
if [[ $mapstate =~ "Error:" ]];
then
echo "Attention! Problem map disk for VM $vid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
else
#NOT IN ERROR CONTINUE
#SAVE THE MAP DEVICE /dev/loopXX
mapstatedev=$(echo $mapstate| grep -oE '/dev/.{,7}')
#CHECK IF THE MAP DEVICE VALUE EXISTS
if [[ -z "$mapstatedev" ]];
then
echo "Attention! Problem retrieving the current map device on the VM $vid "
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#FIRST CHECK THE DISK MAP FROM THE BACKUP CONFIGURATION FILE qemu-server.conf
#START THE LOOP TO SEARCH FOR DISK DEVICE
for dkmap in ${!arraydiskmap[@]}; do
#SAVE THE MAP "DEVICE".
##qmdump#map:virtio1:drive-virtio1:rbd:raw:
mapdkstr=$(echo ${arraydiskmap[$dkmap]} | sed 's/#qmdump#map://g')
#SAVE DEVICE TYPE virtio0 scsi0
mapdevice=$(echo $mapdkstr | awk -F ":" '{print $1}')
#SAVE DISK NAME drive-virtio0 drive-scsi0
mapdsk=$(echo $mapdkstr | awk -F ":" '{print $2}')
#CHECK THAT "MAP" IS CORRECT
if [[ -z "$mapdevice" ]] || [[ -z "$mapdsk" ]] ;
then
echo "Attention! Problem identifying the map on the VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#IDENTIFY THE NAME OF THE CURRENT DISK MOUNTED TO FIND IT IN THE MAP DEVICE drive-virtio0 drive-scsi0
diskimgmntsp=$(echo $diskimgmnt | sed 's/.img//g')
#CHECK THAT THE MOUNTED DISK HAS MAPPING
if [ $diskimgmntsp == $mapdsk ];
then
#SAVE THE CURRENT CONFIGURATION IN QEMU AND
#CLEAR THE CONFIGURATION STRING TO EXTRACT THE DATA scsi0,pool-data,vm-701-disk-0,iothread=1,size=8G
curvmdiskconf=$(qm config $vid | grep "^$mapdevice: "| sed 's/ //g;s/:/,/g')
#SAVE THE CONFIGURED UTILIZED POOL
curvmdiskconfpool=$(echo $curvmdiskconf | awk -F "," '{print $2}')
#CHECK IF THE POOL VALUE EXISTS
if [[ -z "$curvmdiskconfpool" ]];
then
echo "Attention! Problem retrieving the current pool on the VM $vid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#SAVE THE NAME OF THE DISK
curvmdiskconfname=$(echo $curvmdiskconf | awk -F "," '{print $3}')
#CHECK IF THE DISK NAME VALUE EXISTS
if [[ -z "$curvmdiskconfname" ]];
then
echo "Attention! Problem retrieving the current disk name on the VM $vid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#VERIFY THAT THE DESTINATION POOL OF THE SCRIPT AND THE ONE CONFIGURED IN THE VM ARE THE SAME
if [[ "$pooldestination" == "$curvmdiskconfpool" ]];
then
#CHECK WHETHER TO SAVE LOG OUTPUT
if [ ${REPLICALOG} -eq 1 ];
then
#PROCEED WITH STARTING SYNC DISK
if [ ${SYNCDISKTYPE} -eq 0 ];
then
echo "Starting disk synchronization with Blocksync `date +%Y/%m/%d-%H:%M:%S`"
python3 $BLOCKSYNC $mapstatedev localhost /dev/zvol/$curvmdiskconfpool/$curvmdiskconfname -b $BLOCKSYNCSIZE -1 $BLOCKSYNCHASH1 -f
if [ $? -ne 0 ];
then
echo "Serious error in disk synchronization interrupted"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
echo "End disk synchronization with Blocksync `date +%Y/%m/%d-%H:%M:%S`"
else
#echo "Starting disk synchronization with Bdsync `date +%Y/%m/%d-%H:%M:%S`"
bdsyncstart $mapstatedev $curvmdiskconfpool $curvmdiskconfname
fi
else
#PROCEED WITH STARTING SYNC DISK
if [ ${SYNCDISKTYPE} -eq 0 ];
then
echo "Starting disk synchronization with Blocksync `date +%Y/%m/%d-%H:%M:%S`"
python3 $BLOCKSYNC $mapstatedev localhost /dev/zvol/$curvmdiskconfpool/$curvmdiskconfname -b $BLOCKSYNCSIZE -1 $BLOCKSYNCHASH1 -f > /dev/null
if [ $? -ne 0 ];
then
echo "Serious error in disk synchronization interrupted"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
echo "End disk synchronization with Blocksync `date +%Y/%m/%d-%H:%M:%S`"
else
#echo "Starting disk synchronization with Bdsync"
bdsyncstart $mapstatedev $curvmdiskconfpool $curvmdiskconfname > /dev/null
fi
fi
#IF THERE HAVE BEEN NO ERRORS UNMAP THE DISK
#CAPTURE THE UNMAP STATUS AS A STRING
unmapstatus=$(proxmox-backup-client unmap $mapstatedev 2>&1)
#INCREASE THE SYNCHRONIZED DISK COUNTER
((restoredisk++))
else
echo "Attention! The target poll ($pooldestination)is different from the configured ($curvmdiskconfpool) one for VM $vid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi # END VERIFY THAT THE DESTINATION POOL OF THE SCRIPT AND THE ONE CONFIGURED IN THE VM ARE THE SAME
fi
done #END OF THE LOOP TO SEARCH FOR DISK DEVICES
fi # END CHECK THE STATUS OF THE MAP COMMAND
fi #END OF DISK .IMG.FIDX PRESENCE
done #END CHECK HOW MANY DISKS ARE AVAILABLE
#CHECK IF AT LEAST ONE DISK HAS BEEN FOUND OTHERWISE I GET AN ERROR
if [ $finddisk -eq 0 ];
then
echo "Attention! No disks found for VM $vid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#RESTORED DISK VERIFICATION COUNTER
if [ $finddisk != $restoredisk ];
then
echo "Attention! Inconsistency between available and restored disks VM $vid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#I REMOVE THE VM IN LOCK MODE TO SYNCHRONIZE THE DISKS
remqemuvmlock $vid
#ONCE DISK SYNC IS FINISHED, SAVE THE NEW DESCRIPTION
if [ ${LOGSIMPLE} -eq 0 ];
then
#I SET THE DATE IN THE DESCRIPTION TO IDENTIFY THE BACKUP VERSION
qm set $vid --description $backuptime
controlstate
#DISABLE START VM ON BOOT
qm set $vid --onboot 0
controlstate
echo "Disk sync completed"
else
#I SET THE DATE IN THE DESCRIPTION TO IDENTIFY THE BACKUP VERSION
qm set $vid --description $backuptime > /dev/null
controlstate
#DISABLE START VM ON BOOT
qm set $vid --onboot 0 > /dev/null
controlstate
fi
}
#START RESTORE
function startrestorevm (){
for vmid in ${VM[@]}; do
#START VM CYCLE
#BACKUP TYPE vm|ct
bktype="vm"
#I SELECT THE BACKUP WITH THE MOST RECENT DATE
lastbackuptimestamp=$(proxmox-backup-client list --output-format=json-pretty | jq -r '.[] |select(."backup-id" == "'$vmid'" and ."backup-type" == "'$bktype'") | ."last-backup"')
#CHECK IF THERE ARE BACKUPS
if [ -z "$lastbackuptimestamp" ]
then
echo "Attention! There are no backups to restore for the VM $vmid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
else
#I CONVERT THE TIMESTAMP INTO THE DATE TO BE USED FOR THE RESTORE
lastbackuptime=$(date +"%Y-%m-%dT%H:%M:%SZ" -ud @$lastbackuptimestamp )
#CHECK IF THE VM IS PRESENT
vmfind=$(pvesh get /cluster/resources --output-format=json-pretty | jq -r '.[] | select(."vmid" == '$vmid')')
if [ -z "$vmfind" ]
then
#THERE IS NO VM PRESENT. I PROCEED WITH THE RESTORE
if [ ${LOGSIMPLE} -eq 0 ];
then
echo "The VM $vmid is not present, I proceed with the complete restore"
fi
#START RECOVERY FUNCTION
restorevm $bktype $vmid $lastbackuptime
if [ ${LOGSIMPLE} -eq 1 ];
then
echo "The VM $vmid has been replicated"
fi
else
#THE VM IS ALREADY PRESENT
if [ ${LOGSIMPLE} -eq 0 ];
then
echo "The VM $vmid is already present, check if needs to be updated"
fi
#SAVE THE CURRENT DESCRIPTION, REMOVING THE ADDITIONAL CHARACTERS INSERTED BY QEMU
curdesc=$(qm config $vmid | grep '^description: '| awk '{$1=""}1'| sed 's/ //'| sed 's/%3A/:/g')
#CHECK IF THE DESCRIPTION HAS BEEN SAVED CORRECTLY
if [ -z "$curdesc" ];
then
echo "Attention! Problem recovering the replica version in the description for VM $vmid"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#CHECK THAT THE DESTINATION VM IS NOT LOCKED
curlock=$(qm config $vmid | grep '^lock: '| awk '{$1=""}1'| sed 's/ //'| sed 's/%3A/:/g')
#CHECK IF THE LOCK VALUE EXISTS
if [[ ! -z "$curlock" ]];
then
echo "Attention! Problem on the VM $vmid is in lock mode"
#I INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#echo "I verify that the backup is up to date"
if [ $lastbackuptime != $curdesc ]
then
#THE CURRENT VM HAS A DIFFERENT DATE
if [ ${LOGSIMPLE} -eq 0 ];
then
echo "The current VM $vmid has a different date $curdesc instead of $lastbackuptime"
fi
#CHECK IF THE VM IS RUNNING
vmstatus=$(pvesh get /cluster/resources --output-format=json-pretty | jq -r '.[] | select(."vmid" == '$vmid')| ."status"')
#CHECK IF THE STATUS HAS BEEN SAVED CORRECTLY
if [ -z "$vmstatus" ];
then
echo "Attention! Problem recovering the status for VM $vmid"
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
if [ $vmstatus == "running" ]
then
#ERROR BECAUSE THE VM IS RUNNING
echo "Attention! Error the VM $vmid is in running state "
#INCREASE THE CONTROL COUNTER AND PROCEED
controlerror
#SEND EMAIL REPORT
echo "Sending report email"
send_mail
exit 1
fi
#CHECK WHETHER TO DESTROY THE MACHINE OR SYNC THE DISKS
if [ ${SYNCDISK} -eq 0 ];
then
#START THE DESTRUCTION OF THE VM
if [ ${LOGSIMPLE} -eq 0 ];
then
echo "I start destroying the VM $vmid"
qm destroy $vmid --skiplock true --purge true
controlstate
else
qm destroy $vmid --skiplock true --purge true > /dev/null
controlstate
fi
#START RECOVERY FUNCTION
restorevm $bktype $vmid $lastbackuptime
else
#CHECK WHETHER THE FILESYSTEM SNAPSHOT NEEDS TO BE DONE
if [ ! ${SYNCDISKSNAP} -eq 0 ];
then
if [ ${LOGSIMPLE} -eq 0 ];
then
echo "Starting snapshot creation process"
takesnap $vmid $curdesc $lastbackuptime
echo "End of snapshot creation process"
else
takesnap $vmid $curdesc $lastbackuptime > /dev/null
fi
fi
if [ ${LOGSIMPLE} -eq 0 ];
then
#START DISK SYNC
syncdiskvm $bktype $vmid $lastbackuptime
else
#START DISK SYNC
syncdiskvm $bktype $vmid $lastbackuptime > /dev/null
fi
fi
else
#THE CURRENT VM HAS THE SAME DATE
if [ ${LOGSIMPLE} -eq 0 ];
then
echo "The VM $vmid is already present and is updated"
fi
fi
#echo "Fine cliclo modifica vm $vmid"
if [ ${LOGSIMPLE} -eq 1 ];
then
echo "The VM $vmid has been updated"
fi
fi
fi
done
}
#----------------------------------------------------------------------------------#
################################### START SCRIPT ###################################
#----------------------------------------------------------------------------------#
#CHECK IF THE DESTINATION DIRECTORY EXISTS
controldir ${LOGDIR} > /dev/null
echo "Start vm replication" >${LOG} 2>${ELOG}
echo "`date +%Y/%m/%d-%H:%M:%S`" >>${LOG} 2>>${ELOG}
#I MAKE SURE THAT THE JQ PROGRAM IS INSTALLED, OTHERWISE I EXIT
controljq >>${LOG} 2>>${ELOG}
#START RESTORE
startrestorevm >>${LOG} 2>>${ELOG}
#END REPLICATION
echo "End of replication procedure" >>${LOG} 2>>${ELOG}
echo "`date +%Y/%m/%d-%H:%M:%S`" >>${LOG} 2>>${ELOG}
#SEND EMAIL REPORT
echo "Sending report email" >>${LOG} 2>>${ELOG}
send_mail >>${LOG} 2>>${ELOG}
#----------------------------------------------------------------------------------#
################################### END SCRIPT ###################################
#----------------------------------------------------------------------------------#