There has always been much discussion about whether the VirtualCenter database was important enough to back up. Most people agreed that the information the database held was not important: a datacenter and cluster could easily be reconfigured, and all other settings were saved on the hosts. HA didn’t even use VirtualCenter, and as for DRS, a day without DRS is something most companies could afford.
VirtualCenter 2.5 already contained a feature called “Distributed Power Management”. With this feature the VirtualCenter database became more important, but one could still do without it. Now, however, VMware has just released VirtualCenter 2.5 Update 2. This update contains a new feature for HA: HA will get its IP info straight from VirtualCenter instead of from the /etc/hosts file or DNS, and with this info HA populates /etc/FT_HOSTS. This suddenly makes the VirtualCenter database and the VirtualCenter server more important than ever.
I guess it’s time to start building the VirtualCenter server in a different way. Going virtual might be the best solution for a highly available VirtualCenter server and database. But what about actually backing up the database, via a maintenance plan or a backup engine? In time the VirtualCenter database will only become more important, especially as features like DPM evolve. I can imagine DPM detecting trends and switching servers on and off accordingly.
Anyway, the only message I wanted to get out is: start backing up that database!
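For those who haven’t set anything up yet, here is a minimal sketch of what a maintenance-plan-style job step could look like. It assumes SQL Server and a database named VCDB, and the share path is purely illustrative; check the actual database name and a suitable destination in your own environment:

```sql
-- Hypothetical example: back up the VirtualCenter database ("VCDB" is an
-- assumed name) to a central share that can then be spun off to tape.
BACKUP DATABASE VCDB
TO DISK = '\\backupserver\vcbackups\VCDB_full.bak'
WITH INIT,       -- overwrite the previous backup set in this file
     CHECKSUM,   -- verify page checksums while writing the backup
     STATS = 10; -- report progress every 10 percent

-- Optionally verify that the backup is restorable before it goes to tape.
RESTORE VERIFYONLY
FROM DISK = '\\backupserver\vcbackups\VCDB_full.bak';
```

Scheduled nightly through a SQL Server Agent job or a maintenance plan, this takes a couple of minutes to set up and saves you from rebuilding folder structures, permissions, and cluster settings by hand.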
Dave C says
Duncan –
I have always thought the database to be somewhat important. Without it, you would need to recreate all of the datacenter, cluster and resource pool info. In a large environment with several datacenters and hosts, this would be time consuming.
A simple scheduled maintenance job that backs up the DBs to a central share that gets spun off to tape has always been what I preach to customers.
Dave
John says
Duncan – Where else is the Virtual Machines and Templates view stored? We have ~150 VM administrator accounts and well in excess of 500 VMs deployed. If I lost the folder structure it would take days to put it all back together, and my users would have a fit!
The other problem we hit once, when our DB was lost, was that we had to manually re-add all the templates (over 250 of them) to the VC. (That was back on VC 1.3.) We take weekly full backups now and even go so far as to put them on tape and ship them off site.
We also hammer our VC with the number of users we have concurrently connected. CPU utilization is regularly pegged (on a 2-socket dual-core box), and we run SQL on an entirely different box just to reduce the load.
Will someone at some point beg, borrow, and plead with product management on our behalf? VC is the single point of failure in my entire deployment, and I *NEED* load balancing or at least redundancy.
mrz says
I totally agree with John – VC is and has been a single point of failure. It would be nice if I could run two VC servers for redundancy.
Rob Mokkink says
For a highly available VC we use Neverfail for VirtualCenter, because it offers the best of both worlds:
– high availability for the OS and VC services
– the data is protected as well
We did a POC and the product works great.
We will soon use it in the production environment.
Running the VC server as a virtual machine on an ESX cluster only makes the OS and services highly available, not the data and the database.
John says
Has anyone seen anything from VMware on how to address this?
I don’t want another vendor to deal with (even if Neverfail looks good). This should be out of the box functionality.
Let’s not even start about poor performance over virtual infrastructure client and long distances…
Magnus says
On the load balancing and high availability part, I agree it would be nice to have more than one VC.
When it comes to disaster recovery, I see no need for VMware to address this. Use the same backup and maintenance procedures you would for any other SQL Server database, and ship the backups off the server in case of disaster.