

Scheduled on 22/01/2019 17:27:00 · Status: Finished · Estimated finish: 22/01/2019 17:27:00

We have finally achieved it: the latest version of Proxmox now works correctly from the client area, and no node running virtual machines will ever need to be restarted in production again.

We regret everything that has happened, but this improvement was necessary to ensure the long-term security and stability of a well-built cloud. The team that preceded me left behind a poorly designed platform in bad condition at several levels of security. In particular, the distributed storage system was unnecessary, unusable and, worst of all, badly assembled, so it was not adequate storage for VMs in production: the whole network kept losing response speed, and a single SAN housed the VMs of many nodes.

Regarding the disks of the VMs that sit on SAN01 with the distributed storage system (Ceph) installed by the previous team: the matter is now in the hands of the company https://croit.io. Again, you can rest assured that Ceph is built on top of a hardware RAID10 array. All the storage of the affected VMs is in that 3U enclosure with a 16-disk hardware controller, and the RAID10, created in hardware, is in optimal condition.

Ceph is a very complex system, and it made no sense for us if we want each VM to have its physical disk on its own node; to run it properly it would need n+3 replication, so keeping it was both pointless and very insecure. That is why it has been removed from the platform, now that Proxmox supports real HA and replication between nodes with just two isolated networks: a maintenance network that connects the nodes to each other without forming the cluster, and a second network that comes from the Nexus 5000 Series switch over SFP+ 10GbE fiber ports, one per node, which guarantees optimal gigabit speed for every VM on every node.
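As an illustration only (not the exact commands used on this platform), node-to-node replication of a guest in recent Proxmox VE versions can be set up with the built-in pvesr tool; the VM ID, target node name, and schedule below are hypothetical examples:

```shell
# Hypothetical sketch: create a replication job for guest 100,
# sending its disks to a node named "node02" every 15 minutes.
# (Proxmox VE storage replication requires ZFS-backed disks.)
pvesr create-local-job 100-0 node02 --schedule '*/15'

# List the state of all replication jobs on this node.
pvesr status
```

With replication in place, Proxmox HA can restart a guest on the target node from the most recent replicated copy if its home node fails, which is the role Ceph previously played here.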

I apologize again to those of you who have suffered these weeks without your VM; it is the least you deserve.
On the other hand, the plan you had has been cloned, so you now have two identical plans, both for an annual period at no cost. It's the least I could do. I am still waiting for this company to bring Ceph up to Luminous, since that is a current version; this is causing problems because SAN01 with Ceph was installed 4 years ago.
