
Proxmox

Scheduled on 22/01/2019 17:27:00 · Status: Finished · Estimated finish: 22/01/2019 17:27:00

We have finally achieved this: the latest version of Proxmox now works correctly from the client area, and no node hosting virtual machines will ever need to be restarted in production again.

We are sorry for everything that has happened, but this improvement was necessary to allow the long-term security and stability of a well-built cloud. The team that preceded me left a poorly designed platform in bad shape at several levels of security, above all the distributed storage system, which was unnecessary and unusable and, worse, badly assembled. It was not adequate storage for VMs in production: the whole network kept losing response speed, and worst of all, a single SAN housed the VMs of many nodes.

Regarding the disks of the VMs that sit on SAN01 with the distributed storage system (Ceph) installed by the previous team: the matter is in the hands of the company https://croit.io. Again, you can rest assured that Ceph was created on top of a hardware RAID10 array; all the storage for the affected VMs lives in that 3U enclosure with a hardware controller for 16 disks, and the hardware RAID10 is in optimal condition.

Ceph is a very complex system, and it did not make sense if we want the VMs to have their physical disk on each node; to do it properly it would have to be replicated n+3, so it made no sense and was very insecure. That is why it has been removed from the platform, now that Proxmox allows real HA and replication between nodes with only two isolated networks: a maintenance network that connects the nodes to each other without forming the cluster, and another that comes from the Nexus 5000 series switch over SFP+ 10 GbE fiber ports, one per node, which guarantees optimal gigabit speed for each VM on each node.
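For reference, per-VM replication and HA on a Proxmox 5.x cluster are enabled roughly like this; a minimal sketch assuming local ZFS storage (which Proxmox storage replication requires), a VM with ID 100 and a target node named pve2, both placeholder values rather than our actual names:

    # Replicate VM 100's disks to node pve2 every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule "*/15"

    # Show the state of all configured replication jobs
    pvesr status

    # Put the VM under HA management so it is restarted or relocated automatically
    ha-manager add vm:100 --state started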

I apologize again to those of you who have gone these weeks without your VM; it is the least you deserve.
On the other hand, your existing plan has been cloned, so you now have two identical plans, both for an annual period at no cost. It is the least I could do. I am still waiting for that company to bring Ceph Luminous up, since it is a current version; this keeps causing problems because SAN01 with Ceph was installed 4 years ago.

Associated servers / services

Proxmox

Emergency upgrade for Proxmox Cloud - Removed the SAN Ceph backup storage; HA is now isolated on each node with separate disks.

Scheduled on 31/12/2018 12:00:00 · Status: In Progress · Estimated finish: 02/01/2019 19:38:00

Dear customer, we have upgraded the 5 PVE nodes to Proxmox 5.3 and they will soon be back online. This was an emergency.

The upgrade to the latest Proxmox 5.3.1 on ALL nodes has been completed. The main changes are:

Removed the Ceph SAN used for VMSTORAGE.
Provisioned isolated disks per node.
VMSTORAGE on an isolated disk per node.
Backup and ISO folders are now independent, on separate disks on each node (see the storage sketch after this list).
HA available as an option from the package area, free of charge: real High Availability.
QEMU Guest Agent activated per VM (see the example after this list).
Corosync activated and automatically monitored.
Internal network at 10,000 Mbit/s (10 GbE).
The cloud-init package can be installed via apt-get on a new Debian template (see the example after this list).
Option to choose the disk controller, network cards, and port speed during the order process and from the package area.
Option to choose the node from the package area.
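For reference, the per-node backup and ISO layout can be registered in Proxmox roughly like this; a minimal sketch assuming mount points /mnt/backup and /mnt/iso and a node named pve1, all placeholder names rather than our actual ones:

    # Register a directory storage for backups, restricted to a single node
    pvesm add dir backup-pve1 --path /mnt/backup --content backup --nodes pve1

    # Register a directory storage for ISO images on the same node
    pvesm add dir iso-pve1 --path /mnt/iso --content iso --nodes pve1

The same is repeated on each node with its own storage ID, so every node keeps its backups and ISOs on its own disks.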
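Likewise, the QEMU Guest Agent and cloud-init items work roughly as follows; a minimal sketch assuming VMID 100 for the agent, VMID 9000 for the Debian template and a storage named local-lvm, all placeholder values:

    # QEMU Guest Agent: enable the option on the node, then install the agent inside the guest
    qm set 100 --agent enabled=1
    apt-get update && apt-get install -y qemu-guest-agent   # inside the Debian guest
    systemctl enable --now qemu-guest-agent                 # inside the guest

    # Cloud-Init on a new Debian template: install the package inside the future template VM
    apt-get install -y cloud-init

    # On the node: attach a cloud-init drive, use a serial console, convert to a template
    qm set 9000 --ide2 local-lvm:cloudinit
    qm set 9000 --serial0 socket --vga serial0
    qm template 9000

    # Clones of the template pick up credentials and network settings via cloud-init
    qm clone 9000 123 --name example-vm
    qm set 123 --ciuser debian --ipconfig0 ip=dhcp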

We continue working hard to finish this big upgrade of the platform. No VM disk will be affected. Everything is on track.

Associated servers / services

PROXMOX PLATFORM

Added AbuseIO (Abuse.io)

Scheduled on 25/12/2018 14:12:00 · Status: In Progress · Estimated finish: 25/12/2018 14:12:00

Features
AbuseIO’s main features:
Receive (through a mailserver handler, e.g. Postfix) abuse messages and automatically parse them into abuse reports
Combine reports that already have an open case to reduce the amount of noise
Classify each type of abuse and create actions on specific cases
Create locally defined customers and/or netblocks or easily integrate your own IPAM system to resolve IP addresses to customers
Set automatic (re)notifications per case or customer
Set automatic escalation paths, triggers and actions
Allow customers to reply, close or add notes to cases, keeping them organized
Link customers to a self help portal in case they need more help
Works with IPv4 and IPv6 addresses
Hook events to external scripts, e.g. tooling that places hosts in quarantine (see the sketch below)
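As an illustration of the external-script hook above, a small quarantine script could look like the following minimal sketch, assuming the hook passes the offending IPv4 address as its first argument (the actual calling convention depends on the AbuseIO configuration) and that ipset and iptables are available on the host:

    #!/bin/sh
    # quarantine.sh - drop traffic from an abusive IP via an ipset set
    IP="$1"
    [ -n "$IP" ] || { echo "usage: $0 <ip-address>" >&2; exit 1; }

    # Create the set once (-exist makes this safe to repeat), then add the IP
    ipset create quarantine hash:ip -exist
    ipset add quarantine "$IP" -exist

    # Ensure a DROP rule referencing the set is present
    iptables -C INPUT -m set --match-set quarantine src -j DROP 2>/dev/null \
      || iptables -I INPUT -m set --match-set quarantine src -j DROP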
Available parsers / collectors:
Any RFC compliant ARF formatted message
Any RFC compliant FBL Messages (Feedback Loop)
Any DNS based RBL
Shadowserver (www.shadowserver.org)
SpamCop (www.spamcop.net)
IP Echelon (www.ip-echelon.com)
fail2ban reporting service (www.blocklist.de)
Junk Email Filter (www.junkemailfilter.com)
Google Safe Browsing reports for ASNs (safebrowsingalerts.googlelabs.com)
Project Honey Pot (www.projecthoneypot.org)
Clean MX (http://www.clean-mx.de)
Cyscon / C-SIRT (https://www.c-sirt.org)
Netcraft (http://www.netcraft.com/)
SpamExperts (https://www.spamexperts.com)
USGO-Abuse
Microsoft SNDS
Abuse-IX (https://www.abuseinformationexchange.nl/)
Woody (http://www.woody.ch/)
Webiron (https://www.webiron.com/)
Copyright Compliance
Cegtek (http://www.cegtek.com/)
Juno (http://www.juno.com/)
Parsers being developed:
Bambenek
Arbor
Autoshun
Brute Force Blocker project
DragonBot
Malc0de
abuse.ch
Open blacklist
Phishtank
CI Army (http://www.ciarmy.com/#list)

Associated servers / services

IP white-list protection