Use less = get more! Changing the storage systems

Yesterday was the day I decided that a new solution for storing data is needed. If a service-pack upgrade on a virtual machine takes up to 17 hours, something might be wrong… but if the OS in the virtual machine is running without failures, there has to be a bottleneck.
So I checked the main storage, which handles all the data of the ESX cluster, and “tada”: disk arrays at 100% load. That meant about 190MB/s read and write on a RAID10 built from six 2TB 7.2k disks. The other RAID array in that server, a RAID1 built from two 300GB 15k disks, was also at full load.
Well, I was a bit confused, so I went to the rack and looked at the server. You know the HDD activity lights, which flash whenever a disk access happens? Those lights weren’t flashing anymore – they were lit permanently, just a solid burning green. Okay, so the storage really is at full load.
After that I thought, “Hmm, there might be space left on the other storages, let’s use that…” – but unfortunately there wasn’t. So I added up all the disk space before RAID arrays and backup volumes and ended up at 31.8TB. After RAID levels and cross backup between the storages, only about 12TB are usable – and to be honest, once you subtract the cross-backup space etc., it comes down to a data load of 3.5TB for VMs and 6TB of other data, a total of 9.5TB. That’s why I thought downgrading from 4 storages to 2 might be “cool”.
And here comes my plan:

System and VM Status

The plan is to take one storage offline and downgrade another to a simple DAS attached via miniSAS to the main storage.
In detail this means only 2 storage servers are left. One storage will stay on WOL: it will wake up at night, receive the backups from the main storage, and power down again afterwards. The deal is less energy, less hardware, fewer systems – but more I/O performance and more usable disk space. Sounds too good to be true?
In detail, the system will be built this way:

  • Main storage
    • CPU: 3GHz DualCore Xeon (Core 2 based)
    • RAM: 4x 2GB DDR2 800MHz ECC
    • MB: Supermicro X7SBE
    • NIC1: PCI-X Broadcom DualPort 1GbE
    • NIC2: PCI-X Broadcom DualPort 1GbE
    • NIC3: PCIe x4 Intel QuadPort 1GbE
    • NIC4: onboard, connected via PCI-X, Intel DualPort 1GbE
    • HDD for OS: 2x 40GB S-ATA SSD @RAID1 connected to the southbridge
    • RAIDCard: Adaptec 5805 with BBU
    • SAS Expander: Chenbro CK12804 and a bracket for an I/O slot with 2x SFF-8088
    • Case: 19″ 2U Supermicro with 8x SAS Hotswap 3.5″
    • PSU: 2x 720W running 1+1

Inside this system there will be a RAID5 array built from four 300GB 15k SAS drives and a second RAID5 built from four 147GB 15k SAS drives. And here comes the best part – I will rebuild a current storage based on a Chenbro case with twelve S-ATA hotswap bays into a simple DAS, which will be connected via two miniSAS SFF-8088 links from the main storage.

  • DAS system
    • SAS Expander: Chenbro CK13601
    • Case: 19″ 3U Chenbro with 12x S-ATA Hotswap 3.5″
    • PSU: 2x 480W running 1+1

Inside this DAS there will be a RAID6 of eight 1.5TB 7.2k disks and a RAID5 of four 1TB disks. All disks will be controlled by the Adaptec 5805 inside the main storage.
This change means an upgrade from 12TB usable to nearly 14TB – for a data load of only 9.5TB. And about 200W less power will be needed.
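As a sanity check, the usable capacity of the new layout can be estimated with the standard RAID formulas (data disks × disk size, before filesystem overhead). A quick sketch using the array sizes from the plan above:

```python
def usable_tb(raid_level: int, disks: int, disk_tb: float) -> float:
    """Usable capacity of an array for common RAID levels (rule-of-thumb, before FS overhead)."""
    if raid_level == 0:
        data_disks = disks            # striping only, no redundancy
    elif raid_level in (1, 10):
        data_disks = disks // 2       # everything mirrored once
    elif raid_level == 5:
        data_disks = disks - 1        # one disk worth of parity
    elif raid_level == 6:
        data_disks = disks - 2        # two disks worth of parity
    else:
        raise ValueError("unsupported RAID level")
    return data_disks * disk_tb

# The four arrays from the plan: (level, disk count, disk size in TB)
arrays = [
    (5, 4, 0.3),    # main storage: RAID5 of four 300GB 15k SAS
    (5, 4, 0.147),  # main storage: RAID5 of four 147GB 15k SAS
    (6, 8, 1.5),    # DAS: RAID6 of eight 1.5TB 7.2k
    (5, 4, 1.0),    # DAS: RAID5 of four 1TB
]
total = sum(usable_tb(*a) for a in arrays)
print(f"{total:.2f} TB usable")  # ~13.34 TB, i.e. "nearly 14TB"
```

Which confirms the "nearly 14TB" figure, with most of the space coming from the RAID6 in the DAS.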

Because I need daily backups of the main VMFS data and some other stuff, the third system will be a backup server. It will be woken via WOL and will shut down again after the copying is done.
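Waking the backup box can be scripted with a standard WOL magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast. A minimal sketch – the MAC address in the example is a placeholder, not the real NIC:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WOL magic packet: 6x 0xFF, then the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))

# Example (placeholder MAC of the backup server's NIC):
# wake("00:25:90:ab:cd:ef")
```

Run from a scheduled task on the main storage at night, followed by the copy job and a remote shutdown.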

  • Backup storage
    • CPU: 2.6GHz DualCore C2D E6600
    • RAM: 4x 2GB DDR2 667MHz ECC
    • MB: Supermicro PDSME+
    • NIC1: PCI-X Broadcom DualPort 1GbE
    • NIC2: PCI-X Broadcom DualPort 1GbE
    • NIC3: PCI-X Broadcom DualPort 1GbE
    • NIC4: onboard, connected via PCI-X, Intel DualPort 1GbE
    • HDD for OS: 2x 80GB S-ATA 7.2k @RAID1 connected to the southbridge
    • RAIDCard: HighPoint 3220 or 3ware 300-8X or something else, not sure yet…
    • Case: Supermicro 2U with 6x SAS HotSwap 3.5″
    • PSU: Single 540W

Actually I’ve got three RAID cards here which could do the job – so I’m not 100% sure which one will get it. As the backup data pool I’ll create a RAID5 of six 2TB 7.2k disks.

So far for today. If you have any suggestions or ideas on how to do it better, tell me! Looking forward to hearing from you.


  • http://www.mysha.de/ mysha

    I’m not the storage expert, but my approach would be first trying to understand the reason for the disk array being at full load.
    I’m a fan of consolidation and would also prefer RAID5 over RAID10, but are you sure that this solves the initial performance problem?

  • http://twitter.com/web207 MAFRI

    The disks are at full load because about 60 VMs are running from those 2 arrays..
    A RAID5 gives you more space but less I/O and more latency, and less latency is what’s needed. This is like running 60 computers from those disks. That’s why I’ve chosen a RAID10.
    I’m thinking of RAID5 for the new solution because there will be fewer VMs per array, and I’m able to split the root VMFS onto a SAS RAID5 and the data VMFS onto the S-ATA array. And those SAS drives have much more I/O power, so I hope they will handle it.
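    The RAID10-vs-RAID5 trade-off described here can be made concrete with the usual write-penalty rule of thumb (each front-end random write costs 2 back-end writes on RAID10, 4 on RAID5, 6 on RAID6). Illustrative only – the per-disk IOPS figures below are typical assumed values, not measurements of these servers:

    ```python
    def effective_write_iops(disks: int, disk_iops: int, write_penalty: int) -> float:
        """Front-end random-write IOPS an array can sustain, by the write-penalty rule."""
        return disks * disk_iops / write_penalty

    # Assumed typical per-disk random IOPS: 7.2k S-ATA ~75, 15k SAS ~175
    raid10_sata = effective_write_iops(6, 75, 2)   # old RAID10, six 7.2k disks
    raid5_sas = effective_write_iops(4, 175, 4)    # new RAID5, four 15k SAS disks
    print(raid10_sata, raid5_sas)
    ```

    The point of the formula: RAID5 pays double the write penalty of RAID10, so the faster 15k SAS spindles have to make up the difference.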

  • Pingback: Multi-Sockel-Hardwaretreff Part 2 - Seite 48

  • daheym

    I’ve still got a 2U DAS case for 12 disks with SFF-8088 connectors.
    Get in touch if you’re interested

    • http://mafri.ws MAFRI

      Just send me a mail ;-)

  • GeorgeZ

    What kind of storage system are you using?
    A ZFS system?

    • http://mafri.ws MAFRI

      Hi, I’m using NTFS. As the OS for the storage servers I’ve chosen Windows Storage Server 2008
