
Disk Controller Features and Queue Depth?

Duncan Epping · Apr 17, 2014

I have been working on various VSAN configurations, and a question that always comes up is: what are the features and the queue depth of disk controller X? (Local disks, not FC based…) Note that this is not only useful to know when using VSAN, but also when you are planning on doing host-local caching with solutions like PernixData FVP or SanDisk FlashSoft, for instance. The controller used can impact performance, and a really low queue depth will result in lower performance; it is as simple as that.

** NOTE: This post is not about VSAN disk controllers, but rather about disk controllers and their queue depth. Always check the HCL before buying! **

I kept finding myself digging through documentation and doing searches on the internet until I stumbled across the following website. I figured I would share the link with you, as it will help you (especially consultants) when you need to go through this exercise multiple times:

http://forums.servethehome.com/index.php?threads/lsi-raid-controller-and-hba-complete-listing-plus-oem-models.599/

Just as an example, the Dell H200 Integrated disk controller is on the VSAN HCL. According to the website above it is based on the LSI 2008 and provides the following feature set: 2×4-port internal SAS, no cache, no BBU, and RAID 0, 1 and 10. According to the VSAN HCL it also provides “Virtual SAN Pass-Through”. The only info missing is the queue depth of the controller, and I have not been able to find a good source for that. So I figured I would make this post a source for that info.

Before we dive into that, I want to show something else that is important to realize. Some controllers take SAS / NL-SAS as well as SATA drives. Although the price difference between SATA and NL-SAS is typically negligible, the queue depth difference is not. Erik Bussink was kind enough to provide these details for one of the controllers he is using as an example; first in the list is the “RAID” device, second is SATA, and third is SAS. As you can see, SAS is the clear winner here, and that includes NL-SAS drives.

mpt2sas_raid_queue_depth: int
     Max RAID Device Queue Depth (default=128)
mpt2sas_sata_queue_depth: int
     Max SATA Device Queue Depth (default=32)
mpt2sas_sas_queue_depth: int
     Max SAS Device Queue Depth (default=254)
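
A side note: the listing above is Linux modinfo output for the mpt2sas driver. If you want to check what an ESXi host exposes for the same module, something along these lines should work (a sketch: the mpt2sas module name assumes an LSI SAS 2000 series based controller, and which parameters exist differs per driver build, so treat the parameter in the second command as an assumption rather than a given):

  # list the parameters (and any values set) that the mpt2sas driver exposes
  esxcli system module parameters list -m mpt2sas

  # example only: set a parameter, if your driver build exposes it
  esxcli system module parameters set -m mpt2sas -p "mpt2sas_sata_queue_depth=64"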

If you want to contribute, please take the following steps and report the vendor, controller type, and AQLEN value in a comment. (A short sketch of what this looks like follows the list.)

  1. Run the esxtop command on the ESXi shell / SSH session
  2. Press d
  3. Press f and select Queue Stats (d)
  4. The value listed under AQLEN is the queue depth of the storage adapter
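
For those who have not used the esxtop queue stats before, the whole sequence looks roughly like this (a sketch, not verbatim output; the adapter name and value are illustrative, and the column layout follows the esxtop excerpt readers posted in the comments below):

  ~ # esxtop     (start esxtop from the ESXi shell / SSH session)
  d              (switch to the disk adapter view)
  f              (open the field selector)
  d              (toggle “QSTATS = Queue Stats” on, any other key returns)

  ADAPTR PATH NPTH AQLEN
  vmhba1 -       4  1020    <- AQLEN is the adapter queue depth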

The following table shows the vendor, controller, and queue depth. Note that this is based on what we (my readers and I) have witnessed in our labs, and results may vary depending on the firmware and driver used. Make sure to check the VSAN HCL for the supported driver / firmware version. Also note that not all controllers below are on the VSAN HCL; this is a “generic” list, as I want it to serve multiple use cases.

Generally speaking, it is recommended to use a disk controller with a queue depth greater than 256 when used for VSAN or “host local caching” solutions.
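
To put a rough number on why this matters: the adapter queue depth caps how many IOs can be outstanding at the controller at once, and by Little’s Law throughput ≈ outstanding IOs / latency. As a back-of-the-envelope illustration (made-up figures, not a benchmark): at an average of 1 ms per IO, a depth of 25 sustains at most 25 / 0.001 = 25,000 IOPS across the whole adapter, while a depth of 600 allows up to 600,000. Put a few SSDs and a dozen spindles behind a depth-25 controller and it quickly becomes the hourglass described in the comments below.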

Vendor   Disk Controller                  Queue Depth
-------  -------------------------------  -----------
Adaptec  RAID 2405                        504
Dell     (R610) SAS 6/iR                  127
Dell     PERC 6/i                         975
Dell     PERC H200 Integrated             600
Dell     PERC H310                        25
Dell     PERC H330                        256
Dell     (M710HD) PERC H200 Embedded      499
Dell     (M910) PERC H700 Modular         975
Dell     PERC H700 Integrated             975
Dell     (M620) PERC H710 Mini            975
Dell     (T620) PERC H710 Adapter         975
Dell     (T620) PERC H710p                975
Dell     PERC H810                        975
HP       Smart Array B110i                1020
HP       Smart Array B120i                31
HP       Smart Array P220i                1020
HP       Smart Array P400i                128
HP       Smart Array P410i                1020
HP       Smart Array P420i                1020
HP       Smart Array P440ar               1011
HP       Smart Array P700m                1200
IBM      ServeRAID-M5015                  975
IBM      ServeRAID-M5016                  975
IBM      ServeRAID-M5110                  975
Intel    C602 AHCI (Patsburg)             31 (per port)
Intel    C602 SCU (Patsburg)              256
Intel    RMS25KB040                       600
LSI      2004                             25
LSI      2008                             25 / 600 (firmware dependent!)
LSI      2108                             600
LSI      2208                             600
LSI      2308                             600
LSI      3008                             600
LSI      9271-8i                          975
LSI      9300-8i                          600


Filed under: Server, Storage, vSAN · Tagged: pernixdata, sandisk, vflash, vfrc, vsan, vSphere


Comments

  1. Hans De Leenheer says

    17 April, 2014 at 12:05

You bring up a very good point Duncan, thanks for that. The only question that remains from an architecting/design point of view is how big the impact of the queue depth is. > 250 would suit small/regular workloads well, but that is still a great difference (probably in price as well) compared to 1000+.

Question 1: Where is 250 no longer enough; where would you need to go for at least one of those 600s?
Question 2 (to avoid the “we are the best” FUD): is there a useful ceiling? Is there a point where a bigger queue depth just doesn’t make sense anymore, or does that point not exist and is bigger always better?

    • Duncan Epping says

      18 April, 2014 at 10:50

Good question… In the end it is going to depend on your IO pattern, the number of disks and SSDs attached to the controller, and what you can afford, I guess. Consider though that a controller with a queue depth of 600, like the 2208, is only a couple of bucks more than the 2008 (typically around 150 bucks more per server with Supermicro).

Q1: A SATA disk typically has a queue depth of 32, while SAS and NL-SAS have 256. So 600 is the minimum you should aim for, if you ask me, when you do VSAN or local host caching. If you take anything less you get this nice hourglass effect 🙂

Q2: Not sure if there really is a ceiling, to be honest. I mean, a bigger queue depth will allow you to use the queues of your devices more efficiently, whether those are SAS magnetic disks or SAS flash devices.

      Hope that helps. (PS, I am not the real expert on this topic either)

  2. Fabian Lenz says

    17 April, 2014 at 12:30

    LSI LSI2004 AQLEN=425

    • Duncan Epping says

      17 April, 2014 at 13:03

      Thanks, added!

  3. Frederic MARTIN says

    17 April, 2014 at 14:28

    Hi Duncan,
    For Dell PERC 6/i Integrated, AQLEN = 975

    • Duncan Epping says

      17 April, 2014 at 14:54

Which host type? As someone else just mentioned, theirs shows 127 🙂

      scratch that… that is the 6/iR instead of the PERC 6/i

  4. Erik Bussink says

    17 April, 2014 at 14:29

Hello Duncan,

    For the LSI 9300-8i based on the LSI 3008 the AQLEN=600

    • Duncan Epping says

      17 April, 2014 at 15:28

      Added, thanks!

  5. Peter Vandermeulen says

    17 April, 2014 at 15:20

I tested a few of our Dell servers:
    Dell PERC H710 Mini 975
    Dell PERC H810 975
    Dell PERC H700 Integrated 975

    • Duncan Epping says

      17 April, 2014 at 15:28

      Added, thanks!

  6. Peter Kruit says

    17 April, 2014 at 15:54

    Adaptec RAID 2405, AQLEN=504

    • Duncan Epping says

      17 April, 2014 at 16:47

      Thanks

  7. Wade Holmes says

    17 April, 2014 at 16:33

Please be aware that queue depth varies depending on the driver. For example, the queue depth of the LSI2004 = 25 with the driver that will be supported by VSAN (megaraid_sas). For the LSI3008, queue depth can be either 256 or 1024, depending on the driver. VMware is working on having this information added to the VMware VCG for VSAN.

    • Duncan Epping says

      17 April, 2014 at 16:47

      Thanks

    • Robert Rizzi says

      19 November, 2014 at 15:45

Can you provide an update as to how we can control which driver (or did you mean firmware?) is selected/used for the LSI 3008 controller? 256 vs 1024 is quite a big difference, and I want to be sure we have the benefit of the higher queue depth due to our issues below.

      We have been working with VMware support for about 5 days now at severity 2, and haven’t made any progress with performance yet.

We are experiencing severe latency issues on a lightly loaded three-host VSAN with Fusion-io 825GB PCIe SSDs, which are rated for 40K+ IOPS according to the VCG for VSAN. According to RVC our cache hit rates are almost 100%, but latency appears in red, among other graphs. The issue might be the write-back to physical disk latency, and the only thing in the middle here is the LSI 3008 HBA. BTW, we have three 1.2TB 10K SAS2 enterprise disks attached to each HBA. Increasing stripe width from the default to 2 or 3 makes little or no difference either. We have dedicated VSAN VMkernel adapters connected through a dedicated 10GbE fully managed enterprise switch stack for the uplinks.

  8. Mike D. says

    17 April, 2014 at 17:24

    HP Smart Array P400i – 128
    Qlogic ISP2432 – 2176

    • Duncan Epping says

      17 April, 2014 at 18:17

      The QLogic card is a fiber channel HBA right?

      • Mike D. says

        17 April, 2014 at 19:35

        yes…

    • shaneschnell says

      18 April, 2014 at 01:10

      I can confirm that Qlogic ISP2532 is also 2176

  9. Joe says

    17 April, 2014 at 21:35

    HP P700m – 1200

    • Duncan Epping says

      17 April, 2014 at 23:55

      thanks!

  10. vmkdaily (@vmkdaily) says

    17 April, 2014 at 23:28

    Great post! Always happy to learn a new trick (finding queue depth via esxtop) so thanks!

You already had the info for the H710 Mini I included below, but I’m also listing the server model in case that helps anyone, and I’ve added a few new ones that weren’t listed yet. The following are a few flavors of Dell blades and a couple of types of T620s we use for our field offices running ESXi. Device names were verified via lspci.

    Dell PowerEdge M910 = Dell PERC H700 Modular, 975
    Dell PowerEdge M620 = Dell PERC H710 Mini, 975 (already listed)
    Dell PowerEdge T620 = Dell PERC H710 Adapter, 975
    Dell PowerEdge T620 = Dell PERC H710P Adapter, 975

    • Duncan Epping says

      17 April, 2014 at 23:55

      thanks!

  11. Nico Broos says

    18 April, 2014 at 08:36

    HP Smart Array P220i = 1020

    • Duncan Epping says

      18 April, 2014 at 09:44

      thanks

  12. Marco Broeken says

    18 April, 2014 at 08:47

    HP Smart Array P420i (G8 Servers) also 1020

    • Duncan Epping says

      18 April, 2014 at 09:44

      thanks!

  13. David Pasek says

    18 April, 2014 at 08:54

    DELL (M710HD), PERC H200 Embedded => 499

    • Duncan Epping says

      18 April, 2014 at 09:44

      thanks!

  14. Nico Broos says

    18 April, 2014 at 10:05

Duncan, the P220i = 1020, not 128.

    • Duncan Epping says

      18 April, 2014 at 10:30

      thanks, copy/paste gone bad.

  15. Tyson Then says

    18 April, 2014 at 14:45

    IBM ServeRAID-M5015 = 975

    • Duncan Epping says

      21 April, 2014 at 22:02

      Thanks, added!

  16. Michael says

    20 April, 2014 at 13:39

What happens next with NVMe SSDs? Does this mean they have a queue depth of 64000?

    http://www.extremetech.com/computing/161735-samsung-debuts-monster-1-6tb-ssd-with-new-high-speed-pcie-nvme-protocol

Look at the comparison table, AHCI versus NVMe:
    http://www.extremetech.com/wp-content/uploads/2013/07/NVMe23.png

    • Duncan Epping says

      21 April, 2014 at 22:02

      Nice! I want…

    • Dave Edwards says

      13 June, 2014 at 07:12

The NVMe specification outlines support for 64K IOs per queue, and many queues. Controller data sheets will specify the maximum number of queues and the maximum concurrent IOs that can be processed by the controller; they will if they want to help us, anyway :). The equivalent comparison to this article (great article BTW) would be: the controller supports 64K (and more) IO submissions, but just as SAS sits at 254 and SATA at 32, an NVMe controller will need to specify the maximum number of IOs it can process concurrently, which I assure you will not be 64K :). That said, the trick will be to balance outstanding requests with the device’s (and driver’s) ability to keep up. Once you overload the device, the kernel will throttle and queue to prevent stalling the system while the device chokes through its list of IO requests.

  17. Michael says

    22 April, 2014 at 18:22

I did not find any Storage vMotion tests between two datastores. Is it possible with VSAN to test on the same host whether you can saturate a 10 Gigabit link at the VMware layer?

  18. Michael says

    22 April, 2014 at 18:27

Not cheap, but expect a large queue depth from the Nytro MegaRAID NMR 8140-8e8i RAID controller.
    http://www.tomshardware.com/news/lsi-expands-nytro-caching-linup,26427.html#lsi-expands-nytro-caching-linup%2C26427.html?&_suid=139799487758606852617685811935

  19. Patrick says

    23 April, 2014 at 08:17

    Hi Duncan, here’s my addition:

    Dell PERC H200 Integrated (R810) > 600

    gr,
    Patrick.

    • Duncan Epping says

      23 April, 2014 at 09:27

      Thanks!

  20. Sylvain says

    23 April, 2014 at 15:44

    Hi,
I also checked the parameter on an HP ProLiant Gen8 with a P220i and a P420i, and I only see 28 for each in the AQLEN parameter. It’s very far from the values others observed. Is there a parameter somewhere that could have been set and could be limiting it? I see very poor performance results so far on my VSAN cluster, and I think something must be wrong; it might be this queue length.
    Sylvain

    • Duncan Epping says

      23 April, 2014 at 16:02

      Are you using the latest supported firmware and drivers?

      • Sylvain says

        23 April, 2014 at 16:52

Yes: everything is updated with the latest disk firmware, and I used the HP ESXi 5.5 U1 ISO.
But I applied existing server profiles to configure the servers, and I am beginning to wonder if an old parameter could be in there, inherited from an older server with less capable hardware.
What do you think?

        • Matt Mabis says

          14 May, 2014 at 22:07

This is caused by using the HP async driver; the inbox driver is the supported vSAN driver, and AQLEN will change back from 28 to 1020 once you revert to it. To test this, all you have to do is download the offline bundle and use the esxcli commands to remove the async driver and install the inbox driver.

          • Sylvain says

            2 June, 2014 at 16:39

Hello Matt,
Sorry for the late reply, but I only just saw your comment.
Thank you for your answer.
Can you elaborate a little more (or give me a link to the procedure), please?

          • Sylvain says

            3 June, 2014 at 10:22

Hi,
Just wanted to let you know I managed to swap the driver, and now I see the correct AQLEN of 1020.
For those interested, here is the process I used:

Download update-from-esxi5.5-5.5_update01.zip from VMware (ESXi download section)
Unzip it and extract VMware_bootbank_scsi-hpsa_5.5.0-44vmw.550.0.0.1331820.vib
Upload it to the host or a shared datastore
SSH to the host (or use the local console) as root and run:

esxcli software vib remove -n "scsi-hpsa"
            esxcli software vib install -v /path_you_stored_the_vib_file/VMware_bootbank_scsi-hpsa_5.5.0-44vmw.550.0.0.1331820.vib
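# optional sanity check (my addition, not part of Sylvain's original steps):
# confirm which scsi-hpsa VIB is installed before rebooting
esxcli software vib list | grep scsi-hpsa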

            reboot, et voilà!

            Thanks again Matt.

    • Ryan says

      8 May, 2014 at 21:36

      I’m seeing the exact same thing (28) – DL380p Gen8, P420i, HP ESXi 5.5u1 fully patched. My Fusion-IO ioDrive2 (PCIe SSD) shows 5000, which sounds nice. 🙂

  21. Eric Gray says

    24 April, 2014 at 23:11

    My LSI2008 shows 600

  22. John Nicholson. says

    4 June, 2014 at 02:11

Can confirm the LSI 2008 shows 600 once upgraded (I upgraded an ASUS 2008 PIKE card).

    • John Nicholson. says

      5 June, 2014 at 19:12

Quick write-up; the performance difference between 25 and 600 is pretty massive:

      http://thenicholson.com/lsi-2008-dell-h310-vsan-rebuild-performance-concerns/

      • Duncan Epping says

        5 June, 2014 at 23:46

        Which firmware version are you using and which driver version John?

        • Matt Mabis says

          11 June, 2014 at 14:53

Duncan, he most likely used the same method I used to flash my H310s to JBOD mode.

See link:
http://mywiredhouse.net/blog/flashing-dell-perc-h310-firmware/

  23. Beni says

    19 June, 2014 at 11:25

I am a little confused.

http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
lists the Dell H310 as compliant, but the queue depth is very bad?

    We have several servers based on this configuration. Should we replace them?

    Best regards

    • Duncan Epping says

      19 June, 2014 at 12:01

I personally would not run these disk controllers myself, to be honest. But I cannot make an official VMware statement and say you need to replace them. I suggest contacting your VMware representative and asking him/her to pass this on to product management.

  24. Jesper says

    23 June, 2014 at 20:25

    IBM Serveraid M5110 – AQLEN 975

    • Duncan Epping says

      23 June, 2014 at 21:18

      Thanks!

  25. IronMtn says

    26 June, 2014 at 13:21

    Hi Duncan,

I’ve been looking at disk performance counter info lately, and I was under the impression that QUED was the queue depth, not AQLEN. I’ve also seen some people equate QUED to the stat disk.queueLatency.average. Can you help clear up my confusion?

    • Duncan says

      26 June, 2014 at 13:47

      This Doc may help: https://communities.vmware.com/docs/DOC-9279

      • IronMtn says

        26 June, 2014 at 15:57

Yes, it’s helpful. I still don’t see the correlation between QUED (a number) and queueLatency (a time in ms).

  26. Shafay Latif says

    11 July, 2014 at 01:09

The HP H222 SAS controller has AQLEN=600.
Best is just to go with the HP P420i embedded, which has AQLEN=1020.
*** Enable HBA mode / pass-through on the P420i using HPSSACLI and the following ESXi commands:
- Make sure the disks are wiped clean and no RAID exists
- Make sure the firmware is the latest, v5.42
- Make sure ESXi device driver v5.5.0-44vmw.550.0.0.1331820 is installed: http://www.vibsdepot/hpq/feb2014-550/esxi-550-devicedrivers/hpsa-5.5.0-1487947.zip

- Put the host in maintenance mode, and from the iLO of the ESXi host in support mode (Alt+F1) execute the following:

To view the controller config using HPSSACLI with ESXCLI:
~ # esxcli hpssacli cmd -q "controller slot=0 show config detail"
To enable HBA mode on the P420i using HPSSACLI:
~ # esxcli hpssacli cmd -q "controller slot=0 modify hbamode=on forced"

Reboot the host and perform a rescan, and voilà … the disks will show up in the vSphere Web Client on each host under Devices, before you enable vSAN.

  27. John G says

    2 August, 2014 at 17:39

    Dell 6Gbps SAS HBA – H200E – J53X3, 012DNW, D687J, 7RJDT – LSI SAS2008 based PCI Express card with 2x SFF-8088 connectors

    From dmesg:

    2014-08-02T07:31:24.351Z cpu2:4603)mpt2sas0: LSISAS2008: FWVersion(07.15.08.00), ChipRevision(0x03), BiosVersion(07.11.10.00)
    2014-08-02T07:31:24.351Z cpu2:4603)mpt2sas0: Dell 6Gbps SAS HBA: Vendor(0x1000), Device(0x0072), SSVID(0x1028), SSDID(0x1f1c)
    mpt2sas0: Protocol=(Initiator,Target), Capabilities=(Raid,TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ2014-08-02T07:31:24.351Z cpu2:4603))

From esxtop:

    ADAPTR PATH NPTH AQLEN CMDS/s READS/s WRITES/s MBREAD/s MBWRTN/s DAVG/cmd KAVG/cmd GAVG/cmd QAVG/cmd
    vmhba2 – 0 600 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

  28. Jason Burroughs says

    4 August, 2014 at 05:05

    check out http://ftp.lenovomobile.ru/Files/_Products/SERVERS_and_STORAGE/SERVERS/RAID_Portfolio.pdf for some LSI controllers and their associated queue depth, as well as the Lenovo RAID card versions.

  29. Kevin S says

    14 August, 2014 at 21:25

IBM ServeRAID M5016 – 975

  30. [email protected] says

    3 September, 2014 at 02:44

Doing a lot of research on controllers and switches right now for our VSAN build, and I noticed all of our UCS C220s come with LSI MegaRAID 9271-8i controllers. Unfortunately that requires additional work on my end for the drives to be detected, which isn’t that big a deal. I am however curious what is meant by the RAID 0 1:1 drive mapping. Am I limited to a disk group of 1 SSD to 1 HDD, or do you just mean that 1 SSD to 3-7 HDDs have a 1:1 mapping? As in: if one of those drives dies, the whole group is considered faulted and will have to be rebuilt as a new RAID 0 disk group?

  31. John Hanks says

    8 September, 2014 at 14:50

The PERC H710 has a queue depth of 1024, but no pass-through, only RAID 0.
The LSI 9207 has a queue depth of 600, but has true pass-through.

    Is getting true pass through (with the LSI) worth giving up half the queue depth?

    • Réal Waite says

      7 November, 2014 at 16:12

Good question. Which is better: a big queue depth like 1024, as on the Dell H710, or pass-through mode with a lower queue depth, as on the Dell PERC H200 Adapter?

FYI, the VMware Compatibility Guide for IO controllers now gives the queue depth information.

      • Duncan Epping says

        7 November, 2014 at 21:26

        A queue depth of 600 should typically be sufficient for most environments…

  32. Wojtek says

    4 November, 2014 at 09:57

    Dell PERC H330 – QD 256

    • Spirit says

      27 November, 2014 at 22:04

Is there also a way to flash the PERC H330 with IT firmware to get a higher QD?
I am thinking about buying 3 new Dell R730xd servers. The PERC H330 is the standard controller in them… For better performance, should I rather take the PERC H730 controller? And is native pass-through also possible with the H730, so that vSAN can see the disks directly without RAID? Because on the H710 only RAID-backed vSAN is possible… :/

      • David Pasek says

        1 December, 2014 at 13:01

I’ve found in documentation that the H730P has an adapter queue depth (AQLEN) of 895. Caution: I have not tested this personally, so don’t blame me in case AQLEN is different 🙂

This controller should be able to work in RAID mode and also in pass-through mode. Pass-through mode is called different things in different H730P docs; you may also see terms such as Controller Mode or JBOD mode.

  33. Sergio says

    28 November, 2014 at 16:30

    LSI Megaraid 9271-8i has a queue depth of 975

    • Duncan Epping says

      11 March, 2015 at 10:11

      thanks!

  34. Юрий Иванов says

    17 February, 2015 at 12:59

Huh, guys, want a “cool story”?
When I started building a home hypervisor I bought a “CFI B8283JDGG” for external storage, loaded it with 8 WD10EFRX disks, and configured RAID 5. All the reviews said it was so fast, so cool.
I connected the thing to the eSATA socket, took a look, and saw DQLEN=1.
Baduumtsss!!!! 🙁
50 IOs per second, and transfers of no more than 10MB.

  35. HPsenicka says

    27 February, 2015 at 20:04

    HP SmartArray P440ar queue depth = 1011

    Seems curious that it is not a nice round number.

    • Duncan Epping says

      11 March, 2015 at 10:12

      thanks

  36. Mike Douglas says

    28 February, 2015 at 20:25

Hi, thanks for the great info. I have a small cluster of (4) servers comprised of DL380 G6/G7s; I understand they have the P410i and P420i storage controllers in them. From what I have seen on this blog, at least the P410i does not support pass-through, so it might only support vSAN in RAID 0 mode?

Has anyone had success with vSAN on HP DL380 G6s or G7s? The queue depths seem sufficient. We are running ESXi 5.5 on them presently, so we would like to evaluate vSAN if it would work.

  37. Alexander says

    10 March, 2015 at 16:11

    HP Dynamic Smart Array B120i Controller has a queue depth of 242.

  38. Alexander says

    10 March, 2015 at 22:56

    Cougar Point 6 port SATA AHCI Controller (HP B110i) queue depth – 31 (data from HP Proliant ML10)

    • Duncan Epping says

      11 March, 2015 at 10:12

      thanks, all added

  39. Jesus Martinez says

    25 March, 2015 at 19:27

I would like to ask a question. If I’m planning to use an Intel S3700 SSD (a SATA disk) as the flash cache for a group of magnetic NearLine SAS disks, how important is the queue depth of the Intel SATA disk? I’m going to use the Dell PERC H730P, recently added to the HCL (http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=vsanio&productid=34853&deviceCategory=vsanio&details=1&vsan_type=vsanio&io_partner=23&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc), but I don’t like using the RAID 0 configuration for the Intel S3700. I prefer pass-through mode, but in that case the queue depth is going to be 32? Is that enough? Could it be a problem?

  40. Laurynas says

    27 March, 2015 at 13:32

    Adaptec 51245 queue depth is 504.

  41. Pieterjan Heyse says

    27 March, 2015 at 16:17

    Dell PERC H330 mini – 234

  42. Vincent says

    8 April, 2015 at 15:36

Hi!

Here’s what I have in the lab:

    LSI2308_2 AQLEN=600
    Dell PERC 6/E AQLEN=975
    Dell PERC 6/i AQLEN=975 (not 925…)
    LSI MegaRAID SAS 1078 AQLEN=975
