I was just reading the excellent whitepaper that NetApp published. The paper is titled “VMware vSphere multiprotocol performance comparison using FC, iSCSI and NFS”. I guess the title says enough and I don’t need to explain why it is important to read this one.
I have read the paper twice so far. Something that stood out for me is the following graph:
I would have expected better performance from iSCSI+Jumbo Frames, and most certainly not less performance than iSCSI without Jumbo Frames. Although it is a minimal decrease, it is something that you will need to be aware of. I do, however, feel that the decrease in CPU overhead is more than enough to justify the small decrease in performance.
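For those who want to test this themselves: on classic ESX, enabling Jumbo Frames for iSCSI comes down to something like the following from the service console. This is just a minimal sketch; vSwitch1, the IP details, and the “iSCSI” port group name are placeholders for your own environment, and of course the physical switches and the array need to be configured for Jumbo Frames as well.

```
# Raise the MTU on the vSwitch that carries the iSCSI VMkernel traffic
esxcfg-vswitch -m 9000 vSwitch1

# The VMkernel NIC needs to be created with the larger MTU as well
# (IP address, netmask and port group name are placeholders)
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 "iSCSI"

# Verify both settings
esxcfg-vswitch -l
esxcfg-vmknic -l
```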
Read the report; it is worth your time.
John says
Hmm… so according to the paper there really is no reason to change out your existing network infrastructure, but there is some benefit to be had by upgrading to vSphere. I would have expected more data on throughput and access times versus all the CPU utilization information that filled the paper.
Maybe it’s just me, though…
Aaron Delp says
I would take this entire TR with a grain of salt. As many know, I’m a huge fan of NetApp, but a co-worker brought some details of this paper to my attention that make me question the entire findings. I’m not ready to say why just yet, until we hear some feedback from NetApp on the issue.
Platypus says
You need to remember that NetApp is comparing NFS, FC, and iSCSI on their own storage platform. NetApp FC/iSCSI run on top of a file system, so you will not see the same performance metrics as other FC/iSCSI platforms on the market that run FC natively on their arrays. I won’t get into fscks, mount points, exports, and stuff of that nature, but you don’t have any of that with native FC systems.
Duncan says
I do understand that it is a NetApp whitepaper and that all tests were performed on a NetApp array. But consultants who are not tied to a specific vendor, like myself, will still see the value of this document, as it does shed light on which protocol should be used and whether Jumbo Frames (for instance) would make a difference or not.
I am curious whether NetApp can explain why Jumbo Frames actually slightly decrease performance, which from my perspective does not make sense.
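One thing I would verify before drawing conclusions is whether Jumbo Frames were actually working end to end during the test. A quick sanity check from the service console, assuming a 9000-byte MTU and with the array IP as a placeholder:

```
# Send an 8972-byte payload (9000 bytes minus 28 bytes of IP/ICMP headers)
# with the don't-fragment bit set; if this fails, jumbo frames are not
# working somewhere along the path
vmkping -d -s 8972 192.168.1.50
```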
PiroNet says
Jumbo frames or not, the data still has to be ‘transformed’ and ‘split’ from block-based I/O to file-based I/O in the NetApp WAFL container, which is NetApp’s proprietary file system. What was the default block size of that container during the test?
It’s amazing to see NAS vendors doing iSCSI nowadays and SAN vendors offering NFS/SMB/CIFS protocols. Both are adding an extra layer with some performance penalties.
dconvery says
Great find Duncan. I haven’t read it completely yet, but I wonder why they didn’t compare with 8Gb FCP. That might change the graphs. FCoE is also missing; even though it is not completely ratified, there is still big interest in it right now.
Don Mann says
This whitepaper is designed to simulate real-world environments. On page 32 you can see the details of the environment; it appears that the 8 nodes in the test are configured as 4 clusters of 2 nodes each. It is not clear whether the 20 VMs per LUN are spread across the 2 nodes in the cluster.
I would suggest that real-world environments would be more like a 4- or 8-node cluster (given you have 8 nodes), and to test protocol performance we should add VMs to the datastore to test scalability. I wouldn’t mind if there were individual tests showing VMs on a single host, but the majority of the tests should be on 4-8 node clusters with DRS enabled, or with manual distribution of the VMs.
For example: the cluster has datastore1 with 20 VMs, 5 VMs in datastore1 per host.
The value of NFS for my customers has been larger datastores with more VMs.
Duncan, would you take 8 ESX nodes and split them into 4 clusters of 2 hosts each?
Erik Bussink says
A few days ago I took the time to read this document, and I quickly put it down. I had an issue with the term “Performance Relative to FCP”. There are no hard facts or numbers (which would allow you to scale the results), and after a few pages I had the feeling that iSCSI and NFS were always too close to each other in these charts.
Rene says
Just look here for older comparisons and comments on NetApp:
http://thesantechnologist.com/?p=52
Leo says
Hi Duncan, it’s very simple why jumbo frames decrease performance on NetApp. It’s because NetApp does not have the capability to keep up with the writes that it can receive, so the cache fills up quicker and after that the performance comes to a halt. Let me explain. NetApp has a great “idea” of splitting data into 4K blocks on its own proprietary file system, but there is a very big problem, and that is performance. NetApp has not been designed with performance in mind, so it won’t give you any kind of performance; it’s been designed to give you only the features of de-duplication and replication (which do work pretty well).