This is Todd Muirhead's Typepad Profile.
Todd Muirhead
Recent Activity
Jay - Because the system tested was set up as closely as possible to EMC's production system, they asked that we not get into specific configuration details. So you are right, this paper has fewer details than what we normally provide. I will ask them to take a look at your questions and see how much they are willing to talk about. In the meantime, I can confirm that this is the ERP system that's one of the top 10 largest in the world. (I don't know why you refer to it as infamous.) There were lots of mount points utilized (many more than a few). They use raw devices on the native system, so this was also used on the VMs - it made it easy to have a virtual / native cluster as well. Thanks - Todd
1 reply
I posted a blog about 10 days later with the Disk I/O performance results. Here is a direct link - http://blogs.vmware.com/performance/2010/05/exchange-2010-disk-io-on-vsphere.html Thanks for reading and commenting. Let me know if you have any questions. Thanks, Todd
1 reply
The support statement from Oracle on VMware was recently modified to include RAC. I blogged about it here - http://virtualtoddsbigblog.blogspot.com/2010/11/support-for-oracle-rac-on-vsphere.html In terms of an Oracle-approved solutions list, I'm not sure. Can you give me a link or a contact email so that I can look into it? Todd
1 reply
A little bit more on why the intrmode setting was used. There was a bug in RHEL that was fixed in 5.5. This exposed a bug in ESX that is fixed in 4.1 update 1. Setting intrmode to 1 puts the vmxnet adapter into a legacy mode which causes it to behave like a very simple NIC. We have tested the performance of vmxnet3 in a wide variety of scenarios and in most cases it performs better without this setting. The RAC interconnect, however, is extremely sensitive to latency, and the slight improvement we get in latency with this setting also resulted in better RAC performance overall. I fixed the link for DVD Store - thanks for spotting it. Todd
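For readers who want to try the same workaround, a minimal sketch is below. It assumes the setting is applied as a line in the VM's .vmx configuration file using the key ethernet0.intrmode; that key name, the datastore path, and the power-cycle-to-apply behavior are assumptions for illustration, so check the VMware documentation for your ESX and guest driver versions before using anything like this.

```python
# Minimal sketch (not from the original post): append an interrupt-mode
# override for a VM's first virtual NIC to its .vmx file. The key name
# "ethernet0.intrmode", the datastore path, and the value semantics
# (1 = legacy interrupt mode) are assumptions for illustration only.
from pathlib import Path

VMX_PATH = Path("/vmfs/volumes/datastore1/racnode1/racnode1.vmx")  # hypothetical VM
SETTING = 'ethernet0.intrmode = "1"\n'

def set_intr_mode(vmx_path: Path, setting: str) -> None:
    """Append the interrupt-mode line if it is not already present."""
    text = vmx_path.read_text()
    if setting.strip() not in text:
        vmx_path.write_text(text + setting)

if __name__ == "__main__":
    set_intr_mode(VMX_PATH, SETTING)
    print(f"Updated {VMX_PATH}; power the VM off and back on for the change to take effect.")
```

The .vmx edit is normally made while the VM is powered off (or the equivalent advanced configuration parameter is added through the vSphere Client) so that the change is not overwritten.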
1 reply
DAG was not used in the tests in this post. I did some tests with DAG and published the results on my VMware community blog at http://communities.vmware.com/blogs/ToddMuirhead/2010/07/26/measuring-the-performance-impact-of-exhange-2010-dag-database-copies Hopefully these tests will be what you are looking for. Thanks - Todd
1 reply
I did struggle a bit with the title for this post, and maybe I could have done better. In trying to keep it brief it may not fully describe the RAM and IOPS interaction I was trying to explore. I'm glad that you decided to comment and now we can have some discussion. It seems that we agree that Fibre Channel is generally the best performing storage solution. It has also been used by many large enterprises for Exchange historically, which is why I used it for these tests. I did use RAID 5 LUNs, which are a lower-performance option (compared to RAID 1/0) that provides more usable space. I have not tested with lower-performing disks, but I reported these results with the raw IOPS numbers. Regardless of the speed of the disks, the number of IOPS can be reduced by adding RAM. This reduction in IOPS could make it possible to use fewer or lower-performing disks. I am planning to do some tests with larger mailbox sizes to see how this affects performance. I do not expect it to significantly change the performance in terms of response time, but I won't know until I do the tests. I will not be able to test up to the same 8000 users due to storage capacity limits, and will have to use a lower number. Thanks - Todd
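To make the RAM and IOPS interaction more concrete, here is a small back-of-the-envelope sketch. The per-mailbox and per-disk IOPS figures in it are placeholder values, not measurements from these tests; the only point is that lowering IOPS per mailbox (for example by giving each mailbox more cache RAM) shrinks the disk count needed for the same number of users.

```python
import math

# Back-of-the-envelope sketch of the RAM vs. IOPS trade-off. All numbers
# below are hypothetical placeholders, not results from this post; replace
# them with the IOPS profile measured for your own RAM-per-mailbox config.
def spindles_needed(mailboxes, iops_per_mailbox, iops_per_disk):
    """Return total database IOPS and a rough disk count to absorb them."""
    total_iops = mailboxes * iops_per_mailbox
    return total_iops, math.ceil(total_iops / iops_per_disk)

if __name__ == "__main__":
    profiles = [("less RAM per mailbox", 0.15),  # hypothetical higher-IOPS profile
                ("more RAM per mailbox", 0.10)]  # hypothetical lower-IOPS profile
    for label, iops_per_mbx in profiles:
        total, disks = spindles_needed(8000, iops_per_mbx, iops_per_disk=150)
        print(f"{label}: ~{total:.0f} IOPS -> about {disks} disks at 150 IOPS per disk")
```

With these made-up numbers the lower-IOPS profile needs roughly a quarter fewer disks, which is the kind of trade-off the post is getting at; the actual per-mailbox IOPS for a given amount of RAM has to come from measurement or sizing guidance.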
1 reply
The NICs used were Intel PRO 1000 NICs based on the 82571EB controller. It does not do TOE offloading for iSCSI. Thanks - Todd
1 reply
We have not published a direct virtual vs. physical comparison of Exchange on Nehalem. We did publish some virtual vs. physical numbers on Tigerton earlier this year in a whitepaper - Microsoft Exchange Server 2007 Performance on VMware vSphere 4 - http://www.vmware.com/files/pdf/perf_vsphere_exchange-per-scaling.pdf The results in that paper showed that vSphere VM performance was within 5% of physical as tested with Exchange LoadGen.
1 reply
Hyperthreading, or Simultaneous MultiThreading (SMT), was enabled for these tests. Todd
1 reply
The answer is that a single 1GbE connection was used for the iSCSI and NFS tests. In the spirit of friendly blogging, I would like to point out that this info is in the whitepaper, along with info about a test done with 4 x 1GbE for iSCSI. Thanks for the question.
1 reply