This is Cormac Hogan's Typepad Profile.
Cormac Hogan
Cork, Ireland
Senior Technical Marketing Architect for Storage at VMware
Recent Activity
I have a whole host of speaking engagements this month. Most of these are at VMUGs (VMware User Group Meetings), but I do have some customer & partner meetings too. This always seems to be the busiest time of year for these events, I guess because it is so soon... Continue reading
Posted Nov 13, 2012 at Cormac's Blog
This was the European leg of the VMworld conference for work. Again, not a huge amount of free time, but my wife came with me on this trip. On our final day, we went on a bus tour of the city. Bus tours were never very appealing to me, but... Continue reading
Posted Nov 12, 2012 at Cormac's Blog
We had our annual conference (VMworld) at the Moscone Center in downtown San Francisco this month. Not really too much time for sight-seeing, as these conferences are usually pretty hectic. I did get a chance to walk around on the day before. This is the St. Peter & Paul... Continue reading
Posted Nov 12, 2012 at Cormac's Blog
I had a query recently asking if I could explain why the space usage on a dedicated VM swapfile datastore increased during a Storage vMotion operation. I did some testing in-house and noticed that a second .vswp file is created on the dedicated VM swap datastore during a migration. The reason why this occurs is that the VM's swapfile name is appended with a hash of the absolute path to the VM's config file, which is based on the VM's current home directory path. As a result, when you Storage vMotion the VM to a new home directory on a... Continue reading
Posted Aug 15, 2012 at VMware vSphere Blog
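To illustrate the behaviour described above, here is a hypothetical listing of a dedicated swapfile datastore taken mid-migration. The VM name, hash suffixes and datastore name are made up; the point is simply that a second .vswp, keyed to the destination home directory path, appears alongside the original during the Storage vMotion.
~ # ls /vmfs/volumes/dedicated-swap-ds/
myvm-1a2b3c4d.vswp    (swapfile named with a hash of the source home directory path)
myvm-9e8f7a6b.vswp    (second swapfile created for the destination home directory path during the migration)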
I'm guessing you mean VMFS-3 here, Rob. We never had a VMFS-4 :-) Yes, my understanding is that it is OK to run the vmkfstools reclaim command against a VMFS-3 datastore. The VMFS drivers which ship with 5.0 and later have the necessary VAAI TP extension code. You can verify that the UNMAPs are occurring with esxtop by looking at the DELETE field in the VAAI stats view: esxtop -> u (device view) -> f (select fields) -> O (VAAI Stats)
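As a rough sketch of the steps above, assuming the reclaim syntax introduced in 5.0 U1 (change into the datastore and give vmkfstools -y a percentage of the free space to reclaim); the datastore name and the percentage are examples only:
~ # cd /vmfs/volumes/VMFS3-datastore
~ # vmkfstools -y 60     (ask the array to reclaim up to 60% of the free space on the volume)
~ # esxtop               (press u for the device view, f to select fields, O for VAAI stats, then watch the DELETE counter)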
I was recently involved in some discussions about how Fault Tolerance would behave on the vSphere Storage Appliance. The crux of the matter was what would happen if a host in the vSphere Storage Appliance (VSA) suffered a failure. Those of you who are familiar with the VSA will be aware that the VSA takes the local storage from an ESXi host and presents it as a mirrored NFS datastore. Therefore both compute and storage are on the same host. In the event of a host failure, another VSA node (ESXi host) in the cluster takes over the role of... Continue reading
Posted Aug 13, 2012 at VMware vSphere Blog
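If you want to see how the VSA-presented storage looks from an ESXi host, one simple check is to list the NFS mounts; the datastore names, addresses and shares below are placeholders and the output is trimmed.
~ # esxcli storage nfs list
Volume Name   Host        Share               Accessible  Mounted
VSADs-1       10.0.0.21   /exports/VSADs-1    true        true
VSADs-2       10.0.0.22   /exports/VSADs-2    true        true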
Hi Karl, Yes - my understanding is that this article is being reworked to include some of the guidelines in this blog post. Thanks for highlighting. Cormac
Commented Aug 13, 2012 on VMFS Heap Considerations at VMware vSphere Blog
Hi Conor, In this case, presenting those LUNs directly to a VM would necessitate the use of RDMs, which do not require VMFS heap. Therefore you do not need to take VMFS heap into consideration in that case.
Commented Aug 13, 2012 on VMFS Heap Considerations at VMware vSphere Blog
By default, an ESXi host has 80MB of VMFS heap at its disposal. This is defined in the advanced setting VMFS3.MaxHeapSizeMB. The main consumers of VMFS heap are the pointer blocks which are used to address file blocks in very large files/VMDKs on a VMFS filesystem. Therefore, the larger your VMDKs, the more VMFS heap you can consume. This is especially true on VMFS-5, where double-indirect pointers exist to allow the unified 1MB file block size to back a 2TB VMDK. As a rule of thumb, we are conservatively estimating that a single ESXi host should have enough default heap space... Continue reading
Posted Aug 10, 2012 at VMware vSphere Blog
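A minimal sketch of checking, and if the sizing guidance calls for it, raising the heap setting from the console. The value 256 is purely an example; follow the current guidance for your configuration, and my understanding is that a host reboot is needed before a new heap size takes effect.
~ # esxcfg-advcfg -g /VMFS3/MaxHeapSizeMB
Value of MaxHeapSizeMB is 80
~ # esxcfg-advcfg -s 256 /VMFS3/MaxHeapSizeMB     (example value only)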
It could be one of the heartbeat datastores. In your vSphere HA cluster, edit the settings and see which datastores are being used for heartbeating. If this datastore is being used, change the settings to use another one.
These tests were done on 5.0, Ray. When migrating a vRDM, if you choose to change the format, you can convert it to a VMDK.
Posted by Cormac Hogan, Technical Marketing Architect (Storage). A brief note to tell you about what I am involved in at this year's VMworld 2012. Of course, it is all storage related. I'd be delighted if you came along to one of my 'official' sessions, but I understand that there is so much to see and do in a limited time. I'd particularly encourage you to attend one or more of the Group Discussions. Mine is GD10. I'll be joined by one of our support superstars, Patrick Carmichael, and we'll be having a very informal chat around vSphere storage... Continue reading
Posted Aug 8, 2012 at VMware vSphere Blog
Thank you Dario. I will investigate this with the KB team.
Sparrow, unfortunately much of the output will not make much sense without a deep understanding of VMFS metadata and layout, and that is not something I can share. The interesting part is the lock, and thank you for replying to Tia in the previous post - you are correct. If no one has a lock on the file, then the host which runs the command will lock it, which is why you see that list of zeros.
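For anyone who wants to look at the lock themselves, vmkfstools -D against the file in question will dump it. The output below is heavily trimmed and the values are illustrative; the interesting fields are mode (as far as I recall, 0 means unlocked and 1 means an exclusive lock) and owner, which ends with the MAC address of the host holding the lock, or zeros when nobody does.
~ # vmkfstools -D /vmfs/volumes/VNX-20/myvm/myvm-flat.vmdk
Lock [type 10c00001 offset 123456 v 99, hb offset 3723264
gen 36, mode 1, owner 5062a0f7-1a2b3c4d-5e6f-984be10a24d8 mac 98:4b:e1:0a:24:d8]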
Thanks for the update Leslie.
I was recently playing around with vmkfstools, checking out a few things for one of our storage partners. I noticed that I was using some undocumented options, and thought I would share them with you here. 1. Display hosts which are actively using a volume ~ # vmkfstools --activehosts /vmfs/volumes/VNX-20 Found 1 actively heartbeating hosts on volume '/vmfs/volumes/VNX-20' (1): MAC address 98:4b:e1:0a:24:d8 This option will show the management interface MAC address of any host which is actively using a datastore. This is exactly what vSphere HA uses to see if a host is still... Continue reading
Posted Aug 3, 2012 at VMware vSphere Blog
Hi Greg, Nope - no impact. This feature/primitive is all about the ESXi host telling the array that blocks which it was previously using, but is no longer using, can be placed back on the array's free list (reclaimed). CBT is all about tracking which blocks have changed within a VMDK during a particular epoch. A scenario where they might co-exist is a Storage vMotion in 5.0. CBT is used to recursively keep track of blocks changing in a VM during the migration. Eventually the number of blocks which have changed during one of the recursive copies should be small enough to allow us to switch over to the VM on the destination. UNMAP can then be used to tell the array that the blocks which the VM occupied on the source datastore may now be reclaimed. Cormac
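If you want to confirm that a particular device advertises the Delete (UNMAP) primitive in the first place, a check along these lines works on a 5.x host. The device identifier is a placeholder and the output is trimmed:
~ # esxcli storage core device vaai status get -d naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx
naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported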
Hi Martin, Thanks for the feedback. Could I ask you to describe your use case for larger than 2TB VMDK? I'd appreciate it if you could add the comments to this post as we are actively tracking these requirements going forward. http://blogs.vmware.com/vsphere/2012/01/how-much-storage-can-i-present-to-a-virtual-machine.html Thank you Cormac
Derek, No - to the best of my knowledge, there are no requirements on the VM HW version.
Hi Johnny, That is true, if we were still using the bus walking method. If you have all your disks in a contiguous range starting from 0, then once we meet the first empty position, we stop scanning. However, REPORT_LUNS avoids this, as it asks the target SCSI layer to return a logical unit inventory (LUN list) to the initiator SCSI layer rather than querying each LUN individually. My understanding is that Disk.SupportSparseLUN doesn't play a role when REPORT_LUNS is used (and REPORT_LUNS has been the default since ESX 2.x, I think).
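For reference, here is how you could check the sparse LUN setting and kick off a rescan from the console. The value shown is, as far as I recall, the default; and with REPORT_LUNS-capable targets the whole LUN inventory comes back in a single request regardless of this setting.
~ # esxcfg-advcfg -g /Disk/SupportSparseLUN
Value of SupportSparseLUN is 1
~ # esxcli storage core adapter rescan --all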
We are looking to handle this condition on single-target, single-LUN arrays going forward. I can't say any more about this right now, except to keep following the blog.
As regular readers will know by now, many of these blog posts are a result of internal discussions held between myself and other VMware folks (or indeed storage partners). This one is no different. I was recently involved in a discussion about how VMs did sequential I/O, which led me to point out a number of VMkernel parameters related to performance vs fairness for VM I/O. In fact, I have seen other postings about these parameters, but I realised that I never did post anything myself. A word of caution! These parameters have already been fine tuned by VMware. There... Continue reading
Posted Aug 1, 2012 at VMware vSphere Blog
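I won't repeat the full post here, but as one well-known example of this class of knob (not necessarily the exact parameters discussed in the post), you can inspect a disk scheduler setting such as Disk.SchedNumReqOutstanding from the console. The value shown is illustrative, and as the post says, these have already been fine tuned by VMware, so change them only with good reason.
~ # esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
Value of SchedNumReqOutstanding is 32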
Hi Adam, I don't see why not. If there were some constraints, these should be called out in the HCL footnotes.
Hey Gareth, We are working on it. Hopefully I'll have some details in and around the VMworld timeframe.
Sorry for the delayed response, vmitguy. I only just came across your question. The datastore browser does not use the internal VMkernel Data Mover, or VAAI for that matter - it has its own API. Therefore what you observe is correct - datastore browser copy/paste operations will not use VAAI.
Commented Jul 30, 2012 on Low Level VAAI Behaviour at VMware vSphere Blog
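For completeness, the VAAI offload behaviour of the Data Mover itself is governed by advanced options like the ones below (1 = enabled). Since the datastore browser bypasses the Data Mover entirely, these settings have no bearing on browser copy/paste operations.
~ # esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1
~ # esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
Value of HardwareAcceleratedInit is 1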