This is Storagesavvy's Typepad Profile.
Storagesavvy
Redmond, WA
I'm an experienced technology professional, run a charity for disadvantaged children, and like to camp, sail, and travel
Recent Activity
Thanks for the reply. I personally do understand the differences between EMC Compression and Deduplication and NetApp Deduplication and Compression features, but it's always good to point out the distinctions to customers. With ALL of the technologies from both vendors, as with anything in life, there are cases where the benefits outweigh the drawbacks and cases where the opposite is true. I was using compression as one example of an enhancement enabled by 64-bit aggregates; increased maximum usable capacity is, of course, another major one.
5 Reasons to Upgrade to Data Ontap 8.0.1
Last week while hosting our Virtualization Field Readiness Summit I spent some time with Dr. Desktop (aka Chris Gebhardt) and while educating me on advancements in our technologies related to desktop virtualization he shared a few of the enhancements in Data Ontap 8.0.1 that were so exciting I t...
Disclaimer: EMCer here - Compression, 64-Bit Aggregates, and DataMotion all sound like great features to gain from an upgrade.
A Few Questions:
Can you convert existing 32-bit aggregates into 64-bit aggregates after the upgrade and then increase their size? Or do you need to create new aggregates on different spindles?
Can you use DataMotion to migrate existing LUNs from 32-bit Aggregates into 64-bit Aggregates to take advantage of compression for existing data?
Thanks!
5 Reasons to Upgrade to Data Ontap 8.0.1
Last week while hosting our Virtualization Field Readiness Summit I spent some time with Dr. Desktop (aka Chris Gebhardt) and while educating me on advancements in our technologies related to desktop virtualization he shared a few of the enhancements in Data Ontap 8.0.1 that were so exciting I t...
I would assume that Microsoft will ensure that Windows guest VMs work well with this approach, but I wonder how it will translate to non-Windows guests. VMware's approach is nice in that it supports pretty much any guest, and if Hyper-V Dynamic Memory only works well with Windows guests, VMware will continue to be preferred in large orgs.
Regardless, this is an interesting approach and might provide better overall results for an all-Microsoft environment.
Dynamic Memory for Microsoft Hyper-V
While it didn't make a lot of noise, Microsoft released SP1 for Windows Server 2008 R2 a few days ago. So how does this impact Microsoft's virtualization customers? There are two changes in SP1 that have relevance to virtualization: dynamic memory and RemoteFX. While RemoteFX is a significant en...
Good post. I'm also frustrated with the use of the term Cloud lately.
Disclaimer: I work for EMC, so the positioning of cloud to my customers is generally around building flexible infrastructures to support their applications.
However, my personal, more simplistic view of cloud when it comes to storage is that it needs to be globally distributed to qualify as cloud. Right now within EMC's portfolio that product is Atmos, a massively scalable, object-based storage system built specifically to let applications read and write data against a single shared pool of storage, anywhere in the world, at any time.
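To make that a bit more concrete, here is a minimal, purely illustrative Python sketch of what "read and write against a single shared pool, anywhere in the world" looks like from the application's side. The endpoint, namespace, and function names are hypothetical placeholders, not the Atmos API or any specific EMC interface.

```python
# Purely illustrative: a generic REST-style object store client.
# Endpoint, namespace, and keys are hypothetical, not the real Atmos API.
import requests

ENDPOINT = "https://objects.example.com"   # hypothetical globally routed endpoint
NAMESPACE = "shared-pool"                  # hypothetical application namespace

def put_object(key, data):
    """Write an object; any application instance, anywhere, addresses the same pool."""
    resp = requests.put(f"{ENDPOINT}/{NAMESPACE}/{key}", data=data)
    resp.raise_for_status()

def get_object(key):
    """Read the object back from any location, at any time."""
    resp = requests.get(f"{ENDPOINT}/{NAMESPACE}/{key}")
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    put_object("reports/q1.pdf", b"example payload")
    print(len(get_object("reports/q1.pdf")))
```

The only point of the sketch is that the application addresses one namespace and never cares which data center holds the bytes, which to me is what separates cloud storage from merely "cloud-enabled" arrays.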
As far as most storage and virtualization vendors go, it seems acceptable to simply "enable the cloud" by lowering costs or adding multi-tenancy, etc. These are valid solutions to valid problems, but I don't think they really qualify as Cloud products or solutions. This is my own personal opinion of course.
Cloud Ennui?
I must admit even though I use the term myself, I am getting pretty fed-up with the whole Cloud thing and the pretty constant attempts of vendors to both Cloud-wash their products and generally try to sprinkle Cloud Magic Dust around the place. Cloud has become so vague as a term that it has all...
Thank you for the kind words Barry. My blog topics have obviously shifted a bit since coming to work for EMC but I'm finding that for the most part, discussions I have with customers make for good blog topics.
Oh and the VPLEX Local discussion I had with that customer turned into a VPLEX Local and Metro Proof-of-Concept that we're ramping up soon. So I should have something to write on that in the next month or so.
Richard
3.010: storage savvy: blogging with cred
Just a quick note to give a shout-out to a relatively new EMC employee blogger, Richard Anderson. His personal, not-reviewed-or-approved-by-EMC blog is at storagesavvy.com. Richard joined EMC earlier this year, coming from Nintendo where he managed both EMC and NetApp kit. His experience provi...
Vaughn,
I really appreciated this post. Combined with all of the comments you received, it shows that we, as vendors, should all be educating our customers on the different approaches so they can understand where each will help them.
Data Compression, Deduplication, & Single Instance Storage
Today I wrapped up several weeks of travel which included the Charlotte VMUG conference, NetApp's Foresight engineering event, and a number of customer and technical partner meetings. During these travels a small number of individuals would use the term data deduplication as any technology which...
@StorageTexan,
To give you a quick answer to your question:
The intent of VPLEX is to do what traditional storage virtualization platforms from other vendors don't: let you keep leveraging the technologies inherent in your existing storage platforms - Symmetrix SRDF/TimeFinder or Clariion SnapView/MirrorView, for example. Since VPLEX is array aware and preserves the array's cache functionality, while enhancing performance with its own cache, you can still use those tools. That is something you can't do with SVC or USP-V; with those products you are forced to move all of the intelligence into the virtualization layer, which may not work for all of your applications. This is a differentiator for VPLEX, not to mention VPLEX Metro federation, which is entirely unique.
VPLEX: The Birth Of A New Storage Platform
One of the bigger news stories to come out of EMC World today is the announcement of VPLEX. Like anything relatively new, it will take a while for people to fully understand the rationale and the strategy behind the product. It took me a good while before I got a full grasp on the implications ...
The interesting thing to me is that there were essentially two contradictory claims...
1.) that tiering is dead/dying
2.) that dynamically moving blocks between SSD and SATA is the future, and anything more is pointless.
Regardless of which disk technology you are using (FC, SSD, SATA, CDROM, DVD, etc.), any time you store data on more than one of them you have tiering. Whether it's SSD+SATA (2 tiers) or SSD+FC+SATA+DDUP (4 tiers), you still have tiering, and automating the movement of data between those tiers (be it 2 or 10) is the future. The underlying technology doesn't really matter.
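As a toy sketch of that idea (my own illustration in Python, not FAST or any vendor's actual algorithm), the loop is the same whether you define 2 tiers or 10: measure activity per extent, promote hot extents toward the fastest tier, and demote cold ones toward the cheapest.

```python
# Toy illustration of automated tiering; not any vendor's implementation.
from dataclasses import dataclass

TIERS = ["SSD", "FC", "SATA", "DDUP"]  # ordered fastest -> cheapest; any list works

@dataclass
class Extent:
    extent_id: int
    tier: str
    io_count: int  # IOs observed during the last sampling window

def retier(extents, hot=1000, cold=10):
    """Promote busy extents one tier up, demote idle extents one tier down."""
    for ext in extents:
        idx = TIERS.index(ext.tier)
        if ext.io_count >= hot and idx > 0:
            ext.tier = TIERS[idx - 1]      # promote toward SSD
        elif ext.io_count <= cold and idx < len(TIERS) - 1:
            ext.tier = TIERS[idx + 1]      # demote toward the cheapest tier
    return extents

# Example: a hot extent on SATA moves up, an idle extent on SSD moves down.
sample = [Extent(1, "SATA", 5000), Extent(2, "SSD", 2)]
print([(e.extent_id, e.tier) for e in retier(sample)])
```

Swap the tier list for SSD+SATA only and the loop doesn't change, which is the point: the policy matters, not the media underneath it.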
2.042: bring out your dead!
My, what a week already. IBM finally got around to putting the stillborn DS6800 out of its misery – something I had thought they were smart enough to do over two years ago (I was apparently wrong). Not to worry, I guess – if you really want to have one of these useless beasts, I understand ...
Vaughn,
I suspect that there is a lot of inference occurring here when it comes to the CIFS-vs-Disk IOPS and price/performance argument.
Specifically, you can't infer that the backend VMax was actually servicing 6 million IOPS just because its configuration is theoretically capable of doing so. As you know, CIFS and NFS ops are NOT disk IOs. It is pretty clear from the sizing of the VMax in this test (and supported by Storagezilla's comments) that the goal here was to test the performance of the NS datamovers, NOT the VMax, and as such the VMax was configured for far more throughput than necessary for the test. That being said, making any sort of argument that EMC must need 50 IOPS for a single CIFS op is impossible with the data at hand, and would be unbelievable anyway.
Further, because the test focused on the datamovers themselves, any sort of price/performance conclusion is also impossible. A customer requiring 100,000 CIFS ops would work with EMC to size the backend appropriately, and I would venture an educated guess that the backend would be smaller than what was used in this lab scenario.
The only way to get a valid price/performance comparison between two systems is to size the front-end AND back-end for the workload.
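To show why that inference fails, here is a quick back-of-the-envelope illustration; the 6 million figure and the 50-IOPS-per-op claim come from the discussion above, while the 120,000 op rate is a made-up placeholder, not the actual SPEC result.

```python
# Placeholder numbers for illustration only; not the actual SPEC SFS results.
theoretical_backend_iops = 6_000_000  # what the configured VMax could theoretically deliver
measured_cifs_ops = 120_000           # hypothetical measured front-end op rate

# The ratio people quote as "disk IOPS per CIFS op":
naive_ratio = theoretical_backend_iops / measured_cifs_ops
print(f"{naive_ratio:.0f} backend IOPS per CIFS op")  # prints 50

# The numerator is configured headroom, not IOs actually issued to disk, so the
# ratio says nothing about real per-op disk cost or price/performance.
```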
The beauty of the EMC approach is that you have the ability to address bottlenecks where they exist, rather than ripping and replacing or purchasing a disparate system.
EMC Benchmarking Shenanigans
I want to tell you a story about how my evening went the other night. I hope you don't mind a narrative. Monday I received an email from a friend in the VMware community, "Did you see the Register, it's unreal, EMC arrays crushed the SPEC benchmarks!" As you'd assume, this news got my attention...
Storagesavvy is now following The Typepad Team
Feb 8, 2010