This is Dikrek's Typepad Profile.
Dikrek
Recent Activity
Hello all, Dimitris from NetApp here.
You mentioned "When used with an EMC Flash enabled array, VFCache can actually increase throughput up to 3x".
I thought this first iteration of VFCache had no back-end array awareness. Indeed, it can be run with any back-end array and has no preference there.
Are you talking futures?
Thx
D
Lightning Strikes Again for SQL Server 2012!
Earlier in February, EMC announced a new server Flash caching solution called VFCache, a hardware and software solution that leverages PCIe Flash technology to extend performance based caching from the storage array to the server. VFCache (codenamed "Project Lightning") increases throughput and ...
Hi Marc, Dimitris from NetApp here.
Congrats on your acquisition, hopefully HP will be a good home.
Correction Re Terremark - future storage purchases for new deployments will be NetApp. Existing customers on 3Par will get more 3Par disk if they need more storage.
But you're right to be proud of that logo.
NetApp is the largest provider for the cloud but we're kinda hard to acquire due to the cost :)
Finally HP will have a decent storage product! :)
D
What made 3PAR so attractive? Read this Wikibon case study
Cloud infrastructures need to be efficient if they are going to compete. Money saved on operations is an annuity to cloud service providers that goes to the bottom line every day. Terremark is a leading cloud service provider that delivers just in time infrastructure services, as described...
Hey Chuck,
Interesting marketing exercise. So, taking the same top-down approach: if a customer has a 300TB virtualized workload and a PoC with NetApp proves they only need 100TB to accommodate it, will EMC guarantee they can do the same thing with 80TB? And otherwise provide enough storage to accommodate the workload for free, however much that may be?
I can see this working out for people who won't evaluate systems and who look at capacity the old-fashioned way (which is quite a few, if not most, so I totally see why EMC did this).
D
An Offer You Can't Refuse
From aspirational to pragmatic: EMC Unified Storage Is 20% More Efficient. Guaranteed. That's the tag line for the storage efficiency campaign we've recently launched in this hotly contested part of the market. And, from all indications, it appears that it's working quite well ... The Backgr...
Hello all, D from NetApp here.
If the guarantee is unconditional, how does it work in the following scenario:
EMC is trying to displace an existing NetApp install that's getting, say, 3x the effective storage due to the various efficiencies on-board.
So, let's say that the 100TB of usable storage of said customer is looking more like 300TB to the outside world.
Will EMC offer 300TB + 20% = 360TB or...
Will EMC offer 100TB + 20% = 120TB?
In the latter scenario, the customer will absolutely not be able to fit their workload in 120TB.
To the customer, all that matters is how much effective storage they're able to use, not how much raw storage is in the box.
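To put rough numbers on the two readings of the guarantee, here's a throwaway sketch in Python. The 3x efficiency ratio and the 100TB figure are just the hypothetical numbers from this scenario, not anyone's actual guarantee terms:

# Hypothetical sketch of the scenario above; the 3x efficiency ratio and the
# 20% figure are this example's numbers, not either vendor's actual terms.

def effective_capacity(usable_tb, efficiency_ratio):
    # Effective (logical) capacity the customer can actually consume.
    return usable_tb * efficiency_ratio

netapp_usable = 100.0   # TB usable in the existing NetApp array
efficiency = 3.0        # dedupe / clones / thin provisioning, per the scenario
workload = effective_capacity(netapp_usable, efficiency)   # 300 TB effective

offer_vs_effective = workload * 1.2        # 20% on top of effective: 360 TB
offer_vs_usable = netapp_usable * 1.2      # 20% on top of usable:    120 TB

print(f"Workload needs ~{workload:.0f} TB effective")
print(f"Offer against effective capacity: {offer_vs_effective:.0f} TB")
fits = "fits" if offer_vs_usable >= workload else "does not fit"
print(f"Offer against usable capacity:    {offer_vs_usable:.0f} TB ({fits} the workload)")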
I posted something relevant here before: http://bit.ly/d94ikh
Thx
D
An Offer You Can't Refuse
From aspirational to pragmatic: EMC Unified Storage Is 20% More Efficient. Guaranteed. That's the tag line for the storage efficiency campaign we've recently launched in this hotly contested part of the market. And, from all indications, it appears that it's working quite well ... The Backgr...
Guys, the secalc.com site has a bug (that Kusek exploited) when calculating NetApp storage with all efficiencies turned off.
226 base-10 TB (or about 200 base-2 TB) are needed to provide over 150TB usable without any of the space efficiency features turned on.
I used the internal Synergy calculator to figure that out.
So, it looks to me like EMC will have to start giving away a bunch of storage.
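For anyone checking the unit math, the base-10 to base-2 conversion works out roughly like this (only the conversion is shown here – the 226TB sizing itself comes from the Synergy calculator, not from this snippet):

# Quick unit-conversion check for the figures above.
tb_base10 = 226 * 10**12          # 226 "marketing" terabytes, in bytes
tib_base2 = tb_base10 / 2**40     # the same capacity in base-2 terabytes (TiB)

print(f"{tb_base10 / 10**12:.0f} TB (base 10) ~= {tib_base2:.0f} TB (base 2)")
# prints: 226 TB (base 10) ~= 206 TB (base 2), i.e. the "about 200 base-2" above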
D
Playing to Lose, Hoping to Win: EMC’s Latest Guarantee (Part 1)
This is a hard blog for me to write as I like to try and give people the benefit of the doubt. Sometimes individuals make mistakes or misread numbers and come up with inaccurate results. However, Christopher Kusek’s latest blog on EMC’s 20% guarantee program is so full of misrepresented inform...
@ Jonas:
Throwing FUD is not conducive to respectful selling. Those same points have been the mantra of anti-NetApp competitive sales for the last 10 years, but the real-life success stories, the company's earnings and the amazing growth tell the real story.
I have large customers with 10,000+ replicated snaps on their arrays, and they seem to be running just fine (full, lots of I/O, data warehouses, complex DBs, tons of VMware, etc. – all without PAM). Funny that the snapshot comment comes from EMC, a company that only allows 8 snaps per LUN (and with a well-publicized, huge 50% performance hit…)
Indeed, even though you work for EMC, you will probably use our storage at least a few times today, since we provide the back-end disk for most of the online providers.
Maybe you need to read http://bit.ly/aNMwon and http://bit.ly/cnO2
Back to actually discussing technology.
This is turning into a post about NetApp instead of answering Chad’s legitimate questions. Let's put it this way:
NetApp provided thought leadership by shipping the PAM cache years before EMC even announced something similar (let's not forget that FLARE 30 and sub-LUN FAST with its gigantic 1GB chunk are not even here yet, and won't get wide adoption until they mature). It's silly to think we're not working on new stuff for others to have to catch up on (again) :)
Regarding thought leadership in auto-tiering: Compellent was first with their auto-tiering and has a 512K minimum chunk. How do they do it?
Regarding thought leadership in (true) Unified Storage: NetApp, obviously. The (true) unified EMC system is coming what, (maybe) 2011? Almost 10 years later than NetApp?
Regarding thought leadership in true block-level deduplication of all primary storage protocols: NetApp again. Nobody else is there yet.
What about a deduplication-aware cache, which in turn deduplicates the cache itself? Since nobody else deduplicates all primary storage protocols at the block level, nobody else has this cache deduplication technology.
Enough with the trash talk. BTW, I like the V-Max. I hope Enginuity is getting the SSD cache.
Auto-tiering is a great concept, but everyone doing it seems to suffer from potential performance issues because the data movement algorithm won't react fast enough to rapidly changing workloads. It can work well if the workload is predictable and stable over time – you just dump your data into the array and let it figure out (eventually) where the different hot and cold areas should reside.
The addition of huge chunks of cache goes a long way towards alleviating this, but it's only part of the answer. Otherwise, it's a solution waiting for a problem: good for some workloads, but not all, and great to have as long as it gets out of the way when needed.
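To make the timing argument concrete, here is a toy simulation with entirely made-up numbers – a once-a-day tiering pass chasing a hot region that moves every few hours. No real array behaves exactly like this; it only illustrates how a slow data mover trails a fast-changing workload:

import random

# Toy model: the hot region of the workload moves every few hours, while the
# tiering engine only promotes data once a day. All numbers are made up.
random.seed(0)
CHUNKS = 10_000        # chunks in the pool
FAST = 500             # chunks that fit on the fast tier
HOURS = 24 * 7         # simulate one week
SHIFT_EVERY = 4        # the hot region moves every 4 hours
TIER_EVERY = 24        # the tiering pass runs once a day

hot = set(range(FAST))         # current hot region
fast_tier = set(hot)           # what has been promoted so far
hits = total = 0

for hour in range(HOURS):
    if hour % SHIFT_EVERY == 0:               # workload shifts
        start = random.randrange(CHUNKS - FAST)
        hot = set(range(start, start + FAST))
    if hour % TIER_EVERY == 0:                # daily tiering pass
        fast_tier = set(hot)   # promotes what is hot right now; the promotion
                               # goes stale as soon as the workload shifts again
    hits += len(hot & fast_tier)              # I/O served from the fast tier
    total += FAST

print(f"Hot I/O served from the fast tier: {hits / total:.0%}")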
To answer Chad's question: each cache card is separate and only seen by its own controller – this is, fundamentally, an architectural difference, and it seems to work well in the real world. Upon controller failure, the other cache card has to be warmed up with the workload from the failed controller. The cards are fast enough that this happens very rapidly (each board is much faster than several STEC SSDs, the benefits of a custom design – and no, the warm-up doesn't take "many hours").
But, of course, I will not just go ahead and divulge the NetApp roadmap just because Chad is asking :) (just as Chad wouldn't divulge EMC's roadmap if I were asking, no matter how nicely).
I'll give you my thoughts on the no-tiering message (I may or may not agree with the NetApp CEO; this is my own opinion):
In many situations, a decently designed box (NetApp with PAM, XIV, possibly CX with FLARE 30 and SSD cache) can get a lot of performance out of just SATA (NetApp has public SPC-1 and SPEC benchmarks for both OLTP and file workloads where PAM+SATA performed just as well as FC drives without PAM).
However, I don’t believe a single SATA tier covers all possible performance scenarios just yet (which is why I don’t agree with the SATA-only XIV approach – once the cache runs out, it has severe scaling problems and you can’t put any other kind of drive in it).
When I build systems, there are typically either 1 or 2 tiers + PAM. Never more than 2 tiers of disks, and very frequently, 1 tier (either all the largest 15K SAS drives, or all SATA if the sizing allows it). I see it this way:
It’s fairly easy to put data that should be on SATA there in the first place – most people know what that is. If you make a mistake, the large cache helps with that. It’s also fairly easy to put the rest of the data in a better-performing layer. Is it ideal? Not really. Should tiering be automated? Sure. But, until someone figures out how to do it without causing problems, the technology is not ready.
I will leave you with a final question for everyone doing sub-LUN auto-tiering at the moment: how do you deal with LUNs whose hot spots are spatially spread out across the LUN? (This is not an edge case.) For instance, let's take a 2TB LUN (say, for VMware). Imagine this LUN is like a sheet of finely squared paper. Now, imagine the hot spots are spread out among the little squares.
Depending on the size of your chunk, each "hot" chunk will encompass many of the surrounding little squares (pity I can't attach an image to this reply), whether they're "hot" or not.
With sub-LUN auto-tiering, the larger the chunk, the more inefficient this becomes. Suddenly, due to the large chunk size, you may find half your LUN is now on SSD, where maybe only 1% of it needs to be there. Cache helps more in that case since it works at a small block size (4K on NetApp, 8K on EMC). It's an efficiency thing.
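Here is a rough, purely illustrative model of the chunk-size effect. It assumes hot 4K blocks scattered uniformly at random across the 2TB LUN and compares a hypothetical 1GB tiering chunk, a 768KB chunk, and a 4K cache block:

# Rough model: hot 4K blocks scattered uniformly at random across a 2TB LUN;
# any chunk containing at least one hot block gets promoted in its entirety.
# All numbers are illustrative.
LUN = 2 * 2**40                    # 2TB LUN (base 2)
BLOCK = 4 * 2**10                  # 4K blocks
HOT_FRACTION = 0.01                # only 1% of the blocks are actually hot

hot_blocks = (LUN // BLOCK) * HOT_FRACTION

for label, chunk in [("1GB tiering chunk", 2**30),
                     ("768KB tiering chunk", 768 * 2**10),
                     ("4K cache block", BLOCK)]:
    num_chunks = LUN // chunk
    # Chance a given chunk holds at least one hot block, assuming uniform spread.
    promoted = 1 - (1 - 1 / num_chunks) ** hot_blocks
    print(f"{label:>20}: ~{promoted:.0%} of the LUN promoted for "
          f"{HOT_FRACTION:.0%} hot data")

With a big enough chunk, nearly every chunk ends up containing at least one hot block, so most of the LUN gets dragged up to the fast tier even though only a tiny fraction of it is actually hot.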
It’s not that easy for a cool concept to become useful technology.
D
EMC Unified Storage – Next Generation Efficiency Details
Ok, so what was announced? The current generation EMC Unified platforms got some wicked cool next-generation efficiency technologies. These are: FAST Cache = we've added the ability to have up to a 2TB read/write cache using cheap hardware. Think 80-90% performance improvement in some cases, ...
Thanks for the great post, Chad. Interesting use of SSDs as cache.
Since I'm with NetApp, naturally I have some questions regarding the new caching scheme.
I keep reading in the various EMC SSD cache posts "we cache writes!"
Caching writes is necessary with EMC's architecture; NetApp uses a different way of writing to disk, but anyway, that's a different discussion.
My questions:
1. At what part of the write path is the SSD cache? More like a second level cache?
2. What's the page size? Same as sub-LUN FAST (768KB?) or something smaller?
3. Is it tunable by LUN or some other way?
4. What's the latency? NetApp developed the custom cache boards because they fit right into the PCIe slots of the controllers, for maximum throughput and lowest latency.
Thanks!
D
EMC Unified Storage – Next Generation Efficiency Details
Ok, so what was announced? The current generation EMC Unified platforms got some wicked cool next-generation efficiency technologies. These are: FAST Cache = we've added the ability to have up to a 2TB read/write cache using cheap hardware. Think 80-90% performance improvement in some cases, ...
Nice article, Marc.
I'm still trying to figure out whether V-Plex virtualization works just like all other virtualizers – i.e. does it also render the back-end arrays into plain disk in order to provide the extra intelligence itself?
If so, I see no mention of cloning, snapshots, deduplication, compression, thin provisioning or indeed any intelligent storage function besides replication.
To me, it seems like another EMC attempt to sell us futures. Just like FAST (http://bit.ly/9dj7XW), the current incarnation is really not that useful; the good stuff comes a year or two later.
HDS USP-V, SVC and our very own NetApp V-Series provide virtualization WITH extra functionality (and, in the case of NetApp MetroCluster, a similar ability to provide simultaneous access up to 100 miles apart – for several years now).
Cache coherency is interesting, but per EMC it needs super-low latencies in order to work.
I think it's also important to understand the full ramifications of the "simultaneous" access. 2x the storage is needed, and I believe a LUN is still only seen by one side at a time, though I'm sure EMC will correct me if I'm wrong.
But, ultimately, aside from all of us giving EMC free publicity, what extra functionality do V-Plex customers get beyond migrations?
Thx
D
VPLEX undressed
OK, Monday's post lacked the punch that people have come to expect from me where EMC announcements are concerned. Thanks to my readers who were disappointed and told me so. This post is for you. VPLEX (announced Monday with all the hype that EMC could muster) is the result of EMC tryin...
This seems highly interesting. I do wonder, though, why you spell NetApp as "NotApp".
You could at least give us the simple courtesy of spelling the name right :)
Which enterprise deployments has this actually gone into, BTW? I heard of some telco back when it was still YottaYotta.
Thx
D
3.003: to boldly go
Space... the Final Frontier. These are the voyages of the starship Enterprise. Her ongoing mission: to explore strange new worlds, to seek out new life forms and new civilizations, to boldly go where no one has gone before. And so begins our journey Today, EMC announces the introduction of an...
Back to the original subject of the post:
I understand why some of the smaller (some would say irrelevant) vendors make these assertions: ANY publicity is good publicity!
Check here for some craziness from a vendor that hasn't managed to secure decent market share in a while: http://bit.ly/bJSvRr and here: http://bit.ly/aSbEED
D
Fun with Vendor FUD – Episode #1
UPDATE Feb 12th, 2010: 10:52pm ET: I really do want to reiterate, I posted this in a light spirit, some of the claims struck me as funny ("The only SAN with vSphere vStorage Fault Tolerance support included at no extra charge"), and some so demonstrably incorrect ("The only SAN platform which s...
@ Calvin:
Any array degrades as it fills up, and Kostadis Roussos explained the NetApp aspect of this in detail at http://bit.ly/cnO2.
How about showing us the same test with a similar EVA doing RAID-6 (to get the same protection) and something like 100 snaps active (since NetApp customers would be doing that as a matter of course)?
You see, if you have nothing to compare this to, all you have is a graph for a single product, and your assertion, while seemingly correct, means nothing unless compared to something else.
D
Fun with Vendor FUD – Episode #1
UPDATE Feb 12th, 2010: 10:52pm ET: I really do want to reiterate, I posted this in a light spirit, some of the claims struck me as funny ("The only SAN with vSphere vStorage Fault Tolerance support included at no extra charge"), and some so demonstrably incorrect ("The only SAN platform which s...
Well – what I don't get is why EMC didn't use all 8 data movers, since that's the max for the NS-G8. That way, since the back-end is not the issue, EMC could have posted a better-than-2x number (7 active DMs instead of 3).
Thoughts?
EMC Benchmarking Shenanigans
I want to tell you a story about how my evening went the other night. I hope you don't mind a narrative. Monday I received an email from a friend in the VMware community, "Did you see the Register, it's unreal, EMC arrays crushed the SPEC benchmarks!" As you'd assume, this news got my attention...
Simply put: PAM II + way fewer old-fashioned disks = cost savings with pretty good performance.
I've seen several workloads (including one on a DMX with over 400 drives) where FAS + PAM + far fewer drives provides equal if not better performance.
Ultimately, that's what customers WANT and NEED.
Bang-for-buck.
And PAM delivers that in spades.
D
EMC Benchmarking Shenanigans
I want to tell you a story about how my evening went the other night. I hope you don't mind a narrative. Monday I received an email from a friend in the VMware community, "Did you see the Register, it's unreal, EMC arrays crushed the SPEC benchmarks!" As you'd assume, this news got my attention...
Dikrek is now following The Typepad Team
Feb 7, 2010