This is Mike Riley's Typepad Profile.
Mike Riley
http://www.ascafootball.com/
Director of Strategy & Technology, Americas
Interests: husband, father, family......fishing and football
Recent Activity
Which is fastest? This one: http://www.spec.org/sfs97r1/results/res2006q2/sfs97r1-20060522-00263.html. Come on, guys. Don't bother submitting something that is twice as slow. When you reach 927,000 at 2.7ms, give us a call!
Commented Feb 25, 2011 on 3.021: spec sfs wars at the storage anarchist
Hi, Chad. Since this seems to have taken a decidedly NetApp-centric turn, I posted my response on this NetApp blog: When is it FUD? When is it Ignorance? http://blogs.netapp.com/efficiency/2010/06/when-is-it-fud-when-is-it-ignorance.html Have a great week. Mike
Hi, Chuck. Yes, I think both the NetApp story and the EMC story are great things. I didn't know that was pimping, but O.K. (Are you the same guy who squirts kids with the hose when they come near your lawn, too?) I kid, but isn't there room for two good stories in the industry? I'd be happy to help round this issue out. I wasn't attempting to cloak the issue, unless of course you consider design and implementation guides validated by VMware as well as Cisco a giant Kabuki dance. I was merely trying to be brief, since both of those guides together amount to about 200 pages of detailed configuration information. If folks want to check out your claims or mine, the guides are there, and engineers from Cisco, VMware and/or NetApp would be happy to field questions.

I also believe that, yes, having this design validated by strong third-party partners is a testimony to the validity of the solution. The alternative ventures down a conspiratorial path with a touch of paranoia thrown in for good measure. I know every security expert needs a touch of paranoia, but we're not all out to pull the wool over customers' eyes. As testimony to that, I point out that we have many very large customers (service providers) using this SMT solution today. Unless your fairy tale includes a pair of ruby red slippers, I'm not sure how you can get away from the fact that a) it is a validated solution that b) customers are indeed using today to provide c) a secure multi-tenant environment. Res ipsa loquitur.

On a technical level, QoS, as Dimitris points out, has been a part of NetApp systems for a few years now. You're simply wrong on that count. I don't mean to question your credibility, but this is a binary point. As far as locking out users/administrators goes, you'll have to describe just how a service provider would provide their service. The general answer, without knowing the details, is that you can absolutely deploy a vFiler and lock everyone out, including the service provider.
Now, we can go 'round and 'round with the different permutations and combinations, but the basic question is whether it can be done. One point of confusion may be that we don't provide these services in the same way as EMC. To borrow a phrase from Chad Sakac: I'm not saying EMC can't do it. I'm just saying we can - not wrong, but different.
Commented May 28, 2010 on Once Upon A Time at Chuck's Blog
Hi, Chuck. NetApp employee here. I think the EMC turnaround story is a great one. Sometimes people ask about NetApp's success and the market turns that seem to be hitting NetApp in its sweet spot at just the right time (e.g. virtualization, unified storage, flash as cache, etc.) - was it luck or vision? Just like I'm sure folks at EMC see it: we'll take it either way. Believe it or not, I really don't want to see history repeat itself over at EMC. It's a great story.

As far as SMT goes, I'd recommend people refer to the Cisco validated design and implementation guides: http://media.netapp.com/documents/cisco-validated-design.pdf http://media.netapp.com/documents/SMT-CVD-deployment.pdf You'll find the info you need on the layers of security and QoS. We (NetApp/Cisco/VMware) have customers doing this today. You can lock out the SP if you want, but at some point decisions do need to be made on items like physical access and the extent of service you would like from your SP. Vaughn Stewart put together some additional detail at his blog: http://blogs.netapp.com/virtualstorageguy/2010/04/cisco-netapp-vmware-secure-multi-tenancy-updates.html Thanks, Mike
Commented May 27, 2010 on Once Upon A Time at Chuck's Blog
@Jonas Sometimes ex-NetApp employees make for the most passionate evangelists for the competition. That's great passion! All the best to you in your career at EMC - just not too much when competing against us over here at NetApp :-) if you don't mind. Unfortunately, your WAFL analysis is dated (i.e., measured in years) in some areas, and your proof points are simply wrong. I would caution against using some of those "aged NetApp system" points in the field, and your data warehouse example. Those are just softballs over the middle of the plate for most of the NetApp field nowadays. I'm not telling EMC how to train their sales folks - in fact, the NetApp in me says keep heading down this path - but we really don't need to get down into the weeds on how WAFL works. At the very least, competitors start with the name WAFL and let their imaginations run wild from there. In sales campaigns, once a competitor pulls out the FUD paper, it's almost like witnessing a fender-bender. You know it's going to end badly for them, but you just can't take your eyes off it. You can use these rants if you want, but I don't think they work out all that well for you.

To one of Chad's points, though, I do think most customers don't care *how* the solution works. They want to know whether or not it solves their problem and *what* benefits they will see. Much of what has been talked about here - unified storage, snapshots, primary storage dedupe, flash as cache - isn't important because NetApp pioneered in these areas. From a NetApp point of view, these were relatively easy to do because they were already part of the WAFL DNA. Whether by luck or design, WAFL lends itself very well to market shifts, particularly the shift towards efficiency and Cloud architectures. It's not about big beating small anymore. It's about fast beating slow. A nimble 800-lb. gorilla is an oxymoron of sorts, isn't it? :-) Anyway, Jonas is a good guy. I wish him success, and I'm sure he will do right by his customers.
Based on his post, though, I'm pretty sure WAFL isn't his strong suit, but that's O.K. He works for EMC. Have him tell you why you should buy from EMC rather than why you shouldn't buy from NetApp.

@Chad - O.K., I had to chuckle a little at this statement: "I know that might make us hard to follow, but it also means almost anytime anyone says something about us, they are wrong, which makes competing easier :-)" I'm not sure the "Where's Waldo" strategy turns out all that well. I'm thinking a bunch of incongruous approaches to the same basic problem wouldn't be a strength, at least not in a customer's eyes. The implicit challenge is to find Waldo, and performance is Waldo for EMC. It's a challenge to be dealt with. That's simply not a variable for NetApp, nor does NetApp have to amortize development across a wide variety of platforms and features. It just means that, comparatively, NetApp can be more nimble with its solutions and adapt quickly to changes in customer demand. I'm not saying EMC can't - not wrong; just...different
Sorry - this was bugging me, so I had to go look it up. SAS and FC drives can be combined in the same aggregate (at the same RPM). SATA cannot.
Technically possible vs. shipping. The takeaway is that it's not a WAFL limitation.
Whoops - that's a future. My mistake.
You can mix drive types in an aggregate, and there may be good use cases for these types of aggregates for NetApp customers. We haven't taken that off the table. However, it's not a technical limitation of WAFL. I don't know how super secret all this stuff is; you can find the basics in this 2008 USENIX paper by NetApp engineers: http://www.usenix.org/event/usenix08/tech/full_papers/edwards/edwards_html/.

When we were dealing with what we called "Traditional Volumes," we used direct block mappings. When we introduced Flexible Volumes in ONTAP 7.0 (released in 2004), we introduced a logical construct - that block abstraction layer you were asking about - between the aggregate and the data container, a FlexVol. This virtualization layer (also called a level of indirection in the whitepaper) lets you seamlessly introduce storage features that depend on virtualized storage, such as (but not limited to) thin provisioning, cloning, and deduplication. This level of indirection has also served as the basis for EMC's research into CBFS. It shouldn't be a foreign concept to EMC readers, but as EMC begins to introduce this concept into their own products, I suspect we'll hear a campaign along the lines of "WAFL done right." Regardless, we already have a virtualization layer inside ONTAP and have for years now. It's not a technical gating factor.

As far as accessing the blocks of data goes, it's important to note that WAFL stores metadata in files. There are block map files, inode files, etc. WAFL can find any piece of data or metadata by simply looking it up in these cross-indexed files. The index has a tree structure, as Alex mentioned. This tree structure is rooted in something we call a vol_info block (like a superblock). As long as it can find the vol_info block, it doesn't matter where any of the other blocks are allocated on disk.
WAFL is also "RAID aware," which is somewhat unique; for more on how and why WAFL integrates RAID-DP, I will point you to the 2004 USENIX paper of the year written by Peter Corbett and colleagues: http://usenix.org/events/fast04/tech/corbett.html. So, having a RAID-aware virtual storage system is not a technical gating factor either. I know that's a lot of reading, but you'll find that the story behind PAM has everything to do with economics and nothing to do with some conspiratorial technical cover-up. There's simply no there there.
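To make the "level of indirection" idea concrete, here is a minimal sketch in Python. It is not NetApp code, and every name in it (Aggregate, FlexVol, block_map) is invented for illustration; it only shows the general pattern described above: a map between a volume's virtual block numbers and the aggregate's physical blocks, with each write going to a fresh physical location (write-anywhere), so features like snapshots and cloning fall out of the map rather than the physical layout.

```python
# Illustrative sketch only - hypothetical names, not NetApp's implementation.

class Aggregate:
    """A shared pool of physical blocks used by many flexible volumes."""
    def __init__(self, nblocks):
        self.blocks = [None] * nblocks          # physical block storage
        self.free = list(range(nblocks))        # free physical block numbers

    def allocate(self, data):
        pbn = self.free.pop()                   # pick a free physical block
        self.blocks[pbn] = data
        return pbn

class FlexVol:
    """A logical container: maps virtual block numbers to physical ones."""
    def __init__(self, aggr):
        self.aggr = aggr
        self.block_map = {}                     # vbn -> pbn (the indirection)

    def write(self, vbn, data):
        # Write-anywhere: every write lands in a fresh physical block and
        # the map is updated; the old block still holds the prior version,
        # which is what makes snapshot-style features cheap.
        self.block_map[vbn] = self.aggr.allocate(data)

    def read(self, vbn):
        return self.aggr.blocks[self.block_map[vbn]]
```

Because reads always go through `block_map`, it doesn't matter where data physically lives - which is the point being made about WAFL only needing to find the root (the vol_info block) to find everything else.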
@Chuck - Now that you have the prepositions argument out of your system - only the sun coming up tomorrow would have been more predictable - I still think it has no bearing on the discussion here or on the topics Dimitris addresses on his blog. Dimitris is also from Greece; how does that figure into the discussion on fixed vs. variable block deduplication (http://recoverymonkey.net/wordpress/) or the topic at hand here? Regardless, it's all water under the bridge now, so we move on. Does NetApp innovate? We say yes. A good indicator might be to check which way the industry heads. Does EMC move toward a NetApp model of unified storage and "file system based" storage, or do they stick with their legacy model of "purpose-built" silos? Interesting times ahead.
@Chuck - No (re. Dimitris) - I was able to move my mouse over 3 inches and click on the "About" tab (unless you would like to argue over the use of prepositions some more).

@iahnf - Ask your account team for an update. Many of your points can be answered as a) already shipping, b) shipping within the next X weeks, or c) not going to do it, and here's why.

@John H - You're right. We should not put out a complacent vibe, and we should earn every bit of business.

@Martin - I do think diversification can strengthen a business, and I respect how HP, IBM, EMC and Cisco have executed on this. With respect to retiring lines of business, I don't believe we have (unless I missed the announcement). We have discontinued products within a line of business. There's a question you have to answer with respect to why you would replace a failed product. Lots of variables go into it, but one answer is to not throw good money after bad and invest elsewhere. That brings you to a build vs. buy vs. OEM discussion. To TimC's point, there are balance sheet, operational and cultural implications to all three, so it's not like you impulse-shop these things. I would also suggest that in a market like this it pays to wait; things get a lot cheaper in a recession. With a cash reserve now over $3B (and growing), I think it's safe to assume NetApp always has one eye on the market but, as I'm sure all of us know, that's all the detail you can expect publicly. I think the toughest thing about this is that any speculation can be posted, but we can't speculate in return, at least publicly. We're in a no-win situation when it comes to this. But how about we take this list and use it to set up NDAs with all the customers who have commented? We can answer these questions there. Sound fair?
Chad, Informative and classy. Thanks for this detailed post. Here's to another year of healthy competition. Mike Riley NetApp
These are great specs, but you left out what really makes a guarantee a guarantee: the "or what." If you personally guarantee it and you can't fulfill any part of the guarantee, will you personally pay for the kit? There are no teeth in this guarantee, so you can put #18 in there: "You can reduce floor space by having your storage array hover approximately 8' off the ground."
Scott, the answer to your question above is no. EMC Virtual Provisioning is not the same as NetApp Thin Provisioning. EMC makes this point quite clear in their best practices guides. Example:

"Conceptually, the thinly provisioned storage pool is a file system overlaid onto a traditional RAID group organization of hard disks. This file system imparts an overhead on thinly provisioned LUN’s performance and capacity utilization. In addition, availability must be considered at the storage pool-level with thin provisioning. (Availability is at the RAID group level with traditional LUNs.) Workloads requiring a higher-level of performance, availability, and capacity utilization should continue to use traditional LUNs." -- EMC CLARiiON Performance and Availability: Release 28.5 Firmware Update Applied Best Practices

Not only is EMC capitulating to the need for a "file system" layer to provide such a function (ironic, given all the FUD they like to throw around about WAFL), but - and this is striking - they admit that you get BETTER availability, BETTER performance, and BETTER storage efficiency by NOT using virtual provisioning. NetApp recommends Thin Provisioning for better storage efficiency without sacrificing performance or availability. Huge difference. NetApp is a huge fan of TP while EMC is not. Example:

"I think thin provisioning is not-a-good-thing at a philisophical level. It has a role, but I'd recommend using it very carefully, if at all." -- Chuck Hollis, EMC VP of Global Marketing

I believe Chuck is sincere in his belief not to mislead customers. I think that's exactly part of the NetApp culture as well. From that standpoint, you would have to take EMC's documentation and Chuck at their word: don't use Virtual Provisioning if you care about availability, performance and storage efficiency.

As far as RAID-DP vs. RAID-1 is concerned, you can look at any number of third-party benchmarks, both solicited and unsolicited by NetApp.
NetApp provides as good or better performance compared to RAID-1, with statistically higher data availability, using RAID-DP. It's not marketing spin. It's just math. Now, if a customer wants a RAID-1 type of config, NetApp can provide that too. We do have customers running this config, as read performance can improve slightly and the customer can sustain 5 concurrent disk failures within a RAID group and still stay up and running with their data intact. Can EMC and NetApp give the same short answer on an RFP? Sure. But the devil is in the details, so no, I would not agree that Virtual Provisioning is the same as Thin Provisioning. There's a big qualitative difference there. Unfortunately for EMC, they are slowly starting to realize that it's difficult to bolt on and shim in these "file system" services after market across all of their sundry "purpose built" storage systems.
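For readers unfamiliar with the underlying distinction: thick provisioning reserves a LUN's full advertised size at creation time, while thin provisioning reserves physical capacity only as blocks are first written, which is what allows a pool to be oversubscribed. The sketch below illustrates just that mechanism; it is a generic, hypothetical model (all class names invented), not any vendor's implementation.

```python
# Generic illustration of thin vs. thick provisioning - hypothetical names.

class Pool:
    """A pool of physical capacity, counted in blocks."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.allocated = 0

    def reserve(self, n):
        if self.allocated + n > self.capacity:
            raise RuntimeError("pool exhausted")
        self.allocated += n

class ThickLUN:
    """Reserves its entire advertised size up front."""
    def __init__(self, pool, size):
        pool.reserve(size)

class ThinLUN:
    """Advertises a size but reserves blocks only on first write."""
    def __init__(self, pool, size):
        self.pool = pool
        self.size = size            # advertised, not yet backed by storage
        self.written = set()

    def write(self, block):
        if block not in self.written:
            self.pool.reserve(1)    # physical allocation happens lazily
            self.written.add(block)
```

With a 100-block pool, three thin LUNs each advertising 80 blocks can coexist as long as their actual writes stay small - 240 advertised blocks against 100 physical - whereas a second 80-block thick LUN in the same pool would fail at creation.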