This is Evan Plaice's Typepad Profile.
Evan Plaice
Recent Activity
@asbjornu You're assuming the W3C drives the direction of the web. They kinda lost that distinction with the whole XHTML 1.0/HTML 4.01 debacle. Last I checked, the WHATWG was the driving force behind HTML5, not the W3C. I would argue that they're probably one of the worst organizations to handle a Markdown specification; they're already too top-heavy. When it comes to creating a new spec, the group behind it needs to be small, influential, and highly motivated.
Commented Nov 8, 2012 on The Future of Markdown at Coding Horror
I see what you did there...

    def change_the_world():
        if platform == inconsistent or platform == irritating:
            incite_the_community(platform)
            wait(changes)
            return feeling_accomplished

    def incite_the_community(platform):
        article = complain_about_the_shortcomings_of(platform)
        for person in influential_people:
            person.notify(article)

Left to their own devices, implementations never fall in line with a standard. I'd bet that you have heard of 'confirmation bias' before. Let me present exhibit A. To break through that, the specification and the implementation need to be of a high enough caliber to hijack the 'Markdown' namespace. If that can be achieved, then every other iteration will be considered 'just another copy' and the non-standard branches will wither. The W3C did it with the HTML spec, Apple did it with electronics design, Google does it with everything they can. I'm not a sci-fi enthusiast, but even I know that '2001: A Space Odyssey' is the ubiquitous reference for all sci-fi. Which raises the question: why is that?

The interesting thing about OSS projects vs commercial ones is that in OSS the community becomes the currency. The larger the community, the better the feedback, and the faster the code quality increases. Conversely, the better the quality, the more people the project attracts. After a certain point the success of a project becomes a runaway effect. At least until somebody screws up (the project gets forked) or the platform the project is built on becomes obsolete.

I see your inspirational troll, but I like technical pissing contests as much as the next guy... First, for all the people who advocate the use of LL*, ANTLR, or equivalent parser generators, take a minute to consider the excessive amount of overhead those approaches create. You're talking about building a complete AST (Abstract Syntax Tree) with a ton of intermediate memoization for what should essentially be a simple top-down parser. It turns out that Chomsky was a pretty smart guy.
That may 'work' in local/browser implementations, but on the server side it won't scale for shit. I would argue that Markdown has a simple enough grammar that it should be possible to parse it with a Type 3 (regular) parser using a single-char regex matching + FSA scheme. We're talking no AST and very, very little overhead. The only memoization overhead expected is equal to the number of chars accumulated between state transitions (i.e. one string, no complex data structures necessary). We're talking a no-frills implementation, but it should be lightweight enough that further optimizations (ex. inlining) are rendered unnecessary.

The only exception is where code needs to be further processed, such as the numbered link style that SO uses (which I really like) and syntax highlighting. For syntax highlighting, it's trivial to add an inline parser hook that can be leveraged for additional processing. For the numbered links you can do a mark-and-replace in a second pass, which could be further optimized by marking string index positions on the first pass. In lower-level languages this could probably be optimized even further using non-null-terminated strings (i.e. ones that carry a length prefix), but I'm no prolific C hacker.

If you'd like to see a Type 3 parser in action, feel free to browse the source @ jQuery-CSV. I created it because I wanted to complete the first 100% RFC 4180 compliant parser written in pure JavaScript. jQuery isn't necessarily a dependency, but if I'm going to go through all the effort of hijacking a namespace, I might as well go for the biggest one. ;) It contains two minimal CSV-specific parser implementations, $.csv.parsers.splitLines() and $.csv.parsers.parseEntry() (the names should indicate what they do). The library also includes hooks to inline code into the parser stages for further processing (ex. auto-casting scalar values). I can't really take credit for the idea though.
The newest parser implementation was inspired by some very good suggestions made by the author of jquery-tsv. I didn't even know what a Type 3 parser was a month ago. Unlike the formally educated, I have zero formal education in programming; I just have a talent for picking this stuff up along the way. Will all of the half-assed CSV parsers that can be found on literally thousands of blogs disappear overnight? Of course not. They will still exist, but the power of branding is that a name can propagate much faster than a concept. I'm not sure if somebody is measuring, but I think we have a winner (me). Either that or my 'confirmation bias' is being a douche again. lol...
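To make the idea concrete, here is a minimal sketch of a Type 3 (finite-state) parser, my own illustration rather than the actual jQuery-CSV source: it steps through the input one character at a time and the only state carried between transitions is a single accumulator string, no AST.

```javascript
// A character-at-a-time FSA for a simplified CSV grammar (no quoted fields).
// No AST: the only memoization is the accumulator string, exactly the
// chars seen since the last state transition.
function parseLine(line) {
  var fields = [];
  var acc = '';                  // chars accumulated since the last transition
  for (var i = 0, l = line.length; i < l; i++) {
    var ch = line.charAt(i);
    if (ch === ',') {            // delimiter: emit the accumulated field
      fields.push(acc);
      acc = '';
    } else {
      acc += ch;                 // any other char: stay in the same state
    }
  }
  fields.push(acc);              // flush the final field
  return fields;
}
```

A real RFC 4180 parser adds states for quoted fields and escaped quotes, but the shape stays the same: `parseLine('a,b,c')` returns `['a', 'b', 'c']` with one pass and one string of scratch space.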
Commented Nov 3, 2012 on The Future of Markdown at Coding Horror
There's always the option to limit Google searches to a particular site. Ex. to search for pink butterflies on SO: "site:stackoverflow.com pink butterflies". But I agree. Google search is starting to suck, and syndication sites are taking over the internet. I really wish Google could find a way to drop the value of purely syndicated/scraped sites so the good content could be allowed to float back to the surface. I was considering writing something to the Stack Overflow team after this happened to me the first time but, obviously, plenty of other SO users beat me to it.
Commented Feb 6, 2011 on Trouble In the House of Google at Coding Horror
Just wanted to say thank you for implementing OpenID as the authentication platform for SE sites. The low barrier to entry is one of the primary reasons I log in to comment/contribute as often as I do. I only wish the Linux and open source development world would wake up and do the same. Nowadays, if a site requires registration to join the conversation, I don't waste my time.

Aside from the attaboys, there is one other key issue that OpenID addresses: email addresses are not a good form of identification. The sad fact that many people use the same password for their email accounts as they use on many other accounts creates a massive security risk. A really common attack vector is:

- gain a password for the account
- use that password to log in to their email (the email was the account username)
- scan email messages for information about accounts on other sites
- request a password reset be sent to that email from those other accounts
- gain access and change passwords on all accounts to limit legitimate access

By removing password storage and not requiring email credentials, the security risk is limited to the OpenID account itself and the OAuth servers where the OpenID account is stored. It's staggering for me to think of how many accounts, known or unknown, I have used similar authentication info on over the years. If my password variations were compromised, there's no way I'd be able to find all of the accounts to update the auth info.
Commented Jan 4, 2011 on Your Internet Driver's License at Coding Horror
If you use virtual desktops, three monitors aren't necessary... There are exceptions. I'd never use anything less than two monitors for electrical design work or any serious graphics work, because of the number of tools you need to use simultaneously. For programming, OTOH, one high-quality display can easily be enough if you have a good virtual desktop setup.

I currently use Linux Mint with 4 virtual desktops set up: one for coding, one for internet browsing (research/code reference), one for revision control, one for unit testing. If 4 isn't enough I can add more as I need them. The trick to using them effectively is having a good key combination setup. I use:

* Ctrl-Left / Ctrl-Right to cycle back and forth through the desktops (I think this is the default in *nix).
* Ctrl-Up for the Compiz 'Scale' plugin, which is the equivalent of Expose on Mac.
* Ctrl-Down for the Compiz 'Desktop Cube', which shows you the desktops in relation to the current one in a break-out view so it's easier to see where everything is.

That on a 15" laptop screen at 1920x1080 is perfect for my needs. If I had multiple monitors visible at any given time it would only distract me from writing code. It's much easier to focus on one desktop at a time.
Commented Jun 29, 2010 on Three Monitors For Every User at Coding Horror
Thanks a lot for the post. Especially the embedded video. I really love live drawing animations that are geared to prove a point (and the content in the video is definitely something I'm interested in). Here's a pretty good one about talent/luck, Also, if you haven't heard of it yet, I highly suggest you read "The Parable of the Monkeys" found here,
Commented Jun 24, 2010 on The Vast and Endless Sea at Coding Horror
if (perception == microsoftIsInnovatingWithVirtualization) return fail; :P

We're talking about a technological innovation that Mac OS X offered in its first release (8 YEARS AGO!!). Plus, virtualization in Windows is virtually useless as a platform. Sure, it's great as a sandbox for potentially risky stuff (like downloading illegal torrents, or hacking on the OS), but at the end of the day you're doing it in Windows (which is inherently risky).

I'll illustrate with an example... I wanted to try out Linux Mint on my laptop, but I didn't want to go through the hurdles of making a dual-boot system, so I installed it using Mint4Win (Wubi for Mint), which basically creates a virtualized install inside Windows. My computer eventually got infected (for the first time in 4 years) by a backdoor trojan leaked through a Flash animation (::shakes fist @ Adobe::) and my system is hereby hosed. Unfortunately, with the disintegration of Windows came the loss of my Mint virtualization. Why is this example relevant? Because if I was running a virtualization in *nix or OS X, I most likely would never have had to consider the possibility of a backdoor trojan, or of losing my virtualization to a virus.

Which leads to my point: Windows as an abstraction layer sucks (still). Ever wonder why people still choose XP over Windows 7 or Vista? Because Vista and 7 are the same crap that XP was; they just have a few fancy features that lead your attention away from the glaring deficiencies of the system. I.e., the foundation (platform, kernel, etc...) of Windows is fundamentally flawed, and Mac (and soon *nix) HAS proven that security and usability can coexist on a desktop platform. If it takes MS 8 years (XP to 7) to match the progress of Apple (OS 9 to OS X), how long will it take them to match the stability and flexibility of the security model? Here's a hint: as long as it'll take before I buy another one of their OSes.
@ Jeff You gotta take some time to crawl out of your Microsoft-dominant ecosystem every once in a while. The world of software platform innovation is not bi-partisan. On computers, Mac and Windows are still dominant, but only because, combined, they make up 99% of the pre-loaded operating system market. Even netbooks quit shipping preloaded with Ubuntu because of some sleazy back-door deals with Microsoft. Phones are a different story, and Android is the next real game changer. If opening up the mobile phone platform market is the next step in technological evolution, then Android beats the pants off the iPhone.

Take some time to do some dogfooding and eat some of your own advice. Don't write blog posts about the latest and greatest technological trends. We all understand. You got your first iPhone and it made you feel all warm and fuzzy inside. That doesn't mean you should blog about it. Because everything you stated here was irrelevant or outdated before you even pushed the post button. Google: Google Summer of Code. It has been in the works and largely off the radar for years now, but it is nevertheless a force to be reckoned with. Seriously tho, you didn't see this coming? ::sigh::
Commented Mar 30, 2010 on The iPhone Software Revolution at Coding Horror
@ Jeff "I think Windows does a good job utilizing the edge at bottom of the screen with the taskbar (which, by the way, unlike Apple's one-screen menus, can spread across multiple monitors in a logical way) and start menu. I just wish we had more stuff along the top edge!"

Fitts' pornography or not, "Start Menu" = fail, and Windows still fails miserably in regards to usability overall.

1. The main menu is on the bottom. People naturally read from top to bottom, left to right (unless you're reading Hebrew or Arabic). Try dragging the taskbar to the top of the screen. After adjusting to it, everything will feel easier and more natural.

2. You always have to mouse over (and wait for the delay on) a link (All Programs) just to see the second level of the menu (which is what you really wanted to see 95% of the time anyway). I work REALLY fast when I get into the zone. Every time I hit that 500ms-1sec wall I scream subconsciously.

3. There's no logical categorization of applications. The start menu is at the complete mercy of the individual application developers. Which means they can (and will) place folders with their company's name, followed by a sub-menu with their application. Or multiple sub-menus with irrelevant/useless tools or links to market their website. <sarcasm>Let me tell you, I love whoring out my OS to application marketing</sarcasm> <sigh />.

4. What is an operating system's sole purpose? Serving files and applications, right? Then doesn't it make sense for links to files (My Computer/My Documents) and applications (Start->All Programs) to not only have the same weight (size, dimensions, relative location), but also to be sufficiently large to make them easiest to click with the mouse?

That's why docks kick a**. Note: not to be confused with docking <NSFW> </NSFW> (sorry, couldn't resist >:P). Not only does a dock live on the edge of the screen (usually bottom or left, depending on user preference), but the icons you mouse over grow in size, raising their weight (relevance) in importance (the app you want to use) and making it absolutely obvious which application you're going to open.

I could go on for hours about the shortcomings of the Windows UI (the "My" prefix <sigh />, etc...) because I've spent countless hours hacking it into something friendlier to my workflows. Don't get me wrong, I love working in Windows. I just hate the shell's UI. To avoid making this post any longer, here's a screenshot of my current *nix layout to illustrate.
Commented Mar 28, 2010 on Fitts' Law and Infinite Width at Coding Horror
Whitespace is like a virtual camouflaged predator waiting to viciously shank you through the guts while you're staring into its invisible face. All the while, Linus stands by with his little pet git, saying "serves you right, dumb-s***, I told you you weren't as smart as me." I hate you, whitespace.
Commented Mar 27, 2010 on Whitespace: The Silent Killer at Coding Horror
Thank you for this article. Just a week ago I was tearing my hair out, raving and ranting, "why the hell aren't those Unicode standards c*&k s&#%#@s smart enough to create a universal cross-platform CR/LF/CRLF equivalent!!!" The reason being: I was doing work on an open source .NET library where my work was being done using VS2008 on Windows and the project admin was working in MonoDevelop on *nix. git has a hackish solution (autocrlf) to automatically convert line endings, but it sucks. I.e., if a file with mixed line endings accidentally gets through in a commit, you're SOL. So, I was forced to resolve them myself.

In VS2008, line endings are handled based on the type already used in the file. For instance, if the file was created in *nix, it handles the line endings as LF. If it sees inconsistent line endings (or if it randomly feels like seeing if it can trick you into changing to Windows CRLF line endings) it pops up the menu you illustrated. The major shortcoming of VS2008's line ending handling shows up when one of the project XML files needs modification (ex. settings, .vsproj, etc...). Since those are automatically generated by VS, they always write line endings as CRLF. Which led me to my final evaluation: VS sucks at line endings.

After getting tired of opening .csproj files in Notepad++ and converting the line endings manually, or having patches rejected because I forgot to convert a file, I finally threw in the towel and partitioned my HDD to dual-boot Windows/Linux Mint. Long story (not-so-)short: creating the source files in Unicode and using LS (the Unicode line separator) as the default line ending would completely solve this issue. No more line ending woes. Now, I wonder if both VS and MD support it. This article (and discovering the LS char) finally gave me a viable reason to use Unicode. ::NapoleonDynamiteSigh:: I was much happier in the days when I was still ignorant of what the term "line ending" meant.
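For anyone who has to do the conversion in code rather than trusting autocrlf, the usual normalization trick (my own sketch, not something from the original comment or from jQuery-CSV) is a single regex pass that maps CRLF and lone CR down to LF:

```javascript
// Normalize mixed line endings to LF in one pass.
// The alternation order matters: '\r\n' must be matched before a lone
// '\r', otherwise a Windows CRLF would produce two newlines.
function normalizeNewlines(text) {
  return text.replace(/\r\n|\r/g, '\n');
}
```

Running it over a file before committing sidesteps the mixed-endings-in-one-commit problem entirely, since every line comes out as LF regardless of which OS wrote it.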
Side note: it's still useful to have LF work as a linefeed. Mostly in interactive console apps where you're trying to update a status during processing: instead of doing a ClearScreen and reprinting all the lines with the status updated, or printing a new line for every update (filling the screen with updates), you can just print it followed by the updated message and overwrite the same line. I know the "Home" key on the keyboard is useless to 99.7% of average computer users, but I use it all the time.
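For what it's worth, on most modern terminals the character that jumps the cursor back to the start of the current line is actually CR ('\r'), so an in-place status update (my own Node-flavored sketch, not part of the original comment) looks like this:

```javascript
// In-place progress updates: '\r' moves the cursor back to the start of
// the current line, so writing the next status overwrites the previous one.
function formatProgress(percent) {
  return '\rProcessing: ' + percent + '%';
}

// process.stdout.write (unlike console.log) appends no trailing newline,
// so successive statuses land on the same terminal line.
for (var p = 0; p <= 100; p += 25) {
  process.stdout.write(formatProgress(p));
}
process.stdout.write('\n'); // finish with a real newline
```

The same trick works in any language that exposes raw writes to stdout; the only requirement is suppressing the implicit newline between updates.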
Commented Mar 27, 2010 on The Great Newline Schism at Coding Horror
LOL. Looks like you did a sufficient job of inspiring one hell of a micro-optimization theatre in the comments with this thread. I completely agree that LinqToSql is a very convenient and useful tool. Since I learned it, I have had only one instance where I'd use traditional SPROCs over it, and that required an extensive read/process/write step to occur in no more than 100ms (obviously not web stuff).

If there is no better example, the comment thread of this post poses a perfect case where the word scalability has become the next-gen ::cringe::paradigm::/cringe:: "web 2.0" of software development. Meaning that any jackass who can write a loop and store a date/time value will be partaking in creating comprehensive statistical explanations of why solution x is faster than solution y. Welcome back to bikeshed painting 101. Disclaimer: I can't claim complete innocence; I have painted that bikeshed my fair share of times too.

I think the real concept of scalability represents 2 things:

1. Raw performance. If the number of executions is increased at a linear rate, is the processing time growth linear, exponential, or logarithmic? I'd consider logarithmic growth = scalable and optimized, linear growth = scalable, and exponential growth = not scalable. Ex. if your site grows 10x in popularity, is it going to need 5x, 10x, or 100x the servers to keep up with the demand?

2. Application domain. You put the difference into perspective perfectly: "Let's do the math, even though I suck at it. Each uncompiled run of the query took less than one third of a millisecond longer." Once again, you prove that you're smarter than the average code monkey, and that's why I like your blog. I can actually feel myself not grow dumber the longer I read your material. :P

Unfortunately, scaling has hit the "scene" and all the code monkeys have their panties in a bunch. There's a mass of really bad/incorrect examples going around about optimization.
It's gonna take a colossal amount of panty un-twisting to fix it. For websites specifically, I'd specify the domain range as 7 seconds. You have 7 seconds to load everything before the tip of the ADHD-afflicted masses start to flee in droves, followed by average people (15 sec), and finally the brutally patient (whatever the timeout rate is). If your page can't load in less than 7 seconds, you're doing it wrong. There are much better optimizations that can occur in this scope (eliminating file requests from the server, or from multiple servers ::cringe::) than trimming a few ms off a DB query. My .02 on perf.

@ Ric Johnson Although most people don't know it, Linq is about more than SQL ORM. Linq can be performed on XML, collections, etc... For anybody who has used Linq extensively, it's pretty obvious that the functionality doesn't do a good job of supporting changes in the database model. I think they're also steering away from LinqToSql because it only supports MsSql. What about all of the other database systems out there? LinqToOracle, LinqToMySql, LinqToPostgre, etc... The name and application itself isn't general enough to cover all the systems that people will eventually expect of it. Neither Linq nor LinqToSql is going anywhere. Linq will be around forever, and LinqToSql will sit quietly in the .NET Framework and do what it does best. I think their development emphasis will just be directed more toward Entities for the ORM part and LinqToEntities for the querying part.
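The growth-rate distinction in point 1 above can be made concrete with a toy calculation (my own illustration, with made-up cost functions, not anything from the original thread): compare how much work each growth class implies when load goes up 10x.

```javascript
// Toy illustration of the three scalability classes: the work required
// at a given load under logarithmic, linear, and exponential growth.
function workAt(load, growth) {
  switch (growth) {
    case 'logarithmic': return Math.log2(load + 1); // sub-linear: scalable and optimized
    case 'linear':      return load;                // scalable: 10x load -> 10x work
    case 'exponential': return Math.pow(2, load);   // not scalable
    default: throw new Error('unknown growth model: ' + growth);
  }
}

// 10x the traffic: a linear system needs 10x the servers...
var linearRatio = workAt(100, 'linear') / workAt(10, 'linear'); // 10
// ...while a logarithmic one needs well under 2x.
var logRatio = workAt(100, 'logarithmic') / workAt(10, 'logarithmic');
```

The point of the exercise is that "scalable" is a statement about the ratio, not the absolute number: shaving a fraction of a millisecond off a query changes neither class.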
Commented Mar 27, 2010 on Compiled or Bust? at Coding Horror
So, I just read my previous comment and... I apologise for the harsh tone. Tom, I think the reason most OOP proponents can't come up with a decent answer to the question "what is so great about OOP?" is that most people are introduced to OOP in an academic setting, where their heads are drilled with everything OOP, from abstracts to patterns. So, when they finally get to using it, they think they need to use every tool in their repertoire, if for no better reason than to prove that they can. The other backlash of that academic mindset is that OOP is inherently more complex than procedural programming. Having a post-grad CS major try to teach the concept of OOP to a procedural programmer is like a schizophrenic bashing his head against the wall to get the voices out. It may get through, but it's guaranteed to be a painful experience.

Now, back to the point. Why is OOP useful? Look at procedural programming: in its simplest form it's a pretty easy concept to grasp. Execution starts at one point, does some stuff, and either exits or continues in an infinite loop. When code gets sufficiently complex, you abstract away certain processes into functions. Now you can call a specific action, give it the data it needs, and it spits out a result. No need to know about its internals as long as you have a good idea what the returned result is supposed to be. It has been proven time and again that procedural programming can model/describe anything. Almost all of your variables are scalar, so their functionality is pretty obvious. Easy peasy.

Enter OOP. OOP in its most basic form is just a way to group common variables and functions. If you were modeling a real-time simulation of aircraft engines, it would make sense to prototype a set of functions that do what an engine does and then call that set in order, one time for each engine.
In OOP you model the engine in a class, instantiate an object for each engine, and tell them all to run; all variables and functionality are internal to the engine, so you don't have to worry how it works.

Procedural programmers and OOP programmers see the world through different lenses. Procedural programmers see bottom-up, as in: what needs to take place to make something happen. OOP programmers see top-down, as in: here is a model of everything that makes up our little world; now, how the hell do we make it all work together?

The best part about OOP is that it's OOP. It's all about modeling objects and their interactions; it introduces the noun (object) to programming, whereas procedural is just verbs (operations, decision logic) and state (variables). The worst part about OOP is that it's OOP. For every level of abstraction there are more rules (and who likes rules?). Remember what it was like to learn what variable scope meant for functions? I'll call that the first level of abstraction. OOP adds multiple levels on top of that. The next is class scope (public, private, internal, readonly, const for variables and properties; and public, private, and internal for methods). Next, class type scope (static, instance). And finally, inheritance scope (virtual, sealed, override, and abstract). Not only that, but how the hell do you know what order everything is executed in?

I agree, OOP sucks... to learn. And everybody who is learning it will write a lot of crappy, hackish code trying to find ways around the limitations of what they don't know about OOP yet. And the only way to become proficient is by writing a lot of crappy, hackish code. In fact, the day a person becomes proficient in OOP is the day they can sit back and ask, "what's the best way to make this code as clean as possible?" instead of "how the hell am I going to get this damn thing to work?" There's a long series of "oh s***" moments one has to experience to get there, and it's a long process.
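The engine example above can be sketched in a few lines (my own illustration, in JavaScript for brevity; the class and member names are made up): the state and the functions that act on it are grouped together, and each engine instance manages its own internals.

```javascript
// The aircraft-engine example as a class: common variables and the
// functions that operate on them live together, per instance.
class Engine {
  constructor(name) {
    this.name = name;
    this.rpm = 0; // internal state: callers never touch it directly
  }

  run() {
    this.rpm = 2400; // internal detail: callers just say "run"
  }

  isRunning() {
    return this.rpm > 0;
  }
}

// Instantiate one object per engine and tell them all to run;
// how each engine works is its own business.
var engines = [new Engine('port'), new Engine('starboard')];
engines.forEach(function (e) { e.run(); });
```

This is exactly the "group common variables and functions" point: the procedural version would pass an engine record into each function by hand, while the class version lets each object carry its own state.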
Especially if you already have a clear alternative for tackling the same problem, like extensive experience writing procedural code. Basically, if you're worried about design patterns and you don't know the difference between public, private, internal, static, override, virtual, sealed, abstract, etc., you're doing it wrong.

Sorry about the length of my post. I figured I'd make an attempt at explaining a realistic perspective on OOP from someone who doesn't masterdebate to class diagrams of design (anti?) patterns. I wish I could reference a book that takes a similar approach, but AFAIK it doesn't exist yet. The books about OOP today suck at explaining how to program in OOP for the same reasons that professors suck at producing goods and services: although they're very interesting, they're not very useful. Theory != application, and extensive theory without any application is just gibberish. Sorry I didn't include an example along the lines of business modeling. I haven't done any yet, so I wouldn't know where to begin.
Commented Mar 26, 2010 on The He-Man Pattern Haters Club at Coding Horror
Evan Plaice is now following The Typepad Team
Mar 26, 2010