This is Mecki's Typepad Profile.
Mecki
Recent Activity
The terms are just freaking hilarious! :-) And there is so much truth in them. I've seen most of them in real life so many times, some on a daily basis.
Commented Sep 26, 2012 on New Programming Jargon at Coding Horror
My $0.02 regarding the forum layout as in the first screenshot above: I hate it! I really, really hate it. I grew up with discussions on Usenet. Usenet clients typically display discussion threads as a tree, which is much easier to follow than "linear blocks", where one block refers to the block above, another one to a block three pages back and a third one to five different posts scattered all over the place; I cannot follow such discussions comfortably. Also, on Usenet people quoted what other people had said and put their reply below the quote(s) inline; I hate top posting (first posting the reply, then quoting the text you replied to), and it's even worse if there is no quote at all but instead somebody replies to something someone else has said somewhere else, and I have no idea what he said, when he said it, where he said it or why he said it (since there is no context provided either).
Commented Oct 13, 2011 on The Gamification at Coding Horror
Since accounts are free, banning a user is suboptimal. As soon as he notices he got banned, he can just create a new account and stir up trouble again. Thus a hidden ban is definitely preferable to one where the user knows he got banned. However, all the bans you mentioned above will sooner or later cause the user to notice he got banned, so all these bans are suboptimal, too.

I would personally go with a gradual hellban system instead. Every user gets a trouble counter. People with enough reputation (or possibly only moderators) can vote to ban a user; if a user gets enough votes, his trouble counter is increased by one. On the other hand, you said you are against bans for life, so you could have an automated unban system: every 7 days the trouble counter of a user is decreased by one, so if a user behaves okay for long enough, his trouble counter will eventually reach zero again.

The trouble counter is used to calculate a partial hellban: a trouble counter of 1 means 50%, a trouble counter of 2 means 75%, a trouble counter of 3 means 87.5% and so on. 50% means that 50% of everything a user does (asking a question, answering a question, posting a comment) is hellbanned, or in other words, only every second question/answer/comment of this user is ever visible to other users; the other half is visible to the user himself (and maybe moderators), but not to any other community members. That way a user can stir up much less trouble than he could before. If he still stirs up too much trouble, his trouble counter will increase and even more of his contributions are hellbanned. However, since not everything is hellbanned immediately, he will still get feedback on some contributions and thus won't immediately notice he got banned, as he cannot tell whether he got no feedback on other contributions because of a ban or just because people keep ignoring him.

This form of soft hellban is much more democratic than a full hellban, since you are not taking any rights away from this user, you only "limit" his rights temporarily. It is like putting someone in jail, which is not supposed to take his freedom away for good, but to limit it temporarily (even a prisoner has a certain degree of freedom left in most countries; a lot less than he used to have, of course). In your case, you are limiting this user's right of free speech, not by taking this right away, but only by making him produce less noise within a certain community for a certain amount of time.
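To make the numbers concrete, here is a rough sketch of that partial hellban logic in Python (all names are made up; this is just an illustration, not any real forum's API):

    import random

    def hidden_fraction(trouble_counter):
        # 0 -> 0%, 1 -> 50%, 2 -> 75%, 3 -> 87.5%, ...
        return 1.0 - 0.5 ** trouble_counter

    def contribution_is_hellbanned(trouble_counter):
        # Randomly hide this particular question/answer/comment from everyone
        # except the author (and maybe moderators).
        return random.random() < hidden_fraction(trouble_counter)

    def weekly_decay(trouble_counter, days_since_last_vote):
        # Automated unban: every 7 days of good behaviour removes one point.
        return max(0, trouble_counter - days_since_last_vote // 7)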
Commented Jun 6, 2011 on Suspension, Ban or Hellban? at Coding Horror
Actually, some guy from AdLib (a company that went bankrupt in 1992(!), some of you might still remember them) already said that many years ago: "If Moore's law holds true for the next couple of years, it is only a question of time till a soundcard becomes just a piece of software and the only hardware involved will be a DAC (Digital to Analog Converter) on your motherboard". And he was right, I'd say. Today, a CPU can do in software what was only possible with dedicated signal processors a couple of years ago, and it won't even go beyond 10% load of a single core when doing so.

The strange thing is that people refuse to learn from the past: when I say today that one day a graphics adapter will be nothing more than a piece of software (meaning dedicated GPUs will die), I'm being laughed at; people actually get really upset and try to explain to me why this can never be the case and why I'm an idiot. But let's face it: everything a dedicated GPU can do can also be done by a CPU. And dedicated GPUs get closer and closer to general-purpose processors. Not too many years ago, pretty much everything was hardcoded in a GPU and it only supported the operations really necessary to bring a 3D scene to the screen. Today, pretty much nothing is hardcoded anymore; developers write vertex and fragment shaders (alias pixel shaders), as well as whole fragment programs. Actually, using OpenCL you can perform any kind of computation on a GPU just like you can on a CPU. On the latest CPUs, a software implementation of OpenGL 1.0 can run faster than a real OpenGL 1.0 implementation ran on a GPU at the time OpenGL 1.0 was released!

GPUs are still significantly faster than CPUs today, since they are very limited (they support far fewer operations than a CPU, but those operations are optimized to the max) and they are optimized for parallelization (your CPU might have 4 cores, but your GPU might have 32 shader pipelines, meaning it can perform 32 calculations in parallel). However, they also run at lower clock speeds in general, and with the increasing shader capabilities, GPUs need to support more and more operations that cannot be optimized beyond a certain point (e.g. conditional jumps!) and that may also hinder further parallelism. CPUs catch up because they run at higher clock rates, their number of cores keeps growing (8 cores are available today; in a couple of years, 16 cores might be normal for a consumer CPU) and they keep getting better instructions with each new CPU generation (SSE4.2 will soon be replaced by SSE5, and furthermore AVX is almost ready and will give x86 CPUs a huge speedup).

Sure, if GPU development continues as fast as CPU development, CPUs won't ever overtake GPUs... however, a company like AMD (who bought ATI and thus is NVidia's biggest competitor) might say one day: given the enormous speed of our CPUs, we stop GPU development altogether. And for many occasional players, a CPU that can render current DirectX 10 games with all effects enabled completely in software at frame rates of 30+ FPS is all they need. And if you look at CPU-to-GPU speed comparisons, you'll notice that CPUs actually are catching up, because they are currently evolving somewhat faster than GPUs (whenever GPUs have doubled their speed, CPUs have almost tripled theirs in the same time).
So just as soundcards are only for sound/music enthusiasts today, 3D graphics adapters will only be for hardcore gamers one day; for the rest, a software implementation on the CPU will cut it, just as it does for soundcards today.
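Just to illustrate the catch-up arithmetic with a toy calculation (purely made-up numbers, not a benchmark):

    # If GPUs double and CPUs triple their speed per generation, the GPU's
    # relative advantage shrinks by a factor of 2/3 each generation.
    gpu_speed, cpu_speed = 100.0, 10.0   # arbitrary units, GPU starts 10x ahead

    for generation in range(1, 7):
        gpu_speed *= 2
        cpu_speed *= 3
        print(generation, round(gpu_speed / cpu_speed, 2))
    # The ratio drops from 10x toward roughly 0.88x after six generations.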
Commented May 5, 2011 on Who Needs a Sound Card, Anyway? at Coding Horror
Net neutrality means all bits are treated alike. Some people here argue that this is bad because it has a negative impact on innovation. However, what shall I prioritize? If I prioritize everything, I'm back to no prioritization at all, so that is pointless. E.g. YouTube videos might need more bandwidth to play fluently, VoIP may need low latencies to work correctly, and so on. And already here we have a conflict: either I prioritize YouTube traffic (which gets more bandwidth) or I prioritize VoIP traffic (which gets better latencies); I cannot prioritize both. If I go for YouTube, videos will transfer quickly, but they will eat so much bandwidth that VoIP becomes impossible. If I go for VoIP, telephone calls will work nicely, but now videos may no longer download in realtime. It's always bandwidth vs. latency. And what about people who don't care for either one? They never watch YouTube videos, they never make VoIP calls. However, they may use another service, e.g. video chat via Apple's FaceTime. Every Internet user has different requirements; if I prioritize his services, he will probably be very happy, but if not, his Internet experience will suck. Assuming I am an Internet provider, who gives me the power to decide which services should run fine for my customers and which services should suck? Do I even know what my customers are using the Internet for and why they have bought broadband access? Should I even care?

ISPs are only against net neutrality for a single reason, and it is not to bring innovation to anyone or improve anyone's Internet experience, it is "making more money". If you are allowed to discriminate traffic, you can *blackmail* (and IMHO it is just that, blackmailing) a company like Google, eBay or Facebook and tell them "If you don't pay us $... per month, we will set your traffic to lowest priority; BWAHAHAHAHAH". If the law says traffic must be neutral, this kind of blackmailing is simply impossible.

Net neutrality does not mean that an ISP must treat every customer equally. Of course not every customer gets the same bandwidth, but only what s/he paid for, and of course a customer generating an excessive amount of traffic might get throttled if his contract with the ISP allows the ISP to do so, but in every case this happens for ALL traffic of this customer and not for traffic of specific services.
Commented Feb 17, 2011 on The Importance of Net Neutrality at Coding Horror
Two comments:

1. I use LastPass. http://www.lastpass.com It is free and it integrates very well into Firefox, IE, Chrome and Safari. Since LastPass remembers my passwords for me "in the cloud" (as people call it today), I have my passwords in every browser and wherever I go, as long as I have Internet access there. Since I only store my Internet passwords there, I won't need access to them if I have no access to the Internet. And since I don't have to ever remember any of those passwords again, all my passwords for all my accounts are different and they are all combinations of random letters (upper and lower case), numbers and punctuation characters, always as long as the service allows (up to 32 characters). Guessing them? Impossible. Brute force? If you have the time ;-) If one gets compromised? No problem, it has no effect on any other account and I can just pick a new random one for the compromised account. What if LastPass itself is compromised? No problem, passwords never leave the computer unencrypted, so not even LastPass could recover them; the data from LastPass is completely useless unless the attacker knows the Master Password, and my Master Password is very long and immune to dictionary attacks. Furthermore, I change it whenever I feel like it.

2. Regarding your Internet driver's license, you will probably love the new German identity card. It is a normal identity card, works like a passport within the European Union, with picture and standard ID information (name, address, date of birth, etc.). However, it also has a chip inside and, by plugging a USB device into your computer, you can use it to authenticate online. You can either authenticate as yourself (with your real name and address), or you can use the pseudonym function to authenticate as the person owning the ID card, but without transmitting any personal information (other than that you own the ID card and the PIN to use it). To use the card, a 6-digit PIN is needed, which is of course secret, and the card is locked after 3 incorrect attempts - too few to guess a 6-digit number. The pseudonym function is pretty cool: the site sends a site identifier, the ID card takes this identifier, mixes it up with a unique number stored inside the chip, hashes the result and returns it back to the site, but only if the correct PIN was entered. To avoid man-in-the-middle attacks, a site must authenticate towards the ID card and the card towards the site in such a way that a man in the middle will fail, even if he can see and modify all traffic in between the two (similar to SSL certificate authentication or how VPN tunnels are established based on certificates). Another very useful online function of the ID card: age verification. It can verify that you are above a certain age without revealing your real age or any other personal information to the site owner. The card owner is always shown which information will be revealed to a site owner and it's up to him to allow that by entering the PIN.
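The pseudonym function boils down to something like this rough Python sketch (just my illustration of the idea, not the actual protocol of the German ID card; the secret value and the names are made up):

    import hmac, hashlib

    CARD_SECRET = b"unique number stored inside the chip"  # made-up value

    def pseudonym_for_site(site_identifier, pin_is_correct):
        # Only answer at all if the correct PIN was entered.
        if not pin_is_correct:
            return None
        # Mix the site identifier with the card's secret and hash the result:
        # the same card always gives the same pseudonym to the same site, but
        # pseudonyms for different sites cannot be linked to each other.
        return hmac.new(CARD_SECRET, site_identifier.encode(),
                        hashlib.sha256).hexdigest()

    print(pseudonym_for_site("shop.example", pin_is_correct=True))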
Commented Dec 14, 2010 on The Dirty Truth About Web Passwords at Coding Horror
An ergonomic layout is more important to me than mechanical feedback. The MS Natural 4000 keyboard may not be mechanical, but I can feel very well when I have pressed a key deep enough, and I don't need to feel when I released the key far enough, because I simply release the key all the way. What annoys me most about all your candidates is the space key. Look at the space key, it's "edged". That means my thumbs rest on an edge and when I press it, the edge pushes itself into my flesh, and that will hurt if you type on the keyboard 6-8 hours a day. A space bar must be rounded to be comfortable; everything else might have been good in the 80's, but it's not good enough for this century any longer. Look at this old Apple keyboard: http://upload.wikimedia.org/wikipedia/commons/e/e4/Apple_USB_Keyboard_B.jpg I loved it, because the space bar was really good (and also the ALT/CMD/CTRL keys). Then Apple replaced it with this keyboard: http://upload.wikimedia.org/wikipedia/commons/1/1b/Apple_Pro_Keyboard_%28open_top%29.jpg I hated this one, because all the keys in the lowest row were so edged that it was really unpleasant to use. A keyboard doesn't have to be split to be ergonomic, but it must have a rounded/flat spacebar, and it needs keys that you can push down with little force and that are softly decelerated. I once had a keyboard where the keys were stopped abruptly. I typed on it for two weeks... then I had a tendon inflammation and couldn't touch a keyboard for 6 weeks! That's because all the "motion energy" is pushed back into your finger if the key is stopped abruptly. It's the same difference as stopping a car by hitting its brakes (decelerating it) and stopping a car by driving it against a concrete pillar (stopping it abruptly).
Commented Oct 25, 2010 on The Keyboard Cult at Coding Horror
Interesting topic, indeed. Speech recognition, no matter how good it might become, will never work for all aspects of human-computer interaction. Very useful for physically challenged people for sure, but your scenario with the room full of people trying to control their computers is a good example of why it won't work in practice. It is already disturbing enough when somebody next to me starts talking out loud and I wonder if he's talking to me, only to find out he's using his Bluetooth headset for a phone call. Another problem is that human beings can understand "context" and "situations", computers cannot. So if there are 10 people around me and I start talking, people will know when I'm talking to one of them, or to a group of them, or to someone else. They will know either by where I'm looking, whom my eyes are focused on, or by the context of what I say. How can a computer know whether something is a command for it or talk directed at my coworker? When I say to my coworker "Just go to Google and search for ...", I don't want my computer to do this; how shall my computer know? My biggest gripe with computers is mice. I use a trackball, which I consider much better, but still not perfect. Touchscreens, touchpads? Don't like them. Keyboards with a touch-sensitive surface and gesture recognition? Don't like those either, because they are basically touchpads. My dream is that one day I can have a normal keyboard, with normal keys, for typing, but I can just lift my hands a bit and make a gesture in the air and the computer will understand it, so I don't have to move my hands far away from the keyboard at any time just to move a window to the left, make it bigger or open a menu. Sure, you could do all this with keyboard shortcuts, but that is not as effective as using a mouse, at least not with the operating systems I have to work with.
Jeff, you are a programmer, aren't you? So I fail to see your problem. Okay, I agree with you that there are some alignment issues in CSS - sometimes things are extremely easy to do with tables and extremely hard to do with CSS - but that is a different issue. Regarding your request to turn CSS into a programming language: why don't you write CSS templates, run them through your own pre-processor and dynamically serve the result? Nothing is easier than doing a bit of search and replace using PHP (or any other language with regex support) to transform your CSS templates into real CSS. People building huge webpages are used to the fact that pretty much no HTML is static; everything is dynamically created by PHP, Perl, Ruby, Python, Java or some other comparable language. I bet Stackoverflow works like this. At the same time you request that CSS must be all static and "interpreted" by the browser. In the same way I could request that all HTML must be static and that there is an HTML meta language that does all HTML processing in the browser.

At the time being, you have two options: either you do things in the browser using JavaScript (JS can transform/alter HTML, just like it can transform/alter CSS on the fly, no big deal) or you do it on the server. The latter case has the advantage that your page also displays right if the browser has no JS support or JS is turned off. As you decided to do HTML generation on the server, why do you want CSS generation to happen on the client? Why doesn't your PHP code also dynamically create the CSS for you? If you don't want to use PHP, use any other pre-processor. There exist tons of free pre-processors, written in C (very fast!), that you can use as a replacement for the GCC C pre-processor (which doesn't work right if you are not feeding in C/C++ source files). For example:

    #define COLOR #3bbfce
    #define MARGIN 16px
    #define ROUNDED_CORNERS(radius) \
        -moz-border-radius: radius; \
        -webkit-border-radius: radius; \
        border-radius: radius

    #header { ROUNDED_CORNERS(10px); }
    #footer { ROUNDED_CORNERS(5px); }
    .content_navigation { border-color: COLOR; }
    .border { padding: MARGIN; margin: MARGIN; border-color: COLOR; }

A C-like pre-processor cannot perform calculations, but other pre-processors can. And have you ever considered creating your OWN layouting language? Instead of using HTML/CSS directly, have you considered using a language that perfectly fits your personal needs and then having a big PHP library that transforms your own language into HTML/CSS whenever needed? To not kill your servers, you could cache the result and only re-translate a template if it has changed. Your meta language, Markdown, is also translated to HTML on Stackoverflow; why not do the same for the whole rest of the page as well? It is not a performance issue and, using a high-level language, it's not such a big coding issue either.
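And if you'd rather not abuse the C pre-processor, the same search-and-replace idea is only a few lines in any scripting language; here is a rough sketch in Python (the @variable syntax and all names are made up):

    import re

    CSS_VARS = {"color": "#3bbfce", "margin": "16px"}

    TEMPLATE = """
    .content_navigation { border-color: @color; }
    .border { padding: @margin; margin: @margin; border-color: @color; }
    """

    def render_css(template, variables):
        # Replace every @name with its value; unknown names are left untouched.
        return re.sub(r"@(\w+)",
                      lambda m: variables.get(m.group(1), m.group(0)),
                      template)

    print(render_css(TEMPLATE, CSS_VARS))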
Commented May 3, 2010 on What's Wrong With CSS at Coding Horror
Fitts' Law is the reason why Apple puts the menu bar for every app at the very top of the screen. If menus are part of the app windows and you have plenty of them open, hitting a menu can be quite tricky. I never thought about that while I was a Windows user, but since using a Mac, I notice the difference each time I sit in front of a Windows/Linux computer. Hitting menus is just so much easier and faster on a Mac, because your mouse stops at the top of the screen, where the menu bar is located.

However, even when placing "dangerous" buttons far aside, they remain dangerous. I think every dangerous button must have a safety net. That is, either:

- the button has no immediate and permanent consequence. In that case the button can just perform the action, but the app should offer the user a way to undo the action if he pressed the button by mistake.

- the button has an immediate or permanent consequence. In that case it should ask the user again. On the Mac the typical dialog will have a closing sentence: "This action cannot be undone." To not bother the user too much, there is a "Don't bother me again" checkbox that you can check... at your own risk, of course.

Another idea is to put a safety cap over the button. You know, like in the movies :-) You cannot press the button, because there is a "cap" on top of it. You first have to "open the cap" before you can press the button below it. In the case of software, the button would not be directly visible; you first have to press somewhere to make the button appear and then you can press it. A button behind a safety cap needs no safety net; even if the action cannot be undone, there is no reason to ask the user again, since a user can accidentally click on the wrong button... but he will usually not accidentally click twice on the wrong button. Clicking twice can be seen as "Yes, I want to do that and YES, I know what I'm doing! Don't dare to question my intentions."
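The safety cap idea is basically a tiny two-step state machine; a rough sketch in Python (made-up names, not tied to any real UI toolkit):

    class SafetyCapButton:
        def __init__(self, action):
            self.action = action
            self.cap_open = False   # the "cap" is still closed

        def click(self):
            if not self.cap_open:
                self.cap_open = True        # first click only opens the cap
                return "button revealed"
            self.cap_open = False           # close the cap again afterwards
            return self.action()            # second click performs the action

    delete_everything = SafetyCapButton(lambda: "everything deleted")
    print(delete_everything.click())   # -> button revealed
    print(delete_everything.click())   # -> everything deleted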
Commented Mar 24, 2010 on The Opposite of Fitts' Law at Coding Horror
Mecki is now following The Typepad Team
Mar 24, 2010