This is Glen B. Alleman's Typepad Profile.
Glen B. Alleman
Boulder, Colorado
Performance-Based Project Management®
Interests: Earned Value, Risk, Cost, Program Performance, Integrated Master Plan, Integrated Master Schedule.
Recent Activity
In our domain, Jon Katzenbach's definition of a team informs how we interact with our project members. A Team is defined as ... A group of qualified individuals who hold each other mutually accountable for a shared outcome - Katzenbach,... Continue reading
Posted yesterday at Herding Cats
In software development, we almost always encounter situations where a decision must be made when we are uncertain what the outcome might be, or even uncertain about the data used to make that decision. Decision making in the presence of uncertainty... Continue reading
Posted 4 days ago at Herding Cats
A Tweet caught my eye this weekend Before moving to risk let's look at what Agile is Agile development is a phrase used in software development to describe methodologies for incremental software development. Agile development is an alternative to traditional... Continue reading
Posted 5 days ago at Herding Cats
The self-selected comment is now seen as in error, in the context of your firm's sampling. Boehm's "cone" is based on TRW work, where I also worked - not with Barry, but in the same building - O6 - on TDRSS and other satellite work. With the reference to your full presentation, my original post of the single chart has hopefully been corrected. The challenge we face in DOD is determining the Root Cause of "over target baseline" conditions and supplying corrective actions. One critical outcome of our work - and one not mentioned in your full presentation - is the need for "schedule and cost margin" for the irreducible variances in the work: Aleatory Uncertainty. And "risk buy down" of the reducible (epistemic) uncertainty in the work. Without these two activities, the project is over budget and behind schedule on day one. That condition is the basis of all credible estimates, and is rarely if ever considered outside the defense domain for software development. Determining the margin and risk buy down needed is described here.
Commented 7 days ago on How to "Lie" with Statistics at Herding Cats
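The schedule-margin idea in the comment above - protect against the irreducible (aleatory) variance so the project is not behind schedule on day one - can be sketched with a small Monte Carlo simulation. This is a minimal illustration, not the author's method: the task durations, the triangular distribution, and the P80 confidence target are all hypothetical assumptions.

```python
import random

def schedule_margin(task_most_likely, spread=0.25, trials=10_000, seed=1):
    """Estimate schedule margin as the gap between the 80th-percentile
    completion time and the deterministic (sum of most-likely) plan.
    Durations are drawn from a triangular distribution with a longer
    right tail, a simple stand-in for irreducible (aleatory) variance."""
    rng = random.Random(seed)
    plan = sum(task_most_likely)
    totals = []
    for _ in range(trials):
        total = 0.0
        for m in task_most_likely:
            # mode at the most-likely value, skewed toward overrun
            total += rng.triangular(m * (1 - spread), m * (1 + 2 * spread), m)
        totals.append(total)
    totals.sort()
    p80 = totals[int(0.80 * trials)]
    return plan, p80, p80 - plan

plan, p80, margin = schedule_margin([10, 15, 20, 5])  # days, hypothetical
print(f"plan={plan:.1f}d  P80={p80:.1f}d  margin={margin:.1f}d")
```

Because the distribution is skewed toward overrun, the P80 completion sits above the deterministic plan; the difference is the schedule margin a credible baseline would carry from day one.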
Did you read the follow-on post? Read it before flaming further.
Commented 7 days ago on How to "Lie" with Statistics at Herding Cats
In the world of project management and the process improvement efforts needed to increase the Probability of Project Success anecdotes appear to prevail when it comes to suggesting alternatives to observed dysfunction. If we were to pile all the statistics... Continue reading
Posted 7 days ago at Herding Cats
It is conjectured that uncertainty can be dealt with by ordinary means - open conversation, identification of the uncertainties, and strategies for handling them - and that quantitative methods are too elaborate and unnecessary for all but the most technical and complicated problems. When... Continue reading
Posted 7 days ago at Herding Cats
When we hear about software development in the absence of a domain, it's difficult to have a discussion about the appropriate principles, processes, and practices of that work. Here's one paradigm that has served us well. In the Software Intensive... Continue reading
Posted Aug 20, 2015 at Herding Cats
Steve McConnell's recent post on estimating prompted me to make one more post on this topic. First some background on my domain and point of view. I work in what is referred to as Software Intensive Systems (SIS) involving Introduction,... Continue reading
Posted Aug 19, 2015 at Herding Cats
The door of a bigoted mind opens outwards. The pressure of facts merely closes it more snugly. - Ogden Nash When there are new ideas being conjectured, it is best for the conversation to establish the principles on which those... Continue reading
Posted Aug 19, 2015 at Herding Cats
Todd Little posted a comment on "How To Lie With Statistics," about his observations on the chart contained in that original post. As Todd mentions in his response The Cone of Uncertainty chart comes from the original work of Barry... Continue reading
Posted Aug 17, 2015 at Herding Cats
Todd, Thank you for your comment. Let me start with... Statistically, the "Ideal" line in your graph is a post-hoc metric, since the "ideal" - the project performs as planned - is not statistically possible in non-stationary stochastic processes such as SW Development work. That estimate evolves as the project evolves in all project conditions, since the network of work activities is itself a coupled stochastic process. The single line must have time-evolving variance bands to properly represent the "current forecast" of the Estimate to Complete and Estimate at Complete. Point estimates made at the beginning of the project will rarely if ever be correct. There is guidance for this approach in research material, journal articles, and conference proceedings. That graph was used standalone from a post by an advocate of #NoEstimates. I used that standalone graph as the basis for this June post. I have since found the entire presentation, which the #NoEstimates advocates ignore. That original post (with the single graph) is corrected here. I will update the June post to reflect the new information from your briefing. The IEEE paper still mentions "self selected" projects (one of the "Lies" in Huff's book), much the same way the Standish Report and many of our DOD, DOE, and NASA databases do. As a researcher in Root Cause Analysis of Over Target Baseline (OTB) and Nunn-McCurdy breaches for ACAT1 programs, we face similar "self selected" statistical samples, as does my colleague at NASA. The difficulty is the creation of a sufficiently robust sample space for "all" programs in a sample class, since only OTB and NM programs are reported to GAO as candidates for analysis. I will suggest, though, from our RCA methods, that the unfavorable estimate variances are a symptom, NOT the cause, of the overages. The report in IEEE is missing this analysis, while the full presentation contains the basis for the RCA.
Here are a few examples of the RCA work in our domain from one support contract I work on. Your full briefing provides "10 reasons," but the IEEE article does not. The IEEE article's graph is used by the #NoEstimates advocates to conjecture that estimates cannot be done and should not be done on software projects - that they are a waste. Much in the same way it is conjectured Steve McConnell's book on estimating "proves" estimates can't be done, by selectively quoting the contents of the 1st chapter. Further, these selected "clips" are used in the same way the #NoEstimates advocates use large DOD ERP overages as examples - both without any RCA or proposed corrective actions. I will update this June 21st post to point to the later post from your full-length briefing. But as we have learned from our research, decisions made for corrective actions or policies based on observational data of overages, performance shortfalls, and schedule delays, without explicit root causes and possible corrective actions, create the illusion of understanding where there is no actual understanding with which to make policy changes. Our current RCA process is "Reality Charting." More work remains, using databases such as yours - adjusted for the biases of self-selected samples rather than the entire sample space of all projects, corrected for project size and other attributes - to reveal root causes, so policies and actionable changes can be made to "increase the probability of program success." There is an RFP on the street for Space and Missile Command (LA AFB) to do just such work. This is a continuing problem that requires increasing the statistical integrity of the sample databases. The lack of integrity can be found in DOD, DOE, and NASA databases as well.
Commented Aug 17, 2015 on How to "Lie" with Statistics at Herding Cats
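The point in the comment above - a single Estimate at Complete line needs time-evolving variance bands, widest at day one and converging as work completes - can be sketched numerically. This is an illustrative sketch only: the EAC = BAC / CPI formula is standard earned-value practice, but the band-width model (half-width shrinking with the square root of remaining work) and all the numbers are assumptions for demonstration.

```python
import math

def eac_band(bac, pct_complete, cpi, initial_rel_error=0.5):
    """Return (low, point, high) for the Estimate at Complete.
    EAC = BAC / CPI (standard earned-value formula). The band's
    half-width shrinks with the fraction of work remaining, so the
    day-one point estimate carries the widest variance."""
    eac = bac / cpi
    remaining = 1.0 - pct_complete
    half_width = eac * initial_rel_error * math.sqrt(remaining)
    return eac - half_width, eac, eac + half_width

for pct in (0.0, 0.25, 0.5, 0.75, 1.0):
    lo, eac, hi = eac_band(bac=1_000_000, pct_complete=pct, cpi=0.9)
    print(f"{pct:4.0%} complete: EAC {eac:,.0f}  band [{lo:,.0f}, {hi:,.0f}]")
```

Run as written, the band is widest at 0% complete and collapses to the point estimate at 100%, which is the "cone" behavior a single forecast line hides.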
This is my last post on the topic of #NoEstimates. Let's start with my professional observation. All are welcome to provide counter examples. Estimates have little value to those spending the money. Estimates are of critical value to those providing... Continue reading
Posted Aug 16, 2015 at Herding Cats
Decisions are about making Trade Offs for the project that are themselves about: Evaluating alternatives. Integrating and balancing all the considerations (cost, performance, Producibility, testability, supportability, etc.). Developing and refining the requirements, concepts, capabilities of the product or services produced... Continue reading
Posted Aug 15, 2015 at Herding Cats
At a recent conference the discussion of the integration of Agile with Earned Value Management on programs subject to FAR 34.201 and DFARS 252.234-7001 was the topic. Here's my presentation. Turns out it is a match made in heaven. Since... Continue reading
Posted Aug 14, 2015 at Herding Cats
How To Lie With Statistics is a critically important book to have on your desk if you're involved in any decision making. My edition is a First Edition, but I don't have the dust jacket, so not worth that much beyond... Continue reading
Posted Aug 14, 2015 at Herding Cats
In a recent post on forecasting capacity planning, a time series of data was used as the basis of the discussion. Some static statistics were then presented, with a discussion of the upper and lower ranges of the past data.... Continue reading
Posted Aug 13, 2015 at Herding Cats
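The problem with static statistics on a time series can be shown in a few lines. This sketch uses hypothetical throughput numbers (not data from the post in question): when the series has a trend, a band built from the past min/max misses where the next observation is actually headed.

```python
def past_range_forecast(series):
    """Naive forecast band: assume the next point lies within the
    static [min, max] of the past data."""
    return min(series), max(series)

# A trending (non-stationary) throughput series, hypothetical numbers:
series = [8, 9, 11, 12, 14, 15, 17, 18, 20, 21]
lo, hi = past_range_forecast(series)

# A simple linear-trend extrapolation for the next period:
n = len(series)
slope = (series[-1] - series[0]) / (n - 1)
next_point = series[-1] + slope

print(f"static band [{lo}, {hi}], trend forecast {next_point:.1f}")
```

With these numbers the trend forecast (about 22.4) falls above the static upper range (21): static statistics summarize the past but say nothing reliable about a non-stationary future.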
The management of projects involves many things. Capabilities, Requirements, Development, Staffing, Budgeting, Procurement, Accounting, Testing, Security, Deployment, Maintenance, Training, Support, Sales and Marketing, and other development and operational processes. Each of these has interdependencies with other elements. Each operates in... Continue reading
Posted Aug 11, 2015 at Herding Cats
Let's start with a background piece on estimating: the Fermi Problem. A Fermi estimate is an order estimate of something - not an order of magnitude estimate (that's a 10X estimate, easy for anyone to make). These types of problems are encountered... Continue reading
Posted Aug 9, 2015 at Herding Cats
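A Fermi estimate works by decomposing an unknown quantity into factors you can bound roughly. The classic example is Fermi's "piano tuners in Chicago" question; the sketch below uses conventional round-number assumptions for every input (they are not data), yet the multiplication still lands in a defensible range.

```python
def fermi_piano_tuners():
    """Classic Fermi estimate: how many piano tuners in Chicago?
    Every input is a rough, bounded assumption, not a measurement."""
    population = 3_000_000                    # metro population, rough
    people_per_household = 2.5
    pianos_per_household = 1 / 20             # 1 household in 20 owns a piano
    tunings_per_piano_per_year = 1
    tunings_per_tuner_per_year = 4 * 5 * 50   # 4/day, 5 days/week, 50 weeks

    households = population / people_per_household
    pianos = households * pianos_per_household
    demand = pianos * tunings_per_piano_per_year   # tunings needed per year
    return demand / tunings_per_tuner_per_year     # tuners to meet demand

print(round(fermi_piano_tuners()))  # prints 60
```

The point is not the exact answer but that each factor is individually easy to bound, and errors in the factors tend to partially cancel - which is what makes such estimates usable for decisions.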
It is popular in some agile circles to use Waterfall as the stalking horse for every bad management practice in software development. A recent example is Go/No Go decisions are a residue of waterfall thinking. All software can be built incrementally... Continue reading
Posted Aug 8, 2015 at Herding Cats
All the work we do in the projects domain is driven by uncertainty. Uncertainty of some probabilistic future event impacting our project. Uncertainty in the work activities performed while developing a product or service. Decision making in the presence of... Continue reading
Posted Aug 4, 2015 at Herding Cats
The architecture of COTS products comes fixed from the vendor. As standalone systems this is not a problem. When integration starts, it is a problem. Here's a white paper from the past that addresses this critical enterprise IT issue Inversion... Continue reading
Posted Jul 30, 2015 at Herding Cats
I found another paper, presented in a newspaper systems journal, around architecture in manufacturing and ERP. One of the 12 Principles of Agile says The best architectures, requirements, and designs emerge from self-organizing teams. This is a developer's point of view... Continue reading
Posted Jul 29, 2015 at Herding Cats
I was sorting through a desk drawer and came across a collection of papers from book chapters and journals done in the early 2000s, when I was the architect of an early newspaper editorial system. Here's one on Risk Management... Continue reading
Posted Jul 28, 2015 at Herding Cats
I hear all the time that estimating is the same as guessing. This is not true mathematically, nor is it true business-process wise. This is an approach used by many (guessing), not understanding that making decisions in the presence of... Continue reading
Posted Jul 27, 2015 at Herding Cats