The Science behind Agile thinking

The Agile Manifesto and its principles form the premise for most Agile thinking. Donald Reinertsen, in his book "The Principles of Product Development Flow", helps us understand why shifting to an Agile mindset makes economic sense.

Here are a few key points that interested me (refer to the book for more breadth and depth).

For product development flow to be effective, we need to understand the economics of queues, variation, work-in-process, batch size, fast feedback, and sequencing, and their interrelations.

Queues increase cycle time, variability, and the cost of the process. As capacity utilization increases, queues grow exponentially and processes become unstable. The longer we delay adding capacity or shedding demand, the more expensive the problem becomes.
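The relationship between utilization and queue growth can be illustrated with the standard M/M/1 queueing model (an assumption for illustration; Reinertsen's book covers the underlying queueing economics in depth):

```python
# Sketch of how queue length grows with capacity utilization, using the
# steady-state M/M/1 queueing formula: avg queue length = rho^2 / (1 - rho),
# where rho = utilization (arrival rate / service rate).

def mm1_queue_length(rho: float) -> float:
    """Average number of items waiting in queue for an M/M/1 system."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    return rho ** 2 / (1 - rho)

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%}: avg queue length {mm1_queue_length(rho):6.1f}")
```

Running this shows the queue staying small up to moderate utilization and exploding as utilization approaches 100%, which is why running everyone at full capacity is so expensive.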

Variation is better managed by addressing the other parameters: perform many small experiments rather than one big one, and reduce variation through repetition, reuse, and standardization.

By reducing project scope, we trigger a regenerative cycle of faster cycle time, lower risk, and shorter planning horizons. This reduces the need for oversight, and outcomes become more predictable because forecasting gets exponentially easier over short time horizons. Accept variability in content to prevent variability in timing.

Smaller batch sizes improve cycle time and reduce variability in flow.

As batch size grows, the consequences of failure increase exponentially, feedback is delayed, overheads increase, and rework becomes more expensive. Large batches dilute responsibility and are demotivating because feedback is delayed; they make schedule and cost grow exponentially. When we batch together a number of items, the entire batch acquires the properties of its most limiting element.

Small batches reduce risk: there are fewer changes in technology or requirements to address, and small changes are easier to debug than large ones. Fast feedback improves efficiency because engineers work best while something is fresh in their minds. Small batch size is a powerful tool to increase urgency, motivation, and accountability, and short run lengths reduce queues.
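The feedback-delay argument can be sketched numerically (throughput figure and batch sizes invented for illustration): if feedback arrives only when a whole batch completes, larger batches mean older work and more items at risk when something is wrong.

```python
# Hypothetical illustration of feedback delay vs. batch size.
# Assume a constant throughput and that feedback (tests, integration,
# review) arrives only once the whole batch completes.

THROUGHPUT = 5  # work items completed per day (assumed)

def feedback_delay_days(batch_size: int) -> float:
    """Time from starting a batch until the first feedback on it arrives."""
    return batch_size / THROUGHPUT

for batch in (5, 25, 100):
    delay = feedback_delay_days(batch)
    print(f"batch of {batch:3d} items -> first feedback after {delay:5.1f} days, "
          f"{batch} items exposed to rework if the batch fails")
```

At the same throughput, the batch of 100 waits 20 days for its first feedback and puts 100 items at risk, while the batch of 5 learns after a single day.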

Always maintain a regular cadence. A regular cadence of small batches reduces coordination overhead, and a regular cadence for product introduction prevents delays in one product from cascading to other products. To meet a regular product launch schedule, we need capacity margin to absorb variation and schedule delays at intermediate milestones.

To better understand queueing theory, variation, work-in-process, batch size, fast feedback, sequencing, and their impact on product development, I recommend Reinertsen's "The Principles of Product Development Flow: Second Generation Lean Product Development".

Planning for Quality

During the planning stage we plan for resources, we schedule, and we allocate time for defect management; we expect a certain amount of rework and account for it. We plan the scope, stagger the development activities, and plan for deployment. Planning for quality, however, is mostly relegated to allocating time and resources for product testing or verification and validation activities.

I would say we need to ask ourselves during planning: "What proactive steps can we take to improve quality and productivity, and to reduce defects and rework?" The answer can come from previously executed projects, experience, or other sources.

Execution excellence is intrinsically dependent on planning excellence.
With most failures attributed to deficiencies in planning, continually and proactively strengthening the planning activity will result in improved execution and quality outcomes.

Key Ingredients of a CMMI Process Model

Robert Stoddard has provided an excellent resource to help plan process performance models. Reproduced below is an extract from one of its key slides:

  • Statistical, probabilistic or simulation in nature
  • Predict interim and/or final project outcomes
  • Use controllable factors tied to sub-processes to conduct the prediction
  • Model the variation of factors and understand the predicted range or variation of the outcomes
  • Enable “what-if” analysis for project planning, dynamic re-planning and problem resolution during project execution
  • Connect “upstream” activity with “downstream” activity
  • Enable projects to achieve mid-course corrections to ensure project success
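A minimal Monte Carlo sketch of such a model follows; the factor names, distributions, and numbers are all invented for illustration and are not from the CMMI material itself. It shows a probabilistic prediction driven by controllable sub-process factors, with "what-if" comparison of two planning scenarios:

```python
# Illustrative process performance model: predict delivered defects from
# two controllable sub-process factors via Monte Carlo simulation.
# All factor names and distributions are assumptions, not CMMI content.
import random

def simulate_delivered_defects(review_coverage: float, test_effectiveness: float,
                               n_runs: int = 10_000, seed: int = 1) -> list:
    """Return a simulated range of delivered-defect outcomes."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_runs):
        injected = rng.gauss(200, 30)  # defects injected (assumed distribution)
        # Reviews remove a variable fraction of what they cover.
        after_review = injected * (1 - review_coverage * rng.uniform(0.5, 0.9))
        # Testing removes a variable fraction of what remains.
        delivered = after_review * (1 - test_effectiveness * rng.uniform(0.6, 0.95))
        outcomes.append(max(delivered, 0.0))
    return outcomes

# "What-if" analysis: does raising review coverage improve the predicted outcome?
baseline = simulate_delivered_defects(review_coverage=0.6, test_effectiveness=0.7)
improved = simulate_delivered_defects(review_coverage=0.8, test_effectiveness=0.7)
print(f"baseline mean delivered defects: {sum(baseline) / len(baseline):.1f}")
print(f"improved mean delivered defects: {sum(improved) / len(improved):.1f}")
```

Because the model outputs a distribution rather than a point estimate, it supports the bullets above: predicted ranges, what-if re-planning, and connecting upstream factors to downstream outcomes.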

Mitigation vs. Contingency

With regard to risk management, I recently encountered a query: what is a mitigation plan, and what is a contingency plan?

Mitigation actions are taken before the event occurs, to either reduce the impact or decrease the probability of occurrence. Contingency supports what needs to be done in case the event does occur. The recommended actions from an FMEA are usually mitigation actions; they help reduce the RPN (Risk Priority Number).
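In standard FMEA practice the RPN is the product of three ratings: RPN = Severity × Occurrence × Detection, each typically rated 1 (best) to 10 (worst). A small sketch with illustrative numbers:

```python
# RPN = Severity x Occurrence x Detection, each rated 1 (best) to 10 (worst).
# The example ratings below are invented for illustration.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number as used in FMEA."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return severity * occurrence * detection

before = rpn(severity=9, occurrence=6, detection=7)  # risk before mitigation
# Mitigation (e.g. a speed governor plus a warning beep) lowers occurrence
# and improves detection; the severity of the event itself is unchanged.
after = rpn(severity=9, occurrence=3, detection=3)
print(f"RPN before mitigation: {before}, after: {after}")
```

Note that mitigation here reduced occurrence and detection ratings but not severity, which matches the definition above: mitigation acts before the event, on its likelihood and detectability.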

As a practical example, consider driving a car. Having a speed governor, a speed indicator, and a warning beep, and following traffic rules, all help mitigate the possibility of an accident: these actions help detect problems early, reduce the probability of occurrence, or lower the impact severity. They are mitigation actions. Insuring the vehicle and taking personal accident insurance will not prevent an accident; they provide contingency funds to support recovery after the accident event. (Note: insurance is considered risk transfer.)
In any given scenario, both are important and required.

In project execution, too, we take actions to mitigate identified risks, and at the same time keep contingency buffers to absorb cost and schedule overruns.

Summary – High Velocity Edge

Steven J. Spear, in his book "The High-Velocity Edge", describes four capabilities that distinguish high-velocity companies from the rest, and illustrates these capabilities with examples of companies that have exemplified them.

The four key capabilities are:

CAPABILITY – I – System design and operations

Build a system of "dynamic discovery" designed to reveal operational problems and weaknesses as they arise. The systems and processes are designed to surface problems easily and early, so that problems become visible as and when they occur and have nowhere to hide.

CAPABILITY – II – Problem Solving and Improvement

Attack and solve problems as and when they occur, converting weaknesses into strengths. As problems become visible, they are addressed with a sense of urgency ("swarming and solving"). With changing market dynamics and continuous disruption, organizations with this capability will be stronger than counterparts that are slow to execute or adapt to change.

CAPABILITY – III – Knowledge sharing

Quickly disseminate knowledge gained through problem solving across the organization. This minimizes re-learning: different teams in the organization do not solve the same problem again; they use the know-how to accelerate towards solutions.

CAPABILITY IV – Develop High Velocity Skills in Others

Create managers invested in developing everyone's capacity to continuously innovate and improve. This is the key to survival: developing people's skills and competency in the first three capabilities is critical to sustained growth.

These are also the key tenets of Lean thinking: developing people's skills in problem solving and continuous improvement is critical for long-term sustenance and growth.

CMMI has outlived its purpose!

Let me explain. Anything new that gets created serves a certain purpose: to address a problem or exploit an opportunity that exists (excluding art, of course). Along similar lines, in the mid-80s the US DoD was facing a problem: their software projects were mostly delayed and had quality issues, compounded by the size of the projects and the many sub-contracted deliverables. They approached Carnegie Mellon and took help from academia to address this.

CMM was first envisioned to solve this. By assessing software vendors on their capabilities and the maturity of their practices, the US DoD was given assurance of a minimum threshold of performance and predictability.

It has been nearly three decades since. Technologies, tools, and programming paradigms have all evolved, and program management practices have improved significantly. Methodologies such as Agile, Extreme Programming, and test-driven development, supported by code analysis tools and continuous integration and deployment, are the strengths of today's most successful companies. Looking back at the original purpose and the current realities, the value provided by CMMI certification has significantly diluted.

Adaptability, flexibility, and fast innovation are the primary drivers in the current environment of constant flux. If CMMI is to retain its value proposition, it will need to re-purpose itself to assess and certify organizations as capable and mature enough to meet these new attributes. CMMI-certified organizations have to meet current needs and future directions.

A very short note on Measurement system

A technical note from the SEI, "Experiences in Implementing Measurement Programs", includes a picture that illustrates the different types of performance indicators.

[Figure: Goal–Indicator–Measurement relationship]

For anyone wanting to set up a measurement system, this is a good reference to start from.
In reality, though, you may have many more intermediate layers between the goals and the tasks. Depending on the breadth and depth of the organizational structure, each of the clusters or groups may have its own goals and objectives, derived from the top or based on the internal problems and market opportunities they address.

When designing the reporting mechanisms, we need to ensure that the aggregations are meaningful and actionable. It should be possible to identify the specific causes of variation and take corrective and preventive actions at the right place in the hierarchy.
Wherever possible, an indicator that provides leading information should be preferred over a lagging indicator.

Effective & Efficient processes – a balancing act?

A process is a set of interrelated and interacting activities that transform needs or problems into products and solutions.
A process capable of meeting or exceeding the desired outcome is considered effective; a process that consumes fewer resources than anticipated to meet the desired outcome is considered efficient.
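These definitions can be quantified as simple ratios (one common quantification, an assumption on my part; the definitions above do not prescribe formulas):

```python
# Illustrative ratios for the definitions of effectiveness and efficiency.
# These formulas are one common quantification, not a prescribed standard.

def effectiveness(actual_outcome: float, desired_outcome: float) -> float:
    """>= 1.0 means the process met or exceeded the desired outcome."""
    return actual_outcome / desired_outcome

def efficiency(planned_resources: float, actual_resources: float) -> float:
    """> 1.0 means the process consumed fewer resources than anticipated."""
    return planned_resources / actual_resources

# A process that delivered 45 of 50 desired features using 120 of a
# planned 100 person-days is neither fully effective nor efficient:
print(f"effectiveness: {effectiveness(45, 50):.2f}")
print(f"efficiency:    {efficiency(100, 120):.2f}")
```

In this hypothetical example the process is 90% effective and about 83% efficient, making the gap on each dimension explicit.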

[Figure: process representation]

If there is a gap between the desired outcome and the actual results, effectiveness reduces. Similarly, if the consumed resources exceed the plan, efficiency decreases. Below is a visual representation of these relationships.

[Figure: effectiveness vs. efficiency]

While the core intent is to get to the intersection of planned and desired outcomes, there has to be a conscious focus on improving the processes to have a positive impact on both efficiency and effectiveness.

Erroneous Aggregations

Recently I had a close encounter with an aggregation of Cost of Poor Quality (COPQ) that was simply incorrect.
In the case discussed here, COPQ was defined as rework effort divided by total effort. COPQ at the project, portfolio, line-of-business, and business-unit levels was arrived at as the sum of all rework (at that level) divided by the total effort (at that level). All was well until it was time to infer and make decisions.

Comparisons were made between groups, and teams with values that were too low or too high had to explain the reasons. Over time this led to the wrong behavior of managing the effort data entries rather than looking for ways to minimize rework and poor-quality costs. Variations were mostly attributed to incorrect data collection (on rework) rather than to the real contributing factors of poor quality.

A more appropriate and meaningful representation is to calculate COPQ only at the project level. All aggregations should then be about the percentage of projects that meet or do not meet certain criteria: reporting COPQ at different levels would simply state the percentage of projects with high, moderate, or low rework. The management systems should be tuned to monitor this and take corrective and preventive actions.
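The contrast between the two aggregations can be sketched as follows (project data and classification thresholds are invented for illustration):

```python
# Contrast: ratio-of-sums COPQ vs. per-project classification.
# Project data and thresholds are invented for illustration.
projects = [
    {"name": "A", "rework": 10, "total": 100},  # 10% COPQ
    {"name": "B", "rework": 5,  "total": 500},  #  1% COPQ
    {"name": "C", "rework": 90, "total": 300},  # 30% COPQ
]

# Misleading portfolio-level aggregation: large projects dominate the ratio,
# and the 30% outlier is hidden inside a single pooled number.
pooled = sum(p["rework"] for p in projects) / sum(p["total"] for p in projects)
print(f"pooled COPQ: {pooled:.1%}")

# More actionable: classify each project, then report the distribution.
def classify(copq: float) -> str:
    if copq < 0.05:
        return "low"
    if copq < 0.15:
        return "moderate"
    return "high"

counts = {"low": 0, "moderate": 0, "high": 0}
for p in projects:
    counts[classify(p["rework"] / p["total"])] += 1
print(f"projects with high rework: {counts['high']} of {len(projects)}")
```

The pooled ratio reports a single middling figure, while the classification immediately flags that one project of three has high rework and is where corrective action belongs.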

This is not the full solution, only a way forward. Since projects differ in complexity, team size, team competency, and other such factors, having a common goal may not suffice. For example, a goal of 5% COPQ for both a low-complexity, small-team project and a high-complexity, big-team project is not fair.

We will need to work on an appropriate metric that can help reinforce the right behaviors and, at the same time, help identify the real contributing factors of poor-quality costs.