ROTI – Food for Thought

Reading is such a wonderful thing: the wider the variety you read, the wilder your imagination grows. Last weekend I had an epiphany: R.O.T.I – Return On (TIME) Invested.

Wow, what if that could be a real, measurable metric? A quick Google search, and I realized the acronym is already taken 😦  It is in use to evaluate the effectiveness of agile meetings. So my momentary dream of registering the R.O.T.I acronym and holding the copyright died in its own womb.

Never mind. Continuing the quest: if we could realize the importance of quality time, that would act as a natural fuel or catalyst for building a culture of quality, and it would trigger a virtuous cycle of improvement.

Unlike money, time is a perishable asset with a constant unit rate of depreciation (i.e. if we start with 24 hours, then after 6 hours we have only 18 hours left – a depreciation of one unit per hour).

So, after some more deliberation, here is the first preliminary draft of the equation:

R.O.T.I = (Utility value of an activity over the applicable period of time) / (Time invested in the activity + Additional time re-invested in the same activity)
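As a minimal sketch of the equation, with made-up numbers (the "utility units" and hours below are purely illustrative assumptions):

```python
def roti(utility_value, time_invested, time_reinvested=0.0):
    """Return On Time Invested: utility gained per hour spent.

    utility_value: perceived benefit of the activity over the
                   applicable period (in arbitrary utility units).
    time_invested: hours initially spent on the activity.
    time_reinvested: additional hours re-invested in the same activity.
    """
    total_time = time_invested + time_reinvested
    if total_time <= 0:
        raise ValueError("total time must be positive")
    return utility_value / total_time

# Hypothetical example: a 3-hour workshop plus 1 hour of follow-up
# practice, yielding 10 "utility units" of lasting benefit.
print(roti(10, 3, 1))  # 2.5 utility units per hour
```

The hard part, of course, is not the division but agreeing on how to quantify the utility value in the numerator.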

Leading Metrics – Underutilized Opportunity

I have time and again come across senior leaders who focus only on the end result. They question the team and look for root causes well after the path taken has had its undesired impact.
Examples of schedule and cost overruns or product quality issues are many.

Having a robust system to track ongoing performance, with leadership oversight of the path being traversed, would have averted such undesired eventualities.

Most processes can be broken down into their constituent sub-processes, and each sub-process may further be broken down into a series of activities. Sub-processes take inputs and consume resources to produce the desired output. Having indicators at the appropriate level, to determine the quality and effectiveness of the inputs, processes, and outputs, will help identify deviations early and at the right level of granularity.

Having systems in place, with leadership oversight, to recognize such deviations in sub-processes and take corrective actions at the respective levels can help arrest the magnification of problems.

By identifying issues at such granular levels, the corresponding corrective actions are also small and easy to implement. Teams have the required motivation to make corrections, results are immediately visible, and the right behaviors are thus reinforced. By delaying or not making the fix at the right granular level, we let small issues magnify into problems of significance, which further dampens the team's motivation to address them.

Thus, good leading metrics or indicators of progress on the path taken go a long way in enabling desired outcomes.
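As a minimal sketch of such a system (the sub-process indicator names and target thresholds below are assumptions, purely for illustration), a leading-indicator check might look like this:

```python
# Hypothetical sub-process indicators with assumed target ranges.
indicators = {
    "requirements_review_coverage": {"value": 0.72, "min": 0.90, "max": 1.00},
    "unit_test_pass_rate":          {"value": 0.97, "min": 0.95, "max": 1.00},
    "defect_fix_turnaround_days":   {"value": 6.0,  "min": 0.0,  "max": 3.0},
}

def deviations(indicators):
    """Flag indicators outside their target range, so corrective
    action can be taken early, at the sub-process level."""
    return [name for name, ind in indicators.items()
            if not ind["min"] <= ind["value"] <= ind["max"]]

for name in deviations(indicators):
    print(f"Deviation detected: {name}")
```

The point is not the few lines of code but the discipline: each deviation is surfaced while it is still small and local, before it aggregates into a schedule, cost, or quality problem.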

A Very Short Note on Measurement Systems

A technical note from the SEI, “Experiences in Implementing Measurement Programs,” includes the picture below, which illustrates the different types of performance indicators.

[Figure: GIM-Relation]

For anyone wanting to set up a measurement system, this is a good reference to start from.
In reality, though, you may have many more intermediate layers between the goals and the tasks. Depending on the breadth and depth of the org structure, each of the clusters or groups may have its own goals and objectives, derived from the top or based on the internal problems and market opportunities it addresses.

When designing the reporting mechanisms, we need to ensure that the aggregations are meaningful and actionable. It should be possible to identify the specific causes of variation and take corrective and preventive actions at the right place in the hierarchy.
Wherever possible, an indicator that provides leading information should be preferred over a lagging one.

Erroneous Aggregations

Recently I had a close encounter with an aggregation of Cost of Poor Quality (COPQ) that was simply incorrect.
The case discussed here is a scenario where COPQ is simply rework effort / total effort. COPQ at the project, portfolio, line-of-business, and business-unit levels was arrived at as the sum of all rework (at that level) / total effort (at that level). All was well until it was time to draw inferences and make decisions.

Comparisons were made between groups, and teams with too-low or too-high values had to explain the reasons. Over time this led to the wrong behavior of managing the effort-data entries rather than looking for ways to minimize rework and poor-quality costs. Variations were mostly attributed to incorrect data collection (on rework) rather than to the real contributing factors of poor quality.

A more appropriate and meaningful representation is to calculate COPQ only at the project level. All aggregations should then be about the percentage of projects that meet or do not meet certain criteria. Hence the reporting of COPQ at different levels would simply state the percentage of projects with high, moderate, or low rework. The management systems should be tuned to monitor this and take corrective and preventive actions.
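A minimal sketch of this banded reporting (the project data and band thresholds below are assumptions, for illustration only):

```python
from collections import Counter

# Hypothetical project-level data: (rework hours, total effort hours).
projects = [(50, 1000), (300, 2000), (10, 500), (120, 800)]

def copq(rework, total):
    """Project-level COPQ: rework effort as a fraction of total effort."""
    return rework / total

def band(value, low=0.05, high=0.10):
    """Classify a project's COPQ into low / moderate / high rework
    (thresholds are assumed for illustration)."""
    if value < low:
        return "low"
    if value <= high:
        return "moderate"
    return "high"

# Aggregate as the share of projects per band, not as a pooled ratio.
bands = Counter(band(copq(r, t)) for r, t in projects)
for name in ("low", "moderate", "high"):
    share = 100 * bands[name] / len(projects)
    print(f"{name} rework: {share:.0f}% of projects")
```

Unlike the pooled ratio, this aggregation cannot be gamed by a few large projects dominating the denominator, and it points directly at which projects need attention.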

This is not the final solution; it is only a way forward. Since projects differ by complexity, team size, team competency, and other such factors, having a common goal may not suffice. E.g. a goal of 5% COPQ for both a low-complexity, small-team project and a high-complexity, big-team project is not fair.

We will need to work on an appropriate metric that can help reinforce the right behaviors and, at the same time, help identify the real contributing factors of poor-quality costs.

Irrelevance of Effort/KLOC in PPMs

It is high time we looked for alternatives to effort and lines of code when identifying the x's for process performance or prediction models (PPMs).

During the ’80s and early ’90s, structured/procedural programming was the de facto method. In that scenario, LOC was the primary measure of progress, and when we had to compare programmers on their efficiency we resorted to Effort/KLOC. Productivity thus meant delivering more lines of code with less effort.

Two decades have passed since then. New programming languages and paradigms have arrived; tools and techniques have evolved. The time to write an application for modern-day devices has drastically reduced, and newer life-cycle models (Agile, Scrum, ...) have helped us go from concept to cash in very short times. In spite of all these changes, we continue to rely on effort and lines of code.

Prediction models should be relevant and valuable. They need practical applicability, so that businesses can take preventive actions based on predicted performance.