- Zsolt Berend
Improving through learning
Building a product like a start-up in a large organization. Sounds like an oxymoron? It may, but this is our story …
It all started with a coffee with Richard James, head of ways of working at Nationwide. I pitched the idea of a ways of working data and insights product that all colleagues, at all levels, could access on a self-serve basis.
Decision #1: Off-the-shelf or in-house?
Should we buy an off-the-shelf dashboard product, or should we build it in-house from scratch? There are pros and cons either way, but for us the opportunity of going through the learning journey and taking teams and colleagues with us outweighed any potential cons.
We chose the in-house option and formed a small team, initially three members, to start experimenting.
. . .
Enabler #1: Autonomy
‘Accountable Freedom’ is one of our core principles at Nationwide, as Tony Caink mentioned in his blog: Trust Part 1: Governance and Bureaucracy
For us, leadership support enabled true, practical autonomy. We are empowered to self-organize and to decide what team-level processes we follow and how we go about achieving outcomes.
Enabler #2: Connected datasets
It is rarely a single team that covers the flow of work end to end, from idea to production and into the hands of the customers.
There are hand-offs within teams, in the form of role-based silos (e.g. analysis, design, code, test). There are also hand-offs between individual teams, or even between departments (example below).
Figure 1: Time to Customer Value
Individual teams can work on improving their cycle times to get work done, but this often remains a local optimization with little or no impact on time to customer value. The chart below shows an example of the gap between team-level cycle time (number of days from in progress to done) and lead time (number of days from in progress to a release in production).
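The gap between the two measures can be sketched in a few lines. The work items, dates, and field names below are invented for illustration; they simply show how the same start date yields a short cycle time (to done) but a much longer lead time (to a production release).

```python
from datetime import date

# Hypothetical work items: when work started, when the team marked it done,
# and when the change actually reached production. These field names and
# dates are illustrative, not a real schema.
items = [
    {"started": date(2022, 5, 2),  "done": date(2022, 5, 6),  "released": date(2022, 5, 27)},
    {"started": date(2022, 5, 9),  "done": date(2022, 5, 11), "released": date(2022, 6, 1)},
    {"started": date(2022, 5, 16), "done": date(2022, 5, 20), "released": date(2022, 6, 10)},
]

# Team-level view: in progress -> done
cycle_times = [(i["done"] - i["started"]).days for i in items]
# Customer view: in progress -> release in production
lead_times = [(i["released"] - i["started"]).days for i in items]

avg_cycle = sum(cycle_times) / len(cycle_times)
avg_lead = sum(lead_times) / len(lead_times)
print(f"avg cycle time: {avg_cycle:.1f} days, avg lead time: {avg_lead:.1f} days")
```

Even with cycle times of only a few days, the lead time to production can be several weeks, which is exactly the gap Figure 2 illustrates.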
Figure 2: Cycle and lead time
Making the flow of work visible across the horizontal, from idea to production and with all teams involved, helps identify hand-offs and potential impediments, and unlocks measurability.
The chart below shows an example of the average time spent across the various work states and the hand-offs, queues, and wait states on the path to production. The high ‘in design’ bar indicates a big up-front design process, and the various higher wait times, such as blocked, on hold, and ready for production, imply impediments to flow. Context matters: we can have theories about what is causing the shape of the chart, but only the teams doing the work will have the answers about the underlying factors. All these charts have drill-down capabilities for further analysis of the various layers of whys.
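A value stream map of this kind boils down to averaging the time spent in each state across work items. A minimal sketch, with invented item IDs, state names, and durations (in practice these would come from ticket changelogs):

```python
from collections import defaultdict

# Hypothetical (item, state, days-in-state) records extracted from
# status-transition history. All values here are illustrative.
transitions = [
    ("A-1", "in design", 12), ("A-1", "in progress", 4),
    ("A-1", "blocked", 3),    ("A-1", "ready for production", 6),
    ("A-2", "in design", 15), ("A-2", "in progress", 5),
    ("A-2", "ready for production", 8),
]

totals = defaultdict(int)
counts = defaultdict(int)
for _, state, days in transitions:
    totals[state] += days
    counts[state] += 1

# Average days per state -- the bars of the value stream map
avg_by_state = {s: totals[s] / counts[s] for s in totals}
for state, avg in avg_by_state.items():
    print(f"{state:>22}: {avg:.1f} days")
```

A tall ‘in design’ bar or a long ‘blocked’ wait then becomes a prompt for a conversation with the teams, not a conclusion in itself.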
Figure 3: Value stream map - work and wait time
These can be awesome potential insights, but only if we unlock them, and only if we have high visibility of the flow of work end to end. We measure this as the percentage of work items that are linked to production. The linkage goes between Jira releases/fix versions and changes in ServiceNow.
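The linkage measure itself is a simple join. The sketch below assumes two invented datasets: Jira items carrying a fix version, and a set of fix versions that have a matching change record in ServiceNow; none of the keys or version names are real.

```python
# Hypothetical Jira work items with their release/fix version (None means
# no release was recorded). Keys and versions are invented.
jira_items = [
    {"key": "PAY-101", "fix_version": "payments-2022.5.1"},
    {"key": "PAY-102", "fix_version": "payments-2022.5.1"},
    {"key": "PAY-103", "fix_version": None},
    {"key": "PAY-104", "fix_version": "payments-2022.6.0"},
]

# Fix versions that have a matching change record in ServiceNow
servicenow_changes = {"payments-2022.5.1"}

linked = [i for i in jira_items if i["fix_version"] in servicenow_changes]
pct_linked = 100 * len(linked) / len(jira_items)
print(f"{pct_linked:.0f}% of work items linked to production")
```

Items with no fix version, or a fix version that never shows up as a ServiceNow change, are exactly the blind spots this measure surfaces.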
Enabler #3: Organic growth: invite over inflict
Jonathan Smart described the ‘invite over inflict’ pattern in Sooner Safer Happier and pointed out that “it is critical to communicate, communicate, communicate”.
This is what we did. We started with only a few colleagues as customers, early adopters, and established short learning loops. They started using our data and provided feedback on how they use it, why it is useful for them, and what else they would like to measure. From the very beginning we introduced a regular fortnightly show & tell series where we invite colleagues to present how they use the various pinboards and charts, and to talk about the whys: what insights they get and how they use them for decisions and improvements. Sharing these stories has been a very important enabler in going beyond the early adopters and innovators to attract the wider population within the Society.
Besides ‘show & tells’ we also host regular drop-in sessions.
We focus on meeting people where they are. Instead of inflicting industry golden measures as ‘silver bullets’, we believe it is more valuable to go through the learning journey and self-explore the measures that matter in each team’s context. We also provide training, such as flow101 on demand, as well as coaching.
The chart below shows our growth trend from inception; we now have close to 1,300 colleagues using the product.
Figure 4: Data usage and adoption over time
Enabler #4: Self-serve
As a team we strive to create a learning culture. We are removing the key-person dependency with the self-service functionality: colleagues are in control of their data and of how they create and present the insights linked to key discussion points. Teams are self-sustaining, creating and changing their own dashboards, and re-using and sharing each other’s charts. We chose ThoughtSpot as the visualization layer, which helped limit the need for backend changes and unnecessary change lifecycles, and opened up the ability to self-explore data and insights.
“This tool has allowed me to have quick and easy information to allow meaningful conversations around the work, be it flow, dependencies or linkage up to OKRs. I have been able to easily put boards together …” Liz Cawley, Scrum Master, Shared Technology Platforms
Enabler #5: Value stream level insights
Each month we provide value stream level insights for senior leadership. This consists of 5 key metrics, their trends, and insights (picture below).
The 5 metrics, how we measure them, and why they matter:
Golden Thread is described by Tony Caink in his awesome blog: Trust Part 2: Capacity Based Funding Experiment. This measures alignment: the % of stories (execution) that are linked to strategic outcomes. The number of backlog items that are transparently linked all the way up to portfolio Outcomes indicates that teams and product owners are aligning their work to Society strategy. It is thus a leading indicator of the value of the strategic outcomes.
Release frequency: one of the DORA metrics is deployment frequency, which measures how often code is deployed into production. Our measure is slightly different: we measure the % of applications that have a release at least once in every 30 days. The added insight here is the regularity, the consistent pace of changes. Incremental delivery of value to customers depends on the ability to release software regularly. Without this ability, delivery is likely to remain high-risk, traditional releases.
Visibility top-down: this is the % of in-flight portfolio-level Outcomes that have at least one story (execution level) linked. This tells us what proportion of outcomes have started. Only once this linkage exists can we start measuring flow at all.
Visibility end to end: this is the % of done stories that are linked to a release in production. This enables measuring flow across the work done by a team of teams all the way to production, as described above in the connected datasets section.
Lead time: this is another DORA metric, with a slightly different starting point. We measure lead time from when work is started by any individual or team all the way to production, when the change is implemented. This helps to unpack local context, local work cycle times, and overall time to customer value, and it shines a light on hand-off times between teams, release approval process times, dependencies, and impediments to the flow of value. We make the starting point agnostic to local workflows by mapping all individual workflow steps to todo, in progress, done, or cancelled in the backend. This helps us have conversations about the genuine underlying causes of trends, instead of blaming data quality and unreliability on the “teams not moving tickets” argument.
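The workflow normalization behind the lead time measure can be sketched as a simple lookup from local status names to the four canonical states. The local status names below are invented examples of what different teams might call their columns; they are not real team workflows.

```python
# Map team-specific workflow steps to the four canonical backend states.
# The local status names here are hypothetical examples only.
CANONICAL = {
    "backlog": "todo", "refined": "todo",
    "in analysis": "in progress", "in design": "in progress",
    "in dev": "in progress", "in test": "in progress",
    "blocked": "in progress",
    "done": "done", "released": "done",
    "won't do": "cancelled",
}

def normalize(local_status: str) -> str:
    """Map a local status to todo / in progress / done / cancelled."""
    return CANONICAL[local_status.lower()]

# A design step still counts as work started, so the lead time clock
# begins the same way regardless of a team's local column names.
print(normalize("In Design"))
```

With every team's statuses collapsed into the same four states, the lead time clock starts at the first ‘in progress’ state for every team, and comparisons across teams stop depending on local naming conventions.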
Figure 5: Ways of working metrics - May 2022
This data and insights feedback loop drives positive behavioural changes: increased visibility of work and unlocked measurability of flow and value.
We are proud to share that our team won this year’s Learning at Work Week award in the category of Improving through Learning.
Kudos to the team: Marc Price, Mark Beggs, Nathan Meek, Ryan Maloney, Andy Camp, Gareth Thomas, Andrew Di-Lella, Tom Mitchell
by Zsolt Berend, 11/10/22