Benefits of Just-in-Time Analysis
- Danielle Downs
- Nov 10, 2023
- 5 min read
What is Just-in-Time Analysis?
In traditional waterfall project methodologies, a lot of time is spent on up-front analysis and detailed technical and functional specifications. This enables the team to give precise (but not necessarily accurate) estimates in advance, before development starts.
The reason the estimates are not necessarily accurate is that complexities often emerge during development that weren’t anticipated, even in the detailed analysis phase. You therefore end up taking a “hit” twice: once to do the time-consuming analysis and estimation, and again when unexpected complexities or undiscovered requirements arise during development. You invest a lot of time and effort up-front, but there is still a risk the project will overrun its original estimates.
Instead of taking this approach, Agile has a concept of “Just-in-time Analysis”, or backlog refinement, where development teams discuss and estimate the work required to deliver each increment of functionality (user story), just before development starts.
What are the Benefits of Just-in-Time Analysis?
- The team has continually increasing knowledge of the project, the technology and the work already completed, so the estimate of each new feature is likely to be more accurate than the last, and certainly better than it would have been at the start.
- Time is less likely to be wasted discussing and estimating requirements that may change prior to delivery.
- The analysis is fresh in the minds of the development team before they start work.
- It is more likely that the people estimating the work will be those working on the functionality, which may not be true if there is a significant delay between analysis and delivery, during which time personnel on the team or project may change.
Reducing the Cost of Change
One of the primary advantages of Agile working practices and just-in-time analysis is the flexibility to change course completely at almost any time. If you discover that a feature on the roadmap is no longer needed by the market, or a piece of regulatory legislation comes in and changes have to be prioritised ahead of other planned features, the cost of making that change is relatively low under an Agile methodology compared to waterfall.
Some time will be needed to work up the backlog for the next feature so the team can refine it, but months of effort won’t be wasted, and development won’t be held up for long while the new feature is being specified. Instead, the team may be able to get involved and help the Product Owner elaborate and refine the stories as they are being written.

Reducing the Cone of Uncertainty
Agile development teams typically refine user stories up to 3 sprints (~6 weeks) before development is due to start. This can help uncover dependencies and potential areas of complexity early, so they don’t block development, as well as identify additional scenarios (undiscovered requirements), for which acceptance criteria need to be written. This can be very useful in reducing the cone of uncertainty and increasing the likely accuracy of the estimates, but it doesn’t make them bulletproof, as things will still be uncovered once development gets underway.
Generally speaking, the closer refinement is to the start of development, the more accurate the estimates, because the team will have the benefit of the greatest possible knowledge of the feature, as well as learnings from those that have gone before it, when arriving at their estimate.
This has the disadvantage of not being able to provide estimates early on in a project, but the value of very precise yet potentially very inaccurate estimates is highly questionable; they can create a false sense of security, leading to the disappointing duo of over-promising and under-delivering against the originally estimated timeline.
The Law of Diminishing Returns
In a bid to minimise uncertainty still further, some teams undertake investigative tasks (“spikes”) in earlier sprints, but it is all too easy to forget that these are not “free”. They detract from the team’s development capacity in the sprint in which they are carried out and do not always save time overall. When you factor in the context switching between work currently underway and thinking about upcoming work, together with the fact that you often don’t know “what you don’t know” until you actually start looking under the hood, spikes can turn out to be a false economy.
Performing very detailed analysis or investigation too early can result in a “mini-waterfall” effect, where the team (or the one team member carrying out the “spike”) thinks they know everything they need to start development, but greater complexity is still discovered once the whole team starts work. Once again, you end up taking a “double hit”: first in the previous sprint, where one person was distracted from working towards the sprint goal, then again during development. At the other extreme, the developer working on the “spike” may find that, in order to achieve the desired outcome, they end up doing the bulk of the development work on the feature! This may be seen as positive, but not if it is to the detriment of the work taking place in the current sprint; you just end up shifting the effort from one sprint to another.
Experts will often recommend “timeboxing” the investigation and limiting the scope of the spike, with clear acceptance criteria for what needs to be discovered, stopping short of doing any actual development. This may mitigate the impact on work already underway, but experience has shown that it risks arbitrarily cutting short potentially valuable investigation: only a limited number of solution options may be explored, sometimes at the expense of what could be the best one. Worst of all, I have found that it doesn’t necessarily save any time, because theoretical investigation is no substitute for hands-on trial and error, which will inevitably need to take place anyway.
In reality, I have found that it usually pays to be brave and let the team discover the complexity as they go. Refinement sessions are great for identifying what the unknowns might be and agreeing the architectural approach, but the human mind is such that focus is sharpest when operating on the task at hand, not discussing it in abstract terms.
Conclusion
Just-in-time refinement is valuable in familiarising the team with upcoming work, enabling them to identify dependencies that might prevent it from being taken into a sprint and to give their best estimate of the likely effort involved.
While this method has the disadvantage of not being able to estimate effort for an entire project or feature early on, the closer refinement takes place to development starting, the more informed and therefore accurate the estimates will be.
The perceived need to estimate a whole project or piece of work up-front has to be balanced against the usefulness of the estimates if they are likely to be inaccurate.
Just-in-time estimates are usually sufficiently accurate to enable the Product Owner to prioritise the work based on its potential ROI and gain a broad understanding of the likely projected timeframe, unexpected complexities notwithstanding. This level of accuracy is typically the best achievable without more detailed investigations, which consume developers’ time and are unlikely to significantly reduce the overall time required to complete the work.
A distinction should be made between refinement and more detailed analysis and investigation. Unless the investigation is needed before development can physically start (e.g. a component has to be selected before it can be built), it may be a false economy to attempt to perform or timebox the investigation in a previous sprint, as it will distract the team from that sprint’s goal and may fail to highlight complexities that will only be discovered once work is underway.