Looking Beyond the Critical Path

Andrew White
The progress position of a project can be assessed using a variety of methods, each dependent on different combinations of data such as physical progress, hours worked, expended resources and costs, with different data providing different perspectives of progress.

Determining a progress position invariably requires assessing what has been achieved, to identify any slippage or gain in the period of analysis and, consequently, its anticipated effect on the planned completion.

Any method needs to be applied competently, consistently and at regular, predetermined intervals, based on a comparison against the original baseline and, where appropriate, further contemporaneous baseline(s). This allows meaningful trends to be identified, realistic forecasts of the effect of delay to be made and, accordingly, any corrective actions to be taken.

There is an understandable reliance on the critical path methodology (CPM) to assess the effects of progress given its dominance as a planning and programming tool. However, it may be appropriate to use other methods such as 'Earned Schedule Analysis' (ESA), to provide a further dimension to the assessment of the project progress position and its potential or forecast effect on the planned completion.

Stacking and forecasting: the realistic effect of progress

For example, assuming the baseline programme is realistic and practicable, a recalculation or 'rescheduling' of the programme based on progress at the assessment date, subject to any reasonable adjustments to logic, may demonstrate a change in the planned completion based solely on the critical path(s). However, this process may not provide a realistic demonstration of the potential effect of progress on non-critical activities: remaining planned activities, sometimes as a result of 'out of sequence working', slip to the later stages of the programme, causing a high level of concurrency between activity bars, or 'stacking'.


'Stacking' may become apparent through increases or 'spikes' in overall planned resources for the project, but not necessarily particular trades or disciplines. Over several programme updates, this may lead to an unrealistic concurrency of planned activities and potentially, an unrealistic remaining programme of work to completion, despite the demonstration of a critical path(s).
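The resource 'spikes' described above can be illustrated with a simple check (all activity dates and resource figures below are hypothetical, for illustration only): sum planned resource demand per period across concurrent activities, then compare successive programme updates for an increase in the peak.

```python
# Illustrative 'stacking' check (hypothetical data, not from the article):
# sum planned resource demand across concurrent activities in each period.

def resource_profile(activities, horizon):
    """activities: list of (start, finish, resources per period);
    periods run 0..horizon-1, with finish exclusive."""
    profile = [0] * horizon
    for start, finish, res in activities:
        for t in range(start, finish):
            profile[t] += res
    return profile

# Baseline programme: three activities with modest overlap.
baseline = [(0, 4, 5), (2, 6, 4), (6, 10, 6)]
# Updated programme: the second activity has slipped to later periods.
update = [(0, 4, 5), (5, 9, 4), (6, 10, 6)]

# The slipped activity now overlaps the third: the peak rises from 9 to 10.
print(resource_profile(baseline, 10))
print(resource_profile(update, 10))
```

Tracked over several updates, a rising overall peak of this kind may flag unrealistic concurrency before any of the stacked activities become critical.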

Programme activities driven by preferential ('soft') logic, or programmes with an undeveloped Work Breakdown Structure (WBS), may be more vulnerable to this potential under-assessment. For example, a programme developed on a 'rolling wave' basis will have a relatively undeveloped WBS in its later stages or 'low density' zone.

Before 'stacked' activities become critical, ESA may provide an early warning of potential problems and / or a more realistic range of planned completions, as demonstrated in Figure 1 below:

Figure 1

Figure 1 demonstrates slippage X-Y at the progress assessment date and potentially, a more realistic updated forecast of the completion date, when compared with an update derived solely from the critical path(s).
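The forecast of this kind rests on the standard earned schedule calculation: find the time at which the baseline S-curve planned to earn what has actually been earned, then project the remaining duration from the time-based performance index. A minimal sketch, using assumed planned-value and earned-value figures rather than any from the article:

```python
# Illustrative earned schedule calculation (all monetary and time figures
# are hypothetical). ES is the time at which the baseline planned value
# equals the earned value to date, found by linear interpolation between
# periods on the baseline S-curve.

def earned_schedule(pv_curve, ev):
    """pv_curve: cumulative planned value at the end of periods 1..n."""
    for t, pv in enumerate(pv_curve, start=1):
        if ev <= pv:
            prev = pv_curve[t - 2] if t > 1 else 0.0
            # Fraction of period t needed to plan-earn the remainder.
            return (t - 1) + (ev - prev) / (pv - prev)
    return float(len(pv_curve))  # EV meets or exceeds the whole baseline

# Baseline S-curve: cumulative planned value at the end of months 1..6.
pv = [100, 250, 450, 700, 900, 1000]
ev = 300  # earned value after the actual elapsed time
at = 4    # actual time, months

es = earned_schedule(pv, ev)   # 2.25 months of schedule earned
spi_t = es / at                # time-based schedule performance index
ieac_t = len(pv) / spi_t       # forecast duration if the trend holds

print(es, spi_t, ieac_t)
```

Here the programme has earned only 2.25 months of schedule in 4 elapsed months, so the 6-month baseline projects to roughly 10.7 months if performance continues at the same rate, independently of which activities sit on the critical path.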

Similarly, when undertaking an analysis of delay, selection of the appropriate methodology will depend on several factors, including the form of contract, the circumstances of the analysis or dispute, and the availability and extent of records. ESA may support a more detailed understanding of progress when analysing a particular discipline, trade or work element.

Potential problems and pitfalls

Critics of ESA often cite problems with changes in scope, and its potential variance with the critical path(s), as barriers to its use. However, this latter point can be its very strength: subject to its correct application, and used in conjunction with critical path analysis (CPA), it may provide a more realistic range of forecast completion(s).

In the case of scope change, as with any project management tool or technique, its selection and method of implementation require careful consideration based on the circumstances and recognised practice, to avoid unrealistic or erroneous conclusions.

Conclusions 

  • A range of forecast planned completion dates derived from multiple methodologies may be more credible than sole reliance on a CPA-based forecast.

  • Any methodology applied without an appropriate and consistent approach will yield unreliable results.

  • ESA will tend to provide a more pessimistic progress position and potentially an early warning of the risk of activity stacking. 

  • ESA should be undertaken using a baseline S-curve derived from an appropriately cost loaded programme. 

  • The approach to any delay analysis is dependent on the circumstances in question.