
05/06/25

Delay Analysis: Are We Really Applying the Methodology We Claim to Use?

Often, I come across delay analysis reports and claims, especially those prepared in-house by contractors, where a specific method is clearly identified — for example, Time Impact Analysis (TIA), Windows Analysis, or Impacted As-Planned (IAP). But when you review the actual application, what’s been done tells a different story: the delay analysis method named in the report doesn’t match the methodology or the steps applied.


Author: Ibrahim Elsisi, Technical Director, Australia


The issue becomes more serious when parties in dispute each engage consultants and agree — at least on paper — to use the same method. After weeks of effort, report writing, programme modelling, and critical path discussions, they end up with two different analyses, both claiming to follow the same agreed method. Both sides can reference industry best practices. Yet what’s been applied is fundamentally different — and that just widens the gap. This divergence raises serious concerns about the credibility and reliability of delay claims. It also highlights the importance of understanding not just what method you say you’re using, but how you actually apply it — and whether your application aligns with the principles of that method.

Even in AACE® International’s RP 29R-03, the Method Implementation Protocols (MIPs) carry different names. In fact, while RP 29R-03 lists 9 primary MIPs, the document itself refers to over 30 different common names used across the industry for these MIPs. This is a clear and compelling example of how widespread and ingrained the confusion has become. When the same MIP can be referred to by multiple names — and the same name can point to more than one MIP — it’s no surprise that practitioners, consultants, and parties to a dispute often walk away with completely different understandings of what method is actually being applied. If they’re truly based on the same methodology, why do they have different labels? These different names lead to different interpretations — and therefore different applications.

‘Windows Analysis’ is another classic example where a common method name leads to multiple interpretations and hence different methodologies. To add to the complexity, 7 out of the 9 MIPs can be referred to — in one way or another — as “Window(s) Analysis”. In most cases, “Windows Analysis” refers to Time Slice Windows Analysis (or perhaps I should not name the method!) — an observational approach where delays are assessed across multiple time intervals, typically using contemporaneous updates. However, others might rightly argue that “Windows Analysis” is not a delay analysis method in itself, but rather a framework or approach that can be applied in conjunction with various methods. For instance, it could take the form of a TIA Windows Analysis, where delay impact is applied across successive time windows, or an As-Planned vs As-Built Windows Analysis, where planned and actual progress are compared over defined periods. Whether it’s a TIA applied across multiple windows, an As-Planned vs As-Built Windows approach, or a Time Slice Windows Analysis, the term “windows” appears again and again as a descriptor. Yet each of these MIPs may have entirely different structures, purposes, and assumptions. Some are modelled, others observational. Some use dynamic logic updates; others don’t. Once again, we are faced with a situation where the same term refers to different methodologies, each producing very different results. This doesn’t just delay the process — it adds fuel to the dispute, increases costs, and undermines confidence in the reliability of delay analysis as a tool for resolving time-related issues.
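By way of illustration only, the observational ‘time slice’ idea can be reduced to a very simple sketch: take the forecast completion date from successive contemporaneous updates and measure how much it moves within each window. The Python below is hypothetical and heavily simplified (the data dates and forecast dates are invented, and a real analysis would also investigate what drove the critical path within each window); it is a sketch of the concept, not a substitute for any of the MIPs discussed.

```python
from datetime import date

# Hypothetical forecast completion dates read from successive contemporaneous
# updates (data date -> forecast completion at that data date).
updates = {
    date(2024, 1, 31): date(2024, 12, 20),
    date(2024, 3, 31): date(2025, 1, 15),   # slippage occurred in this window
    date(2024, 5, 31): date(2025, 1, 10),   # partial recovery in this window
}

# Compare consecutive updates: the movement of the forecast completion date
# within each window is the delay (or recovery) to be explained for that window.
data_dates = sorted(updates)
for start, end in zip(data_dates, data_dates[1:]):
    slip = (updates[end] - updates[start]).days
    print(f"Window {start} to {end}: forecast completion moved by {slip:+d} days")
```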

One possible reason for this is that many delay analysis methods share similar characteristics, but their purpose and underlying logic are significantly different. Time Impact Analysis (TIA) and Impacted As-Planned (IAP), for example, are both modelled, additive, and prospective methods. Both involve inserting delay events as fragmentary networks (fragnets) into a programme to assess the potential impact. Despite appearing similar, they are two fundamentally different methodologies that produce different outcomes, and they are often confused or misapplied, because their foundations are different:

  • TIA is typically applied to a progress update just before the start of the delay event — meaning it includes actual progress and reflects project realities.

  • IAP, on the other hand, is usually applied to the baseline programme — but even the term “baseline” can be misunderstood. Does it refer to the original approved programme at the start of the project (with no actuals)? Or the latest approved schedule at the time of analysis? That’s a whole different debate.

However, an important nuance is often overlooked: if two experts are conducting their analysis during the very early stages of the project, where no progress updates exist yet and the only available programme is (or should be) the baseline programme, then both parties are essentially applying the same methodology — regardless of whether they label it as TIA or IAP. In such a scenario, both are inserting fragnets into the baseline to assess delay impact prospectively. Yet one might call it TIA and the other IAP, creating the false impression that two distinct methodologies are being applied. This highlights how naming conventions can mislead and widen the gap between parties, even when the technical work is fundamentally the same.
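To make that distinction concrete, here is a deliberately simplified, hypothetical sketch in Python. The activities, durations and the ten-day delay fragnet are invented, and real programmes are of course analysed in scheduling software rather than in code; the sketch only shows how inserting the same fragnet into the baseline (IAP-style) versus into a progress update reflecting slower actual performance (TIA-style) produces different forecast completion dates.

```python
def completion_day(activities):
    """Simple forward-pass CPM with finish-to-start links only.

    `activities` maps name -> (duration, [predecessor names]) and must list
    predecessors before their successors.
    """
    finish = {}
    for name, (duration, predecessors) in activities.items():
        start = max((finish[p] for p in predecessors), default=0)
        finish[name] = start + duration
    return max(finish.values())

# Hypothetical baseline programme (no progress yet): A -> B -> C, 45 days.
baseline = {
    "A": (10, []),
    "B": (20, ["A"]),
    "C": (15, ["B"]),
}

# IAP-style: a 10-day delay fragnet inserted before B in the baseline.
impacted_baseline = {
    "A": (10, []),
    "Delay": (10, ["A"]),
    "B": (20, ["A", "Delay"]),
    "C": (15, ["B"]),
}

# TIA-style: the same fragnet inserted into a progress update in which
# activity A actually took 15 days instead of the planned 10.
impacted_update = {
    "A": (15, []),
    "Delay": (10, ["A"]),
    "B": (20, ["A", "Delay"]),
    "C": (15, ["B"]),
}

print("Baseline completion:          day", completion_day(baseline))           # day 45
print("Impacted baseline (IAP-like): day", completion_day(impacted_baseline))  # day 55
print("Impacted update (TIA-like):   day", completion_day(impacted_update))    # day 60
```

The arithmetic is trivial; the point is the choice of base programme. The same delay event, impacted against different bases, yields different results, which is exactly why stating the base programme explicitly matters more than the label attached to the method.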

So how do we reduce this confusion? One practical approach I personally apply is to avoid naming the method altogether. Rather than stating “this is a TIA” or “this is an IAP”, I focus on clearly describing the methodology being used: what base programme is relied upon, whether the analysis is modelled or observational, whether actual progress data is included, how the delays are inserted, assessed, and interpreted, and whether the critical path is treated as dynamic or static. This approach helps eliminate ambiguity and encourages clarity. More importantly, it shifts the discussion towards the substance of the analysis — its strengths, weaknesses, and assumptions — rather than getting stuck in debates over whether the correct label has been applied.
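Purely as an illustration of that ‘describe, don’t label’ habit (the field names below are my own and not drawn from any published standard or checklist), the attributes worth stating explicitly could be captured in a simple structure:

```python
from dataclasses import dataclass

@dataclass
class DelayAnalysisDescription:
    """Illustrative record of the attributes that actually define an analysis."""
    base_programme: str            # e.g. baseline, or a specific progress update (data date)
    modelled: bool                 # modelled (fragnets inserted) rather than observational
    includes_actual_progress: bool
    delay_insertion: str           # how delay events are inserted, assessed and interpreted
    critical_path: str             # "dynamic" (re-assessed over time) or "static"

# Example: a description that makes the methodology explicit without naming it.
analysis = DelayAnalysisDescription(
    base_programme="progress update with a data date immediately before the delay event",
    modelled=True,
    includes_actual_progress=True,
    delay_insertion="one fragnet per delay event; impact read from the forecast completion date",
    critical_path="dynamic",
)
print(analysis)
```

Two analyses that give the same answers to these questions are, in substance, the same methodology, whatever name each expert attaches to it.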

I believe this is a conversation worth having across the industry. Are we consistent and accurate in the way we apply delay analysis methods? Are we contributing to clarity and resolution — or unintentionally adding to the confusion?

I’d love to hear others’ thoughts on this.


To discuss any of the points raised in this article, or to enquire about our delay analysis services, please email: Ibrahim.Elsisi@diales.com 


 


More than 250 experienced professionals in 15 countries, working in more than 17 languages, are ready to help you find the best possible solution for your business.

GET IN TOUCH