
Why Most Double Materiality Assessments Fail Assurance (and How to Fix Them)


Introduction

Double materiality assessments sit at the core of CSRD, yet a growing number of organisations are failing assurance reviews. This is not because the concept is misunderstood, or because regulatory expectations are unreasonable. They fail because most organisations treat double materiality as a reporting deliverable rather than what it actually is: a structured, governed decision system that determines which sustainability impacts and risks the organisation considers significant. Between 2023 and 2024, multiple European audit regulators and standard-setting bodies observed that early CSRD reviews were failing not because of missing disclosures but because of the weak decision processes behind them, reporting repeated instances of undocumented judgments, inconsistent methodologies, and ad-hoc processes in failed assurance exercises.

In most failed cases, the problem is not what was deemed material but how that conclusion was reached, evidenced, approved, and maintained over time. The failure is systemic: it reflects how the organisation approaches, governs, and delivers its materiality decisions.

This article examines where double materiality assessments typically break down, what auditors actually challenge, and how organisations can build assessments that are defensible, repeatable, and assurance-ready. 

The Core Problem: Materiality Is Treated as an Output, Not a System

Most double materiality assessments are designed backwards. The starting question is often, “What do we need to disclose under CSRD?” rather than, “Which sustainability impacts and risks materially affect enterprise value or society, and how do we know?”

The consequences are usually one or more of the following:

  • The assessment becomes a workshop artefact rather than a decision framework, signalling a one-off compliance activity.
  • Scoring is driven by opinion rather than predefined criteria.
  • Documentation is created after conclusions are reached, with evidence assembled retrospectively.
  • Governance exists on paper, not in practice.

This pattern mirrors findings from the initial CSRD dry runs, where organisations were able to produce comprehensive topic lists but could not reasonably demonstrate how the scoring criteria were applied across business units or years. In several of these pilot assurance exercises, auditors reported that materiality conclusions could not be replicated independently using the organisation’s own methodology documentation.

Common Failure Modes That Trigger Assurance Challenges

  1. Weak or Unverifiable Evidence:

The most common reason double materiality assessments fail assurance is the absence of evidence behind the scoring decisions. Typical examples include:

  • Topic inclusion justified by “industry relevance” without company-specific analysis.
  • Impact severity scored without reference to incidents, complaints, regulatory actions, or value-chain data.
  • Financial risk assessments disconnected from enterprise risk registers or financial planning assumptions.

A common case from initial CSRD assurance reviews relates to biodiversity and land-use impact. Multiple organisations concluded that biodiversity was non-material on the grounds that “operations are not located in sensitive areas.” When assurance teams challenged this, however, the organisations were unable to evidence site-specific evaluation, supplier or geography analysis, or an adequate assessment of potential exposure to protected areas. In multiple cases reviewed by European auditors, the conclusion itself was not disputed, but it could not be justified through appropriate documentation, which meant the judgment could not be validated.

  2. Undocumented Scoring Logic:

Many organisations apply scoring scales without documenting what those scales actually mean.

Common issues include:

  • Undefined thresholds (what distinguishes a score of 3 from a 4?)
  • Inconsistent interpretation across teams
  • No documentation explaining how the likelihood, severity, or scale was assessed

Studies of prior sustainability assurance engagements have consistently found that undefined scoring criteria and inconsistent threshold application are the biggest drivers of scope limitations and disclosures that are difficult to validate.
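To make the contrast concrete, here is a minimal sketch of what “defined scoring criteria” can look like in practice: a rubric in which each score level has an explicit written threshold, applied mechanically so two assessors reach the same result. All level names, wording, and cut-offs below are illustrative assumptions, not figures prescribed by CSRD or ESRS.

```python
# Illustrative severity rubric: each score maps to an explicit, auditable definition.
# The levels and wording below are hypothetical examples, not prescribed by CSRD/ESRS.
SEVERITY_RUBRIC = {
    1: "Negligible: no recorded incidents, no regulatory exposure",
    2: "Minor: isolated incidents, remediated within the period",
    3: "Moderate: recurring incidents or a formal regulatory inquiry",
    4: "Major: material fines, sustained harm, or value-chain disruption",
}

def score_severity(incident_count: int, regulatory_action: bool) -> int:
    """Apply the rubric mechanically so two assessors reach the same score."""
    if regulatory_action:
        return 4 if incident_count > 3 else 3
    if incident_count == 0:
        return 1
    return 2 if incident_count <= 2 else 3

# The same inputs always yield the same score, and the rubric text
# records what distinguishes a 3 from a 4.
assert score_severity(0, False) == 1
assert score_severity(5, True) == 4
```

The point is not the specific cut-offs but that they are written down before scoring begins, so any assessor, or any auditor, can reproduce a score from the same inputs.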

  3. Stakeholder Bias and Uncontrolled Subjectivity:

Stakeholder engagement is often presented as proof of robustness. In practice, it is one of the weakest elements in many assessments.

Frequent issues include:

  • Over-reliance on internal management views
  • Limited or unrepresentative external stakeholder samples
  • Weightings applied without justification
  • No process for resolving conflicting perspectives

This issue has been repeatedly documented in regulatory reviews of materiality processes, where auditors found that external inputs were collected but not meaningfully integrated. In many cases, regulators flagged that management overrode external inputs without documented reasoning, undermining any claim of substantive review and challenge.

Auditors are not testing whether stakeholders agree with management. They are testing whether disagreement is handled transparently and consistently.

  4. Incomplete Value-Chain Coverage:

Many assessments focus heavily on own operations and first-tier suppliers, while upstream and downstream impacts are treated narratively rather than analytically.

This is particularly problematic for:

  • Consumer goods companies (use-phase impacts)
  • Financial institutions (financed emissions and social exposure)
  • Organisations with complex global supply chains

Acknowledging value-chain impacts without applying the same scoring rigour creates internal inconsistency—one of the fastest ways to attract assurance scrutiny.

  5. Lack of Separation of Duties:

In many organisations, the same team:

  • Identifies topics
  • Applies scoring
  • Determines thresholds
  • Approves the final list

Auditors generally treat limited segregation of duties as a governance weakness, and the same view applies to double materiality: if identification, scoring, and approval are all run within a single function, the credibility of the process is open to question.

What Auditors Actually Challenge (and What They Don’t)

A common misconception is that auditors disagree with materiality conclusions. In reality, they rarely do; what they test is process integrity. Auditors typically focus on five areas.

  1. Methodology Existence and Consistency: Is there a documented methodology? Was it applied consistently? Are changes from prior years explained and approved? If the methodology exists only in slides or narrative text, it will not pass.
  2. Traceability from Assessment to Disclosure: Auditors expect clear traceability across the following areas:
  • Risk or impact identification
  • Scoring rationale
  • Materiality conclusion
  • Disclosure decision
  3. Evidence Supporting Judgment: Auditors expect references to internal reports, risk registers, incident logs, benchmarks, and assumptions. Unsupported assertions rarely hold up under assurance.
  4. Governance and Approval: Who reviewed the assessment? Who challenged it? Who approved it—and when? Informal or post-hoc approvals undermine the assessment outcome.
  5. Audit Trail: Every score should be traceable to documented evidence and recorded decisions. If approval occurs after outcomes are finalised, governance is found lacking.

In practice, assurance findings concentrate on missing audit trails, inconsistently applied methodology, undocumented changes from prior reporting periods, and approvals occurring after outcomes have been finalised. Where an organisation has run these processes diligently and without bias, materiality assessment outcomes are usually favourable.

How to Structure Defensible Materiality Decisions

Some of the best practices to follow are: 

Step 1: Define Decision Criteria Upfront

Before identifying topics, define impact dimensions, financial risk pathways, likelihood horizons, and evidence expectations. This prevents retrofitting logic after results are known.

Step 2: Separate Identification from Scoring

Topic identification should be broad and inclusive. Scoring should be applied later, using predefined criteria. Mixing the two introduces bias.

Step 3: Anchor Scoring in Evidence

For each topic:

  • Document primary evidence
  • Explicitly note gaps and uncertainty
  • Explain how judgment was applied

A concise, structured rationale is more effective than an extensive narrative.

Step 4: Apply Thresholds Transparently

Materiality thresholds must be defined, approved, and applied consistently. 
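One way to make threshold application mechanically consistent is to encode the approved rule once and apply it to every topic without exception. The threshold values and topic scores below are invented for illustration; they are not ESRS figures.

```python
# Illustrative thresholds: the values are assumptions for this sketch, not ESRS figures.
IMPACT_THRESHOLD = 3      # impact score at or above which a topic is material
FINANCIAL_THRESHOLD = 3   # financial score at or above which a topic is material

def is_material(impact_score: int, financial_score: int) -> bool:
    """Double materiality: a topic is material if EITHER dimension crosses
    its approved threshold. The same rule is applied to every topic."""
    return impact_score >= IMPACT_THRESHOLD or financial_score >= FINANCIAL_THRESHOLD

# (impact score, financial score) per topic; scores are invented examples.
topics = {"Climate change": (4, 4), "Biodiversity": (2, 1), "Own workforce": (3, 2)}
material = [t for t, (i, f) in topics.items() if is_material(i, f)]
# Every inclusion or exclusion now traces back to one documented rule.
```

Because the rule lives in one place, any change to a threshold is visible, approvable, and explains every downstream change in the material-topic list.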

Step 5: Ensure Challenge Is Real

Introduce formal challenge mechanisms and document responses. If no one can disagree with the assessment, it is not robust.

Version Control and Governance: Where Many Assessments Quietly Fail

Even well-designed assessments fail assurance due to weak version control.

Organisations must be able to demonstrate:

  • Which methodology version was used
  • Which data inputs applied to which period
  • What changed from prior years—and why
  • Who approved those changes

As CSRD reporting matures, year-on-year inconsistency is expected to become a focus area. Where material topics suddenly appear or disappear without a documented methodology change, organisations risk adverse assessment outcomes and further regulatory scrutiny.
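The version-control requirements above can be reduced to a simple change log: one approved entry per methodology change, so any year-on-year difference in material topics can be traced to a specific, dated decision. The entries, names, and dates are hypothetical examples.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical methodology change log: one entry per approved change,
# so any year-on-year shift in material topics traces back to it.
@dataclass(frozen=True)
class MethodologyChange:
    version: str
    effective_period: str   # reporting period the version applies to
    description: str        # what changed and why
    approved_by: str
    approved_on: date

CHANGE_LOG = [
    MethodologyChange("1.0", "FY2024", "Initial methodology",
                      "Audit Committee", date(2024, 3, 1)),
    MethodologyChange("1.1", "FY2025",
                      "Raised impact threshold from 2 to 3 after pilot assurance feedback",
                      "Audit Committee", date(2025, 2, 15)),
]

def version_for(period: str) -> str:
    """Which methodology version governed a given reporting period."""
    return next(c.version for c in CHANGE_LOG if c.effective_period == period)
```

With a log like this, the answer to “why did this topic drop out in FY2025?” is a dated, approved entry rather than a reconstruction after the fact.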


The Real Fix: From Compliance to Controlled Judgement

Double materiality fails assurance when it is treated as a reporting obligation. It passes when it is treated as a structured judgment process—governed with the same discipline as financial reporting or enterprise risk management.

The fix is not more workshops or surveys. It is:

  • Clear decision logic
  • Stronger evidence discipline
  • Explicit governance
  • Documented judgement
  • Early involvement of assurance teams

When these elements are in place, assurance stops being a threat and becomes validation.

Conclusion

Double materiality is increasingly viewed as a core governance process, not a mere disclosure exercise. Organisations that embed it as a controlled, repeatable judgment system are better positioned to withstand assurance scrutiny and regulatory review. Those that treat it as a one-off compliance task will find it increasingly difficult to remain defensible as scrutiny intensifies.

