
MvF Minimum Structure

Reference specification — the eight mandatory building blocks every MvF must contain.

Overview

This page defines the minimum structure a Methodology Verification Framework (MvF) must contain to be implementable, auditable, and governable within the Carrot dMRV ecosystem. The intention is not to force every methodology into a single rigid format, but to ensure that any framework submitted to the ecosystem includes the essential building blocks for consistent digital execution, evidence package formation, technical validation, and standardization.

The structure below serves as a reference for MvF Authors, as a basis for internal evaluation by the Operations & Methodologies team, and — in the future — for RFP processes and community review. Every section follows the design for verification principle: each requirement must be expressed so that either the MvA (in automated execution) or an auditor (in independent verification) can answer, based on evidence, whether the requirement was met.

The eight mandatory sections are:

  1. Scope & concept
  2. Methodological reference & registry
  3. Eligibility, criteria & exclusions
  4. dMRV events & event catalog
  5. Inputs, evidence & evidence policy
  6. Validation rules & controls
  7. Calculations, formulas & parameters
  8. Outputs & digital evidence package

Scope & concept

The MvF must begin by establishing, transparently, which environmental problem or purpose the dMRV addresses and what impact-generation mechanism will be quantified and reported. This section defines the "conceptual contract" of the framework: what it measures, under what conditions, and with which boundaries. Without this framing, subsequent criteria and calculations become open to interpretation and difficult to verify.

The author must describe the scope of the dMRV, including:

  • Problem and purpose — The environmental problem being addressed and the specific impact mechanism the methodology quantifies.
  • System boundary — Which activities, process stages, or flows are included in the quantification — and which are explicitly excluded and why — in alignment with the validated third-party methodology.
  • Quantification unit — The unit of measurement and the type of output expected in the applicable methodological context (e.g., tCO2e of avoided emissions, tons of recycled material). The goal is not to invent parameters but to state what will be accounted for and how it connects to the methodological reference.
  • Output type — Whether the methodology produces credits, certificates, or both, and under what conditions.
  • Applicability context — Sector, type of operation, geographic scope, minimum operational conditions, and any critical assumptions that constrain the framework's validity. This delimitation is essential for integrity — it prevents the dMRV from being applied outside the context for which it was designed.

Expected outcome: by the end of this section, any technical reader should be able to answer: "What does this dMRV measure, in which context does it apply, what is its boundary, and what output does it intend to produce according to the validated reference methodology?"


Methodological reference & registry

After defining scope and concept, the MvF must declare with precision which validated third-party methodology underpins the dMRV. This is critical for integrity and auditability: the framework cannot "float" without external anchoring and validation, because it is precisely this methodological reference that delimits assumptions, quantification logic, evidence requirements, and validity conditions.

The author must identify the reference methodology with enough detail to eliminate ambiguity, including:

  • Official name, version, date, and responsible entity
  • A permanent identifier or stable public reference (e.g., a permanent link or equivalent publication record)
  • Relevant annexes, tables, appendices, or superseding versions that affect calculations

Beyond pointing to the source, the MvF must make clear how the methodology will be operationalized within the framework. This is not about repeating the methodology's content but about explaining the relationship between the external reference and the MvF: which methodology sections are materially relevant, which requirements will be translated into events, rules, and formulas, and which dependencies are assumed. This narrative context prepares the reader for the Traceability Matrix, which provides the structured requirement-by-requirement mapping.

An important aspect is recording interoperability boundaries: what the MvF assumes as an external prerequisite (e.g., methodology validation, macro eligibility decisions, participant accreditation processes, registry rules) versus what is executed by the dMRV on the platform (operational rules, digital validations, evidence, and auditable outputs). This separation keeps documentation consistent with Carrot's positioning as a dMRV orchestration platform.

Expected outcome: by the end of this section, it should be unambiguous which external methodology underpins the dMRV, which dependencies and boundaries apply, and whether a registry is related — including the registry's role and the boundary between what is external and what the dMRV executes.


Eligibility, criteria & exclusions

Defining eligibility criteria, exclusions, and exceptions is one of the most sensitive steps in building an MvF. Poorly formulated criteria produce two types of problem: either they allow participants and processes that compromise credit integrity, or they unduly exclude legitimate operations that meet the methodology's objectives.

The guiding principle is design for verification: every eligibility criterion must be described so that someone — whether the MvA in automated execution or an auditor in independent verification — can answer, based on evidence, whether the requirement was met, without room for subjective interpretation.

Verifiable vs. narrative criteria

The difference between a narrative criterion and a verifiable criterion is the difference between intention and operationality.

A narrative criterion declares a condition generically:

The participant must hold a valid environmental license.

This communicates intent but does not guide verification: what does "valid" mean? Which document proves it? Which field or metadata should be validated? What is the rejection rule? Is there an exception?

A verifiable criterion describes the same condition in terms that allow objective validation:

The participant must submit an environmental license issued by the competent authority, with an expiration date equal to or later than the monitoring period start date. The system must verify the presence of the 'expiration date' field and compare it to the period's reference date. If the license is expired or the field is absent, the participant is blocked from proceeding in the accreditation flow.

When writing each criterion, the MvF Author should ask: "If I hand only this text to the developer, can they implement the validation without consulting me?" If the answer is no, the criterion needs refinement.
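The refined criterion above translates directly into an automated check. A minimal sketch in Python, assuming illustrative field names (`expiration_date` is not a prescribed MvF identifier):

```python
from datetime import date

def check_license(license_fields: dict, period_start: date) -> tuple[bool, str]:
    """Verify the environmental license criterion: the expiration date field
    must be present and not earlier than the monitoring period start date."""
    expiration = license_fields.get("expiration_date")
    if expiration is None:
        return False, "blocked: 'expiration date' field is absent"
    if expiration < period_start:
        return False, "blocked: license expired before the monitoring period start"
    return True, "accepted"

# A license expiring after the period start passes; an expired one is blocked.
ok, reason = check_license({"expiration_date": date(2026, 1, 1)}, date(2025, 6, 1))
```

Note that the rejection rule ("the participant is blocked from proceeding") is part of the criterion itself, so the function returns both a verdict and the recorded reason.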

Three layers of eligibility

The MvF must organize its criteria into three distinct layers:

1. Participant eligibility — Minimum requirements that each participant type (waste generator, hauler, processor, recycler) must meet to be accredited and operate within the dMRV. These typically involve:

  • Legal and regulatory documentation (licenses, permits, registrations)
  • Demonstrable operational capacity (installed capacity, equipment, technical certifications)
  • Geographic location (when the methodology has a territorial scope)
  • Registration ties or impediments (conflicts of interest, non-compliance history, regulatory restrictions)

2. Material and process eligibility — Which waste types, activities, or operational flows the dMRV accepts. This definition must be precise enough to avoid ambiguity. Instead of "Organic Waste," the framework should specify accepted categories by official classification code, acceptance conditions with contamination limits, source-separation requirements, and any applicable restrictions (e.g., exclusion of hazardous waste or waste from specific industrial origins).

3. Exclusions and exceptions — Exclusions are conditions that definitively prevent approval, regardless of other criteria being met. Exceptions are atypical but documented situations in which a standard criterion may be relaxed under specific conditions. The distinction matters: exclusions are absolute (if the exclusion criterion is triggered, there is no alternative path), while exceptions must be accompanied by justification, additional evidence, and — when applicable — formal approval.

Conditional and contextual criteria

In many methodologies, eligibility criteria are not uniform — they vary by application context. For example, a methodology with broad geographic coverage may have different requirements by country or region due to regulatory or infrastructure differences. The MvF must explicitly declare which criteria are universal (applicable to all participants and contexts) and which are conditional (applicable only under certain circumstances), identifying the trigger that activates each condition.

Similarly, criteria may have temporality: a requirement that applies at initial accreditation may differ from a maintenance or renewal requirement. The framework must make clear at which point in the cycle each criterion is verified and with what frequency.

Expected outcome: by the end of this section, any technical reader — including the MvA developer and the auditor — should be able to answer, for each participant and material type: "Is this candidate eligible?" "Under what conditions?" "What prevents them?" and "Is there an applicable exception?" — without consulting the framework's author.


dMRV events & event catalog

In Carrot's dMRV ecosystem, the event is the fundamental unit of verification. A dMRV event represents an operationally relevant occurrence in the supply chain flow that must be recorded, verified, and traced. It is at the event level that inputs (data and documents) gain meaning, validations are applied, evidence is generated, and the audit trail is built.

If the MvF were a building blueprint, the event catalog would be the detailed list of every construction step, with required materials, mandatory checks, and acceptance criteria for each phase. Without this list, the MvA developer receives a blueprint without execution instructions, and the auditor does not know which points to inspect.

How to identify relevant events

The first step in building the catalog is identifying all events that compose the operational flow for the methodology. The author should map the complete journey of the tracked object (a waste item, a batch, an activity) from origin to final disposition or transformation, considering every movement, action, and intervention it may undergo.

Each point in the flow where one of the following situations occurs potentially constitutes a dMRV event:

  • Custody transfer — The object changes responsible party or location (e.g., collection, transport, delivery)
  • Transformation or processing — The object undergoes physical, chemical, or classification change (e.g., sorting, composting, recycling)
  • Measurement or recording — A measurement is taken or a document is generated (e.g., weighing, temperature measurement, manifest or certificate issuance)
  • Verification or audit — A rule is applied and a result is recorded (e.g., MassID audit, credit issuance)
  • Decision or classification — The object is classified, accepted, rejected, or reclassified based on framework criteria
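This mapping can be made concrete by tagging each step of the tracked object's journey with the situation that qualifies it as an event. A hypothetical sketch, with invented event names:

```python
from enum import Enum

class EventTrigger(Enum):
    """The five situations that make a point in the flow a candidate dMRV event."""
    CUSTODY_TRANSFER = "custody transfer"
    TRANSFORMATION = "transformation or processing"
    MEASUREMENT = "measurement or recording"
    VERIFICATION = "verification or audit"
    DECISION = "decision or classification"

# Illustrative mapping of a simple waste flow (event names are hypothetical):
flow = [
    ("Pick-up", EventTrigger.CUSTODY_TRANSFER),
    ("Weighing", EventTrigger.MEASUREMENT),
    ("Sorting", EventTrigger.TRANSFORMATION),
    ("MassID audit", EventTrigger.VERIFICATION),
]
```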

Granularity guidance

Event granularity is a framework design decision. Events that are too aggregated (e.g., "processing" as a single event) lose verification power because they bundle steps that could be validated separately. Events that are excessively granular (e.g., each minute of windrow turning as a separate event) create operational complexity without proportional integrity gains. The ideal balance is granularity sufficient for each event to represent a meaningful verification checkpoint — a moment when something verifiable happens and evidence must be recorded.

Standard event description table

For each identified event, the MvF must provide a structured description containing at minimum the following elements:

| Element | Description | Purpose |
| --- | --- | --- |
| Identifier | Unique event code in the catalog (e.g., EVT-001) | Traceability and cross-reference |
| Name | Descriptive, standardized name (e.g., Pick-up, Weighing, Drop-off) | Clear communication between author, developer, and auditor |
| Objective | What this event proves or records in the flow | Contextualizes the event in the methodology's logic |
| Flow stage | Position of the event in the operational sequence | Reveals dependencies and execution order |
| Responsible participant | Who is responsible for executing or recording the event | Defines accountability and custody link |
| Required inputs | Data and documents that must be provided, with minimum metadata | Basis for validation — detailed in Inputs, evidence & evidence policy |
| Validation rules | Checks the MvA must execute to accept/reject the event | Basis for implementation — detailed in Validation rules & controls |
| Generated outputs | What the event produces as a recordable result | Composes the digital evidence package |
| Dependencies | Prior events that must be completed, or subsequent events that depend on this one | Ensures logical sequence and temporal integrity |
| Evidence regime | Whether verification is digital (automated) or requires audit/inspection | Guides the developer and auditor on the verification level |
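The standard description lends itself to a machine-readable record, so the catalog doubles as a handoff artifact. A sketch, assuming hypothetical field values for an EVT-001 Pick-up entry:

```python
from dataclasses import dataclass, field

@dataclass
class EventSpec:
    """One entry in the dMRV event catalog, mirroring the standard description table."""
    identifier: str            # unique code, e.g., EVT-001
    name: str
    objective: str
    flow_stage: int            # position in the operational sequence
    responsible: str           # participant accountable for the event
    required_inputs: list[str] = field(default_factory=list)
    validation_rules: list[str] = field(default_factory=list)
    generated_outputs: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)
    evidence_regime: str = "digital"   # "digital" or "audit"

# Illustrative entry; the values below are invented, not from any real catalog.
pick_up = EventSpec(
    identifier="EVT-001",
    name="Pick-up",
    objective="Record the custody transfer from generator to hauler",
    flow_stage=1,
    responsible="hauler",
    required_inputs=["transport manifest", "geolocation", "date/time"],
    validation_rules=["RV-001"],
    generated_outputs=["pick-up record"],
)
```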

Key point for authors

The event catalog is the primary handoff artifact between the MvF Author and the MvA Developer. Its completeness and clarity directly determine implementation quality. An ambiguous or incomplete catalog generates rework, author consultations, and risk of divergence between framework intent and application execution.


Inputs, evidence & evidence policy

If events are the backbone of the dMRV, inputs and evidence are the substance that makes them verifiable. An input is the data or document that feeds an event; evidence is the record that proves the event occurred in conformity with the framework's rules. Without adequate inputs, the event cannot be validated. Without traceable evidence, the event cannot be audited.

Input specification per event

Each event in the catalog requires a set of inputs to be executed. When specifying these inputs, the author must go beyond a generic list of "required documents" and provide a complete operational description that enables both MvA implementation and auditor verification.

For each input, the MvF must declare:

  • Input type — Structured data, digitized document, system record, signed declaration, etc.
  • Required fields or metadata — For example, for a weighing event: gross weight, tare, net weight, scale type, capture method, date/time, geolocation.
  • Accepted formats and units — For example, weight in kilograms, dates in ISO format, numeric values with up to two decimal places.
  • Acceptance conditions — What makes the input valid (e.g., gross weight greater than zero, date within the monitoring period).
  • Rejection conditions — What makes the input invalid and prevents the event from proceeding.
  • Exception conditions — Which data points are desirable vs. mandatory, and acceptable flexibilities for initial projects or specific participant types.

The difference between a well-specified input and a vague one separates an implementable framework from one that depends on interpretation. For example, a vague specification for the Weighing event (EVT-003) would say "record the weighing data." An adequate specification would detail every expected field, its validation rules, and the behavior when a field is absent or inconsistent.
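A well-specified input can be checked mechanically. The sketch below encodes an illustrative specification for the Weighing event; the field names, tolerance, and rejection messages are assumptions, not values prescribed by any methodology:

```python
from datetime import date

def validate_weighing(inputs: dict, period: tuple[date, date]) -> list[str]:
    """Validate Weighing event inputs: required fields present, gross weight
    positive, net weight consistent with gross minus tare, date inside period.
    Returns a list of rejection reasons; an empty list means the input is accepted."""
    errors = []
    required = ["gross_weight_kg", "tare_kg", "net_weight_kg", "timestamp"]
    for f in required:
        if f not in inputs:
            errors.append(f"missing field: {f}")
    if errors:
        return errors  # structural absence blocks further checks
    if inputs["gross_weight_kg"] <= 0:
        errors.append("gross weight must be greater than zero")
    if abs(inputs["gross_weight_kg"] - inputs["tare_kg"] - inputs["net_weight_kg"]) > 0.01:
        errors.append("net weight inconsistent with gross minus tare")
    start, end = period
    if not (start <= inputs["timestamp"] <= end):
        errors.append("date outside the monitoring period")
    return errors

spec_errors = validate_weighing(
    {"gross_weight_kg": 15949.0, "tare_kg": 1000.0, "net_weight_kg": 14949.0,
     "timestamp": date(2025, 3, 10)},
    (date(2025, 1, 1), date(2025, 12, 31)),
)
```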

Metadata requirements

Every input in dMRV must carry sufficient context to sustain traceability. This context is composed of metadata that answers the fundamental questions of the chain of custody:

  • Who submitted (responsible participant identification)
  • When (date and time of recording)
  • Where (geolocation or linked address)
  • In which context (associated event, flow stage)
  • Under which version (of the MvF and MvA in execution)

The MvF must declare which metadata are mandatory (without which the input cannot be accepted) and which are optional (enriching traceability but not blocking validation). When in doubt, the recommendation is to treat the metadata as mandatory — it is easier to relax a requirement later than to create one retroactively.

Evidence policy per event

The Evidence Policy consolidates, event by event, how each requirement will be substantiated. It must distinguish two fundamental categories: evidence that can be accepted through digital operational verification (whose robustness can be sustained by the digital trail, metadata, and cross-event consistency) and evidence that, by its nature, criticality, or risk, requires audit or additional inspection.

For each event, the policy must record:

| Field | Description |
| --- | --- |
| Evidence required | What evidence must be provided |
| Acceptance level | Digital, audit, or both |
| Minimum metadata | Which metadata fields are required |
| Digital validations | Which automated checks are performed |
| Escalation triggers | Conditions that escalate the event to manual review or audit |

Escalation trigger categories

Escalation triggers are conditions that, when detected during digital verification, indicate that available evidence is insufficient to sustain conformity and that additional procedures (audit, inspection, supplementary evidence collection) are necessary. The MvF Author must define these triggers explicitly and link them to actions proportional to the identified risk.

Triggers typically fall into three categories:

  • Inconsistency — Divergence between expected and provided data (e.g., a weighing that results in negative net weight, or a geolocation incompatible with the registered address).
  • Absence — Missing mandatory metadata or required evidence (e.g., missing transport manifest without an exemption justification).
  • Anomaly — Unusual pattern detected over time (e.g., volumes systematically exceeding declared capacity, or atypical frequency of exemptions).

For each trigger, the MvF must indicate the expected action: event blocking (preventing flow continuation until resolution), flagging for operational review (the event proceeds but is marked for analysis, and credit issuance is temporarily blocked), or escalation to audit (the event is forwarded for independent verification). The proportionality between trigger and action is a framework design decision that must consider the risk associated with the event and the materiality of the impact on credit integrity.

Expected outcome: a complete Evidence Policy organized event by event, enabling the MvA developer to implement all digital validations and the auditor to understand which points require additional verification.


Validation rules & controls

Validation rules are the operational mechanism that ensures conformity between participant-provided inputs and MvF-defined criteria. While the event catalog defines what happens and the evidence policy defines what proves it, validation rules define how each input is verified — which condition is tested, which standard is expected, what happens when the condition fails, and what evidence is recorded.

For the MvA developer, validation rules are the most directly implementable element of the entire MvF. Each well-specified rule translates into a code verification — a logical condition that produces a binary result (passed/failed) or an escalation (flagged for review).
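That binary-or-escalation contract can be modelled as a small result type plus an ordered rule runner. The rule identifiers and conditions below are invented for illustration:

```python
from enum import Enum
from typing import Callable

class RuleResult(Enum):
    PASSED = "passed"
    FAILED = "failed"
    FLAGGED = "flagged"   # escalated for review

def run_rules(event_data: dict,
              rules: list[tuple[str, Callable[[dict], "RuleResult"]]]) -> dict[str, str]:
    """Execute each rule in its declared order and record its result in the trail."""
    trail = {}
    for rule_id, check in rules:
        trail[rule_id] = check(event_data).value
    return trail

# Two illustrative rules: a structure check and a flagging threshold.
rules = [
    ("RV-001", lambda d: RuleResult.PASSED if d.get("unit") == "kg" else RuleResult.FAILED),
    ("RV-002", lambda d: RuleResult.FLAGGED if d.get("net_weight_kg", 0) > 30000 else RuleResult.PASSED),
]
trail = run_rules({"unit": "kg", "net_weight_kg": 14949}, rules)
```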

Rule taxonomy

Operational experience with the BOLD methodologies suggests a practical classification of rules into three types:

Structure rules verify formal integrity and data completeness. These rules are methodology-independent — they ensure the information "packaging" is correct before the content is evaluated. Examples:

  • All required fields are populated
  • The unit of measurement is the expected one (kilograms)
  • The numeric format is valid
  • The document category matches expectations
  • Participant identification metadata are present and consistent

Methodology rules verify conformity with the criteria and parameters defined in the scientific methodology and translated by the MvF. These rules depend on the framework's technical content. Examples:

  • The waste type belongs to the list of eligible subtypes
  • The distance between collection and destination does not exceed the project boundary limit
  • The temporal interval between events falls within the acceptable range
  • The project size does not exceed the eligibility threshold

Audit rules verify cross-event consistency and behavioral patterns requiring deeper analysis that cannot be fully resolved by simple deterministic verification. These rules typically involve comparison with accreditation data, cross-event checks, duplication detection, operational limit controls, and validations that generate flags for human review. Examples:

  • The cumulative mass from a generator in the month does not exceed the declared cap
  • The vehicle plate and drop-off date/time do not conflict with another MassID
  • The fertilizer conversion coefficient is compatible with accreditation data
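The mass-cap example can be sketched as a cross-record aggregation; the MassID record shape and cap value are illustrative:

```python
def check_mass_cap(mass_ids: list[dict], generator: str, month: str, cap_kg: float) -> bool:
    """Audit rule sketch: cumulative mass from one generator in the month
    must not exceed the declared cap."""
    total = sum(
        m["net_weight_kg"]
        for m in mass_ids
        if m["generator"] == generator and m["month"] == month
    )
    return total <= cap_kg

# Invented MassID records for illustration:
records = [
    {"generator": "GEN-01", "month": "2025-03", "net_weight_kg": 12000.0},
    {"generator": "GEN-01", "month": "2025-03", "net_weight_kg": 9000.0},
    {"generator": "GEN-02", "month": "2025-03", "net_weight_kg": 5000.0},
]
```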

12-field rule specification

For each validation rule, the MvF must provide a specification that the developer can implement without interpretation and the auditor can verify independently:

| Element | Description |
| --- | --- |
| Execution order | Position of the rule in the event's verification sequence (rules may depend on each other) |
| Identifier | Unique rule code (e.g., RV-001) |
| Name | Descriptive, standardized name |
| Applicable event(s) | Which event(s) from the catalog the rule applies to |
| Condition verified | What the rule tests — described in clear, unambiguous logical language |
| Acceptance criterion | The expected result for the validation to pass |
| Description / Rationale | Explanation of the rule's purpose and its relationship to dMRV integrity |
| Rule type | Structure, Methodology, or Audit |
| Failure action | What happens when the condition is not met: rejection, blocking, flagging, or escalation |
| Exception case | Whether exception situations apply, which trigger activates the exception, and whether this affects the output |
| Generated evidence | What is recorded in the audit trail as a result of rule execution |
| Methodological reference | Which section of the validated methodology or MvF supports the rule (when applicable) |

Cross-event dependency rules

Some rules do not operate on a single isolated event but depend on information from multiple events or historical data. For example, a project distance limit rule (RV-008 in the COM example) crosses the geolocation from the pick-up event (EVT-001) with the geolocation from the drop-off event (EVT-005). Similarly, the generator mass cap rule (RV-009) accumulates information from all MassIDs of the same generator over the month.

When a rule depends on cross-event data, the MvF must explicitly declare: which events are crossed, which fields from each event are used, what the aggregation or comparison logic is, and at which point in the flow the cross-check is executed (at the target event, at the end of the period, or during MassID audit). This clarity is fundamental for deterministic implementation and reproducible auditing.
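A cross-event distance check of this kind might look like the following; the great-circle (haversine) formula stands in for whatever distance definition the methodology actually prescribes, and the limit value is invented:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two geolocations, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def check_distance_limit(pick_up: dict, drop_off: dict, limit_km: float) -> bool:
    """Cross-event rule sketch (modelled on the RV-008 example): the distance between
    the EVT-001 pick-up and EVT-005 drop-off geolocations must stay within the limit."""
    d = haversine_km(pick_up["lat"], pick_up["lon"], drop_off["lat"], drop_off["lon"])
    return d <= limit_km
```

The declaration in the MvF would name the two crossed events, the `lat`/`lon` fields used from each, the comparison logic, and the point in the flow where the check runs.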


Calculations, formulas & parameters

Calculations and formulas are the quantitative core of the dMRV — they translate verified evidence into numerical results that ultimately support the generation of environmental credits. The way the MvF specifies its formulas determines not only the precision of results but also their reproducibility, auditability, and market credibility.

The central principle is self-containment: the MvF must provide the MvA developer with all information needed to implement each calculation without consulting the original scientific methodology or the framework's author. The MvF maintains traceability to the external reference, but the framework itself must contain all operational information needed for implementation and auditing.

Per-formula specification

Each formula must be presented with a minimum set of information:

Equation — Expressed unambiguously with consistent notation throughout the framework. Variables must have descriptive, unique names (avoid using the same symbol for different quantities in different formulas). When the formula involves summations, conditions (if/then), or iterations, the logic must be spelled out step by step, not condensed into a single expression.

Variables — For each variable used in the formula, the MvF must declare:

  • Symbol used
  • Full variable name
  • Unit of measurement
  • Value source (input data, fixed parameter, result of another calculation, accreditation value)
  • Application conditions (when the variable is used and when it is not)
  • Limits or constraints (minimum/maximum values, valid domain)

Fixed parameters — Emission factors, conversion coefficients, and constants must be accompanied by their primary source (article, methodology, database), reference date, and conditions under which the value is valid. When a parameter varies by context (e.g., different emission factor per waste type or region), the MvF must provide the complete value table and the selection rule.

Test scenarios — A set of reference inputs with the expected result, allowing the developer to verify the implementation is correct. For example: "If net weight = 14,949 kg, waste type = domestic sludge, and offset index = 0.85 tCO2e per ton, the expected GasID = 12.71 tCO2e." Test scenarios reduce implementation error risk and are especially useful when formulas involve multiple variables or conditions.
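The test scenario above can be encoded as an executable check against the implementation. The formula below is inferred from that single scenario (tonnes times offset index, rounded to two decimals) and is illustrative only; the real rounding convention must come from the MvF itself:

```python
def gas_id_tco2e(net_weight_kg: float, offset_index_tco2e_per_t: float) -> float:
    """Illustrative GasID calculation: convert net weight to tonnes, multiply by
    the offset index, and round to two decimal places (an assumed convention)."""
    return round((net_weight_kg / 1000.0) * offset_index_tco2e_per_t, 2)
```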

Uncertainty and discount factors

When the reference methodology provides for uncertainty margins, conservative discount factors, or safety buffers, the MvF must describe them with the same precision applied to the main formulas:

  • Uncertainty calculation formula (when quantifiable)
  • Discount factor applied and its technical justification
  • Activation conditions — Under which circumstances the discount is applied
  • Impact on the final result — How the discount affects the quantity of credits generated
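Applied in code, a conservative discount is a simple multiplier on the gross result; the 10% factor below is an invented example, not a methodology value:

```python
def apply_discount(gross_tco2e: float, discount_factor: float) -> float:
    """Apply a conservative discount: credited amount = gross result * (1 - discount).
    The discount_factor and its activation conditions come from the MvF."""
    return gross_tco2e * (1.0 - discount_factor)

credited = apply_discount(12.71, 0.10)  # illustrative 10% safety buffer
```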

Discount factors are especially relevant for dMRV credibility because they demonstrate a conservative approach — in case of doubt, the framework credits less, not more. This stance is valued by buyers, auditors, and the market, and should be made explicit in the MvF as part of the methodology's design.


Outputs & digital evidence package

The auditable methodological output is the final result of dMRV execution: the artifact that justifies, based on verified evidence, that a given environmental impact was quantified in conformity with the methodology. In the Carrot ecosystem, these outputs are the basis for subsequent issuance, tracking, and — when applicable — retirement of credits, according to the adopted institutional design.

Output types

The MvF must explicitly declare which output types the methodology generates and under which conditions:

Primary outputs are quantitative results that support credit generation. For example, the quantity of avoided emissions (in tCO2e), the quantity of recycled material (in tons), or any other metric defined by the methodology. In the fictional COM example, primary outputs would be the GasID (carbon credits for methane avoidance) and the RecycledID (recycling credits for landfill diversion).

Intermediate outputs are partial results that feed subsequent calculations or have informational value for audit, but do not directly generate credits. For example, the net weight after sorting is an intermediate output from the sorting event that feeds the final credit calculation.

Verification outputs are records generated by validation rule execution — approvals, rejections, flags, and audit results that compose the MassID audit trail. These outputs do not generate credits but are essential for demonstrating conformity.

Evidence package composition

The digital evidence package is the organized set of all records, data, documents, logs, and validation results generated throughout dMRV execution for a given tracked object (typically a MassID). It enables both internal digital verification and independent verification by third-party entities when applicable.

The MvF must describe what composes the evidence package for the methodology, including:

  • Data and documents submitted by participants at each event (inputs), with their metadata
  • Results of each validation rule executed (passed, failed, flagged), with the MvF and MvA version used
  • Results of each calculation applied, with input variables and parameters
  • Primary outputs generated, with their technical justification (how the number was obtained)
  • Records of any escalation, additional audit, or human intervention that occurred

The completeness of the evidence package is what allows any reviewer — internal or independent — to traverse the full path between original inputs and the final output, verifying each step independently. The MvF must be designed so that no output exists without an evidence trail that supports it.

Versioning and temporal traceability

Each output generated by the dMRV must carry sufficient versioning information to allow future reproduction. At minimum, the output must record:

  • The MvF version that defined the applied rules
  • The MvA version that executed the rules
  • The execution date
  • References to the events and inputs that support it

This temporal traceability is especially important when the framework undergoes version updates. An output generated under MvF version 1.0 must be evaluated according to version 1.0 rules, even if version 2.0 is already in operation. The MvF must declare how this separation is maintained and what happens with in-transit outputs during a version transition.

Connection to external processes

The MvF must also indicate how outputs connect to processes outside the Carrot platform perimeter, when applicable. The platform enables methodology execution that supports credit generation, but issuance, tracking, and retirement of those credits may involve external infrastructure with its own formats and requirements.

When the dMRV institutional design provides for this interoperability, the MvF must declare:

  • Which outputs are delivered to external processes and in what format
  • Which metadata are required by the registry or destination infrastructure
  • Which external dependencies exist (e.g., registry approval required before formal credit issuance)

This declaration keeps the separation of responsibilities clear and allows the platform to function as execution infrastructure without conflating its role with certification or registration. See Carrot Explorer for how outputs are presented to external stakeholders.


Artifacts & templates

The sections above reference artifacts and templates that should accompany the MvF as standardized annexes. The table below consolidates the recommended artifacts:

| Artifact | Description | Reference section |
| --- | --- | --- |
| Traceability Matrix | Connects validated methodology requirements to MvF elements, mapping each requirement to events, evidence, validations, and outputs. | Methodological reference & registry |
| dMRV Event Catalog | Structured description of all events in the operational flow, with inputs, validations, outputs, dependencies, and evidence regime. | dMRV events & event catalog |
| Evidence Policy per Event | Table consolidating, event by event, required evidence, acceptance level, minimum metadata, digital validations, and escalation triggers. | Inputs, evidence & evidence policy |
| Validation Rules Table | Complete list of validation rules classified by type (Structure, Methodology, Audit), with specification sufficient for implementation. | Validation rules & controls |
| Calculations & Parameters Table | Detailing of each formula, variable, and parameter, including sources, units, application conditions, and test scenarios. | Calculations, formulas & parameters |
| MvF Completeness Checklist | Verification list with all mandatory MvF components per the Minimum Structure, for self-assessment by the author and evaluation by the curation team. | All sections |

Download templates

Download the standardized MvF artifact templates — including the Traceability Matrix, Event Catalog, Evidence Policy, Validation Rules Table, Calculations & Parameters Table, and Completeness Checklist.

Each artifact must be developed following the standardized template to ensure consistency across different frameworks submitted to the ecosystem.


See also: MvF Author Guide · MvF concept · Methodology Lifecycle