Quality Criteria & Accreditation
How MvFs are evaluated — six quality dimensions, the accreditation process, and non-conformity types.
Purpose and scope
The quality evaluation of a dMRV serves three simultaneous purposes. For the ecosystem, it acts as an integrity barrier — ensuring that only technically sound, auditable, and implementable frameworks enter production, protecting the credibility of issued credits and the trust of participants and buyers. For the author, it acts as an expectation guide — by knowing the evaluation criteria in advance, the author can build the MvF focused on the requirements that will be verified, reducing rework and review cycles. For the developer, it acts as a quality guarantee for inputs — an accredited MvF is, by definition, implementable without interpretation, reducing risk and friction during MvA development.
The scope of this page covers the evaluation of the MvF (framework), which is the primary author deliverable and the main object of curation at the current stage of the ecosystem. Quality criteria for the MvA (application), which involve software engineering and platform integration requirements, are defined by the Engineering team in a separate document.
The criteria described here reflect the current internal curation model, in which MvF validation is conducted by the Carrot Operations and Methodologies team. As the ecosystem matures and the Community of Experts advances through its phases, these responsibilities may be progressively transferred, with safeguards and processes defined by applicable governance.
Evaluation principles
The quality evaluation of an MvF is guided by six principles that reflect the values of the dMRV ecosystem and apply both to the current internal curation process and to future community review processes.
- Verifiability — Every requirement described in the MvF must be expressed so that someone — whether the MvA in automated execution or an auditor in independent verification — can answer, based on evidence, whether the requirement was met. The guiding question is: "for each rule, criterion, and parameter, is there an objective way to test whether it was fulfilled?"
- Self-containment — The MvF must be sufficiently complete for the MvA developer to implement all rules, calculations, and validations without consulting the author or the original scientific methodology. This does not mean the MvF replaces the external reference, but rather that the framework itself must contain all operational information necessary for implementation and audit.
- Traceability — It must be possible to trace the complete path between the third-party validated methodology and any operational element of the MvF — rule, calculation, parameter, eligibility criterion — demonstrating where each requirement came from and how it was translated. The Traceability Matrix is the instrument that materializes this principle.
- Internal consistency — The different parts of the MvF must be coherent with each other: eligibility criteria must be compatible with catalog events, validation rules must reference inputs that actually exist in the events, calculations must use variables that have been defined and have a declared source, and outputs must be justified by the evidence trail from preceding events.
- Proportionality — The evaluation recognizes that different methodologies have different complexity levels, and that quality criteria must be applied with reasonableness. An MvF for a simple methodology with few events and straightforward rules does not need the same documentary breadth as an MvF for a complex methodology with dozens of variables and cross-participant interactions. What must remain constant is the quality and precision of each component, not necessarily the quantity.
- Adaptability — The MvF must be designed to support application across multiple territories, separating universal elements from elements parameterizable by geography. A framework that works only for a specific country — without providing an interface with Geographic Annexes — limits the methodology's scalability and creates significant rework when territorial expansion becomes necessary.
Quality dimensions
Quality criteria are organized into six dimensions. Each dimension groups a set of requirements that together determine whether the MvF is ready for accreditation. The evaluation is performed dimension by dimension, and the result for each can be: Conformant, Conformant with caveats (minor points to adjust that do not compromise framework integrity), or Non-conformant (gaps that prevent accreditation and require substantive revision).
Completeness
The completeness dimension evaluates whether the MvF contains all components defined in the MvF Minimum Structure and whether mandatory artifacts are present and filled in. The MvF Completeness Checklist is the author's self-assessment instrument for this dimension, but internal curation may verify additional aspects beyond the checklist.
In practice, the completeness evaluation verifies that each Minimum Structure section is present and non-empty: scope and concept, methodology reference, eligibility and exclusions, event catalog, evidence policy, validation rules, calculations and parameters, outputs, and geographic adaptability. It also verifies that mandatory artifacts have been delivered: Traceability Matrix, Event Catalog, Evidence Policy, Validation Rules Table, and Calculations and Parameters Specification Table — all with Geographic Scope columns filled in applicable tabs.
Verifiability
The verifiability dimension evaluates whether the rules, criteria, and parameters of the MvF are described in a way that allows objective testing. It is the most critical dimension for the operational quality of the framework, because an MvF that cannot be verified cannot be implemented deterministically.
Curation evaluates verifiability through a sample-based review: it selects a representative subset of validation rules and eligibility criteria and, for each one, verifies whether the description is sufficient to answer, without ambiguity, whether the requirement was met. Rules containing terms such as "adequate", "reasonable", "sufficient", or "as needed" without an operational definition are considered non-verifiable and must be reformulated.
The evaluation also verifies that test scenarios provided (when present) are consistent with the described formulas — that is, that expected results are correct when the stated inputs are applied.
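As an illustrative sketch only (the function and field names below are hypothetical, not part of any Carrot schema), a verifiable rule can be written as a predicate that is decidable from declared inputs, and a provided test scenario can be checked by applying the stated inputs to the described formula:

```python
# Hypothetical sketch: a verifiable rule is decidable mechanically from
# declared inputs -- no "adequate", "reasonable", or "sufficient".

def rule_min_sample_weight(evidence: dict) -> bool:
    """Testable condition: recorded weight must be >= 50.0 kg."""
    return evidence["weight_kg"] >= 50.0

def check_test_scenario(formula, inputs: dict, expected: float,
                        tol: float = 1e-9) -> bool:
    """Check a provided test scenario: applying the stated inputs to the
    described formula must reproduce the expected result."""
    return abs(formula(**inputs) - expected) <= tol

# Illustrative calculation: net_mass = gross_mass * (1 - moisture)
net_mass = lambda gross_mass, moisture: gross_mass * (1 - moisture)

print(rule_min_sample_weight({"weight_kg": 62.5}))  # True
print(check_test_scenario(net_mass,
                          {"gross_mass": 100.0, "moisture": 0.12}, 88.0))  # True
```

A rule such as "weighing must be precise" cannot be expressed this way, which is exactly what marks it as non-verifiable.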
Traceability
The traceability dimension evaluates whether the MvF maintains a clear, documented connection to the third-party validated methodology that underpins it. The central instrument of this evaluation is the Traceability Matrix, which must connect each relevant requirement from the external methodology to the corresponding MvF element.
Curation verifies that the Matrix is completed in a way that allows any reviewer to trace the path between the external reference and the operational implementation: Is the original requirement identified precisely (section, version, passage)? Does the MvF indicate where and how this requirement is addressed? Do associated events exist in the catalog? Are the evidence and validations consistent?
Beyond the Matrix, traceability is also evaluated in the MvF body: Do formulas reference their sources? Do fixed parameters indicate the origin of their value? Are exclusions and exceptions justified by the methodology? When the MvF makes design choices that go beyond what the methodology prescribes — for example, defining event granularity or conservative discount factors — these choices must be documented with technical rationale.
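One row of the Traceability Matrix can be pictured as a record like the sketch below (field names are illustrative assumptions, not a prescribed format): the external requirement pinned to section and version, the MvF element that addresses it, the associated catalog events, and a rationale when the MvF makes a design choice beyond the methodology.

```python
from dataclasses import dataclass

@dataclass
class TraceabilityRow:
    """Hypothetical sketch of one Traceability Matrix row."""
    requirement_ref: str      # e.g. "Methodology v2.1, section 5.3"
    requirement_summary: str  # what the external methodology requires
    mvf_element: str          # where and how the MvF addresses it
    catalog_events: list      # events in the Event Catalog that evidence it
    design_rationale: str = ""  # required when the MvF goes beyond the methodology

row = TraceabilityRow(
    requirement_ref="Methodology v2.1, section 5.3",
    requirement_summary="Collected material must be weighed at reception",
    mvf_element="Validation rule VR-07",
    catalog_events=["EV-03 weighing"],
    design_rationale="Event granularity per delivery, chosen conservatively",
)
print(row.requirement_ref)
```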
Implementability
The implementability dimension evaluates whether the MvF provides the MvA developer with sufficient information to code all rules, validations, and calculations without needing to interpret, infer, or consult the author. It is the self-containment principle translated into an evaluation criterion.
In practice, curation evaluates whether: validation rules specify the tested condition, the acceptance criterion, and the action on failure; calculations describe the equation, each variable (with unit, source, and conditions), and fixed parameters with references; catalog events declare required inputs with minimum metadata and expected formats; and conditional criteria state the trigger that activates each condition.
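The level of specification expected for a validation rule can be sketched as follows (the identifiers and enum values are hypothetical examples, not platform definitions): tested condition, acceptance criterion, and action on failure all declared explicitly.

```python
from dataclasses import dataclass
from enum import Enum

class FailureAction(Enum):
    """Hypothetical failure actions a rule might declare."""
    REJECT_EVENT = "reject_event"
    FLAG_FOR_AUDIT = "flag_for_audit"

@dataclass
class ValidationRule:
    """Hypothetical sketch of a fully specified validation rule."""
    rule_id: str
    tested_condition: str      # machine-testable, e.g. "moisture_pct <= 15.0"
    acceptance_criterion: str  # numeric, unit-bearing, unambiguous
    on_failure: FailureAction  # explicit action when the condition fails

rule = ValidationRule(
    rule_id="VR-07",
    tested_condition="moisture_pct <= 15.0",
    acceptance_criterion="Moisture content at most 15.0 % (wet basis)",
    on_failure=FailureAction.FLAG_FOR_AUDIT,
)
print(rule.on_failure.value)  # flag_for_audit
```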
A pragmatic way to assess implementability is the "developer test": if curation takes any section of the MvF and hands it to a developer who did not participate in writing the framework, could that developer implement it without asking the author any questions? If the answer is no, the section needs refinement.
Auditability
The auditability dimension evaluates whether the MvF, when executed, generates sufficient evidence for an independent auditor to verify the process end-to-end — from original inputs to final outputs — without needing to access internal systems or rely on verbal explanations.
Curation verifies whether: the Evidence Policy distinguishes what is digitally verifiable from what requires audit or inspection; escalation triggers are defined for situations of inconsistency, absence, and anomaly; the digital evidence package contains the elements necessary to justify each result; and MvF and MvA versioning is included in outputs, enabling future reproduction of results.
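The versioning requirement can be pictured as each output carrying the framework and application versions under which it was produced, as in this hypothetical sketch (field names are illustrative assumptions):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class OutputRecord:
    """Hypothetical sketch: an output that a future auditor can reproduce."""
    output_id: str
    result_value: float
    unit: str
    mvf_version: str     # framework version in force at execution time
    mva_version: str     # application version that executed the rules
    evidence_refs: tuple # digital evidence package justifying the result

record = OutputRecord(
    output_id="OUT-2024-001",
    result_value=12.4,
    unit="t",
    mvf_version="1.3.0",
    mva_version="1.3.2",
    evidence_refs=("EV-03/weighing-ticket", "EV-05/lab-report"),
)
print(json.dumps(asdict(record), indent=2))
```

Because the versions travel with the result, the exact rules and calculations that produced it can be re-run later.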
Auditability is especially relevant because the Carrot ecosystem involves validation, issuance, and retirement processes for credits that require a complete audit trail. An MvF that does not generate auditable evidence creates friction throughout the cycle and weakens credit credibility.
Geographic adaptability
The geographic adaptability dimension evaluates whether the MvF is designed to support application across multiple territories, following the geographic adaptability guidelines. This dimension recognizes that a dMRV methodology with global scaling potential needs to separate, from the design stage, what is universal — derived from the methodology's logic — from what is territorial — dependent on local legislation, classification, or infrastructure.
Curation evaluates whether: MvF elements are classified as Universal or Territorial in tabular artifacts (Rules Table, Calculations Table, Evidence Policy); elements classified as Territorial have a clear functional requirement (the universal intent) separated from operationalization (which depends on the territory); territorial parameters indicate a default value when local data is unavailable; territorial eligibility criteria indicate connection points with the Geographic Annex; and the MvF explicitly declares which information the Geographic Annex must provide for each territorial point.
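A territorial parameter satisfying these checks can be sketched as below (names and values are hypothetical, for illustration only): the universal intent is stated as a functional requirement, a default covers the absence of local data, and the record declares what the Geographic Annex must provide.

```python
# Hypothetical sketch of a Territorial parameter declaration.
territorial_param = {
    "name": "emission_factor_transport",
    "scope": "Territorial",
    "functional_requirement": "Emission factor for road transport, kg CO2e/t-km",
    "default_value": 0.105,  # conservative default when no local data exists
    "annex_must_provide": ["territory code", "local factor", "source reference"],
}

def resolve(param: dict, annex: dict) -> float:
    """Use the Geographic Annex value when available, else the declared default."""
    return annex.get(param["name"], param["default_value"])

print(resolve(territorial_param, {}))                                    # 0.105
print(resolve(territorial_param, {"emission_factor_transport": 0.093}))  # 0.093
```

The functional requirement stays universal; only the value resolved through the Annex varies by territory.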
An MvF that does not address geographic adaptability is not necessarily non-conformant in this dimension — this depends on the scope declared by the author. If the methodology is explicitly designed for a single territory (for example, a methodology that only applies to Brazil due to specific Brazilian legislation), the author must declare this restriction in the scope and the evaluation will consider this dimension "not applicable". However, if the methodology has multi-territorial application potential and the MvF does not provide universal/territorial separation, curation will flag this as an improvement point.
Accreditation process
The accreditation process is the flow through which a submitted MvF is evaluated, adjusted (when necessary), and — when approved — formally accredited for production operation. At the current stage, this process is conducted internally by the Carrot Operations and Methodologies team, with support from the Engineering team for technical implementation feasibility.
Submission and triage
The process begins with the author's formal submission of the MvF, accompanied by all mandatory artifacts (Traceability Matrix, Event Catalog, Evidence Policy, Rules Table, Calculations Table) and the completed Completeness Checklist. Submission may occur via RFP (when the dMRV responds to a published demand) or via partnership/direct initiative.
During initial triage, curation verifies that the deliverable is complete — that is, that all Minimum Structure sections are present, that mandatory artifacts have been delivered, and that Geographic Scope columns are filled in applicable tabs. If the deliverable is incomplete, the author is notified of the missing components and given a deadline to supplement before the technical assessment begins.
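The triage check can be pictured as a simple set difference over the mandatory artifacts named above — a hypothetical sketch, not an actual curation tool:

```python
# Hypothetical sketch of initial triage: report which mandatory artifacts
# are missing so the author can supplement before technical assessment.
MANDATORY_ARTIFACTS = {
    "Traceability Matrix",
    "Event Catalog",
    "Evidence Policy",
    "Validation Rules Table",
    "Calculations and Parameters Specification Table",
}

def triage(delivered_artifacts: set) -> set:
    """Return the mandatory artifacts still missing from the submission."""
    return MANDATORY_ARTIFACTS - delivered_artifacts

missing = triage({"Traceability Matrix", "Event Catalog", "Evidence Policy"})
print(sorted(missing))
```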
Technical assessment
Once the deliverable is complete, curation conducts the technical assessment — the systematic application of the quality criteria across six dimensions (completeness, verifiability, traceability, implementability, auditability, and geographic adaptability). The assessment produces a technical report that records, for each dimension, the result (conformant, conformant with caveats, non-conformant) and applicable observations.
During the technical assessment, curation may consult the author to clarify specific points, request additional documentation, or solicit targeted adjustments. These interactions are recorded as part of the dMRV's assessment history, preserving traceability of the decision process.
When the Engineering team identifies that some aspect of the MvF presents significant implementation challenges — for example, a rule that depends on information the platform cannot access, or a calculation requiring external data that cannot be integrated — curation may return the MvF to the author with a technical note explaining the limitation and suggesting alternatives.
Review cycle
If the technical assessment identifies dimensions that are non-conformant or conformant with caveats requiring adjustment, the MvF enters a review cycle. The author receives the technical report with detailed observations and has a defined period to submit the revised version.
The ecosystem allows a maximum of two review cycles for the same MvF. If after the second cycle there are still non-conformant dimensions, the MvF is returned to the author with a recommendation for substantive restructuring, and a new submission will be treated as an independent process. This limitation ensures that the accreditation process remains efficient and that frameworks with structural problems do not consume indefinite review cycles.
In each cycle, curation re-evaluates only the flagged dimensions — previously approved dimensions are not re-evaluated, unless changes made by the author have cross-cutting impact that justifies reassessment.
Accreditation
When all six dimensions are evaluated as conformant (or conformant with minor caveats that do not compromise integrity), the MvF is considered approved and enters formal accreditation. Accreditation includes:
- Registration of the approved MvF version with timestamp and reference to the technical assessment
- Publication of the framework in the platform's accredited methodology repository
- Linkage to the MvA development process, which proceeds under the responsibility of the Engineering team and the designated developer
Accreditation of the MvF does not end the author's responsibility. As described in the Versioning Policy, the author may be called upon to collaborate on version revisions, respond to auditor inquiries, and participate in technical discussions about the framework's evolution.
Non-conformity typology
To guide authors and standardize evaluation language, this section classifies the most common types of non-conformity identified during the technical assessment of an MvF.
| Type | Description | Examples | Impact | Typical resolution |
|---|---|---|---|---|
| Content gap | A mandatory Minimum Structure section is absent or mostly empty. | Event catalog without input descriptions; Evidence Policy absent; Calculations section without specified variables. | Blocks assessment: cannot evaluate quality without content. | Complete the section and resubmit. |
| Rule ambiguity | A validation rule or eligibility criterion is described in a way that allows multiple interpretations. | "The participant must have adequate capacity"; "weighing must be precise"; "the interval must be reasonable". | Prevents deterministic implementation; creates risk of divergence between MvF and MvA. | Rewrite with testable condition, numeric criteria, and failure action. |
| Internal inconsistency | Two or more parts of the MvF contradict each other or make incorrect cross-references. | A rule references a non-existent event in the catalog; a calculation uses an undefined variable; the Matrix points to non-existent sections. | Compromises framework reliability as a whole. | Revise conflicting parts and ensure coherence between sections and artifacts. |
| Insufficient traceability | Cannot connect an MvF element to its origin in the validated methodology. | Formula without methodology reference; fixed parameter without source; exclusion criterion without methodological justification. | Weakens the framework's technical legitimacy. | Add references and sources; complete the Traceability Matrix. |
| Insufficient specification | Description exists but lacks sufficient detail for developer implementation. | Formula with variables without units; event with "weighing data" unspecified; rule without failure action. | Creates developer dependency on the author; increases error risk. | Detail fields, units, sources, conditions, and actions for each component. |
| Auditability gap | The framework does not generate sufficient evidence for independent verification at one or more points. | Event without defined evidence regime; no escalation triggers; outputs without versioning. | Compromises verification and audit capacity for the credit cycle. | Define complete Evidence Policy with regime, metadata, and triggers. |
| Geographic adaptability gap | The MvF has multi-territorial scope but does not provide separation between universal and territorial elements. | Eligibility rules referencing single-country legislation without indication of territorial variation; material classification hardcoded for a local taxonomy; parameters without default value. | Limits methodology scalability; creates rework during territorial expansion. | Classify elements as Universal/Territorial in artifacts; write functional requirements separate from operational ones; define Geographic Annex connection points. |
Each non-conformity identified in the technical report includes: the affected quality dimension, the non-conformity type, the location in the MvF (section and artifact), the problem description, and the resolution recommendation. This record allows the author to understand precisely what needs correction, without ambiguity.
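A single entry in that record can be sketched as follows (field names are hypothetical; the page's own list of required fields drives the structure):

```python
from dataclasses import dataclass

@dataclass
class NonConformity:
    """Hypothetical sketch of one non-conformity entry in the technical report."""
    dimension: str       # affected quality dimension
    nc_type: str         # one of the typology rows above
    location: str        # section and artifact in the MvF
    description: str     # what the problem is
    recommendation: str  # how to resolve it

nc = NonConformity(
    dimension="Verifiability",
    nc_type="Rule ambiguity",
    location="Section 6, Validation Rules Table, VR-12",
    description='"Weighing must be precise" admits multiple interpretations.',
    recommendation="Rewrite with a numeric tolerance and a failure action.",
)
print(nc.nc_type)
```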
MvA quality criteria
The quality evaluation of the Methodology Verification Application (MvA) involves software engineering criteria, platform infrastructure integration, and implementation fidelity relative to the accredited MvF.
There is a relevant interface between MvF quality and MvA quality: a well-specified framework — one that meets the verifiability, implementability, and geographic adaptability criteria described on this page — reduces the risk of implementation problems. When the MvF classifies elements as territorial and provides clear functional requirements, the developer knows to parameterize those points in the MvA instead of hardcoding values — resulting in more robust and scalable code.
MvA validation, when performed, verifies — among other aspects — fidelity to the accredited MvF, technical quality of the implementation (determinism, reproducibility, error handling), traceability and evidence generation (logs, audit trails, version records), compliance with platform technical standards, and correct parameterization of territorial elements.
MvA quality criteria are defined and documented by the Engineering team in a separate document, maintaining the separation of responsibilities between framework and code.
Evolution toward community evaluation
As described in the Methodology Lifecycle, dMRV curation is designed to evolve progressively — from exclusively internal validation toward a model with increasing community participation.
The quality criteria described on this page were designed to support this evolution: they are objective, documented, and applicable by any reviewer with adequate technical capacity. The inclusion of the geographic adaptability dimension reinforces this point: it enables reviewers from different territories to evaluate whether an MvF is prepared to operate in their local regulatory contexts.
Regardless of the evaluation model in effect, Carrot maintains a backstop role to ensure transparency, consistency, and process continuity — including preservation of assessment history, versioning of criteria, and response to integrity risks.
Formal criteria for community participation in dMRV evaluation, including roles, permission levels, and decision processes, are still being defined and will be documented as the community structure consolidates.