Version 1.0 | Draft for Expert Validation

ICMR MIDAS 2.0 Framework

(Metric-based Integrity and Data Assessment System)

DELPHI SHEET (Technical and Lite Versions)

Confidential Draft – For Review Only
Please do not distribute without authorization from ICMR.

Prepared by:

Indian Council of Medical Research (ICMR)
Division of Development Research
New Delhi, India

Date: 30 October 2025

[Figure: Clinicians reviewing medical imaging scans on workstation screens.]

Representative clinical review environment: domain experts examining imaging outputs, discussing evidence, and applying structured judgement in a way that reflects the spirit of the MIDAS 2.0 Delphi process.

Document Purpose

This document presents the MIDAS 2.0 Delphi review format for evaluating a quantitative, evidence-based framework that assesses dataset quality, integrity, interoperability, and privacy assurance across biomedical and health data domains.

It is being circulated for expert review and content validation by national and international specialists in digital health, data governance, bioinformatics, AI ethics, and biomedical informatics.

Feedback from this review will guide the finalisation of MIDAS 2.0 for adoption across thematic hubs and integration into India’s national digital-health ecosystem.

Background and Rationale

Indian healthcare is shaped by geographic heterogeneity, socio-economic disparities, and an uneven distribution of infrastructure and specialist expertise. Services and workforce remain concentrated in urban centres, while rural and remote populations often face delayed access to specialist care. The three-tier model expects primary care to manage most needs, yet in practice primary facilities are underutilised and tertiary hospitals are overburdened.

AI-enabled digital health, when implemented responsibly, can help rebalance triage and referral pathways. However, health AI is only as reliable as the data used to build it. In India, many models still depend on institutional silos, small datasets, and inconsistent annotation practices that do not adequately represent population diversity or disease spectrum. These constraints can introduce bias and weaken performance during external validation and real-world deployment.

To address this gap, the Indian Council of Medical Research (ICMR), together with the Indian Institute of Science (IISc), launched the Medical Imaging Datasets of India (MIDAS) initiative (1) to create gold-standard, AI-ready datasets that reflect national diversity and real care contexts. MIDAS 1.0 demonstrated feasibility through common SOPs, standardised ontologies, and population-representative curation, but its quality assessment remained largely qualitative and expert-led.

MIDAS 2.0, the Metric-based Integrity and Data Assessment System, advances MIDAS into a quantitative, evidence-driven framework for dataset quality, interoperability, and privacy assurance. It combines a Composite Quality Index (CQI) across 15 measurable domains with a Privacy-Risk Score (PRS) that estimates residual risks of re-identification and sensitive-attribute disclosure.

CQI and PRS together support release decisions and a six-tier quality ladder spanning Remediation to Diamond. Compared with global frameworks such as FAIRShake (2), METRIC (3), and FUTURE-AI (4), MIDAS 2.0 brings together quantitative rigour and governance accountability, creating a scalable national benchmark for equitable and reproducible biomedical datasets.

Instructions for Reviewers

Purpose of Review

Building on the successful implementation and demonstrated utility of MIDAS 1.0, and recognising its limitations, we have developed MIDAS 2.0 as a comprehensive, quantitative framework to assess the AI-readiness of datasets. The framework is being circulated for expert validation of its conceptual clarity, scientific soundness, completeness, and applicability across data domains. Feedback from this review will help refine the framework so that it remains both technically rigorous and operationally feasible.

Review Objectives

Reviewers are requested to:

  • Assess the clarity, structure, and scientific validity of each section.
  • Evaluate whether the 15 framework domains adequately capture the key dimensions of dataset quality, trustworthiness and AI-readiness.
  • Identify domains, criteria or evidence requirements that may require refinement, simplification or further clarification.
  • Comment on the global applicability of the CQI–PRS model and its conceptual interoperability with related frameworks such as FAIRShake, METRIC, and FUTURE-AI.
  • Suggest additional domains, criteria or metrics that may strengthen future health-AI dataset validation.

Review Method

The review will be conducted using a modified Delphi approach.

  • Round 1: Independent conceptual review and scoring
  • Round 2: Re-assessment of revised items following synthesis of reviewer feedback
  • Round 3 (if required): Final re-rating of unresolved items

Reviewers are requested to provide feedback through track changes within the document or by submitting a separate note summarising their observations.

Each MIDAS domain should be rated on the following dimensions:
  • Clarity of definition (1–5)
  • Relevance to dataset quality (1–5)
  • Ease of reproducibility (1–5)
  • Evidence sufficiency (1–5)

Delphi Mechanics and Consensus Process

1. Rating Scale

Each statement or domain should be rated on a 5-point Likert scale:

Score and interpretation:
  • 1: Very unclear / Not relevant / Difficult to implement
  • 2: Requires major revision / Poorly defined
  • 3: Acceptable but requires minor clarification
  • 4: Clear, relevant, and adequately defined
  • 5: Exceptionally clear, highly relevant, and self-explanatory

Ratings of 1–3 must be accompanied by explanatory comments.

2. Consensus Criteria

Consensus will be evaluated using the following measures:

  • Item-level Content Validity Index (I-CVI): Proportion of experts rating an item ≥ 4
    • Acceptable threshold: I-CVI ≥ 0.78
    • Preferred threshold: I-CVI ≥ 0.90
  • Scale-level Content Validity Index (S-CVI/Ave): Mean of all I-CVIs across items
    • Target threshold: S-CVI/Ave ≥ 0.90
  • Modified Kappa (k*) to account for chance agreement
    • ≥ 0.74: Excellent
    • 0.60–0.73: Good
    • 0.40–0.59: Fair
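As an illustration, the three consensus measures above can be computed as follows. This is a minimal Python sketch: the function names and the nine-expert example panel are assumptions for illustration, not part of the framework. The modified kappa follows the common content-validity approach of adjusting I-CVI for the binomial probability of chance agreement.

```python
from math import comb

def i_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item >= 4."""
    return sum(1 for r in ratings if r >= 4) / len(ratings)

def modified_kappa(ratings):
    """Modified kappa (k*): I-CVI adjusted for chance agreement,
    where chance agreement is the binomial probability of the
    observed number of >= 4 ratings among n experts."""
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 4)
    p_chance = comb(n, a) * 0.5 ** n
    icvi = a / n
    return (icvi - p_chance) / (1 - p_chance)

def s_cvi_ave(items):
    """Scale-level CVI (S-CVI/Ave): mean of I-CVIs across all items."""
    return sum(i_cvi(r) for r in items) / len(items)

# Hypothetical panel of 9 experts rating one item on the 1-5 scale
item = [4, 5, 4, 4, 3, 5, 4, 4, 5]
print(round(i_cvi(item), 3))           # 0.889 -> meets the 0.78 threshold
print(round(modified_kappa(item), 3))  # exceeds 0.74, i.e. "Excellent"
```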

3. Stopping Rule

The Delphi process will conclude when either:

  • ≥ 80% of items achieve I-CVI ≥ 0.78, or
  • The change in ratings between rounds for unresolved items is < 10% after Round 2
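The stopping rule can be expressed programmatically. The sketch below is illustrative only: the function name, the input format, and the reading of "< 10% change" as relative change in I-CVI per unresolved item (with non-zero previous values) are assumptions, not specified by the framework.

```python
def stopping_rule_met(icvis, prev_icvis=None, round_number=1):
    """Return True if either Delphi stopping criterion is satisfied."""
    # Criterion 1: >= 80% of items have reached I-CVI >= 0.78
    if sum(1 for v in icvis if v >= 0.78) / len(icvis) >= 0.80:
        return True
    # Criterion 2: after Round 2, every unresolved item (I-CVI < 0.78)
    # changed by less than 10% relative to the previous round
    if round_number >= 2 and prev_icvis is not None:
        unresolved = [(cur, prev) for cur, prev in zip(icvis, prev_icvis)
                      if cur < 0.78]
        if unresolved and all(abs(cur - prev) / prev < 0.10
                              for cur, prev in unresolved):
            return True
    return False
```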

4. Confidentiality and Feedback

Reviewer identities will remain confidential. All comments will be anonymised before redistribution in subsequent rounds.

For re-circulated items, reviewers will receive:

  • The median and interquartile range of group ratings for each item, and
  • A synthesised summary of comments highlighting areas of agreement and divergence
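For example, the median and interquartile range fed back to reviewers could be derived as below (an illustrative Python sketch; the helper name and the rating vector are hypothetical):

```python
import statistics

def item_feedback(ratings):
    """Median and interquartile range of group ratings for one item."""
    q1, _, q3 = statistics.quantiles(ratings, n=4, method="inclusive")
    return {"median": statistics.median(ratings), "iqr": q3 - q1}

# Hypothetical ratings from 9 experts for one re-circulated item
print(item_feedback([2, 3, 3, 4, 4, 4, 5, 5, 5]))  # {'median': 4, 'iqr': 2.0}
```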

5. Management of Persistent Disagreement

If consensus is not achieved after Round 3, divergent expert views will be documented verbatim in the validation report, and the item will be finalised through structured expert discussion.

6. Data Handling and Archiving

All individual ratings, I-CVI/S-CVI/k* calculations, and revision logs will be archived to ensure transparency, auditability, and reproducibility of the validation process.

Submission of Feedback

Please return your consolidated comments or marked-up document to:

hsingh@bmi.icmr.org.in
hsingh@fas.harvard.edu
harpreets.hq@icmr.gov.in

All reviewer feedback will be anonymised, synthesised, and discussed during the expert consensus process for finalisation of the framework.

References

  1. Maity D, Satish R, Jadeja DA, Dharmaraju R, Chandru V, Sundaresan R, et al. MIDAS: a new platform for quality-graded health data for AI-enabled healthcare in India. Nat Med. 2024;1–2.
  2. Clarke DJB, Wang L, Jones A, Wojciechowicz ML, Torre D, Jagodnik KM, et al. FAIRshake: Toolkit to Evaluate the FAIRness of Research Digital Resources. Cell Syst. 2019 Nov;9(5):417–21. doi:10.1016/j.cels.2019.09.011
  3. Schwabe D, Becker K, Seyferth M, Klaß A, Schaeffter T. The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review. npj Digit Med. 2024 Aug 3;7(1):1–30. doi:10.1038/s41746-024-01196-4
  4. Lekadir K, Frangi AF, Porras AR, Glocker B, Cintas C, Langlotz CP, et al. FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ. 2025 Feb 5;388:e081554. doi:10.1136/bmj-2024-081554