Short "intelligent lay person's" definitons of some of the performance factors that MolDx looks at.
Explanation of Validation Metrics for Molecular Laboratory-Developed Tests (LDTs) Required by MolDX
1. Sensitivity: This measures the test's ability to correctly identify those with the disease (true positives), calculated as TP / (TP + FN). It's determined by comparing test results against a known gold standard of diagnosed positive cases.
2. Specificity: This assesses the test's ability to correctly identify those without the disease (true negatives), calculated as TN / (TN + FP). It's likewise compared against a gold standard of diagnosed negative cases.
3. Accuracy: This refers to the test's overall ability to correctly identify both positive and negative cases, calculated as (TP + TN) / all results. It combines sensitivity and specificity into a single overall measure of test performance.
- Roughly, accuracy is 100% minus the false positives and minus the false negatives, each taken as a percentage of all results. For example, if FPs are 5% of results and FNs are 5% [for a given population], the accuracy works out to 90%. (BQ)
- While sensitivity and specificity are intrinsic to the test (measured against gold-standard true positives and true negatives), accuracy (and PPV/NPV) will vary with the population, base rate, etc. (BQ) See the Python sketch after this list.
4. Precision: This metric evaluates how consistently the test returns the same result when the same sample is measured repeatedly (see the CV sketch below the list). It's divided into:
- Repeatability: Consistency of results when the same sample is tested multiple times within a short period.
- Reproducibility: Consistency of results when the same sample is tested across different days, operators, and equipment.
5. Limit of Detection (LOD): This is the lowest concentration of the target analyte that the test can reliably detect, but not necessarily quantify. It indicates the analytical sensitivity of the test.
6. Limit of Quantitation (LOQ): This is the lowest concentration at which the test can not only detect but also quantify the analyte with acceptable precision and accuracy. It represents a higher threshold than LOD and is crucial for quantitative assays.
7. Interference: This involves testing the assay's performance in the presence of potential contaminants or substances that could affect the results, such as hemoglobin, lipids, or other common biological substances.
8. Cross-Reactivity: This measures whether the assay mistakenly identifies non-target substances as the target analyte, ensuring the test's specificity in complex biological samples.
9. Reportable Range: This defines the span of analyte concentrations over which the test can accurately measure and report results. It ensures that the test is reliable across a clinically relevant range of concentrations.
10. Reference Interval/Range: This establishes the normal range of analyte levels in a healthy population, providing a benchmark against which patient results can be compared.
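To make the arithmetic behind items 1-3 and the (BQ) notes concrete, here is a minimal Python sketch computing these metrics from a 2x2 confusion matrix. The function name and counts are hypothetical, for illustration only:

```python
# Minimal sketch: diagnostic performance metrics from confusion-matrix counts.
# All counts are hypothetical, for illustration only.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute common clinical-validation metrics from a 2x2 confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate (intrinsic to the test)
        "specificity": tn / (tn + fp),   # true-negative rate (intrinsic to the test)
        "accuracy": (tp + tn) / total,   # overall correct calls (prevalence-dependent)
        "ppv": tp / (tp + fp),           # positive predictive value (prevalence-dependent)
        "npv": tn / (tn + fn),           # negative predictive value (prevalence-dependent)
    }

# Example: 95 of 100 gold-standard positives called positive (5 FN),
# and 90 of 100 gold-standard negatives called negative (10 FP).
print(diagnostic_metrics(tp=95, fp=10, tn=90, fn=5))
# -> sensitivity 0.95, specificity 0.90, accuracy 0.925, ppv ~0.905, npv ~0.947
```

Rerunning this with the same sensitivity and specificity but a different mix of positives and negatives (e.g., tp=19, fn=1, tn=162, fp=18) shifts accuracy, PPV, and NPV while sensitivity and specificity stay fixed, which is exactly the point of the second (BQ) note.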
Understanding these metrics helps a genomic lab scientist ensure that their LDTs meet the reliability, accuracy, and clinical-utility standards essential for MolDX approval.
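For precision (item 4), repeatability and reproducibility are often summarized as a coefficient of variation (CV%). A minimal sketch, with hypothetical replicate values:

```python
import statistics

# Sketch: coefficient of variation (CV%) as a precision summary.
# Replicate values are hypothetical measurements of the same sample.

def cv_percent(replicates: list[float]) -> float:
    """CV% = (sample standard deviation / mean) * 100."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

within_run = [10.1, 9.9, 10.0, 10.2, 9.8]         # same run/operator (repeatability)
across_runs = [10.1, 10.4, 9.6, 10.3, 9.5, 10.6]  # different days/operators/instruments (reproducibility)

print(f"repeatability CV:   {cv_percent(within_run):.1f}%")
print(f"reproducibility CV: {cv_percent(across_runs):.1f}%")
# Reproducibility CV is typically the larger of the two, since more
# sources of variation are in play across runs.
```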
##
Limit of Blank (LoB)
Definition: The Limit of Blank (LoB) is the highest measurement result that is likely to be observed for a sample that does not contain the analyte of interest. It represents the upper threshold of background noise in a blank sample, which theoretically should not contain any analyte.
Context and Importance
- Purpose: The LoB helps distinguish between true low-level signals and the noise inherent in the assay. It sets a baseline to ensure that any measurement above this threshold can be attributed to the presence of the analyte rather than random fluctuations or background interference.
- Calculation: The LoB is typically calculated by repeatedly measuring blank samples and determining the value below which a set percentage (commonly 95%) of those measurements fall. This involves statistical analysis of the blank-sample measurements; see the code sketch after the example below.
- Application: In clinical and laboratory settings, knowing the LoB is crucial for determining the Limit of Detection (LoD). The LoD is derived from the LoB and represents the lowest concentration of the analyte that can be reliably detected above the LoB.
Example
- Suppose an assay measures a blank sample (containing no analyte) many times, and the 95th percentile of those replicate measurements is 0.02 units. That value is taken as the LoB.
- Any measurement above 0.02 units in a test sample can then be taken as potentially indicating the presence of the analyte, since values at or below this threshold are likely background noise.
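As a rough illustration of the calculation described above, here is a sketch of the common parametric approach (as outlined in guidance such as CLSI EP17): LoB = mean of blanks + 1.645 × SD of blanks, which covers roughly 95% of blank results; the LoD is then derived by adding 1.645 × SD of a low-concentration sample. All values are hypothetical:

```python
import statistics

# Sketch of a parametric LoB/LoD calculation.
# LoB = mean_blank + 1.645 * SD_blank   (~95% of blank results fall below it)
# LoD = LoB + 1.645 * SD_low_sample     (lowest level reliably detected above LoB)
# All measurements below are hypothetical.

blank_replicates = [0.005, 0.012, 0.008, 0.015, 0.010, 0.007, 0.013, 0.009]
low_sample_replicates = [0.030, 0.041, 0.035, 0.044, 0.038, 0.033]

lob = statistics.mean(blank_replicates) + 1.645 * statistics.stdev(blank_replicates)
lod = lob + 1.645 * statistics.stdev(low_sample_replicates)

print(f"LoB: {lob:.3f} units")  # highest result expected from a true blank
print(f"LoD: {lod:.3f} units")  # lowest concentration distinguishable from blank
```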
Understanding the LoB helps ensure the reliability and accuracy of assays, particularly when working with low concentrations of analytes. It is a foundational metric for establishing the sensitivity of a test.