Figure 5 simulates a screening test in a low-prevalence setting, where negative ground truth is far more common than positive ground truth. An example of this scenario is cervical cancer screening by Pap smear cytology, where a substantial rate of positive results (cell abnormalities of uncertain significance) is expected and a positive test result does not by itself give high confidence that high-grade disease is present [28,29]. The 95% CI is the range of values within which we can be 95% confident that the true test property lies. The main factor determining the width of the 95% CI for sensitivity/PPA is the number of samples from COVID-19 patients used to validate the test, while the 95% CIs for specificity/NPA are influenced more strongly by the number of negative control samples; more samples yield a narrower confidence interval and greater certainty. This table corresponds to Figure 2. A total of 100 ground-truth-negative patients and 100 ground-truth-positive patients were considered. The 95% confidence intervals on the median were calculated by resampling and are shown in parentheses. These statistics do not support the conclusion that one test is better than the other. Recently, a British national newspaper published an article on a PCR test developed by Public Health England (PHE), reporting that its results disagreed with a new commercial test on 35 of 1,144 samples (3%). For many journalists, this was proof that the PHE test was inaccurate. Yet there is no way to know which test was right and which was wrong in any of these 35 discrepancies; we simply do not know the true state of the subject in such studies.
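The dependence of confidence-interval width on the number of validation samples described above can be illustrated with a minimal resampling sketch, analogous to the bootstrap used for the table. The sample counts (20 vs. 200 ground-truth-positive patients, 90% observed sensitivity) are illustrative assumptions, not figures from the study.

```python
import random

def bootstrap_ci(n_positive_truth, n_detected, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap 95% CI for sensitivity (PPA).

    n_positive_truth: number of ground-truth-positive samples tested
    n_detected: how many of those the assay called positive
    """
    rng = random.Random(seed)
    # 1 = assay detected the positive sample, 0 = assay missed it
    outcomes = [1] * n_detected + [0] * (n_positive_truth - n_detected)
    estimates = sorted(
        sum(rng.choices(outcomes, k=n_positive_truth)) / n_positive_truth
        for _ in range(n_boot)
    )
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Same observed sensitivity (90%), different validation cohort sizes:
print(bootstrap_ci(20, 18))    # wide interval with only 20 positives
print(bootstrap_ci(200, 180))  # markedly narrower interval with 200 positives
```

Running this shows the interval around the same point estimate shrinking as the number of positive samples grows, which is the effect the text describes for sensitivity/PPA (and, symmetrically, for specificity/NPA with negative controls).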
Only further investigation of these discrepancies would identify their causes. The shaded area represents the sensitivity and specificity limits required by the FDA for serological tests to obtain an EUA through its "umbrella" pathway. Tests are also validated by government agencies other than the FDA, for example through the serology test validation program led by the NIH's National Cancer Institute (NCI). Four of the tests authorized by the FDA itself do not meet these limits. Additional simulations showed that it is very unlikely for any test, even a perfect one, to achieve very high measured performance in a diagnostic evaluation study, even when there is only modest uncertainty in the comparator against which the test is evaluated. As shown in S7 Supporting information ("Very high performance tests"), when a PPA (sensitivity) or NPA (specificity) of 99% is required in a diagnostic evaluation study, a comparator misclassification rate of just 5% means that even a perfect diagnostic test fails to meet that threshold with probability greater than 99.999%.
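The effect of comparator error on measured performance can be sketched with a short Monte Carlo simulation. This is an illustrative sketch, not the paper's S7 simulation: the study sizes (100 and 1,000 samples) and the independent 5% comparator error model are assumptions made here for demonstration.

```python
import random

def prob_fail_threshold(n_samples, comparator_error=0.05,
                        threshold=0.99, n_sim=20_000, seed=1):
    """Monte Carlo estimate of the probability that a PERFECT test fails to
    demonstrate `threshold` agreement (PPA or NPA), purely because the
    comparator method mislabels each sample with probability
    `comparator_error`. Illustrative assumptions, not the study's setup."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sim):
        # A perfect test always matches the truth, so it disagrees with the
        # comparator exactly where the comparator errs.
        disagreements = sum(rng.random() < comparator_error
                            for _ in range(n_samples))
        observed_agreement = (n_samples - disagreements) / n_samples
        if observed_agreement < threshold:
            failures += 1
    return failures / n_sim

print(prob_fail_threshold(100))   # a perfect test usually fails at n=100
print(prob_fail_threshold(1000))  # and essentially always fails at n=1000
```

With a 5% comparator error, the perfect test's observed agreement clusters around 95%, so the probability of clearing a 99% bar collapses toward zero as the study grows, matching the qualitative claim above.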