Seeing lots of articles like this one saying 21% of NYC tested positive, or the one a few days ago showing 33% of Boston did.
https://www.6sqft.com/new-york-covid-antibody-test-preliminary-results/

These headlines are missing some Bayesian mathematics. I just did the numbers. With 21.4% of the sample testing positive, a test with 99% specificity and sensitivity implies a true prevalence of about 20.8%. But if the specificity and sensitivity are, say, 85%, the true prevalence is only about 9.1% by my calculation, yet it's reported as 21%.
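Those two numbers can be reproduced with the standard prevalence correction (the Rogan-Gladen estimator), which is presumably the calculation being done here. A minimal sketch, assuming both scenarios use equal sensitivity and specificity as described above:

```python
def true_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen correction: back out true prevalence from the raw
    positive rate, given the test's sensitivity and specificity."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

# 21.4% raw positive rate, 99% sensitivity/specificity -> ~20.8% true prevalence
print(true_prevalence(0.214, 0.99, 0.99))

# Same raw rate, but 85% sensitivity/specificity -> ~9.1% true prevalence
print(true_prevalence(0.214, 0.85, 0.85))
```

Note the estimate is very sensitive near the denominator's zero: as sensitivity + specificity approaches 1 (a coin-flip test), the correction blows up.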
For example, NYC tested about 3,000 people and 20% came back positive; that's 600 positives. If the specificity is only 85%, then roughly 450 of those are false positives (a slight overestimate, since this back-of-the-envelope assumes everyone is truly negative). So the true positive rate is only 150/3000 = 5% in this simplified version. Estimates of specificity seem to vary across the tests being developed, but most seem to be below 90%, at which point the results are close to useless. Sensitivity matters less when most of the population is negative.
Point being: there are lots of news headlines about how 20-30% of the population tested positive, but that figure depends heavily on the sensitivity and specificity of the test, which probably isn't where it needs to be.