As part of our Journal Club summaries, our JC Chairs (Drs. Lisa Calder and Ian Stiell @EMO_Daddy) have been tasked with explaining epidemiological concepts so that everyone in our department can analyze the literature and appraise articles on their own. For this blog post we have gathered all the “Epi Lessons” as they relate to “Diagnosis Articles”. More to follow in the coming weeks.


The Value of Consecutive Enrollment in Prospective Cohort Studies   

Dr. Lisa Calder   February 2015
Prospective cohort studies are, by their nature, subject to selection bias. One way to mitigate this bias is to endeavour to enrol consecutive patients as they present to the health care setting. If investigators choose to do this, it is important that they fully describe, in their flow chart, the number of eligible patients and the number of excluded patients, stratified by reason for exclusion. If a high proportion of eligible patients are enrolled and the number of exclusions and the reasons for them are reasonable, then the risk of selection bias can be considered mitigated by consecutive enrolment.

Composite Outcomes

By: Dr. Christian Vaillancourt            
It is not unusual for studies to select a composite outcome as their primary outcome measure. The need to do so is often justified by the rare occurrence of the primary outcome of true interest (e.g. death or survival) and by the otherwise very large sample size required to measure it. Caveats to using composite outcomes include the inability to attribute the associated risk or benefit of the intervention to the main (rare) outcome of interest itself. Similarly, it is possible to erroneously conclude that an intervention is beneficial when such benefit may only be true for “surrogate” components of the composite outcome and not the main outcome of interest.
 
 

Did Participating Patients Present a Diagnostic Dilemma? 

Dr. Ian Stiell    January 2015
Studies of diagnostic tests or decision rules should enroll patients with a spectrum of disease from mild to severe, including many at intermediate risk. If the patients clearly have or do not have the condition of interest, then we will learn very little about how well the new diagnostic test will perform on our undifferentiated ED patients. 

Gold Standard Adequacy
Dr. Lisa Calder   October 2012  

Whenever evaluating a study about a diagnostic test, always evaluate the adequacy of the gold standard. Ideally a gold standard should be accurate, reliable, validated and acknowledged by the medical community as the current best method of detecting a given disease. 

Incorporation Bias

Dr. Lisa Calder    June 2015
When critically appraising studies evaluating a diagnostic technique, it is important to examine closely how the outcome was assessed. If the outcome assessment includes the results of the diagnostic test being studied, this is incorporation bias, which can lead to an overestimation of the accuracy of the test. It is far preferable to keep the outcome assessors blinded to the results of the test being evaluated.

Likelihood Ratios                                                

Dr. Lisa Calder    April 2012 

When interpreting likelihood ratios, make sure you not only have a clear sense of how large or small a value must be to meaningfully alter your pre-test probability, but also look carefully at the 95% CI to ensure that the limit closest to 1 (the lower limit for an LR+, the upper limit for an LR-) still falls within this range of significance.
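
As a rough illustration (the numbers below are invented, not drawn from any study), here is a small sketch of how the point estimate versus the lower 95% CI limit of an LR+ can lead to quite different post-test probabilities:

```python
# Hypothetical example: the point estimate vs. the lower 95% CI limit
# of an LR+ can change the post-test probability you act on.

def post_test_probability(pre_test_prob, lr):
    """Convert a pre-test probability to a post-test probability via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

pre_test = 0.10                    # assumed 10% pre-test probability
lr_point, lr_ci_lower = 8.0, 2.5   # hypothetical LR+ point estimate and lower 95% CI limit

print(round(post_test_probability(pre_test, lr_point), 2))     # 0.47 -- looks convincing
print(round(post_test_probability(pre_test, lr_ci_lower), 2))  # 0.22 -- much less so
```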

 

Likelihood Ratios

Dr. Ian Stiell    September 2015
For diagnostic tests, LR+ increases the post-test probability of a diagnosis and LR- decreases the post-test probability. LR values generally have this impact:
a) large and conclusive (LR+ >10, LR- <0.1),
b) moderate (LR+ 5 to 10, LR- 0.1 to 0.2),
c) small but important (LR+ 2 to 5, LR- 0.2 to 0.5),
d) unimportant (LR+ 1 to 2, LR- 0.5 to 1).
In emergency medicine, we are generally most interested in ruling out serious conditions and look for tests with very high sensitivity and very low LR-.
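
To make the arithmetic concrete, here is a brief hypothetical sketch (the sensitivity, specificity, and pre-test probability are assumed values, not from any particular study) showing how a very low LR- translates into a reassuring post-test probability after a negative test:

```python
# Hypothetical example: a very low LR- is what lets a negative test
# rule out a serious condition (SnOut).

def negative_lr(sensitivity, specificity):
    """LR- = (1 - sensitivity) / specificity."""
    return (1 - sensitivity) / specificity

def post_test_prob_after_negative(pre_test_prob, lr_minus):
    """Apply the LR- to the pre-test odds and convert back to a probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr_minus
    return post_odds / (1 + post_odds)

# Assumed characteristics of a highly sensitive screening test:
lr_minus = negative_lr(sensitivity=0.98, specificity=0.60)
print(round(lr_minus, 2))                                       # 0.03 -- "large and conclusive"
print(round(post_test_prob_after_negative(0.15, lr_minus), 3))  # 0.006 -- 15% pre-test falls to ~0.6%
```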


Receiver-Operating Characteristic (ROC) Curves

Dr. Ian Stiell    January 2012
ROC curves are used to evaluate diagnostic data by plotting sensitivity versus 1-specificity, with a higher area under the curve (AUC) indicating better ability of the test to discriminate between patients with and without the condition. In this paper, the authors claimed that AUCs of 0.95 (which are very high) meant that the assays were very accurate in identifying MI. Unfortunately, ROC curves are not very useful to clinicians, particularly in the ED, where we are usually focused on sensitivity, or the ability of a test to rule out a condition (SnOut).
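
For readers who want to see what an ROC curve actually summarizes, here is a minimal sketch using entirely made-up assay values and outcomes; it produces no plot, but computes the sensitivity and 1-specificity pairs and the trapezoidal AUC that a plotted curve would show:

```python
import numpy as np

# Made-up data: 1 = MI, 0 = no MI, with a hypothetical continuous assay value
disease = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
assay   = np.array([95, 60, 40, 22, 30, 12, 8, 5, 3, 2])

# At each threshold, a positive test is "assay >= threshold"
thresholds = np.sort(np.unique(assay))[::-1]
sens = [(assay >= t)[disease == 1].mean() for t in thresholds]   # sensitivity
fpr  = [(assay >= t)[disease == 0].mean() for t in thresholds]   # 1 - specificity

# Trapezoidal area under the (1 - specificity, sensitivity) points,
# anchored at (0, 0) and (1, 1)
xs = [0.0] + fpr + [1.0]
ys = [0.0] + sens + [1.0]
auc = sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2 for i in range(len(xs) - 1))
print(round(auc, 2))   # ~0.96 with these invented numbers
```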



Selection Bias in Prospective Cohort Studies

Dr. Lisa Calder   October 2012
Prospective cohort studies are vulnerable to selection bias. Carefully examine the inclusion and exclusion criteria to verify that the study population is appropriate and reproducible. In this case, including patients with “acute chest pain symptoms suggestive of AMI” without further specification likely has poor inter-rater reliability and suggests poor generalizability.  
 
 

Spectrum Bias

Dr. Ian Stiell    September 2012 
Spectrum bias may occur in a study of a diagnostic test when the study subjects do not represent a wide mix of patients with no disease, mild disease, moderate disease, and serious disease (such as we typically see in the ED). If the study only contains patients who clearly have the disease or clearly do not have the disease, then we are unable to judge how well the new test will perform in the real life setting of many patients with a medium probability of disease.   
 

Screening Tests in the ED 

Dr. Ian Stiell     March 2012
Diagnostic tests in the ED are often used to screen many patients for the possibility of severe illness, e.g. ACS in chest pain, SAH in headache, dementia in the elderly. We typically wish to rule out a condition, and such testing must be highly sensitive (SnOut) but will have false positives, e.g. Troponin, CT Head, 3DY. In contrast, specialty services may be more interested in ruling in a condition definitively using tests that are highly specific (SpIn), e.g. coronary angiography, CT angiography of the brain, a battery of cognitive tests.
 
 

Test Accuracy                                                   

Dr. Lisa Calder    February 2012  
Accuracy is not the same as reliability. An accurate test provides results very close to the true value; a reliable test produces a consistent result across repeated measurements, often irrespective of the user.
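
A small simulated example (the measurements below are invented purely for illustration) shows how a test can be reliable without being accurate, and accurate on average without being reliable:

```python
import statistics

true_value = 100.0   # the "truth" in this simulation

# Reliable but not accurate: tightly clustered, systematically biased results
reliable_not_accurate = [110.1, 109.8, 110.2, 109.9, 110.0]

# Accurate on average but not reliable: centred on the truth, widely scattered
accurate_not_reliable = [92.0, 108.0, 99.0, 104.0, 97.0]

for label, results in [("reliable, not accurate", reliable_not_accurate),
                       ("accurate, not reliable", accurate_not_reliable)]:
    bias = statistics.mean(results) - true_value   # accuracy: closeness to the truth
    spread = statistics.stdev(results)             # reliability: consistency of repeats
    print(f"{label}: bias = {bias:+.1f}, spread = {spread:.1f}")
```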

The Importance of Specificity                    

Dr. Ian Stiell    April 2013
While ED physicians are appropriately concerned that tests used to rule out serious illness have very high sensitivity, we must also be concerned about specificity. If a screening test has a low specificity, it will not reduce the use of more invasive or costly tests. Be sure to note not only the estimated specificity but also its 95% CI.
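
As a hypothetical illustration (the counts are invented, and the interval uses the simple Wald normal approximation rather than the exact or Wilson methods a published study might report), a small sample can leave the specificity estimate with a wide 95% CI:

```python
import math

# Hypothetical counts among patients without the disease
true_negatives, false_positives = 35, 15
n_disease_free = true_negatives + false_positives

specificity = true_negatives / n_disease_free
se = math.sqrt(specificity * (1 - specificity) / n_disease_free)   # Wald standard error
lower, upper = specificity - 1.96 * se, specificity + 1.96 * se

print(f"specificity = {specificity:.2f} (95% CI {lower:.2f} to {upper:.2f})")
# specificity = 0.70 (95% CI 0.57 to 0.83)
```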
 

Why sample size calculations are important          

Dr. Lisa Calder    December 2014
A sample size is calculated in order to: 1) ensure that a study has adequate power and 2) reduce the chance of sampling error. When authors provide a sample size calculation, the critical reader can determine whether the power is adequate and whether the underlying assumptions are valid. Without this, there is a high risk of an underpowered study with an imprecise estimate of the overall effect.
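
One common flavour of this calculation in diagnostic studies is working out how many disease-positive patients are needed to estimate sensitivity with acceptable precision. The sketch below uses assumed values (expected sensitivity, margin of error, prevalence) chosen purely for illustration:

```python
import math

def n_for_proportion(expected_prop, margin_of_error, z=1.96):
    """Normal-approximation sample size for one proportion: n = z^2 * p(1-p) / d^2."""
    return math.ceil(z ** 2 * expected_prop * (1 - expected_prop) / margin_of_error ** 2)

# Assumptions (illustrative only): expected sensitivity 95%, desired 95% CI within +/- 3%
diseased_needed = n_for_proportion(expected_prop=0.95, margin_of_error=0.03)
print(diseased_needed)   # 203 disease-positive patients

# If roughly 10% of enrolled patients have the disease, total enrolment needed:
print(math.ceil(diseased_needed / 0.10))   # ~2030 patients
```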
 

Why 2X2 Tables are Important 

Dr. Lisa Calder      June 2014
It is important for the critical reader that studies evaluating the properties of a diagnostic test present the 2X2 contingency table (disease positive/negative across the top and test positive/negative down the left). This allows the reader to verify the sensitivity and specificity and also to calculate positive and negative predictive values and likelihood ratios. Without the raw data of a 2X2 table, these key measures are not always obtainable.
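
For illustration, here is what can be recovered from a reported 2X2 table, using hypothetical counts:

```python
# Hypothetical 2X2 table: disease +/- across the top, test +/- down the left
tp, fp = 90, 30    # test positive row
fn, tn = 10, 170   # test negative row

sensitivity = tp / (tp + fn)                    # 0.90
specificity = tn / (tn + fp)                    # 0.85
ppv = tp / (tp + fp)                            # 0.75
npv = tn / (tn + fn)                            # ~0.94
lr_positive = sensitivity / (1 - specificity)   # ~6.0
lr_negative = (1 - sensitivity) / specificity   # ~0.12

print(sensitivity, specificity, ppv, npv, lr_positive, lr_negative)
```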
 
 
