Category Archives: Risk Stratification

Predictors of Central Dizziness

I’m rotating through a community emergency department this month, where it seems like 40% of the patients I’m seeing have dizziness as some element of their constellation of chief complaints. This is one of the most difficult chief complaints to evaluate in emergency medicine, in no small part because people use the term “dizziness” to describe a multitude of subjective experiences, e.g. vertigo, syncope/presyncope, generalized weakness, anxiety, ataxia, or any sort of disturbance in mentation. Add in the barriers to effective communication that can accompany older patients visiting an ED, such as language barriers and the hearing and vision issues that accompany aging (imagine a translator on a video phone screaming at a patient who is extremely hard of hearing), and this becomes a tricky subject indeed.

To that end, I reviewed a paper published by a Korean group evaluating dizzy patients in their emergency department: Characteristics of central lesions in patients with dizziness determined by diffusion MRI in the emergency department, by Lee et al.

This was a retrospective review of 902 patients presenting to a single ED with a chief complaint of dizziness over six months. They looked closely at the 645 patients (!) who received MRI as part of their workup, of whom 23 (3.6%) had strokes, the majority in the posterior circulation. The authors then examined the characteristics that best predicted the presence of a central lesion.

Their findings? Predictably, advancing age brought with it a higher likelihood of central etiologies: the rate of central lesions on DWI was 3.9% and 3.5% in patients in their 50s and 60s respectively, 7.4% in their 70s, and 16.7% in their 80s! Hypertension was more common in patients with strokes (69% versus 36%), as was atrial fibrillation. 77% of patients with a central cause reported a more vague, non-whirling dizziness, compared to 40% of patients without central lesions. Other associated neurologic symptoms were present in about 46% of patients with a central cause, compared to only 3% of those who were MR-negative.

So while this study had all the drawbacks of most retrospective, single-center publications, and may not generalize exactly to the populations I work with, I felt it was useful in terms of giving me at least *some* numbers to use to estimate what proportion of these patients are hiding badness. I will have a much lower threshold to MRI patients who are in their 70s-80s, those with AF who aren’t anticoagulated (though the sensation of palpitations or the diminished cardiac output can contribute to the sensation of dizziness as well), or those who report a “vague non-whirling” sense of dizziness. That last point stands in contrast to other studies I’ve read suggesting that the character of dizziness was *not* useful, so that was interesting. When this study was reviewed on EMRAP, Sanjay and Mike also mentioned that older patients often have difficulty cooperating with the exam and accurately reporting and describing their symptoms, and that our threshold for obtaining further diagnostic imaging in these patients should therefore be lower.

More on dizziness to come soon, I’m sure.

References

Lee DH, Kim WY, Shim BS, Kim TS, Ahn JH, Chung JW, Yoon TH, Park HJ. Characteristics of central lesions in patients with dizziness determined by diffusion MRI in the emergency department. Emerg Med J. 2014 Aug;31(8):641-4. PMID: 23722117.

Transient Hypotension in the Emergency Department

An interesting technicality in the use of the PERC rule to rule out pulmonary embolism is the tachycardia component — it asks not whether the patient is tachycardic at the time the rule is applied, or whether tachycardia was sustained throughout the emergency department stay, but whether (as described by Jeff Kline in his great review article on PE diagnosis and risk stratification) the patient had a “Pulse <100 beats/min during entire stay in ED.” Meaning, even transient tachycardia may suggest a life-threatening diagnosis, even if it resolves while the patient is in the emergency department, and we’re probably PERCing out a whole bunch of patients inappropriately, at least according to Kline (who, notably, testifies a whole bunch as an expert witness in cases of missed pulmonary emboli).
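To make the “entire stay” reading concrete, here is a minimal sketch of the PERC rule as a checklist, using the maximum pulse recorded during the ED stay for the tachycardia criterion. The criteria are the published ones, but the function and variable names are my own, not from any validated implementation, and the rule only applies when gestalt pretest probability is already low.

```python
def perc_negative(age, max_ed_pulse, sao2, hemoptysis, estrogen_use,
                  prior_vte, unilateral_leg_swelling, recent_surgery_or_trauma):
    """Return True if the patient 'PERCs out' (all eight criteria satisfied).

    Only meaningful when the clinician's pretest probability of PE is low.
    """
    return (age < 50
            and max_ed_pulse < 100        # <100 during the *entire* ED stay, not just now
            and sao2 >= 95                # room-air pulse oximetry, %
            and not hemoptysis
            and not estrogen_use
            and not prior_vte             # prior DVT or PE
            and not unilateral_leg_swelling
            and not recent_surgery_or_trauma)  # within the prior four weeks

# A patient whose triage pulse was 104 fails PERC even if it later normalizes:
print(perc_negative(42, 104, 98, False, False, False, False, False))  # False
print(perc_negative(42, 96, 98, False, False, False, False, False))   # True
```

Note that using the visit-wide maximum pulse, rather than the most recent value, is exactly the distinction Kline draws above.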

I recently had a handful of patients in whom concerning blood pressures were measured and documented, which then resolved when vital signs were re-checked or after a small quantity of fluid or repositioning. I was wondering whether anyone had looked at the prognostic significance of ED hypotension, and whether these momentary dips in blood pressure should be something that concerns me. I did a quick search and found two studies that addressed this question in two different populations:

First we have, from the Rick Bukata school of title writing: “Emergency department hypotension predicts sudden unexpected in-hospital mortality: A prospective cohort study.”  This study, by Alan Jones and Jeff Kline out of (and formerly out of) Carolinas, prospectively enrolled 4,790 adult ED patients admitted to the hospital for reasons other than trauma. Patients were divided into those with and without systolic BPs below 100 mmHg at any time during their ED visit and followed through their hospitalization for the primary outcome of in-hospital mortality. Secondary outcomes included “sudden and unexpected death”, the relationship between the degree and the duration of hypotension measured and mortality, and the test characteristics of hypotension as a test for predicting in-hospital mortality.

Their conclusions are illustrated well in this graph:

[Figure: incidence of in-hospital death in patients with vs. without ED hypotension]

As they concisely summarize in the article’s conclusion:

Patients exposed to hypotension had a threefold increased risk of in-hospital death and a 10-fold increased risk of sudden, unexpected in-hospital death. Patients with any one SBP < 80 mm Hg had a sixfold-increased incidence of in-hospital death, and patients with a SBP < 100 mm Hg for > 60 min had almost a threefold-increased incidence of in-hospital death.

The second article from the same group echoes this conclusion in a different population. This article, “The significance of non-sustained hypotension in emergency department patients with sepsis,” is a secondary analysis of the above data set, looking specifically at the prognostic value of non-sustained hypotension, defined as one or more occurrences of SBP < 100 mmHg, in patients with sepsis as defined by receipt of antibiotics in the ED plus at least two SIRS criteria.

774 patients met the inclusion criteria for sepsis; 74 were then excluded for “overt shock” (sustained hypotension or use of pressors), and the remaining patients were examined for the primary outcome of in-hospital death. They found, as one might expect, that hypotension predicts worse outcomes in this subpopulation of patients — including when the hypotension was non-sustained. Again, there seemed to be a “dose-dependent” relationship, with an inverse relationship between the nadir ED SBP and the frequency of in-hospital death, as shown here:

[Figure: frequency of in-hospital death by nadir ED systolic blood pressure]

Another important finding (though taken in the context of a fairly small sample) was the statistically similar incidence of the primary outcome in the groups with transient and with sustained hypotension. Both groups of patients had a 2.5-3x higher risk of in-hospital mortality when compared to patients without any hypotension.

Without belaboring the point, these two studies underscore the prognostic significance of even transient hypotension, both in the undifferentiated emergency department patient and (where its implications for severity are better known) in patients diagnosed with sepsis. Like the previous post regarding lactate, or the well-known pearl about tachycardia at discharge, this is a number that should get your attention, and one that demands evaluation and possible intervention or escalation of care.

References

Marchick MR, Kline JA, Jones AE. The significance of non-sustained hypotension in emergency department patients with sepsis. Intensive Care Med. 2009 Jul;35(7):1261-4. PMID: 19238354.
Holler JG, Bech CN, Henriksen DP, Mikkelsen S, Pedersen C, Lassen AT. Nontraumatic hypotension and shock in the emergency department and the prehospital setting, prevalence, etiology, and mortality: a systematic review. PLoS One. 2015 Mar 19;10(3):e0119331. PMID: 25789927.

Hyperlactatemia in the Emergency Department

Much has been made of the measurement of serum lactate over the last several years, primarily focusing on whether we should be measuring it in the first place, what the significance (and etiology) of elevations in serum lactate is, and what role it should play in diagnosis and risk stratification. Back in 2010, Scott Weingart was organizing the New York Sepsis Collaborative, and produced this podcast covering the basics of lactate measurement, with a particular bent towards sepsis. He did a great job covering the essential take-home of the data that existed thus far, and addressed a lot of points of confusion many people have about lactate — namely, the idea that it results from hypoxia/hypoxemia or anaerobic respiration — and covered some of the alternative etiologies of hyperlactatemia, i.e. any beta agonist, whether endogenous catecholamines or exogenous agents such as albuterol or epinephrine used as a vasopressor. The takeaway, echoed in sepsis care guidelines issued by many other organizations since, and in the policies and protocols of many hospitals and emergency departments, is that elevated lactate is a marker of increased mortality, and may be an early alarm that someone is in septic shock or headed towards it.

I wanted to cover two studies — one by Shapiro et al. (a big name in sepsis research), and the other by del Portal et al. — that looked at this question in the ED. These were prospective and retrospective cohort studies respectively, and both looked at over 1,000 emergency department patients and evaluated the prognostic significance of elevated venous lactate measurements. In the first study, Shapiro et al. evaluated all patients admitted to the hospital with an infection-related diagnosis. In the second, del Portal et al. looked at older adults admitted to the hospital with any diagnosis, though a very large proportion of patients were excluded (more than 14,000 of 16,886 total admissions, which I think really affects the robustness of the paper). Reasons for exclusion included being a sick trauma patient, transfer out, leaving without being seen, or leaving AMA — all reasonable — but they also excluded all patients in whom a lactate was not drawn in the ED. Without the numbers to break this down, it’s tough to say how generalizable the conclusions are, or whether lactates were only obtained in patients the providers thought were sick or potentially septic in the first place (which was the protocol at the hospital conducting the study by Shapiro et al.).

As one might expect, both studies found that hyperlactatemia correlates with badness in the form of increased mortality. The relationship is linear, and statistically significant. The authors also stratified the mortality by time — in Shapiro et al. by 28d in-hospital v. death within 3 days (top graph), and in del Portal’s study by in-hospital, 30 day and 60 day mortality (bottom):

[Figures: mortality by lactate level, from Shapiro et al. (top) and del Portal et al. (bottom)]

Note the similar trend and the steep upward trajectory of the relationship — these results have been paralleled in the critical care literature, and have led to the commonly-accepted idea that a lactate > 4.0 mmol/L is a threshold above which one should be concerned for hypoperfusion or shock, even in the absence of hypotension. These studies did not establish a causal relationship between lactate elevation and increased mortality (no studies have), nor have any shown that trying to “clear” lactate leads to better outcomes than trending alternative markers of perfusion (though several studies have looked at this question, without any definite conclusions). They also did not establish that one need only be worried about a lactate > 4.0 — multiple studies, including this one, have shown that infected patients with a lactate in the 2.0–3.9 mmol/L range have a risk of mortality approximately twice that of patients with a lactate level < 2.0 mmol/L. Nor have they established that we need not be worried about patients without hyperlactatemia — so-called “occult” sepsis.

More recent studies have questioned the relationship between hyperlactatemia and hypoperfusion per se by looking at changes in microcirculation, but I think it’s safe to say that an elevated lactate in a patient with suspected infection should still ring alarm bells in your head. Having these mortality “buckets” in mind when mentally risk stratifying patients or prioritizing them for workup or interventions can also help — particularly when these patients might otherwise look well and thereby fly under the radar.
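The mortality “buckets” above can be sketched as a rough categorizer. The thresholds come from the studies discussed (< 2.0, 2.0–3.9, and ≥ 4.0 mmol/L), but the labels and the function itself are my own shorthand, not a validated classification.

```python
def lactate_bucket(lactate_mmol_l):
    """Map a venous lactate (mmol/L) to a qualitative risk band."""
    if lactate_mmol_l < 2.0:
        return "low"            # but does not exclude 'occult' sepsis
    elif lactate_mmol_l < 4.0:
        return "intermediate"   # roughly twice the mortality of lactate < 2.0
    else:
        return "high"           # concern for hypoperfusion/shock even if normotensive

print(lactate_bucket(1.4))  # low
print(lactate_bucket(3.1))  # intermediate
print(lactate_bucket(5.2))  # high
```

The point of keeping these bands in mind is prioritization: the “intermediate” patient who looks well is exactly the one who flies under the radar.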

In my mind, an elevated serum lactate must be explained. Sometimes the explanation is that the patient just got a nebulizer treatment, or seized, or is in alcoholic ketoacidosis (which, along with the production of ketones, leads to an accumulation of reduced nicotinamide adenine dinucleotide (NADH), which in turn impairs the conversion of lactate to pyruvate or favors the conversion of pyruvate to lactate, both of which raise the lactate level). But these are diagnoses of exclusion, and one must assume until proven otherwise that an elevated lactate represents the body’s sympathetic accelerator pedal being pushed to the floor, and that the patient needs resuscitation and care delivered with the mentality that this is a sick patient.


References

Shapiro NI, Howell MD, Talmor D, Nathanson LA, Lisbon A, Wolfe RE, Weiss JW. Serum lactate as a predictor of mortality in emergency department patients with infection. Ann Emerg Med. 2005 May;45(5):524-8. PMID: 15855951.
del Portal DA, Shofer F, Mikkelsen ME, Dorsey PJ Jr, Gaieski DF, Goyal M, Synnestvedt M, Weiner MG, Pines JM. Emergency department lactate is associated with mortality in older adults admitted with and without infections. Acad Emerg Med. 2010 Mar;17(3):260-8. PMID: 20370758.

D-Dimers & Dissection

A recent patient I saw in the emergency department was a woman in her fifties with a family history of aortic dissections, presenting with “chest pain” per the triage note. On my history and exam, she instead endorsed vague neck and epigastric discomfort (which had now resolved), and had no other classic findings for a dissection (e.g. hemodynamic instability, asymmetric pulses or blood pressures, abnormal neurologic findings, etc.). She also had a normal chest x-ray and a negative initial workup for ACS, including a normal ECG and an undetectable troponin. In terms of other life-threatening diagnoses, she did not PERC out, and had a Wells score suggesting that the D-dimer would be an appropriate test to rule out pulmonary embolism.

When I discussed with her the potential utility of a CT scan of her chest to evaluate for an aortic dissection, she asked how much radiation exposure this involved, and shared her (valid and very appropriate) concerns about receiving too much radiation. She felt she had had many CT scans for various reasons over the years, and did not want any additional unnecessary radiation.

I talked with her more about this and tried to start some shared decision making by sharing a favorite infographic of mine about radiation doses in diagnostic imaging, and (to myself) pondered a clinical question: if the D-dimer was negative, did that, along with the low-ish pretest probability, decrease the likelihood of dissection enough to safely forego a CT scan? There is an emerging literature on the use of dimer testing to rule out aortic dissections, but how good is it? Do you use the same cutoff as in pulmonary embolism? Should that cutoff be age-adjusted? And what are the test characteristics in this context? I had no idea, so that’s what today’s post-didactics reading was about.

I read through “A Systematic Review and Meta-analysis of D-dimer as a Rule-out Test for Suspected Acute Aortic Dissection” by Asha et al., which reviews 30 studies and pools the data from the 4 studies using a standard cutoff of 0.50 μg/mL to estimate the sensitivity, specificity, and positive and negative likelihood ratios of the D-dimer. As the abstract conclusion reads:

“Overall, sensitivity and negative likelihood ratio were 98.0% (95% confidence interval [CI] 96.3% to 99.1%) and 0.05 (95% CI 0.03 to 0.09), respectively. These measurements had little statistical heterogeneity. Specificity (41.9%; 95% CI 39.0% to 44.9%) and positive likelihood ratio (2.11; 95% CI 1.46 to 3.05) showed significant statistical heterogeneity. When applied to a low-risk population as defined by the American Heart Association (prevalence 6%), the posttest probability for acute aortic dissection was 0.3%.”

So there you have it. Obviously, there’s more to it, and the actual paper is worth reading — it discusses some of the drawbacks of the included studies, specifically unanswered questions about bias and the generalizability to ED populations given the high prevalence of disease in the included cohorts. Limitations aside, the basic conclusion was that in low-risk patients, a negative D-dimer confers an even lower risk of acute aortic dissection, and it may be reasonable (don’t you love that phrase?) to consider using this result to inform your decision-making about the utility of imaging. Of course, one must also consider the rate of false positives, and the potential harms of the resultant downstream testing, as has been discussed regarding testing for PE.
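The abstract’s headline number is just Bayes’ theorem in odds form, and it is worth redoing the arithmetic yourself: a 6% pretest probability (the AHA low-risk population) updated with the pooled negative likelihood ratio of 0.05 gives the quoted 0.3% posttest probability.

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Convert probability -> odds, apply the likelihood ratio, convert back."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# The abstract's numbers: prevalence 6%, negative LR 0.05.
p = posttest_probability(0.06, 0.05)
print(f"{p:.1%}")  # 0.3%
```

The same function makes the “pretest probability matters” point quantitative: feed it a 30% pretest probability instead and the negative dimer still leaves a residual risk north of 2%.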

I think that one of the more important (and potentially easily-overlooked) points I took away from this review — as with all clinical decision tools, or anything that serves as a Bayesian modifier — is that while this is a potentially useful test in this context, pretest probability matters. As the abstracts of some of the included studies say: “When applied to a low-risk population…”, “…in patients with low likelihood of the disease”, “…the presence of ADD risk score 0 or ≤ 1 combined with a…”, and so on. You should only hang your hat on a negative dimer assay when you think the probability is low in the first place. Another question to consider, though, is how low the pretest probability must be before you *shouldn’t* order a dimer to rule out dissection at all. And how many people with dissections that might have been caught and managed earlier PERC’d out of receiving a test that might have revealed the diagnosis (though it might also have subjected them to an unnecessary scan for PE)?

As the full text of the article states:

It would be pertinent to comment on the many case reports of patients with confirmed acute aortic dissection but a negative D-dimer result. It should first be recognized that these cases did not have a risk-stratification applied and also that no test, no matter how good, including the reference standards for the disease, has 100% accuracy. These cases mostly represent a subgroup of patients with a thrombosed false lumen or an intramural hematoma who seem particularly likely to have a lower or negative D-dimer result. The studies in this meta-analysis included such patients, which means that the high sensitivity and excellent negative likelihood ratio were achieved with the inclusion of these problematic cases.

It is always worth remembering that rare diseases are rare, and that in a patient with a low pretest probability of having a disease, almost any test can appear to perform well (with a high negative predictive value) when applied to the wrong population. For instance, I can figure out who is low risk for aortic dissection among most chest pain patients with the “Bryan” rule: I just ask if their name is Bryan, spelled with a y. If negative, they are extremely unlikely to have an aortic dissection. Of course, if they do have one, my test will likely miss it, but the point remains. In the patient described above, even though the D-dimer was negative, she was not low risk by the fairly conservative AHA acute aortic dissection risk score (pictured below), and therefore the sensitivities and specificities cited in the articles in this meta-analysis don’t apply to her case.

[Figure: AHA acute aortic dissection risk score]

In cases where acute aortic dissection is suspected as a likely diagnosis, a D-dimer is probably not an appropriate substitute for definitive diagnostic imaging of the aorta — specifically, as stated in previous guidelines from the AHA: computed tomography (CT), magnetic resonance imaging (MRI), or transesophageal echocardiography. Let this inform your shared decision-making discussions in the emergency department, document accordingly, and hopefully you’ll be able to adopt a strategy that helps everyone sleep better at night.

References

Nazerian P, Morello F, Vanni S, Bono A, Castelli M, Forno D, Gigli C, Soardo F, Carbone F, Lupia E, Grifoni S. Combined use of aortic dissection detection risk score and D-dimer in the diagnostic workup of suspected acute aortic dissection. Int J Cardiol. 2014 Jul 15;175(1):78-82. PMID: 24838058.

Evaluation of Cervical Spine Clearance by Computed Tomographic Scan Alone in Intoxicated Patients With Blunt Trauma

One common and vexing problem I’ve run into thus far in residency is the intoxicated patient, found down, brought in by EMS in a rigid cervical collar placed because of the presumption of possible trauma leading to an unstable cervical injury. The efficacy and necessity of cervical collars has been debated elsewhere, and I’m not looking to discuss that here — what I’m more interested in is whether, if these patients have a negative CT scan (for better and for worse, fairly common practice in those unable to give a reliable exam, especially if they have any signs of trauma), we can safely remove their collar.

This study, by the “Pacific Coast Surgery Association” and published in JAMA Surgery, prospectively evaluated 1668 intoxicated adults with blunt trauma who underwent cervical spine CT scans over one year at a single Level I trauma center. Intoxication was defined based on the results of urine and blood testing, and the outcome of interest was clinically-significant cervical spine injuries that required cervical immobilization (not necessarily surgical fixation).

The authors wanted to evaluate the negative predictive value of a normal CT scan in the intoxicated patient to determine whether this would allow safe removal of the cervical collar — it is well-known that some injuries (e.g. unstable ligamentous injuries or spinal cord injuries without fractures of the vertebrae) may not be identifiable on a CT scan, and in the patient who is altered, it may be difficult to elicit exam findings that would tip a practitioner off to the presence of these injuries.

So what did they find? In intoxicated patients, the negative predictive value of a CT scan read as negative for acute injury was 99.2% for all injuries and 99.8% for unstable injuries. There were five false-negative CTs: four central cord syndromes without associated fracture, and one potentially unstable injury in a drug-intoxicated patient who presented with clear quadriplegia on examination. All of these were detected on MR imaging. About half of the intoxicated patients with a negative CT went on to be admitted with their cervical collar left on; none of these patients had an injury identified later, or any neurologic deficit, yielding an NPV of 100% in that cohort.

My takeaway from this paper: while there are some weaknesses, e.g. the lack of protocol-based care and the significant heterogeneity of “intoxication,” it seems reasonable to conclude that a negative CT scan done on a modern scanner, read by an experienced trauma radiologist or neuroradiologist, does allow you to safely clear the collar of an intoxicated patient who does not have any gross neurologic deficits. This lends further support to the 2015 recommendations from the Eastern Association for the Surgery of Trauma, who in a systematic review and meta-analysis “found the negative predictive value for identifying unstable CSIs to be 100% and thus have made a conditional recommendation for cervical collar removal based on a normal high-quality CT scan”. Adopting this practice could help minimize unnecessary testing (including expensive MRIs that are more likely to show false positives than to identify clinically-significant injuries), allow for earlier disposition of patients from the emergency department, increase patient comfort, and decrease the emotional and cognitive burden placed on providers, who otherwise often have to struggle continuously to keep patients adherent to immobilization practices.

References

Bush L, Brookshire R, Roche B, Johnson A, Cole F, Karmy-Jones R, Long W, Martin MJ. Evaluation of Cervical Spine Clearance by Computed Tomographic Scan Alone in Intoxicated Patients With Blunt Trauma. JAMA Surg. 2016 Jun 15. PMID: 27305663.

Pulmonary Embolism in Pregnancy

The diagnosis of pulmonary embolism in pregnant patients is made difficult by many factors, including a normal elevation in serum D-dimer levels (see below) as well as the additional concern about exposing a developing fetus to the high levels of radiation and contrast associated with CT pulmonary angiography. It is well-known that exogenous estrogen is a risk factor for thromboembolic disease, and while it seems from the data discussed below that pregnancy is not as scarily high-risk for PE as we might think, we certainly know that pregnancy is a time when hormones are running high. Add to this the fact that pregnant women are both tachypneic and tachycardic due to normal changes in cardiovascular and respiratory physiology, making a clinical diagnosis that much more difficult.

In these sequentially-published review articles by the PE guru Jeff Kline et al., the authors review the diagnostic dilemma presented by these patients and present the following algorithm:

[Figure: proposed diagnostic algorithm for suspected PE in pregnancy]

Note the inclusion of the trimester-stratified quantitative D-dimer for patients without a high pretest probability — this goes against the conventional wisdom that the D-dimer is a worthless test in pregnant women due to its normal elevation during pregnancy. Similar to the way we have begun “age-adjusting” the threshold value of the quantitative D-dimer in non-pregnant patients, they propose that the threshold be “adjusted according to the trimester of pregnancy, as follows: first trimester, 750 ng/mL; second trimester, 1000 ng/mL; third trimester, 1250 ng/mL (assuming a standard cutoff of 500 ng/mL). If the patient has a non-high-pretest probability, has no high-risk features, is PERC negative, and the bilateral ultrasound is negative, and the D-dimer is below the trimester-adjusted values, PE can be ruled out to a reasonable degree of medical certainty.”
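The proposed thresholds quoted above can be captured in a few lines. The cutoff values are the ones Kline et al. propose; the lookup function itself is my own framing for illustration, not anything from the paper.

```python
# Trimester-adjusted D-dimer cutoffs (ng/mL) proposed by Kline et al.,
# versus the standard non-pregnant cutoff of 500 ng/mL.
TRIMESTER_CUTOFFS_NG_ML = {1: 750, 2: 1000, 3: 1250}
STANDARD_CUTOFF_NG_ML = 500

def ddimer_cutoff(trimester=None):
    """Return the applicable D-dimer cutoff; None means not pregnant."""
    if trimester is None:
        return STANDARD_CUTOFF_NG_ML
    return TRIMESTER_CUTOFFS_NG_ML[trimester]

for t in (None, 1, 2, 3):
    print(t, ddimer_cutoff(t))
```

Remember that in the algorithm, a dimer below the adjusted cutoff only rules out PE in combination with the other conditions (non-high pretest probability, no high-risk features, PERC negative, negative bilateral ultrasound).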

They acknowledge the limitations of this approach, including that it hasn’t been prospectively validated, and they do not present any data showing its performance as they’ve been using it — but in cases like this, expert opinion is the best we have (so far). Kline discussed this approach on an episode of ERCast, explaining a bit more about its integration into clinical practice, as well as the role that gestalt can play in risk stratification.

What I found interesting about this was the idea that the postpartum period is the riskiest time for women in terms of pulmonary embolism — this echoes what we know about cardiovascular disease in the postpartum period, i.e. when women are autotransfused and their cardiopulmonary physiology is rapidly and massively altered, presenting the highest risk for women with heart failure, valvular abnormalities, or disease entities like peripartum cardiomyopathy. According to the data presented by Kline et al., while the risk increases throughout pregnancy, 70% of all peripartum PEs occur postpartum, and the risk during pregnancy itself is low (OR 0.4-0.8, depending on trimester) — though, as the authors note, this may not actually reflect that pregnancy is protective against PE, but may instead suggest that we overtest pregnant women for pulmonary embolism, perhaps because of the clinical changes described above. They also cite a large meta-analysis of 23 epidemiologic studies that found PE occurring in only 3 of 10,000 pregnancies.

Another thing that stood out to me while reviewing this article was that for a patient to PERC out of these algorithms, their vital signs must be normal throughout their entire ED stay — normalization of vital signs during an ED visit does not lower the risk of PE, as specifically stated by the authors.


References

Kovac M, Mikovic Z, Rakicevic L, Srzentic S, Mandic V, Djordjevic V, Radojkovic D, Elezovic I. The use of D-dimer with new cutoff can be useful in diagnosis of venous thromboembolism in pregnancy. Eur J Obstet Gynecol Reprod Biol. 2010 Jan;148(1):27-30. PMID: 19804940.
Kline JA, Williams GW, Hernandez-Nino J. D-dimer concentrations in normal pregnancy: new diagnostic thresholds are needed. Clin Chem. 2005 May;51(5):825-9. PMID: 15764641.

More Low-Risk Chest Pain!

In this article published in JAMA Internal Medicine in July of last year, a group of emergency physicians reviewed 11,230 records of patients hospitalized for chest pain with 2 negative troponin tests, nonconcerning initial ED vital signs, and nonischemic, interpretable electrocardiographic findings to determine the incidence of patient-centered adverse events in the short term.

What is interesting and unique about this study is the shift away from MACE (which, as I have discussed before, includes somewhat-nebulously-patient-centered bad outcomes such as the need for cardiac revascularization — an intervention, not a harm that occurred to a patient due to a lack of intervention) to their more “clinically relevant adverse cardiac events” (of course requiring a new catchy acronym, CRACE): (1) life-threatening arrhythmia (ventricular fibrillation, sustained ventricular tachycardia requiring treatment, symptomatic bradycardia or bradyasystole requiring emergent intervention, and any tachydysrhythmia treated with cardioversion); (2) inpatient STEMI; (3) cardiac or respiratory arrest; and (4) death.

Another unique aspect of this study was the enrollment of sicker patients who still met the criteria discussed above — many other studies considered “low risk” only those patients without significant comorbidities or cardiovascular disease histories (e.g. history of CABG, multiple stents, diabetes, hypertension, etc.). They did exclude patients with LBBB or paced rhythms on EKG, which would have made identification of ischemia more difficult.

What did they find? Only four patients out of the 7,266 meeting the above criteria went on to have any of the primary endpoints. Of these, two were non-cardiac and two were possibly iatrogenic. This is a rate of 0.06% (95% CI 0.02-0.14%), which is much lower than many people would likely guess, and can help inform the discussion we have with patients when arriving at a disposition. If I am practicing in a community such as the authors’, where short-term follow-up with a cardiologist can be arranged, and the patient is reliable, this data can help me feel more comfortable discharging them with that plan rather than admitting to the hospital, if the patient is comfortable with it.
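The headline rate and its confidence interval are easy to recompute. The reported interval is consistent with a Wilson score interval for 4 events in 7,266 patients — an assumption on my part, since the paper isn’t quoted here on which method it used.

```python
import math

def wilson_ci(events, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

rate = 4 / 7266
lo, hi = wilson_ci(4, 7266)
print(f"rate {rate:.2%}, 95% CI {lo:.2%} to {hi:.2%}")
# rate 0.06%, 95% CI 0.02% to 0.14%
```

The width of that interval is worth noticing: with only four events, the upper bound is more than double the point estimate, which is part of why prospective validation matters.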

As Ryan Radecki wrote, the applicability of this hinges on tightly integrated follow-up, and we cannot practice "catch and release" medicine. This is also only one data set; it requires prospective validation, and we need to acknowledge that this is not a zero-miss strategy (no strategy is). That said, there are many potential downsides associated with admission, from costs and downstream sequelae of unnecessary invasive testing to iatrogenic harms, and this study will help better inform our conversations with patients about all of these issues.

References

Weinstock MB, Weingart S, Orth F, VanFossen D, Kaide C, Anderson J, Newman DH. Risk for Clinically Relevant Adverse Cardiac Events in Patients With Chest Pain at Hospital Admission. JAMA Intern Med. 2015 Jul;175(7):1207-12. PMID: 25985100.

Age-Adjusted D-Dimer

Pulmonary embolism is a commonly investigated diagnosis in the world of emergency department risk stratification: the presentation of these patients is varied, the ultimate impact of the disease entity itself is questionable at the less sick end of the spectrum, and the tools we have for diagnosis are associated with significant amounts of radiation and contrast. However, in a practice environment with a low tolerance for missed diagnoses (however questionable the risk:benefit balance of the intervention that would have been performed), we continue to strive to balance the risks and costs of diagnostic testing against the very real risk of progressive disease.

The D-Dimer is a test used in patients with a low to moderate pretest probability of DVT or PE (and possibly aortic dissection?). A negative result virtually rules out PE and can help you avoid further testing with CT pulmonary angiography; a positive result requires further testing. So why do emergency physicians hate the D-Dimer? Because while an elevated D-Dimer is sensitive for pulmonary embolism or DVT, it is not specific, particularly at the conventional positivity cutoff of ~500 ng/mL. Elevated D-Dimer levels occur for a multitude of reasons, including liver disease, inflammation, malignancy, trauma, pregnancy, and, most complicating of all, advanced age.

In the first of the studies I read this weekend, the ADJUST-PE study, a group of authors had previously retrospectively derived and validated a progressive D-Dimer cutoff adjusted for age in 1712 patients; the optimal age-adjusted cutoff was defined as the patient's age multiplied by 10 in patients 50 years or older. The ADJUST-PE study represented their attempt to prospectively validate this adjustment and to assess its impact on patients in real life. In this multicenter study, which enrolled 3324 patients, the age-adjusted D-Dimer cutoff did very well: only one patient with a D-Dimer between 500 ng/mL and the age-adjusted cutoff (in other words, someone who would have been scanned without the new tool) was found at three-month follow-up to have a PE, and it was non-fatal. The age-adjusted level allowed for safe discharge of patients who might otherwise have been exposed to the costs and potential harms associated with CTPA or with treatment of non-hemodynamically-significant emboli.
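The adjustment itself is trivially simple to apply at the bedside. A minimal sketch (units assumed to be ng/mL, matching a conventional fixed cutoff of 500):

```python
def d_dimer_cutoff(age_years: int) -> int:
    """Age-adjusted D-dimer positivity threshold (ng/mL).

    Patients 50 or older get age x 10; younger patients keep
    the conventional fixed cutoff of 500 ng/mL.
    """
    return age_years * 10 if age_years >= 50 else 500

# A 78-year-old with a D-dimer of 720 ng/mL is "positive" at the
# conventional threshold but below the age-adjusted cutoff of 780.
assert d_dimer_cutoff(78) == 780
assert d_dimer_cutoff(45) == 500
```

Note that at exactly age 50 the two thresholds coincide, so the adjustment only ever raises the bar for older patients.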

The second study takes the same approach and retrospectively applies the cutoff to 31,094 suspected pulmonary embolism patients presenting to an emergency department in the community. They report data for all ED visits by Kaiser Permanente Southern California members older than 50 years, from 2008 to 2013, who received a D-dimer test after presenting with a chief complaint related to possible PE, such as chest pain or dyspnea (reflecting the focus on PE rather than DVT). The authors excluded patients who underwent ultrasound imaging for DVT for the same reason. They found a sensitivity of 92.9% and a specificity of 63.9% for the age-adjusted D-Dimer threshold applied to this population, compared to 98.0% and 54.4% for the traditional threshold of 500 ng/mL. This is not surprising. What I thought was interesting about the second paper was its expansion of the discussion of this testing strategy to include estimates of harms beyond the symptomatic PEs that might be missed: specifically, the incidence of contrast-induced nephropathy, and how changes in testing strategy translate into potential benefits there that may outweigh the harms done by missing clots. These are statistical models, and need to be taken with a grain of salt, but they predict that "using an age-adjusted D-dimer threshold would miss or delay diagnosis of 26 more pulmonary embolisms than the current standard, but it would prevent 322 cases of contrast-induced nephropathy, 29 cases of severe renal failure, and 19 deaths related to contrast-induced nephropathy in this sample."
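To make the sensitivity/specificity trade-off concrete, here is a back-of-the-envelope sketch of the 2×2 arithmetic. The cohort size and PE prevalence below are illustrative assumptions, not figures from the paper, and the model assumes every positive D-dimer leads to CTPA:

```python
def threshold_outcomes(n, prevalence, sensitivity, specificity):
    """Expected missed PEs and avoided scans for a D-dimer strategy.

    Returns the expected number of false negatives (missed or delayed
    PE diagnoses) and true negatives (patients spared imaging).
    """
    with_pe = n * prevalence
    without_pe = n - with_pe
    missed = with_pe * (1 - sensitivity)
    spared = without_pe * specificity
    return missed, spared

# Hypothetical cohort of 10,000 tested patients with 5% PE prevalence,
# using the sensitivities/specificities reported in the second study:
conventional = threshold_outcomes(10_000, 0.05, 0.980, 0.544)   # 500 ng/mL
age_adjusted = threshold_outcomes(10_000, 0.05, 0.929, 0.639)
# The age-adjusted threshold misses more PEs but spares many more scans;
# the paper's models weigh exactly this trade-off against contrast harms.
```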

So what will I do with this information? Probably push for better shared decision making, and try to avoid CTPA in patients with D-Dimers below the age-adjusted cutoff. I think sharing these numbers with our patients in a comprehensible way, and talking to them about the potential harms associated with testing, is the best way forward. This will require further work in identifying the best way to communicate these risks and odds to patients, and, as always, balancing advocacy for patients (and our ultimate goal of keeping them safe, alive, and functional) against the fear of missing a diagnosis or sending someone home with a nebulous non-diagnosis and the possibility of clinical deterioration.

References

Righini M, Van Es J, Den Exter PL, Roy PM, Verschuren F, Ghuysen A, Rutschmann OT, Sanchez O, Jaffrelot M, Trinh-Duc A, Le Gall C, Moustafa F, Principe A, Van Houten AA, Ten Wolde M, Douma RA, Hazelaar G, Erkens PM, Van Kralingen KW, Grootenboers MJ, Durian MF, Cheung YW, Meyer G, Bounameaux H, Huisman MV, Kamphuisen PW, Le Gal G. Age-adjusted D-dimer cutoff levels to rule out pulmonary embolism: the ADJUST-PE study. JAMA. 2014 Mar 19;311(11):1117-24. PMID: 24643601.

Have a HEART! And some low-risk chest pain risk stratification, while you’re at it!

Chest pain is tricky. And scary. The combination of these two things makes it one of the chief complaints that is most difficult to work up thoughtfully: minimizing risk to the patient (and provider) without overreaching in diagnostic testing and thereby adding harms of its own.

In medical school, we learned about the TIMI score as the best way to evaluate chest pain in our patients. However, this score was developed in inpatients admitted to the cardiology service with NSTEMI/UA, not ED patients presenting with chest pain, and has only really been validated in high-risk ED patients. The GRACE score is another that seems to slightly outperform TIMI in predicting certain adverse events, but again was not designed for risk stratification of ED patients with chest pain.

So now we have (and have had for a while; this isn't exactly new, I am reviewing it more for my own benefit) the HEART score, designed to "identify both low and high risk patients for an acute coronary syndrome" in the emergency department. It was derived not from a database but from "clinical experience and medical literature", and was then prospectively validated in 2440 patients at 10 sites. When compared to TIMI and GRACE, the c-statistic (the area under the receiver operating characteristic curve) was 0.83 v. 0.75 and 0.70 respectively, showing that it did a better job discriminating patients at higher risk for major adverse cardiac events (MACE). Pertinently for the ED physician, it also did a better job ruling *out* badness, with a lower percentage of "low-risk" scorers having an adverse event. With all this in mind, I plan to use the HEART score in my discussions with attendings when presented with chest pain patients, and hope that I will not only catch (and rule out) more badness, but may also help reduce invasive imaging and stress testing in these low-risk patients.
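As a reminder of the mechanics, each of the five letters (History, ECG, Age, Risk factors, Troponin) contributes 0-2 points, for a total of 0-10. A simplified sketch follows; the component scoring is condensed here and should be checked against the original paper before any clinical use:

```python
def heart_score(history: int, ecg: int, age_years: int,
                risk_factors: int, known_atherosclerosis: bool,
                troponin: int) -> int:
    """Sum the five HEART components (each scored 0-2).

    history, ecg, troponin: pass the 0-2 component score directly
    (e.g. troponin: 0 = normal, 1 = 1-3x cutoff, 2 = >3x cutoff).
    Age and risk-factor points are derived from raw inputs below.
    """
    for component in (history, ecg, troponin):
        assert component in (0, 1, 2)
    age = 0 if age_years < 45 else (1 if age_years < 65 else 2)
    risk = (2 if (risk_factors >= 3 or known_atherosclerosis)
            else 1 if risk_factors >= 1 else 0)
    return history + ecg + age + risk + troponin

# A 52-year-old with a moderately suspicious history, normal ECG,
# two risk factors, and a normal troponin scores 3 (low risk, 0-3).
assert heart_score(1, 0, 52, 2, False, 0) == 3
```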

In a recent single-center study, Mahler et al. looked back at patients with HEART scores of 0-3 (low risk) admitted to an ED-based observation unit (keep that population in mind) and evaluated the impact of the score on their receipt of further diagnostic testing down the line, as well as the incidence of adverse events in this group. They found, unsurprisingly, that these patients were in fact at low risk for ACS: only 0.5% of patients in this group had an adverse event in the next 30 days (though, to be thorough, there was a loss-to-follow-up rate of 30%, which is pretty significant). The surprising, and to me meaningful, finding was that use of the score reduced the rate of further testing by 83%, no doubt saving these patients unnecessary stress and anxiety, potential harms or complications, and costs to both them and the health care system.

This is a very incomplete treatment of chest pain risk stratification, I know, but I hope to add more as I learn and read more about these scoring systems and others, and grow in my understanding of critical appraisal of the literature.

References

Antman EM, Cohen M, Bernink PJ, McCabe CH, Horacek T, Papuchis G, Mautner B, Corbalan R, Radley D, Braunwald E. The TIMI risk score for unstable angina/non-ST elevation MI: A method for prognostication and therapeutic decision making. JAMA. 2000 Aug 16;284(7):835-42. PMID: 10938172.
Mahler SA, Hiestand BC, Goff DC Jr, Hoekstra JW, Miller CD. Can the HEART score safely reduce stress testing and cardiac imaging in patients at low risk for major adverse cardiac events? Crit Pathw Cardiol. 2011 Sep;10(3):128-33. PMID: 21989033.
Backus BE, Six AJ, Kelder JC, Bosschaert MA, Mast EG, Mosterd A, Veldkamp RF, Wardeh AJ, Tio R, Braam R, Monnink SH, van Tooren R, Mast TP, van den Akker F, Cramer MJ, Poldervaart JM, Hoes AW, Doevendans PA. A prospective validation of the HEART score for chest pain patients at the emergency department. Int J Cardiol. 2013 Oct 3;168(3):2153-8. PMID: 23465250.