Organophosphate flame retardants (OPFRs) are ubiquitous flame-retardant additives with endocrine-disrupting properties. Despite increasing evidence that OPFRs impact neurodevelopment, their effects on the neuroendocrine stress response remain poorly understood. To examine their long-term impact on stress regulation, we exposed pregnant C57BL/6J dams to a mixture of tris(1,3-dichloro-2-propyl) phosphate (TDCPP), triphenyl phosphate (TPP), and tricresyl phosphate (TCP; 1 mg/kg each) from gestational day (GD) 7 through postnatal day (PND) 14. Adult offspring (8-9 weeks of age) were then challenged with acute stressors: either 1 h of restraint or a 6-day acute variable stress (AVS) paradigm. Perinatal OPFR exposure produced persistent, sex-specific alterations in the hypothalamic-pituitary-adrenal (HPA) axis and stress-related neurocircuitry. Following 1 h restraint, OPFR-treated females showed heightened serum corticosterone. In addition, gene expression analysis revealed sex-dependent disruptions in key stress-regulatory pathways after OPFR treatment and 1 h restraint in the hypothalamus (Crhr1, Crhr2, Ptpn5) and pituitary (Crhr1, Pomc, Nr3c1). Females demonstrated more differences in adrenal gene expression related to steroidogenesis (Mc2r, Cyp11b2) and catecholamine biosynthesis (Dbh, Pnmt), with OPFR-treated groups having blunted responses. OPFR AVS females displayed reduced corticosterone and Crh mRNA in the hypothalamus, and downregulated Pacap/Pac1r expression in the bed nucleus of the stria terminalis (BNST), accompanied by increased behavioral avoidance and immobility. In males, OPFR exposure led to increased BNST Pacap and Pac1r expression, along with hyperactivity and avoidance behaviors.
Together, these findings demonstrate that early-life OPFR exposure induces lasting, sex-specific dysregulation of the HPA axis and associated stress circuits, highlighting OPFRs as developmental neuroendocrine disruptors with implications for mood and stress-related disorders.
Publications
2026
BACKGROUND: Clinical Informatics is a wide-ranging field that engages with nearly every aspect of clinical care documented in the electronic health record (EHR). While studies from the informatics literature have gradually introduced more sophisticated machine learning and artificial intelligence (AI) techniques into clinical settings, the explosive growth of Large Language Models (LLMs) has enticed both entrepreneurs and clinicians to rapidly introduce LLMs into the Emergency Department.
DISCUSSION: Clinical Informaticists possess a deep understanding of both the clinical significance and underlying architecture of clinical data. Misunderstanding how data are represented can pose significant hazards for clinical care, research, and AI systems. Despite the seemingly high performance of LLMs on some clinical measures, evidence for their ability to reason clinically is lacking, and they often provide confident, false answers. Emergency Physicians (EPs) who are board-certified in Clinical Informatics could be a natural constituency to help integrate these technologies safely into the ED. However, there are very few EPs with this board certification, due to high demand, few training programs, and a lack of visibility of the subspecialty.
CONCLUSIONS: LLMs and other AI systems are likely to play a growing role within the ED as technology improves and hospitals partner with commercial vendors. Working EPs need to have a strong understanding of the potential benefits and limitations of these technologies, and EPs with training in Informatics will play an essential role. Increasing exposure to Clinical Informatics within Emergency Medicine residencies and supporting EPs to go into Informatics fellowships is paramount.
While tau pathology is closely associated with neurodegeneration in Alzheimer's disease (AD), our prior work using multi-modality imaging revealed that mismatch between tau (T) and neurodegeneration (N) may reflect contributions from non-AD processes. The medial temporal lobe (MTL), an early site of AD pathology, is also a common target of co-pathologies such as limbic-predominant age-related TDP-43 encephalopathy neuropathologic change (LATE-NC), often following an anterior-posterior atrophy gradient. Given the susceptibility of the MTL to co-pathologies, here we explored T-N mismatch specifically within the MTL using plasma ptau217 and MTL morphometry to identify vulnerability and resilience in cognitively impaired or unimpaired AD patients. We parcellated the MTL into 100 spatially contiguous segments and calculated their T-N mismatch using plasma ptau217 as a measure of T and thickness as a marker of N. Based on these mismatch profiles, we clustered 447 amyloid-positive individuals from the ADNI cohort into data-driven T-N phenotypes. We characterized the T-N phenotypes by examining their cross-sectional and longitudinal atrophy both within the MTL and across the whole brain, as well as cognitive trajectories. This framework was replicated in an independent cohort and finally translated to a real-world clinical sample of 50 patients undergoing anti-amyloid therapy. Clustering identified three T-N phenotypes with different MTL T-N mismatch profiles, atrophy patterns, and cognitive outcomes, despite comparable AD severity. The "canonical" group, characterized by low T-N residuals (N ∼ T), showed AD-like neurodegeneration patterns. The "vulnerable" group, characterized by disproportionately greater neurodegeneration than tau (N > T), showed atrophy primarily in the anterior MTL that extended into temporal-limbic regions, both in cross-sectional and longitudinal analyses.
This group also exhibited neurodegeneration that preceded estimated tau onset and experienced faster cognitive decline across multiple domains, aligning with the typical characteristics of mixed LATE-NC with AD. In contrast, the "resilient" group (N < T) showed minimal atrophy and preserved cognitive function. These phenotypes were reproducible in an independent research cohort. Importantly, in a feasibility study applying the model developed from ADNI to a clinical cohort of patients receiving lecanemab, we identified vulnerable individuals with LATE-like atrophy patterns. This highlights its potential utility for identifying individuals with co-pathology in clinical settings. Our findings demonstrate that T-N mismatch within MTL using MRI and plasma biomarkers can reveal AD groups with varying vulnerability/resilience, with the vulnerable group displaying structural and cognitive outcomes suggestive of LATE-NC. This approach offers a cost-effective strategy for clinical trial stratification and precision medicine for AD therapeutics.
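The residual-based T-N mismatch idea can be sketched on synthetic data. Everything below is hypothetical: the data are simulated, a simple per-segment linear fit stands in for whatever T-N model the study used, and a minimal k-means replaces its clustering pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data standing in for the study's inputs:
# plasma ptau217 (T) per subject, thickness (N) for 100 MTL segments.
n_subjects, n_segments = 300, 100
ptau = rng.lognormal(mean=0.0, sigma=0.5, size=n_subjects)
thickness = 3.0 - 0.3 * ptau[:, None] + rng.normal(0.0, 0.2, (n_subjects, n_segments))

# T-N residual per segment: regress thickness on ptau217 and keep the
# residual. A negative residual means more neurodegeneration than tau
# predicts (N > T); a positive one suggests resilience (N < T).
residuals = np.empty_like(thickness)
for s in range(n_segments):
    slope, intercept = np.polyfit(ptau, thickness[:, s], 1)
    residuals[:, s] = thickness[:, s] - (slope * ptau + intercept)

def kmeans(X, k=3, n_iter=50, seed=0):
    """Minimal k-means on subjects' segment-wise residual profiles."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centers = np.stack([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels

labels = kmeans(residuals)
print(np.bincount(labels, minlength=3))  # subjects per data-driven T-N phenotype
```

In the study itself, the resulting clusters were then characterized against atrophy and cognitive trajectories; the sketch stops at the phenotype assignment.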
BACKGROUND: The laryngeal view from an unsuccessful first intubation attempt is critical for planning the next attempt.
OBJECTIVE: To estimate the agreement of glottic views, as measured by the Cormack-Lehane classification, between the first and second intubation attempts in the emergency department (ED).
METHODS: We performed a retrospective cohort study of ED intubations in the National Emergency Airway Registry from 2016 to 2018 in adults who received both a sedative and paralytic, were intubated with either direct or video laryngoscopy, and underwent more than one intubation attempt. We excluded cases in which the laryngoscope type, supine vs. nonsupine positioning, or use of external laryngeal manipulation changed between attempts. We divided cases into two cohorts: the different intubator cohort (first and second attempts by different intubators) and the same intubator cohort (both attempts by the same intubator). We measured the percent agreement and calculated a weighted kappa (κ) as a secondary measure of agreement.
RESULTS: We included 640 ED intubation cases: 200 in the different intubator cohort and 440 in the same intubator cohort. Between the first and second attempts, the Cormack-Lehane grade was the same in 100 (50.0%, 95% confidence interval [CI] 43.1-56.9) cases for the different intubator cohort (κ = 0.40, 95% CI 0.29-0.51) and 317 (72.0%, 95% CI 67.6-76.1) cases in the same intubator cohort (κ = 0.53, 95% CI 0.46-0.61).
CONCLUSION: Among ED intubations with multiple attempts under similar conditions, the glottic view changed in half of all cases when the intubator changed, and in over a quarter of cases when the same intubator tried again.
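The weighted kappa used above can be illustrated with a short sketch. This is not the registry analysis itself: the paired grades below are hypothetical, and linear disagreement weights are assumed (the abstract does not specify the weighting scheme).

```python
def weighted_kappa(pairs, categories=(1, 2, 3, 4)):
    """Linearly weighted Cohen's kappa for paired ordinal ratings,
    e.g. Cormack-Lehane grades from a first and second attempt."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(pairs)
    # Observed joint proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in pairs:
        obs[idx[a]][idx[b]] += 1 / n
    # Marginal proportions for the chance-expected table
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights: 0 on the diagonal, growing with distance
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return 1 - observed / expected

# Hypothetical (first attempt, second attempt) Cormack-Lehane grade pairs
pairs = [(1, 1), (1, 2), (2, 2), (2, 2), (3, 2), (3, 3), (4, 3), (1, 1)]
print(round(weighted_kappa(pairs), 3))  # → 0.625
```

Weighted kappa credits near-misses (grade 2 vs. 3) more than distant disagreements (grade 1 vs. 4), which suits an ordinal scale like Cormack-Lehane better than unweighted agreement.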
BACKGROUND: Direct oral anticoagulant (DOAC) use in children is increasing, supported by multi-center randomized controlled trials (RCTs) and registry data. This study describes real-world experience using DOACs in a paediatric population.
METHODS: We performed a retrospective case series at a single Australian paediatric tertiary hospital, including children aged 0-18 years treated with a DOAC for thrombosis prophylaxis or treatment between January 1st 2021 and May 1st 2025. Data were obtained through retrospective review of patient medical records. Patients were excluded if their medical records lacked sufficient documentation to allow for our analysis. Primary outcomes were thrombus resolution, recurrence, extension, or development. Principal safety outcomes were major or clinically relevant non-major bleeding (CRNMB) and adverse events. Qualitative data on quality of life (QoL) were also obtained where available.
FINDINGS: Of the 69 patients prescribed a DOAC, 67 met inclusion criteria, yielding 74 treatment episodes. Two patients were excluded due to insufficient documentation. Indications for treatment included therapeutic anticoagulation (n = 49), primary prophylaxis (n = 4), secondary prophylaxis (n = 10), central venous line (CVL) prophylaxis for patients on long-term total parenteral nutrition (TPN) (n = 5), and localized intravascular coagulopathy (LIC) secondary to venous malformation (n = 6). Treatment failure occurred in four episodes (5%), one each in the therapeutic, primary prophylaxis, secondary prophylaxis, and LIC groups. There were no episodes of fatal thromboembolism. There were no major bleeding events or treatment-related deaths; there were three (4%) CRNMB events and four (5%) DOAC-related side effects.
INTERPRETATION: We confirm the effectiveness of DOACs for the prophylaxis and treatment of paediatric thromboembolism and report acceptable risk of bleeding and side effects.
OBJECTIVE: This study aimed to compare positron emission tomography (PET) and plasma-based temporal modeling of amyloid and tau biomarkers in Alzheimer's disease.
METHODS: Longitudinal amyloid PET (n = 1,097, mean age ± SD = 72.5 ± 7.38 years, 51.4% male), 18F-flortaucipir tau-PET (n = 230, 74.3 ± 7.18 years, 52.2% female), and Fujirebio Lumipulse plasma p-tau217 (n = 752, 72.8 ± 6.93 years, 51.3% male) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and University of Pennsylvania Alzheimer's Disease Research Center (Penn ADRC) were used to generate biomarker trajectory models using sampled iterative local approximation (SILA). SILA models using plasma p-tau217 were compared to amyloid and tau PET-based models to estimate amyloid and tau onset, and factors influencing tau onset and the time from tau onset to dementia were evaluated for PET and plasma models.
RESULTS: Plasma and PET models generated similar results for estimated amyloid and tau onset, with stronger model agreement for tau (r = 0.88 [0.86, 0.89], t = 57.4, p < 0.001) than amyloid (r = 0.75 [0.72, 0.77], t = 37.4, p < 0.001) onset. Accuracy of estimated onset compared to actual onset was high within modality (mean absolute error [MAE] ≤ 2.03) with slightly greater error (MAE 3.09-3.42) when comparing across modalities (i.e., plasma to PET). For both plasma and PET, earlier tau onset was associated with younger amyloid onset, female sex, and ≥1 apolipoprotein E (ApoE) ε4 allele. Earlier dementia onset after tau was associated with later tau onset for both plasma and PET, while male sex was associated with a shorter tau-to-dementia gap in plasma models.
INTERPRETATION: Temporal modeling of plasma biomarkers provides comparable information to PET-based models, particularly for tau onset age, and can serve as a widely accessible tool for clinical assessment of biological disease severity. ANN NEUROL 2026.
Clinical evaluation of large language models (LLMs) currently relies on static datasets and isolated scenarios that fail to capture the cascading effects of healthcare decisions. We propose the Clinical Environment Simulator (CES), a framework that evaluates clinical LLMs within digital hospital environments where every decision dynamically alters future states. The CES would use a parallel simulation architecture: a 'hospital engine' that tracks bed availability, staff workloads and equipment status in real time, and a 'patient engine' that simulates disease progression and treatment responses based on LLM interventions. Unlike current benchmarks, the CES framework requires clinical LLMs to execute decisions through realistic electronic health record interfaces, while managing trade-offs between individual patient optimization and system-wide efficiency. The CES enables three critical evaluations absent from current benchmarks: temporal reasoning under evolving constraints, where delayed diagnostics can lead to patient deterioration; resource-aware decision-making, where aggressive workups for one patient may exhaust capacity needed by others; and operational resilience, through adversarial testing with simultaneous emergencies and system failures. By scoring LLM performance on both clinical outcomes and operational metrics, the CES represents a shift toward evaluating clinical LLMs as a dynamic and integrated component of healthcare delivery systems.
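The proposed two-engine architecture can be sketched in miniature, under heavy assumptions: the `HospitalEngine`, `PatientEngine`, policy interface, and scoring rule below are toy inventions for illustration, not the CES specification.

```python
import random
from dataclasses import dataclass

random.seed(1)

@dataclass
class HospitalEngine:
    """Tracks a shared resource; every decision consumes or frees it."""
    beds: int = 10

    def admit(self) -> bool:
        if self.beds == 0:
            return False  # decision blocked by system-wide constraints
        self.beds -= 1
        return True

    def discharge(self) -> None:
        self.beds += 1

@dataclass
class PatientEngine:
    """Toy disease progression: severity drifts upward unless treated."""
    severity: float = 0.5

    def step(self, treated: bool) -> None:
        delta = -0.2 if treated else random.uniform(0.0, 0.15)
        self.severity = min(1.0, max(0.0, self.severity + delta))

def run_episode(policy, steps=20):
    """Score one episode. `policy(severity, beds_free)` returns True to
    admit/treat -- a stand-in for the LLM agent the CES would evaluate."""
    hospital, patient = HospitalEngine(beds=2), PatientEngine()
    admitted = False
    for _ in range(steps):
        if not admitted and policy(patient.severity, hospital.beds):
            admitted = hospital.admit()
        patient.step(treated=admitted)
        if admitted and patient.severity < 0.1:
            hospital.discharge()
            admitted = False
    # Joint score: clinical outcome plus remaining system capacity
    return (1 - patient.severity) + 0.1 * hospital.beds

score = run_episode(lambda severity, beds_free: severity > 0.7 and beds_free > 0)
print(round(score, 2))
```

Even this toy version shows the core evaluation property the framework argues for: the agent's reward couples an individual clinical outcome with system-level resource state, so aggressive early admission and watchful waiting score differently.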
OBJECTIVES: Acute sciatica is a frequent cause of emergency department (ED) visits and hospital admissions. We evaluated the potential national cost savings of using ultrasound-guided transgluteal sciatic nerve block (TGSNB) in patients with acute sciatica who would otherwise be admitted.
METHODS: We performed a Monte Carlo simulation with 10,000 iterations to compare the costs of usual care versus TGSNB targeted to patients who would otherwise require admission. Model inputs included national ED visits for acute sciatica, pre-block admission rates, admission costs, and procedural costs. The primary outcomes were per-patient savings among admitted patients and projected annual national savings.
RESULTS: Targeted use of TGSNB in admission-eligible patients yielded mean per-patient savings of $11,974 (95% UI: $6,702-$18,527). Extrapolated nationally, this corresponds to $45.8 M (95% UI: $22.9 M-$74.0 M) in annual savings. Block costs were modest ($0.67 M; 95% UI: $0.46 M-$0.93 M), and sensitivity analysis identified admission rates and costs as the main drivers of savings.
CONCLUSIONS: Adoption of TGSNB for severe sciatica in the ED may reduce admissions and generate meaningful healthcare savings. Prospective studies are needed to confirm clinical efficacy and implementation feasibility.
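The Monte Carlo approach described above can be sketched in miniature. All distributions and parameters below are hypothetical placeholders, not the study's actual model inputs; the sketch only illustrates how a mean and 95% uncertainty interval for per-patient savings are assembled from sampled inputs.

```python
import random

random.seed(0)

def simulate_savings(n_iter=10_000):
    """Illustrative Monte Carlo for per-patient savings from avoided
    admissions. Every distribution here is a hypothetical placeholder."""
    savings = []
    for _ in range(n_iter):
        admission_cost = random.gauss(12_500, 2_500)  # hypothetical admission cost ($)
        block_cost = random.gauss(300, 60)            # hypothetical TGSNB procedural cost ($)
        p_avoided = random.betavariate(8, 2)          # hypothetical chance the block avoids admission
        savings.append(p_avoided * admission_cost - block_cost)
    savings.sort()
    mean = sum(savings) / n_iter
    lo, hi = savings[int(0.025 * n_iter)], savings[int(0.975 * n_iter)]
    return mean, (lo, hi)

mean, (lo, hi) = simulate_savings()
print(f"mean per-patient savings ≈ ${mean:,.0f} (95% UI ${lo:,.0f}-${hi:,.0f})")
```

The study's finding that admission rates and costs dominate the result falls out naturally in this structure: the `p_avoided` and `admission_cost` draws scale the savings multiplicatively, while the block cost is a small additive term.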
OBJECTIVE: The objective of this study was to evaluate the impact of operator training level, specifically comparing Emergency Medicine (EM) attending physicians and residents, on the analgesic efficacy of ultrasound-guided nerve blocks (UGNBs) performed in the emergency department (ED).
METHODS: This is a secondary analysis of the National Ultrasound-Guided Nerve (NURVE) Block Registry, involving 11 U.S. EDs from January 1, 2022, to December 31, 2023. Adult patients undergoing UGNBs for acute pain or procedural analgesia were included, totaling 1595 procedures after exclusion of cases with incomplete post-procedural pain scores. The primary outcome was percent pain reduction, with >50% defined as clinically meaningful and >75% as substantial analgesia. Subgroup analyses were performed by operator experience and block type.
RESULTS: Attendings achieved clinically meaningful pain reduction in 80.7% of cases versus 63.4% for residents, and substantial reduction in 68.1% vs. 47.7%, respectively (p < 0.001). This difference persisted at the highest experience level (>20 prior blocks: 82.3% vs. 71.0%, p = 0.0007) and was observed across block types, reaching significance for erector spinae plane blocks (79.6% vs. 63.6%, p = 0.01). Complications were rare (0.13%), with both events in resident-performed blocks.
CONCLUSION: UGNBs performed by attendings were associated with greater analgesic success compared with those by residents, yet both groups achieved high rates of clinically meaningful pain reduction with very low complication rates. These results underscore the role of experience in UGNB efficacy while supporting the safety and effectiveness of supervised resident performance in the ED.
Point-of-care ultrasound (POCUS) has emerged as a powerful tool for bedside diagnosis and management, offering real-time clinical insights and cost savings. Its integration into rural family medicine could reduce reliance on advanced imaging, improve patient satisfaction, and support physician versatility across primary, emergency, and procedural care. Despite these advantages, POCUS adoption remains limited, largely due to ambiguous and inconsistent reimbursement policies. Rural Health Clinic all-inclusive payment models, state Medicaid variability, and Local Coverage Determination (LCD) gaps undermine financial sustainability. Cost analyses demonstrate meaningful system-level savings, yet physician revenue remains constrained, particularly in Medicare-heavy rural populations. Policy solutions include adjusting rural payment models, establishing national LCDs, introducing visit modifiers, and leveraging tele-ultrasound and hybrid training approaches. Complementary pathways, such as limited out-of-pocket patient payments, may provide short-term support but risk inequities. Aligning reimbursement policy with demonstrated clinical and economic benefits is critical to scaling POCUS in rural family medicine and strengthening equitable access to care.