CheXray: Multi-Model Chest X-ray Analysis and Report Drafting

Skills Used: Agentic AI, Medical Imaging AI, LangGraph Orchestration

I developed CheXray as a research prototype that combines two computer vision models with an LLM to support chest X-ray interpretation.

The pipeline integrates:

  • CheXNet (DenseNet121, Keras) for probability scores on 14 thoracic findings.
  • TorchXRayVision (PyTorch) for broader pathology scoring.
  • LangGraph + LangChain + OpenAI to synthesize both model outputs into a structured draft report with Findings and Impression sections.
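As a sketch of how the two models' per-pathology probabilities might be merged for side-by-side comparison before synthesis, assuming each model's output is a simple label-to-probability dict (the function name `merge_model_scores` is illustrative, not the repository's actual API):

```python
from typing import Dict

def merge_model_scores(
    chexnet: Dict[str, float],
    txrv: Dict[str, float],
) -> Dict[str, Dict[str, float]]:
    """Merge per-pathology probabilities from both models into one
    structure keyed by a normalized label, for cross-model review."""
    merged: Dict[str, Dict[str, float]] = {}
    for label, prob in chexnet.items():
        merged.setdefault(label.lower(), {})["chexnet"] = prob
    for label, prob in txrv.items():
        merged.setdefault(label.lower(), {})["torchxrayvision"] = prob
    return merged

scores = merge_model_scores(
    {"Cardiomegaly": 0.72, "Edema": 0.31},
    {"Cardiomegaly": 0.65, "Pneumonia": 0.12},
)
# "cardiomegaly" now carries both models' probabilities side by side.
```

Keying on a normalized label lets findings that only one model reports (e.g. Pneumonia above) still appear in the merged view.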

Why this matters

CheXray is useful for research workflows where you need both reproducible model scores and readable first-pass report drafts. It supports:

  • Faster triage-style experimentation.
  • Side-by-side cross-model comparison.
  • More consistent reporting prototypes during dataset studies.

Workflow summary

  1. Build image candidate lists from MIMIC-CXR metadata.
  2. Run batch inference with CheXNet and TorchXRayVision.
  3. Aggregate model outputs in LangGraph.
  4. Generate a radiology-style draft report through the LLM.
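Steps 3–4 above can be sketched as a prompt-building stage; this is a minimal illustration under assumed names (the real project routes aggregation through LangGraph and sends the prompt to an OpenAI model, neither of which is shown here):

```python
from typing import Dict

def build_report_prompt(
    merged_scores: Dict[str, Dict[str, float]],
    threshold: float = 0.5,
) -> str:
    """Turn merged per-model pathology scores into an LLM prompt that
    requests a draft report with Findings and Impression sections."""
    lines = []
    for label, by_model in sorted(merged_scores.items()):
        # Flag the finding if any model's probability crosses the threshold.
        flagged = any(p >= threshold for p in by_model.values())
        tag = "POSITIVE" if flagged else "negative"
        detail = ", ".join(f"{m}={p:.2f}" for m, p in sorted(by_model.items()))
        lines.append(f"- {label} [{tag}]: {detail}")
    return (
        "You are drafting a research-only chest X-ray report.\n"
        "Model scores (probability per pathology):\n"
        + "\n".join(lines)
        + "\n\nWrite a draft with 'Findings' and 'Impression' sections."
    )

prompt = build_report_prompt(
    {"cardiomegaly": {"chexnet": 0.72, "torchxrayvision": 0.65}}
)
```

Keeping the raw per-model probabilities in the prompt, rather than only a binary flag, lets the draft report hedge findings where the two models disagree.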

The output is intended for research and educational use, with clinician review required.

Repository: nimadarbandi/chestXray