CheXray: Multi-Model Chest X-ray Analysis and Report Drafting
I developed CheXray as a research prototype that combines two computer-vision models with an LLM to support chest X-ray interpretation.
The pipeline integrates:
- CheXNet (DenseNet121, Keras) for 14 thoracic findings.
- TorchXRayVision (PyTorch) for broader pathology scoring.
- LangGraph + LangChain + OpenAI to synthesize both model outputs into a structured draft report with Findings and Impression sections.
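The two vision models emit overlapping but non-identical label sets, so combining their outputs requires mapping to a shared vocabulary. A minimal sketch of that merge step, assuming averaged probabilities; the label names follow the published CheXNet label set, but the exact mapping and combination rule used in CheXray are illustrative assumptions:

```python
# Merge per-label probabilities from two chest X-ray classifiers.
# Averaging is an assumed combination rule, not necessarily the one
# the repository uses.

def merge_scores(chexnet: dict, xrv: dict) -> dict:
    """Average probabilities where both models score a label;
    keep the single available score otherwise."""
    merged = {}
    for label in set(chexnet) | set(xrv):
        scores = [s for s in (chexnet.get(label), xrv.get(label))
                  if s is not None]
        merged[label] = sum(scores) / len(scores)
    return merged

# Example outputs (hypothetical scores for real CheXNet labels).
chexnet_out = {"Cardiomegaly": 0.81, "Edema": 0.12, "Pneumonia": 0.30}
xrv_out = {"Cardiomegaly": 0.74, "Effusion": 0.22}

print(merge_scores(chexnet_out, xrv_out))
```

Averaging keeps the merged score interpretable as a probability and naturally damps disagreement between the two backbones.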
Why this matters
CheXray is useful for research workflows that need both reproducible model scores and readable first-pass report drafts. It supports:
- Faster triage-style experimentation.
- Side-by-side cross-model comparison.
- More consistent reporting prototypes during dataset studies.
Workflow summary
- Build image candidate lists from MIMIC-CXR metadata.
- Run batch inference with CheXNet and TorchXRayVision.
- Aggregate model outputs in LangGraph.
- Generate a radiology-style draft report through the LLM.
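The steps above can be sketched end to end in plain Python. In the actual pipeline the aggregation and drafting stages are LangGraph nodes calling OpenAI; here the threshold, prompt wording, and the stubbed LLM are assumptions so the sketch runs standalone:

```python
# End-to-end sketch of the CheXray workflow as plain functions.
# In the repository these stages are wired as LangGraph nodes; the
# cutoff, prompt template, and fake LLM below are illustrative only.

FINDING_THRESHOLD = 0.5  # assumed cutoff for flagging a finding

def aggregate(model_outputs: list) -> dict:
    """Average each label's score across the vision models
    (a label missing from a model counts as 0.0 here)."""
    labels = {label for out in model_outputs for label in out}
    n = len(model_outputs)
    return {label: sum(out.get(label, 0.0) for out in model_outputs) / n
            for label in labels}

def build_prompt(scores: dict) -> str:
    """Turn aggregated scores into a drafting prompt for the LLM."""
    flagged = [f"{label}: {s:.2f}" for label, s in sorted(scores.items())
               if s >= FINDING_THRESHOLD]
    body = "\n".join(flagged) or "No findings above threshold."
    return ("Draft a chest X-ray report with Findings and Impression "
            "sections based on these model scores:\n" + body)

def draft_report(scores: dict, llm=None) -> str:
    """Send the prompt to an LLM; stubbed so the sketch needs no API key."""
    prompt = build_prompt(scores)
    if llm is None:
        return ("FINDINGS: see flagged scores.\n"
                "IMPRESSION: draft only; clinician review required.\n\n"
                + prompt)
    return llm(prompt)

scores = aggregate([{"Cardiomegaly": 0.81, "Edema": 0.12},
                    {"Cardiomegaly": 0.74, "Effusion": 0.62}])
print(draft_report(scores))
```

Keeping the prompt construction separate from the LLM call makes each stage independently testable, which mirrors how a LangGraph node per stage would be exercised.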
The output is intended for research and educational use only; clinician review of every draft is required.
Repository: nimadarbandi/chestXray
