Paper Accepted at AIR-RES 2026

Skills Used: Foundation Models · Label-Efficient Learning · Self-Supervised Learning · Computer Vision

I am pleased to share that our paper has been accepted at the 2026 International Conference on the AI Revolution: Research, Ethics, and Society (AIR-RES 2026), for both publication in the proceedings and presentation at the conference.

Paper details:

  • Paper ID: AIR3365
  • Title: Foundation Models as Data Engines: Label-Efficient Learning in Modern Computer Vision
  • Category: Regular Research Paper
  • Authors: Mohammad R. (Nima) Darbandi, Mahsa Darbandi, Sara Darbandi, Afsaneh Shams, Soheyla Amirian, Hamid R. Arabnia, Guoming Li, Tianming Liu

In this paper, we argue that computer vision is moving from a model-centric workflow to a supervision-centric Data Engine paradigm, where foundation models help iteratively curate, label, and verify training data.

The motivation is practical: traditional deep-learning pipelines are heavily constrained by annotation cost and turnaround time, much of which is spent on manual labeling. We present a taxonomy that traces the field from fully manual supervision to increasingly autonomous supervision loops, in which AI-assisted verification and self-improving cycles reduce the direct human burden.
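To make the idea of a self-improving supervision loop concrete, here is a minimal sketch of one curate-label-verify round. This is illustrative only and not the paper's method: a toy nearest-centroid classifier stands in for a foundation model, and the confidence threshold `tau` is a hypothetical choice for the example.

```python
import numpy as np

def fit_centroids(X, y, n_classes):
    """One centroid per class: the mean of that class's labeled examples."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict_with_confidence(centroids, X):
    """Label = nearest centroid; confidence = softmax over negative distances."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    logits = -d
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return p.argmax(axis=1), p.max(axis=1)

def data_engine_round(X_lab, y_lab, X_unlab, n_classes, tau=0.9):
    """One curate-label-verify cycle: pseudo-label the unlabeled pool,
    keep only high-confidence examples, fold them into the labeled set."""
    centroids = fit_centroids(X_lab, y_lab, n_classes)
    yhat, conf = predict_with_confidence(centroids, X_unlab)
    keep = conf >= tau  # "verification" step: confidence gating
    X_new = np.concatenate([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, yhat[keep]])
    return X_new, y_new, X_unlab[~keep]

# Tiny synthetic example: two well-separated 2-D clusters,
# one labeled seed example per class.
rng = np.random.default_rng(0)
X_lab = np.array([[0.0, 0.0], [5.0, 5.0]])
y_lab = np.array([0, 1])
X_unlab = np.concatenate([rng.normal(0, 0.3, (20, 2)),
                          rng.normal(5, 0.3, (20, 2))])
X2, y2, leftover = data_engine_round(X_lab, y_lab, X_unlab, n_classes=2)
```

In a real data engine, the refit-and-relabel step repeats over successive rounds, with low-confidence leftovers routed to human annotators rather than discarded.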

The paper also analyzes three converging directions:

  1. Automated curation and labeling using models such as SAM, DINO, and CLIP.
  2. Knowledge agglomeration through distillation from large teacher models into efficient student models.
  3. Domain-specific vertical adaptation in label-scarce settings such as medical imaging, remote sensing, and cellular biology.
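As a hedged sketch of the first direction, the snippet below shows CLIP-style zero-shot pseudo-labeling by cosine similarity. In a real pipeline the embeddings would come from CLIP's image and text encoders; here hypothetical pre-computed vectors stand in so the example is self-contained, and the `temperature` and `tau` values are illustrative assumptions.

```python
import numpy as np

def zero_shot_pseudo_labels(img_emb, class_emb, temperature=0.07, tau=0.6):
    """Classify image embeddings against class (text-prompt) embeddings by
    cosine similarity; keep only pseudo-labels above a confidence threshold."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = class_emb / np.linalg.norm(class_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # scaled cosine similarities
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    labels = p.argmax(axis=1)
    conf = p.max(axis=1)
    mask = conf >= tau  # below-threshold images go back for review
    return labels, conf, mask

# Hypothetical embeddings: 3 images, 2 classes (e.g. "cat"/"dog" prompts).
class_emb = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
img_emb = np.array([[0.9, 0.1, 0.0],    # clearly class 0
                    [0.1, 0.95, 0.0],   # clearly class 1
                    [0.6, 0.6, 0.2]])   # ambiguous: filtered out
labels, conf, mask = zero_shot_pseudo_labels(img_emb, class_emb)
```

The same gating pattern composes with the other two directions: confident pseudo-labels can serve as distillation targets for a student model, and the threshold can be tightened in label-scarce vertical domains where errors are costly.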

Overall, the work outlines a roadmap for scalable, label-efficient visual intelligence systems that can continuously improve through data-engine feedback loops.

Conference: AIR-RES 2026 (April 13-15, 2026, Las Vegas, USA)