Introduced BridgeEQA, a benchmark for open-vocabulary Embodied Question Answering grounded in professional bridge inspection reports. Proposed Embodied Memory Visual Reasoning (EMVR), which formulates inspection as sequential navigation over an image-based scene graph.
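As a rough illustration of the EMVR formulation, the sketch below walks an agent across camera nodes of an image-based scene graph and accumulates what it sees as memory. The node structure, the greedy first-unvisited-neighbor policy, and the example node names are assumptions made for the sketch, not the actual BridgeEQA/EMVR implementation.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of sequential navigation over an image-based
# scene graph; fields and policy are assumptions, not the EMVR implementation.

@dataclass
class CameraNode:
    node_id: str
    image_path: str                                   # image captured at this viewpoint
    neighbors: list = field(default_factory=list)     # ids of connected viewpoints

@dataclass
class EmbodiedMemory:
    observations: list = field(default_factory=list)

    def observe(self, node: CameraNode) -> None:
        # A real system would store visual features; here we just keep image paths.
        self.observations.append(node.image_path)

def navigate(graph: dict, start_id: str, max_steps: int = 5) -> EmbodiedMemory:
    """Walk the scene graph for a fixed step budget, accumulating observations."""
    memory = EmbodiedMemory()
    visited = set()
    current = graph[start_id]
    for _ in range(max_steps):
        memory.observe(current)
        visited.add(current.node_id)
        # Placeholder policy: move to the first unvisited neighbor.
        unvisited = [n for n in current.neighbors if n not in visited]
        if not unvisited:
            break
        current = graph[unvisited[0]]
    return memory

if __name__ == "__main__":
    graph = {
        "deck_01": CameraNode("deck_01", "deck_01.jpg", ["pier_02"]),
        "pier_02": CameraNode("pier_02", "pier_02.jpg", ["deck_01", "girder_03"]),
        "girder_03": CameraNode("girder_03", "girder_03.jpg", ["pier_02"]),
    }
    memory = navigate(graph, "deck_01")
    print("Observed views:", memory.observations)
```

In a full system, the navigation policy and the final answer would typically be produced by a vision-language model reasoning over the accumulated observations.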

Built an interactive UI that lets researchers run and evaluate agents for large-scale visual reasoning and scene analysis. Features include 3D model navigation with selectable camera nodes, image bounding-box annotation, scene graph viewing and editing, and real-time monitoring of an agent as it navigates the 3D scene.
Created a multimodal change detection method that enables cross-dataset training, for the ICCV Workshop on Sustainability with Earth Observation & AI.
Released the largest change detection dataset to date, comprising 500K+ synthetic image pairs and 300K+ text prompts, advancing research in multimodal scene understanding. The dataset is part of the ICCV 2025 ViewDelta research.
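For a sense of how such paired images with text prompts could be consumed in training, the loader below assumes a hypothetical layout (a `before/` and `after/` folder plus a `prompts.json` mapping sample ids to prompts); it is a sketch, not the released dataset's actual format or loader.

```python
import json
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class ChangePairDataset(Dataset):
    """Hypothetical loader for (before, after, prompt) triplets.

    The directory layout and prompt file format are assumptions for
    illustration, not the released dataset's actual structure.
    """

    def __init__(self, root: str, transform=None):
        self.root = Path(root)
        # prompts.json is assumed to map a sample id to its text prompt.
        with open(self.root / "prompts.json") as f:
            self.prompts = json.load(f)
        self.ids = sorted(self.prompts.keys())
        self.transform = transform

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        sample_id = self.ids[idx]
        before = Image.open(self.root / "before" / f"{sample_id}.png").convert("RGB")
        after = Image.open(self.root / "after" / f"{sample_id}.png").convert("RGB")
        if self.transform is not None:
            before, after = self.transform(before), self.transform(after)
        return {"before": before, "after": after, "prompt": self.prompts[sample_id]}
```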
Developed an AI medical simulation prototype that secured $1.3M in Defense Health Agency research funding. Engineered a secure AI deployment pipeline for the U.S. Navy Fleet Readiness Center and established company-wide DevOps infrastructure.
Received the Best Paper Award for 'Unpaired image-to-image translation of structural damage', published in Advanced Engineering Informatics.
Developed EIGAN, a novel CycleGAN-based architecture that generates realistic synthetic images of damaged infrastructure from photos of undamaged structures. EIGAN enables controllable damage-severity synthesis for training robust deep learning models and was validated on post-earthquake building assessment datasets.
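The underlying idea follows the standard CycleGAN objective: two generators map between the undamaged and damaged domains, and a cycle-consistency loss forces a translate-then-translate-back round trip to reproduce the input. The sketch below shows only that cycle term with placeholder convolutional generators; it illustrates the general CycleGAN loss, not EIGAN's actual architecture or its severity-control mechanism.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the two generators; real architectures
# (and EIGAN's severity control) are far more involved.
G_damage = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # undamaged -> damaged
G_repair = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # damaged -> undamaged

l1 = nn.L1Loss()

def cycle_consistency_loss(real_undamaged, real_damaged, lam=10.0):
    """Standard CycleGAN cycle loss: translate, map back, compare to the source."""
    recon_undamaged = G_repair(G_damage(real_undamaged))
    recon_damaged = G_damage(G_repair(real_damaged))
    return lam * (l1(recon_undamaged, real_undamaged) + l1(recon_damaged, real_damaged))

if __name__ == "__main__":
    x = torch.randn(2, 3, 64, 64)   # batch of undamaged images
    y = torch.randn(2, 3, 64, 64)   # batch of damaged images
    print(cycle_consistency_loss(x, y).item())
```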
Deployed a neural radiance field-based anomaly detection system on UAVs for the U.S. Navy REPTX 2022 demonstration.
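One way a radiance-field system can flag anomalies is by rendering the expected view from a baseline reconstruction and thresholding its residual against the newly captured UAV image. The sketch below uses a hypothetical `render_view` stand-in for a real NeRF renderer and is an assumption-laden illustration, not the deployed pipeline.

```python
import numpy as np

def render_view(pose):
    """Hypothetical stand-in for rendering the baseline NeRF at a camera pose.

    A real system would query a trained radiance field; here we return a
    constant image so the example runs on its own.
    """
    return np.full((128, 128, 3), 0.5, dtype=np.float32)

def anomaly_mask(captured: np.ndarray, pose, threshold: float = 0.2) -> np.ndarray:
    """Flag pixels whose residual against the rendered baseline view is large."""
    rendered = render_view(pose)
    residual = np.abs(captured.astype(np.float32) - rendered).mean(axis=-1)
    return residual > threshold

if __name__ == "__main__":
    captured = np.random.rand(128, 128, 3).astype(np.float32)
    mask = anomaly_mask(captured, pose=None)
    print("Anomalous pixels:", int(mask.sum()))
```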







