I'm a graduate researcher at Dartmouth, advised inside the Hassanpour Lab — a group that builds machine learning tools for digital pathology, medical imaging, and clinical text. My days drift between PyTorch notebooks, ethics discussions, and long conversations with clinicians about what a model is actually being asked to see.
Before Dartmouth, I finished a CS degree with a Statistics minor at Adelphi, where my senior thesis — done with the New York Proton Center — looked at improving proton stopping power estimation for pediatric cancer therapy using dual-energy CT. That project is the reason I care about the weird little seams between numbers, bodies, and the clinicians reading them.
I like clean pipelines, well-commented code, and software that refuses to be dishonest about its uncertainty. I also like matcha, Russian literature, and escaping into the woods whenever possible.
Frontend and analysis layer for a dual-energy CT pipeline built with the New York Proton Center. The tool lets medical physicists inspect reconstructions, compare stopping-power estimates, and feed the results back into treatment planning — without leaving the browser.
A ReAct-style AI coding assistant that lives in the terminal. Built with semantic RAG for codebase navigation, automated task planning, and strict file-permission guardrails — because AI tools should write code, not quietly overwrite your directories. Basically, a pocket-sized software engineer.
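The guardrail idea is simple enough to sketch. This is a generic illustration, not the project's actual code: the tool names, the `ALLOWED_DIRS` allowlist, and the `ToolCall` shape are all assumptions made for the example.

```python
from dataclasses import dataclass
from pathlib import Path

# Hypothetical allowlist: the assistant may only write inside these
# directories, no matter what the model proposes.
ALLOWED_DIRS = [Path("./workspace").resolve()]

def is_write_allowed(target: str) -> bool:
    """Permit writes only inside explicitly allowed directories."""
    resolved = Path(target).resolve()
    return any(resolved.is_relative_to(d) for d in ALLOWED_DIRS)

@dataclass
class ToolCall:
    name: str    # e.g. "write_file", "read_file" (hypothetical tool names)
    target: str  # path the model wants to touch

def guard(call: ToolCall) -> str:
    # ReAct step boundary: before an "act" that mutates the filesystem
    # runs, it must pass the permission check.
    if call.name == "write_file" and not is_write_allowed(call.target):
        return f"DENIED: {call.target} is outside the allowed workspace"
    return f"OK: {call.name} on {call.target}"
```

The point of checking resolved paths (rather than raw strings) is that it also catches `../` escapes and symlink-style tricks the model might emit.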
An audit of skin lesion classifiers to determine whether they learn genuine pathology or merely exploit dataset artifacts such as clinical markings and demographic features. By systematically removing confounders and applying GradCAM, this project exposes what models are actually looking at — ensuring they diagnose the condition, not the photo.
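The core of GradCAM fits in a few lines: weight each feature map of the last convolutional layer by its average gradient, sum across channels, and keep only the positive evidence. A minimal NumPy sketch of that computation (array shapes and names are my assumptions, not the project's code):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """GradCAM heatmap from last-conv activations and their gradients.

    Both inputs have shape (C, H, W): one feature map per channel.
    """
    # One importance weight per channel: the spatial mean of its gradient.
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps -> a single (H, W) localization map.
    cam = np.tensordot(weights, activations, axes=1)
    # ReLU: keep only regions that positively support the predicted class.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()  # normalize to [0, 1] for overlay
    return cam
```

In the audit setting, the heatmap is what reveals shortcut learning: if the bright regions sit on a clinical marking or ruler rather than the lesion, the classifier is reading the photo, not the pathology.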