My name is Mariya, and I am a Senior Research Scientist at Meta Superintelligence Labs, where I study multimodal model behavior with a focus on visual reasoning, safety and alignment, and scalable evaluation of frontier models. Prior to my current role, I was an Applied Scientist at Amazon AWS, where I developed auditing methodologies for machine learning and computer vision systems in sensitive, high-impact settings, including multimodal foundation and generative models used in large-scale deployment.
I have spent several years in industry research working on a broad range of vision and multimodal areas, including image and video generative models, visual recommender engines, 2D-to-3D human body shape and pose modeling, and synthetic data generation for efficient training of foundation models at scale.
I obtained my PhD in Computer Science from the University of Illinois at Urbana-Champaign, where I was advised by David A. Forsyth. My doctoral research focused on problems in vision and language, visual search and retrieval, multimodal generation, and applications of computer vision in the fashion domain.
Outside of research, I spend an inordinate amount of time lifting heavy weights, as well as filming, editing, and color-grading candid scenes from my travels and film walks. I also maintain an objectively impressive collection of vintage clothing, sourced from thrift stores around the world — a habit that pairs suspiciously well with conference travel.
If you're interested in collaborating or just want to connect, feel free to reach out.