Research
(and other cool projects)
Abstract: The rapid advancement of astronomical survey technologies, such as the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST), is expected to generate millions of transient events annually, posing significant challenges in processing large volumes of unlabeled data. To address this, a deep learning model was developed, combining a Recurrent Neural Network Variational Autoencoder (RNN-VAE) for dimensionality reduction with a Gradient Boosting Classifier for real-time classification of transient events. This model efficiently classifies galactic and extragalactic transients without the need for labeled data. Using the PLAsTiCC dataset, the model achieved an AUC-ROC score of 0.94 and an F1 score of 0.89, demonstrating strong performance in distinguishing between various transient classes, including rare events. This approach offers a scalable solution for real-time astronomical surveys, enhancing both classification accuracy and resource allocation in future data-rich environments.
Publication/Presentation: the Stanford Undergraduate Research Journal, Vol. 20, Issue 2; the International Conference on Machine Learning for Astrophysics; the Cambridge Center for International Research Student Research Symposium
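The second stage of the pipeline above can be sketched in a few lines. This is a hypothetical, simplified illustration, not the published implementation: it assumes the RNN-VAE has already compressed each light curve into an 8-dimensional latent vector, and it substitutes synthetic latents with mild class structure so the gradient boosting step has signal to learn from.

```python
# Sketch of the RNN-VAE -> Gradient Boosting pipeline's classification stage.
# All dimensions, class counts, and data here are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n, latent_dim, n_classes = 300, 8, 3

# Stand-in for RNN-VAE latent vectors: one cluster center per transient class,
# plus Gaussian noise, mimicking a structured latent space.
y = rng.integers(0, n_classes, size=n)
centers = rng.normal(size=(n_classes, latent_dim))
X = centers[y] + 0.5 * rng.normal(size=(n, latent_dim))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The gradient boosting classifier labels each latent vector as a transient class.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
score = f1_score(y_te, clf.predict(X_te), average="macro")
```

Operating on compact latents rather than raw light curves is what makes the classification step cheap enough for real-time alert streams.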
"Prefilled Responses Enhance Zero-Shot Detection of AI-generated Images"
Abstract: As AI models generate increasingly realistic images, growing concerns over potential misuse underscore the need for reliable detection. Traditional supervised detection methods depend on large, curated datasets for training and often fail to generalize to novel, out-of-domain image generators. As an alternative, we explore pre-trained Vision-Language Models (VLMs) for zero-shot detection of AI-generated images. We evaluate VLM performance on three diverse benchmarks encompassing synthetic images of human faces, objects, and animals produced by 16 different state-of-the-art image generators. While off-the-shelf VLMs perform poorly on these datasets, we find that their reasoning can be guided effectively through simple response prefilling -- a method we call Prefill-Guided Thinking (PGT). In particular, prefilling a VLM response with the task-aligned phrase "Let's examine the style and the synthesis artifacts" improves the Macro F1 scores of three widely used open-source VLMs by up to 24%.
Our code is publicly available on GitHub: github.com/Zoher15/Zero-shot-s2.
Publication/Presentation: NeurIPS 1st Annual Workshop on Generative and Protective AI for Content Creation (oral)
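The core of Prefill-Guided Thinking is simply pre-seeding the assistant turn so the VLM continues its reasoning from a task-aligned phrase. The sketch below is a hedged illustration of that prompt construction only: it uses a generic OpenAI-style chat message schema (an assumption; the exact format depends on the VLM's chat template), and the model call itself is omitted. The prefill phrase is the one from the abstract.

```python
# Sketch of Prefill-Guided Thinking (PGT): the final assistant message is a
# partial response the model must continue from. Message schema is assumed.
PREFILL = "Let's examine the style and the synthesis artifacts"

def build_pgt_messages(question: str, image_ref: str) -> list[dict]:
    """Return a chat transcript whose last turn is the prefilled assistant text."""
    return [
        {"role": "user", "content": [
            {"type": "image", "image": image_ref},  # placeholder image reference
            {"type": "text", "text": question},
        ]},
        # The VLM generates a continuation of this partial assistant turn,
        # steering its reasoning toward style and synthesis artifacts.
        {"role": "assistant", "content": PREFILL},
    ]

msgs = build_pgt_messages("Is this image AI-generated? Answer yes or no.",
                          "example.png")
```

Because the prefill is plain text appended at inference time, the method needs no training data or fine-tuning, which is what makes it zero-shot.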
Poster Outcomes:
- Determined classes of student prompts and their frequency as the course progressed to understand LLM usage
- Validated prompt-fix effectiveness through student interviews
- Built a tool that leverages student prompt refinement to generate more helpful responses
- Future work: refine clustering using keywords from lecture and homework as a function of time; test the prompt tool in a real CS 1 class
3D Printed TPU Ear Shields + Haptic Alert System + GORE-TEX + HTML Application
Rechargeable Battery + Removable Circuit + Upgraded Logistic Regression
Directional Sensing and Alerts + Prototype Android Application + Upgraded Logistic Regression