Abstract: The rapid advancement of astronomical survey technologies, such as the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST), is expected to generate millions of transient events annually, posing significant challenges in processing large volumes of unlabeled data. To address this, a deep learning model was developed, combining a Recurrent Neural Network Variational Autoencoder (RNN-VAE) for dimensionality reduction with a Gradient Boosting Classifier for real-time classification of transient events. This model efficiently classifies galactic and extragalactic transients without the need for labeled data. Using the PLAsTiCC dataset, the model achieved an AUC-ROC score of 0.94 and an F1 score of 0.89, demonstrating strong performance in distinguishing between various transient classes, including rare events. This approach offers a scalable solution for real-time astronomical surveys, enhancing both classification accuracy and resource allocation in future data-rich environments.
Publication/Presentation: Stanford Undergraduate Research Journal, Vol. 20, Issue 2; International Conference on Machine Learning for Astrophysics; Cambridge Center for International Research Student Research Symposium
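To make the pipeline in the first abstract concrete, the sketch below shows one plausible way to pair an RNN-VAE encoder with a gradient boosting classifier: light curves are compressed to a low-dimensional latent vector, and the latent means are then used as features for the classifier. The GRU cells, layer sizes, loss weighting, and function names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: RNN-VAE dimensionality reduction + gradient boosting on latents.
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

class RNNVAE(nn.Module):
    def __init__(self, n_features=2, hidden=64, latent=16):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.GRU(latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def encode(self, x):
        # x: (batch, time, features), e.g. flux and flux error per observation
        _, h = self.encoder(x)
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        # repeat the latent vector across time steps and decode back to a light curve
        z_seq = z.unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z_seq)
        return self.out(dec), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction error plus KL divergence to a unit Gaussian prior
    rec = nn.functional.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def fit_classifier(vae, light_curves, labels):
    # after unsupervised VAE training, latent means become features
    # for a supervised gradient boosting classifier
    vae.eval()
    with torch.no_grad():
        mu, _ = vae.encode(light_curves)
    clf = GradientBoostingClassifier()
    clf.fit(mu.numpy(), labels)
    return clf
```

In this kind of setup, the VAE can be trained on the full unlabeled stream while the classifier only needs whatever labeled subset is available, which is one way the abstract's scalability claim could be realized in practice.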
Abstract: As image generators produce increasingly realistic images, concerns about potential misuse continue to grow. Supervised detection relies on large, curated datasets and struggles to generalize across diverse generators. In this work, we investigate the use of pre-trained Vision-Language Models (VLMs) for zero-shot detection of AI-generated images. While off-the-shelf VLMs exhibit some task-specific reasoning and chain-of-thought prompting offers gains, we show that task-aligned prompting elicits more focused reasoning and significantly improves performance without fine-tuning. Specifically, prefixing the model’s response with the phrase “Let’s examine the style and the synthesis artifacts”—a method we call zero-shot-s2—boosts Macro F1 scores by 8%–29%. These gains are consistent for two widely used open-source models and across three recent, diverse datasets spanning human faces, objects, and animals with images generated by 16 different models—demonstrating strong generalization. We further evaluate the approach across three additional model sizes and observe improvements in most dataset–model combinations—suggesting robustness to model scale. Surprisingly, self-consistency, a behavior previously observed in language reasoning, where aggregating answers from diverse reasoning paths improves performance, also holds in this setting. Even here, zero-shot-s2 scales better than chain-of-thought in most cases—indicating that it elicits more useful diversity. Our findings show that task-aligned prompts elicit more focused reasoning and enhance latent capabilities in VLMs, like the detection of AI-generated images—offering a simple, generalizable, and explainable alternative to supervised methods.
Our code is publicly available on GitHub: github.com/Zoher15/Zero-shot-s2.
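The core of zero-shot-s2 is a prompting detail: the assistant's response is seeded with the task-aligned prefix quoted in the abstract before generation continues. The sketch below illustrates that idea in Python; the `vlm_generate` callable, question wording, and answer parsing are hypothetical placeholders standing in for whatever open-source VLM inference stack is used, and are not taken from the released code.

```python
# Illustrative sketch of task-aligned response prefixing (zero-shot-s2) with
# optional self-consistency voting. `vlm_generate` is a placeholder for a real
# VLM call that continues the assistant turn after `response_prefix`.
from collections import Counter

QUESTION = "Is this image real or AI-generated? End your answer with 'real' or 'AI-generated'."
S2_PREFIX = "Let's examine the style and the synthesis artifacts"   # zero-shot-s2 prefix (from the paper)
COT_PREFIX = "Let's think step by step"                             # generic chain-of-thought baseline

def classify_image(image, vlm_generate, prefix=S2_PREFIX, n_samples=1):
    """Ask a VLM about one image, prefixing its response with a reasoning cue.

    vlm_generate(image, question, response_prefix, sample) -> str is assumed to
    return the text the model generates after the given response prefix.
    """
    votes = []
    for _ in range(n_samples):
        reply = vlm_generate(image, QUESTION,
                             response_prefix=prefix,
                             sample=(n_samples > 1))
        # crude label extraction from the tail of the model's reasoning
        tail = reply.lower().strip()[-40:]
        votes.append("ai-generated" if "ai-generated" in tail else "real")
    # self-consistency: majority vote over independently sampled reasoning paths
    return Counter(votes).most_common(1)[0][0]
```

With `n_samples > 1` and sampling enabled, the majority vote corresponds to the self-consistency setting discussed in the abstract; with `n_samples=1` it reduces to a single zero-shot-s2 query.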