Mononito Goswami

Hi, I’m Mononito! I’m a Robotics Ph.D. student at the Auton Lab within the School of Computer Science at Carnegie Mellon University, where I am advised by Prof. Artur Dubrawski. My research is driven by a commitment to making foundation models useful, efficient, and safe, particularly in high-stakes domains. I led the development of MOMENT, a family of multi-task time series foundation models, which have been downloaded over 900K times on HuggingFace. The second year of my Ph.D. was generously supported by the Center for Machine Learning and Health (CMLH) Fellowship 2021.

I had the privilege of working as a Student Researcher with the Athena team at Google Research in the latter half of 2024. Earlier, I spent two wonderful summers as an Applied Scientist Intern at Amazon Web Services (AWS) AI Labs. Before starting my Ph.D., I completed my undergraduate studies in computer engineering at Delhi Technological University (formerly Delhi College of Engineering) in India.

Feel free to reach out if you’re interested in my research or would like to collaborate!

You can find my CV here.

I am actively seeking research scientist positions and would love to connect if our research interests align!

Research Experience

Foundation Models

Machine Learning in the Real World

Selected Publications

A complete list of my papers can be found on my Google Scholar and Semantic Scholar profiles.

Time Series Foundation Models

MOMENT: A Family of Open Time-series Foundation Models

Mononito Goswami, Konrad Szafer, Arjun Choudhry, Yifu Cai, Shuo Li, Artur Dubrawski. In International Conference on Machine Learning (ICML) 2024. [ArXiv, Code, HuggingFace Weights, Time Series Pile]

JoLT: Jointly Learned Representations of Language and Time-Series for Clinical Time-series Interpretation.

Yifu Cai, Arvind Srinivasan, Mononito Goswami, Arjun Choudhry, Artur Dubrawski. In AAAI Conference on Artificial Intelligence (Student Abstract) 2024 (Best Student Abstract Presentation Award winner) [Paper] and in the Neural Information Processing Systems Workshop on Deep Generative Models for Health (DGM4H NeurIPS) 2023 [OpenReview].

Exploring Representations and Interventions in Time Series Foundation Models

Michał Wiliński, Mononito Goswami, Nina Żukowska, Willa Potosnak, and Artur Dubrawski. In NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability and NeurIPS 2024 Workshop on Time Series in the Age of Large Models. [ArXiv]

TimeSeriesExam: A Time Series Understanding Exam

Yifu Cai, Arjun Choudhry, Mononito Goswami, and Artur Dubrawski. In NeurIPS 2024 Workshop on Time Series in the Age of Large Models (Spotlight) and ICAIF 2024 Foundation Models for Time Series: Exploring New Frontiers Workshop (Oral, Best Paper Honorable Mention). [ArXiv, TimeSeriesExam]

Towards Long-Context Time Series Foundation Models

Nina Żukowska, Mononito Goswami, Michał Wiliński, Willa Potosnak, and Artur Dubrawski. In NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability and NeurIPS 2024 Workshop on Time Series in the Age of Large Models. [ArXiv]

Implicit Reasoning in Deep Time Series Forecasting

Willa Potosnak, Cristian Challu, Mononito Goswami, Michał Wiliński, Nina Żukowska, and Artur Dubrawski. In NeurIPS 2024 Workshop on System 2 Reasoning At Scale and NeurIPS 2024 Workshop on Time Series in the Age of Large Models [ArXiv].

Benchmarking and Evaluation

AQuA: A Benchmarking Tool for Label Quality Assessment

Mononito Goswami, Vedant Sanil, Arjun Choudhry, Arvind Srinivasan, Chalisa Udompanyawit, Artur Dubrawski. In Neural Information Processing Systems (NeurIPS) 2023 Datasets and Benchmarks Track (Poster). [ArXiv, Code]

Unsupervised Model Selection for Time-series Anomaly Detection

Mononito Goswami, Cristian Challu, Laurent Callot, Lenon Minorics, Andrey Kan. In International Conference on Learning Representations (ICLR) 2023 (Spotlight). [ArXiv, Code]