“For the things we have to learn before we can do them, we learn by doing them.” - Aristotle


Projects in Interpretability & Generative Modeling

  • Interpretability for SynFlowNet in Molecular Design (NeurIPS WiML 2025; 7th MoML @ MIT)

    • Developed the first interpretability framework for hierarchical GFlowNets in molecular design, advancing transparency of deep generative models used for molecular optimization.
    • Designed gradient-based saliency maps, SMARTS-driven counterfactuals, and sparse autoencoders to uncover atom-level attributions and disentangle key molecular factors (e.g., size, polarity, lipophilicity); a saliency sketch follows this list.
    • Demonstrated recovery of chemically meaningful motifs from embeddings, bridging ML representations with human reasoning.
    • Accepted as a poster at the NeurIPS WiML Workshop 2025 and the 7th Molecular Machine Learning Conference (MoML) @ MIT.
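
A minimal sketch of the gradient-based saliency idea used for atom-level attributions, assuming a PyTorch policy network that consumes per-atom feature tensors; the function and variable names are illustrative, not the project's actual API.

```python
import torch

def atom_saliency(policy_net, atom_features, action_idx):
    """Attribute one action logit back to atom-level input features.

    atom_features: (num_atoms, feat_dim) tensor of node features.
    action_idx: index of the action/logit to explain.
    """
    atom_features = atom_features.clone().requires_grad_(True)
    logits = policy_net(atom_features)         # assumed forward pass over atom features
    logits[action_idx].backward()              # d(logit) / d(input features)
    # Per-atom saliency = L2 norm of the gradient across the feature dimension.
    saliency = atom_features.grad.norm(dim=-1)
    return saliency / (saliency.max() + 1e-8)  # normalize to [0, 1] for visualization
```

The normalized scores can then be mapped back onto atoms (for example, as highlight colors on the molecular graph) and compared against known chemical motifs.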

Projects in Deep Learning

  • Leveraging Object Movement Predictions for Interactive Robot Assistance
    • Research advised by Prof. Sonia Chernova. Developed an explainable spatio-temporal graph neural network model for object tracking and future movement prediction in dynamic environments.
  • Deep Reinforcement Learning (RL) based Autonomous Driving
    • Trained the model-free RL algorithm TQC (Truncated Quantile Critics) for navigation in the Donkeycar simulator and increased rewards by 17% with experience replay, outperforming the benchmark algorithms DDPG, SAC, and PPO; a training sketch follows this list.
    • Trained a Variational Autoencoder to compress input images into a latent-space representation, improving rewards by 42%.
    • Generated semantic segmentation masks with a pretrained autoencoder to visualize what the model perceives, aiding interpretability.
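
A minimal training sketch of the TQC setup, assuming sb3-contrib's TQC implementation and a gym-donkeycar environment; the environment ID and hyperparameters below are placeholders, not the values tuned in the project.

```python
import gym
import gym_donkeycar  # registers the Donkeycar simulator environments (assumed installed)
from sb3_contrib import TQC

# Environment ID and hyperparameters are illustrative assumptions.
env = gym.make("donkey-generated-track-v0")

model = TQC(
    "MlpPolicy",
    env,
    buffer_size=100_000,    # experience replay buffer
    learning_starts=1_000,  # warm-up steps before gradient updates
    verbose=1,
)
model.learn(total_timesteps=50_000)
model.save("tqc_donkeycar")
```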

Projects in Computer Vision

  • Computer Vision Tools for Non-verbal Communication in Interviews — Research advised by Prof. James Rehg, Georgia Tech.

    • Trained Hidden Markov Model (HMM) and K-Nearest Neighbours (KNN) classifiers for head-gesture detection using OpenFace keypoints on the MIT Interview dataset.
    • Experimented with multi-ConvLSTM models for head-gesture detection using the AMI Meeting Corpus.
  • TrashNotBot project: Used Mask R-CNN to perform instance segmentation on images from a robot-mounted camera to identify pixels containing trash; an inference sketch follows this list.

  • Low-cost Intelligent Vision in Automotives (LIVA): Improved object detection in low light for autonomous vehicles. Selected as Top 6 finalist at QBuzz Conference 2019.

  • Vision-based gesture-controlled robotic arm — Final-year thesis at NIT Trichy; won the Best Final Year Project Award. Published as first author: ACM IPS Conference Paper
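
For the TrashNotBot bullet above, a minimal inference sketch using torchvision's pretrained Mask R-CNN; the actual project fine-tuned on trash imagery, and the file name and score threshold here are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained COCO weights for illustration; the project would fine-tune on a trash dataset.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("robot_camera_frame.jpg").convert("RGB"))  # assumed input frame
with torch.no_grad():
    prediction = model([image])[0]

# Keep confident detections; each mask is a (1, H, W) probability map over pixels.
keep = prediction["scores"] > 0.7
masks = prediction["masks"][keep] > 0.5   # boolean per-instance pixel masks
labels = prediction["labels"][keep]
```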

Projects in Natural Language Processing

  • Semantic Similarity and Toxicity Detection of Questions in Quora — Course project for 7641 Machine Learning.
    • Used TF-IDF vectorization and Word2Vec embeddings on the Quora Question Pairs dataset to predict intent similarity and toxicity; a TF-IDF similarity sketch follows this list.
    • Links: Demo Video | Project Website
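
A minimal sketch of the TF-IDF similarity baseline with scikit-learn; the example questions and the duplicate threshold are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative question pair; real input comes from the Quora Question Pairs dataset.
q1 = "How do I learn machine learning?"
q2 = "What is the best way to study machine learning?"

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([q1, q2])           # (2, vocab_size) sparse matrix

similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"TF-IDF cosine similarity: {similarity:.3f}")
is_duplicate = similarity > 0.6                      # assumed decision threshold
```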

Projects in Machine Learning

Enrolled as a remote summer student at Carnegie Mellon University in 2020 and completed the course 18-661: Intro to ML for Engineers. Bonus coursework projects included:

  • Analyzed COVID datasets and performed clustering with scikit-learn and Pandas. Modeled growth rates across US states.
  • Created a Decision Tree with scikit-learn to predict user song preferences based on a Spotify dataset.
  • Built a custom neural network from scratch in PyTorch. Improved classification accuracy using data augmentation, dropout, and Xavier weight initialization; a sketch of the dropout and Xavier setup follows this list.
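
A minimal sketch of a small PyTorch classifier with dropout and Xavier weight initialization; layer sizes and the input shape are assumptions, not the coursework's exact architecture.

```python
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Fully connected classifier with dropout and Xavier-initialized weights."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),          # dropout for regularization
            nn.Linear(hidden, num_classes),
        )
        # Xavier (Glorot) initialization for all linear layers.
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.net(x)

model = SmallClassifier()
logits = model(torch.randn(32, 784))     # batch of 32 flattened inputs
```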