projects

BobDyn - Vehicle Simulation & Characterization
Feb 2026 - Present [OpenModelica, Vue, Docker, SciPy]

BobDyn is an open-source high-performance Python framework designed to automate the execution, extraction, and visualization of vehicle dynamics simulations. It acts as an orchestration layer for OpenModelica binaries, transforming raw simulation data into engineered insights for Formula SAE development.


I helped design the modular repository structure and a Docker-based orchestration layer to ensure portable, isolated simulation environments. To maximize throughput, I implemented a parallel execution handler that lets the framework run locally on multiple cores or scale to high-performance clusters at the Texas Advanced Computing Center (TACC). Containerizing the OpenModelica binaries and managing them through this pipeline removes local environment dependencies and cut the time required for full-vehicle characterization.

I engineered a design-of-experiments scaffold specifically around ISO 4138 (Constant Radius) simulations. This wrapper automates large-scale parameter sweeps, allowing the team to analyze how variables like tire stiffness or anti-roll bar rates affect understeer gradients and vehicle stability.
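The sweep scaffold can be illustrated as a full-factorial case generator; the factor names and levels below are hypothetical examples, not the team's actual DOE:

```python
from itertools import product

def build_doe(factors):
    # Full-factorial design: one simulation case per combination of levels.
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

factors = {
    "tire_stiffness": [180e3, 200e3, 220e3],   # N/m (illustrative values)
    "arb_rate_front": [300, 400],              # N·m/deg (illustrative values)
}
cases = build_doe(factors)
print(len(cases))  # 6
```

Each generated case dict can then be handed directly to the parallel execution layer.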

BobDocs, built with VitePress, serves as a centralized knowledge base for simulation and characterization methods. The Vue 3 and Vite frontend is the primary interface for the project, with a high-performance, technical aesthetic that matches our engineering workflow.

Persistent Homology-Guided Image Compression
Feb 2025 - Present [Python, Ripser, NumPy, TDA]

The core objective of this research is to leverage Persistent Homology (PH) to intelligently filter frequencies. By analyzing images through the lens of algebraic topology, we can differentiate between structurally essential data (persistent features) and insignificant topological noise.


The framework utilizes Persistent Homology (PH) to identify the structural significance of frequencies by analyzing how topological features—such as connected components and loops—persist across varying spatial scales. Unlike traditional methods that rely on human-centric visual heuristics, this approach uses the 1-Wasserstein distance to calculate an importance score for each frequency pair in the Discrete Fourier Transform (DFT) spectrum. By reconstructing images using only high-persistence frequencies, the system preserves the underlying “shape” and connectivity of the data while effectively discarding insignificant topological noise.
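A simplified, NumPy-only sketch of the reconstruction step; raw DFT magnitude is used here as a placeholder for the persistence-based importance score, which the real pipeline computes from 1-Wasserstein distances between persistence diagrams:

```python
import numpy as np

def reconstruct_topk(image, keep_fraction=0.1, scores=None):
    # `scores` is a per-frequency importance map; in the actual framework
    # it comes from persistent homology, here magnitude is a stand-in.
    F = np.fft.fft2(image)
    if scores is None:
        scores = np.abs(F)
    k = max(1, int(keep_fraction * F.size))
    thresh = np.partition(scores.ravel(), -k)[-k]   # k-th largest score
    mask = scores >= thresh                          # keep top-scoring freqs
    return np.real(np.fft.ifft2(F * mask))

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                # a simple square: one connected component
out = reconstruct_topk(img, keep_fraction=0.05)
```

Swapping the magnitude map for a homology-derived score changes which frequencies survive, without touching the reconstruction machinery.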

This research offers significant merit for automated computer vision tasks where structural integrity is more critical than pixel-perfect visual fidelity. Experimental results demonstrate that while traditional methods like JPEG are susceptible to distortion in noisy conditions, homology-guided filtering maintains a high degree of topological similarity, as evidenced by superior performance in Bottleneck Distance and Betti Number metrics.

Bayesian Scavenger Hunt FRI 1
Apr 2026 [ROS2, Eigen, OpenCV, YOLOv8]

Developed within the Living with Robots Lab at UT Austin, this project involved building an autonomous "scavenger hunt" agent using a probabilistic search pipeline. By implementing Bayesian inference, the robot maintained a dynamic belief map to locate targets in environments with high uncertainty. We developed custom ROS 2 packages to manage sensor fusion and real-time path planning, moving the system beyond deterministic search toward a more intelligent, uncertainty-aware decision model.


This project focused on engineering an autonomous search agent for the BWI Bot v2. I implemented a Bayesian inference engine that allowed the robot to maintain a dynamic belief map of its environment, quantifying the probability of a target’s location as it gathered new sensor data. By moving away from static search patterns, the robot could intelligently reason about uncertainty, updating its internal model in real-time to reflect the most likely coordinates of its objective. The core of the “scavenger hunt” logic was a specialized traversal algorithm designed to prioritize high-confidence areas. The system analyzed the global belief map and autonomously planned paths to the highest belief regions, creating a robust search-and-find system for the BWI platform.
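The belief update itself is Bayes' rule applied cell-by-cell to a grid. A minimal sketch with a hypothetical sensor likelihood model (not the BWI Bot's actual detector):

```python
import numpy as np

def update_belief(belief, likelihood):
    # Bayes' rule on a grid: posterior ∝ prior × P(observation | target here).
    posterior = belief * likelihood
    return posterior / posterior.sum()

def next_goal(belief):
    # Plan toward the highest-belief cell.
    i, j = np.unravel_index(np.argmax(belief), belief.shape)
    return int(i), int(j)

belief = np.full((10, 10), 1 / 100)      # uniform prior over a 10x10 map
likelihood = np.full((10, 10), 0.2)      # hypothetical detector model
likelihood[3, 7] = 0.9                   # strong detection near cell (3, 7)
belief = update_belief(belief, likelihood)
print(next_goal(belief))  # (3, 7)
```

Repeating this update as new observations arrive is what lets the belief map sharpen over time instead of following a fixed search pattern.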

Autopilot - MIT CRE[AT]E Challenge
Feb 2026 - Apr 2026 [Flutter, Tavily, Gemini VLM, Flask]

Autopilot is a mobile navigation aid developed for the MIT CRE[AT]E challenge to assist my visually impaired sister in navigating complex public environments. The application uses a Gemini VLM and the Tavily search API to provide real-time, context-aware spatial descriptions and semantic wayfinding via a Flutter-based interface.


The primary goal of Autopilot was to give my sister the confidence to handle large, unfamiliar indoor environments like malls or airports without needing to rely on another person. Standard GPS often fails in these settings, so we built a system that uses Gemini VLMs to “read” the room in real-time. By processing a live feed from her phone, the app identifies landmarks—like a particular store entrance or a restroom sign—and translates those visual cues into clear, fast audio instructions that describe exactly where she needs to go.

To ensure the navigation was grounded in the actual layout of the building, we integrated Tavily to dynamically scrape for digital floorplans and mall directories. The system correlates this external map data with the live visual landmarks identified by the VLM. This means the app doesn’t just see a “hallway”; it knows that hallway leads to the food court according to the directory.

S&P 500 Cluster Rotation Strategy
May 2025 [Streamlit, pandas, scikit-learn, matplotlib]

The S&P 500 Cluster Rotation Strategy is a momentum-based trading dashboard built with Streamlit that uses machine learning to identify structural stock communities. By applying Principal Component Analysis (PCA) and Spectral Clustering to historical correlation matrices, the system isolates primary market factors and groups stocks into natural "manifolds" that move together.


For this project, I wanted to move past basic technical indicators and look at the actual structural relationships between stocks in the S&P 500. I used Principal Component Analysis (PCA) as a first step because stock data is incredibly noisy; by reducing the dimensions of the dataset, I could isolate the “eigen-factors” that actually drive market movement rather than getting hung up on daily price fluctuations. This preprocessing was essential for the clustering stage to ensure the model was grouping stocks based on shared underlying trends rather than coincidental noise.
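A toy illustration of the idea, using a synthetic return matrix driven by a single shared factor; the eigendecomposition of the correlation matrix recovers one dominant “eigen-factor,” mirroring how PCA separates market-wide movement from idiosyncratic noise (the data here is simulated, not S&P 500 history):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily returns: 50 stocks, 250 days, one shared market factor.
market = rng.normal(0, 0.01, 250)
returns = 0.8 * market[:, None] + rng.normal(0, 0.01, (250, 50))

corr = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending
explained = eigvals / eigvals.sum()        # variance share per eigen-factor
# The top eigen-factor dominates because it captures the shared market mode.
top_share = float(explained[0])
```

On real data the leading eigen-factors play the same role: they carry the co-movement, and everything below them is mostly noise to be discarded before clustering.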

I chose Spectral Clustering over a standard approach like K-Means because stock market correlations don’t usually fall into neat, spherical blobs. Spectral Clustering is much more effective here because it looks at the connectivity of the correlation matrix, allowing it to identify “manifolds” or communities of stocks that move together structurally even if they aren’t close in a simple Euclidean sense. The end result is a rotation strategy that shifts capital into whichever of these natural communities shows the strongest momentum, which I backtested and visualized through a Streamlit dashboard to track metrics like Sharpe ratio and drawdowns.
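A small sketch of the clustering step on synthetic two-community returns, using scikit-learn's `SpectralClustering` with a precomputed affinity derived from the correlation matrix; the data and the affinity mapping are illustrative choices, not the production pipeline:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
# Two synthetic "communities": stocks 0-9 share one factor, 10-19 another.
f1, f2 = rng.normal(0, 0.01, (2, 500))
block1 = 0.9 * f1[:, None] + rng.normal(0, 0.004, (500, 10))
block2 = 0.9 * f2[:, None] + rng.normal(0, 0.004, (500, 10))
returns = np.hstack([block1, block2])

corr = np.corrcoef(returns, rowvar=False)
affinity = (corr + 1) / 2          # map correlations from [-1, 1] into [0, 1]
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
# Each block of co-moving stocks should collapse to a single cluster label.
```

Because the affinity is the correlation structure itself, the algorithm groups by connectivity rather than Euclidean proximity, which is exactly the “manifold” behavior described above.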

StepLight - Assistive Navigation System
Jun 2024 - May 2025 [Python, TensorFlow, OpenCV, TypeScript, Coral Dev Board]

Wearable system to help visually impaired people navigate crosswalks independently. Lightweight ML pipeline using OpenCV and TensorFlow on an embedded Coral Dev Board for real-time edge inference. Developed under the guidance of Ablr's visually impaired CEO, keeping user needs central throughout.


StepLight was designed as a wearable navigation system to help my sister and other visually impaired pedestrians safely cross complex intersections. The hardware stack centers on a Coral Dev Board responsible for running a localized classification model that identifies crosswalk boundaries and pedestrian signals in real-time. This edge-computing setup pairs with a mobile application to provide immediate haptic or audio feedback, ensuring the user stays oriented within the crosswalk lines.

The technical core of StepLight is the deployment of a specialized classification model onto the Coral Dev Board. I chose this hardware specifically for its Edge TPU, which allowed us to run computer vision inferences locally with extremely low latency, a requirement when providing split-second feedback in a high-traffic intersection. By processing the visual data directly on the wearable board rather than in the cloud, the system remains reliable even in areas with poor cellular connectivity, focusing entirely on keeping the user centered within the crosswalk.

Autonomous Racecar Robotics System
Jul 2024 - Aug 2024 [Python, ROS2, OpenCV, LiDAR, IMU]

Autonomous robotics system for a 1:14 scale racecar at MIT Beaver Works, integrating LiDAR, IMU, and computer vision via ROS2 and OpenCV with sensor fusion. Co-authored a published research paper on fiducial pose estimation applications for emergency vehicle yielding, achieving 98% accuracy.


During the MIT Beaver Works program, we progressed through weekly Grand Prix competitions that served as stress tests for our robotics primitives. While the hardware and ROS2 abstractions were provided, the challenge was engineering a high-performance autonomy stack capable of balancing speed with reliability for the final race.

Our team developed a path-planning algorithm that prioritized consistent trajectory tracking and obstacle avoidance. I focused on sensor fusion and fiducial detection, using OpenCV to identify AprilTags with 98% accuracy. This vision pipeline was central to our published research on emergency vehicle yielding, allowing the car to make split-second state transitions based on localized visual cues. By refining our control logic to handle sensor noise and dynamic track conditions better than the field, my team won the Final 1:14 Scale Autonomous Grand Prix.
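As one illustration of the kind of IMU fusion involved (a textbook complementary filter, not our team's exact implementation), fast gyro integration can be blended with drift-free accelerometer estimates:

```python
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    # Trust the gyro for fast changes, the accelerometer to correct drift.
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

pitch = 0.0
for _ in range(100):                 # 1 s of 100 Hz IMU samples
    gyro_rate = 0.1                  # rad/s: a constant gyro bias
    accel_pitch = 0.0                # accelerometer reports "level"
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt=0.01)
# Pure integration of the biased gyro would drift to 0.1 rad after 1 s;
# the accelerometer term keeps the fused estimate bounded well below that.
```

The same trade-off (fast-but-drifting versus slow-but-absolute sensors) is what motivates fusing LiDAR, IMU, and vision on the full autonomy stack.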