I'm a data science student, still early in my journey but deeply curious and eager to grow. Here's a little of what I've been exploring and building so far.
Research prototypes and production systems spanning detection, segmentation, 3D vision, and edge deployment.
A real-time multi-object tracking and segmentation pipeline for autonomous driving. Fuses LiDAR point clouds with camera frames using a custom transformer-based architecture, running at 30 FPS on NVIDIA Orin.
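At the heart of most LiDAR-camera fusion transformers is cross-attention between the two sensor streams. The sketch below is illustrative only, not the project's actual architecture: a single-head, NumPy-only cross-attention in which camera tokens (queries) attend to LiDAR tokens (keys/values); all names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(cam_feats, lidar_feats):
    """Camera tokens attend to LiDAR tokens (single head, no projections)."""
    d_k = cam_feats.shape[-1]
    scores = cam_feats @ lidar_feats.T / np.sqrt(d_k)   # (N_cam, N_lidar)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ lidar_feats                         # fused camera tokens

rng = np.random.default_rng(0)
cam = rng.standard_normal((16, 64))     # 16 camera tokens, 64-d
lidar = rng.standard_normal((128, 64))  # 128 LiDAR tokens, 64-d
fused = cross_attention_fuse(cam, lidar)
print(fused.shape)  # (16, 64)
```

A production variant would add learned query/key/value projections, multiple heads, and positional encodings for the LiDAR geometry.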
Monocular depth estimation model achieving state-of-the-art results on NYU Depth v2. Uses a hybrid CNN-Transformer architecture with self-supervised pretraining on unlabeled video.
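Results on NYU Depth v2 are conventionally reported with the absolute relative error and the δ < 1.25 threshold accuracy. A minimal sketch of those two standard metrics (the function name and epsilon handling are my own):

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    """Absolute relative error, and the fraction of pixels whose
    ratio max(pred/gt, gt/pred) is below the 1.25 threshold."""
    pred, gt = pred.ravel(), gt.ravel()
    abs_rel = np.mean(np.abs(pred - gt) / (gt + eps))
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    delta1 = np.mean(ratio < 1.25)
    return abs_rel, delta1

gt = np.array([1.0, 2.0, 4.0])
pred = np.array([1.1, 2.0, 5.2])
abs_rel, delta1 = depth_metrics(pred, gt)
# abs_rel ≈ 0.133, delta1 = 2/3 (the 5.2 vs 4.0 pixel exceeds 1.25x)
```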
Distilled and quantized version of SAM optimized for edge deployment. Runs interactive segmentation at 15 FPS on Jetson Nano with INT8 TensorRT inference.
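INT8 inference of the kind TensorRT performs starts from mapping float weights onto an 8-bit grid. As a hedged illustration of the idea (not TensorRT's actual calibration pipeline), here is symmetric per-tensor quantization, where the reconstruction error is bounded by half the scale:

```python
import numpy as np

def int8_quantize(w):
    """Symmetric per-tensor INT8: one scale derived from max |w|."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([-0.5, 0.1, 0.49], dtype=np.float32)
q, s = int8_quantize(w)
w_hat = int8_dequantize(q, s)
# round-trip error is at most scale/2 per element
```

Real deployments typically add per-channel scales and calibration on representative activations.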
Real-time 3D scene reconstruction from posed RGB images using neural implicit surfaces. Generates textured meshes from sparse views with NeRF-inspired volume rendering.
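NeRF-inspired volume rendering composites color along a ray from per-sample densities: alpha_i = 1 − exp(−σ_i·δ_i), weighted by the accumulated transmittance. A minimal sketch of that compositing step (sample values are made up for illustration):

```python
import numpy as np

def composite(sigma, colors, deltas):
    """Volume-render one ray: alpha blending with transmittance weights."""
    alphas = 1.0 - np.exp(-sigma * deltas)
    # transmittance: probability the ray survives up to each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return weights @ colors, weights

sigma = np.array([0.0, 100.0, 0.0])   # opaque surface at the middle sample
deltas = np.ones(3)
colors = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
rgb, weights = composite(sigma, colors, deltas)
# rgb ≈ [0, 1, 0]: the opaque sample dominates the rendered color
```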
A high-performance data augmentation library for CV pipelines. GPU-accelerated transforms, mosaic augmentation, and domain-specific presets for medical, satellite, and autonomous driving data.
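Mosaic augmentation stitches four training images into one canvas around a randomly placed center, exposing the model to varied scales and contexts in a single sample. A CPU-only sketch of the idea, not the library's GPU implementation (nearest-neighbor resizing and the seeded RNG are simplifications):

```python
import numpy as np

def mosaic(images, out_size, seed=0):
    """Place four images into the quadrants defined by a random center."""
    h, w = out_size
    rng = np.random.default_rng(seed)
    cy = int(rng.integers(h // 4, 3 * h // 4))
    cx = int(rng.integers(w // 4, 3 * w // 4))
    canvas = np.zeros((h, w, 3), dtype=images[0].dtype)
    regions = [(slice(0, cy), slice(0, cx)), (slice(0, cy), slice(cx, w)),
               (slice(cy, h), slice(0, cx)), (slice(cy, h), slice(cx, w))]
    for img, (rs, cs) in zip(images, regions):
        rh, rw = rs.stop - rs.start, cs.stop - cs.start
        # naive nearest-neighbor resize into the quadrant
        yi = np.linspace(0, img.shape[0] - 1, rh).astype(int)
        xi = np.linspace(0, img.shape[1] - 1, rw).astype(int)
        canvas[rs, cs] = img[yi][:, xi]
    return canvas

rng = np.random.default_rng(1)
imgs = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(4)]
m = mosaic(imgs, (128, 128))
print(m.shape)  # (128, 128, 3)
```

A full implementation would also remap bounding boxes and masks into the mosaic's coordinate frame.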
I am a data science student learning how to understand data and make better-informed decisions. I am interested in exploring patterns, sharpening my analytical thinking, and building a strong foundation in data science.
My journey started with basic statistics and data analysis, and over time I began applying these concepts to real-world problems. I am still learning, and I keep working to deepen my understanding of data, models, and practical applications.
In my free time, I enjoy studying new concepts, revisiting old ones, working on small projects, and gradually improving my data science skills.
Leading the perception team building real-time multi-sensor fusion for autonomous systems. Designed a transformer-based detection pipeline achieving 45 mAP on internal benchmarks while maintaining 30 FPS on embedded hardware.
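Detection scores like mAP rest on matching predicted and ground-truth boxes by intersection-over-union. As a self-contained illustration of that matching criterion (not the benchmark's evaluation code), here is IoU for axis-aligned `[x1, y1, x2, y2]` boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7 ≈ 0.143
```

mAP then averages precision over recall levels, classes, and (in COCO-style protocols) IoU thresholds from 0.5 to 0.95.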
Built and deployed instance segmentation models for industrial quality inspection. Reduced the defect escape rate by 73% and cut inference latency from 200 ms to 35 ms using TensorRT quantization.
Developed novel data augmentation strategies for medical image segmentation, published two papers at MICCAI, and contributed to open-source annotation tooling.
Peer-reviewed publications at top computer vision and machine learning venues.
A novel lightweight fusion module that unifies semantic and instance segmentation branches, achieving 48.3 PQ on COCO Panoptic while running at 28 FPS on a single GPU.
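Panoptic Quality (PQ) is defined as the sum of IoUs over matched segment pairs divided by TP + FP/2 + FN/2, where a match requires IoU > 0.5. A minimal sketch of that final formula, assuming the segment matching has already been done (the real metric computes the IoUs from pixel overlaps and averages per class):

```python
def panoptic_quality(match_ious, num_pred, num_gt, thresh=0.5):
    """PQ = (sum of matched IoUs) / (TP + FP/2 + FN/2)."""
    tp_ious = [i for i in match_ious if i > thresh]  # true positives
    tp = len(tp_ious)
    fp = num_pred - tp   # unmatched predicted segments
    fn = num_gt - tp     # unmatched ground-truth segments
    return sum(tp_ious) / (tp + fp / 2 + fn / 2)

pq = panoptic_quality([0.9, 0.6, 0.4], num_pred=4, num_gt=3)
print(pq)  # 1.5 / 3.5 ≈ 0.429
```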
A self-supervised framework that learns depth from monocular video by enforcing geometric consistency between predicted depth, optical flow, and ego-motion, with explicit handling of moving objects.
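The supervisory signal in such frameworks is typically a photometric reconstruction loss: warp the source frame into the target view using the flow implied by predicted depth and ego-motion, then penalize the per-pixel difference. A toy NumPy sketch of that loss, not the framework's implementation (nearest-neighbor warping and border clipping are simplifications; moving-object handling would mask inconsistent pixels):

```python
import numpy as np

def photometric_loss(target, source, flow):
    """L1 difference between target and the flow-warped source frame."""
    h, w = target.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # nearest-neighbor sampling of the source at flow-displaced coordinates
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    warped = source[ys2, xs2]
    return np.abs(target - warped).mean()

t = np.arange(16.0).reshape(4, 4)
zero_flow = np.zeros((4, 4, 2))
loss = photometric_loss(t, t, zero_flow)
print(loss)  # 0.0: identical frames under zero flow reconstruct perfectly
```

Differentiable versions use bilinear sampling so the loss can backpropagate into the depth and pose networks.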
Vision experiments, tutorials on model deployment, and deep dives into CV research.
Open to research collaborations, consulting on vision systems, or full-time roles in perception and computer vision.