# Friend Links
Thanks for stopping by 👋
Here you’ll find a curated collection of my Medium articles — shared through friend links so you can read them without a paywall.
If you enjoy the content and want to support my work, a few small actions go a long way:
- 👍 Leave a like or highlight sections you find useful
- 💬 Share your thoughts or questions in the comments
- 🔖 Bookmark articles you want to revisit
- 👤 Follow me on Medium to stay updated
- 📖 Spend a moment reading — thoughtful engagement helps the work reach more people
- 🔗 Share articles with friends or colleagues who might find them helpful
Your support helps me continue creating deep, practical content on machine learning, computer vision, and efficient engineering.
## Articles
- The Tricks That Make Production 3DGS Fast (Even If Ours Isn’t)
- Splat Your Own Gaussians - From Circles to Ellipses
- Circles Are Not Gaussians (But Let’s Pretend They Are)
- I Built the Slowest 3D Gaussian Splatting Renderer… On Purpose
- Still Avoiding einsum()? It’s Time to Fix That
- Mastering NumPy - Manual Metadata Manipulation for Memory-Efficient Arrays
- The Power of Views - How NumPy Avoids Copies and Saves Memory
- Why NumPy Arrays Are So Fast (And How They Really Work)
- FlashAttention — Visually and Exhaustively Explained
- FlashAttention from First Principles
- Vision Mamba - Like a Vision Transformer but Better
- Here Comes Mamba - The Selective State Space Model
- Structured State Space Models Visually Explained
- Towards Mamba State Space Models for Images, Videos and Time Series
- The Rise of Diffusion Models - A new Era of Generative Deep Learning
- Depth Anything - A Foundation Model for Monocular Depth Estimation
- Turn Yourself into a 3D Gaussian Splat
- DINO - A Foundation Model for Computer Vision
- Segment Anything - Promptable Segmentation of Arbitrary Objects
- BYOL - The Alternative to Contrastive Self-Supervised Learning
- GLIP - Introducing Language-Image Pre-Training to Object Detection
- The CLIP Foundation Model
- Implement Multi-GPU Training on a single GPU
- Fourier CNNs with Kernel Sizes of 1024x1024 and Larger
- MLP Mixer in a Nutshell
- Create your own GPU accelerated Jupyter Notebook Server with Google Colab using Docker
- Accelerated Distributed Training with TensorFlow on Google's TPU
- Speed up your Training with Mixed Precision on GPUs and TPUs in TensorFlow