Great Resources
A collection of good resources I have found and enjoy on the internet. You find a few good ones when going through the Library of Babel, but you miss many more; to wit, this is not an exhaustive list.

This is an ongoing post collecting resources I have enjoyed using to learn about new things. I have broken it down by category. Any ordering is unintentional.
Blogs
- How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog covers how to optimize matrix multiplication kernels to perform like cuBLAS.
- How to write good prompts: using spaced repetition to create understanding goes into techniques for learning. In some of his other posts, the author argues that tasks a layperson may assume are more intellectual in nature are actually highly memory-bound, underscoring the importance of memory.
- Entropy from First Principles finally gave me an intuitive explanation for entropy (a tall order).
Books
- Reinforcement Learning by Sutton and Barto is a classic for machine learning
- Elements of Statistical Learning is another classic
- Any Stripe Press book. I have read a few of these; all are fantastic reads. Also, everything about these books, from the site to the cover to the pages themselves, is beautiful.
- Working in Public helped me articulate open-source, and the complicated incentives at play, to people unfamiliar with the subject
- Poor Charlie’s Almanack
Videos
- Coffee Before Arch’s CUDA Crash Course goes in-depth for CUDA programming
Podcasts
- Dwarkesh Patel Podcast is a "highly rated, but still underrated" podcast in which the interviewer, Dwarkesh, spends an enormous amount of time preparing for each interview. As the saying goes, the future belongs to those who prepare like Dwarkesh Patel.
YouTube Channels
This is broken out because these channels are specifically interesting to me. If I miss any obvious ones, like 3Blue1Brown, that is okay; my goal here is to feature lesser-known channels with differentiated content.
- Breaking Taps is a channel that introduced me to the idea that there are some people out there who build hobby silicon projects.
- Mutual Information has some good videos which I liked. Two in particular were:
  - Why Do Neural Networks Love the Softmax?
  - The Boundary of Computation, which is on the Busy Beaver numbers
- Stuff Made Here is a channel that has slowed down a bit in recent years. I remember when they published a video every two weeks of a high-quality project that must have taken hundreds of hours. Insane work ethic, and in my mind it paints an aspirational picture of what an engineer can be and accomplish.
- Inigo Quilez and his corresponding blog is the best resource I have found to learn more about ray marching and shaders.
Tools
Well-tested tools that are often overlooked in the LLM-powered AI code generation madness.
- GitHub Code Search, cleverly used, can turn up code examples for niche problems. Often, I have found that searching for
".method_name("
to find method invocations, then filtering by language and adding a keyword loosely associated with the project, is enough to narrow things down to a couple of relevant results. This works because there are orders of magnitude more uploaded code than articles or Stack Overflow posts. In my experience, if a relevant snippet can be found, trying it yourself works 90%+ of the time. Code search is the first thing I turn to when I need to break through a plateau in a software project.
- Finding out how to collect LTTng traces in Docker was a recent example where code search (specifically, a single result) saved me and let me figure out how to get tracing working in our Docker-based testing environment as part of Cavalier Autonomous Racing.
- LLVM Toolchain is the closest thing to a free lunch for C / C++ development I have found in a while.
  - clang is faster than GCC and produces faster binaries. Unless you have a legacy project or some very special use case, there is no reason not to use it.
  - clangd is the best LSP for C / C++ projects.
  - clang-tidy is useful for cleaning up large-scale software projects so you do not see so many issues in clangd (lol).
  - clang-format
  - lld is simply a faster linker, and it supports ThinLTO, which is a godsend for maintaining build speed while still enabling link-time optimization.
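The code-search trick above can be made concrete. As a sketch (the method name and keyword are hypothetical, not from a search I actually ran), a query typed into github.com code search might combine a quoted invocation pattern with the `language:` qualifier:

```text
".enable_tracing(" language:c lttng
```

The quoted string matches literal call sites, and the loose keyword narrows the results to repositories plausibly related to the problem at hand.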
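To show how the pieces above fit together, here is a minimal sketch of a CMake configure step that opts into clang, lld, and ThinLTO. The paths and flags are typical defaults I am assuming, not taken from any particular project:

```shell
# Hedged sketch: configure a CMake project to use the LLVM toolchain.
# -flto=thin enables ThinLTO; -fuse-ld=lld tells the clang driver to link with lld.
cmake -S . -B build \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DCMAKE_CXX_FLAGS="-flto=thin" \
  -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=lld -flto=thin"
```

clang-tidy and clang-format then pick up their usual `.clang-tidy` and `.clang-format` files from the project root.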
Stay in touch
Subscribe to my RSS feed to stay updated
Have any questions?
Feel free to contact me! I will answer any and all inquiries.