# Welcome to My Notes

This is the living, working memory behind my writing at https://www.chrishayduk.com. If the blog is where I publish polished essays, this space is where ideas are grown, tested, and connected.

Use it to explore how I think across frontier ML, AI × biology, systems/strategy, and history. Follow links, check backlinks, and use search to wander — the value is in the connections.

For more context on my note-taking strategy, check out "How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking" by Sönke Ahrens, which covers the Zettelkasten method.

For some of my takes on the Zettelkasten method:

- [[Zettelkasten as compression]]
- [[Zettelkasten is accelerated traversal through Bloom's taxonomy]]
- [[Zettelkasten provides vector-retrieval for our thoughts]]
- [[Learning is compression and compression is connection]]
- [[Writing is both the means and the ends of learning]]

## How This Vault Is Organized

- Zettelkasten
  - Evergreen notes: atomic, durable ideas that cross-link heavily.
  - Expect strong statements, early models, and lots of references between notes.
- Fleeting Notes
  - Short, provisional captures (questions, sketches, half-ideas) that graduate into Zettelkasten notes once refined.
  - Useful for seeing the scaffolding behind finished essays.
- Reference Notes
  - Structured summaries of sources — books, research papers, talks, courses, and Readwise highlights.
  - Subfolders: `Books`, `Research Papers`, `Articles and Blog Posts`, `Conferences and Lectures`, `Courses`, `Readwise`.

Tip: Navigation works best by following links and using search/backlinks. Many notes are brief by design; meaning emerges from the network.

## Start Here

- Zettelkasten — durable, evergreen ideas:
  - [[Zettelkasten/Core reinforcement learning terms]] — definitions of many key ideas in reinforcement learning. This note is an entry point to many of my other RL notes, with links throughout to deeper discussions of specific topics and terms.
  - [[Zettelkasten/Decoder-Only Transformers as Differentiable n-gram Models]] — intuition for why decoder LMs work so well.
  - [[Zettelkasten/How ESMFold Replaces MSA with Language Modeling]] — where protein LMs meet structure.
  - [[Zettelkasten/Empires are fractal]] — a systems lens on imperial dynamics.
  - [[Zettelkasten/The Pyramid Strategy for becoming exceptionally good]] — compounding skill strategy.
- Reference Notes — summaries of books, papers, and talks:
  - [[Reference Notes/Books/Grokking Deep Reinforcement Learning]] — practical RL foundations.
  - [[Reference Notes/Research Papers/Differential Transformer]] — an architecture for long-context reasoning.
  - [[Reference Notes/Conferences and Lectures/Shaping the Future of AI from the History of the Transformer]] — where the field is headed and why.

## From The Blog

- Home: https://www.chrishayduk.com
- Series to sample:
  - DeepSeek (routing, KV compression, V-series): https://www.chrishayduk.com/p/understanding-deepseek-part-i-deepseekmoe
  - Protein LMs (ESM2 → ESM3): https://www.chrishayduk.com/p/understanding-protein-language-models
  - ESM3 and what’s next: https://www.chrishayduk.com/p/esm3-and-the-future-of-protein-language
  - Strategy — GPT‑5 implications: https://www.chrishayduk.com/p/the-strategic-implications-of-gpt
  - Civilizational tail risks: https://www.chrishayduk.com/p/managing-civilizational-tail-risks

If you like the notes, you’ll probably enjoy the weekly explainers. The threads connect.
## About Chris

I’m a Machine Learning Engineer at Meta working on applied ML for ads ranking. Previously, I led ML for Drug Discovery at Deloitte, building biomedical knowledge graphs and training models for in‑silico optimization.

I studied Computer Science & Mathematics (B.S., Fordham), Mathematics (M.S., CCNY), and Applied Statistics (M.S., Fordham Gabelli), and I’m completing an M.S. in Computer Science (ML) at Georgia Tech.

- Twitter: https://twitter.com/chris_hayduk1
- LinkedIn: https://www.linkedin.com/in/chrishayduk/
- GitHub: https://github.com/ChrisHayduk
- Goodreads: https://www.goodreads.com/chris_hayduk

Thanks for visiting — start with a link that sparks curiosity, then follow the graph.