Resources & Tools
Curated books, courses, research, and industry tools that have shaped how I build and operate production systems. Use these to sharpen fundamentals and improve reliability in AI systems.
These books have shaped my understanding of AI, ML, and their impact on society:
The comprehensive textbook on deep neural networks. Useful for understanding architectures, optimization, and advanced techniques.
The seminal paper introducing Transformers. Foundation for modern LLMs. A useful reference for understanding the architecture.
Explores AI's potential impact on humanity. Useful for thinking about AI safety, alignment, and societal implications.
Critical look at algorithmic bias and fairness. Important for building responsible, ethical AI systems.
Explores risks of advanced AI systems. Critical for understanding long-term AI safety and alignment challenges.
Concise overview of ML fundamentals. Good for quick reference and revisiting core concepts.
A selection of courses I've found useful for structured learning:
Top-down approach to deep learning: hands-on practice first, with theory introduced as needed.
Broad coverage of foundational machine learning concepts.
Advanced NLP and Transformers. From Stanford's computer science department. Research-level material.
Building LLM applications with prompt engineering, RAG, and agents, with an emphasis on applied techniques.
Free course on Transformers, fine-tuning, and building NLP applications with practical examples; a minimal pipeline sketch follows this list.
Focus on production ML systems, data, pipelines, and real-world engineering practices.
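To give a flavor of the hands-on style of the Hugging Face course above, here is a minimal sketch using the Transformers pipeline API; the task and example sentence are placeholders I chose, not course material:

# Minimal text-classification example with the Hugging Face pipeline API.
# Assumes `pip install transformers` plus a backend such as PyTorch.
from transformers import pipeline

# Downloads a small default model on first run; any text-classification model works.
classifier = pipeline("sentiment-analysis")

result = classifier("Deploying this model to production was surprisingly painless.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

The same pipeline interface covers other tasks (question answering, summarization, and so on), which is why the course leans on it before introducing fine-tuning.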
Staying current often involves reading research papers. Here are a few widely cited conferences and papers that shaped modern AI:
Major Conferences
🔬 Seminal Papers
I regularly track papers on arXiv.org and note ideas that are relevant to production engineering. Following research helps you stay current in a rapidly evolving field.
I document practical details through technical articles on Medium, Dev.to, and here on my site:
Series on building production LLM apps with RAG, prompt engineering, and deployment strategies; a minimal retrieval sketch appears after this list.
Deep dives into MLOps, data pipelines, model serving, and production ML systems.
Exploring bias detection, fairness, alignment, and responsible AI practices in production systems.
Step-by-step guides for building, integrating, and deploying models and tooling for real production constraints.
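To make the retrieval step in RAG concrete, the sketch below retrieves context with TF-IDF and assembles a prompt. The toy documents, query, and prompt template are illustrative assumptions, not code from the article series; a production system would swap TF-IDF for learned embeddings and a vector store.

# Minimal sketch of the retrieval step in a RAG pipeline (illustrative only).
# Assumes `pip install scikit-learn`; documents and query are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Model serving latency can be reduced with batching and quantization.",
    "Prompt templates should separate instructions from retrieved context.",
    "Monitoring data drift is essential for production ML systems.",
]
query = "How do I cut inference latency in production?"

# Vectorize documents and query with TF-IDF (a real system would use learned embeddings).
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Retrieve the top-2 most similar documents.
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

# Assemble a prompt for whatever LLM call the application uses.
prompt = "Answer using only this context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {query}"
print(prompt)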
Check out my Insights for the latest articles and technical deep dives.
Community forums can be useful for questions and discussion. Here are a few places I follow:
Open-source projects, issues, and community support through discussions and examples.
Ask and answer technical questions about ML, Python, and frameworks. Great for learning from peers.
r/MachineLearning, r/LanguageModels, r/ArtificialIntelligence. Active discussions on current topics.
Community around Transformers, models, and NLP. Engage with researchers and engineers.
Follow researchers and engineers for updates and discussion.
Join local AI/ML meetups and conferences to meet practitioners in your area.
A simple starting point for common tools used in AI/ML development:
Python 3.10+
Primary language for all ML/AI development
VS Code + Extensions
Python, Pylance, Jupyter for development
Git & GitHub
Version control and collaboration
Jupyter Notebook
Experimentation and interactive development
pip install torch torchvision pytorch-lightning
pip install transformers huggingface-hub
pip install scikit-learn pandas numpy matplotlib
pip install langchain llama-index openai
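Once the packages above are installed, a short sanity check like this confirms the environment works and whether a GPU is visible (the imports mirror the install commands; drop any package you skipped):

# Quick environment sanity check after the installs above.
import sys

import numpy as np
import sklearn
import torch
import transformers

print("Python:", sys.version.split()[0])
print("NumPy:", np.__version__)
print("scikit-learn:", sklearn.__version__)
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("Transformers:", transformers.__version__)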
See the Tech Stack page for a complete list of tools and frameworks.
These resources reflect approaches I've found useful when moving from experiments to production systems. Use what fits your context and constraints.