2025
- May 20 - llm-d Press Release
- May 20 - Announcing the llm-d community!
- June 3 - llm-d Week 1 Project News Round-Up
- June 25 - llm-d Community Update - June 2025
- July 29 - llm-d 0.2: Our first well-lit paths (mind the tree roots!)
- September 3 - Intelligent Inference Scheduling with llm-d
- September 24 - KV-Cache Wins You Can See: From Prefix Caching in vLLM to Distributed Scheduling with llm-d
- October 10 - llm-d 0.3: Wider Well-Lit Paths for Scalable Inference
- December 2 - llm-d 0.4: Achieve SOTA Performance Across Accelerators