Sessions 2026 (Mon Sep 16 – Wed Sep 18)

More sessions to be announced! To view last year's agenda, click here.

- AI Ready Data: Data Preparation, Versioning, and Vector Stores — The Expanding Role of Storage in the AI Pipeline
  Himabindu Tummala, Distinguished Engineer, Dell Technologies
- Batched I/O: Reducing Overhead in the Storage Stack
  Jeff Bromberger, Principal Software Engineer, Microsoft
- Beyond GPUs: NVMe SSD Power and Voltage Telemetry for AI Datacenters
  Jacob Schmier, Senior Firmware Engineer, Sandisk
  Carson Tunnell, Staff Engineer, Sandisk
- Characterizing and Emulating FDP SSDs with WARP
  Inho Song, Computer Science Ph.D. Student, Virginia Tech
  Shoaib Asif Qazi, Computer Science Ph.D. Student, Virginia Tech
- Emulating a Full RDMA NIC in Userspace with VFIO-User: Architecture, Backends, and Lessons from Reaching 2 GB/s RDMA Without Hardware
  Stephen Bates, Fellow, AMD
- From Fragmented Data to Open Standards: Product Category Rules and LCA at Scale for IT Hardware
  Lisa Rivalin, Staff System Engineer, AI Fleet - Sustainability, Meta
  Ines Sousa, Senior Sustainability Industry Advisor, Microsoft
- Increasing SSD Power and Efficiency with Liquid Cooling
  Keith MacLean, System Architect, Micron Technology
- KV Cache as Distributed Storage: What Disaggregated Inference Demands from the Storage Stack
  Abhishek Gupta, Storage Architect, Cerebras Systems
  Rohan Puri, Senior Staff Engineer, DataDirect Networks (DDN)
- Maximizing E3.L 2T: Engineering Ultra‑Dense QLC SSDs
  Trent Johnson, SSD Hardware Architect, IBM
- Memory Fabrics for AI-Native Data Services at Scale
  Satananda Burla, Distinguished Engineer, Marvell
  Gaurav Agarwal, Senior Distinguished Engineer, Marvell
- Surpassing 10T Scale for Fine-Grained Data Access to Storage
  CJ Newburn, Distinguished Engineer, NVIDIA
  Senior Research Scientist, NVIDIA
- Using CXL Fabric-Attached Memory to Enable Shared KV Caches Across GPUs for Faster LLM Inference
  Skyler Windh, MTS Systems Software Engineering, Micron Technology
  Katya Giannios, Principal Machine Learning Engineer, Micron Technology