


In this episode of De Nederlandse Kubernetes Podcast, we talk with Carlos Santana, Principal Partner Solution Architect at AWS and long-time contributor to the Kubernetes and AI communities.
Carlos joins us to explore what it really takes to run AI workloads on Kubernetes, from GPU scheduling to scaling inference and training efficiently across clusters. We discuss how AI and machine learning are transforming the cloud-native ecosystem — and why orchestration is becoming just as important as the models themselves.
He shares his insights in a forward-looking conversation about where AI, Kubernetes, and cloud-native engineering are heading, from someone building that future at scale.
ACC ICT, specialist in IT continuity
Business-critical applications and data securely available, independent of third parties, anytime and anywhere.
Like and subscribe! It helps out a lot.