
Saiyam Pathak, Loft Labs
Wasm's compact footprint, near-native performance, and cross-platform compatibility make it a cost-effective and safe way to run AI inference workloads efficiently.
This panel will explore the current adoption of WebAssembly for machine learning and generative AI workloads, and ask whether it is a viable way to run inference at scale, quickly and sustainably, across diverse system architectures.
The panel will also cover the current state of maturity for production AI workloads across key domains: edge computing with GAIA Nodes and LlamaEdge; Kubernetes-based approaches with SpinKube, including advanced orchestration strategies for secure, isolated multi-tenant deployments (touched on with reference to emerging solutions); and browser-based inference with WebLLM and AI agents.
Tickets:
- Early Bird: SOLD OUT (until December 15th)
- Standard: SOLD OUT (until February 23rd)
- Late Bird: WASM I/O 25 ticket (until March 26th)

2-Day Conference, Auditori L'illa, Barcelona, March 27–28, 2025