Official Session Summary
Pulled from the live conference page.
Most "AI-powered" mobile apps are thin wrappers around hosted APIs, tethered by high latency, cloud costs, and privacy concerns. This talk demonstrates a radical alternative: a 100% offline, sensor-driven image generator running locally on the Samsung Galaxy Z Fold 7. We will explore the technical journey of bridging React Native (Expo) with the device's NPU using ONNX Runtime and the Android Neural Networks API (NNAPI). By mapping real-time hardware sensor data (ambient light) to latent space prompts, I demonstrate a new UX pattern for offline "Zero-Prompt" generative experiences. This session is a deep dive into the engineering required to move generative models from data centers to our pockets, and into good practices for scaling that approach to other devices with powerful neural processing units, such as iPhones.
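The abstract itself contains no code, but the core "Zero-Prompt" idea (driving generation from a sensor reading instead of typed text) can be sketched in a few lines. This is a minimal, hypothetical illustration: the function name `luxToConditioning` and the `LUX_MAX` bound are assumptions, not anything from the talk, and a real pipeline would feed the result into an ONNX Runtime session rather than just computing a scalar.

```typescript
// Hypothetical sketch: map an ambient-light reading (in lux) to a
// normalized conditioning scalar in [0, 1], which could then steer a
// walk through a generative model's latent space.
// LUX_MAX and luxToConditioning are illustrative names, not from the talk.

const LUX_MAX = 40_000; // rough upper bound for direct sunlight

function luxToConditioning(lux: number): number {
  // Lux spans several orders of magnitude (dark room ~5, daylight ~20,000),
  // so compress the range with log scaling before normalizing.
  const clamped = Math.min(Math.max(lux, 0), LUX_MAX);
  return Math.log1p(clamped) / Math.log1p(LUX_MAX);
}

// A dark room and bright daylight land at clearly different points.
console.log(luxToConditioning(5), luxToConditioning(20_000));
```

On-device, the lux value would come from something like Expo's light-sensor API and the scalar would perturb the latent vector passed to the model; this sketch only shows the sensor-to-latent mapping step.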
Speaker Background
Quick context on the person or people on stage.
Senior AI System Engineer at CallStack, working near the edge of mobile, on-device AI, and practical deployment constraints.
Why This Slot Matters
A compact framing layer for navigating the conference.
This is one of the more substantive abstract-backed sessions on the schedule; worth reading when you need enough context to decide whether to stay in the room.