SmartHome Gesture App
A mobile application project for capturing and organizing gesture video data, with a pipeline designed to feed machine learning workflows. Built as a Master's coursework project at ASU.
The core of this project is mobile engineering. The app lets users define custom gestures mapped to smart home actions, record training samples through the device camera, and run a live classification session. The focus was building a clean end-to-end data capture pipeline: from UI to camera to server storage.
I built the interface and camera pipeline in Kotlin with Jetpack Compose and CameraX, running on the Android Studio emulator. Video recordings were uploaded over HTTP to a Python Flask server running on my PC (in a Docker container), where they were stored and organized for downstream use.
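To make the server side concrete, here is a minimal sketch of what the Flask upload endpoint could look like. The route name, the `video`/`gesture` field names, and the one-directory-per-gesture layout under `recordings/` are all assumptions for illustration, not the project's actual API.

```python
import os
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)
UPLOAD_ROOT = "recordings"  # assumed layout: one subdirectory per gesture


@app.route("/upload", methods=["POST"])
def upload():
    # Assumed contract: multipart form with a 'video' file part and a
    # 'gesture' field naming the gesture being recorded.
    clip = request.files.get("video")
    gesture = request.form.get("gesture", "").strip()
    if clip is None or not gesture:
        return {"error": "missing 'video' file or 'gesture' field"}, 400

    gesture_dir = os.path.join(UPLOAD_ROOT, gesture)
    os.makedirs(gesture_dir, exist_ok=True)

    # A timestamped filename keeps repeated samples of the same gesture
    # from overwriting each other.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S_%f")
    path = os.path.join(gesture_dir, f"{stamp}.mp4")
    clip.save(path)
    return {"stored": path}, 201
```

Storing clips under a per-gesture directory keeps the label implicit in the path, which simplifies the hand-off to training later.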
The machine learning connection is that the captured data was intended to feed gesture classification models. A small-scale training run used a pre-trained neural network on roughly 50 samples across about a dozen gestures (around four clips per gesture): enough to validate the pipeline, but far too little data for reliable classification. The ML side was a proof of connection, not the focus.
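The hand-off from storage to training can be sketched with stdlib code: walk the stored clips, recover each label from its directory, and count samples per class. The per-gesture directory layout is an assumption about how the server organized uploads; the counts illustrate why ~50 clips over ~12 gestures can only validate plumbing.

```python
import os
from collections import Counter


def index_dataset(root):
    """Walk root/<gesture>/*.mp4 and return (path, label) pairs.

    The per-gesture directory layout is an assumption about how the
    server organized uploaded clips.
    """
    samples = []
    for gesture in sorted(os.listdir(root)):
        gesture_dir = os.path.join(root, gesture)
        if not os.path.isdir(gesture_dir):
            continue
        for name in sorted(os.listdir(gesture_dir)):
            if name.endswith(".mp4"):
                samples.append((os.path.join(gesture_dir, name), gesture))
    return samples


def samples_per_class(samples):
    # Counts clips per gesture; ~50 clips over ~12 gestures lands around
    # 4 per class, far below what a classifier needs to generalize.
    return Counter(label for _, label in samples)
```

A training script would consume the `(path, label)` pairs directly, so no separate label file needs to be maintained.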
The real value of this project was learning to build a working data capture system from scratch — mobile UI, camera integration, network communication, and server-side storage — and understanding how that kind of pipeline feeds into larger ML workflows.