My master’s thesis at TU Munich was titled MobileVeinViewer - a system for capturing and visualizing hand vein patterns in real time using a near-infrared camera attached to an Android phone. What sounds like a niche research project turned out to involve a surprisingly deep stack: UVC over USB OTG, JNI bindings, libuvc, libjpeg-turbo, and a full OpenCV image processing pipeline - all on a mobile device.
Why NIR for Vein Imaging?
Human veins absorb near-infrared light (roughly 700-900 nm) differently than surrounding tissue. When you illuminate a hand with an NIR light source, hemoglobin in deoxygenated blood absorbs the light while surrounding skin reflects it. The result: veins appear as dark patterns against a lighter background.
This contrast is invisible to the human eye but trivially detectable by a camera sensitive to NIR wavelengths - even a cheap, off-the-shelf CMOS sensor, once its IR-cut filter is removed.
The Camera Pipeline
Getting raw frames off a USB UVC camera on Android was the first real challenge. Android’s Camera2 API only exposes devices the OS already recognizes as cameras. A generic USB UVC camera doesn’t appear there - you have to talk USB directly.
The solution was libuvc, a cross-platform C library that implements the USB Video Class protocol on top of libusb. The integration path:
UVC Camera → USB OTG → libusb → libuvc → raw MJPEG frames
↓
libjpeg-turbo (decode)
↓
OpenCV Mat (grayscale)
↓
Processing pipeline (JNI)
↓
Android SurfaceView (display)
Each arrow hides complexity. Getting libusb to enumerate and claim interfaces on Android requires USB device permissions, file descriptor passing across the JNI boundary, and careful lifecycle management to avoid device resets.
Image Processing Pipeline
Once frames arrived as OpenCV Mat objects, the processing pipeline was:
- Grayscale conversion - NIR frames are monochrome; drop color information.
- CLAHE (Contrast Limited Adaptive Histogram Equalization) - locally enhances contrast without blowing out highlights.
- Gaussian blur - reduces high-frequency noise from the sensor.
- Threshold / inversion - veins are dark; inverting makes them bright and more intuitive to view.
- Optional: morphological operations - closing small gaps in vein structures.
The whole pipeline ran in C++ via JNI, which was necessary to hit real-time performance (>20 fps) on a mid-range Android device. A pure-Java/Kotlin implementation using OpenCV4Android was noticeably slower due to JNI overhead on per-frame data copies.
The Hardest Part
Honestly? USB device lifecycle management. Android can revoke USB device access when the app goes to the background, the screen rotates, or the charging state changes. libusb doesn’t handle this gracefully. Building a robust reconnect mechanism that didn’t leak native resources took more time than the image processing itself.
Key Takeaway
Mobile computer vision is very feasible today - the hardware is fast enough, OpenCV is well-ported to Android, and JNI is manageable if you’re disciplined about ownership. The hard part is always the I/O layer at the edges: getting data reliably from unusual hardware into your processing pipeline.
The full source is on GitHub if you want to see how the pieces fit together.