I am an R&D Software Engineer in computer vision at Zappar, working on real-time tracking and perception systems for mobile augmented reality.
My work focuses on building performant, low-level computer vision algorithms that run reliably on constrained devices. I’m particularly interested in marker-based tracking, real-time pipelines, and the practical trade-offs involved in shipping robust systems.
At Zappar I work across two main products:
Zapbox — a mixed reality headset where I contribute to marker tracking, system performance, and a custom WebXR browser that replays WebGL content natively in a stereo runtime.
Zapvision — an accessible QR-code detection and tracking system for blind and partially sighted users, where I work on making detection robust in real-world conditions.
I contribute across the stack, from SIMD-level optimisation and C++ implementations through to system design and tooling. I’ve also built internal tools for analysing and debugging our computer vision pipeline, making it easier to understand performance and failure cases.
Alongside this, I’m interested in artificial intelligence and robotics, particularly where they intersect with real-time perception. I’m drawn to combining efficient learned models with classical computer vision approaches in practical systems.
More broadly, I enjoy working close to the hardware and thinking about performance, whether that’s through low-level optimisation, systems design, or building tools to better understand complex pipelines.
I’m originally from New Zealand but now live near Richmond Park in London, UK.
Feel free to get in touch at jordan.evan.campbell@gmail.com.