Lumanoi: Interactive LED Light Art
Robert Quattlebaum left a large tech company to work independently on Lumanoi: he does the electronics, writes the software, and handles his own CNC machining.
Robert Quattlebaum of Voria Labs is the creator of Lumanoi, interactive light sculptures that respond subtly to human presence—objects designed not to demand attention, but to reward it.
Robert discovered a love of electronics at Maker Faire Bay Area 2007 in a workshop building a set-top box called YBox. Eventually he developed the 2nd-gen version of YBox with Adafruit. By day he continued working at major tech companies like Google and Apple, but after decades of writing software, he felt something was missing. Maybe the answer was related to some Christmas lights he’d hacked in his spare time. Fast forward to Maker Faire Bay Area 2025, and his interactive Lumanoi LED sculptures were delighting attendees and connecting him with the glowy maker community. He recently appeared on Make:Live to talk about his art and his process.
-DD
Can you tell us a bit about your background and how you got here?
Robert: I don’t know if I’d call myself a software engineer anymore. That’s how I got started, though. I went to DigiPen, which is a video game programming and engineering school in Washington. That’s really where I caught the interactive bug. I’d make a game, invite friends over, and they’d have just as much fun playing it as anything else.
But over time, my career moved further away from that kind of interactivity. It became harder to show people what I was working on in a way that felt exciting. It’s like, “Hey, I improved this thread protocol for IoT”—that doesn’t really inspire people.
After about ten years at Google, I decided I wanted to change things up.
Can you show us how Lumanoi works?
Robert: If I hold my hand about an inch in front of it, it can detect that my hand is there.
The idea is for these pieces to fade into the background a bit. They don’t make your living room feel like a nightclub. But if you pay attention to them and interact with them, they become really delightful.
I developed all the electronics, all the software, all the CNC machining, all the algorithms. It’s a lot of work.
The hand detection, in particular, took quite a bit of work. It involved some analog circuitry and a lot of iteration to get right. But now it’s working really well.
I’m making them and selling them now, and it’s been a fun ride.
Can you explain what’s happening behind the display?
Robert: Each cell has its own circuit board with LEDs, a diffuser layer, and an acrylic lens on top.
The boards use smart LED chips with very high dynamic range—about 20 bits per channel with some software tricks. There are also infrared LEDs around the edges and a photodiode with an op-amp to detect reflected light, all controlled by a microcontroller.
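Robert doesn't publish his firmware, but one common "software trick" for stretching an LED channel's depth is temporal (sigma-delta) dithering: the fractional part of a high-resolution brightness value is carried over between frames, so the average output converges on the 20-bit target even though the hardware only accepts 8 bits per frame. A minimal, hypothetical sketch (the type and function names are illustrative, not his actual code):

```c
#include <stdint.h>

/* Hypothetical sketch: extend an 8-bit LED channel to ~20 bits of
 * perceived depth with first-order sigma-delta (temporal) dithering.
 * Brightness is 8.12 fixed point: the top 8 bits go to the LED, the
 * low 12 bits are carried over to the next frame. */

typedef struct {
    uint32_t accum;  /* running accumulator, 12 fractional bits kept */
} DitherChannel;

/* target20: 20-bit brightness, 0..(255 << 12). Returns the 8-bit
 * value to write to the LED this frame; averaged over many frames,
 * the output converges on target20 / 4096. */
static uint8_t dither_step(DitherChannel *ch, uint32_t target20)
{
    ch->accum += target20;                     /* add target each frame */
    uint8_t out = (uint8_t)(ch->accum >> 12);  /* integer part -> LED   */
    ch->accum &= 0xFFF;                        /* keep 12-bit remainder */
    return out;
}
```

For example, a target of 8.5 (in 8.12 fixed point) makes the channel alternate between 8 and 9, which the eye integrates into the in-between level.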
If you try to do infrared proximity sensing in a naive way, it doesn’t work—you just end up detecting the diffuser itself.
The trick is that each cell doesn’t detect its own signal. Instead, each cell emits infrared light, and the surrounding cells measure it. That way, you can detect reflections from a hand in front of the piece.
That required synchronizing all the microcontrollers very carefully. Each one has to sample when the LEDs are on and off to compensate for ambient light.
It took about eight months to get that working properly. At first, it was doing one scan per second or just not working at all. When it finally worked, I was thrilled.
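The scan loop he describes can be sketched roughly as follows. This is an illustrative reconstruction, not his firmware: cells take turns emitting IR in synchronized time slots while every other cell samples its photodiode twice, once with the emitter on and once with it off, and subtracts the two readings so ambient light cancels out. The hook functions (`ir_emitter_set`, `photodiode_read`, `wait_for_sync_slot`) and the cell count are assumptions for the sketch.

```c
#include <stdint.h>

#define NUM_CELLS 16

/* Assumed platform hooks: */
extern void     ir_emitter_set(int cell, int on);
extern uint16_t photodiode_read(int cell);     /* e.g. 12-bit ADC    */
extern void     wait_for_sync_slot(int slot);  /* aligns all MCUs    */

/* One full scan: result[e][s] is the reflection seen at sensor cell
 * s while emitter cell e is firing, with ambient light removed. A
 * cell never measures its own emission, so it doesn't just detect
 * its own diffuser. */
void scan_frame(int16_t result[NUM_CELLS][NUM_CELLS])
{
    for (int e = 0; e < NUM_CELLS; e++) {
        wait_for_sync_slot(e);     /* every MCU agrees cell e emits now */
        ir_emitter_set(e, 1);
        for (int s = 0; s < NUM_CELLS; s++) {
            if (s == e) continue;  /* skip the emitter's own sensor     */
            result[e][s] = (int16_t)photodiode_read(s);
        }
        ir_emitter_set(e, 0);
        for (int s = 0; s < NUM_CELLS; s++) {
            if (s == e) continue;
            result[e][s] -= (int16_t)photodiode_read(s); /* ambient out */
        }
    }
}
```

The on/off subtraction is why the synchronization has to be tight: if a neighbor samples while the wrong cell's emitter is lit, the ambient term no longer cancels cleanly.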
You’ve gone from prototyping to making these—are you now in a manufacturing phase?
Robert: I make them all myself in my workshop.
I’ve got a CNC machine, a table saw, a bandsaw, and a small 3D print farm for making the backplates that hold the boards.
Compared to working at Google, it’s very different. I’m working alone, so it’s more solitary. But I also get to touch every part of the process—design, electronics, fabrication, software. That’s incredibly satisfying.
At Google, I ended up specializing more narrowly. I worked on Thread radio protocols for IoT. Before that, I was at Nest Labs, and earlier at Apple Inc.
At Nest, I loved the mix of hardware and software. I had oscilloscopes on my desk, I could hack things together, debug physically—it felt very visceral. After the acquisition, things became more specialized again, and I missed that.
This project brings me back to that hands-on experience.
What’s the largest size you can build right now?
Robert: With my current CNC machine, about 4 feet by 4 feet.
I could probably do something like 4 by 8 feet with some clever indexing. Beyond that, I’d need a different manufacturing approach.
Ultimately, I’d like to build entire walls. That’s going to require rethinking how everything is assembled to keep costs manageable.
Can users reprogram the panels?
Robert: Yes. There are capacitive controls on the side for switching modes—different colors and patterns.
There’s also a web interface where you can compose new visualizations. So they are programmable, and you can change how they behave.
Have you explored using radar-based sensors for gesture or biofeedback?
Robert: Yes—definitely.
Millimeter-wave radar is something I want to integrate, especially for biofeedback. It’s sensitive enough to detect breathing, and even pulse if you’re very still.
The idea would be for the piece to respond to your breathing—maybe even guide it, encouraging slower, calmer patterns.
The challenge is that most modules don’t give access to raw data. They just output things like “heart rate” or “breaths per minute,” but I need the raw signal to process it myself.
I may end up designing my own board, but once you get into active emitters and FCC certification, things get expensive quickly. So for now, that’s a future feature.
What’s the price range for these pieces?
Robert: The one behind me is about $3,500. I also sell a version for around $3,000.
They’re expensive to make. Each board costs close to $20 even in batches, and there’s a lot of labor involved. So they’re not cheap, but they’re very handcrafted.
Could the system be modular—like connectable panels of different shapes?
Robert: It’s definitely possible, especially for custom projects.
The main challenge would be maintaining synchronization across modules for the sensing system. But that feels like a solvable problem.
So yes, modular systems are something I could explore.
Maker Faire Bay Area 2026 will take place September 25-27. If you would like to exhibit a project this year, let us know by submitting this interest form. Robert's a great example of a person who was inspired by Maker Faire to become a maker and then came back to share his work at Maker Faire.