
Community Spotlight: Alex Di Guida of Designium

Written by 8th Wall | Nov 13, 2025 5:30:00 PM

Before moving into XR R&D, Alex Di Guida built everything from native mobile apps to front-end sites and 3D web experiences. Today he leads research and prototyping at Designium Tokyo, blending art and engineering through 8th Wall and custom Three.js setups.

In this spotlight, Alex talks about lessons learned from Zappar’s fast-paced production work, his favorite prototypes at Designium, and how he’s pushing the boundaries of WebAR performance with AI, physics, and Gaussian Splats.


Background & Path into XR

Q: You started in native mobile apps, then front end, then 3D on the web before going full XR. Which of those early skills still help you prototype fast today?
I think it’s a combination of all those previous experiences that really helps me today. Going through different technologies and running native and web applications on different devices gave me a good understanding of these environments — how they work, their limitations, and constraints.

Native mobile, front-end web development, and 3D all taught me a lot about performance, UX, design, accessibility, and how to ship fast, working solutions. Nowadays, when I’m working with XR, I feel like I’m constantly using that past experience. The Web 3D skills I acquired over the years act as a bridge between what I was doing before and what I’m building now with XR experiences. It also made me even more curious about tools like Blender, Unity, and Unreal Engine, which I like to experiment with as well.

Q: At Zappar you shipped a lot of marketing WebAR. What did that high-volume client work teach you about scope, reliability, and quick iteration?
It really helped me improve in terms of estimations and being realistic with scope and time. You quickly learn what’s actually possible within a deadline and how to simplify ideas and features without losing the core experience.

You have to iterate fast: ship a small version, test early on different devices, communicate, get feedback, improve, and repeat. I can’t stress enough how important it is to test on different devices early when working with WebAR. There are so many devices nowadays, and sometimes you won’t get the same result on Android and iOS, or even between browsers. To avoid bad surprises at the end, test early.

Q: What first attracted you to augmented and mixed reality, and when did you realize it could become a full-time focus?
At the beginning it was really the web itself that attracted me — opening a browser and accessing your content instantly without installing anything. Then I discovered Three.js and felt it could add some really fun spice to it. I learned about shaders and started building web experiences with Three.js. I enjoyed it so much that I just wanted to keep exploring.

Eventually I realized that WebAR and WebXR were natural next steps, letting me bring those 3D experiences into the real world and make games that merge digital content with your surroundings. I started experimenting, building side projects, and soon professional opportunities in WebAR/WebXR began to appear. That’s when it became a full-time focus.


Designium: Role and Focus

Q: What does your day-to-day look like at Designium Tokyo?
I’m working at Designium in Tokyo, where we focus on creating new AR/XR experiences by combining technology and creativity — from augmented games to AI-powered virtual guides and VPS (Visual Positioning System) projects.

At Designium I’m in charge of XR R&D. Day to day I create prototypes, explore new tools, AI, and workflows, and help design and develop immersive experiences for clients. I also spend time discussing new technologies with my colleagues. We have a very creative environment here, and keeping a close eye on what others are building really helps me experiment with new concepts and push ideas with XR. I often share some fun or interesting results on social media.

Q: What projects have been the most fun or eye-opening to prototype at Designium, and what did you learn from them?
There are a lot of different prototypes I’ve been working on at Designium that were both interesting and fun to build. But if I had to pick a couple, the WebAR game concepts stand out the most.

One of them is Magical Forest, a mixed-reality experience where you step into a fantasy forest around you. This project was selected for the Niantic Innovation Labs Program, and the idea was to let users interact with magical plants, mushrooms, and little bugs they could collect using spells. It taught me a lot about environmental storytelling in AR — how to guide the user’s attention in a real-world space and balance visual effects with mobile performance. You can read more on the Designium blog.

Another one is Frozen Coin Hunt Adventure, a WebAR game made with Niantic Studio. It combines physics-based snowballs, dynamic UI, and a small character, Captain Doty, who faces various challenges. Players collect golden coins in AR while escaping iced mushrooms trying to catch them. On this project I learned a lot about making gameplay readable and about using physics inside Studio to get a result that's genuinely fun to play.

Overall, these kinds of prototypes and games helped me understand how to make AR experiences playful while remaining accessible and stable for real users.


Tools, Engines, and Pipelines

Q: You work across 8th Wall Cloud Editor, Niantic Studio, and custom Vite + Three.js setups. How do you choose the right stack for a new idea?
It depends on what I want to build. Most of the time, when I’m prototyping and playing with new ideas, I use a simple local setup with Vite and Three.js. It’s fast and flexible for experimentation.
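For reference, here is a minimal sketch of what the config for such a local setup might look like. The basic-ssl plugin is an assumption on my part: camera access (getUserMedia) requires a secure context, and @vitejs/plugin-basic-ssl is one easy way to get a self-signed certificate for testing on a phone over the LAN.

```javascript
// vite.config.js — minimal sketch for WebAR prototyping with Vite + Three.js.
// basicSsl is an assumption: getUserMedia needs HTTPS, and this plugin
// provides a self-signed cert so a phone on the same network can connect.
import { defineConfig } from 'vite';
import basicSsl from '@vitejs/plugin-basic-ssl';

export default defineConfig({
  plugins: [basicSsl()],
  server: {
    host: true, // listen on the LAN so you can open the dev URL on a phone
  },
});
```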

When I’m working on a client project, I’ll usually go with the 8th Wall Cloud Editor or 8th Wall Studio. That workflow is great for quick sharing, collaboration, and live device testing through a URL.

Q: You’ve linked React Three Fiber with WebAR. When is R3F worth the extra layer, and when do you keep things lean with raw Three.js?
I like working with React and Three.js, so I naturally became a big fan of React Three Fiber. Getting 8th Wall and R3F to work together was a fun challenge. R3F gives you a lot of ready-to-use pieces out of the box — lighting setups, shadows, helpers, and a WebXR package that’s a nice bonus.

To me, it just feels good to work with React and 3D at the same time. If you’re a React developer, R3F is a great entry point to 8th Wall and XR.

Most of the time I still use a more “classic” setup with Three.js, especially for simple prototypes. When a project needs a more structured, component-based architecture or complex UI and state, R3F starts to make more sense. If you like a component-first approach but aren’t familiar with React, you can also check out A-Frame.

→ Vite + 8th Wall + R3F Template on GitHub
→ X post about the setup

Q: Physics can be tricky on the web. What lessons have you learned from using ammo.js?
Recently at Designium, I built a small platform-jump WebAR game using ammo.js, A-Frame, and the 8th Wall Cloud Editor. Ammo.js pairs well with A-Frame, and A-Frame works great with 8th Wall.

Lessons learned:
• Keep the physics as simple as possible: complex shapes and too many bodies quickly hurt mobile performance.
• Be careful with scale and gravity: tuning jump and collision feel in AR takes time.
• Test on real devices early: something smooth on desktop can struggle on phones or HMDs.

Working with a web physics engine taught me a lot about balancing fun with performance and stability. The goal was a WebAR experience that runs seamlessly across mobile, headsets, and desktop browsers.
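To illustrate the scale-and-gravity point: rather than guessing constants, you can derive gravity and jump impulse from the feel you want. This is a hedged sketch using plain kinematics, not code from the project:

```javascript
// Deriving physics constants from desired "feel": given a target jump
// height h and time-to-apex t, basic kinematics give
//   v0 = 2h / t   and   g = 2h / t^2
function jumpParams(heightMeters, timeToApexSeconds) {
  const initialVelocity = (2 * heightMeters) / timeToApexSeconds;
  const gravity = (2 * heightMeters) / (timeToApexSeconds * timeToApexSeconds);
  return { initialVelocity, gravity };
}

// Example: a 0.5 m hop that peaks in 0.35 s yields a gravity of roughly
// 8.16 m/s^2, gentler than Earth's 9.81, which often reads better in AR.
const { initialVelocity, gravity } = jumpParams(0.5, 0.35);
```

Feeding derived values like these into the physics engine makes it much easier to iterate on feel than tweaking gravity and impulse independently.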


Standout Experiments and How They Work

Q: Your 2D-to-3D coloring-book pipeline using Stability AI and a custom backend wowed people. Can you walk us through that flow and why speed matters at 20–30 seconds per mesh?
This prototype came from the idea of mixing real-world elements with AR and 3D. I wanted to take something very simple and familiar, like a storybook character or a kid’s drawing, and bring it into 3D so you could play with it in augmented reality.

To build it, I compared different AI services that offer image-to-3D APIs and checked how long they take to generate a mesh. After some tests I ended up using Stability AI’s Stable Fast 3D API. I set up a simple backend with Node.js and Express to handle requests, and on the frontend used 8th Wall with Three.js to display and interact with the generated GLB in WebAR and paint on the mesh.

The API is quite fast, around half a minute per mesh, which is a sweet spot for experimentation. Users are willing to wait a bit if they feel something special is happening, but you can't push too far or they'll give up. Speed really matters: the generation has to be fast enough that the quality of the result feels worth the wait.
→ Github repository: 8th Wall + painting on mesh
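As a rough sketch of the backend half of that flow: receive an image, forward it to the image-to-3D API, and return the binary GLB to the frontend. The endpoint path and field names here are assumptions based on Stability AI's public documentation, so check the current API reference before relying on them.

```javascript
// Hypothetical sketch of the 2D-to-3D proxy described above.
// Endpoint and field names are assumptions; verify against current docs.
const STABLE_FAST_3D_URL = 'https://api.stability.ai/v2beta/3d/stable-fast-3d';

async function imageToGlb(apiKey, imageBytes) {
  const form = new FormData();
  form.append('image', new Blob([imageBytes], { type: 'image/png' }), 'drawing.png');

  const res = await fetch(STABLE_FAST_3D_URL, {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  if (!res.ok) throw new Error(`image-to-3D request failed: ${res.status}`);

  // The binary GLB can be served straight to the 8th Wall / Three.js frontend.
  return Buffer.from(await res.arrayBuffer());
}
```

Keeping the API key on a small Node.js/Express-style backend like this, rather than in the browser, is also what lets the frontend stay a plain static WebAR page.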

Q: The segmentation demo you shared felt unusually smooth. What combination of APIs or techniques made it performant enough to feel real-time?
For this demo I combined the 8th Wall camera feed with MediaPipe Selfie Segmentation, which already runs close to real-time on both smartphones and desktop. I also did some work with MediaPipe Hands, so I kept exploring that ecosystem.

I created two modes: a full-body mode using MediaPipe to hide the whole body, and a white-detection mode that uses multi-criteria color detection from the camera feed.
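As a hedged illustration of what a multi-criteria white detector can look like, here is a minimal per-pixel classifier. The criteria (brightness plus channel spread) and thresholds below are illustrative, not the demo's actual values:

```javascript
// A pixel counts as "white" when it is bright AND desaturated
// (all three channels close together). Thresholds are illustrative.
function isWhitePixel(r, g, b, { minBrightness = 200, maxSpread = 30 } = {}) {
  const brightness = (r + g + b) / 3;
  const spread = Math.max(r, g, b) - Math.min(r, g, b); // low = desaturated
  return brightness >= minBrightness && spread <= maxSpread;
}

// Hide matching pixels in an ImageData-style RGBA buffer by zeroing alpha.
function maskWhite(rgba, opts) {
  for (let i = 0; i < rgba.length; i += 4) {
    if (isWhitePixel(rgba[i], rgba[i + 1], rgba[i + 2], opts)) {
      rgba[i + 3] = 0; // transparent pixels reveal the AR content behind
    }
  }
  return rgba;
}
```

Running a pass like this over each camera frame, then compositing the result over the 3D scene, is one simple way to get the "see-through" effect without a full ML segmentation model.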

I also used requestVideoFrameCallback(), which helped a lot. Since it’s synced to the video’s frame rate, it reduces latency and random frame drops so everything feels more real-time.
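The pattern can be sketched roughly like this. The requestAnimationFrame fallback is my own addition for browsers that don't expose the API, not necessarily what the demo does:

```javascript
// Per-video-frame processing loop: requestVideoFrameCallback fires once
// per decoded video frame, so work stays in sync with the camera feed.
function startFrameLoop(video, onFrame, env = globalThis) {
  let running = true;
  const useRvfc = typeof video.requestVideoFrameCallback === 'function';
  const tick = (now, metadata) => {
    if (!running) return;
    onFrame(now, metadata); // e.g. run segmentation on this exact frame
    schedule();
  };
  const schedule = () =>
    useRvfc
      ? video.requestVideoFrameCallback(tick) // synced to the video frame rate
      : env.requestAnimationFrame(tick);      // fallback: display refresh rate
  schedule();
  return () => { running = false; }; // call the returned function to stop
}
```

Because the callback only runs when a new frame is actually available, you avoid re-segmenting duplicate frames, which is where much of the perceived smoothness comes from.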

→ Github repository fullbody + white segmentation

Q: The Hidden Shrine used a large Gaussian Splat and an AI bot for cultural context. What did you learn about authoring facts and keeping latency low on mobile devices?
For The Hidden Shrine, a lot of smoothness came from starting with a good scan. I used Scaniverse to capture the area, then cleaned the model by removing extra geometry around the main shrine. That helped keep the Gaussian Splat lighter and more mobile-friendly before bringing it into Niantic Studio, where integration was straightforward.

Once assets are ready, I recommend testing on different devices to check loading time and smoothness. At Designium I also created a similar experience called The Secret Shrine, a WebAR portal experience that inspired Hidden Shrine. In this new version, you can visit a real Japanese shrine and ask an AI questions to learn more about it.


Future Tech and Industry Trends

Q: You’ve experimented with VPS projects and talk often about smart glasses. What real-world use cases excite you most, and what hardware or UX improvements would make you commit to building for them regularly?
I’m really curious about smart glasses because they could take AR to the next level and become a natural part of the ecosystem, just like phones are today.

One use case that excites me is AR navigation. I’ve built two different prototypes for it, one using 8th Wall with Niantic Lightship Map and another with Google 3D Map Tiles. Having directions and context about the place you’re looking at directly in your field of view would be both useful and comfortable.
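As a rough illustration of the kind of primitive such navigation prototypes rely on (this is not code from either prototype): turning two GPS fixes into a compass bearing that can be compared against the device heading.

```javascript
// Initial bearing from point 1 to point 2 on a sphere (great-circle),
// in degrees: 0 = north, 90 = east. Standard formula, shown for context.
function bearingDegrees(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const p1 = toRad(lat1);
  const p2 = toRad(lat2);
  const dl = toRad(lon2 - lon1);
  const y = Math.sin(dl) * Math.cos(p2);
  const x =
    Math.cos(p1) * Math.sin(p2) - Math.sin(p1) * Math.cos(p2) * Math.cos(dl);
  return ((Math.atan2(y, x) * 180) / Math.PI + 360) % 360;
}
```

Subtracting the device's compass heading from this bearing gives the angle at which to render a direction arrow in the user's field of view.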

Ideally, I’d like a lightweight, normal-looking frame with good battery life that’s comfortable for long wear. On the UX side, simple interfaces that don’t overwhelm you, just enough information at the right moment in a clear way.

Q: You’ve mentioned freeing up rendering power for SLAM, segmentation, and meshing. How do you see WebGPU and next-gen engines changing what’s possible for WebAR?
I haven’t worked much with WebGPU yet, but I see it as the modern way to really unlock the GPU and get much better performance on the web. I keep a close eye on this technology and on what people are sharing. Some of the fluid and particle demos I’ve seen are really impressive.

Right now WebGPU isn’t fully supported across all browsers or engines, but once compatibility improves I think it can really change what’s possible in WebAR. If rendering becomes more efficient, we’ll see smoother handling of heavier scenes, better lighting and shadows, and more advanced materials, all while keeping enough performance for AR tracking and interactivity.
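Given that patchy support, a progressive approach is natural: use WebGPU where the browser exposes it and fall back to WebGL elsewhere. A minimal, hedged sketch (navigator.gpu is the standard WebGPU entry point; the fallback logic is illustrative):

```javascript
// Illustrative backend selection: prefer WebGPU when available,
// otherwise fall back to WebGL. `nav` is injectable for testing.
async function pickBackend(nav = globalThis.navigator) {
  if (nav?.gpu) {
    // requestAdapter() can still resolve to null on unsupported hardware.
    const adapter = await nav.gpu.requestAdapter();
    if (adapter) return 'webgpu';
  }
  return 'webgl';
}
```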


Community and Sharing

Q: You post regularly on YouTube, X, and LinkedIn. What’s your strategy for sharing experiments that both inspire other developers and help Designium open new doors?
Usually I like to share things that I find interesting, experiments I haven’t really seen before, or results that feel fun or that I’m proud of. At Designium we like to show what we’re building, share updates, and start new conversations and collaborations through the work we post.

Q: Many devs look up to you for your creative use of tech. What advice would you give someone who wants to move from standard AR projects into more experimental R&D work?
I would say, have fun and play with the tools, AI, and platforms you can find. We have many different options available nowadays. Explore strange ideas and mix tools that don’t normally go together. If you fail, you learn. If you don’t, you learn and get an amazing project.

That’s why I think building side projects, if you can, is always great for personal progress. At least for me, it has always been very important. It helped me gain new knowledge and understanding that I could apply later on real client projects.

I also think sharing what you are proud of or what you like is a very good practice. Many times when I shared something, people commented with interesting advice and new ideas to improve what I built, and that has been very helpful.

Sometimes inspiration comes when you’re not in front of the screen. I like to take notes of ideas when I’m doing something else. Recently I was reading a story to my daughter and had the idea to make the characters from the book appear in 3D.


Join the 8th Wall Community

Want to be featured next? Connect with us on Discord and share your next build for a chance to be highlighted in an upcoming spotlight.