ReconstructMe
ReconstructMe’s usage concept is similar to that of an ordinary video camera – simply move around the object to be captured. However, instead of a video stream you get a complete 3D model in real time. Scanning with ReconstructMe scales from smaller objects such as human faces up to entire rooms and runs on commodity computer hardware. ReconstructMe is also capable of capturing and processing the color information of the object being scanned, as long as the sensor provides the necessary color stream. You can integrate ReconstructMe into your own application using its SDK.
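To make the usage concept concrete, here is a minimal Python sketch of the grab–track–fuse cycle it implies: each incoming depth frame is tracked against the growing model and fused into a voxel volume. Every class below is an illustrative stand-in invented for this sketch; the shipped ReconstructMe SDK is a C API with its own names and signatures.

```python
# Illustrative scan loop, NOT the ReconstructMe SDK API.
# All classes are runnable stubs that stand in for real sensor/volume objects.

class Sensor:
    """Stand-in for an RGB-D sensor delivering depth (and color) frames."""
    def __init__(self, n_frames=100):
        self.remaining = n_frames
    def grab(self):
        self.remaining -= 1
        return self.remaining >= 0           # True while frames keep arriving
    def frame(self):
        return {"depth": ..., "color": ...}  # placeholder frame data

class Volume:
    """Stand-in for the voxel volume the scan is fused into."""
    def track(self, frame):
        return "pose"                        # camera pose vs. the current model
    def integrate(self, frame, pose):
        pass                                 # accumulate depth (and color) data

sensor, volume = Sensor(), Volume()
while sensor.grab():                         # runs at sensor frame rate
    frame = sensor.frame()
    pose = volume.track(frame)               # lost tracking would yield None
    if pose is not None:
        volume.integrate(frame, pose)        # the model grows as you move around
# a triangulated surface would then be extracted from `volume` and exported
```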
Learn more
Imverse LiveMaker
Use LiveMaker™ to make photorealistic 3D scenes for virtual reality experiences, volumetric videos, movie previsualization, video games, immersive training, virtual showrooms, and much more! LiveMaker™ is the first software that enables you to build 3D models from inside virtual reality. It is easy to use and requires no special programming skills. Using proprietary voxel technology, LiveMaker™ lets you import 360° photos and reconstruct their geometry, retexture occlusions, create new objects, and relight the entire scene. It also allows you to import and integrate external media and assets, static or dynamic, low or high quality, so you can design your virtual scene without limitations. You can use LiveMaker™ to create complete environments or to prototype quickly, and the 3D models it produces can be easily exported and used in other tools depending on your needs and workflow.
Learn more
OmniHuman-1
OmniHuman-1 is a cutting-edge AI framework developed by ByteDance that generates realistic human videos from a single image and motion signals, such as audio or video. The platform utilizes multimodal motion conditioning to create lifelike avatars with accurate gestures, lip-syncing, and expressions that align with speech or music. OmniHuman-1 can work with a range of inputs, including portraits, half-body, and full-body images, and is capable of producing high-quality video content even from weak signals like audio-only input. The model's versatility extends beyond human figures, enabling the animation of cartoons, animals, and even objects, making it suitable for various creative applications like virtual influencers, education, and entertainment. OmniHuman-1 offers a revolutionary way to bring static images to life, with realistic results across different video formats and aspect ratios.
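As an illustration of the input/output contract only, the Python sketch below shows the shape of the task: one reference image plus a motion signal in, a rendered video out. ByteDance has not published a Python API for OmniHuman-1, so every name here is invented for this example and the function body is a placeholder.

```python
# Hypothetical sketch of OmniHuman-1's input/output contract; purely
# illustrative, as no public API exists for the model.
from dataclasses import dataclass

@dataclass
class MotionSignal:
    kind: str    # "audio" or "video" -- the multimodal conditioning input
    path: str

def generate_human_video(image_path: str, signal: MotionSignal,
                         aspect_ratio: str = "9:16") -> str:
    """Placeholder: a real system would return a rendered video in which
    gestures, lip sync, and expression follow the driving signal."""
    assert signal.kind in ("audio", "video")
    return f"avatar_{aspect_ratio.replace(':', 'x')}.mp4"

# Even a weak, audio-only signal is enough to drive a portrait,
# half-body, or full-body reference image:
out = generate_human_video("portrait.png", MotionSignal("audio", "speech.wav"))
print(out)
```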
Learn more
Seed3D
Seed3D 1.0 is a foundation-model pipeline that takes a single input image and generates a simulation-ready 3D asset, including closed manifold geometry, UV-mapped textures, and physically based rendering (PBR) material maps, designed for immediate integration into physics engines and embodied-AI simulators. It uses a hybrid architecture combining a 3D variational autoencoder for latent geometry encoding and a diffusion-transformer stack to generate detailed 3D shapes, followed by multi-view texture synthesis, PBR material estimation, and UV texture completion. The geometry branch produces watertight meshes with fine structural details (e.g., thin protrusions, holes, text), while the texture/material branch yields multi-view consistent albedo, metallic, and roughness maps at high resolution, enabling realistic appearance under varied lighting. Assets generated by Seed3D 1.0 require minimal cleanup or manual tuning.
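The description above reads as a fixed sequence of stages, which the Python sketch below traces with stub functions. Only the stage order and data flow are meant literally; none of the function names come from Seed3D itself, whose models are not publicly released.

```python
# Structural sketch of the Seed3D 1.0 stages described above.
# Every function is a runnable stub; only the data flow is meant literally.
from dataclasses import dataclass

@dataclass
class Asset:
    mesh: object       # closed, watertight manifold geometry
    albedo: object     # UV-mapped base color
    metallic: object   # PBR metallic map
    roughness: object  # PBR roughness map

def vae_encode(image):
    return "latent"                   # 3D VAE: image -> latent geometry code

def diffusion_transformer(latent):
    return "mesh"                     # DiT stack: latent -> detailed watertight mesh

def synthesize_views(mesh, image):
    return ["view"] * 6               # multi-view consistent texture images

def estimate_pbr(views):
    return "albedo", "metallic", "roughness"  # per-pixel material maps

def complete_uv_texture(mesh, albedo):
    return albedo                     # fill UV regions unseen in any view

def image_to_asset(image) -> Asset:
    latent = vae_encode(image)
    mesh = diffusion_transformer(latent)
    views = synthesize_views(mesh, image)
    albedo, metallic, roughness = estimate_pbr(views)
    return Asset(mesh, complete_uv_texture(mesh, albedo), metallic, roughness)

asset = image_to_asset("object.png")  # simulation-ready output asset
```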
Learn more