Social Foundations of Computation The MIT License folktexts Get classification risk scores on tabular tasks using LLMs.
Social Foundations of Computation The MIT License BenchBench BenchBench is a Python package to evaluate multi-task benchmarks.
Social Foundations of Computation The MIT License Error-Parity Achieve error-rate fairness between societal groups for any score-based classifier.
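Error-rate fairness for a score-based classifier is typically achieved by post-processing: choosing a per-group decision threshold so that an error rate (here, the false-positive rate) matches a common target. This is a minimal numpy sketch of that idea, not the error-parity package's actual API; the function name and target-FPR interface are hypothetical.

```python
import numpy as np

def group_thresholds_for_fpr(scores, labels, groups, target_fpr=0.2):
    """For each group, pick the score threshold whose false-positive
    rate on that group's negatives approximates target_fpr."""
    thresholds = {}
    for g in np.unique(groups):
        neg = scores[(groups == g) & (labels == 0)]
        # The (1 - target_fpr) quantile of negative scores yields
        # approximately the desired false-positive rate.
        thresholds[g] = np.quantile(neg, 1.0 - target_fpr)
    return thresholds

rng = np.random.default_rng(0)
scores = rng.random(1000)                 # toy classifier scores
labels = rng.integers(0, 2, 1000)         # toy ground-truth labels
groups = rng.integers(0, 2, 1000)         # toy group membership
thr = group_thresholds_for_fpr(scores, labels, groups, target_fpr=0.2)
```

Applying `scores >= thr[g]` per group then equalizes false-positive rates across groups on this data.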
Software Workshop Evaluation servers Web servers to rank and compare human pose estimation algorithms.
Haptic Intelligence Software Workshop LQR Teleoperation Shared control framework combining human and robot agents.
Perceiving Systems PS:License 1.0 BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike Animated Motion A synthetic image dataset (1.6 million images) of bodies in motion in realistic environments, together with HPS regressors trained using only this data. The BEDLAM dataset contains monocular RGB videos with ground-truth 3D bodies in SMPL-X format. It includes a diversity of body shapes, motions, skin tones, hair, and clothing. The clothing is realistically simulated on the moving bodies using commercial clothing-physics simulation. We render varying numbers of people in realistic scenes with varied lighting and camera motions. We then train various HPS regressors using BEDLAM and achieve state-of-the-a...
Perceiving Systems PS:License 1.0 SUPR Convertor A utility tool to convert SMPL-X model parameters to SUPR model parameters.
Software Workshop Perceiving Systems PS:License 1.0 Mesh Annotator Contact-labeling application for selecting body regions / mesh vertices on Amazon Mechanical Turk.
Perceiving Systems PS:License 1.0 TEACH: Temporal Action Compositions for 3D Humans Official PyTorch implementation of the paper "TEACH: Temporal Action Compositions for 3D Humans".
Software Workshop Coffee tracker An application for tracking coffee consumption across several "coffee rooms" and facilitating payments.
Perceiving Systems PS:License 1.0 EMOCA: Emotion Driven Monocular Face Capture and Animation EMOCA takes a single in-the-wild image as input and reconstructs a 3D face with sufficient facial expression detail to convey the emotional state of the input image. EMOCA advances the state of the art in monocular in-the-wild face reconstruction, putting emphasis on accurate capture of emotional content. The repository provides: (1) an approach to reconstruct animatable 3D faces from in-the-wild images that is capable of recovering facial expressions conveying the correct emotional state, and (2) a novel perceptual emotion-consistency loss that rewards the accurac...
Software Workshop Perceiving Systems PS:License 1.0 Mechanical Turk Manager Web application for interfacing with Amazon Mechanical Turk.
Software Workshop Embodied Vision EMFusion++ A VR demonstrator for an in-house SLAM algorithm integrated with commercial VR hardware (headset + camera).
Perceiving Systems PS:License 1.0 DIGIT DIGIT estimates the 3D poses of two interacting hands from a single RGB image. This repo provides the training, evaluation, and demo code for the project in PyTorch Lightning.
Perceiving Systems PS:License 1.0 SAMP: Stochastic Scene-Aware Motion Prediction SAMP generates a 3D human avatar that navigates a novel scene to achieve goals. The software includes runtime Unity code and training code as well as training data.
Perceiving Systems PS:License 1.0 ACTOR: Action-Conditioned 3D Human Motion Synthesis with Transformer VAE ACTOR learns an action-aware latent representation for human motions by training a generative variational autoencoder (VAE). By sampling from this latent space and querying a certain duration through a series of positional encodings, we synthesize variable-length motion sequences conditioned on a categorical action. ACTOR uses a transformer-based architecture to encode and decode a sequence of parametric SMPL human body models estimated from action recognition datasets.
Perceiving Systems PS:License 1.0 ROMP: Monocular, One-Stage, Regression of Multiple 3D People ROMP estimates multiple 3D people in an image in real time. Unlike prior methods, it does not first detect the people and then estimate their pose. Instead, ROMP estimates a heatmap corresponding to the centers of people together with the parameters of the SMPL model at these centers. The code includes real-time demos.
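The one-stage readout described above can be illustrated with a toy numpy sketch: find local maxima in a center heatmap, then read the parameter vector stored at each center location. This is a schematic of the idea only, not ROMP's actual code; the function name, grid sizes, and the 85-dimensional parameter vector are illustrative assumptions.

```python
import numpy as np

def extract_people(center_heatmap, param_map, thresh=0.5):
    """Toy one-stage readout: locate 3x3 local maxima in the center
    heatmap, then gather the body-parameter vector at each center."""
    H, W = center_heatmap.shape
    people = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            v = center_heatmap[y, x]
            if v < thresh:
                continue
            patch = center_heatmap[y - 1:y + 2, x - 1:x + 2]
            if v >= patch.max():          # 3x3 local maximum
                people.append(param_map[y, x])
    return people

heatmap = np.zeros((8, 8))
heatmap[2, 3] = 0.9                       # two synthetic person centers
heatmap[5, 6] = 0.8
# 85 is an illustrative size for a pose+shape+camera parameter vector.
params = np.random.default_rng(0).normal(size=(8, 8, 85))
detections = extract_people(heatmap, params)
```

Here the two synthetic peaks yield two parameter vectors, one per detected person, with no separate detection stage.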
Perceiving Systems PS:License 1.0 SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes Official code release for ICCV 2021 paper SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes. We propose a novel forward skinning module to animate neural implicit shapes with good generalization to unseen poses. Given scans of a person, SNARF creates an animatable implicit avatar. Training and animation code is included.
Perceiving Systems PS:License 1.0 Learning to Regress Bodies from Images using Differentiable Semantic Rendering DSR uses semantic information in the form of clothing segmentation when learning to regress 3D human pose and shape from an image. The SMPL body should fit inside the clothing and match unclothed regions.
Perceiving Systems PS:License 1.0 PARE: Part Attention Regressor for 3D Human Body Estimation PARE regresses 3D human pose and shape using part-guided attention. It learns to be robust to occlusion, making it much more practical than recent methods when applied to real images.
Perceiving Systems PS:License 1.0 ReSynth Dataset The ReSynth Dataset is a synthetic dataset of 3D clothed humans in motion, created using physics-based simulation. The dataset contains 24 outfits of diverse garment types, dressed on varied body shapes across both genders. All outfits are simulated using a consistent set of 20 motion sequences captured in the CAPE dataset. We provide both the simulated high-res point clouds and the packed data that is ready to run with the model introduced in the ICCV 2021 paper "The Power of Points for Modeling Humans in Clothing". Check out the dataset website for more information.
Perceiving Systems PS:License 1.0 SPEC dataset: Pano360, SPEC-SYN, SPEC-MTP The Pano360 dataset consists of 35K panoramic images, of which 34K are from Flickr and 1K are rendered from photorealistic 3D scenes. We use it to train CamCalib. SPEC-SYN is a photorealistic synthetic dataset with accurate ground-truth SMPL and camera annotations. We use it for both training and evaluating SPEC. SPEC-MTP is a crowdsourced dataset consisting of real images, camera calibration, and SMPL-X fits. We use it only for evaluation in our experiments.
Perceiving Systems PS:License 1.0 SPEC: Seeing People in the Wild with an Estimated Camera SPEC is the first in-the-wild 3D HPS method that estimates the perspective camera from a single image and employs this to reconstruct 3D human bodies more accurately.
Perceiving Systems PS:License 1.0 The Power of Points for Modeling Humans in Clothing PoP (Power of Points) is a point-based model for generating high-fidelity dense point clouds of humans in clothing with pose-dependent geometry. It supports modeling multiple subjects of varied body shapes and different outfit types with a single model. At test-time, given a single, static scan, the model can animate it with plausible pose-dependent deformations.
Perceiving Systems The MIT License Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings Generalizing deep neural networks to new target domains is critical to their real-world utility. When labeling data from the target domain, it is desirable to select a maximally informative subset to be cost-effective (this is known as active learning). The ADA-CLUE algorithm addresses the problem of active learning under a domain shift. The GitHub repo contains code to train models with the ADA-CLUE algorithm for multiple source and target domain shifts. Pre-trained models are also available.
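The core idea of clustering uncertainty-weighted embeddings can be sketched in plain numpy: weight each unlabeled point's embedding by its predictive entropy, cluster with a weighted k-means, and query the point nearest each centroid. This is a generic illustration of the technique named in the title, not the ADA-CLUE repository's actual code; all function names and the tiny hand-rolled k-means are assumptions.

```python
import numpy as np

def entropy(probs):
    """Predictive entropy of softmax outputs; used as uncertainty weight."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def weighted_kmeans(X, w, k, iters=20, seed=0):
    """Tiny weighted k-means: centroids are uncertainty-weighted means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            m = assign == j
            if m.any():
                centers[j] = np.average(X[m], axis=0, weights=w[m])
    return centers, assign

def select_queries(X, probs, k):
    """Pick, per cluster, the unlabeled point nearest its centroid."""
    w = entropy(probs)
    centers, assign = weighted_kmeans(X, w, k)
    picks = []
    for j in range(len(centers)):
        idx = np.where(assign == j)[0]
        if len(idx):
            d = np.linalg.norm(X[idx] - centers[j], axis=1)
            picks.append(int(idx[d.argmin()]))
    return picks

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))             # toy target-domain embeddings
probs = rng.dirichlet(np.ones(3), size=100)  # toy classifier softmax outputs
picks = select_queries(X, probs, 5)       # indices to send for labeling
```

Weighting by entropy biases the centroids toward uncertain regions, so the queried points are both diverse (one per cluster) and informative.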
Perceiving Systems PS:License 1.0 MoSh: Motion and Shape Capture from Sparse Markers MoSh fits surface models of humans, animals, and objects to marker-based motion capture (mocap) data, with accuracy second only to that of 4D scan data. Fitting is fully automatic given labeled, clean mocap marker data. Labeling can also be automated using SOMA, whose code is also released here.
Perceiving Systems PS:License 1.0 SOMA: Solving Optical Marker-Based MoCap Automatically Marker-based optical motion capture (mocap) systems are highly precise tools for capturing the motion of objects and bodies. However, their raw output is an unordered sparse point cloud that has to be brought into correspondence with the physical markers on the surface. SOMA is a machine-learning tool that automates this process, replacing the human expert in the loop and thus enabling rapid production of high-quality data with applications in computer graphics and computer vision.
Perceiving Systems PS:License 1.0 3DCP: 3D Contact Poses The 3D Contact Poses dataset contains SMPL-X bodies fit to 3D scans of five subjects in various self-contact poses, as well as self-contact optimised meshes from the AMASS motion capture dataset.
Optics and Sensing Laboratory Software Workshop Fiber-based shape sensor Grassroots initiative to develop a novel shape probe made from a multimode optical fiber.
Software Workshop Haptic Intelligence Baxter Teleoperation Optimization of Baxter's retargeting
Perceiving Systems PS:License 1.0 SAMP Dataset The SAMP dataset consists of high-quality MoCap data covering various sitting, lying down, walking, and running styles. We capture the motion of the body as well as the object.
Perceiving Systems PS:License 1.0 AGORA dataset While the accuracy of 3D human pose estimation from images has steadily improved on benchmark datasets, the best methods still fail in many real-world scenarios. This suggests that there is a domain gap between current datasets and common scenes containing people. To evaluate the current state-of-the-art methods on more challenging images, and to drive the field to address new problems, we introduce AGORA, a synthetic dataset with high realism and highly accurate ground truth. We create around 14K training and 3K test images by rendering between 5 and 15 people per image using either image...
Perceiving Systems PS:License 1.0 SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements SCALE learns a representation of 3D humans in clothing that generalizes to new body poses. SCALE is a point-based representation based on a collection of local surface elements. The code enables people to train and animate SCALE models and includes examples from the CAPE dataset.
Perceiving Systems PS:License 1.0 SMPLify-XMC: On Self-Contact and Human Pose SMPLify-XMC is a SMPLify-X optimization framework with Mimicked Contact. It fits a SMPL-X body to an image given (1) 2D keypoints, (2) a known 3D body pose and self-contact, and (3) gender, height, and weight as input. SMPLify-XMC is used in the MTP (Mimic-The-Pose) data-collection pipeline to create near-ground-truth SMPL-X parameters and self-contact annotations for each in-the-wild image.
Perceiving Systems PS:License 1.0 SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks SCANimate is an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar. These avatars are driven by pose parameters and have realistic clothing that moves and deforms naturally. SCANimate uses an implicit shape representation and does not rely on a customized mesh template or surface mesh registration.
Perceiving Systems PS:License 1.0 BABEL: Bodies, Action and Behavior with English Labels The BABEL dataset consists of labels that describe the action being performed in a mocap sequence. There are two types of labels — sequence labels that describe the actions being performed in the entire sequence, and fine-grained frame labels that describe the actions being performed in each frame. The mocap sequences in BABEL are derived from the AMASS dataset. The GitHub repo provides helper code that loads and filters the data based on labels, as well as training code, features, and pre-trained models for action recognition on the BABEL dataset.
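The two-level labeling (sequence labels vs. frame labels) and label-based filtering can be illustrated with a toy example. The dictionary schema below is hypothetical and does not match BABEL's real JSON layout; it only shows the idea of filtering sequences by their sequence-level labels.

```python
# Hypothetical schema: BABEL's actual JSON differs. Each sequence has
# sequence-level labels plus fine-grained, time-stamped frame labels.
babel = {
    "seq_001": {"seq_labels": ["walk", "wave"],
                "frame_labels": [{"act": "walk", "start": 0.0, "end": 3.2},
                                 {"act": "wave", "start": 2.0, "end": 4.0}]},
    "seq_002": {"seq_labels": ["sit"],
                "frame_labels": [{"act": "sit", "start": 0.0, "end": 5.0}]},
}

def sequences_with_action(data, action):
    """Return ids of sequences whose sequence-level labels contain `action`."""
    return [sid for sid, ann in data.items() if action in ann["seq_labels"]]

print(sequences_with_action(babel, "walk"))  # → ['seq_001']
```

The same pattern extends to frame labels, e.g. selecting the time spans during which a given action occurs.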
Perceiving Systems PS:License 1.0 LEAP: Learning Articulated Occupancy of People LEAP (LEarning Articulated occupancy of People) is a novel neural occupancy representation of the human body, effectively an implicit version of SMPL. Given a set of bone transformations (i.e., joint locations and rotations) and a query point in space, LEAP first maps the query point to a canonical space via learned linear blend skinning (LBS) functions and then efficiently queries the occupancy value via an occupancy network that models accurate identity- and pose-dependent deformations in the canonical space.
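The query pipeline described above (point → canonical space via LBS → occupancy) can be sketched with numpy stand-ins for the learned networks. This is not LEAP's actual code: the random "skinning-weight network" and the unit-ball "occupancy network" are placeholders for learned models, and only the data flow is faithful.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 4, 3          # number of bones, spatial dimensions

def lbs_weights(x):
    """Stand-in for LEAP's learned skinning-weight network: softmax of
    random per-bone scores (a real model predicts these from x)."""
    logits = x @ rng.normal(size=(D, K))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def to_canonical(x, bone_transforms):
    """Inverse linear blend skinning: blend the 4x4 bone transforms with
    the predicted weights, then map the posed query point back to the
    canonical space."""
    w = lbs_weights(x)
    T = sum(w[k] * bone_transforms[k] for k in range(K))  # blended 4x4
    xh = np.append(x, 1.0)                                # homogeneous
    return (np.linalg.inv(T) @ xh)[:3]

def occupancy(x_canonical):
    """Stand-in occupancy network: inside a unit ball in canonical space."""
    return float(np.linalg.norm(x_canonical) < 1.0)

# With identity bone transforms, canonical and posed points coincide.
bones = np.stack([np.eye(4) for _ in range(K)])
x_posed = np.array([0.2, 0.1, -0.3])
occ = occupancy(to_canonical(x_posed, bones))
```

In the real model both the weights and the occupancy are learned, which is what lets LEAP capture identity- and pose-dependent deformations.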
Perceiving Systems PS:License 1.0 MOJO: We are More than Our Joints MOJO (More than Our JOints) is a novel variational autoencoder with a latent DCT space that generates 3D human motions from latent frequencies. MOJO preserves the full temporal resolution of the input motion, and sampling from the latent frequencies explicitly introduces high-frequency components into the generated motion. We note that motion prediction methods accumulate errors over time, resulting in joints or markers that diverge from true human bodies. To address this, we fit the SMPL-X body model to the predictions at each time step, projecting the solution back onto the space of valid...
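The "latent DCT space" idea can be made concrete with a toy example, assuming SciPy is available: encode each joint trajectory as DCT coefficients over time, where keeping all coefficients preserves full temporal resolution and the high-order bins carry the high-frequency motion components. This is a sketch of the representation only, not MOJO's model or code.

```python
import numpy as np
from scipy.fft import dct, idct

# Toy motion: T frames of J "joint" coordinates.
T, J = 64, 6
t = np.linspace(0, 1, T)
rng = np.random.default_rng(0)
motion = np.sin(2 * np.pi * 3 * t)[:, None] + 0.01 * rng.normal(size=(T, J))

# DCT over the time axis gives the "latent frequencies". An orthonormal
# DCT is invertible, so no temporal resolution is lost; perturbing the
# high-index coefficients would inject high-frequency motion.
coeffs = dct(motion, axis=0, norm="ortho")
recon = idct(coeffs, axis=0, norm="ortho")
assert np.allclose(recon, motion)
```

A generative model over `coeffs` (as MOJO's VAE learns) thus samples whole trajectories at once rather than frame by frame.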
Perceiving Systems PS:License 1.0 SMPL-X for Blender and Unity We facilitate the use of SMPL-X in popular third-party applications by providing dedicated add-ons for Blender and Unity. The SMPL-X 1.1 model can now be quickly added as a textured skeletal mesh with a shape-specific rig, as well as shape keys (blend shapes) for shape, expression, and pose correctives. We also provide functionality to recalculate joint locations on shape change and proper activation of pose correctives after pose changes.
Perceiving Systems PS:License 1.0 DECA: Learning an Animatable Detailed 3D Face Model from In-the-Wild Images DECA reconstructs a 3D head model with detailed facial geometry from a single input image. The resulting 3D head model can be easily animated. The main features:
* Reconstruction: produces head pose, shape, detailed face geometry, and lighting information from a single image.
* Animation: animates the face with realistic wrinkle deformations.
* Robustness: tested on facial images in unconstrained conditions; our method is robust to various poses, illuminations, and occlusions.
* Accuracy: state-of-the-art 3D face shape reconstruction on the NoW Challenge benchmark dataset.
Perceiving Systems PS:License 1.0 SMPLpix: Neural Avatars from 3D Human Models SMPLpix is a neural rendering framework that combines deformable 3D models such as SMPL-X with the power of image-to-image translation frameworks (aka pix2pix models). Create a 3D avatar from a video and then render it in new poses.
Perceiving Systems PS:License 1.0 POSA: Populating 3D Scenes by Learning Human-Scene Interaction POSA takes a 3D body and automatically places it in a 3D scene in a semantically meaningful way. This repository contains the training, random sampling, and scene population code used for the experiments in POSA. The code defines a novel representation of human-scene-interaction that is body centric. This can be exploited for 3D human tracking from video to model likely interactions between a body and the scene.
Perceiving Systems PS:License 1.0 NoW Evaluation: Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision Code: we provide the evaluation code for the NoW challenge proposed in the RingNet paper; please check the self-explanatory repository. NoW benchmark dataset and challenge: please follow the external link to download the data and participate in the challenge.
Perceiving Systems PS:License 1.0 GIF: Generative Interpretable Faces GIF is a photorealistic generative face model with explicit control over 3D geometry (parametrized like FLAME), appearance, and lighting. Training and animation code is provided.
Embodied Vision GNU General Public License version 3 EM-Fusion: Dynamic Object-Level SLAM With Probabilistic Data Association The majority of approaches for acquiring dense 3D environment maps with RGB-D cameras assume static environments or reject moving objects as outliers. The representation and tracking of moving objects, however, has significant potential for applications in robotics or augmented reality. In this paper, we propose a novel approach to dynamic SLAM with dense object-level representations. We represent rigid objects in local volumetric signed distance function (SDF) maps, and formulate multi-object tracking as direct alignment of RGB-D images with the SDF representations. Our main novelty is a...
Perceiving Systems PS:License 1.0 Learning a statistical full spine model from partial observations The study of the morphology of the human spine has attracted research attention for its many potential applications, such as image segmentation, biomechanics, or pathology detection. However, as of today there is no publicly available statistical model of the 3D surface of the full spine. This is mainly due to the lack of openly available 3D data where the full spine is imaged and segmented. In this paper we propose to learn a statistical surface model of the full spine (7 cervical, 12 thoracic, and 5 lumbar vertebrae) from partial and incomplete views of the spine. In order to deal with the...
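A "statistical surface model" in this sense is typically a low-dimensional linear shape space learned from registered meshes. As a toy illustration (plain PCA via SVD on complete toy data, which does not handle the paper's partial observations; all names and sizes are hypothetical):

```python
import numpy as np

# Toy stand-in: PCA on a set of registered meshes flattened to vectors.
rng = np.random.default_rng(0)
n_subjects, n_verts = 20, 50
meshes = rng.normal(size=(n_subjects, n_verts * 3))

mean = meshes.mean(axis=0)
U, S, Vt = np.linalg.svd(meshes - mean, full_matrices=False)
components = Vt[:5]                       # first 5 shape components

def synthesize(coeffs):
    """New shape as the mean plus a linear combination of components."""
    return (mean + coeffs @ components).reshape(n_verts, 3)

shape = synthesize(np.zeros(5))           # zero coefficients → mean shape
```

The paper's contribution is precisely learning such a space when no subject is observed completely, which plain PCA cannot do.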
Perceiving Systems PS:License 1.0 STAR: A Sparse Trained Articulated Human Body Regressor We propose STAR, a realistic human body model with a learned set of sparse and spatially local pose-corrective blend shapes. STAR addresses many of the drawbacks of the widely used SMPL model despite having an order of magnitude fewer parameters. The released code includes the model implementation in TensorFlow, PyTorch, and Chumpy.
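The blend-shape mechanism behind such body models can be sketched in a few lines of numpy: vertices are a template plus identity blend shapes plus pose correctives, where STAR's correctives are sparse and spatially local. This is an illustration of the general mechanism with toy sizes and random data, not STAR's actual implementation (skinning is omitted entirely).

```python
import numpy as np

rng = np.random.default_rng(0)
V, n_betas, n_pose = 100, 10, 9   # toy sizes; real models use ~6890 vertices

template = rng.normal(size=(V, 3))
shapedirs = rng.normal(size=(V, 3, n_betas))
# Sparse, spatially local pose correctives: most entries zeroed out.
posedirs = rng.normal(size=(V, 3, n_pose))
posedirs[rng.random((V, 3, n_pose)) < 0.9] = 0.0

def body_vertices(betas, pose_feat):
    """Template + identity blend shapes + sparse pose correctives
    (linear blend skinning omitted for brevity)."""
    return template + shapedirs @ betas + posedirs @ pose_feat

verts = body_vertices(np.zeros(n_betas), np.zeros(n_pose))
```

Sparsity means each pose feature only displaces vertices near the corresponding joint, which is the source of STAR's parameter savings over SMPL's dense correctives.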
Perceiving Systems PS:License 1.0 GRAB: A Dataset of Whole-Body Human Grasping of Objects Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of va...