Generate Synthetic Data for Human Analysis
in Conference Rooms and Smart Office
Detect and Identify Humans in Motion in Conference Rooms
The Datagen Platform provides high-quality, perfectly annotated data in the form of video and images that are used to train CV ML models for tasks related to understanding human behavior in conference room environments.
Synthetic Data for Human Analysis in Smart Office
Acquiring visual data of humans interacting with their office environment for the training of computer vision machine learning models is a complex task. It's becoming even more complicated as work and collaboration continue to shift online, connecting people across borders and time zones. This means that the next generation of smart communication and conferencing tools, outfitted with identification, attention analysis, and gesture recognition models, must adapt to our expanding definition of "the office" and be able to function accurately no matter where participants are located.
The data generated includes an accurate representation of the conference room and of human interaction with that environment: from the specific objects usually found in conference rooms, such as whiteboards, post-it notes, chalkboards, and blackboards, to gaze annotations and interactions with those objects, such as writing on a whiteboard, moving a chair, or gesturing while speaking.
Use case examples
Identify a person who is looking down or to the side of the laptop camera, whose eyes are drifting closed, or who has turned their body away from the screen
Identify a person who is standing up from a chair or walking to the side. Identify a person standing at a whiteboard or entering the room
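As a rough illustration of how annotated pose data supports use cases like these, the sketch below flags a "standing up from a chair" event from per-frame hip-keypoint positions. The annotation values, keypoint naming, and threshold are illustrative assumptions, not the actual Datagen export format.

```python
# Hypothetical sketch: detect a "standing up" event from per-frame
# normalized hip-keypoint y positions (y grows downward, 0 = top of frame).
# This is NOT the Datagen annotation schema; values are made up.

def is_standing(hip_y: float, seated_hip_y: float, threshold: float = 0.15) -> bool:
    """Flag standing when the hip keypoint has risen more than `threshold`
    of the frame height above its seated baseline."""
    return (seated_hip_y - hip_y) > threshold

# Illustrative hip y positions over a short clip (person rises mid-clip).
frames = [0.62, 0.61, 0.55, 0.44, 0.40]
baseline = frames[0]  # assume the clip starts with the person seated

events = [is_standing(y, baseline) for y in frames]
print(events)  # → [False, False, False, True, True]
```

A production model would of course learn such events from the labeled video directly; the point is that dense per-frame keypoint annotations make even simple heuristics like this testable.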
Why Datagen for Smart Office
The right domain-specific data
Controllable camera devices
Controllable data generation
Frictionless granular control by the computer vision engineer
3D ground truth annotations and perfect 2D quality
Zero biases in the data distribution
Ability to define the distributions for every part of the data with no inherent biases
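To make the last point concrete, here is a minimal sketch of what declaring a distribution for every scene parameter could look like from a downstream pipeline's perspective. The parameter names, ranges, and sampling scheme are illustrative assumptions, not the Datagen API.

```python
# Hypothetical sketch: declare a distribution per scene parameter, then
# sample scene configurations from them. Names and ranges are assumptions.
import random

random.seed(42)  # reproducible sampling for the sketch

SCENE_DISTRIBUTIONS = {
    "camera_height_m":    lambda: random.uniform(1.0, 2.5),
    "num_people":         lambda: random.randint(1, 8),
    "lighting_lux":       lambda: random.gauss(400, 100),
    "whiteboard_present": lambda: random.random() < 0.7,
}

def sample_scene() -> dict:
    """Draw one scene configuration from the declared distributions."""
    return {name: draw() for name, draw in SCENE_DISTRIBUTIONS.items()}

scenes = [sample_scene() for _ in range(3)]
for scene in scenes:
    print(scene)
```

Because every parameter is drawn from an explicitly declared distribution rather than inherited from a scraped dataset, the resulting data distribution is fully auditable by the engineer who defined it.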