grasping-in-the-wild Computer Vision Project
Semantic segmentation of the "Grasping in the Wild" dataset, recorded with Tobii Glasses 2. Subjects grasp everyday objects in an ecological environment and are filmed in 7 different kitchens. The dataset contains short video clips (mean duration 9.25 s).
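Roboflow Universe projects can be exported in COCO format, in which each object mask is stored as a flat polygon coordinate list. Assuming such an export, the sketch below shows how a mask's area can be computed from a polygon annotation with the shoelace formula; the `ann` dictionary and its values are illustrative, not taken from the dataset:

```python
def polygon_area(flat_coords):
    """Shoelace formula over a flat [x1, y1, x2, y2, ...] list (COCO polygon style)."""
    xs = flat_coords[0::2]
    ys = flat_coords[1::2]
    n = len(xs)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the polygon
        s += xs[i] * ys[j] - xs[j] * ys[i]
    return abs(s) / 2.0

# Hypothetical annotation entry using COCO field names:
# a 50 x 30 pixel rectangular mask.
ann = {
    "category_id": 1,
    "segmentation": [[10.0, 10.0, 60.0, 10.0, 60.0, 40.0, 10.0, 40.0]],
}

# A mask may consist of several polygons; sum their areas.
area = sum(polygon_area(poly) for poly in ann["segmentation"])
print(area)  # 1500.0
```

Such per-mask statistics are a quick sanity check on an export before training a segmentation model.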
The annotation of object masks was carried out within, and supported by, the I-Wrist project "Intuitive Wrist Prosthesis Control based on natural movement and visual information" (2024-2028), funded by the French National Research Agency (ANR).
Gaze fixations on the annotated video frames can be downloaded for research use from the CNRS NAKALA server: https://www.labri.fr/projet/AIV/graspinginthewild.php
This corpus results from research carried out in the framework of the Suvipp PEPS CNRS-Idex 215-2016 project and the interdisciplinary CNRS project RoBioVis (2017-2019), supported by the French National Centre for Scientific Research (CNRS).
When using the corpus, please cite the paper:
B. Atoki, J. Benois-Pineau, F. Baldacci, A. De Rugy, "Object segmentation in the wild with foundation models: application to vision-assisted neuro-prostheses for upper limbs," in Proc. EUVIP 2024, Geneva, Switzerland, 8-11 September 2024.
Cite This Project
If you use this dataset in a research paper, please cite it using the following BibTeX:
@misc{grasping-in-the-wild_dataset,
    title = {grasping-in-the-wild Dataset},
    type = {Open Source Dataset},
    author = {IWrist},
    howpublished = {\url{https://universe.roboflow.com/iwrist/grasping-in-the-wild}},
    url = {https://universe.roboflow.com/iwrist/grasping-in-the-wild},
    journal = {Roboflow Universe},
    publisher = {Roboflow},
    year = {2024},
    month = {oct},
    note = {visited on 2024-11-21},
}