
Top Safety Datasets

Technology can help keep us safe, for example by alerting us to dangers on the road or ensuring we don't fall asleep at the wheel.

DetoxifAI Animals/Plants & Species:

  • Snakes

  • Coachwhip (Masticophis flagellum)
  • Western Diamondback Rattlesnake (Crotalus atrox)
  • Pacific Gophersnake (Pituophis catenifer)
  • California kingsnake (Lampropeltis getula californiae)
  • Western yellow-bellied racer (Coluber constrictor)
  • Ring-necked snake (Diadophis punctatus)
  • Garter Snake (Thamnophis)
  • Sharp-tailed snake (Contia tenuis)
  • Rubber Boa (Charina bottae)
  • Northern Pacific rattlesnake (Crotalus oreganus)

  • Mushrooms

  • Black Trumpet (Craterellus cornucopioides)
  • Western Cauliflower Mushroom (Sparassis radicata)
  • Blushing Morel (Morchella rufobrunnea)
  • Pacific Golden Chanterelle (Cantharellus formosus)
  • Manzanita Bolete (Leccinum manzanitae)
  • Salt-Loving Mushroom (Agaricus bernardii)
  • Death Cap (Amanita phalloides)
  • Magpie (Coprinopsis picacea)
  • Fly Agaric (Amanita muscaria)
  • Jack-o-Lantern (Omphalotus olearius)

Project Overview
Creating a model to detect deer when driving down the road

Three categories are combined to create the class name for each photo, e.g. buck_front_standing:

  1. Deer type - buck, doe, or fawn; may help the model if distinguishing features such as antlers or spots improves results
  2. View of deer - front, rear, or side; not only helps the model identify a deer that looks different from different angles (e.g. two legs visible from the front), but may also be useful long term for identifying whether a deer is moving toward or away from the road
  3. Activity - standing, walking, running, or eating; again, both to help the model and to support "threat" assessment as you drive and need to understand the current state of the deer
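The three-part naming scheme above can be sketched as a simple label generator. The category values come from the list; the helper name itself is hypothetical, not part of the project:

```python
from itertools import product

# Category values from the labeling guidelines above
DEER_TYPES = ["buck", "doe", "fawn"]
VIEWS = ["front", "rear", "side"]
ACTIVITIES = ["standing", "walking", "running", "eating"]

def all_class_names():
    """Enumerate every possible class name, e.g. 'buck_front_standing'."""
    return [f"{t}_{v}_{a}" for t, v, a in product(DEER_TYPES, VIEWS, ACTIVITIES)]
```

Combining the categories this way yields 3 × 3 × 4 = 36 possible classes.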

An initial set of roughly 60 images has been loaded for experimentation; a larger dataset is planned for 2023.

Contribution and Labeling Guidelines
Any and all are welcome! We especially need deer in settings around roads.

YOLOv4 and YOLOv5 dataset preparation for real-time fire localization

Base images come from the FireNet project.
Approximately 20% more images were added from various sources to compose the final dataset. A small portion of these are background images (containing no fire), included to reduce false positives.
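In YOLO-style datasets, a background image is simply one with an empty (or absent) label file. A minimal sketch of registering background images this way is below; the directory layout and function name are assumptions, not part of the project:

```python
from pathlib import Path

def add_background_labels(images_dir: str, labels_dir: str) -> int:
    """Create an empty YOLO-format label file for each background image.

    An image whose label file contains no annotations is treated as a
    pure negative (background) example during training.
    """
    labels = Path(labels_dir)
    labels.mkdir(parents=True, exist_ok=True)
    count = 0
    for img in sorted(Path(images_dir).glob("*.jpg")):
        # Empty file: no bounding boxes in this image
        (labels / f"{img.stem}.txt").write_text("")
        count += 1
    return count
```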

The project is oriented toward recognizing pedestrians at different times of day, detecting people despite occlusion and lighting problems. However, since the project will run at the Tecnológico de Estudios Superiores de Jocotitlán, México, nighttime conditions on campus are not very common, as the university closes by 7:30 pm at the latest, while it is still daylight.

Several datasets are being used: public datasets provided by various websites, plus datasets we built ourselves with a GoPro camera and a drone, which helped us record parts of the university so those images could be added and segmented.

The only class is (Personas) [People].
Earlier versions were segmented by gender (Hombre-Mujer) [Man-Woman], but since detecting the pedestrian as such was more important, those classes were changed.

  • The FaceCar project aims to create an efficient way to detect drowsiness and fatigue in the driver of an autonomous vehicle, so that in cases where the vehicle isn't 100% responsive the driver keeps their attention on the road and traffic, avoiding accidents caused by inefficiency of the autonomous system.
  • We are currently using the YOLOv5 algorithm to detect the driver's state, and this dataset has been used to train and study how to improve the accuracy of this method.
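As a sketch of how per-frame detector output might drive a driver alert, the snippet below smooths a stream of frame labels over a sliding window. The class names ('drowsy'), window size, and threshold are illustrative assumptions, not part of the FaceCar project:

```python
from collections import deque

class DrowsinessMonitor:
    """Fire an alarm when most recent frames are classified as drowsy.

    Assumes an upstream detector (e.g. a YOLOv5 model) emits one label
    per frame; 'drowsy' is a hypothetical class name.
    """
    def __init__(self, window: int = 30, threshold: float = 0.6):
        self.frames = deque(maxlen=window)   # rolling history of recent frames
        self.threshold = threshold           # fraction of drowsy frames to alarm

    def update(self, label: str) -> bool:
        """Record one frame's label; return True if the alarm should fire."""
        self.frames.append(label == "drowsy")
        if len(self.frames) < self.frames.maxlen:
            return False  # not enough history yet
        return sum(self.frames) / len(self.frames) >= self.threshold
```

Smoothing over a window avoids triggering the alarm on a single misclassified frame.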