Top Animals Datasets
Animal datasets, models, and APIs can be used for preservation, conservation, non-contact observation, and much more. Tracking animals, counting animals, monitoring migration patterns, classifying animals, and estimating animal size are common use cases of animal computer vision applications.
Example: https://blog.roboflow.com/how-this-fulbright-scholar-is-using-computer-vision-to/
Example: https://blog.roboflow.com/using-computer-vision-to-count-fish-populations/
This dataset was originally created by Dane Sprsiter. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/dane-sprsiter/barnyard.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
This dataset is a copy of a subset of the full Stanford Dogs Dataset.
Source: http://vision.stanford.edu/aditya86/ImageNetDogs/
The original dataset contained 20,580 images of 120 breeds of dogs.
This subset contains 9,884 images of 60 breeds of dogs.
Dataset Information
This dataset contains 14,674 images (12,444 of which contain objects of interest with bounding box annotations) of fish, crabs, and other marine animals. It was collected with a camera mounted 9 meters below the surface on the Limfjords bridge in northern Denmark by Aalborg University.
Composition
Roboflow has extracted and processed the frames from the source videos and converted the annotations for use with many popular computer vision models. We have maintained the same 80/10/10 train/valid/test split as the original dataset.
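The downloaded dataset already ships with the original 80/10/10 split, so no re-splitting is needed. If you ever have to reproduce such a split on raw data yourself, a minimal deterministic sketch might look like this (the helper name and seed are arbitrary, not part of the dataset):

```python
import random

def split_dataset(items, seed=0):
    """Shuffle items deterministically, then split 80/10/10
    into train/valid/test lists."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * 0.8)
    n_valid = int(len(items) * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_valid],
            items[n_train + n_valid:])

train, valid, test = split_dataset(range(100))
print(len(train), len(valid), len(test))  # 80 10 10
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing models trained on the same data.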
The class balance in the annotations is as follows:
Most of the identified objects are congregated towards the bottom of the frames.
More Information
For more information, see the Detection of Marine Animals in a New Underwater Dataset with Varying Visibility paper.
If you find the dataset useful, the authors request that you please cite their paper:
@InProceedings{pedersen2019brackish,
title={Detection of Marine Animals in a New Underwater Dataset with Varying Visibility},
author={Pedersen, Malte and Haurum, Joakim Bruslund and Gade, Rikke and Moeslund, Thomas B. and Madsen, Niels},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}
This dataset was originally created by Nazmuj Shakib Diip, Afraim, Shiam Prodhan. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/commolybroken/dataset-z2vab.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
This dataset was originally created by Omar Kapur, wwblodge, Ricardo Jenez, Justin Jeng, and Jeffrey Day. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/omarkapur-berkeley-edu/livestalk.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
About this Dataset
The Oxford Pets dataset (also known as the "dogs vs cats" dataset) is a collection of images and annotations labeling various breeds of dogs and cats. There are approximately 100 examples of each of the 37 breeds. This dataset contains the object detection portion of the original dataset with bounding boxes around the animals' heads.
Origin
This dataset was collected by the Visual Geometry Group (VGG) at the University of Oxford.
About this Dataset
This is a collection of images and video frames of cheetahs at the Omaha Henry Doorly Zoo taken in October 2020. The capture device was a SEEK Thermal Compact XR connected to an iPhone 11 Pro. Video frames were sampled and labeled by hand with bounding boxes for object detection using Roboflow.
Using this Dataset
We have provided the dataset for download under a Creative Commons by-attribution license. You may use this dataset in any project (including for commercial use) but must cite Roboflow as the source.
Example Use Cases
This dataset could be used for conservation of endangered species, cataloging animals with a trail camera, gathering statistics on wildlife behavior, or experimenting with other thermal and infrared imagery.
About Roboflow
Roboflow creates tools that make computer vision easy to use for any developer, even if you're not a machine learning expert. You can use it to organize, label, inspect, convert, and export your image datasets. And even to train and deploy computer vision models with no code required.
This dataset was originally created by Anonymous. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/nasca37/peixos3.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
Overview
This is an object detection dataset of ocean fish classified by their Latin names.
Use Cases
This dataset can be used for the following purposes:
- Underwater object detection model
- Fish object detection model
- Train object detection model to recognize underwater species
- Prototype fish detection system
- Identifying fish with computer vision
- Free fish dataset
- Free fish identification dataset
- Scuba diving object detection dataset
- Fish bounding boxes
- Fish species annotations
Enjoy! These images have been listed in the public domain.
Note: These images have been sourced from makeml.app/datasets/fish
Here are a few use cases for this project:
- Wildlife Conservation: The Elephant Detection model can be employed by wildlife organizations and researchers to monitor elephant populations in their natural habitats, track their movements, and analyze their behavior to support conservation efforts.
- Anti-poaching Initiatives: The model can help detect and track elephants in real-time, allowing park rangers and other authorities to identify potential poaching activities and intervene before any harm comes to the animals.
- Ecotourism Enhancement: Tour operators can use the model to locate elephants during guided safaris or nature walks in wildlife reserves, improving the overall experience for tourists who want to observe these magnificent creatures in the wild.
- Habitat Management: The model can assist researchers and conservationists in identifying important elephant habitats and analyzing their conditions, such as vegetation, water access, and potential threats. This information can then be used to develop and implement habitat management plans to ensure the long-term survival of elephant populations.
- Smart Wildlife Corridor Planning: The Elephant Detection model can be used to analyze elephant movement patterns, helping urban planners and conservationists develop wildlife corridors that balance the needs of both humans and wildlife, reduce human-elephant conflicts, and protect the overall ecosystem.
The bird dataset can be used to detect birds from multiple angles and distances during different types of weather and seasons.
Use the bird dataset and detection API to create computer vision applications for birding, bird feeding, bird counting, bird population health, seasonality of bird migrations, and more!
Example bird detection project: https://twitter.com/bradfordgill_/status/1509376362473209871?s=21&t=Ix8RrjaImfrKlJNi5if8iw
Use your home security camera to create notifications when birds have arrived by using code from this animal detection project: https://blog.roboflow.com/rabbit-deterrence-system/
Research paper on animal detection: https://ieeexplore.ieee.org/abstract/document/9752203
This dataset was originally created by My Game Pics. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/my-game-pics/my-game-pics.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
Dataset Details
This dataset consists of 638 images collected by Roboflow from two aquariums in the United States: The Henry Doorly Zoo in Omaha (October 16, 2020) and the National Aquarium in Baltimore (November 14, 2020). The images were labeled for object detection by the Roboflow team (with some help from SageMaker Ground Truth). Images and annotations are released under a Creative Commons by-attribution license. You are free to use them for any purpose, personal, commercial, or academic, provided you give acknowledgement of their source.
Projects Using this Dataset:
No-Code Object Detection Tutorial
Class Breakdown
The following classes are labeled: fish, jellyfish, penguins, sharks, puffins, stingrays, and starfish. Most images contain multiple bounding boxes.
Usage
The dataset is provided in many popular formats for easily training machine learning models. We have trained a model with CreateML (see gif above).
This dataset could be used for coral reef conservation, environmental health monitoring, swimmer safety, pet analytics, automated feeding, and much more. We're excited to see what you build!
Part of a project for an AI class at St. Mary's College of Southern Maryland, to test how the platform works. Images are pulled from the Calvert Marine Museum, My Fossil, and the Florida Museum of Natural History.
DetoxifAI Animals/Plants & Species:
- Snakes
  - Coachwhip (Masticophis flagellum)
  - Western Diamondback Rattlesnake (Crotalus atrox)
  - Pacific Gophersnake (Pituophis catenifer)
  - California Kingsnake (Lampropeltis getula californiae)
  - Western Yellow-bellied Racer (Coluber constrictor)
  - Ring-necked Snake (Diadophis punctatus)
  - Garter Snake (Thamnophis)
  - Sharp-tailed Snake (Contia tenuis)
  - Rubber Boa (Charina bottae)
  - Northern Pacific Rattlesnake (Crotalus oreganus)
- Mushrooms
  - Black Trumpet (Craterellus cornucopioides)
  - Western Cauliflower Mushroom (Sparassis radicata)
  - Blushing Morel (Morchella rufobrunnea)
  - Pacific Golden Chanterelle (Cantharellus formosus)
  - Manzanita Bolete (Leccinum manzanitae)
  - Salt-Loving Mushroom (Agaricus bernardii)
  - Death Cap (Amanita phalloides)
  - Magpie (Coprinopsis picacea)
  - Fly Agaric (Amanita muscaria)
  - Jack-o'-Lantern (Omphalotus olearius)
Purpose of the Project
This project started as a way to add real-time counts of bees entering my backyard beehive, with and without pollen, to append additional information to a livestream of the hive and to correlate behavior at the hive entrance with weather, temperature, etc. Since then, I've added training data not specific to my hive, which supports classification of drones and queens in addition to regular and pollen-carrying bees. Currently the model generalizes reasonably well, but more training data is required.
Assessed Classes & Labeling Guidelines
- bees (either workers or foragers)
- bees carrying pollen
- drones
- queens
Labeling should cover the entire body of the bee, excluding the wings as per the following example:
For the class of bees carrying pollen, it is acceptable to extend the box to include the visible pollen packs to distinguish this from the bee class:
Sample Results
Sample video of backyard hive entrance with low to moderate level of activity: https://www.youtube.com/watch?v=qZW5eYd0Yw8&t=2266s
Generic sample video of single bee: https://www.youtube.com/watch?v=A1x6VA8TWCg
Latest YOLOv5 Weights Files
Latest weights files for use by others are posted on my github here: https://github.com/mattnudi/bee-detection
These files will be updated as more images are added to the dataset.
Here are a few use cases for this project:
- Pet Store Assistance: This model could be used in pet stores to aid in the behavior analysis and health monitoring of specific fish species. The automatic identification of the fish species can provide a non-invasive way to monitor individual fish and track their activities without the need for physical handling.
- Aquatic Veterinary Diagnostics: In veterinary medicine, the model can be used to identify freshwater species that may have specific diseases or health conditions. It could assist aquatic veterinarians in providing targeted treatments for diseases that are specific to particular species.
- Home Aquarium Maintenance: The model could benefit aquarium enthusiasts, helping them to monitor and manage the health and well-being of their pet fishes. It facilitates the identification of species for better care, diet, and preventative treatment regimes.
- Fish Farming and Aquaculture: This model could play a key role in fish farming industries. It could be used to monitor the population and health of specific species, helping progressive farmers and aquaculture companies keep track of their stocks and mitigate the risks associated with illness or invasive species.
- Educational Tool: The model could serve as an educational tool for students studying marine biology and related fields. It could help students familiarize themselves with different freshwater fish species and observe their behaviors and interactions in various environments.
This dataset contains annotated pictures of animals (like wild pigs and deer) from trail cameras in East Texas.
You can use this dataset and the detection API to create computer vision applications for hunting, monitoring animal population health, counting deer sightings, and more!
Automatically filter through hours of trail cam footage to find the times/frames when wild game is caught on camera.
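The filtering step above can be sketched as follows. Here `detect` is a stand-in for whatever detector you train on this dataset (the function, stub, and frame rate below are illustrative assumptions, not part of the dataset or API):

```python
def filter_frames(frames, detect, fps=30):
    """Return (timestamp_seconds, frame_index) pairs for frames where
    the detector reports at least one animal."""
    hits = []
    for i, frame in enumerate(frames):
        if detect(frame):  # detect() -> list of detections (possibly empty)
            hits.append((i / fps, i))
    return hits

# Toy usage with a stub detector that "sees" game only in frames 60 and 90:
stub = lambda frame: ["deer"] if frame in (60, 90) else []
print(filter_frames(range(120), stub))  # [(2.0, 60), (3.0, 90)]
```

In practice `frames` would come from a video reader and `detect` from a trained model; the timestamps then point you straight to the interesting stretches of footage.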
This is a dataset of bumble bee images curated by the Spiesman Lab at Kansas State University
Project Overview
Creating a model to detect deer when driving down the road.
Classes
There are 3 categories used to create the class for each photo. Example: buck_front_standing
- Deer type: buck, doe, fawn; helps the model in case distinguishing features such as antlers or spots improves results
- View of deer: front, rear, side; not only to help the model identify a deer that looks different from different angles (e.g., two legs visible from the front), but also in case it proves useful long term to identify whether a deer is running towards the road or away from it
- Activity: standing, walking, running, eating; again, both to help the model and for "threat" assessment as you drive and need to understand the current state of the deer
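The three-part naming scheme above can be parsed back into its components when post-processing detections. This helper is a hypothetical illustration built from the description, not code shipped with the dataset:

```python
# Valid values mirror the dataset description; the parser itself is
# an assumption added for illustration.
DEER_TYPES = {"buck", "doe", "fawn"}
VIEWS = {"front", "rear", "side"}
ACTIVITIES = {"standing", "walking", "running", "eating"}

def parse_deer_class(name: str) -> dict:
    """Split a class label like 'buck_front_standing' into its
    type/view/activity parts, validating each against the scheme."""
    deer_type, view, activity = name.split("_")
    if deer_type not in DEER_TYPES or view not in VIEWS or activity not in ACTIVITIES:
        raise ValueError(f"unrecognized class label: {name}")
    return {"type": deer_type, "view": view, "activity": activity}

print(parse_deer_class("buck_front_standing"))
# {'type': 'buck', 'view': 'front', 'activity': 'standing'}
```

Keeping the three axes separable like this means a downstream "threat assessment" step can react to the activity alone without caring about breed or view.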
Status/Timeline
Initial images loaded (c. 60) to experiment with. Hope to have a larger dataset by 2023.
Contribution and Labeling Guidelines
Any and all are welcome! We especially need deer in settings around roads.
About this Dataset
This dataset was created by exporting the Oxford Pets dataset from Roboflow Universe, generating a version with Modify Classes to drop all of the classes for the labeled dog breeds and consolidating all cat breeds under the label "cat." The bounding boxes were also modified to include the entirety of the cats within the images, rather than only their faces/heads.
Oxford Pets
- The Oxford Pets dataset (also known as the "dogs vs cats" dataset) is a collection of images and annotations labeling various breeds of dogs and cats. There are approximately 100 examples of each of the 37 breeds. This dataset contains the object detection portion of the original dataset with bounding boxes around the animals' heads.
- Origin: This dataset was collected by the Visual Geometry Group (VGG) at the University of Oxford.
Overview
The Aerial Sheep dataset contains images taken from a bird's-eye view with instances of sheep in them. Images do not differentiate between gender or breed of sheep, instead grouping them into a single class named "sheep".
Example Footage
See RIIS's sheep counter application for additional use case examples. Link - https://riis.com/blog/counting-sheep-using-drones-and-ai/
This project is for automated processing of home video camera feeds. The dataset includes both daytime and nighttime (IR) images, from the perspective of a typical home camera.
I suggest splitting the dataset and training two models: one for daytime and the other for nighttime. The nighttime pictures have a single channel while the daytime ones have three channels, which results in significantly different features being trained. I identify whether an image has one or three channels using the following shell command: identify -colorspace HSL -verbose "$f" | egrep -q "(Channel 0: 1-bit|red: 1-bit)"
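For PNG frames, the same one-channel vs. three-channel check can be done without ImageMagick by reading the color-type byte from the PNG header. This is a stdlib-only sketch under the assumption that the images are PNGs; JPEG frames would need a different check, and the author's identify command above remains the reference approach:

```python
def png_is_grayscale(path):
    """Inspect the PNG IHDR color-type byte (offset 25): 0 means
    single-channel grayscale, 2 means three-channel truecolor."""
    with open(path, "rb") as f:
        header = f.read(26)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    return header[25] == 0
```

A small loop over the dataset using this predicate would then route each image into the daytime or nighttime training set.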
The images are full size, so models of different sizes can be created. I've been training at 608x608. The dataset includes many null images, which have in the past triggered false positives.
The classes are simply the things of interest I've seen from my house. In general this is more useful than the standard YOLO classes, such as zebra. However, you may want to add bear or some other wildlife. I've found squirrels are too small for my cameras to reliably pick up and detect. The perspective and framing of content is quite different from typical stock photos, so I think it makes a lot of sense to train the model using only images from IP cams.
Ideally, I will make models available for the many different tools people are already using for AI, including Deepstack/BlueIris, MotionEye, and Frigate.
Cat Detector
The aim of this project was to build a cat detector that can accurately identify my two cats to provide a signal to a water fountain. The water flow is customised to the specific cat, with one cat preferring a faster flow and the other a slower one. By identifying which cat is in proximity, the fountain can activate only when necessary and provide a customised experience for the feline.
Classes
The project includes two classes: 1) vifslan, 2) bubba. The classes are thus two separate cats that share a lot of similarities but also have their idiosyncrasies.
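The fountain-control side might look like this sketch. The class names (vifslan, bubba) come from the project; the flow values, their assignment to each cat, and the function itself are assumptions made for illustration:

```python
# Flow preference per cat; which cat prefers which speed is an
# assumption, not stated by the project.
FLOW_RATE = {"vifslan": "fast", "bubba": "slow"}

def fountain_setting(detected_classes):
    """Return the flow setting for the first recognized cat,
    or 'off' when no known cat is in view."""
    for cls in detected_classes:
        if cls in FLOW_RATE:
            return FLOW_RATE[cls]
    return "off"

print(fountain_setting(["bubba"]))  # slow
print(fountain_setting([]))         # off
```

The detector's per-frame class list feeds straight into this function, so the fountain only runs while a recognized cat is actually in frame.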
Current Status
Working prototype.