
Environmental

Roboflow hosts the world's largest collection of open source environmental datasets and pre-trained computer vision models. Captured from satellites, drones, handheld devices, and more, these projects can help you find objects of interest in environmental settings such as oceans, forests, and trails.

TACO: Trash Annotations in Context Dataset

From: Pedro F. Proença; Pedro Simões

TACO is a growing image dataset of trash in the wild. It contains segmented images of litter taken in diverse environments: woods, roads, and beaches. These images are manually labeled according to a hierarchical taxonomy to train and evaluate object detection algorithms. Annotations are provided in a format similar to the COCO dataset.
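
Because the annotations follow the COCO convention, they can typically be read with off-the-shelf tooling such as pycocotools. A minimal sketch, assuming the downloaded annotation file is named annotations.json:

from pycocotools.coco import COCO

# Path is an assumption -- point this at the COCO-style JSON file that ships
# with your TACO download.
coco = COCO("annotations.json")

# Inspect the hierarchical litter taxonomy.
categories = coco.loadCats(coco.getCatIds())
print([c["name"] for c in categories])

# Pull the segmentation/bounding-box annotations for the first image.
image_id = coco.getImgIds()[0]
annotations = coco.loadAnns(coco.getAnnIds(imgIds=image_id))
for ann in annotations:
    print(ann["category_id"], ann["bbox"])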

The model in action:

Gif of the model running inference

Example images from the dataset:

Example Image #2 from the Dataset
Example Image #5 from the Dataset

For more details and to cite the authors:

  • Paper: https://arxiv.org/abs/2003.06975
  • Paper Citation:
    @article{taco2020,
      title={TACO: Trash Annotations in Context for Litter Detection},
      author={Pedro F Proença and Pedro Simões},
      journal={arXiv preprint arXiv:2003.06975},
      year={2020}
    }

All images were captured in the Eidselva river, Stadt Nordfjordeid, Norway.

This model was created due to the lack of public models for Atlantic Salmon and other fish in Norwegian rivers.

Image example

Overview

This dataset contains 581 images of various shellfish classes for object detection. These images are derived from the Open Images open source computer vision dataset.

This dataset only scratches the surface of the Open Images dataset for shellfish!

Image example

Use Cases

  • Train an object detector to differentiate between a lobster, shrimp, and crab
  • Train an object detector to differentiate between shellfish
  • Object detection dataset across different sub-species
  • Object detection among related species
  • Test an object detector on highly related objects
  • Train a shellfish detector
  • Explore the quality and range of the Open Images dataset

Tools Used to Derive Dataset

Image example

These images were gathered via the OIDv4 Toolkit. This toolkit allows you to pick an object class and retrieve a set number of images from that class with bounding box labels.

We provide this dataset as an example of the ability to query the OID for a given subdomain. This dataset can easily be scaled up - please reach out to us if that interests you.
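
As a rough sketch of how a similar subset could be gathered, the toolkit's downloader is invoked from its repository root. The class names, flags, and limit below are assumptions based on our reading of the toolkit's README, so verify the exact interface against the repo before running:

import subprocess

# Hypothetical invocation of the OIDv4 Toolkit downloader from its repo root.
# The flags (--classes, --type_csv, --limit) reflect the toolkit's documented
# CLI as we understand it.
subprocess.run(
    [
        "python3", "main.py", "downloader",
        "--classes", "Lobster", "Shrimp", "Crab",
        "--type_csv", "train",
        "--limit", "200",
    ],
    check=True,
)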

Detecting Wildfire Smoke with Computer Vision

This dataset is released by AI for Mankind in collaboration with HPWREN under a Creative Commons Attribution Non-Commercial Share Alike license. The original dataset (and additional images without bounding boxes) can be found in their GitHub repo.

We have mirrored the dataset here for ease of download in a variety of common computer vision formats.
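
One way to pull a local copy programmatically is the Roboflow Python package (pip install roboflow). This is a minimal sketch; the API key, workspace, project, and version identifiers below are placeholders, so copy the real values from the dataset's download dialog:

from roboflow import Roboflow

# Placeholders -- substitute your own API key and the identifiers shown on
# this dataset's download page.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("wildfire-smoke")
dataset = project.version(1).download("coco")  # or "yolov5", "voc", ...

print(dataset.location)  # local folder containing the images and annotations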

To learn more about this dataset and its possible applications in fighting wildfires, see this case study of Abhishek Ghosh's wildfire detection model.

About Roboflow

Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless.

Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.

Roboflow Wordmark

Rotifers, Microbeads and Algae

By Jord Liu and The Exploratorium

Background

This is the machine learning half of Seeing Scientifically, a larger research project at the Exploratorium's Biology Lab that investigates how to use machine learning and other exhibit technology to best teach visitors in an informal learning context like the Exploratorium.

In this iteration of the project, we train an ML model to detect microscopic animals called rotifers, parts of their body (e.g. head, gut, jaw), and microbeads and algae in real time. This model is then integrated into a museum exhibit kiosk prototype that is deployed live on the Exploratorium's museum floor, and visitor research is collected on the efficacy of the exhibit.

Short gif demo of ML detection

Data and Model

The images used here are captured directly from a microscope feed and then labelled by Exploratorium employees and volunteers. Some contain hundreds of microbeads or algae; some are brightfield and some are darkfield. They show rotifers in multiple poses, including some where the tails are not readily visible. There is relatively little variance in the images, as the environment is highly controlled. We use tiled data of multiple sizes mixed in with the full images, as sketched below.
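
As an illustration of the tiling step, here is a minimal sketch that cuts a frame into non-overlapping square tiles. The tile size and filename are assumptions, and in practice the bounding-box labels also have to be clipped to each tile:

from PIL import Image

def tile_image(path, tile_size=416):
    # Cut a full-resolution frame into non-overlapping square tiles.
    # tile_size is an assumed value; the exhibit mixes several tile sizes
    # in with the full images.
    img = Image.open(path)
    width, height = img.size
    tiles = []
    for top in range(0, height - tile_size + 1, tile_size):
        for left in range(0, width - tile_size + 1, tile_size):
            tiles.append(img.crop((left, top, left + tile_size, top + tile_size)))
    return tiles

tiles = tile_image("microscope_frame.jpg")  # hypothetical filename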

We use YOLOv4, though future work includes retraining with YOLOR, YOLOv7, and other SOTA models. We also experimented with Keypoint R-CNN for pose estimation but found that the performance did not exceed our baseline of using YOLOv4 and treating the keypoints as objects.

Current performance by class is:
class_id = 1, name = bead, ap = 77.01% (TP = 251, FP = 41)
class_id = 2, name = bigbead, ap = 82.46% (TP = 36, FP = 5)
class_id = 3, name = egg, ap = 95.51% (TP = 16, FP = 4)
class_id = 4, name = gut, ap = 82.55% (TP = 70, FP = 13)
class_id = 5, name = head, ap = 78.38% (TP = 59, FP = 3)
class_id = 6, name = mastics, ap = 86.82% (TP = 49, FP = 6)
class_id = 7, name = poop, ap = 56.27% (TP = 34, FP = 15)
class_id = 8, name = rotifer, ap = 72.60% (TP = 83, FP = 17)
class_id = 9, name = tail, ap = 46.14% (TP = 27, FP = 7)

Examples

Screen captures from our exhibit as of July 2022.
Rotifer body parts
Microbead detection
Algae detection

Overview

The PlantDoc dataset was originally published by researchers at the Indian Institute of Technology, and described in depth in their paper. One of the paper’s authors, Pratik Kayal, shared the object detection dataset available on GitHub.

PlantDoc is a dataset of 2,569 images across 13 plant species and 30 classes (diseased and healthy) for image classification and object detection. There are 8,851 labels. Read more about how the version available on Roboflow improves on the original version here.

And here's an example image:

Tomato Blight

Fork this dataset (upper right hand corner) to receive the raw images, or (to save space) grab the 416x416 export.

Use Cases

As the researchers from IIT stated in their paper, “plant diseases alone cost the global economy around US$220 billion annually.” Training models to recognize plant diseases earlier dramatically increases yield potential.

The dataset also serves as a useful open dataset for benchmarks. The researchers trained both object detection models like MobileNet and Faster R-CNN and image classification models like VGG16, InceptionV3, and InceptionResNet V2.

The dataset is useful for advancing general agriculture computer vision tasks, whether that be healthy crop classification, plant disease classification, or plant disease detection.

Using this Dataset

This dataset is released under a Creative Commons 4.0 license. You may use it commercially; the license grants no Trademark or Patent use and comes with no Warranty and limited Liability.

Provide the following citation for the original authors:

@misc{singh2019plantdoc,
  title={PlantDoc: A Dataset for Visual Plant Disease Detection},
  author={Davinder Singh and Naman Jain and Pranjali Jain and Pratik Kayal and Sudhakar Kumawat and Nipun Batra},
  year={2019},
  eprint={1911.10317},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

About Roboflow

Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless.

Developers reduce 50% of their code when using Roboflow's workflow, automate annotation quality assurance, save training time, and increase model reproducibility.

Roboflow Wordmark