Top Plastic Straw Datasets and Models

The datasets below can be used to train fine-tuned models for plastic straw detection. You can explore each dataset in your browser using Roboflow and export it in one of many supported formats.

At the bottom of this page, we have guides on how to train a model using the plastic straw datasets below.


Guide: How to Train a Computer Vision Model to Detect Plastic Straws

You can use datasets from Roboflow Universe to train a model to detect plastic straws in images and videos.

To download a dataset, first install the Roboflow Python package (pip install roboflow), then run the following code snippet.

When you run the code for the first time, you will be asked to authenticate with Roboflow.

    import roboflow

    roboflow.login()

    # replace with the plastic straw project you choose above
    roboflow.download_dataset(
        dataset_url="https://universe.roboflow.com/nora-slimani/trash-detection-otdmj/35",
        model_format="coco",
    )

Set dataset_url to the project and version of the dataset you choose from the results above.

Roboflow has written guides on how to train computer vision models with popular architectures. Many guides come with accompanying notebooks you can follow to train a model.
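As one illustration, the sketch below fine-tunes a YOLOv8 model with the Ultralytics package on a dataset downloaded from Universe. The checkpoint name, epoch count, and image size are illustrative assumptions, not tuned values, and the snippet assumes you export the dataset in YOLOv8 format (model_format="yolov8") rather than COCO so Ultralytics can read it directly.

    import roboflow
    from ultralytics import YOLO  # pip install ultralytics

    roboflow.login()

    # Download the dataset in YOLOv8 format; swap in the project you chose above.
    dataset = roboflow.download_dataset(
        dataset_url="https://universe.roboflow.com/nora-slimani/trash-detection-otdmj/35",
        model_format="yolov8",
    )

    # Fine-tune a small pretrained checkpoint. The epoch count and image
    # size below are placeholder defaults to adjust for your own data.
    model = YOLO("yolov8n.pt")
    model.train(data=f"{dataset.location}/data.yaml", epochs=100, imgsz=640)

    # Evaluate the fine-tuned weights on the validation split.
    metrics = model.val()
    print(metrics)

This assumes download_dataset returns a dataset object whose location attribute points at the local export, which contains the data.yaml file Ultralytics expects.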

Guide: Automatically Label Plastic Straws in an Unlabeled Dataset

You can use foundation models to automatically label data using Autodistill.

Autodistill supports using many state-of-the-art models like Grounding DINO and Segment Anything to auto-label data. This is useful if a dataset you want to use is not already labeled.

Autodistill performs well at identifying common objects, but may struggle with more obscure objects. We recommend trying Autodistill using Grounded SAM for detection and segmentation or CLIP for classification.
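For example, here is a minimal sketch that uses Autodistill with Grounded SAM to label a folder of images. The text prompt, class name, and folder paths are assumptions to adapt to your own data.

    from autodistill.detection import CaptionOntology
    from autodistill_grounded_sam import GroundedSAM  # pip install autodistill-grounded-sam

    # Map a natural-language prompt to the class name you want in your labels.
    # "plastic straw" is the prompt Grounded SAM will ground in each image;
    # "straw" is the class written to the output dataset (both illustrative).
    base_model = GroundedSAM(ontology=CaptionOntology({"plastic straw": "straw"}))

    # Label every image in ./images and write an annotated dataset to ./dataset.
    # The folder names and extension are placeholders; point them at your data.
    base_model.label(
        input_folder="./images",
        extension=".jpg",
        output_folder="./dataset",
    )

The resulting labeled dataset can then be used to train a smaller, faster model, as in the training guide above.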

Follow our guides below to get started.