Top Gaming Datasets
Gaming datasets and computer vision models can be used to automate gameplay. One of the most common use cases of computer vision in gaming is the aimbot, a tool that automatically aims at specific targets within a game.
Top gaming datasets and models: https://blog.roboflow.com/top-gaming-datasets-for-computer-vision/
Tutorial for using computer vision in games: https://blog.roboflow.com/game-automation-computer-vision/
Examples of game automation: https://blog.roboflow.com/computer-vision-win-games-duck-hunt/ | https://blog.roboflow.com/chess-boards/
This dataset was originally created by Zhe Fan. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/zhe-fan/marble-images.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
This dataset was originally created by Kais Al Hajjih. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/kais-al-hajjih/farcry6-hackathon.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
This dataset was originally created by Seokjin Ko. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/new-workspace-0pohs/avatar-recognition-rfw8d.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
Here are a few use cases for this project:
- In-game Strategy Analysis: The "Call of Duty MW2" computer vision model can be used by gamers and coaches to analyze in-game strategies and tactics by identifying the player classes and their positioning. This can help in formulating effective gameplay strategies and improving team coordination.
- Stream Highlight Creation: Streamers and content creators can use the model to automatically identify and compile exciting moments in their Modern Warfare 2 gameplay videos. By detecting the MW2_body and other class elements, the model can assist in generating engaging highlight reels for their audiences.
- Game Tutorial and Walkthrough Development: Developers and gamers can use the "Call of Duty MW2" model to easily identify various MW2 classes within video footage, simplifying the process of creating informative and detailed game tutorials and walkthroughs for the gaming community.
- Cheat Detection and Prevention: Game developers and moderators can use the model to analyze in-game recordings to identify potential cheaters using illegal modifications or exploits. By recognizing the MW2_body and other class elements, the model can aid in detecting unusual patterns or discrepancies that signify cheating or rule-breaking.
- Automated Game Video Tagging: The "Call of Duty MW2" computer vision model can be used by video-sharing platforms to automatically generate metadata tags for user-uploaded Modern Warfare 2 gameplay videos. These tags can include key in-game elements, such as the classes or weapons used, thus improving video searchability and discoverability for viewers.
This dataset was originally created by Team Roboflow and Augmented Startups. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/roboflow-100/poker-cards-cxcvz.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
This dataset was originally uploaded by Roboflow CEO, Joseph Nelson, and sourced from Adam Crawshaw. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/joseph-nelson/uno-cards.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
This dataset was originally created by Lukas D. Ringle. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/lamaitw/lama-itw/.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
Overview
The Playing Cards dataset is a collection of synthetically generated cards blended onto various types of backgrounds. You can use it to train an object detection model that recognizes both the number and the suit of each card.
Example Footage
Training and Deployment
The playing cards model has been trained in Roboflow, available for inference on the Dataset tab.
You could also build a card counting model for Blackjack or Poker using YOLOR. This can be done on the Roboflow platform, from which you can deploy the model for robust, real-time detections. You can learn more here: https://augmentedstartups.info/YOLOR-Get-Started
Video demo using YOLOR for training: https://youtu.be/2lGTZuaH4ec
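As a concrete starting point for the card-counting idea, here is a minimal Hi-Lo running-count sketch (the standard Blackjack counting system). It assumes the detector's class labels expose the card rank, e.g. "2", "J", "A"; adapt the parsing to the class names your trained model actually uses.

```python
# Minimal Hi-Lo running-count sketch. Rank labels ("2"-"10", "J", "Q", "K", "A")
# are an assumption about the detector's class names.
HI_LO = {**{str(r): +1 for r in range(2, 7)},           # 2-6  -> +1
         **{str(r): 0 for r in range(7, 10)},           # 7-9  ->  0
         **{r: -1 for r in ("10", "J", "Q", "K", "A")}}  # 10-A -> -1

def running_count(detected_ranks):
    """Sum the Hi-Lo values of every card rank seen so far."""
    return sum(HI_LO.get(rank, 0) for rank in detected_ranks)

print(running_count(["2", "K", "7", "A", "5"]))  # -> 0
```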
About Augmented Startups
We are at the forefront of Artificial Intelligence in computer vision. With over 90k subscribers on YouTube, we embark on fun and innovative projects in this field and create videos and courses so that anyone can become an expert. Our vision is to create a world full of inventors who can turn their dreams into reality.
CSGO AIMBOT
Go Win
Trained on 5.9k Images
This dataset was created by Harry Field and contains labelled images for capturing the game state of an 8x8 draughts/checkers board.
This was a fun project to develop a mobile draughts application that enables users to interact with draughts-based software via their mobile device's camera.
The data captured consists of:
- White Pieces
- White Kings
- Black Pieces
- Black Kings
- Bottom left corner square
- Top left corner square
- Top right corner square
- Bottom right corner square
Corner squares are captured so the board locations of the detected pieces can be estimated.
From this data, the locations of other squares can be estimated and game state can be captured. The image below shows the data of a different board configuration being captured. Blue circles refer to squares, numbers refer to square index and the coloured circles refer to pieces.
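As a rough illustration of that estimation step (not necessarily the author's exact method), the sketch below bilinearly interpolates an 8x8 grid of square centres from the four detected corner squares and snaps each detected piece to the nearest square. Inputs are assumed to be box centres in pixel coordinates.

```python
import numpy as np

def grid_centres(bl, tl, tr, br, n=8):
    """Bilinearly interpolate an n x n grid of square centres from the four
    corner-square centres (bottom-left, top-left, top-right, bottom-right)."""
    bl, tl, tr, br = (np.asarray(p, dtype=float) for p in (bl, tl, tr, br))
    centres = np.zeros((n, n, 2))
    for i in range(n):                 # rows, 0 = top
        for j in range(n):             # columns, 0 = left
            u, v = j / (n - 1), i / (n - 1)
            top = (1 - u) * tl + u * tr
            bottom = (1 - u) * bl + u * br
            centres[i, j] = (1 - v) * top + v * bottom
    return centres

def square_index(piece_xy, centres):
    """Return (row, col) of the grid square closest to a detected piece centre."""
    d = np.linalg.norm(centres - np.asarray(piece_xy, dtype=float), axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

centres = grid_centres(bl=(60, 540), tl=(60, 60), tr=(540, 60), br=(540, 540))
print(square_index((70, 65), centres))  # piece near the top-left corner -> (0, 0)
```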
Once game state is captured, integration with other software becomes possible. In this example, I created a simple move suggestion mobile application seen working here.
The developed application is a proof of concept and is not available to the public. Further development is required to train the model across multiple draughts boards and to implement features that add value to the physical draughts game.
The dataset consists of 759 images and was trained using YOLOv5 with a 70/20/10 train/validation/test split.
The output of YOLOv5 was parsed and filtered to correct for duplicated/overlapping detections before game state could be determined.
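The duplicate/overlap filtering could look something like the following sketch: keep the highest-confidence detection and drop any same-class box that overlaps it beyond an IoU threshold (the 0.5 threshold is an assumption, not necessarily the author's setting).

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def dedupe(detections, thresh=0.5):
    """detections: list of (box, confidence, class_name); highest confidence wins."""
    kept = []
    for det in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(det[2] != k[2] or iou(det[0], k[0]) < thresh for k in kept):
            kept.append(det)
    return kept
```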
I hope you find this dataset useful and if you have any questions feel free to drop me a message on LinkedIn as per the link above.
Overview
We have captured and annotated photos of the popular board game, Boggle. Images are predominantly from 4x4 Boggle with about 30 images from Big Boggle (5x5).
- 357 images
- 7110 annotated letter cubes
These images are released for you to use in training your machine learning models.
Use Cases
We used this dataset to create BoardBoss, an augmented reality board game helper app. You can download BoardBoss in the App Store for free to see the end result! The model trained from this dataset was paired with some heuristics to recreate the board state and overlay it with an AR representation. We then used a traditional recursive backtracking algorithm to find and show the best words on the board.
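For reference, a compact recursive backtracking solver of the kind described above might look like this; the board letters and tiny word list are illustrative only and are not taken from BoardBoss.

```python
def find_words(board, words):
    """Find every word from `words` that can be traced on `board`
    by moving to adjacent (including diagonal) cells without reuse."""
    n = len(board)
    prefixes = {w[:i] for w in words for i in range(1, len(w) + 1)}
    found = set()

    def dfs(r, c, path, visited):
        path += board[r][c]
        if path not in prefixes:       # prune dead-end branches early
            return
        if path in words:
            found.add(path)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in visited:
                    dfs(nr, nc, path, visited | {(nr, nc)})

    for r in range(n):
        for c in range(n):
            dfs(r, c, "", {(r, c)})
    return found

board = [list("tape"), list("arin"), list("gesk"), list("lobs")]
print(sorted(find_words(board, {"tap", "rain", "ape", "ring", "lobs"})))
# -> ['ape', 'lobs', 'rain', 'tap']  ("ring" is not traceable on this board)
```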
Using this Dataset
We're releasing the data as public domain. Feel free to use it for any purpose. It's not required to provide attribution, but it'd be nice!
About Roboflow
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless.
Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.
Overview
This is a dataset of chess board photos and various pieces. All photos were captured from a constant angle, with a tripod to the left of the board. The bounding boxes of all pieces are annotated with the following classes: white-king, white-queen, white-bishop, white-knight, white-rook, white-pawn, black-king, black-queen, black-bishop, black-knight, black-rook, black-pawn. There are 2894 labels across 292 images.
Follow this tutorial to see an example of training an object detection model using this dataset or jump straight to the Colab notebook.
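If you prefer to script it, a hedged sketch of the same workflow with the Roboflow Python package and YOLOv5 is shown below; the workspace/project slugs and the hyperparameters are placeholders, so copy the exact identifiers from this dataset's page.

```python
from roboflow import Roboflow  # pip install roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
# Workspace/project slugs below are placeholders - use the ones on this dataset's page.
project = rf.workspace("your-workspace").project("chess-pieces")
dataset = project.version(1).download("yolov5")  # writes images, labels and data.yaml

# Then, from a clone of https://github.com/ultralytics/yolov5:
#   python train.py --img 416 --batch 16 --epochs 100 \
#       --data <dataset.location>/data.yaml --weights yolov5s.pt
```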
Use Cases
At Roboflow, we built a chess piece object detection model using this dataset.
You can see a video demo of that here. (We did struggle with occluded pieces; for example, at the very beginning of a game many pieces are obscured. Let us know how your results fare!)
Using this Dataset
We're releasing the data free under a public license.
About Roboflow
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless.
Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.
A synthetic dataset of black and white stones on Go boards, generated using Unity Perception.
Use Case
The goal is to be able to take a picture of a Go game and figure out the position of each stone, in order to score the game or analyze it with an AI. Project inspiration stems from this blog post, along with past ideas we've had for this: https://blog.roboflow.com/chess-boards/. A rough sketch of the stone-to-grid mapping follows the class list below.
Classes
- blackStone: black Go stones, 90,501 labels
- whiteStone: white Go stones, 89,963 labels
- grid: cross-section grid of a Go board, 1,000 labels
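Here is a rough sketch of how those classes could be turned into a board position: evenly space the intersections inside the detected grid box and snap each stone centre to the nearest intersection. It assumes a single grid box spans the full playing area and a standard 19x19 board, neither of which is guaranteed by the dataset.

```python
import numpy as np

def board_position(grid_box, stones, size=19):
    """grid_box: (x1, y1, x2, y2) of the detected grid;
    stones: list of (cx, cy, class_name) detection centres."""
    xs = np.linspace(grid_box[0], grid_box[2], size)
    ys = np.linspace(grid_box[1], grid_box[3], size)
    board = [["." for _ in range(size)] for _ in range(size)]
    for cx, cy, name in stones:
        col = int(np.argmin(np.abs(xs - cx)))
        row = int(np.argmin(np.abs(ys - cy)))
        board[row][col] = "B" if name == "blackStone" else "W"
    return board

pos = board_position((50, 50, 950, 950),
                     [(52, 48, "blackStone"), (520, 500, "whiteStone")])
print(pos[0][0], pos[9][9])  # -> B W
```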
Double twelve dominoes detection
Overview
Made as a side project after my friends and I started getting into playing Mexican Train Dominoes. This is the dataset used for the model behind a website I made to keep track of my score at the end of each round.
https://pip-tracker.netlify.app/
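The end-of-round tally itself is simple: in Mexican Train, your round score is the total pip count left in your hand. The sketch below assumes tile labels of the form "12-7" (pips on each half), which may not match this dataset's actual class names.

```python
def round_score(detected_tiles):
    """detected_tiles: labels for the tiles left in a hand, e.g. '12-7' or '0-0'."""
    return sum(int(a) + int(b) for a, b in (t.split("-") for t in detected_tiles))

print(round_score(["12-7", "6-4", "0-0"]))  # -> 29
```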
About me
I'm a Software Engineer working for Google at the intersection between News and Search. I mostly work on Web projects but have tinkered in embedded apps, games, automation, and (most recently) ML over the years.
https://twitter.com/ricky_hartmann
https://github.com/hartmannr76
Background Information
This dataset was created as part of the World's Largest Game of Rock, Paper, Scissors talk and challenge introduced by Joseph Nelson and Salo Levy @ SXSW 2023.
- the above image is linked to the entry page
- the demo video was prepared from the Deploy Tab and utilizes v11 (YOLOv8n, 100 epochs)
The dataset includes an aggregation of images cloned from the following datasets:
- https://universe.roboflow.com/brad-dwyer/egohands-public/ - null images
- https://universe.roboflow.com/presentations/rock-paper-scissors-presentation/
- https://universe.roboflow.com/team-roboflow/rock-paper-scissors-detection
- universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/
New images were added to the dataset and labeled to supplement the examples from the cloned datasets. Members of Team Roboflow, and close friends of the team, are included in the dataset to help create a more robust, generalized model.
Participation Rules and FAQ
- the above image is linked to the FAQ and contest entry page
Make dum dum computer see ores for no particular reason whatsoever.
Contribute:
- Join Galaxy: https://www.roblox.com/games/200330616/Galaxy
- Take screenshots of yourself mining.
- Upload to here via the "Upload" button.
- Done!
Additional Contribution:
- Sign up for Labeller role in our Discord.
- Wait for me to assign you images to label.
Rummy Tiles Detector is a dataset of 171 images of rummy tiles with pre-trained models. Version 2 was the best-performing model for me; you can try it yourself!
Here is the YouTube video explaining the project: https://www.youtube.com/watch?v=0XYCpQxcG4o
-Ahmet Aksünger
Lost Cities Cards
So, a few days ago I bought Lost Cities, a game by Reiner Knizia, and got totally hooked. But scoring the game is a bit hard, so let's try to come up with a model that can identify the set and a small program that can calculate the score (a first pass at that calculation is sketched below the class list).
The following 11 classes are used:
- 2-10: number cards
- w: a bet (wager) card
- set: a set of cards that together make up the score
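As promised, a first pass at the score calculator, using the class names above ("2"-"10" for number cards, "w" for a bet/wager card) and the standard Lost Cities rule: (card sum - 20) x (1 + wagers), plus a 20-point bonus for expeditions of 8 or more cards.

```python
def score_set(cards):
    """cards: detected labels for one expedition, e.g. ['w', '2', '5', '10']."""
    if not cards:
        return 0
    wagers = sum(1 for c in cards if c == "w")
    total = sum(int(c) for c in cards if c != "w")
    score = (total - 20) * (1 + wagers)
    if len(cards) >= 8:          # expedition bonus for 8+ cards
        score += 20
    return score

print(score_set(["w", "2", "5", "10"]))                     # (17 - 20) * 2 = -6
print(score_set(["w", "3", "4", "5", "6", "7", "8", "9"]))  # (42 - 20) * 2 + 20 = 64
```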
Thanks to Erik Dekker for sending some images my way. If you have more images for me; please let me know: https://twitter.com/keestalkstech
Halo Infinite: Spartan Dataset
Classifications
There are four classifications:
- Enemy
- Enemy Head
- Friendly
- Friendly Head
Image Settings
Images are 320 x 320 px, centered on the targeting reticle.
Game Settings
Images were gathered on low graphics settings. Enemies are set to the Pineapple color and allies are the default blue.
Attribution and License
This dataset was created and annotated by Graham Doerksen and is available under the CC BY 4.0 license.
When learning to play Dreidel, I would sometimes forget the name of each character and the action it corresponds to in the game. I thought it'd be fun to create a computer vision model that could understand what each symbol on a Dreidel is, making it easier to learn to play the game.
This model tracks the dreidel as it spins and detects the letters on the four-sided dreidel.
How to Play Dreidel
Rules:
1. The players are dealt gelt (chocolate wrapped in gold paper made to look like a coin).
2. Each player takes a turn spinning the Dreidel.
3. The Dreidel has four sides, each prompting an action for the spinner:
   - If נ (nun) is facing up, the player does nothing.
   - If ג (gimel) is facing up, the player gets everything in the pot.
   - If ה (hay) is facing up, the player gets half of the pieces in the pot.
   - If ש (shin) is facing up, the player adds one of their gelt to the pot.
4. The winner, of course, gets to eat all the gelt.
Hopefully, with this model, one can create an application that teaches someone how to play dreidel.
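A tiny sketch of the game logic such an application needs, mapping the detected letter to the action described in the rules above (the class names "nun", "gimel", "hay", "shin" are assumed; swap in whatever labels the model uses):

```python
ACTIONS = {
    "nun":   "do nothing",
    "gimel": "take everything in the pot",
    "hay":   "take half of the pot",
    "shin":  "add one piece of gelt to the pot",
}

def action_for(detected_letter):
    """Map a detected class name to the action it prompts."""
    return ACTIONS.get(detected_letter.lower(), "unknown letter")

print(action_for("Gimel"))  # -> take everything in the pot
```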
Real-Life Valorant Gameplay Experience
This dataset was used to create an AI for Valorant that would smoke and flash me in real life.
Full Video: https://youtu.be/aopXw22iL1M
Background Information
This dataset was curated and annotated by Find This Base. It is a custom dataset composed of 16 classes from the popular mobile game Clash of Clans.
- Classes: Canon, WizzTower, Xbow, AD, Mortar, Inferno, Scattershot, AirSweeper, BombTower, ClanCastle, Eagle, KingPad, QueenPad, RcPad, TH13 and WardenPad.
The original custom dataset (v1) is composed of 125 annotated images.
The dataset is available under the CC BY 4.0 license.
Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
Dataset Versions
Version 1 (v1) - 125 images
- Preprocessing - Auto-Orient and Resize: Fit (black edges) to 640x640
- Augmentations - No augmentations applied
- Training Metrics - Trained from Scratch (no checkpoint used) on Roboflow
- mAP = 83.1%, precision = 43.0%, recall = 99.1%
Version 4 (v4) - 301 images
- Preprocessing - Auto-Orient and Resize: Fit (black edges) to 640x640
- Augmentations - Mosaic
- Generated Images - Outputs per training example: 3
- Training Metrics - Trained from Scratch (no checkpoint used) on Roboflow
- mAP = %, precision = %, recall = %
Find This Base: Official Website | How to Use Find This Base | Discord | Patreon
Apex Enemy Detection
Use this PRE-TRAINED MODEL to create an aimbot and identify enemies
Use your webcam to infer, or use the hosted inference API
More deployment options are available
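A hedged sketch of the webcam + hosted inference API flow might look like this, using the Roboflow Python package; the workspace/project slugs and version number are placeholders, so copy the real ones from this model's page.

```python
import cv2                      # pip install opencv-python
from roboflow import Roboflow   # pip install roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
# Placeholder slugs/version - copy the real ones from the model's page.
model = rf.workspace("your-workspace").project("apex-enemy-detection").version(1).model

cap = cv2.VideoCapture(0)       # webcam
ok, frame = cap.read()
cap.release()
if ok:
    cv2.imwrite("frame.jpg", frame)   # the hosted API call takes an image file
    result = model.predict("frame.jpg", confidence=40, overlap=30).json()
    for pred in result["predictions"]:
        print(pred["class"], pred["confidence"], pred["x"], pred["y"])
```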
Overview
This dataset contains 8,992 images of Uno cards and 26,976 labeled examples on various textured backgrounds.
This dataset was collected, processed, and released by Roboflow user Adam Crawshaw under a modified MIT license: https://firstdonoharm.dev/
Use Cases
Adam used this dataset to create an auto-scoring Uno application.
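The scoring rule such an app needs is standard Uno scoring: number cards count face value, Draw Two/Skip/Reverse count 20, and Wilds count 50. The label names in the sketch below are assumptions; map them to this dataset's actual class names.

```python
def card_points(label):
    if label.isdigit():                        # "0"-"9" number cards
        return int(label)
    if label in ("draw-two", "skip", "reverse"):
        return 20
    if label in ("wild", "wild-draw-four"):
        return 50
    return 0

def hand_score(labels):
    """Total points of the cards left in an opponent's hand."""
    return sum(card_points(label) for label in labels)

print(hand_score(["7", "0", "skip", "wild"]))  # -> 77
```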
Getting Started
Fork or download this dataset and follow our tutorial on how to train a state-of-the-art YOLOv4 object detector for more.
Annotation Guide
See here for how to use the CVAT annotation tool.
About Roboflow
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.
Apex Legends, S15 Detection
Built With
- Roboflow
- Python
- YOLOv7
- OpenCV
- mss
- PyTorch
Usage
** NOT IMPLEMENTED **
- To download a demo file, clone the following code:
- git clone https://example.com
- python main.py
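Since the usage section is not implemented yet, here is a rough sketch of the capture loop the listed stack (mss + OpenCV + a YOLOv7 model) implies; `detect` is a hypothetical placeholder for however the trained weights end up being loaded.

```python
import cv2
import mss
import numpy as np

def detect(frame):
    """Hypothetical placeholder: run the trained YOLOv7 weights here and
    return a list of (x1, y1, x2, y2, label) detections."""
    return []

with mss.mss() as sct:
    monitor = sct.monitors[1]                    # primary screen
    while True:
        frame = np.array(sct.grab(monitor))      # BGRA screenshot
        frame = cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR)
        for x1, y1, x2, y2, label in detect(frame):
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, label, (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cv2.destroyAllWindows()
```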
Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please email me and we can add your dataset to the model. You can also clone this project and make changes yourself!
Don't forget to give the project a star! Thanks again!
License
Distributed under the MIT License. See LICENSE.txt for more information.
Contact
Email: fitzgeralderik.k@gmail.com
Acknowledgments
Valorant Head/Body Aimbot
This trained model uses our dataset, which focuses on annotating the heads and bodies of enemies in Valorant.
Overview
We have captured and annotated photos of six-sided dice. There are 359 total images from a few sets:
- 154 single dice of various styles on a white table
- 388 Catan Dice (Red and Yellow, some rolled on a white table, 160 on top of or near the Catan board)
- 13 mass groupings of dice in various styles
These images are released for you to use in training your machine learning models. Classes are generally balanced. Here's the output of Roboflow's Dataset Health Check:
Use Cases
This would be a great dataset to test out different object detection models like YOLOv3, Mask R-CNN, MobileNet, or others.
You could use it to create dice game helper apps (like a dice counter) or independent games.
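The dice counter idea reduces to summing the face values the detector returns. The sketch below assumes the class names are simply "1" through "6"; adjust if the dataset uses different labels.

```python
def dice_total(detected_faces):
    """Sum the face values of every die the model detects in a frame."""
    return sum(int(face) for face in detected_faces)

print(dice_total(["6", "3", "1", "5"]))  # -> 15
```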
Using this Dataset
We're releasing the data as public domain. Feel free to use it for any purpose. It's not required to provide attribution, but it'd be nice!
About Roboflow
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless.
Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.