Cars Dataset: A Rich Source of Data for Car Enthusiasts and Researchers
The data is composed of 14 bits for the x position, 14 bits for the y position, and 1 bit for the polarity (encoded as -1/1). See the MATLAB script inside the downloaded package for more details on how to read the data.
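A decoder for this 14 + 14 + 1 bit format can be sketched as follows. The exact bit ordering is an assumption here (x in the low bits, then y, then polarity in the highest bit); the MATLAB script in the package is the authoritative reference.

```python
def decode_event(word):
    """Decode one event word into (x, y, polarity).

    Assumed layout (verify against the MATLAB script):
    bits 0-13 = x, bits 14-27 = y, bit 28 = polarity,
    stored as 0/1 and mapped to -1/+1.
    """
    x = word & 0x3FFF             # low 14 bits
    y = (word >> 14) & 0x3FFF     # next 14 bits
    p = (word >> 28) & 0x1        # 1 polarity bit
    polarity = 1 if p else -1
    return x, y, polarity
```

For example, a word encoding x=100, y=200 with positive polarity decodes back to `(100, 200, 1)`.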
What seems to happen is that the annotation file (cars_annos.mat) that comes with the two folders is not the correct one: it belongs to the consolidated dataset, where all images (train and test) are in a single folder. To get the correct annotation files, you must download the two files from jkrause/cars/car_devkit.tgz.
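A loader for the consolidated annotation file can be sketched with scipy. The field names below match the commonly distributed cars_annos.mat, but verify them against your copy — the devkit version uses a different layout.

```python
import scipy.io

def load_annotations(path):
    """Read a cars_annos.mat-style file into plain Python records.

    Assumed fields per annotation: relative_im_path, bbox_x1..bbox_y2,
    class (1-indexed into class_names) and test (0/1 split flag).
    """
    mat = scipy.io.loadmat(path, squeeze_me=True)
    records = []
    for a in mat["annotations"]:
        records.append({
            "path": str(a["relative_im_path"]),
            "bbox": (int(a["bbox_x1"]), int(a["bbox_y1"]),
                     int(a["bbox_x2"]), int(a["bbox_y2"])),
            "class_id": int(a["class"]),
            "is_test": bool(a["test"]),
        })
    class_names = [str(c) for c in mat["class_names"]]
    return records, class_names
```

Printing `len(records)` and checking how many have `is_test` set is a quick way to confirm which split the file describes.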
All datasets below are provided in the form of csv files. We provide you with a class for loading these files into memory: download tableDemos.zip and uncompress it in your Processing project folder. The zip file contains the Table class (all files named Table.pde are identical) as well as examples of how to use it:
Since some of the datasets include country data, we also provide you with a file countries.csv that lists country names, country codes and vertices for drawing them on the screen. For an example of how to use this file to draw a map, download mapDemo.zip. Please note that country names in the csv file will not necessarily match all country names from your dataset.
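One way to handle the name mismatches is to normalize both sides and keep a small alias table before joining your dataset to countries.csv. This is a minimal sketch; the aliases below are illustrative, and you would extend them with the mismatches you actually encounter between the two files.

```python
# Illustrative aliases: map dataset spellings to the map file's spellings.
ALIASES = {
    "united states of america": "united states",
    "russian federation": "russia",
    "republic of korea": "south korea",
}

def normalize(name):
    """Lowercase, trim, and apply known aliases."""
    key = name.strip().lower()
    return ALIASES.get(key, key)

def match_countries(dataset_names, map_names):
    """Return {dataset name -> map name} for names that match after
    normalization, so unmatched rows can be reported rather than
    silently dropped."""
    index = {normalize(m): m for m in map_names}
    return {d: index[normalize(d)] for d in dataset_names
            if normalize(d) in index}
```

Names left out of the returned mapping (e.g. a country absent from countries.csv) are the ones to inspect by hand.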
These are datasets that we did not visualize, but the Table class loads them without any apparent problem. They are more interesting in that fewer (or no) visualizations are available online yet, and they can lead to interesting insights.
The above table is quite small and only provides the average rating for the question "How happy would you say you are these days? Rating 1 (low) to 10 (high)", broken down by country and by sex. On its own, this dataset is probably insufficient for this class project. You are encouraged to download and visualize answers to other questions as well. For this, go to the Eurofound Website, select the question on the left, then use the links at the bottom to download the csv file.
Other data per country per year can be downloaded from gapminder, such as electricity generation per person, alcohol consumption, air traffic accidents, and more classical measures such as GDP. You can possibly combine several indicators together.
Speed dating data with over 8,000 observations of matches and non-matches, with answers to survey questions about how people rate themselves and how they rate others on several dimensions. This is a large and rich dataset which might take you some time to fully understand.
These datasets have been gathered and cleaned up by Petra Isenberg, Pierre Dragicevic and Yvonne Jansen. Please acknowledge these authors when reusing content from this page, and the source data authors for external links. This page licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
The Cars dataset contains 16,185 images of 196 classes of cars. The data is split into 8,144 training images and 8,041 testing images, where each class has been split roughly 50-50. Classes are typically at the level of Make, Model, Year, e.g. 2012 Tesla Model S or 2012 BMW M3 coupe.
2015-09-25 Surveillance-nature images are released in the download links as "sv_data.*". Download all such files, then unzip them with the same password as the web-nature data. We also conducted a fine-grained classification experiment for this part of the data. The results are provided in the arXiv paper.
2015-06-30 As an extension to our CVPR paper, we conduct experiments for fine-grained car classification, attribute prediction, and car verification with the entire dataset and different deep models. See arXiv paper for the details. Train/test splits can be downloaded here. The fine-tuned GoogLeNet model is uploaded to the Caffe Model Zoo.
This dataset is presented in our CVPR 2015 paper, Linjie Yang, Ping Luo, Chen Change Loy, Xiaoou Tang. A Large-Scale Car Dataset for Fine-Grained Categorization and Verification, In Computer Vision and Pattern Recognition (CVPR), 2015. PDF
The Comprehensive Cars (CompCars) dataset contains data from two scenarios, including images from web-nature and surveillance-nature. The web-nature data contains 163 car makes with 1,716 car models. There are a total of 136,726 images capturing the entire cars and 27,618 images capturing the car parts. The full car images are labeled with bounding boxes and viewpoints. Each car model is labeled with five attributes, including maximum speed, displacement, number of doors, number of seats, and type of car. The surveillance-nature data contains 50,000 car images captured in the front view. Please refer to our paper for the details.
The train/test subsets of these tasks introduced in our paper are included in the dataset. Researchers are also welcome to use it for any other tasks such as image ranking, multi-task learning, and 3D reconstruction.
Roboflow hosts the world's biggest set of open-source car datasets and pre-trained computer vision models. The category includes images of cars from around the world, curated and annotated by the Roboflow Community. These projects can help you get started with things like object speed calculation, object tracking, autonomous vehicles, and smart-city transportation innovations.
The classes are typically at the level of Make, Model, Year, e.g. 2012 Tesla Model S or 2012 BMW M3 coupe in the original dataset, and in this subset of the full dataset (v3, TestData and v4, original_raw-images).
Ishaan ultimately used this dataset to create a "Drone Surveillance" system to count the cars using YOLOv5 & Deep SORT (Simple Online and Realtime Tracking with a Deep Association Metric) for a contest organized by ComputerVisionZone.
The KAIST dataset is originally based on thermal infrared and corresponding RGB image pairs. I have made a model that converts night-time infrared to day-time RGB using TIC-cGAN ( ). I trained the YOLOv5 model on daytime RGB images and tested it on fake day-time images generated from night-time infrared input, to check the mAP of the proposed approach.
The PKLot dataset contains 12,416 images of parking lots extracted from surveillance camera frames. There are images on sunny, cloudy, and rainy days and the parking spaces are labeled as occupied or empty. We have converted the original annotations to a variety of standard object detection formats by enclosing a bounding box around the original dataset's rotated rectangle annotations.
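The rotated-rectangle-to-bounding-box conversion described above can be sketched as follows. The (center, size, angle) representation is an assumption about how the rotated rectangles are stored; adapt the parsing to the actual annotation files.

```python
import math

def enclosing_bbox(cx, cy, w, h, angle_deg):
    """Axis-aligned box (x_min, y_min, x_max, y_max) that encloses a
    rectangle of size (w, h) centered at (cx, cy) and rotated by
    angle_deg degrees."""
    a = math.radians(angle_deg)
    cos_a, sin_a = abs(math.cos(a)), abs(math.sin(a))
    # Half-extents of the rotated rectangle's projection on each axis.
    half_w = (w * cos_a + h * sin_a) / 2.0
    half_h = (w * sin_a + h * cos_a) / 2.0
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

At 0° the result is just the rectangle itself; at 90° width and height swap, and at intermediate angles the enclosing box is strictly larger than the rotated rectangle, which is the price of converting to standard detection formats.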
The original Udacity Self Driving Car Dataset is missing labels for thousands of pedestrians, bikers, cars, and traffic lights. This will result in poor model performance. When used in the context of self-driving cars, this could even lead to human fatalities.
All images are 1920x1200 (download size 3.1 GB). We have also provided a version downsampled to 512x512 (download size 580 MB) that is suitable for most common machine learning models (including YOLO v3, Mask R-CNN, SSD, and mobilenet).
Note: the dataset contains many duplicated bounding boxes for the same subject which we have not corrected. You will probably want to filter them by computing the IoU between same-class boxes and dropping those that overlap 100%, or the duplicates could affect your model's performance (particularly in stoplight detection, which seems to suffer from an especially severe case of duplicated bounding boxes).
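The IoU-based filter suggested above can be sketched like this. The `(class, box)` input format is an assumption; adapt it to whatever annotation format you export.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def drop_duplicates(boxes, threshold=1.0):
    """Keep the first of any group of same-class boxes whose IoU meets
    the threshold. At 1.0 only exact overlaps count as duplicates;
    lower it to also merge near-duplicates."""
    kept = []
    for cls, box in boxes:
        if not any(c == cls and iou(box, b) >= threshold
                   for c, b in kept):
            kept.append((cls, box))
    return kept
```

Boxes of different classes are never merged, so a car and a stoplight annotated at the same location both survive the filter.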
We took 3 different cars: a minivan, a wagon, and an SUV. We then recorded road scenes of Dhaka at the same time from 3 different perspectives. Each of the 3 cars stayed in a different lane and had 3 camera angles, but all cruised through the same roads at the same time. We kept recording for 1 hour and 15 minutes.
We take advantage of our autonomous driving platform Annieway to develop novel challenging real-world computer vision benchmarks. Our tasks of interest are: stereo, optical flow, visual odometry, 3D object detection and 3D tracking. For this purpose, we equipped a standard station wagon with two high-resolution color and grayscale video cameras. Accurate ground truth is provided by a Velodyne laser scanner and a GPS localization system. Our datasets are captured by driving around the mid-size city of Karlsruhe, in rural areas and on highways. Up to 15 cars and 30 pedestrians are visible per image. Besides providing all data in raw format, we extract benchmarks for each task. For each of our benchmarks, we also provide an evaluation metric and this evaluation website. Preliminary experiments show that methods ranking high on established benchmarks such as Middlebury perform below average when moved outside the laboratory to the real world. Our goal is to reduce this bias and complement existing benchmarks by providing real-world benchmarks with novel difficulties to the community. To get started, grab a cup of your favorite beverage and watch our video trailer (5 minutes):
All datasets and benchmarks on this page are copyright by us and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.
25.2.2021: We have updated the evaluation procedure for Tracking and MOTS. Evaluation now uses the HOTA metrics and is performed with the TrackEval codebase.
04.12.2019: We have added a novel benchmark for multi-object tracking and segmentation (MOTS)!
18.03.2018: We have added novel benchmarks for semantic segmentation and semantic instance segmentation!
11.12.2017: We have added novel benchmarks for depth completion and single image depth prediction!
26.07.2017: We have added novel benchmarks for 3D object detection including 3D and bird's eye view evaluation.
26.07.2016: For flexibility, we now allow a maximum of 3 submissions per month and count submissions to different benchmarks separately.
29.07.2015: We have released our new stereo 2015, flow 2015, and scene flow 2015 benchmarks. In contrast to the stereo 2012 and flow 2012 benchmarks, they provide more difficult sequences as well as ground truth for dynamic objects. We hope for numerous submissions :)
09.02.2015: We have fixed some bugs in the ground truth of the road segmentation benchmark and updated the data, devkit and results.
11.12.2014: Fixed the bug in the sorting of the object detection benchmark (ordering should be according to moderate level of difficulty).
04.09.2014: We are organizing a workshop on reconstruction meets recognition at ECCV 2014!
31.07.2014: Added colored versions of the images and ground truth for reflective regions to the stereo/flow dataset.
30.06.2014: For detection methods that use flow features, the 3 preceding frames have been made available in the object detection benchmark.
04.04.2014: The KITTI road devkit has been updated and some bugs have been fixed in the training ground truth. The server evaluation scripts have been updated to also evaluate the bird's eye view metrics as well as to provide more detailed results for each evaluated method.
04.11.2013: The ground truth disparity maps and flow fields have been refined/improved. Thanks to Donglai for reporting!
31.10.2013: The pose files for the odometry benchmark have been replaced with a properly interpolated (subsampled) version which doesn't exhibit artefacts when computing velocities from the poses.
10.10.2013: We are organizing a workshop on reconstruction meets recognition at ICCV 2013!
03.10.2013: The evaluation for the odometry benchmark has been modified such that longer sequences are taken into account.
25.09.2013: The road and lane estimation benchmark has been released!
20.06.2013: The tracking benchmark has been released!
29.04.2013: A preprint of our IJRR data paper is available for download now!
06.03.2013: More complete calibration information (cameras, velodyne, imu) has been added to the object detection benchmark.
27.01.2013: We are looking for a PhD student in 3D semantic scene parsing (position available at MPI Tübingen).
23.11.2012: The right color images and the Velodyne laser scans have been released for the object detection benchmark.
19.11.2012: Added demo code to read and project 3D Velodyne points into images to the raw data development kit.
12.11.2012: Added pre-trained LSVM baseline models for download.
04.10.2012: Added demo code to read and project tracklets into images to the raw data development kit.
01.10.2012: Uploaded the missing oxts file for raw data sequence 2011_09_26_drive_0093.
26.09.2012: The velodyne laser scan data has been released for the odometry benchmark.
11.09.2012: Added more detailed coordinate transformation descriptions to the raw data development kit.
26.08.2012: For transparency and reproducibility, we have added the evaluation codes to the development kits.
24.08.2012: Fixed an error in the OXTS coordinate system description. Plots and readme have been updated.
19.08.2012: The object detection and orientation estimation evaluation goes online!
24.07.2012: A section explaining our sensor setup in more detail has been added.
23.07.2012: The color image data of our object benchmark has been updated, fixing the broken test image 006887.png.
04.07.2012: Added error evaluation functions to stereo/flow development kit, which can be used to train model parameters.
03.07.2012: Don't care labels for regions with unlabeled objects have been added to the object dataset.
02.07.2012: Mechanical Turk occlusion and 2D bounding box corrections have been added to raw data labels.
28.06.2012: The minimum time enforced between submissions has been increased to 72 hours.
27.06.2012: Solved some security issues. Login system now works with cookies.
02.06.2012: The training labels and the development kit for the object benchmarks have been released.
29.05.2012: The images for the object detection and orientation estimation benchmarks have been released.
28.05.2012: We have added the average disparity / optical flow errors as additional error measures.
27.05.2012: Large parts of our raw data recordings have been added, including sensor calibration.
08.05.2012: Added color sequences to visual odometry benchmark downloads.
24.04.2012: Changed colormap of optical flow to a more representative one (new devkit available). Added references to method rankings.
23.04.2012: Added paper references and links of all submitted methods to ranking tables. Thanks to Daniel Scharstein for suggesting!
05.04.2012: Added links to the most relevant related datasets and benchmarks for each category.
04.04.2012: Our CVPR 2012 paper is available for download now!
20.03.2012: The KITTI Vision Benchmark Suite goes online, starting with the stereo, flow and odometry benchmarks.