How to use the LabelMe toolbox

1. Sogou Labs dataset:
An internet image library built from part of the data indexed by Sogou image search. It covers categories such as people, animals, buildings, machinery, landscapes, and sports, for a total of 2,836,535 images. For each image, the dataset provides the original image, a thumbnail, the source web page, and the related text from that page. Over 200 GB in total.
ImageCLEF aims to provide benchmarks for image-related tasks (retrieval, classification, annotation, etc.) and has held a yearly competition since 2003.
Datasets maintained by Xiaorong Li (PhD, Intelligent Systems Lab Amsterdam; research on video and image retrieval):
- A collection of 3.5 million social-tagged images.
- A ground-truth set for tag-based social image retrieval.
- A ground-truth set for retrieving bi-concepts (concept pairs) in unlabeled images.
- A set of negative examples automatically harvested from social-tagged images for 20 PASCAL VOC concepts.
Wikipedia featured articles: images (with extracted features) plus the corresponding wiki text. See the paper "A New Approach to Cross-Modal Multimedia Retrieval", and also "On the Role of Correlation and Abstraction in Cross-Modal Multimedia Retrieval"; no download link is available yet.
To our knowledge, this is the largest real-world web image dataset comprising over 269,000 images with over 5,000 user-provided tags, and ground-truth of 81 concepts for the entire dataset. The dataset is much larger than the popularly available Corel and Caltech 101 datasets. Though some datasets comprise over 3 million images, they only have ground-truth for a small fraction of images. Our proposed NUS-WIDE dataset has the ground-truth for the entire dataset.
Jegou's datasets. Jegou works specifically on CBIR: the images come with retrieval ground truth but no annotations.
VGG's Oxford Buildings dataset, also CBIR-specific data.
The dataset for the Microsoft Image Grand Challenge on Image Retrieval
Next, the datasets compiled on cvpaper:
Participate in Reproducible Research
Detection
- Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
- LabelMe is a web-based image annotation tool that allows researchers to label images and share the annotations with the rest of the community. If you use the database, we only ask that you contribute to it, from time to time, by using the labeling tool. (A minimal annotation-parsing sketch follows this list.)
- 1521 images with human faces, recorded under natural conditions, i.e. varying illumination and complex background. The eye positions have been set manually.
- Cars, Motorcycles, Airplanes, Faces, Leaves, Backgrounds
- Pictures of objects belonging to 101 categories
- Pictures of objects belonging to 256 categories
- 15,560 pedestrian and non-pedestrian samples (image cut-outs) and 6744 additional full images not containing pedestrians for bootstrapping. The test set contains more than 21,790 images with 56,492 pedestrian labels (fully visible or partially occluded), captured from a vehicle in urban traffic.
- CVC Pedestrian Datasets
- CBCL Pedestrian Database
- CBCL Face Database
- CBCL Car Database
- CBCL Street Database
- A large set of marked up images of standing or walking people
- A set of car and non-car images taken in a parking lot nearby INRIA
- A set of horse and non-horse images
- 3D skeletons and segmented regions for 1000 people in images
- A large-scale vehicle detection dataset
- 10000 images of natural scenes, with 37 different logos, and 2695 logo instances, annotated with a bounding box.
- 10000 images of natural scenes grabbed on Flickr, with 2695 logo instances cut and pasted from the BelgaLogos dataset.
- The dataset FlickrLogos-32 contains photos depicting logos and is meant for the evaluation of multi-class logo detection/recognition as well as logo retrieval methods on real-world images. It consists of 8240 images downloaded from Flickr.
- 30000+ frames with vehicle rear annotation and classification (cars and trucks) on motorway/highway sequences. Annotation semi-automatically generated using laser-scanner data. Distance estimation and consistent target ID over time available.
- Phos is a color image database of 15 scenes captured under different illumination conditions. More particularly, every scene of the database contains 15 different images: 9 images captured under various strengths of uniform illumination, and 6 images under different degrees of non-uniform illumination. The images contain objects of different shape, color and texture and can be used for illumination invariant feature detection and selection.
- California-ND contains 701 photos taken directly from a real user's personal photo collection, including many challenging non-identical near-duplicate cases, without the use of artificial image transformations. The dataset is annotated by 10 different subjects, including the photographer, regarding near duplicates.
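Since this page's title asks how to use the LabelMe toolbox, a short illustration is in order. The LabelMe web tool stores each image's labels as an XML file, and the sketch below reads one with Python's standard library. It is a minimal sketch, not the official toolbox: the tag layout (annotation/object with name, deleted flag, and polygon/pt/x,y) follows the commonly distributed LabelMe format, and example.xml is a placeholder path.

```python
# Minimal sketch: reading one LabelMe XML annotation with Python's standard
# library. Assumes the usual LabelMe layout (<annotation><object><name>,
# <polygon><pt><x>,<y>); adjust the tag paths if your files differ.
import xml.etree.ElementTree as ET

def read_labelme_annotation(xml_path):
    """Return a list of (object_name, polygon) pairs from a LabelMe XML file."""
    root = ET.parse(xml_path).getroot()          # root tag is <annotation>
    objects = []
    for obj in root.findall("object"):
        name = obj.findtext("name", default="").strip()
        # Skip objects flagged as deleted in the web tool, if the flag is present.
        if obj.findtext("deleted", default="0").strip() == "1":
            continue
        polygon = [(float(pt.findtext("x")), float(pt.findtext("y")))
                   for pt in obj.findall("polygon/pt")]
        objects.append((name, polygon))
    return objects

if __name__ == "__main__":
    for name, poly in read_labelme_annotation("example.xml"):  # placeholder file
        print(name, len(poly), "points")
```

The official MATLAB toolbox provides query and plotting helpers over the same XML (e.g. LMdatabase and LMquery); reading the files directly, as above, is often enough for simple pipelines.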
Classification
- Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
- Cars, Motorcycles, Airplanes, Faces, Leaves, Backgrounds
- Pictures of objects belonging to 101 categories
- Pictures of objects belonging to 256 categories
- A dataset for testing object class detection algorithms. It contains 255 test images and features five diverse shape-based classes (apple logos, bottles, giraffes, mugs, and swans).
- 17 Flower Category Dataset
- A dataset for Attribute Based Classification. It consists of 30475 images of 50 animals classes with six pre-extracted feature representations for each image.
- Dataset of 20,580 images of 120 dog breeds with bounding-box annotation, for fine-grained image categorization.
Recognition
- Face and Gesture Recognition Working Group FGnet
- 9971 images of 100 people
- A database of face photographs designed for studying the problem of unconstrained face recognition
- Traffic Lights Recognition, Lara's public benchmarks.
- The PubFig database is a large, real-world face dataset consisting of 58,797 images of 200 people collected from the internet. Unlike most other existing face datasets, these images are taken in completely uncontrolled situations with non-cooperative subjects.
- The data set contains 3,425 videos of 1,595 different people. The shortest clip duration is 48 frames, the longest clip is 6,070 frames, and the average length of a video clip is 181.3 frames.
- The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as body-part locations, and the associated gesture to be recognized by the system.
- This dataset contains 250 pedestrian image pairs + 775 additional images captured in a busy underground station for research on person re-identification.
- Face tracks, features and shot boundaries from our latest CVPR 2013 paper. It is obtained from 6 episodes of Buffy the Vampire Slayer and 6 episodes of Big Bang Theory.
- ChokePoint is a video dataset designed for experiments in person identification/verification under real-world surveillance conditions. The dataset consists of 25 subjects (19 male and 6 female) in portal 1 and 29 subjects (23 male and 6 female) in portal 2.
Tracking
- Walking pedestrians in busy scenarios from a bird's-eye view
- Three pedestrian crossing sequences
- The set was recorded in Zurich, using a pair of cameras mounted on a mobile platform. It contains 12,298 annotated pedestrians in roughly 2,000 frames.
- BMP image sequences.
- Data sets for tracking in aerial image sequences.
- MIT traffic data set is for research on activity analysis and crowded scenes. It includes a traffic video sequence 90 minutes long, recorded by a stationary camera.
Segmentation
- Ground truth database of 50 images with: Data, Segmentation, Labelling - Lasso, Labelling - Rectangle
- Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
- Cows for object segmentation, Five video sequences for motion segmentation
- Geometric Context Dataset: pixel labels for seven geometric classes for 300 images
- This dataset contains videos of crowds and other high density moving objects. The videos are collected mainly from the BBC Motion Gallery and Getty Images website. The videos are shared only for research purposes. Please consult the terms and conditions of use of these videos from the respective websites.
- Contains hand-labelled pixel annotations for 38 groups of images, each group containing a common foreground. Approximately 17 images per group, 643 images total.
- 200 gray level images along with ground truth segmentations
- Image segmentation and boundary detection. Grayscale and color segmentations for 300 images; the images are divided into a training set of 200 images and a test set of 100 images.
- 328 side-view color images of horses that were manually segmented. The images were randomly collected from the WWW.
- 10 videos as inputs, and segmented image sequences as ground truth (see the mask-scoring sketch after this list)
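Several entries above provide pixel-accurate ground-truth masks. As a minimal sketch of how such ground truth is typically consumed, the snippet below scores a predicted binary mask against a ground-truth mask with intersection-over-union (IoU). The randomly generated masks are placeholders for real data, and this is a generic measure rather than any benchmark's official protocol.

```python
# Minimal sketch: score a predicted binary mask against segmentation ground
# truth with intersection-over-union (IoU). NumPy only; the masks below are
# random stand-ins and are assumed to be same-size binary images.
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two boolean masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:                 # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, gt).sum() / union

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((240, 320)) > 0.5   # stand-in for a predicted mask
    gt = rng.random((240, 320)) > 0.5     # stand-in for a ground-truth mask
    print(f"IoU = {iou(pred, gt):.3f}")
```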
Foreground/Background
- For evaluating background modelling algorithms
- Foreground/Background segmentation and Stereo dataset from Microsoft Cambridge
- The SABS (Stuttgart Artificial Background Subtraction) dataset is an artificial dataset for pixel-wise evaluation of background models.
Saliency Detection
- 120 Images / 20 Observers (Neil D. B. Bruce and John K. Tsotsos 2005).
- 27 Images / 40 Observers (O. Le Meur, P. Le Callet, D. Barba and D. Thoreau 2006).
- 100 Images / 31 Observers (Kootstra, G., Nederveen, A. and de Boer, B. 2008).
- 101 Images / 29 Observers (van der Linde, I., Rajashekar, U., Bovik, A.C., Cormack, L.K. 2009).
- 912 Images / 14 Observers (Krista A. Ehinger, Barbara Hidalgo-Sotelo, Antonio Torralba and Aude Oliva 2009).
- 758 Images / 75 Observers (R. Subramanian, H. Katti, N. Sebe, M. Kankanhalli and T-S. Chua 2010).
- 235 Images / 19 Observers (Jian Li, Martin D. Levine, Xiangjing An and Hangen He 2011). (See the evaluation sketch after this list.)
- ECSSD contains 1000 natural images with complex foreground or background. For each image, the ground truth mask of the salient object(s) is provided.
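The fixation datasets above pair images with eye-tracking data from the listed numbers of observers. A common use is to score a saliency map by how well it ranks fixated pixels above randomly sampled ones, an ROC-AUC style measure. The sketch below is a generic illustration with placeholder data, not the evaluation code of any of these benchmarks.

```python
# Minimal sketch of a common use for eye-fixation datasets: score a saliency
# map by how well it separates fixated pixels from randomly sampled pixels
# (an ROC-AUC style measure). Placeholder arrays stand in for real data.
import numpy as np

def fixation_auc(saliency, fixations, n_negatives=1000, seed=0):
    """ROC AUC of saliency values at fixated vs randomly sampled pixels."""
    rng = np.random.default_rng(seed)
    pos = np.array([saliency[y, x] for (y, x) in fixations])
    h, w = saliency.shape
    neg = saliency[rng.integers(0, h, n_negatives), rng.integers(0, w, n_negatives)]
    # Probability that a fixated pixel outranks a random one (ties count half).
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sal = rng.random((120, 160))              # stand-in saliency map
    fix = [(rng.integers(0, 120), rng.integers(0, 160)) for _ in range(20)]
    print(f"AUC = {fixation_auc(sal, fix):.3f}")
```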
Video Surveillance
- For the CAVIAR project a number of video clips were recorded acting out the different scenarios of interest. These include people walking alone, meeting with others, window shopping, entering and exiting shops, fighting and passing out, and, last but not least, leaving a package in a public place.
- ViSOR contains a large set of multimedia data and the corresponding annotations.
Multiview
- Multiview stereo data sets: a set of images
- Dinosaur, Model House, Corridor, Aerial views, Valbonne Church, Raglan Castle, Kapel sequence
- Oxford colleges
- Temple, Dino
- Venus de Milo, Duomo in Pisa, Notre Dame de Paris
- Dataset provided by Center for Machine Perception
- CVLab dense multi-view stereo image database
- Objects viewed from 144 calibrated viewpoints under 3 different lighting conditions
- Images from 19 sites collected from a helicopter flying around Providence, RI, USA. The imagery contains approximately a full circle around each site.
- 24 scenarios recorded with 8 IP video cameras. The first 22 scenarios contain a fall and confounding events, the last 2 contain only confounding events.
Action
- This dataset consists of a set of actions collected from various sports which are typically featured on broadcast television channels such as the BBC and ESPN. The video sequences were obtained from a wide range of stock footage websites including BBC Motion gallery and GettyImages.
- This dataset features video sequences that were obtained using a R/C-controlled blimp equipped with an HD camera mounted on a gimbal. The collection represents a diverse pool of actions featured at different heights and aerial viewpoints. Multiple instances of each action were recorded at different flying altitudes which ranged from 400-450 feet and were performed by different actors.
- It contains 11 action categories collected from YouTube.
- Walk, Run, Jump, Gallop sideways, Bend, One-hand wave, Two-hands wave, Jump in place, Jumping Jack, Skip.
- UCF50 is an action recognition dataset with 50 action categories, consisting of realistic videos taken from YouTube.
- The Action Similarity Labeling (ASLAN) Challenge.
- The dataset was captured by a Kinect device. There are 12 dynamic American Sign Language (ASL) gestures, and 10 people. Each person performs each gesture 2-3 times.
- Contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several times by 25 subjects in four different scenarios: outdoors, outdoors with scale variation, outdoors with different clothes and indoors.
- The Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video in total.
- This dataset contains 5 different collective activities (crossing, walking, waiting, talking, and queueing) and 44 short video sequences, some of which were recorded by consumer hand-held digital cameras with varying viewpoints.
- The Olympic Sports Dataset contains YouTube videos of athletes practicing different sports.
- Surveillance-type videos
- The dataset is designed to be more realistic, natural and challenging for video surveillance domains in terms of its resolution, background clutter, diversity in scenes, and human activity/event categories than existing action recognition datasets.
- Collected from various sources, mostly from movies, and a small proportion from public databases, YouTube and Google videos. The dataset contains 6849 clips divided into 51 action categories, each containing a minimum of 101 clips.
- Dataset of 9,532 images of humans performing 40 different actions, annotated with bounding-boxes.
- Fully annotated dataset of RGB-D video data and data from accelerometers attached to kitchen objects capturing 25 people preparing two mixed salads each (4.5h of annotated data). Annotated activities correspond to steps in the recipe and include phase (pre-/core-/post) and the ingredient acted upon.
Human pose/Expression
A dynamic temporal facial expression corpus extracted from movies, approximating real-world conditions.
Image stitching
Images and parameters for registration
Medical
Collection of endoscopic and laparoscopic (mono/stereo) videos and images
Misc
- ZuBuD Image Database contains 1,005 images of Zurich city buildings.
- The mall dataset was collected from a publicly accessible webcam for crowd counting and activity profiling research.
- A busy traffic dataset for research on activity analysis and behaviour understanding.
Datasets from CVonline:
Index by Topic
Action Databases
- 50 Salads - fully annotated 4.5 hour dataset of RGB-D video + accelerometer data, capturing 25 people preparing two mixed salads each (Dundee University, Sebastian Stein)
- The Action Similarity Labeling (ASLAN) database (Orit Kliper-Gross)
- (Ferda Ofli)
- (Scott Blunsden, Bob Fisher, Aroosha Laghaee)
- (Janez Pers)
- synchronised video, depth and skeleton data for 20 gaming actions captured with Microsoft Kinect (Victoria Bloom)
- Hollywood 3D - 650 3D action recognition in the wild videos, 14 action classes (Simon Hadfield)
- (Marcin Marszalek, Ivan Laptev, Cordelia Schmid)
- HumanEva: Synchronized Video and Motion Capture Dataset for Evaluation of Articulated Human Motion (Brown University)
- (Hansung Kim)
- (Paul Hosner)
- 7 types of human activity videos taken from a first-person viewpoint (Michael S. Ryoo, JPL)
- (KTH CVAP lab)
- 2 cameras, annotated, depth images (Christian Wolf, et al)
- Multicamera Human Action Video Data (Hossein Ragheb)
- (Oxford Visual Geometry Group)
- (Ross Messing)
- (Michael S. Ryoo, J. K. Aggarwal, Amit K. Roy-Chowdhury)
- (Moritz Tenorth, Jan Bandouch)
- (Alonso Patron-Perez)
- (Univ of Central Florida)
- (Kishore Reddy)
- UCF101 - 101 action classes, over 13k clips and 27 hours of video data (Univ of Central Florida)
- (Univ of Central Florida)
- Aerial camera, Rooftop camera and Ground camera (UCF Computer Vision Lab)
- (Amit K. Roy-Chowdhury)
- (Marco Cristani)
- (B. Bhanu, G. Denina, C. Ding, A. Ivers, A. Kamal, C. Ravishankar, A. Roy-Chowdhury, B. Varda)
- VIHASI dataset (userID: VIHASI, password: virtual$virtual) (Hossein Ragheb, Kingston University)
- Kinect dataset for exercise actions (Ceyhun Akgul)
- YouCook - 88 open-source YouTube cooking videos with annotations (Jason Corso)
- (Univ. of West Virginia)
Biological/Medical
- (Lauge Sorensen)
- (Eric Ehrsam)
- (Allen Institute for Brain Science et al)
- (Lappeenranta Univ of Technology)
- (Univ of Utrecht)
- (Mammographic Image Analysis Society)
- (Nicholas Edelman)
- (Univ of Groningen)
- (Digital Imaging Group of London Ontario, Shuo Li)
- (Univ of Central Florida)
- VascuSynth - 120 3D vascular tree like structures with ground truth (Mengliu Zhao, Ghassan Hamarneh)
- (Alexander Andreopoulos)
Face Databases
- 76500 frames of 17 persons using Kinect RGBD with eye positions (Sebastien Marcel)
- (Mobile Biometry MOBIO)
- (Univ of Surrey)
- (Lijun Yin, Peter Gerhardstein and teammates)
- (BioID group)
- 1000 high quality, dynamic 3D scans of faces, recorded while pronouncing a set of English sentences.
- (CMU/MIT)
- (Simon Baker)
- (Ajmal Mian)
- (FRVT - Face Recognition Vendor Test)
- (Neeraj Kumar, P. N. Belhumeur, and S. K. Nayar)
- (University of Massachusetts Computer Vision Laboratory)
- (Face and Gesture Recognition Research Network)
- (USA National Institute of Standards and Technology)
- (Michael J. Lyons)
- Labeled Faces in the Wild (LFW) - unconstrained face recognition; also distributed with the images aligned by the "deep funneling" method. (University of Massachusetts, Amherst)
- (Timothy Cootes)
- (Ethan Meyers)
- (University of North Carolina Wilmington)
- (Center for Biological and Computational Learning)
- (USA National Institute of Standards and Technology)
- (ATT Cambridge Labs)
- (Oxford Visual Geometry Group)
- (Neeraj Kumar, Alexander C. Berg, Peter N. Belhumeur, and Shree K. Nayar)
- SCface - Surveillance Cameras Face Database (Mislav Grgic, Kresimir Delac, Sonja Grgic, Bozidar Klimpak)
- (Igor Barros Barbosa)
- University of Buffalo kinship verification and recognition database
- (Surrey University)
- (A. Georghiades)
Fingerprints
- (University of Bologna)
- a subset of FVC (Fingerprint Verification Competition) 2002 and 2004 fingerprint image databases, manually extracted minutiae data & associated documents (Umut Uludag)
- (USA National Institute of Standards and Technology)
- (SPD 2010 committee)
General Images
- (Swiss Federal Institute of Technology)
- (Nathan Jacobs)
- (Ben Kimia)
- (F. Yasuma, T. Mitsunaga, D. Iso, and S.K. Nayar)
- (Bob Fisher et al)
- (David H. Foster)
- ImageNet (Li Fei-Fei, Jia Deng, Hao Su, Kai Li)
- ImageNet Large Scale Visual Recognition Challenge (Alex Berg, Jia Deng, Fei-Fei Li)
- (Ohio State Team)
- (Adriana Olmos and Fred Kingdom)
- 80 Million Tiny Images - 79 million 32x32 color images (Fergus, Torralba, Freeman)
Gesture Databases
- (Face and Gesture Recognition Research Network)
- (Euripides G.M. Petrakis)
- (Sebastien Marcel)
- Sheffield Kinect Gesture (SKIG) dataset - 2160 RGBD hand gesture sequences, 6 subjects, 10 gestures, 3 postures, 3 backgrounds, 2 illuminations (Ling Shao)
Image, Video and Shape Database Retrieval
- (Ben Kimia)
- (Michael Grubinger)
- (Hugo Jair Escalante)
- (Stefanie Nowak)
- multi-label classification challenge in Flickr photos
- (Siddiqi, Zhang, Macrini, Shokoufandeh, Bouix, Dickinson)
- (USA National Institute of Standards and Technology)
- (Princeton Shape Retrieval and Analysis Group)
- millions of images and text documents for "cross-media" retrieval (Yi Yang)
- (Bronstein, Bronstein, Kimmel)
Object Databases
- (Ajmal Mian)
- (University of Amsterdam/Intelligent Sensory Information Systems)
- Caltech 101 (Li Fei-Fei, Marco Andreetto, Marc'Aurelio Ranzato)
- (Columbia University)
- (Gabriele Peters, Universität Dortmund)
- (Ruhr-Universität Bochum)
- (A. Pinz)
- (Fredrik Viksten and Per-Erik Forssen)
- (Antonio Criminisi, Pushmeet Kohli, Tom Minka, Carsten Rother, Toby Sharp, Jamie Shotton, John Winn)
- (Liu, Sun, Zheng, Tang, Shum)
- (Center for Biological and Computational Learning)
- (Stan Bileschi)
- (Hossein Mobahi)
- PASCAL VOC challenge datasets, several yearly editions (PASCAL Consortium)
- Category classification, detection, and segmentation, and still-image action classification (PASCAL Consortium)
- (S. Savarese and L. Fei-Fei)
- (Emanuele Rodola)
People, Pedestrian, Eye/Iris, Template Detection/Tracking Databases
- (L. Igual, A. Lapedriza, R. Borràs from UB, CVC and UOC, Spain)
- Caltech Pedestrian Detection Benchmark (P. Dollar, C. Wojek, B. Schiele and P. Perona)
- (Chinese Academy of Sciences)
- (Chinese Academy of Sciences, T. N. Tan, Z. Sun)
- (CAVIAR team/Edinburgh University - EC project IST-)
- Daimler Pedestrian Detection Benchmark - 21790 images with 56492 pedestrians plus empty scenes (M. Enzweiler, D. M. Gavrila)
- (RobeSafe + Jesus Nuevo-Chiquero)
- (Bob Fisher, Bashia Majecka, Gurkirt Singh, Rowland Sillito)
- (Stefan Winkler)
- database of 27 human attributes (Gaurav Sharma, Frederic Jurie)
- INRIA Person Dataset (Navneet Dalal)
- (Sebastian Lieberknecht)
- (Center for Biological and Computational Learning)
- (Judd et al)
- (Patrick J. Flynn)
- (Reading University & James Ferryman)
- (University of Beira)
- (Saad Ali)
- (Neil Bruce)
Segmentation
- (Sharon Alpert, Meirav Galun, Ronen Basri, Achi Brandt)
- Berkeley Segmentation Dataset (David Martin and Charless Fowlkes)
- GrabCut image segmentation database (C. Rother, V. Kolmogorov, A. Blake, M. Brown)
- LabelMe image database and online annotation tool (Bryan Russell, Antonio Torralba, Kevin Murphy, William Freeman)
Surveillance
- (Andrea Cavallaro)
- (INRIA Orion Team and others)
- (Zsolt Husz)
- (Queen Mary University London)
- synthetic trajectory datasets with outliers (Univ of Udine Artificial Vision and Real Time Systems Laboratory)
Textures
- (textures.forrest.cz)
- (Columbia & Utrecht Universities)
- (Renaud Piteri, Mark Huiskes and Sandor Fazekas)
- (Oulu University)
- (Mikes, Haindl)
- fabrics, grains, etc.
- (MIT Media Lab)
General Videos
- 156,823 videos (2,907,447 keyframes) crawled from YouTube videos (Yi Yang)
Other Collections
- (Multitel)
- (Carnegie Mellon Univ)
- (ETH Zurich, Computer Vision Lab)
- (Bastian Leibe)
- (Sealeen Ren, Benjamin Yao, Michael Yang)
- (Oxford Visual Geometry Group)
- (Pilot European Image Processing Archive)
- (Univ of Bern, Computer Vision and Artificial Intelligence)
- (Keith Price)
- (USC Signal and Image Processing Institute)
Miscellaneous
- (Guillaume Lavoue)
- (Mikkel B. Stegmann)
- (Ajmal Mian)
- CamVid: Cambridge-driving Labeled Video Database (Brostow, Shotton, Fauqueur, Cipolla)
- (Yalin Bastanlar)
- (Teo de Campos)
- (Ullah, Pronobis, Caputo, Luo, and Jensfelt)
- (M.D. Grossberg and S.K. Nayar)
- (Jinwei Gu, Ravi Ramamoorthi, Peter Belhumeur, Shree Nayar)
- (Christoph Strecha)
- (Henrik Aanaes)
- EISATS: .enpeda.. Image Sequence Analysis Test Site (Auckland University Multimedia Imaging Group)
- FlickrLogos-32 - 8240 images of 32 product logos (Stefan Romberg)
- (Allan Hanbury)
- (Derek Hoiem)
- (Stefan Winkler)
- (Krystian Mikolajczyk)
- (INRIA Rhone-Alpes)
- KITTI Vision Benchmark Suite (Geiger, Lenz, Urtasun)
- (Andreas Nuechter)
- (Per-Erik Forssen and Erik Ringaby)
- Middlebury stereo datasets (Daniel Scharstein and Richard Szeliski)
- (Michael Black)
- (ESAT-PSI/VISICS, FGAN-FOM, EPFL/IC/ISIM/CVLab)
- (National Cancer Institute)
- prostate images (National Cancer Institute)
- (Helin Dutagaci, Afzal Godil)
- (USDA Natural Resources Conservation Service)
- (Andrew Stein)
- (Gary Marchionini, Barbara M. Wildemuth, Gary Geisler, Yaxiao Song)
- (Gamhewage Chaminda de Silva)
- Artistic images of prints of well-known paintings, including detailed annotations. A benchmark for automatic annotation and retrieval tasks with this database was published at ECCV. (Nuno Miguel Pinho da Silva)
- (Rawseeds Project)
- 3D point clouds from robotic experiments of scenes (Osnabruck and Jacobs Universities)
- (Jean-Philippe Tarel, et al)
- 66 views of 45 objects
- (Oisin Mac Aodha)
- (Manuela Chessa)
- (Francesco Vivarelli)
- a collection of ground-truth files based on the extraction of violent events in movies
- (S. Narasimhan, C. Wang, S. Nayar, D. Stolyarov, K. Garg, Y. Schechner, H. Peri)