Datasets List
NOTICE: TC11 datasets will soon be moved to the new Web portal at http://tc11.cvc.uab.es. This page will remain available but will not be updated from January 2015 onwards.

[[Datasets]] -> [[Datasets List]]

See the datasets sorted according to the Journal / Conference they first appeared in.

 
= Complex Text Containers =

== Scene Text ==

* [[MSRA Text Detection 500 Database (MSRA-TD500)]]

* [[The Street View Text Dataset]]
 
* [[The Street View House Numbers (SVHN) Dataset]]

* [[NEOCR: Natural Environment OCR Dataset]]

= Machine-printed Documents =
 
* [[Table Ground Truth for the UW3 and UNLV datasets]]

* [[The DocLab Dataset for Evaluating Table Interpretation Methods]]

* [http://www.digitisation.eu/data/ The IMPACT dataset] The dataset contains more than half a million representative text-based images compiled by a number of major European libraries. Covering texts from as early as 1500, and containing material from newspapers, books, pamphlets and typewritten notes, it is an invaluable resource for research into imaging technology, OCR and language enrichment.

* [http://dataset.primaresearch.org/ PRImA Layout Analysis Dataset]

* [http://www.dfki.uni-kl.de/~shafait/downloads.html DFKI Dewarping Contest Dataset (CBDAR 2007)] The dataset, used in the CBDAR 2007 Dewarping Contest, contains 102 camera-captured documents with their corresponding ASCII text ground truth. Text-line-level ground truth was also prepared to benchmark curled text-line segmentation algorithms. Part of the dataset (76 of the 102 pages) was also scanned with a flat-bed scanner to provide ground-truth images for image-based evaluation of page dewarping algorithms.

* [http://diuf.unifr.ch/diva/APTI/ APTI: Arabic Printed Text Image Database]

* [[LRDE Document Binarization Dataset (LRDE DBD)]] This dataset is composed of 375 full-document images (A4 format, 300-dpi resolution) extracted from a single French magazine: Le Nouvel Observateur, issue 2402, November 18th-24th, 2010.

* [http://ciir.cs.umass.edu/downloads/ocr-evaluation/ RETAS OCR Evaluation Dataset] The RETAS dataset (used in the paper by Yalniz and Manmatha, ICDAR'11) was created to evaluate the optical character recognition (OCR) accuracy of real scanned books. It contains real OCR output for 160 scanned books (100 English, 20 French, 20 German, 20 Spanish) downloaded from the Internet Archive website; the corresponding ground-truth text for each book is obtained from the Project Gutenberg database. The OCR output of each scanned book is aligned with its ground truth at the word and character level, and the alignment output is provided along with estimated OCR accuracies. The dataset is provided for research purposes.
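Character-level OCR accuracy of the kind RETAS estimates is commonly derived from an edit-distance alignment between the OCR output and the ground truth. A minimal sketch (illustrative only, not the RETAS implementation; function names are ours):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b (insertions, deletions, substitutions; cost 1 each)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution or match
        prev = cur
    return prev[-1]

def char_accuracy(ground_truth: str, ocr_output: str) -> float:
    """Character accuracy = 1 - (edit distance / ground-truth length)."""
    if not ground_truth:
        return 1.0 if not ocr_output else 0.0
    return 1.0 - levenshtein(ground_truth, ocr_output) / len(ground_truth)

# Two substituted characters out of thirteen lower the accuracy accordingly.
acc = char_accuracy("scanned books", "scanned b00ks")
```

Word-level accuracy follows the same pattern, with the edit distance computed over token sequences instead of characters.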
  
 
= Graphical Documents =

* [[Chem-Infty Dataset: A ground-truthed dataset of Chemical Structure Images]]

* [[Braille Dataset - Shiraz University]]

* [http://www.eurecom.fr/~huet/work.html TradeMarks Image Database] - 999 trademark and logo images, provided by Benoit Huet.
  
 
= Mixed Content Documents =

* [http://dataset.iapr-tc11.org/datasets/Tobacco800_1 Tobacco800 Document Image Database] - composed of 1,290 document images collected and scanned using a wide variety of equipment over time.
  
 
= Handwritten Documents =

== On-line and Off-line ==
  
 
* [[ICDAR 2009 Signature Verification Competition (SigComp2009)]]

* [[ICFHR 2010 Signature Verification Competition (4NSigComp2010)]]

* [[ICDAR 2011 Signature Verification Competition (SigComp2011)]]

* [[ICFHR 2012 Signature Verification Competition (4NSigComp2012)]]
  
 
* [http://www.nlpr.ia.ac.cn/databases/handwriting/Home.html CASIA Online and Offline Chinese Handwriting Databases] - These Chinese handwriting datasets were produced by 1,020 writers using an Anoto pen on paper, so that both online and offline data were obtained. The online and offline datasets each consist of three subsets of isolated characters (DB1.0-1.2, about 3.9 million samples of 7,356 classes) and three subsets of handwritten texts (DB2.0-2.2, about 5,090 pages and 1.35 million characters). The datasets are free for academic research on handwritten document segmentation and retrieval, character and text-line recognition, and writer adaptation and identification.

* [[Persian Heritage Image Binarization Dataset (PHIBD 2012)]] This dataset contains 15 historical manuscript images collected from the Documents and Old Manuscripts Treasury of Mirza Mohammad Kazemaini (affiliated with Hazrate Emamzadeh Jafar), Yazd, Iran. The images suffer from various types of degradation, including bleed-through, faded ink, and blur. The dataset is the first in a series intended to provide document images and their ground truth as a contribution to the document image analysis and recognition community; more data and ground-truth information are planned for the future.
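Binarization datasets such as PHIBD and LRDE DBD are used to benchmark thresholding algorithms. As a point of reference for what such algorithms do, here is a global Otsu threshold in pure Python (a standard textbook baseline, not a method associated with either dataset):

```python
def otsu_threshold(pixels):
    """Return the grayscale threshold (0-255) maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(256):
        w_bg += hist[t]            # weight of the "background" class (pixels <= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # weight of the "foreground" class
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mu_bg - mu_fg) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(pixels, t):
    """Map each pixel to 0 (ink) or 255 (paper) using threshold t."""
    return [0 if p <= t else 255 for p in pixels]
```

Degradations like bleed-through and faded ink are precisely what break such global thresholds, which is why adaptive and learning-based methods are evaluated on datasets like PHIBD.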
  
 
== On-line ==

* [[CROHME: Competition on Recognition of Online Handwritten Mathematical Expressions]]

* [[Devanagari Character Dataset]]
  
 
== Off-line ==

* [http://www.rimes-database.fr/wiki/doku.php The Rimes Database] comprises 12,723 handwritten pages corresponding to 5,605 letters of two to three pages each. It was collected by asking volunteers to write a letter given one of nine predefined scenarios related to business/customer relations. The dataset has been used in numerous ICDAR and ICFHR competitions and is available for research purposes only, through the authors' Web site.
  
 
* [[IBN SINA: A database for research on processing and understanding of Arabic manuscripts images]]

* [http://www.cedar.buffalo.edu/Databases/CDROM1/ CEDAR Off-line Handwriting CDROM1]

* [[CVL-Database]] - An Off-line Database for Writer Retrieval, Writer Identification and Word Spotting.

* [http://www.iam.unibe.ch/fki/databases/iam-handwriting-database IAM Database] - A full English sentence database for off-line handwriting recognition.

* [[The GERMANA Dataset]] - GERMANA is the result of digitising and annotating a 764-page Spanish manuscript entitled "Noticias y documentos relativos a Doña Germana de Foix, última Reina de Aragón", written in 1891 by Vicent Salvador. It contains approximately 21K text lines manually marked and transcribed by palaeography experts.

* [[The RODRIGO Dataset]] - RODRIGO is the result of digitising and annotating a manuscript dated 1545. Digitisation was done at 300 dpi in colour by the Spanish Culture Ministry. The original manuscript is an 853-page bound volume, entitled "Historia de España del arçobispo Don Rodrigo", written entirely in old Castilian (Spanish) by a single author. Annotation exists for text blocks, lines and transcriptions, resulting in approximately 20K lines and 231K running words from a lexicon of 17K words.

* [[MARG - Medical Article Records Groundtruth]] - A freely available repository of document page images and their associated textual and layout data. The data has been reviewed and corrected to establish its "ground truth". Please contact Dr. George Thoma (thoma@lhc.nlm.nih.gov) at the National Library of Medicine for more information.
  
 
= Software and Tools =

* [http://labs.europeana.eu/api/ Europeana API] The Europeana network represents more than 2,500 cultural heritage organisations and is the principal point of reference for digitised European culture. Europeana offers open access to over 32 million records, a large percentage of which are document images originating from various memory institutions, including national libraries and archives; in many cases it links to high-resolution scans of such documents. The Europeana APIs allow you to search and retrieve the contents of the database for use in your own applications. Two APIs are offered: a REST API suited for dynamic search and retrieval, and a more experimental API that supports download of complete datasets and advanced semantic search and retrieval via the SPARQL query language.

* [http://www.digitisation.eu/tools/ Tools from the IMPACT Centre of Competence] The tools offered by the IMPACT Centre of Competence are software components developed by the various technical IMPACT partners during the IMPACT project (2008-2012). Generally, a "tool" is a piece of software that operates on image or text data, modifying the data or extracting information from it. Every IMPACT tool has a specific functionality related to OCR or to the pre- and post-processing stages. They include new approaches in areas such as image enhancement, segmentation, and document structuring, alongside existing and experimental OCR engines.
 
* [http://lampsrv02.umiacs.umd.edu/projdb/project.php?id=53 GEDI: Groundtruthing Environment for Document Images] - A generic annotation tool for scanned text documents.

* [http://www2.parc.com/isl/groups/pda/pixlabeler/index.html PixLabeler] - A research tool for labeling elements in a document image at the pixel level.

* [http://code.google.com/p/ocropus/ OCRopus(tm)] - The OCRopus(tm) open-source document analysis and OCR system.

* [http://htk.eng.cam.ac.uk/ The Hidden Markov Model Toolkit (HTK)] - A portable toolkit for building and manipulating hidden Markov models.

* [https://github.com/meierue/RNNLIB Bidirectional Long Short-Term Memory Networks] - An implementation of bidirectional long short-term memory (BLSTM) networks combined with Connectionist Temporal Classification (CTC), including examples for Arabic recognition.

* [http://www.speech.sri.com/projects/srilm/ SRILM - The SRI Language Modeling Toolkit] - A toolkit for building and applying statistical language models (LMs), primarily for use in speech recognition, statistical tagging and segmentation, and machine translation.

* [http://torch5.sourceforge.net/ Torch 5] - A Matlab-like environment for state-of-the-art machine learning algorithms.

* [http://www.prtools.org/ PRTools] - A Matlab-based toolbox for pattern recognition.
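As a sketch of using the Europeana REST API listed above: the snippet below only composes a search-request URL. The endpoint and the `wskey`/`query`/`rows` parameter names are taken from Europeana's public Search API documentation and should be verified there; the API key is a placeholder you obtain by registering.

```python
from urllib.parse import urlencode

# Base URL of the Europeana Search API (assumed from the public documentation).
SEARCH_URL = "https://api.europeana.eu/record/v2/search.json"

def build_search_url(api_key: str, query: str, rows: int = 12) -> str:
    """Compose a search request URL; no network access happens here."""
    params = {"wskey": api_key, "query": query, "rows": rows}
    return SEARCH_URL + "?" + urlencode(params)

url = build_search_url("YOUR_API_KEY", "manuscript", rows=5)
# A live call would then fetch the URL and read JSON, e.g. with
# urllib.request.urlopen(url), iterating over the "items" field of the response.
```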
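To illustrate what a statistical language model of the kind SRILM builds actually does, here is a toy add-alpha-smoothed bigram model in plain Python (a conceptual sketch only, not SRILM itself; SRILM provides far more, including higher-order n-grams and proper discounting):

```python
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over a list of tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]      # sentence boundary markers
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word, alpha=1.0):
    """P(word | prev) with add-alpha smoothing over the observed vocabulary."""
    vocab = len(unigrams)
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)

corpus = [["the", "cat", "sat"], ["the", "cat", "ran"]]
uni, bi = train_bigram(corpus)
# "cat" is the only word ever observed after "the", so it outranks unseen words.
p_cat = bigram_prob(uni, bi, "the", "cat")
p_dog = bigram_prob(uni, bi, "the", "dog")
```

In handwriting recognition, such a model rescores candidate transcriptions so that linguistically plausible word sequences are preferred.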
  
  

Latest revision as of 07:21, 20 September 2019. This page is editable only by TC11 Officers.