The datasets are maintained at http://datasets.iapr-tc11.org. The old dataset repository will remain accessible here.
Overview – Message from TC-11
It is extremely important for the Document Image Analysis and Recognition community to be able to cross-check and reproduce results described in published papers in the field. In order to achieve this, any datasets used as the basis for publications should be publicly available, as is the norm in many other disciplines.
Authors are actively encouraged to submit the datasets they used to train and/or evaluate their algorithms to TC-11, so that they can be published on the TC-11 Web site.
This initiative is not restricted to datasets. At TC-11 we are interested in archiving online any piece of data (ground-truth data, software, etc.) that makes it easy to reproduce results, set new targets, foster healthy competition, encourage collaboration, and generally advance the DIAR field as a whole.
A wealth of datasets and corresponding ground truth data are already available through the TC-11 Web portal.
If you wish to contribute, please read below about the procedure for submitting material to the TC-11 web portal. The dataset curators will be notified as soon as the dataset is uploaded. For any comments or suggestions, please contact Joseph Chazalon, the dataset curator, at joseph(dot)chazalon+tc11(at)lrde.epita.fr
In order to submit a dataset, please create an account on the TC11 datasets portal (http://datasets.iapr-tc11.org) and follow the online submission instructions. For any problems, please contact the dataset curators.
TC-11 provides dataset hosting services as a benefit to the international research community. If it is determined that copyrighted material is improperly included in a dataset submitted for inclusion on the TC-11 website, we will immediately remove the offending material upon notification from the copyright holder.
By submitting a dataset for inclusion on the TC-11 Web site, the author certifies that he/she has the right to publish the dataset and any associated data in the public domain, and that the act of doing so does not violate the intellectual property rights or copyright of any third party.
TC-11 will provide a service through which the submitted dataset and any associated data will be made public to the Document Analysis community worldwide. In case any legal dispute arises in the future in relation to the publishing of this dataset and associated data in the public domain, the author will hold TC-11 harmless and accept responsibility for the publication of these data.
By submitting a dataset and associated data to TC-11, you explicitly accept that any third party can independently submit additional information that relates to the original dataset (e.g. additional ground-truth data, software, etc.).
We strongly encourage authors who own the copyright of the submitted information to consider offering it to the community under a Creative Commons license. See the Creative Commons guidelines on how to choose an appropriate license.
Dataset: A collection of data along with metadata information, as required to use these data.
Metadata: Information specific to a particular dataset. Metadata are usually tightly bound to the dataset itself (e.g. information encoded within the filenames of submitted images). Metadata can only be submitted at the time the dataset is submitted.
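As a purely illustrative sketch of filename-encoded metadata (the naming scheme below is hypothetical, not an actual TC-11 convention), such information might be recovered programmatically like this:

```python
import re

# Hypothetical filename scheme: <writer>_p<page>_<dpi>.png
# e.g. "w003_p12_300.png" -> writer w003, page 12, scanned at 300 dpi.
FILENAME_PATTERN = re.compile(r"^(?P<writer>w\d+)_p(?P<page>\d+)_(?P<dpi>\d+)\.png$")

def parse_metadata(filename: str) -> dict:
    """Extract the metadata fields encoded in a filename following the scheme above."""
    match = FILENAME_PATTERN.match(filename)
    if match is None:
        raise ValueError(f"filename does not follow the expected scheme: {filename!r}")
    return {
        "writer": match.group("writer"),
        "page": int(match.group("page")),
        "dpi": int(match.group("dpi")),
    }

print(parse_metadata("w003_p12_300.png"))
```

Because such conventions differ from dataset to dataset, documenting them explicitly in the submitted metadata is what makes the dataset usable by others.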
Ground Truth Specification: The definition of the required information that accurately describes a particular aspect of the data at a high level where agreement between different human observers can be established, as well as the definition of an appropriate structure (format) for storing this information.
Ground Truth Data: A set of data conforming to a particular ground truth specification and relating to a specific dataset. Ground Truth Data can be submitted at any time, and different Ground Truth Data (corresponding to different aspects of the data) can be associated with the same dataset.
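For illustration only (the record layout below is a made-up example, not an actual TC-11 specification), a ground truth specification can be thought of as a schema that every ground truth record must satisfy:

```python
# Hypothetical specification for word-level transcription ground truth:
# each record must name an image in the dataset, give a bounding box,
# and provide a transcription string.
REQUIRED_FIELDS = {"image": str, "bbox": list, "transcription": str}

def conforms(record: dict) -> bool:
    """Check that a record carries every required field with the expected type."""
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(record.get(field), expected_type):
            return False
    # In this made-up scheme a bounding box is four integers: x, y, width, height.
    bbox = record["bbox"]
    return len(bbox) == 4 and all(isinstance(v, int) for v in bbox)

record = {"image": "page_001.png", "bbox": [10, 20, 120, 30], "transcription": "hello"}
print(conforms(record))
```

Separating the specification (the schema) from the data (the records) is what lets several independent Ground Truth Data sets, each describing a different aspect of the images, coexist for one dataset.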
Task: A well-defined process to evaluate algorithms in the context of a specific scientific problem. A task would typically provide a specific evaluation protocol and link to specific resources as required (a dataset, and usually related ground truth data). Tasks should correspond to open challenges in the field. If you undertake any of the tasks defined and you have published results or code available, we would really like to know!
Resources: Any other type of related resources that are not specifically covered by the above definitions. Examples would include software to browse and visualise a dataset, software to create ground truth data, algorithms to do performance evaluation, codecs, reports, publications, etc.