<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://iapr-tc11.org/mediawiki/index.php?action=history&amp;feed=atom&amp;title=OCR_Evaluation_for_LRDE_DBD</id>
	<title>OCR Evaluation for LRDE DBD - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://iapr-tc11.org/mediawiki/index.php?action=history&amp;feed=atom&amp;title=OCR_Evaluation_for_LRDE_DBD"/>
	<link rel="alternate" type="text/html" href="http://iapr-tc11.org/mediawiki/index.php?title=OCR_Evaluation_for_LRDE_DBD&amp;action=history"/>
	<updated>2026-04-21T17:20:01Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.31.16</generator>
	<entry>
		<id>http://iapr-tc11.org/mediawiki/index.php?title=OCR_Evaluation_for_LRDE_DBD&amp;diff=1911&amp;oldid=prev</id>
		<title>Liwicki: Created page with &quot;Datasets -&gt; Datasets List -&gt; Current Page  {| style=&quot;width: 100%&quot; |- | align=&quot;right&quot; |   {|  |- | '''Created: '''2013-05-30 |- | {{Last updated}} |}  |}  =Keywords= scann…&quot;</title>
		<link rel="alternate" type="text/html" href="http://iapr-tc11.org/mediawiki/index.php?title=OCR_Evaluation_for_LRDE_DBD&amp;diff=1911&amp;oldid=prev"/>
		<updated>2013-07-03T16:24:54Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;&lt;a href=&quot;/mediawiki/index.php/Datasets&quot; title=&quot;Datasets&quot;&gt;Datasets&lt;/a&gt; -&amp;gt; &lt;a href=&quot;/mediawiki/index.php/Datasets_List&quot; title=&quot;Datasets List&quot;&gt;Datasets List&lt;/a&gt; -&amp;gt; Current Page  {| style=&amp;quot;width: 100%&amp;quot; |- | align=&amp;quot;right&amp;quot; |   {|  |- | &amp;#039;&amp;#039;&amp;#039;Created: &amp;#039;&amp;#039;&amp;#039;2013-05-30 |- | {{Last updated}} |}  |}  =Keywords= scann…&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;[[Datasets]] -&amp;gt; [[Datasets List]] -&amp;gt; Current Page&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;right&amp;quot; | &lt;br /&gt;
&lt;br /&gt;
{| &lt;br /&gt;
|-&lt;br /&gt;
| '''Created: '''2013-05-30&lt;br /&gt;
|-&lt;br /&gt;
| {{Last updated}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Keywords=&lt;br /&gt;
scanned, magazine, documents, OCR&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Description=&lt;br /&gt;
&lt;br /&gt;
OCR evaluation: text lines are extracted from the binarization outputs and OCR (Tesseract) is run on them so that the results can be compared to the OCR ground truth. The evaluation is performed on binarizations of “clean”, “scanned” and “original” documents.&lt;br /&gt;
&lt;br /&gt;
Purpose of the three document qualities:&lt;br /&gt;
&lt;br /&gt;
* Original: evaluate the binarization quality on perfect documents mixing text and images.&lt;br /&gt;
* Clean: evaluate the binarization quality on perfect documents containing text only.&lt;br /&gt;
* Scanned: evaluate the binarization quality on slightly degraded documents containing text only.&lt;br /&gt;
&lt;br /&gt;
Lines for OCR evaluation are also grouped by size: small, medium and large (0 &amp;lt; small &amp;lt;= 30 &amp;lt; medium &amp;lt;= 55 &amp;lt; large &amp;lt; +inf). This grouping shows how robust a binarization algorithm is to objects of different sizes within a single document.&lt;br /&gt;
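As a minimal sketch, the grouping rule above can be expressed as a small Python helper. The function name is ours, not part of the dataset tools, and we assume the size of a line is its height (the unit is not specified here); only the thresholds come from the text above.&lt;br /&gt;

```python
def size_group(size):
    """Classify an extracted text line by the dataset's size thresholds:
    small: (0, 30], medium: (30, 55], large: (55, +inf).
    Hypothetical helper; the dataset tools may name or implement this differently."""
    if size > 55:
        return "large"
    if size > 30:
        return "medium"
    if size > 0:
        return "small"
    raise ValueError("line size must be positive")
```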
&lt;br /&gt;
=Evaluation Protocol=&lt;br /&gt;
&lt;br /&gt;
Tools are provided to read and process all the data.&lt;br /&gt;
 &lt;br /&gt;
A setup script is provided to download and configure the benchmarking environment.&lt;br /&gt;
&lt;br /&gt;
A Python script is provided to launch the benchmark and compute scores.&lt;br /&gt;
&lt;br /&gt;
C++ programs (and sources) are provided for performing evaluations and reading ground-truth data.&lt;br /&gt;
&lt;br /&gt;
Six binarization algorithms (with their respective C++ sources) are provided and compiled so that this benchmark can be run on their results.&lt;br /&gt;
&lt;br /&gt;
Using the setup script is the recommended way to run this benchmark. Note that the script also includes features to update the dataset when a new version is released.&lt;br /&gt;
&lt;br /&gt;
Minimum requirements: 5 GB of free disk space; Linux (Ubuntu, Debian, …).&lt;br /&gt;
&lt;br /&gt;
Dependencies: Python 2.7, tesseract-ocr, tesseract-ocr-fra, git, libgraphicsmagick++1-dev, graphicsmagick-imagemagick-compat, graphicsmagick-libmagick-dev-compat, build-essential, libtool, automake, autoconf, g++-4.6, libqt4-dev (installed automatically by the setup script on Ubuntu and Debian).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Related Dataset=&lt;br /&gt;
* [[LRDE Document Binarization Dataset (LRDE DBD)]]&lt;br /&gt;
&lt;br /&gt;
=Related Ground Truth Data=&lt;br /&gt;
* [[Ground Truth for LRDE DBD OCR]]&lt;br /&gt;
&lt;br /&gt;
=Submitted Files=&lt;br /&gt;
==Version 1.0==&lt;br /&gt;
* [http://www.iapr-tc11.org/dataset/LRDE/lrde-dbd-tools-1.0.zip Tools for processing] (0.08 MB)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
This page is editable only by [[IAPR-TC11:Reading_Systems#TC11_Officers|TC11 Officers ]].&lt;/div&gt;</summary>
		<author><name>Liwicki</name></author>
		
	</entry>
</feed>