<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://iapr-tc11.org/mediawiki/index.php?action=history&amp;feed=atom&amp;title=IAPR_TC11_Newsletter_2023_02</id>
	<title>IAPR TC11 Newsletter 2023 02 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://iapr-tc11.org/mediawiki/index.php?action=history&amp;feed=atom&amp;title=IAPR_TC11_Newsletter_2023_02"/>
	<link rel="alternate" type="text/html" href="http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;action=history"/>
	<updated>2026-05-12T11:41:17Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.31.16</generator>
	<entry>
		<id>http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3211&amp;oldid=prev</id>
		<title>Nibalnayef at 18:48, 1 March 2023</title>
		<link rel="alternate" type="text/html" href="http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3211&amp;oldid=prev"/>
		<updated>2023-03-01T18:48:54Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 18:48, 1 March 2023&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l61&quot; &gt;Line 61:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 61:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;center&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;center&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt; [[Image&lt;/del&gt;:Icdar2023.png&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;|300px]]&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;a href=&amp;quot;https&lt;/ins&gt;:&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;//icdar2023.org&amp;quot;&amp;gt;&amp;lt;img width=300 src=&amp;quot;http://www.iapr-tc11.org/mediawiki/images/&lt;/ins&gt;Icdar2023.png&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/center&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/center&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;ICDAR 2023-related calls:&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;ICDAR 2023-related calls:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Nibalnayef</name></author>
		
	</entry>
	<entry>
		<id>http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3210&amp;oldid=prev</id>
		<title>Nibalnayef at 18:46, 1 March 2023</title>
		<link rel="alternate" type="text/html" href="http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3210&amp;oldid=prev"/>
		<updated>2023-03-01T18:46:46Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 18:46, 1 March 2023&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l19&quot; &gt;Line 19:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 19:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* SSDA 2023 and SSDA 2024&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* SSDA 2023 and SSDA 2024&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;** Call for SSDA 2023 and SSDA 2024 Proposals&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;** Call for SSDA 2023 and SSDA 2024 Proposals&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt; &lt;/del&gt;* Job Offers&amp;#160; &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Job Offers&amp;#160; &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;** 2x Post-doctoral positions at the Computer Vision Center, Barcelona&amp;#160; &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;** 2x Post-doctoral positions at the Computer Vision Center, Barcelona&amp;#160; &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Datasets&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Datasets&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Nibalnayef</name></author>
		
	</entry>
	<entry>
		<id>http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3209&amp;oldid=prev</id>
		<title>Nibalnayef at 18:45, 1 March 2023</title>
		<link rel="alternate" type="text/html" href="http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3209&amp;oldid=prev"/>
		<updated>2023-03-01T18:45:17Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 18:45, 1 March 2023&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l61&quot; &gt;Line 61:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 61:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;center&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;center&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;a href=&amp;quot;https&lt;/del&gt;:&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;//icdar2023.org&amp;quot;&amp;gt;&amp;lt;img width=300 src=&amp;quot;http://www.iapr-tc11.org/mediawiki/images/&lt;/del&gt;Icdar2023.png&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt; [[Image&lt;/ins&gt;:Icdar2023.png&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|300px]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/center&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/center&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;ICDAR 2023-related calls:&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;ICDAR 2023-related calls:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l162&quot; &gt;Line 162:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 163:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;accordion parent=&amp;quot;accordion&amp;quot; heading=&amp;quot;Job Offers&amp;#160; &amp;quot;&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;accordion parent=&amp;quot;accordion&amp;quot; heading=&amp;quot;Job Offers&amp;#160; &amp;quot;&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Find below a post 2 open postdoc positions at the Computer Vision Center (CVC), Barcelona. The positions are focused on computer vision and federated learning and differential privacy.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Find below a post &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;about &lt;/ins&gt;2 open postdoc positions at the Computer Vision Center (CVC), Barcelona. The positions are focused on computer vision and federated learning and differential privacy.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== 2x Post-doctoral positions at the Computer Vision Center, Barcelona ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== 2x Post-doctoral positions at the Computer Vision Center, Barcelona ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Nibalnayef</name></author>
		
	</entry>
	<entry>
		<id>http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3208&amp;oldid=prev</id>
		<title>Nibalnayef at 18:38, 1 March 2023</title>
		<link rel="alternate" type="text/html" href="http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3208&amp;oldid=prev"/>
		<updated>2023-03-01T18:38:24Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;a href=&quot;http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;amp;diff=3208&amp;amp;oldid=3207&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>Nibalnayef</name></author>
		
	</entry>
	<entry>
		<id>http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3207&amp;oldid=prev</id>
		<title>Nibalnayef: Created page with &quot;  # IAPR TC-11 (Reading Systems) Newsletter   ## February, 2023  &lt;center&gt; &lt;a href=&quot;https://iapr.org&quot;&gt;&lt;img width=150 src=&quot;http://www.iapr-tc11.org/mediawiki/images/IAPR_logo.gi...&quot;</title>
		<link rel="alternate" type="text/html" href="http://iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02&amp;diff=3207&amp;oldid=prev"/>
		<updated>2023-03-01T18:37:45Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;  # IAPR TC-11 (Reading Systems) Newsletter   ## February, 2023  &amp;lt;center&amp;gt; &amp;lt;a href=&amp;quot;https://iapr.org&amp;quot;&amp;gt;&amp;lt;img width=150 src=&amp;quot;http://www.iapr-tc11.org/mediawiki/images/IAPR_logo.gi...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
# IAPR TC-11 (Reading Systems) Newsletter &lt;br /&gt;
&lt;br /&gt;
## February, 2023&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;a href=&amp;quot;https://iapr.org&amp;quot;&amp;gt;&amp;lt;img width=150 src=&amp;quot;http://www.iapr-tc11.org/mediawiki/images/IAPR_logo.gif&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&lt;br /&gt;
&amp;lt;a href=&amp;quot;http://www.iapr-tc11.org/mediawiki/index.php?title=IAPR-TC11:Reading_Systems&amp;quot;&amp;gt;&amp;lt;img width=200 src=&amp;quot;http://www.iapr-tc11.org/mediawiki/images/Tc-11_Logo_v2_72dpi.png&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
**Online, phone-friendly version:** [February 2023 Newsletter](http://www.iapr-tc11.org/mediawiki/index.php?title=IAPR_TC11_Newsletter_2023_02)   &lt;br /&gt;
**TC-11:** [TC-11 Homepage](http://www.iapr-tc11.org) &amp;amp;nbsp;&amp;amp;nbsp;  **Twitter:** [iapr_tc11](https://twitter.com/iapr_tc11)&lt;br /&gt;
&lt;br /&gt;
### TABLE OF CONTENTS&lt;br /&gt;
&lt;br /&gt;
- Message from the Editor  &lt;br /&gt;
- Dates and Deadlines&lt;br /&gt;
    - Deadlines&lt;br /&gt;
    - Upcoming Conferences and Events&lt;br /&gt;
- ICDAR 2023&lt;br /&gt;
    - Workshops of ICDAR 2023&lt;br /&gt;
    - ICDAR 2023 Competitions *(repost)*&lt;br /&gt;
    - DUDE Competition&lt;br /&gt;
- SSDA 2023 and SSDA 2024&lt;br /&gt;
    - Call for SSDA 2023 and SSDA 2024 Proposals&lt;br /&gt;
 - Job Offers  &lt;br /&gt;
    - 2x Post-doctoral positions at the Computer Vision Center, Barcelona  &lt;br /&gt;
- Datasets&lt;br /&gt;
    - TC11 Datasets Repository&lt;br /&gt;
      - Where to share datasets&lt;br /&gt;
&lt;br /&gt;
Message from the Editor&lt;br /&gt;
=======================&lt;br /&gt;
&lt;br /&gt;
Dear TC11 members,&lt;br /&gt;
&lt;br /&gt;
Now is the time to take part in ICDAR workshops and competitions! Take&lt;br /&gt;
a look at the many interesting workshops that will be held in&lt;br /&gt;
conjunction with ICDAR ([ICDAR 2023&lt;br /&gt;
Workshops](https://icdar2023.org/program/workshops/)). This issue lists&lt;br /&gt;
the different workshops along with their websites.  &lt;br /&gt;
As for the various ICDAR competitions, researchers from academia or&lt;br /&gt;
industry are encouraged to participate. Find those interesting&lt;br /&gt;
competitions at [ICDAR 2023&lt;br /&gt;
Competitions](https://icdar2023.org/program/competitions/).&lt;br /&gt;
&lt;br /&gt;
In this issue you will find a call for proposals for organizing the IAPR&lt;br /&gt;
TC10/TC11 Summer School on Document Analysis (SSDA) for 2023 or 2024.&lt;br /&gt;
You will also find new job offers.&lt;br /&gt;
&lt;br /&gt;
**Nibal Nayef, TC11 Communication Officer**  &lt;br /&gt;
( &amp;lt;n.nayef@gmail.com&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
**Join us!** If you are not already a member of the TC11 community,&lt;br /&gt;
please consider joining the [TC11 mailing&lt;br /&gt;
list](https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=iapr-tc11&amp;amp;A=1).&lt;br /&gt;
**Follow us on Twitter (iapr\_tc11):** &amp;lt;https://twitter.com/iapr_tc11&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dates and Deadlines&lt;br /&gt;
===================&lt;br /&gt;
&lt;br /&gt;
Deadlines&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
**2023**&lt;br /&gt;
&lt;br /&gt;
-   **March 26 -- April 2** ICDAR conference paper rebuttal period&lt;br /&gt;
    [ICDAR 2023 dates](https://icdar2023.org/important-dates/)&lt;br /&gt;
&lt;br /&gt;
-   **March 31** Proposals due for organizing SSDA 2023 - [Organization&lt;br /&gt;
    guidelines](http://www.iapr-tc11.org/mediawiki/index.php/Guidelines_for_Organising_and_Bidding_to_Host_the_TC10_/_TC11_Summer_School)&lt;br /&gt;
&lt;br /&gt;
Upcoming Conferences and Events&lt;br /&gt;
-------------------------------&lt;br /&gt;
&lt;br /&gt;
**2023 and Later**&lt;br /&gt;
&lt;br /&gt;
-   [ICDAR 2023](https://icdar2023.org). San José, USA (August&lt;br /&gt;
    21-26, 2023)&lt;br /&gt;
&lt;br /&gt;
-   ICDAR 2024. Athens, Greece (September, 2024)&lt;br /&gt;
&lt;br /&gt;
ICDAR 2023&lt;br /&gt;
==========&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;a href=&amp;quot;https://icdar2023.org&amp;quot;&amp;gt;&amp;lt;img width=300 src=&amp;quot;http://www.iapr-tc11.org/mediawiki/images/Icdar2023.png&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ICDAR 2023-related calls:&lt;br /&gt;
&lt;br /&gt;
Workshops of ICDAR 2023&lt;br /&gt;
-----------------------&lt;br /&gt;
&lt;br /&gt;
A number of interesting workshops will be held in conjunction with ICDAR&lt;br /&gt;
2023. The workshops are listed below along with links to their websites.&lt;br /&gt;
Some of the workshops have already announced the paper submission&lt;br /&gt;
deadlines.&lt;br /&gt;
&lt;br /&gt;
**[IWCDF](https://warwick.ac.uk/siplab/IWCDF2023/)**  &lt;br /&gt;
ICDAR 2023 International Workshop on Computational Document&lt;br /&gt;
Forensics (4th edition) (https://warwick.ac.uk/siplab/IWCDF2023/):  &lt;br /&gt;
The Fourth International Workshop on Computational Document Forensics&lt;br /&gt;
(IWCDF 2023) aims at presenting the most recent theoretical and&lt;br /&gt;
practical advances related to digital document forgery while fostering&lt;br /&gt;
discussions between academia and industry.&lt;br /&gt;
&lt;br /&gt;
**[IWCP](https://www.csmc.uni-hamburg.de/iwcp2023.html)**  &lt;br /&gt;
ICDAR 2023 Workshop on Computational Paleography (2nd edition)&lt;br /&gt;
(https://www.csmc.uni-hamburg.de/iwcp2023.html):  &lt;br /&gt;
The goal of this workshop is to bridge the gap between the different&lt;br /&gt;
research fields analyzing handwritten scripts in ancient artifacts. It&lt;br /&gt;
is primarily targeted at computer scientists, natural scientists, and&lt;br /&gt;
humanists involved in the study of ancient writing systems and their&lt;br /&gt;
materials, but it is not limited to these groups. By promoting&lt;br /&gt;
discussion among these three communities, the workshop aims to encourage&lt;br /&gt;
future interdisciplinary collaborations that will address current&lt;br /&gt;
research questions about ancient manuscripts.&lt;br /&gt;
&lt;br /&gt;
**[CBDAR](https://dll.seecs.nust.edu.pk/cbdar2023)**  &lt;br /&gt;
ICDAR 2023 Workshop on Camera-Based Document Analysis and Recognition&lt;br /&gt;
(https://dll.seecs.nust.edu.pk/cbdar2023):  &lt;br /&gt;
The ICDAR 2023 Workshop on Camera-Based Document Analysis and&lt;br /&gt;
Recognition (CBDAR 2023) will be the successor of the previous nine&lt;br /&gt;
CBDAR workshops. The CBDAR series has a special focus on the analysis of&lt;br /&gt;
camera-captured documents and text. CBDAR is a forum for presenting&lt;br /&gt;
up-to-date research, sharing experiences, and fomenting discussions on&lt;br /&gt;
future directions in camera-based document analysis.&lt;br /&gt;
&lt;br /&gt;
**[GREC](https://grec2023.univ-lr.fr/)**  &lt;br /&gt;
ICDAR 2023 International Workshop on Graphics Recognition (15th edition)&lt;br /&gt;
(https://grec2023.univ-lr.fr/)  &lt;br /&gt;
GREC 2023 will provide an excellent opportunity for researchers and&lt;br /&gt;
practitioners at all levels of experience to meet colleagues and to&lt;br /&gt;
share new ideas and knowledge about graphics recognition methods.&lt;br /&gt;
Graphics Recognition is a subfield of document image analysis that deals&lt;br /&gt;
with graphical entities in engineering drawings, comics, musical scores,&lt;br /&gt;
sketches, maps, architectural plans, mathematical notation, tables,&lt;br /&gt;
diagrams, etc.&lt;br /&gt;
&lt;br /&gt;
**[ADAPDA](https://sites.google.com/view/adapdaicdar23)**  &lt;br /&gt;
ICDAR 2023 Workshop on Automatically Domain-Adapted and Personalized&lt;br /&gt;
Document Analysis (ADAPDA)&lt;br /&gt;
(https://sites.google.com/view/adapdaicdar23)  &lt;br /&gt;
This workshop aims to gather expertise and novel ideas for&lt;br /&gt;
personalized Document Analysis tasks (training and adaptation strategies&lt;br /&gt;
for writer-, language-, and visual-specific models, new benchmarks, and&lt;br /&gt;
data collection strategies), both on-line and off-line, with attention&lt;br /&gt;
to privacy-preserving solutions.&lt;br /&gt;
&lt;br /&gt;
**[HIP](https://blog.sbb.berlin/hip2023/)**  &lt;br /&gt;
ICDAR 2023 International Workshop on Historical Document Imaging and&lt;br /&gt;
Processing (7th edition) (HIP'23) (https://blog.sbb.berlin/hip2023/)  &lt;br /&gt;
The 7th International Workshop on Historical Document Imaging and&lt;br /&gt;
Processing (HIP'23) will bring together researchers from various fields&lt;br /&gt;
working on document image acquisition, restoration, analysis, indexing,&lt;br /&gt;
and retrieval to make these documents accessible in digital libraries.&lt;br /&gt;
It is the seventh satellite workshop of ICDAR dedicated to this topic,&lt;br /&gt;
following HIP'11 in Beijing, HIP'13 in Washington, HIP'15 in Nancy,&lt;br /&gt;
HIP'17 in Kyoto, HIP'19 in Sydney, and HIP'21 in Lausanne (hybrid), all&lt;br /&gt;
of which were highly successful with strong participation. HIP aims to&lt;br /&gt;
provide researchers with a forum that is complementary and&lt;br /&gt;
synergetic to the main sessions at ICDAR on document analysis and&lt;br /&gt;
recognition.&lt;br /&gt;
&lt;br /&gt;
**[VINALDO](https://sites.google.com/view/vinaldo-workshop-icdar-2023/home)**  &lt;br /&gt;
ICDAR 2023 Workshop on Machine vision and NLP for Document Analysis (1st&lt;br /&gt;
edition) (VINALDO)&lt;br /&gt;
(https://sites.google.com/view/vinaldo-workshop-icdar-2023/home)  &lt;br /&gt;
The first edition of the machine VIsion and NAtural Language processing&lt;br /&gt;
for DOcument analysis (VINALDO) workshop comes as an extension of the&lt;br /&gt;
GLESDO workshop. We encourage descriptions of novel problems or&lt;br /&gt;
applications for document analysis in the area of information retrieval&lt;br /&gt;
that have emerged in recent years. We also encourage work that applies&lt;br /&gt;
NLP tools to extracted text, such as language models and Transformers.&lt;br /&gt;
Finally, we also encourage work that develops new scanned-document&lt;br /&gt;
datasets for novel applications.&lt;br /&gt;
&lt;br /&gt;
**[WML](https://www.isical.ac.in/~cvpr/ICDARWML23/)**  &lt;br /&gt;
ICDAR 2023 International Workshop on Machine Learning (4th edition)&lt;br /&gt;
(WML) (https://www.isical.ac.in/\~cvpr/ICDARWML23/)  &lt;br /&gt;
Since 2010, when the annual ImageNet competition was launched and&lt;br /&gt;
research teams began submitting programs that classify and detect&lt;br /&gt;
objects, machine learning has gained significant popularity. Today,&lt;br /&gt;
machine learning, and deep learning in particular, is remarkably&lt;br /&gt;
effective at making predictions from large amounts of available data.&lt;br /&gt;
Machine learning has many applications in computer vision and pattern&lt;br /&gt;
recognition, including document analysis and medical image analysis. To&lt;br /&gt;
facilitate innovative collaboration and engagement between the document&lt;br /&gt;
analysis community and related research communities such as computer&lt;br /&gt;
vision and image analysis, we are organizing this workshop on machine&lt;br /&gt;
learning.&lt;br /&gt;
&lt;br /&gt;
**[ScaleDoc](http://cvit.iiit.ac.in/scaldoc2023/)**  &lt;br /&gt;
ICDAR 2023 Workshop on Scaling-up document Image understanding&lt;br /&gt;
(http://cvit.iiit.ac.in/scaldoc2023/)  &lt;br /&gt;
Document Analysis has long suffered from both a fragmented landscape of&lt;br /&gt;
task-specific datasets and a strong focus on narrowly scoped information&lt;br /&gt;
extraction and document conversion tasks. The Scaling-up Document Image&lt;br /&gt;
Understanding workshop aims to open the discussion on possible ways for&lt;br /&gt;
the community to align data preparation efforts and define large-scale&lt;br /&gt;
(grand) challenges that drive progress in the field. This is meant to&lt;br /&gt;
be one of a series of such events to be organized in our scientific&lt;br /&gt;
forums in the near future. The aspiration is to sow the seed for an&lt;br /&gt;
initiative to create our own community's document-oriented &amp;quot;ImageNet&amp;quot;,&lt;br /&gt;
over which multiple long-term grand challenges can be defined.&lt;br /&gt;
&lt;br /&gt;
ICDAR 2023 Competitions *(repost)*&lt;br /&gt;
----------------------------------&lt;br /&gt;
&lt;br /&gt;
ICDAR 2023 competitions are now up and running. Find those interesting&lt;br /&gt;
competitions at [ICDAR 2023&lt;br /&gt;
Competitions](https://icdar2023.org/program/competitions/).&lt;br /&gt;
&lt;br /&gt;
Researchers from academia or industry are encouraged to participate. As&lt;br /&gt;
usual, each competition has its own website, where one may download the&lt;br /&gt;
datasets, submission guidelines, etc. For any questions, please contact&lt;br /&gt;
the respective organizers of the competition in which you are&lt;br /&gt;
interested.&lt;br /&gt;
&lt;br /&gt;
DUDE Competition&lt;br /&gt;
----------------&lt;br /&gt;
&lt;br /&gt;
ICDAR 2023 Competition on Document UnderstanDing of Everything (DUDE)&lt;br /&gt;
proposes a new dataset for benchmarking Document Understanding systems&lt;br /&gt;
under real-world settings that have been previously overlooked. In&lt;br /&gt;
contrast to previous datasets, we extensively source multi-domain,&lt;br /&gt;
multi-purpose, and multi-page documents of various types, origins, and&lt;br /&gt;
dates.  &lt;br /&gt;
Importantly, we bridge the yet unaddressed gap between Document Layout&lt;br /&gt;
Analysis and Question Answering paradigms by introducing complex&lt;br /&gt;
layout-navigating questions and unique problems that often demand&lt;br /&gt;
advanced information processing or multi-step reasoning. Finally, the&lt;br /&gt;
multi-phased evaluation protocol also assesses the few-shot capabilities&lt;br /&gt;
of models by testing their generalization power to previously unseen&lt;br /&gt;
questions and domains, a condition essential to business use cases&lt;br /&gt;
prevailing in the field.&lt;br /&gt;
&lt;br /&gt;
Registration has been open since December 2022, and the&lt;br /&gt;
training/validation sets are available on the competition website. The&lt;br /&gt;
schedule for the upcoming deadlines is as follows.&lt;br /&gt;
&lt;br /&gt;
**Important Dates**&lt;br /&gt;
&lt;br /&gt;
    March   1, 2023        Evaluation phase 2 open&lt;br /&gt;
    March  15, 2023        Task 1 submission deadline&lt;br /&gt;
    April   1, 2023        Evaluation phase 2 deadline&lt;br /&gt;
&lt;br /&gt;
All dates are 23:59 AoE and subject to change.&lt;br /&gt;
&lt;br /&gt;
SSDA 2023 and SSDA 2024&lt;br /&gt;
=======================&lt;br /&gt;
&lt;br /&gt;
Find below a call for proposals for organizing the IAPR TC10/TC11 Summer&lt;br /&gt;
School on Document Analysis (SSDA). Proposals may be submitted for&lt;br /&gt;
either the 2023 or the 2024 edition. The deadline for&lt;br /&gt;
SSDA 2023 has been extended to March 31, 2023. Proposals for&lt;br /&gt;
organizing the summer school in 2024 are also welcome.&lt;br /&gt;
&lt;br /&gt;
Call for SSDA 2023 and SSDA 2024 Proposals&lt;br /&gt;
------------------------------------------&lt;br /&gt;
&lt;br /&gt;
SSDA 2023/2024 call for proposals: IAPR TC10/TC11 Summer School on&lt;br /&gt;
Document Analysis&lt;br /&gt;
&lt;br /&gt;
**Important Dates**&lt;br /&gt;
&lt;br /&gt;
    March 31, 2023    proposal submission deadline for SSDA 2023&lt;br /&gt;
    TBD               proposal submission deadline for SSDA 2024&lt;br /&gt;
&lt;br /&gt;
Submit Proposals via email to:  &lt;br /&gt;
\* Foteini Simistira Liwicki (TC11 Representative),&lt;br /&gt;
foteini.liwicki\@ltu.se  &lt;br /&gt;
\* KC Santosh (TC10 Representative), Santosh.KC\@usd.edu&lt;br /&gt;
&lt;br /&gt;
Part of the mission of the International Association for Pattern Recognition&lt;br /&gt;
(IAPR) TC11 and TC10 is to promote high quality educational activities&lt;br /&gt;
related to Reading Systems and Graphics Recognition. Responding to this&lt;br /&gt;
need, TC10 and TC11 have established a series of summer schools. After&lt;br /&gt;
the successful organization of summer schools in Jaipur, India, La&lt;br /&gt;
Rochelle, France, Islamabad, Pakistan, and Luleå, Sweden, **we are now&lt;br /&gt;
soliciting proposals** for the organization of the fifth &amp;quot;IAPR TC10/TC11&lt;br /&gt;
Summer School on Document Analysis&amp;quot; (SSDA) in 2023.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;IAPR TC10/TC11 Summer School on Document Analysis&amp;quot; is intended to&lt;br /&gt;
become the primary educational activity of IAPR TC11 (Reading Systems)&lt;br /&gt;
and TC10 (Graphics Recognition). The School is meant to be a training&lt;br /&gt;
activity where participants are exposed to the latest trends and&lt;br /&gt;
techniques of Reading Systems and Graphics Recognition.&lt;br /&gt;
&lt;br /&gt;
The aim of the School is to provide both an objective and clear overview&lt;br /&gt;
and an in-depth analysis of the state-of-the-art research in selected&lt;br /&gt;
topics of Reading Systems and Graphics Recognition. The School should&lt;br /&gt;
aim to provide a stimulating opportunity for young researchers and PhD&lt;br /&gt;
students in the field.&lt;br /&gt;
&lt;br /&gt;
Individuals and groups who are interested in Reading Systems and&lt;br /&gt;
Graphics Recognition are invited to submit proposals for organizing and&lt;br /&gt;
hosting the 2023 IAPR TC10 / TC11 Summer School. As the previous summer&lt;br /&gt;
schools were organized in Asia, Europe, and the Indian subcontinent,&lt;br /&gt;
organizing teams from the Americas are encouraged to submit a bid in&lt;br /&gt;
order to facilitate the envisioned rotational scheme of the IAPR TC10 /&lt;br /&gt;
TC11 Summer School.&lt;br /&gt;
&lt;br /&gt;
To fully plan their bid, proposers are expected to first familiarize&lt;br /&gt;
themselves with the guidelines for organizing the School. The&lt;br /&gt;
Guidelines can be found at the TC11 Web site:  &lt;br /&gt;
http://www.iapr-tc11.org/mediawiki/index.php/Guidelines\_for\_Organising\_and\_Bidding\_to\_Host\_the\_TC10\_/\_TC11\_Summer\_School&lt;br /&gt;
&lt;br /&gt;
The submission of a bid implies full agreement with the rules and&lt;br /&gt;
procedures for organizing the School. In particular, this means that&lt;br /&gt;
organizers will apply for IAPR support and that the event will use the&lt;br /&gt;
series title &amp;quot;IAPR TC10/TC11 Summer School on Document Analysis&amp;quot; with an&lt;br /&gt;
optional sub-title denoting a special focus of the respective event.&lt;br /&gt;
&lt;br /&gt;
Please consider submitting a proposal for this increasingly important&lt;br /&gt;
event for the TC10/TC11 community. If you have questions, please do not&lt;br /&gt;
hesitate to contact the TC11 and TC10 SSDA representatives: Foteini&lt;br /&gt;
Simistira Liwicki (TC11 Representative) and KC Santosh (TC10&lt;br /&gt;
Representative).&lt;br /&gt;
&lt;br /&gt;
Previous events: As a reference, the 2021 Summer School on Document&lt;br /&gt;
Analysis was held in Luleå, Sweden with the theme&lt;br /&gt;
[Digital Transformation in a Changing&lt;br /&gt;
World](https://www.ltu.se/research/subjects/Maskininlarning/Workshoppar/SSDA-2021?l=en).&lt;br /&gt;
&lt;br /&gt;
Job Offers&lt;br /&gt;
==========&lt;br /&gt;
&lt;br /&gt;
Find below a post announcing 2 open postdoc positions at the Computer&lt;br /&gt;
Vision Center (CVC), Barcelona. The positions focus on computer vision&lt;br /&gt;
and on federated learning and differential privacy.&lt;br /&gt;
&lt;br /&gt;
2x Post-doctoral positions at the Computer Vision Center, Barcelona&lt;br /&gt;
-------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
We are seeking two postdoctoral researchers to join the Vision, Language&lt;br /&gt;
and Reading group at the Computer Vision Center (CVC), in Barcelona,&lt;br /&gt;
Spain, focused on (1) COMPUTER VISION and (2) FEDERATED LEARNING AND&lt;br /&gt;
DIFFERENTIAL PRIVACY.&lt;br /&gt;
&lt;br /&gt;
The positions are available for a minimum of 2 years, and are linked to&lt;br /&gt;
the &amp;quot;European Lighthouse on Secure and Safe AI&amp;quot; (ELSA), a European&lt;br /&gt;
Project funded by Horizon Europe and backed by the ELLIS network of&lt;br /&gt;
excellence. The project covers research topics that include robustness,&lt;br /&gt;
privacy and human agency and will develop use cases in areas such as&lt;br /&gt;
autonomous driving, robotics, health and document intelligence. The&lt;br /&gt;
candidate researchers will focus on privacy aware methods for document&lt;br /&gt;
understanding.&lt;br /&gt;
&lt;br /&gt;
**CANDIDATE'S PROFILE**  &lt;br /&gt;
The candidate should possess a PhD in machine learning or computer&lt;br /&gt;
vision and have a strong publication record. We are looking for&lt;br /&gt;
candidates who have publications in top conferences like CVPR, ECCV,&lt;br /&gt;
ICCV, ICDAR, NeurIPS, ICML, ICLR.&lt;br /&gt;
&lt;br /&gt;
The candidate should have a strong background in machine learning and&lt;br /&gt;
computer vision. Experience in document image analysis and/or visual&lt;br /&gt;
question answering would be a plus. Applicants are expected to be&lt;br /&gt;
fluent in both oral and written communication in English. They should&lt;br /&gt;
work well in a team while demonstrating initiative and independence. The&lt;br /&gt;
candidate is expected to co-supervise PhD students.&lt;br /&gt;
&lt;br /&gt;
The successful candidate is expected to contribute to the design and&lt;br /&gt;
development of AI solutions for document understanding, employing&lt;br /&gt;
privacy preserving techniques and infrastructures set up by the ELSA&lt;br /&gt;
project.&lt;br /&gt;
&lt;br /&gt;
**THE COMPUTER VISION CENTER**  &lt;br /&gt;
The selected candidate will work in the Computer Vision Centre (CVC),&lt;br /&gt;
Barcelona, a research institute comprising more than 130 researchers and&lt;br /&gt;
support staff, dedicated to computer vision research and knowledge&lt;br /&gt;
transfer. With a strong international projection and links to the&lt;br /&gt;
industry, the Computer Vision Centre offers an exciting environment for&lt;br /&gt;
scientific career development. The Computer Vision Centre has a plan for&lt;br /&gt;
expansion of its permanent research staff base and has received the &amp;quot;HR&lt;br /&gt;
Excellence in Research&amp;quot; award as a provider and supporter of a&lt;br /&gt;
stimulating and favourable working environment.&lt;br /&gt;
&lt;br /&gt;
These posts will be directly supervised by Dr Dimosthenis Karatzas, who&lt;br /&gt;
leads the Vision, Language and Reading research group at the CVC.&lt;br /&gt;
&lt;br /&gt;
Barcelona is a vibrant city and an important Artificial Intelligence&lt;br /&gt;
hub. The city's high quality of life is combined with its open and&lt;br /&gt;
international character. Barcelona is very well connected by air,&lt;br /&gt;
sea and ground transportation. The region of Catalonia boasts its own AI&lt;br /&gt;
strategy, in which the CVC is a key player.&lt;br /&gt;
&lt;br /&gt;
**RESEARCH CONTACT**  &lt;br /&gt;
If you are interested in the position, please contact Dr Dimosthenis&lt;br /&gt;
Karatzas for more information and applications (dimos\@cvc.uab.es)&lt;br /&gt;
&lt;br /&gt;
**APPLICATION PROCESS**  &lt;br /&gt;
Apply by filling in the online form at:  &lt;br /&gt;
Computer Vision:&lt;br /&gt;
http://www.cvc.uab.es/blog/2023/01/11/postdoc-position-in-computer-vision/  &lt;br /&gt;
FL and DP:&lt;br /&gt;
http://www.cvc.uab.es/blog/2023/01/11/postdoc-position-in-computer-vision/&lt;br /&gt;
&lt;br /&gt;
**MORE INFO**  &lt;br /&gt;
ELSA project: https://elsa-ai.eu/  &lt;br /&gt;
Computer Vision Center: http://www.cvc.uab.es/  &lt;br /&gt;
Vision, Language and Reading group: https://www.vlr.ai/&lt;br /&gt;
&lt;br /&gt;
Datasets&lt;br /&gt;
========&lt;br /&gt;
&lt;br /&gt;
TC11 Datasets Repository&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
### Where to share datasets&lt;br /&gt;
&lt;br /&gt;
Did you know? We have two official places for datasets:  &lt;br /&gt;
&lt;br /&gt;
-   Our historical platform for storage and listing:&lt;br /&gt;
    http://datasets.iapr-tc11.org&lt;br /&gt;
&lt;br /&gt;
-   A Zenodo community (if you choose to submit your datasets there):&lt;br /&gt;
    https://zenodo.org/communities/iapr-tc11&lt;br /&gt;
&lt;br /&gt;
TC11 maintains a collection of datasets that can be found online in the&lt;br /&gt;
[TC11 Datasets&lt;br /&gt;
Repository](http://www.iapr-tc11.org/mediawiki/index.php/Datasets).&lt;br /&gt;
&lt;br /&gt;
If you have new datasets (e.g., from competitions) that you wish to&lt;br /&gt;
share with the research community, please use the [online upload&lt;br /&gt;
form](http://tc11.cvc.uab.es/upload/). For questions and support, please&lt;br /&gt;
contact the TC11 Dataset Curator (contact information is below).&lt;br /&gt;
&lt;br /&gt;
**Joseph Chazalon (TC11 Dataset Curator)**  &lt;br /&gt;
( &amp;lt;joseph.chazalon@lrde.epita.fr&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
Contributions and Subscriptions&lt;br /&gt;
==================================&lt;br /&gt;
&lt;br /&gt;
**Call for Contributions:** To contribute news items, please send a&lt;br /&gt;
short email to the editor, [Nibal Nayef](mailto:n.nayef@gmail.com).&lt;br /&gt;
Contributions might include conference and workshop announcements or &lt;br /&gt;
reports, job opportunities, book reviews, or anything else of interest &lt;br /&gt;
to the TC11 community. &lt;br /&gt;
&lt;br /&gt;
**Subscription:** This newsletter is sent to subscribers of the IAPR&lt;br /&gt;
TC11 mailing list. &lt;br /&gt;
&lt;br /&gt;
To join the TC11 mailing list, please click on [this link](https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=iapr-tc11&amp;amp;A=1).&lt;br /&gt;
&lt;br /&gt;
To manage your subscription, please visit the [mailing list homepage](https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=IAPR-TC11).&lt;br /&gt;
&lt;br /&gt;
------------------------------------------------------------------------&lt;br /&gt;
IAPR TC11 HOMEPAGE: [http://www.iapr-tc11.org](http://www.iapr-tc11.org)&lt;br /&gt;
&lt;br /&gt;
The IAPR is the International Association for Pattern Recognition.&lt;br /&gt;
IAPR's Technical Committee No. 11 (TC11) includes researchers and&lt;br /&gt;
practitioners working with Optical Character Recognition (OCR), and more&lt;br /&gt;
generally the analysis and recognition of information in documents.&lt;/div&gt;</summary>
		<author><name>Nibalnayef</name></author>
		
	</entry>
</feed>