IAPR TC11 Newsletter 2019 02
February, 2019
Click on the buttons below to view sections of the newsletter.
- Message from the Editor
- Dates and Deadlines
- Deadlines
- Upcoming Conferences and Events
- Conferences
- ICDAR 2019: 2nd Call for Papers
- IGS 2019: Call for Papers *(repost)*
- ICDAR: Awards and Proposals
- Call for Nominations for ICDAR 2019 Awards *(repost)*
- Call for Proposals to host ICDAR 2023 *(repost)*
- ICDAR 2019: Workshops
- GREC 2019: First Announcement *(repost)*
- HIP 2019: Call for Papers
- ICDAR-OST 2019: Call for Papers
- ICDAR 2019: Competitions
- List of ICDAR 2019 Competitions *(repost)*
- Call for Participation: ICDAR 2019 CROHME + TFD Competition *(repost)*
- Call for Participation: ICDAR 2019 SROIE Competition
- Journals
- Pattern Recognition Letters Special Issue on DLVTA: Deep Learning for Video Text Analysis *(repost)*
- Pattern Recognition Special Issue on Scene Text Reading and its Applications (STRA) *(repost)*
- Pattern Recognition Letters Special Issue on Hierarchical Representations: New Results and Challenges for Image Analysis
- IJDAR: Latest Issue (Vol. 21, Issue 4) *(repost)*
- IJDAR Discount for IAPR Members *(repost)*
- Books
- Book on Graphics Recognition *(repost)*
- Datasets
- TC11 Datasets Repository *(repost)*
- Careers
- Student Industrial Internship Opportunities (IAPR) *(repost)*
Message from the Editor
More time for preparing our ICDAR 2019 papers: the submission deadlines have been extended to March 1 (abstracts) and March 8 (full papers), respectively.
Related to ICDAR 2019, this newsletter further includes the calls for the 5th International Workshop on Historical Document Imaging and Processing (HIP 2019), the 2nd ICDAR Workshop on Open Services and Tools for Document Analysis (ICDAR-OST 2019), and the Competition on Scanned Receipts OCR and Information Extraction (ICDAR 2019 SROIE Competition).
Finally, you will find a new call for papers for the Pattern Recognition Letters special issue on Hierarchical Representations: New Results and Challenges for Image Analysis.
Reposts include the call for papers for the IGS 2019 conference and the Pattern Recognition Letters special issue on Deep Learning for Video Text Analysis, both with the rapidly approaching deadline on February 28.
Andreas Fischer, TC11 Communications Officer
( andreas.fischer@hefr.ch )
Join us! If you are not already a member of the TC11 community, please consider joining the TC11 mailing list. Follow us on Twitter (iapr_tc11): https://twitter.com/iapr_tc11
Deadlines
2019
- Feb. 28: Paper submission deadline for IGS 2019.
- Feb. 28: Paper submission deadline for Pattern Recognition Letters Special Issue on Deep Learning for Video Text Analysis
- March 1 & 8: *EXTENDED* Abstract & paper submission deadlines for ICDAR 2019 (Call for Papers)
- March 31: Paper submission deadline for Pattern Recognition Special Issue on Scene Text Reading
- May 1: Nominations for ICDAR 2019 Awards
- May 31: Paper submission deadline for Pattern Recognition Letters Special Issue on Hierarchical Representations
- June 1: Proposals for Hosting ICDAR 2023
Upcoming Conferences and Events
2019
- IGS 2019, Cancún, Mexico (June 9-13, 2019)
- ICDAR 2019, Sydney, Australia (September 22-25, 2019)
2020 and Later
- ICPRAI 2020, Zhongshan, China (May 12-15, 2020)
- DAS 2020, Wuhan, China (May 17-20, 2020)
- ICFHR 2020, Dortmund, Germany (September 8-10, 2020)
- ICFHR 2022, Hyderabad, India (December, 2022)
ICDAR 2019: 2nd Call for Papers
The 15th International Conference on Document Analysis and Recognition will be held in Sydney, Australia from September 20-25, 2019. ICDAR is the premier international forum for researchers and practitioners in the document analysis community.
Important Dates
- Mar 01, 2019: *EXTENDED* Abstract Submission Deadline
- Mar 08, 2019: *EXTENDED* Paper Submission Deadline
- May 15, 2019: Author Notification
- Jun 15, 2019: Camera-Ready Papers Due
Accepted papers will be published by IEEE Computer Society’s Conference Publishing Services (CPS) and included in the IEEE Xplore Digital Library.
Topics of Interest include, but are not limited to:
- Document Image Processing
- Physical and logical layout analysis
- Character and text recognition
- Pen‐based document analysis
- Historical document analysis
- Document analysis systems
- Symbol and graphics recognition
- Document forensics
- Human document interaction
- Scene text detection and recognition
- Document retrieval
- Signature verification and writer identification
- Multimedia documents
- Performance evaluation
- Machine learning for document analysis
- Applications of document analysis
- Cognitive issues of documents
- Semantic information extraction from documents
- Document summarization classification and translation
- Document simulation and synthesis
Submission and Review
ICDAR 2019 will follow a double-blind review process. Authors should not include their names and affiliations anywhere in the manuscript. Authors should also ensure that their identity is not revealed indirectly, by citing their previous work in the third person and by omitting acknowledgements until the camera-ready version.
Paper format and length
Papers accepted for the conference will be allocated 6 pages in the proceedings, with the option of purchasing up to 2 extra pages for AUD 100 per page, payable after paper acceptance and at the time of registration. The length of the submitted manuscript should match that intended for final publication: if you are unwilling or unable to pay the extra charge, you should limit your paper to 6 pages; otherwise, the page limit is 8 pages. The paper formatting template and instructions are available at the link below:
http://icdar2019.org/paper-submission/
The list of ICDAR 2019 Workshops is available at the link below:
http://icdar2019.org/workshops/
The list of ICDAR 2019 Competitions is available at the link below:
http://icdar2019.org/competitions-2/
Cheng-Lin Liu, Andreas Dengel, and Rafael Lins, ICDAR 2019 Program Chairs
( liucl@nlpr.ia.ac.cn, dengel@dfki.uni-kl.de, rdl@ufpe.br )
IGS 2019: Call for Papers (repost)
The 19th International Graphonomics Conference (IGS 2019) will be held in Cancún, Mexico from June 9-13, 2019.
Important Dates
- Feb 28, 2019: Paper submission
- Mar 29, 2019: Author Notification
- Apr 29, 2019: Camera-Ready Papers Due
For more information about the International Graphonomics Society, please visit https://graphonomics.net.
The conference theme is “Graphonomics and Your Brain on Art, Creativity and Innovation”. IGS 2019 will be a single-track international forum for discussion of recent advances at the intersection of the creative arts, neuroscience, engineering, media, technology, industry, education, design, forensics, and medicine. Participants will convene to review the state of the art, identify challenges and opportunities, and create a roadmap for the field of Graphonomics and Your Brain on Art.
Topics to be addressed include but are not limited to: Integrative Strategies for Understanding Neural, Affective and Cognitive Systems in Realistic, Complex Environments; Neural and Behavioral Individuality and Variation; Neuroaesthetics; Creativity and Innovation; Neuroengineering and Brain-Inspired Art, Creative Concepts and Wearable MoBI Designs; Creative Art Therapy; Informal Learning; Education; and Forensics. Findings, including contributed papers and discussions, will appear in a peer-reviewed special issue on Graphonomics and your Brain on Art.
Jose L Contreras-Vidal, IGS 2019 General Chair
( jlcontreras-vidal@uh.edu )
Call for Nominations for ICDAR 2019 Awards (repost)
Important Dates
May 1, 2019 Nominations Due
Submission Method: email to dimos@cvc.uab.es / afornes@cvc.uab.es
The ICDAR Award Program is an established program designed to recognize individuals who have made outstanding contributions to the field of Document Analysis and Recognition in one or more of the following areas:
- Research
- Training of students
- Research/Industry interaction
- Service to the profession
Every two years, two award categories are presented: the IAPR/ICDAR Young Investigator Award (for nominees less than 40 years old at the time the award is made) and the IAPR/ICDAR Outstanding Achievements Award. Each award consists of a token gift and a suitably inscribed certificate. The recipient of the Outstanding Achievements Award will be invited to give the opening keynote speech at the ICDAR 2019 conference, introduced by the recipient from the previous conference.
Nominations are invited for the ICDAR 2019 Awards in both categories.
The nomination packet should include the following:
- A nominating letter (1 page) including a brief citation to be included in the certificate.
- A brief vitae (2 pages) of the nominee highlighting the accomplishments being recognized.
- Supporting letters (1 page each) from 3 active researchers from at least 3 different countries.
A nomination is usually put forward by a researcher (preferably from a different Institution than the nominee) who is knowledgeable of the scientific achievements of the nominee, and who organizes letters of support.
The submission procedure is strictly confidential, and self-nominations are not allowed.
Please send nominations packets electronically to the Awards Committee Co-Chairs Dimosthenis Karatzas (dimos@cvc.uab.es) and Alicia Fornes (afornes@cvc.uab.es). The deadline for receipt of nominations is May 1st, 2019 but early submissions are strongly encouraged.
The final decision will be made by the Awards Committee which is composed of the ICDAR advisory board and the previous awardees.
ICDAR Advisory Board
Call for Proposals to host ICDAR 2023 (repost)
Important Dates
Jun 1, 2019 Proposals Due
Submission Method: email to dimos@cvc.uab.es / afornes@cvc.uab.es
The ICDAR Advisory Board is seeking proposals to host the 17th International Conference on Document Analysis and Recognition, to be held in 2023 (ICDAR2023).
ICDAR is the premier IAPR event in the field of Document Analysis and Recognition with 300 to 500 participants. The aim of this conference is to bring together international experts to share their experiences and to promote research and development in all areas of Document Analysis and Recognition.
Any consortium interested in making a proposal to host an ICDAR should first familiarise themselves with the “Guidelines for Organizing and Bidding to Host ICDAR” document which is available on the TC10 and TC11 websites (iapr-tc10.univ-lr.fr and www.iapr-tc11.org, respectively).
A link to the most current version of the guidelines appears below. Small updates in the guidelines are expected during the next few weeks, so please check on the Web site of TC11 for the latest version. http://www.iapr-tc11.org/mediawiki/images/ICDAR_Guidelines_2016_02_27.pdf
The submission of a bid implies full agreement with the rules and procedures outlined in that document.
The submitted proposal must define clearly the items specified in the guidelines (Section 5.2).
It has been the tradition that the location of ICDAR conferences follows a rotating schedule among different continents. Hence, proposals from the Americas are encouraged. However, high-quality bids from other locations, for example from countries where we have had no ICDAR before, will also be considered. Proposals will be examined by the ICDAR Advisory Board.
Proposals should be emailed to Dr. Dimosthenis Karatzas at dimos@cvc.uab.es and Dr. Alicia Fornes at afornes@cvc.uab.es by June 1, 2019.
ICDAR Advisory Board
GREC 2019: First Announcement (repost)
The 13th IAPR International Workshop on Graphics Recognition (GREC 2019), organized by the IAPR TC10, will be held on September 20-21, 2019 (Sydney, Australia) in conjunction with ICDAR 2019.
Important Dates
- May 20, 2019: Paper Submission Deadline
- Jun 15, 2019: Acceptance notification
- Jun 30, 2019: Camera-Ready Papers Due
The GREC workshops provide an excellent opportunity for researchers and practitioners at all levels of experience to meet colleagues and to share new ideas and knowledge about graphics recognition methods. The aim of this workshop is to maintain a very high level of interaction and creative discussions between participants, maintaining a “workshop” spirit, and not being tempted by a “mini-conference” model.
Three special sessions will focus on Music Scores Recognition, Comics Analysis and Understanding, and Sketch Recognition and Understanding. We encourage authors to submit papers on these topics, but papers on other graphics recognition topics are also welcome.
Jean-Christophe Burie, GREC 2019 General Chair
( jcburie@univ-lr.fr )
HIP 2019: Call for Papers
The 5th International Workshop on Historical Document Imaging and Processing (HIP 2019) will be held in conjunction with ICDAR, on September 20-21, 2019 in Sydney, Australia.
Important Dates
- Jun 01, 2019: Paper Submission Deadline
- Jul 15, 2019: Acceptance notification
- Aug 01, 2019: Camera-Ready Papers Due
The workshop brings together researchers working with historical documents and is intended to be complementary and synergistic to the work in analysis and recognition featured in the main sessions of ICDAR, the premier international forum for researchers and practitioners in the document analysis community.
Workshop topics include (but are not limited to):
- Imaging and Image Acquisition (fragile materials, multispectral, non-invasive, applications, …)
- Digital Archiving Considerations (compression, metadata, collection, records, archives, …)
- Document Restoration/Improving readability (dealing with defects, enhancement, interactive tools, …)
- Content Extraction (retrieval, transcription, user interfaces, algorithms, context, ontologies, …)
- Family History Documents and Genealogies (collections, extracting / linking, historical networks, …)
- Automated Classification, Grouping and Hyperlinking of Historical Documents (style identification, online navigation, querying, summarisation, tagging, …)
For more information please visit:
https://www.primaresearch.org/hip2019/callForPapers
Stefan Pletschacher (Organising Chair), Apostolos Antonacopoulos (General Chair), Clemens Neudecker and Christian Clausner (Program Chairs)
( hip2019@primaresearch.org )
ICDAR-OST 2019: Call for Papers
The 2nd ICDAR Workshop on Open Services and Tools for Document Analysis (ICDAR-OST 2019) will be held in conjunction with ICDAR, on September 21, 2019 in Sydney, Australia.
Important Dates
- Jun 01, 2019: Paper Submission Deadline
- Jul 15, 2019: Acceptance notification
- Aug 01, 2019: Camera-Ready Papers Due
The workshop aims at promoting open tools, software, and open services (for processing, evaluation or visualization), as well as at facilitating public dataset usage, in the domain of Document Image Analysis Research, building on the experience of our community and of others. Such tools, software, services, formats or datasets should observe the principles of being reusable (I can use it on my data), transferable (I can use it on my premises) and reproducible (I can obtain the same results).
Accepted contributions are presented during interactive pitch and demo sessions, enabling authors to advertise their work, identify potential issues and solutions in their approach, and spark collaboration with other participants. While encouraged, releasing tools under a free/open-source license is not required.
Topics of interests include, but are not limited to:
- Fully Open Source Tools
- Web Services for Document Image Analysis
- Collaborative Platforms
- Creating Ground Truth
- Performance Evaluation
- Coordination systems for DAR
- Deployment of document image processing tools for production
- All methods or tools related to Document Image Analysis Research in general
For more information please visit:
https://sites.google.com/view/icdar-ost2019/call-for-papers
Fouad Slimane and Lars Vögtlin (Main Organizers), Marcel Würsch and Marcus Liwicki (Program Chairs)
( ost2019@unifr.ch )
List of ICDAR 2019 Competitions (repost)
ICDAR 2019 will organize a set of competitions dedicated to a broad range of document analysis problems. The list of accepted competitions can be accessed online.
Category: Handwritten Historical Document Layout Recognition
- ICDAR 2019 Competition on Historical Book Analysis (M. Mehri et al.)
- ICDAR 2019 Competition on Digitised Magazine Article Segmentation (historical documents) (L. Wilms et al.)
- ICDAR 2019 Competition on German-Brazilian Newspaper Layout Analysis (A. Araujo)
- ICDAR 2019 Competition on Baseline Detection and Page Segmentation (M. Diem et al.)
Category: Historical Handwritten Script Analysis
- ICDAR 2019 Competition on Recognition of Historical Arabic Scientific Manuscripts (A. Schoonbaert et al.)
- ICDAR 2019 Competition on Recognition of Early Indian Printed Documents (T. Derrick et al.)
- ICDAR 2019 Historical Document Reading Challenge on Large Structured Family Records (F. Liwicki et al.)
- ICDAR 2019 Competition on Image Retrieval for Historical Handwritten Documents (V. Christlein)
Category: Document Recognition (Layout analysis & Text Recognition)
- ICDAR 2019 Competition on Table Detection and Recognition in Archival Documents (H. Déjean et al.)
- ICDAR 2019 Competition on Table Recognition (L. Gao et al.)
- ICDAR 2019 Scanned Receipts OCR and Information Extraction (Z. Huang et al.)
- ICDAR 2019 Competition on Form Understanding in Noisy Scanned Documents
- ICDAR 2019 Competition on Recognition of Documents with Complex Layouts
Category: Handwriting recognition
- ICDAR 2019 Competition on Recognition of Handwritten Mathematical Expressions and Typeset Formula Detection (M. Mahdavi et al.)
Category: Document Image Binarization
- ICDAR 2019 Competition on Binarization of Handwritten, printed, or mobile captured Documents (R. Lins)
- ICDAR 2019 Competition on Document Image Binarization (I. Pratikakis et al.)
Category: Robust Reading
- ICDAR 2019 Competition on Robust Text Reading from Large-scale Street View Images with Partial Labels (Y. Sun et al.)
- ICDAR 2019 RRC on Scene Text Visual Question Answering (A. Biten et al.)
- ICDAR 2019 RRC on Arbitrary-shaped scene text detection and recognition (Y. Sun et al.)
- ICDAR 2019 RRC on Reading Chinese text on signboard (Dong Wang)
- ICDAR 2019 RRC on Multi-lingual scene text detection and recognition (M. Busta et al.)
Category: Post-OCR Correction
- ICDAR 2019 Competition on Post-OCR Text Correction (C. Rigaud et al.)
Category: Chart Parsing
- ICDAR 2019 Competition on Chart Elements Parsing (C. Tensmeyer)
- ICDAR 2019 Competition on Harvesting Raw Tables from Infographics (R. Setlur)
Category: Miscellaneous Competitions
- ICDAR 2019 Competition on Fine-Grained Classification of comic characters (J.-C. Burie)
- ICDAR 2019 Competition on Object Detection and Recognition in Floorplan images (M. Luqman)
- ICDAR 2019 Competition on Signer Identification based on a Multi-device On-line Signature Dataset (G. Hanczar, E. Griechisch)
Marcus Liwicki and Luiz Eduardo S. Oliveira, ICDAR 2019 Competition Chairs
( marcus.liwicki@ltu.se, lesoliveira@inf.ufpr.br )
Call for Participation: ICDAR 2019 CROHME + TFD Competition (repost)
The 6th Competition on Recognition of Handwritten Mathematical Expressions (CROHME) and Typeset Formula Detection (TFD) will be organized in conjunction with ICDAR 2019.
Important Dates
- Feb 28, 2019: Release of training set
- Mar 15, 2019: Registration deadline
- Apr 1, 2019: Release of test set
- Apr 30, 2019: Result submission due
- May 15, 2019: Initial submission of competition reports
During its history, the CROHME competition has become the standard benchmark for comparing online handwritten math recognition systems. An IJDAR paper summarizing the systems and findings from the first four years of the competition is available, along with publications summarizing the outcome of each competition in ICFHR and ICDAR (2011-2014, 2016). Currently, the CROHME dataset is used by research groups from around the world.
This new instance of the CROHME competition will have three main tasks. As in previous CROHMEs, the first task concerns the recognition of isolated handwritten formulas. There will be a new task for offline handwritten formula recognition from images. Finally, we will have a new task for detecting formulas in document pages.
Task Overview
1.) Online handwritten formula recognition: expression recognition from strokes. Subtasks:
- a) Isolated math symbol recognition
- b) Parsing expressions from valid symbol locations and labels
2.) Offline handwritten formula recognition: expression recognition from images. Subtasks:
- a) Isolated math symbol recognition
- b) Parsing expressions from valid symbol locations and labels
3.) Detection of formulas in document pages. Subtasks:
- a) From raw images of document pages
- b) From provided symbol locations and labels
Subtasks 1-a)/2-a) and 1-b)/2-b) are provided to observe the behavior of symbol recognition and relationship parsing in isolation, without the additional complexity of symbol segmentation and recognition.
Awards. Awards will be provided for each of the main tasks, based on expression and detection rates.
Data and Submission
The CROHME organizers will provide an expanded training set, along with a new test set. Participants are welcome to enhance/expand the provided training data, or use additional data for their systems. Submissions will be made using an online system: recognition results will be uploaded through the site, and then the leaderboard will be updated automatically.
CROHME 2019 Organizers
- Mahshad Mahdavi (mxm7832@rit.edu), Rochester Institute of Technology, NY, USA
- Dr. Richard Zanibbi (rlaz@cs.rit.edu), Rochester Institute of Technology, NY, USA
- Dr. Harold Mouchère (harold.mouchere@univ-nantes.fr), University of Nantes, France
- Dr. Utpal Garain (utpal@isical.ac.in), Indian Statistical Institute, India
- Pr. Christian Viard-Gaudin (christian.viard-gaudin@univ-nantes.fr), University of Nantes, France
Questions
Please, send questions, concerns or comments about the competition to: crohme2019@cs.rit.edu
Call for Participation: ICDAR 2019 SROIE Competition
The Competition on Scanned Receipts OCR and Information Extraction (SROIE) will be organized in conjunction with ICDAR 2019.
Important Dates
- Mar 01, 2019: Training/validation set available
- Mar 31, 2019: Registration deadline
- Apr 15, 2019: Submission open
- Apr 30, 2019: Deadline for competition participants
Scanned receipts OCR and information extraction (SROIE) refers to recognizing and extracting key texts from scanned receipts and invoices. SROIE can provide services for many applications, such as efficient archiving, fast indexing and document analytics, and therefore plays a critical role in streamlining document-intensive processes and office automation in many financial, accounting and taxation areas. However, SROIE also faces big challenges, such as very high accuracy requirements and low receipt quality. In recognition of these challenges and of the importance and huge commercial potential of SROIE, this ICDAR 2019 competition on SROIE is proposed. We welcome participation from interested parties, aiming to draw attention from the OCR and DAR communities and to promote research and development on SROIE. This competition uses the Robust Reading Competition Web portal for organization and management.
Dataset and Tasks
The proposed competition will develop a well-annotated receipt dataset with 1000 whole scanned receipt images. Each receipt image has four key text fields of interest, such as goods name, unit price and total cost. There are two specific tasks in this competition: scanned receipt OCR and key information extraction. Compared to other widely studied OCR tasks at ICDAR, receipt OCR is a much less studied problem with some unique challenges, and research on extracting key information from receipts has rarely been published.
Task One: Scanned Receipt OCR
The aim of this task is to accurately localize texts with 4 vertices and to recognize the text in the localized bounding boxes. The text localization ground truth will be at least at the level of words. As participating teams may apply localization algorithms that locate text at different levels (e.g. text lines), a methodology based on DetVal will be used for the evaluation of text localization in this task.
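For illustration only, the sketch below shows one generic way to score word-level text localization with an intersection-over-union criterion between predicted and ground-truth quadrilaterals. It is a hypothetical example under assumed conventions (the 0.5 IoU threshold, the greedy matching, and all function names are assumptions), not the official DetVal evaluation code.

```python
# Hypothetical IoU-based localization scoring sketch (NOT the official DetVal
# implementation). Each region is a quadrilateral given as 4 (x, y) vertices.
from shapely.geometry import Polygon

IOU_THRESHOLD = 0.5  # assumed matching threshold, for illustration only


def iou(quad_a, quad_b):
    """Intersection-over-union of two quadrilaterals [(x, y), ...]."""
    a, b = Polygon(quad_a), Polygon(quad_b)
    if not (a.is_valid and b.is_valid):
        return 0.0
    union = a.union(b).area
    return a.intersection(b).area / union if union > 0 else 0.0


def match_detections(gt_quads, det_quads):
    """Greedily match detections to ground truth; return (tp, fp, fn)."""
    unmatched_gt = list(range(len(gt_quads)))
    tp = 0
    for det in det_quads:
        best_i, best_iou = None, 0.0
        for i in unmatched_gt:
            score = iou(det, gt_quads[i])
            if score > best_iou:
                best_i, best_iou = i, score
        if best_i is not None and best_iou >= IOU_THRESHOLD:
            unmatched_gt.remove(best_i)
            tp += 1
    fp = len(det_quads) - tp  # unmatched detections
    fn = len(unmatched_gt)    # missed ground-truth regions
    return tp, fp, fn
```

From such per-image counts, precision, recall and an F-score can be aggregated over the test set; the competition's actual protocol may differ in its matching and aggregation details.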
Task Two: Key Information Extraction
The aim of this task is to extract the texts of a number of key fields from scanned receipts. For each test receipt image, the extracted texts are compared to the ground truth. An extracted text is marked as correct if both the submitted content and the category of the extracted text match the ground truth. The mAP is computed over all the extracted texts of the test receipt images, the F1 score is computed based on the mAP and recall, and the F1 score is used for ranking.
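To make the matching rule above concrete, here is a minimal sketch that micro-averages precision, recall and F1 over extracted (category, text) pairs. The function and field names are hypothetical, and the sketch substitutes plain precision for the mAP mentioned above; it is not the competition's official evaluation code.

```python
# Hypothetical sketch of the Task Two scoring rule (not the official SROIE
# evaluation code): an extracted field counts as correct only if both its
# category and its text content exactly match the ground truth.
def score_key_information(ground_truth, predictions):
    """ground_truth, predictions: dicts mapping image_id -> {category: text}."""
    correct = submitted = expected = 0
    for image_id, gt_fields in ground_truth.items():
        pred_fields = predictions.get(image_id, {})
        submitted += len(pred_fields)
        expected += len(gt_fields)
        for category, text in pred_fields.items():
            if gt_fields.get(category) == text:  # content AND category match
                correct += 1
    precision = correct / submitted if submitted else 0.0
    recall = correct / expected if expected else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Example with made-up field names (the actual SROIE categories are defined
# by the organizers):
gt = {"receipt_001": {"company": "ACME", "total": "12.50"}}
pred = {"receipt_001": {"company": "ACME", "total": "12.05"}}
print(score_key_information(gt, pred))  # -> (0.5, 0.5, 0.5)
```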
Organization Team
- Dr. Zheng Huang (huang-zheng@sjtu.edu.cn), Shanghai Jiaotong University, China
- Dr. Kai Chen (kaichen@onlyou.com), Onlyou, China
- Dr. Jianhua He (j.he7@aston.ac.uk), Aston University, UK
- Dr Xiang Bai (xbai@hust.edu.cn), Huazhong University of Science and Technology, China
- Dr. Dimosthenis Karatzas (dimos@cvc.uab.es), Universitat Autónoma de Barcelona, Spain
- Dr. Shijian Lu (Shijian.Lu@ntu.edu.sg), Nanyang Technological University, Singapore
- Dr. C. V. Jawahar (jawahar@iiit.ac.in), IIIT Hyderabad, India
Pattern Recognition Letters Special Issue on DLVTA: Deep Learning for Video Text Analysis (repost)
Important Dates:
Feb 1 - Feb 28, 2019 Paper Submission Period
We are living in a world where we are seamlessly surrounded by multimedia content: text, image, audio, video, etc. Much of this is due to advances in multimodal sensor technology. For example, intelligent video-capturing devices record how we live and what we do, using surveillance and action cameras as well as smart phones. These enable us to record videos at an unprecedented scale and pace, embedded with exceedingly rich information and knowledge. The challenge now is to mine such massive visual data to obtain valuable insight about what is happening in the world. Due to the remarkable successes of deep learning techniques, new research initiatives are being taken to significantly boost video analysis performance.
Deep learning is a new field of machine learning research concerned with designing models and learning algorithms for deep neural networks. Due to its ability to learn from big data and its superior representation and prediction performance, deep learning has achieved great success in various applications of pattern recognition and artificial intelligence, including video processing, character and text recognition, image segmentation, object detection and recognition, face recognition, traffic sign recognition, speech recognition, and machine translation, to name a few.
Deep video analytics, or video text analytics with deep learning, is an emerging research area in the field of pattern recognition. It is important to understand the opportunities and challenges of video text analysis with deep learning techniques, identify key tasks and evaluate the state of the art, showcase innovative methodologies and ideas, introduce large-scale real systems or applications, propose new real-world datasets, and discuss future directions. This virtual special issue will offer a coordinated collection of research updates in broad fields ranging from computer vision, multimedia and text processing to machine learning. We solicit original research and survey papers addressing the synergy of video understanding, text analysis and deep learning techniques. The topics of interest include, but are not limited to:
- Deep learning for video text segmentation
- Deep learning for video text analysis
- Deep learning for character and text recognition in video
- Deep learning for scene text detection and recognition
- Deep learning for text retrieval from video
- Deep learning for graphics and symbol recognition in video
- Video categorization based on text
- Deep learning for other CBDAR tasks, etc.
Guest Editors:
- Prof. Umapada Pal (Managing Guest Editor), CVPR Unit, Indian Statistical Institute, Kolkata, India, umapada@isical.ac.in
- Dr. Subhadip Basu, Computer Science and Engineering Department, Jadavpur University, Kolkata, India, subhadip@cse.jdvu.ac.in
- Prof. Ujjwal Maulik, Computer Science and Engineering Department, Jadavpur University, Kolkata, India, umaulik@cse.jdvu.ac.in
Pattern Recognition Special Issue on Scene Text Reading and its Applications (STRA) (repost)
Important Dates:
Feb 15 - Mar 31, 2019 Paper Submission Period
Text in scenes is an important source of information, since it conveys high-level semantics and is seen almost everywhere. These unique traits make scene text reading, which involves the automated detection and recognition of text in scene images and videos, a very active research topic in the computer vision, pattern recognition, and multimedia communities. Recently, these communities have observed a significant surge of research interest and effort in Scene Text Reading, evidenced by the huge number of participants in and submissions to ICDAR competitions, as well as by papers published in top journals and at top conferences.
Meanwhile, various real-world applications, such as product search, augmented reality, video indexing and autonomous driving, have created strong demand for techniques and systems that can effectively and efficiently extract and understand textual information in scene images and videos. This special issue will feature original research papers related to theories, ideas, algorithms and systems for Scene Text Reading, together with applications to real-world problems.
The topics of interest include (but are not limited to) the following:
- Basic theories and representations regarding text in natural scenes
- Scene text detection methods for scene images
- Scene text recognition methods for scene images
- End-to-end reading systems for scene images
- Text detection and recognition in born-digital images
- Text detection, recognition and tracking in videos
- Script identification in the wild
- Text information mining from web images and videos
- Quality assessment and text image/video restoration methods
- Benchmark datasets and performance evaluation methods
- Applications related to scene text understanding
- Survey papers on scene text understanding
Guest Editors:
- Dr. Xiang Bai (xbai@hust.edu.cn) Huazhong University of Science and Technology, China
- Dr. Dimosthenis Karatzas (dimos@cvc.uab.es) Universitat Autónoma de Barcelona, Spain
- Dr. Shijian Lu (Shijian.Lu@ntu.edu.sg) Nanyang Technological University, Singapore
- Dr. C. V. Jawahar (jawahar@iiit.ac.in) IIIT Hyderabad, India
Pattern Recognition Letters Special Issue on Hierarchical Representations: New Results and Challenges for Image Analysis
Important Dates:
May 1 - May 31, 2019 Paper Submission Period
Image representations based on hierarchical, scale-space models and other non-regular / irregular grids have become increasingly popular in image processing and computer vision over the past decades. Indeed, they allow modeling image content at different (and complementary) levels of scale, resolution and semantics. Methods based on such image representations have been able to tackle various complex challenges such as multi-scale image segmentation, image filtering, object detection and recognition, and more recently image characterization and understanding, potentially involving higher levels of semantics.
The proposed virtual special issue will consider extended and updated versions of papers published at the recent ICPRAI 2018 conference, as well as submissions from anyone proposing innovative methods in the field of image representation, with emphasis on, but not restricted to, computer vision and image processing, medical imaging, 2D and 3D images, multi-modality, remote sensing image analysis, and image indexation and understanding.
Topics
The main topics of this HIERARCHY virtual special issue include, but are not limited to:
- image decomposition on the basis of frequency spaces (Fourier, wavelets, etc.)
- hierarchies linked to the image space, mathematical morphology, connected operators
- scale-space representations
- non-regular / irregular grid image representations
- multi-scale representation in computer vision
- evaluation of hierarchical image representations
The aim of this special issue is to popularize the use of hierarchical methods in image processing and analysis. Indeed, although these methods have become very popular in computer vision over the last 20 years, their potential impact is largely underexploited in image processing and analysis.
Submission:
The review process will follow the standard PRLetters scheme. In particular, each paper will be reviewed by two referees. The referees will include the program committee members of the special session at ICPRAI 2018 on the same topic and other invited referees selected from the EES.
This is an open call for papers, also addressed to authors from outside the ICPRAI 2018 conference, though participants of the ICPRAI 2018 virtual special session on hierarchical representations of images will be invited to submit extended versions of their contributions. These extended articles must include at least 30% new contribution (theoretical or experimental results), with different figures and a different title from the ICPRAI paper; the HIERARCHY article must not simply reproduce the content of the ICPRAI paper. This virtual special issue also accepts entirely new proposals; both types of submissions must respect the following guidelines.
All submissions have to be prepared according to the Guide for Authors as published in the Journal Web Site at:
http://www.elsevier.com/journals/pattern-recognition-letters/0167-8655/guide-for-authors
The authors are invited to upload their article through http://ees.elsevier.com/prletters/ during the submission period (see above), indicating the virtual special issue acronym (HIERARCHY). The contribution must not have been published previously, nor be under review for any other publication elsewhere. In the case of an extension of an ICPRAI paper, the original work must be attached and a description of the major changes must be provided. The guest editors will judge the suitability and scope of each contribution.
All submitted papers will be reviewed according to the guidelines and standards of PRLetters. At least two (2) reviewers will be assigned, and up to two reviewing rounds can occur for each submitted article. The maximal length is 7 pages, except for articles for which the reviewers require major revisions and additional content, in which case the maximal length can be extended to 8 pages. If an article still needs major revision after the second round, it will be rejected.
Only original, high-quality, technically sound articles that are in line with the PRLetters standard guidelines will be considered for publication in this virtual special issue. Submissions will be judged on their contribution to the virtual special issue topics, clarity of presentation, potential impact on the field, and suitability for publication in an archival journal.
Guest Editors:
- Nicolas Passat nicolas.passat@univ-reims.fr
- Camille Kurtz camille.kurtz@parisdescartes.fr
- Antoine Vacavant antoine.vacavant@uca.fr
IJDAR: Latest Issue (Vol. 21, Issue 4) (repost)
The December 2018 issue of IJDAR has been released. Click on the links below to go directly to the Springer Link page for each article.
Table of Contents
- Building efficient CNN architecture for offline handwritten Chinese character recognition. Zhiyuan Li, Nanjun Teng, Min Jin & Huaxiang Lu
- A comprehensive study of hybrid neural network hidden Markov model for offline handwritten Chinese text recognition. Zi-Rui Wang, Jun Du, Wen-Chao Wang, Jian-Fang Zhai & Jin-Shui Hu
- Augmented incremental recognition of online handwritten mathematical expressions. Khanh Minh Phan, Anh Duc Le, Bipin Indurkhya & Masaki Nakagawa
- A combined strategy of analysis for the localization of heterogeneous form fields in ancient pre-printed records. Aurélie Lemaitre, Jean Camillerapp, Cérès Carton & Bertrand Coüasnon
- KERTAS: dataset for automatic dating of ancient Arabic manuscripts. Kalthoum Adam, Asim Baig, Somaya Al-Maadeed, Ahmed Bouridane & Sherine El-Menshawy
IJDAR Discount for IAPR Members (repost)
IAPR is pleased to announce a partnership agreement with Springer, the publisher of IJDAR, the International Journal on Document Analysis and Recognition. This new agreement will allow IAPR members to receive a subscription to the electronic version of IJDAR at a discount of nearly 50%. For additional details, see the links below:
Koichi Kise, Daniel Lopresti and Simone Marinai, IJDAR Editors-in-Chief
( kise@cs.osakafu-u.ac.jp, lopresti@cse.lehigh.edu, simone.marinai@unifi.it )
Book on Graphics Recognition (repost)
KC Santosh, Document Image Analysis: Current Trends and Challenges in Graphics Recognition, Springer, 2018. ISBN: 978-981-13-2338-6.
URL: https://link.springer.com/book/10.1007%2F978-981-13-2339-3
Table of Contents
- Document Image Analysis
- Graphics Recognition
- Graphics Recognition and Validation Protocol
- Statistical Approaches
- Structural Approaches
- Hybrid Approaches
- Syntactic Approaches
- Conclusion and Challenges
Description (taken from the Foreword by Jean-Marc Ogier):
The book starts with a clear and concise overview of document image analysis; the author puts a position about where does graphics processing lie (Chap. 1), which is immediately followed by graphics recognition (Chap. 2) in detail. The best part of the book is it summarizes the rich state-of-the-art techniques in addition to those international contests that have been happening in every 2 years since the 90s. This summary helps readers understand the scope and importance of graphics recognition in the domain. Another important issue is the author framed the need for validation protocol (Chap. 3) so that it allows a fair comparison that let us review our advancements then and now. Three different fundamental approaches, viz. statistical (Chap. 4), structural (Chap. 5), and syntactic (Chap. 7), are comprehensively described for graphics recognition by taking state-of-the-art (up to date) research techniques in addition to the hybrid approaches (Chap. 6). For a complex graphics recognition problem, structural approaches are found to be appropriate and have been well covered in the book. Interestingly, even though there exist a few works on the syntactic approach for graphical symbol recognition, the author sets a position and its importance as the image description happens to be close to human understanding language. The summary of the book (Chap. 8) is succinct and to the point […] I strongly believe the book has the potential to attract a large audience.
TC11 Datasets Repository (repost)
TC11 maintains a collection of datasets that can be found online in the TC11 Datasets Repository.
If you have new datasets (e.g., from competitions) that you wish to share with the research community, please use the online upload form. For questions and support, please contact the TC11 Dataset Curator (contact information is below).
Joseph Chazalon (TC11 Dataset Curator)
( joseph.chazalon@lrde.epita.fr )
Student Industrial Internship Opportunities (IAPR) (repost)
IAPR’s Industrial Liaison Committee is pleased to announce the opening of its Company Internship Brokerage List.
The web page lists internship opportunities for students at different levels of education and specialism. We expect many additional internship opportunities to be listed here as the community becomes more aware of the site.
IAPR Company Internship Brokerage List:
http://homepages.inf.ed.ac.uk/rbf/IAPR/INDUSTRIAL
Bob Fisher, Chair, IAPR Industrial Liaison Committee
( rbf@inf.ed.ac.uk )
Call for Contributions: To contribute news items, please send a short email to the editor, Andreas Fischer (andreas.fischer@hefr.ch). Contributions might include conference and workshop announcements/updates/reports, career opportunities, book reviews, or anything else of interest to the TC-11 community.
Subscription: This newsletter is sent to subscribers of the IAPR TC11 mailing list. To join the TC-11 mailing list, please click on this link: Join the TC-11 Mailing List. To manage your subscription, please visit the mailing list homepage: TC-11 Mailing List Homepage.