ICDAR 2019 Competition on Table Detection and Recognition


The competition consists of two tasks:

  1. Task 1: Table Region Detection

  2. Task 2: Table Recognition

The ICDAR 2019 cTDaR evaluates two aspects of table analysis: table detection and table recognition. The objective is to detect the table regions within a document and to correctly recognize the table structure. The participating methods will be evaluated on a modern dataset and, additionally, on archival documents containing printed and handwritten tables.

The dataset consists of modern and archival documents in various formats, including document images and born-digital formats such as PDF. The annotations cover the table entities and cell entities in a document; nested tables are not dealt with. We gathered 1000 modern and 1000 archival documents as the test dataset for the table region detection task, and 80 documents as the test dataset for the table recognition task (see Figure 1).

Figure 1: Example of table dataset

A detailed description will be announced shortly, together with the input format.

Regarding the evaluation of table detection, most current work uses precision and recall to measure experimental results. There are two well-known metrics for evaluating the performance of algorithms: (i) the metric based on table regions [1], and (ii) the metric based on text regions [2],[3]. We choose metric (i) to evaluate the performance of table region detection and apply metric (ii) to evaluate that of table recognition. Based on these measures, the overall performance of the various algorithms can be compared.

  1. Metric for table region detection task

    The task is evaluated with Intersection over Union (IoU) [1], which measures whether a table region detected by a participant is correctly located. Let A denote the region detected by a participant and B denote the corresponding region described in the groundtruth file. The IoU is calculated as follows:

    $$\mathrm{IoU}=\frac{|A\cap B|}{|A|+|B|-|A\cap B|}$$

    Average Precision (AP) is the metric used to evaluate the task. The precision/recall curve is computed from a method’s ranked output. Recall is defined as the proportion of all positive examples ranked above a given rank; precision is the proportion of all examples above that rank which are from the positive class. The AP summarizes the shape of the precision/recall curve and is defined as the mean precision at a set of eleven equally spaced recall levels [0, 0.1, 0.2, ..., 1]. A minimal sketch of this computation is given after this list.


  2. Metric for table recognition task

    First, the structure of a table is defined as a matrix of cells. The groundtruth provides a list of row bounding boxes, a list of column bounding boxes, and, for each cell, its bounding box, its textual content, and its start and end row and column positions. We propose the following metric:

    1. Cell adjacency relation-based table structure evaluation [2]

      Blank cells are not represented in the grid. A benefit of such a representation is that each cell is independent of what has previously occurred in the table definition. To compare two cell structures, we use the following method: for each table region we generate a list of adjacency relations between each content cell and its nearest neighbour in the horizontal and vertical directions. No adjacency relations are generated between blank cells, or between a blank cell and a content cell. This 1-D list of adjacency relations can be compared to the groundtruth using precision and recall measures, as shown in Figure 2. If both cells are identical and the direction matches, the relation is marked as correctly retrieved; otherwise it is marked as incorrect. A minimal sketch of this comparison is given after this list.

      Figure 2: Comparison of an incorrectly detected cell structure with the groundtruth [2]

    2. To be Added.

  3. We will also release a number of tools to enable the participants to automatically compare their results to the groundtruth.
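As an illustration of the detection metric, the following Python sketch computes the IoU of two axis-aligned boxes and the eleven-point AP over a ranked list of detections. The box format (x1, y1, x2, y2), the single IoU threshold, and the greedy matching policy are assumptions made for this sketch; the official evaluation tools may differ in detail.

    # Illustrative sketch only; not the official cTDaR evaluation tool.
    def iou(a, b):
        """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    def eleven_point_ap(detections, groundtruth, iou_threshold=0.6):
        """detections: (score, box) pairs over the test set; groundtruth: list of boxes."""
        detections = sorted(detections, key=lambda d: d[0], reverse=True)
        matched = [False] * len(groundtruth)
        tp = fp = 0
        precisions, recalls = [], []
        for _, box in detections:
            # greedily match each detection to the best still-unmatched groundtruth box
            ious = [0.0 if matched[j] else iou(box, g) for j, g in enumerate(groundtruth)]
            best = max(range(len(groundtruth)), key=lambda j: ious[j]) if groundtruth else -1
            if best >= 0 and ious[best] >= iou_threshold:
                matched[best] = True
                tp += 1
            else:
                fp += 1
            precisions.append(tp / (tp + fp))
            recalls.append(tp / len(groundtruth))
        # mean of the maximum precision reached at the recall levels 0, 0.1, ..., 1
        levels = [i / 10 for i in range(11)]
        return sum(max([p for p, r in zip(precisions, recalls) if r >= level], default=0.0)
                   for level in levels) / len(levels)

In this sketch, detections from all test documents would be pooled into a single ranked list before the AP is computed.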
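In the same spirit, the sketch below generates adjacency relations from a list of cells and compares two such lists with precision and recall. The cell tuple (content, start_row, end_row, start_col, end_col) is an assumed representation based on the description above, not the official groundtruth format.

    # Illustrative sketch only; the official format and tools may differ.
    def adjacency_relations(cells):
        """Relations between each content cell and its nearest content neighbour
        to the right (horizontal) and below (vertical); blank cells are ignored."""
        content = [c for c in cells if c[0].strip()]
        relations = set()
        for a in content:
            # nearest neighbour to the right: overlapping row span, smallest start column
            right = [b for b in content if b[3] > a[4] and not (b[2] < a[1] or b[1] > a[2])]
            if right:
                relations.add((a[0], min(right, key=lambda c: c[3])[0], "horizontal"))
            # nearest neighbour below: overlapping column span, smallest start row
            below = [b for b in content if b[1] > a[2] and not (b[4] < a[3] or b[3] > a[4])]
            if below:
                relations.add((a[0], min(below, key=lambda c: c[1])[0], "vertical"))
        return relations

    def structure_scores(detected_cells, groundtruth_cells):
        det = adjacency_relations(detected_cells)
        gt = adjacency_relations(groundtruth_cells)
        correct = len(det & gt)
        precision = correct / len(det) if det else 0.0
        recall = correct / len(gt) if gt else 0.0
        return precision, recall

A relation counts as correctly retrieved only if both cell contents and the direction match, mirroring the criterion described above.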

Release of website and samples: March 1, 2019
Release of the training dataset: March 10, 2019
Release of the test dataset and evaluation tools: March 25, 2019
Deadline for result submission: April 20, 2019
Release of the annotations of the test dataset: April 30, 2019

[1] L. Gao, X. Yi, Z. Jiang, L. Hao and Z. Tang, “ICDAR 2017 POD Competition,” in Proc. ICDAR, 2017, pp. 1417–1422.

[2] M. C. Göbel, T. Hassan, E. Oro and G. Orsi, “ICDAR 2013 Table Competition,” in Proc. ICDAR, 2013, pp. 1449–1453.

[3] A. C. e Silva, “Metrics for evaluating performance in document analysis: application to tables,” IJDAR, vol. 14, no. 1, pp. 101–109, 2011.

Hervé Déjean, Naver Labs Europe, France
herve.dejean@naverlabs.com
Jean-Luc Meunier, Naver Labs Europe, France
jean-luc.meunier@naverlabs.com
Florian Kleber, Computer Vision Lab, TU Wien, Austria
kleber@cvl.tuwien.ac.at
Eva Lang, Archiv des Bistums Passau, Germany
eva.lang@ieee.org
Liangcai Gao, Institute of Computer Science & Technology, Peking University, China
Document Image Analysis and Recognition Technical Committee, China Society of Image and Graphics (DIAR-CSIG)
glc@pku.edu.cn
Yilun Huang, Institute of Computer Science & Technology, Peking University, China
huangyilun@pku.edu.cn
Yu Fang, State Key Laboratory of Digital Publishing Technology, Founder Group Co. LTD., China
fangyu@founder.com

If you have any queries, please contact us at the following email address: cTDAR@cvl.tuwien.ac.at