- 1 Project description
- 2 Work plan
- 2.1 Prototypical workflows for editing and data analysis
- 2.2 Capturing the originals
- 2.3 Metadata editor
- 2.4 OCR module
- 2.5 Synoptic editing tools
- 2.6 Data exports and online presentation
- 2.7 Semantic MediaWiki
- 2.8 Textual analysis interface
- 2.9 Versioning and archiving
- 3 Contact
- 3.1 Coordination
- 3.2 Partners at Würzburg University
- 3.3 External Partners
KALLIMACHOS brings together humanists, computer scientists and librarians in a regional digital humanities center. Existing collaborations and competencies at Würzburg University are complemented by new partnerships with the DFKI Kaiserslautern (OCR) and the University of Erlangen-Nürnberg (Linguistic Informatics). The establishment of the new hub is funded by the Federal Ministry of Education and Research as part of the e-Humanities funding programme.
Our main interest lies in the supervision and coordination of digital editions and in quantitative analysis via various text mining methods, e.g. stylometric analysis, topic modeling and named entity recognition. We seek to offer our partners the technical and social infrastructure needed to answer a broad range of research questions in the humanities with digital methods.
From a technical point of view, our work includes the development and provision of the required software components and the establishment of prototypical workflows to be incorporated into existing infrastructures. In this regard, we emphasize long-term availability, maintenance and archiving for the projects, portals and data in our care. In this way, KALLIMACHOS strives to build an integrated structure for research data in the humanities.
From a more social point of view, we also promote a constant exchange between regional and trans-regional projects in the digital humanities through annual conferences and workshops. By providing advice and training, we introduce experts and newcomers to new and exciting methods and research fields.
Prototypical workflows for editing and data analysis
Starting from our subprojects as use cases, we establish prototypical workflows for data acquisition and analysis in the humanities in a form comprehensible to our target audiences. In our workflow management system WüSyphus II, established tools can be linked into processing chains. Through internal and public training courses, these solutions are propagated to a broad audience in the digital humanities. The resulting best-practice implementations are integrated into the workflow, empirically validated in the context of our use cases and finally provided to the research community. Thus, the established workflows can easily be replicated with similar datasets.
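The chaining idea can be sketched minimally as follows; the function names are illustrative assumptions, not part of WüSyphus II:

```python
from functools import reduce
from typing import Callable


def chain(*steps: Callable) -> Callable:
    """Link individual processing steps into one workflow:
    the output of each step becomes the input of the next."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)


# A toy chain standing in for e.g. scan cleanup -> OCR -> normalization:
normalize = chain(str.strip, str.lower)
```

Because each step only has to agree on its input and output, a validated step can be reused unchanged in another project's chain.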
Not every subproject has to pass through all the stations of the WüSyphus II workflow system. If, for example, a new project already has access to high-quality scans of a literary corpus, there is no need to reproduce these scans. However, the establishment of an individual project workflow based on previously found solutions is mandatory for every project maintained by KALLIMACHOS. The following graph shows which "links" in our workflow chain are relevant for our current subprojects.
Capturing the originals
The Center for Digitisation is hosted by Würzburg University's Central Library and provides the technology and trained personnel necessary for new high-quality digitisations and re-digitisations alike. Even for usually troublesome cases, innovative solutions are at hand: for instance, our specially manufactured book cradle allows for the scanning of books that can only be opened to an angle of 60° or less, thus ensuring the proper conservation of the often highly valuable originals. For large-format posters, an innovative suction wall is available as well.
Metadata editor

The existing metadata editor at Würzburg University's Center for Digitisation allows for the centralized maintenance of a wide array of predefined metadata records for manuscripts, incunabula, more recent printed publications and graphics. In the course of developing our workflow management system WüSyphus II, extensive optimizations of the online performance and the user interface are planned. The upgraded metadata editor will also be able to handle additional categories of metadata, for example those needed to describe historical artifacts and other kinds of realia.
OCR module

Our OCR module provides an automated preprocessing system for the creation of digital text files. Two working groups, one at the DFKI Kaiserslautern and one at Würzburg University, develop and refine new and existing tools and software components to tap into texts that previously were not amenable to satisfactory OCR. The current focal point of these efforts is our use case Narragonien.
anyOCR: a self-learning OCR system
The DFKI coined the term anyOCR for an adaptable OCR method which, in contrast to established OCR systems based on the segmentation of atomic characters rather than coarser-grained units such as lines or paragraphs, can adapt to different requirements and to the specific problems of OCR for historical documents. Segmentation-free OCR methods based on sequence learning have already been applied to handwritten, diversely printed and historical documents; they recognize complete lines of text at once and achieve higher recognition rates than traditional segmentation-based OCR methods. However, to achieve satisfying results with these methods, a large amount of manually transcribed training material is needed. The generation of this so-called ground truth is time-consuming and expensive. Additionally, synthetically generating the required ground truth is not feasible in the domain of historical documents, as no representative texts are available.
To deal with the problem of missing ground truth data for sequence learning, the DFKI has developed the framework OCRoRACT based on the anyOCR method. Here, a conventional character-based OCR method is deployed to train an initial OCR model using individually recognized symbols. The resulting lines of text, which, in contrast to an actual ground truth, may contain errors, are then used to train the sequence learning model instead of a manually generated ground truth. By using contextual information, the system learns to correct the errors in this pseudo ground truth. An OCRoRACT system trained in this fashion on historical documents has proven able to deliver suitable recognition rates despite the lack of the required dictionaries.
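The bootstrapping loop described above can be sketched roughly as follows. This is a simplified illustration, not the actual OCRoRACT implementation: the recognizer interfaces and the confidence-based selection of training lines are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Line:
    image: bytes          # scanned line image
    text: str = ""        # current (pseudo) transcription
    confidence: float = 0.0


Recognizer = Callable[[bytes], Tuple[str, float]]  # image -> (text, confidence)


def bootstrap(lines: List[Line],
              char_ocr: Recognizer,
              train_seq_model: Callable[[List[Line]], Recognizer],
              threshold: float = 0.9,
              iterations: int = 2) -> Recognizer:
    """Iteratively train a sequence-learning recognizer on pseudo ground
    truth produced by a conventional character-based recognizer."""
    # 1. Initial pseudo ground truth from the segmentation-based recognizer.
    for line in lines:
        line.text, line.confidence = char_ocr(line.image)
    model = char_ocr
    for _ in range(iterations):
        # 2. Train the sequence model on the (possibly errorful) lines;
        #    keeping only high-confidence lines is an assumption here.
        training_set = [l for l in lines if l.confidence >= threshold]
        model = train_seq_model(training_set)
        # 3. Re-recognize all lines: contextual knowledge in the model
        #    corrects errors inherited from the pseudo ground truth.
        for line in lines:
            line.text, line.confidence = model(line.image)
    return model
```

Each iteration replaces the pseudo ground truth with the (better) output of the sequence model, so transcription quality can improve without any manual transcription.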
Printshop-specific character inventories
The OCR team at Würzburg University's Central Library accompanies and evaluates the development process at the DFKI with the help of existing tools stemming from the eMOP project (Franken+, Gamera, Tesseract). Using our specially developed tool Glyph Miner, specific letter inventories are compiled for historic printshops and coupled with a digital MUFI font. These inventories allow for the creation of printshop-specific training data for OCR, which can then be re-used to capture further texts set with the same types. With this printshop-specific approach, we already reach recognition rates of 93% and higher, which had not previously been achieved on comparable texts.
Synoptic editing tools
This module establishes a framework for online editing tools that enables users to view texts and images side by side for annotation and text-image linking. These editors can be tailored to different project-specific requirements. The resulting intuitively usable web-based editing tools require no deeper insight into XML or other contemporary text encoding formats, which has proven especially useful for the manual correction of OCR output. In concert with the user management system and the editorial infrastructure provided by the WüSyphus II workflow system, this allows for the organized inclusion of research assistants, students and even interested "laymen" in the editing process.
Data exports and online presentation
The annotated texts, images and additional data types will be transferable into various established export and interchange formats, depending on the individual project requirements. For instance, data exchange with the TextGrid Repository can be enabled through XML encoding conforming to the TEI standards. Beyond these export options, individual solutions for presenting data will be available for the project's web portal. In particular, the aforementioned framework for synoptic editors will also be reusable for building a synoptic viewer suitable for presenting references between images and texts: for instance, the scans used for the edition, the initial OCR text, manual transcriptions, localisations, annotations and metadata can be viewed, hidden or highlighted simultaneously.
Semantic MediaWiki

Based on Semantic MediaWiki (SMW), an open-source extension of the MediaWiki system (best known as the scaffolding of Wikipedia and many other wikis), an easily usable and quickly adaptable Web 3.0 component can be provided for the processing, structuring and presentation of various datasets. Thanks to MediaWiki's user management system and its automated versioning of changes, SMW is especially well suited for incorporating crowdsourcing into a project's workflow. For the transfer of data from the wiki environment to WüSyphus II, new interfaces and import routines are to be developed. For less demanding projects, SMW can also be used directly as a means of presenting data. The search and query options already incorporated in SMW are especially convenient for the implementation of primarily database-driven projects such as academic source catalogs.
Textual analysis interface
Building upon our textual analysis use cases, this module supports:
- The aggregation of a corpus of texts to be analyzed from the TextGrid Repository and WüSyphus II based on their metadata,
- The preparation of the chosen texts and their metadata for analysis,
- The analysis in UIMA and finally
- The incorporation of the results into TextGrid by mapping UIMA annotations back to TEI.
These steps can be customized and generalized for reuse in future projects. In the long term, even novices and "laymen" in the field of data analysis will be able to profit from automated analysis methods, which can, for instance, be used to recognize grammatical cases and structures or keywords in a text. As the data transfer format between the textual analysis module and the WüSyphus II workflow system, the CoNLL format is proposed.
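A minimal sketch of such a transfer format is given below; the column layout (index, token, lemma, tag) is an illustrative assumption, since CoNLL variants differ in their columns:

```python
from typing import List, Tuple

# One analyzed token: (surface form, lemma, tag) -- illustrative columns.
Token = Tuple[str, str, str]


def to_conll(sentences: List[List[Token]]) -> str:
    """Serialize analyzed sentences in a CoNLL-style layout: one token
    per line, tab-separated columns, a blank line after each sentence."""
    rows = []
    for sentence in sentences:
        for index, (token, lemma, tag) in enumerate(sentence, start=1):
            rows.append(f"{index}\t{token}\t{lemma}\t{tag}")
        rows.append("")  # sentence boundary
    return "\n".join(rows)


example = to_conll([[("Das", "das", "ART"),
                     ("Narrenschiff", "Narrenschiff", "NN")]])
```

Because the format is plain line-oriented text, both UIMA components and the workflow system can read and write it without sharing any library code.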
Versioning and archiving
A crucial and often neglected factor in the success of digital projects, not only in the humanities, is a reliable guarantee of the long-term reproducibility and reusability of the underlying data. For "living", i.e. continuously maintained and expanded, data collections and corpora, ensuring data security is of primary concern. To ensure proper versioning, Git-based systems are envisioned in addition to our wiki systems. Alongside the stable availability and versioning of datasets, methods for long-term archiving are to be implemented as well.
Coordination

- Dr. Hans-Günter Schmidt (Project director)
- Kerstin Kornhoff (Acquisition)
- Marion Friedlein (Acquisition)
- Regina Beitzinger (Acquisition)
- Almut Wenk (Acquisition)
- Tanja Altenhöfer (Acquisition)
- Jonathan Gaede (Wiki systems and subproject communication)
- Dr. Herbert Baier-Saip (System development and administration)
- Dipl.-Inform. Felix Kirchner (System development and OCR)
- Martin Gruner (Development, wiki systems and OCR)
- Markus Kinner (OCR and maintenance)
- Dipl.-Ing. Marco Dittrich (Scan technology, OCR and digitalisation)
- Ulf Weinmann (Image editing and digitalisation)
- Irmgard Götz-Kenner (Image editing and photography)
Partners at Würzburg University
- Prof. Dr. Fotis Jannidis
Project group Narragonien digital
Am Hubland, Bau 5
D-97074 Würzburg
Tel.: 0931 31-85681
- Prof. Dr. Brigitte Burrichter
Am Hubland, Bau 4
D-97074 Würzburg
Tel.: 0931 31-81679
- Prof. Dr. Joachim Hamm
- Christine Grundig M.A.
Project group Anagnosis
Lehrstuhl I (Gräzistik)
Residenzplatz 2 (Südflügel)
D-97070 Würzburg
- Prof. Dr. Dr. h.c. Michael Erler
- AR Dr. Holger Essler
- Vincenzo Damiani, M.A.
Project group Schulwandbilder digital
- Univ.-Prof. Dr. phil. habil. Andreas Dörpinghaus (Chair holder)
- Dr. phil. Ina Uphoff (Project director)
- Dipl. Päd. Eva Zimmer, M.A. (Vice project director)
Project group Identifikation von Übersetzern
Residenz - Südflügel
D-97070 Würzburg
Tel.: 0931 31 2778
- Prof. Dr. Dag Nikolaus Hasse
- Andreas Büttner, B.A.
- Jonathan Maier
Project group Romangattungen
- Prof. Dr. Fotis Jannidis
- Dipl.-Math. Lena Hettinger
Project group Romanfiguren
- Prof. Dr. Fotis Jannidis
- Prof. Dr. Frank Puppe
- Markus Krug, M.Sc.
External Partners

Tel.: +49 9131 85-29251
- Prof. Dr. Stefan Evert
- Thomas Proisl, M.A.
- Prof. Dr. Andreas Dengel