Crowdsourcing has been widely deployed to address challenges in the digital humanities, such as the transcription of old handwritten documents. Such an approach is especially useful for overcoming the current limits of automatic handwriting recognition techniques. Crowdsourcing allows workers to help experts extract and classify information when the workload is daunting. Yet it raises specific challenges related to the quality of the produced data. In this paper, we discuss data quality in a research project called CIRESFI, which aims at transcribing Italian Comedy financial archives through the RECITAL web platform. Finally, we propose some avenues for addressing these issues.
The automatic detection of changes in forests (deforestation, reforestation) relies on various data sets. This article reviews global and local data sets that can be used to evaluate land cover classification, change detection, segmentation, and image annotation tasks for the analysis of deforestation and reforestation phenomena.
In the last decade, political injunctions to curate and share research data have increased significantly. A survey conducted in 2017 at Rennes 2, a French Humanities and Social Sciences university, enabled us to question researchers' habits and representations in this matter, but also the term “data” itself. Contrary to the idea that data are given, which is implicit in the French word “données”, the notion of “data” is far from self-evident and actually proves to be complex and multifaceted. This article aims at showing that a triple redefinition and construction of research data is at stake in the discourses of researchers and institutional stakeholders: it operates at the epistemological, intellectual, and political levels. These concepts of data conflict with existing practices in the field.
In this article we address some of the problems arising with noisy and heterogeneous data in the digital humanities. We investigate the so-called mazarinades corpus, which gathers around 5,500 documents in French from the 17th century. First, we show that this set of documents is not strictly speaking a corpus, since its coverage has not been thoroughly defined. We then argue that it is possible to obtain interesting results even from such an incomplete, heterogeneous, and noisy dataset by strictly limiting the amount of preprocessing applied to the texts. Finally, we present results from a case study on document dating, where we aim to complete missing metadata in the mazarinades corpus. We exploit a method based on character string analysis which is robust to noisy data and can even take advantage of this noise to improve the quality of the results.
Over the last decades, the increasing use of information systems has resulted in an exponential growth of textual data. Although the volume dimension of these textual data has largely been addressed, their heterogeneity remains a challenge for the scientific community. Managing heterogeneity in data offers many opportunities through access to richer information. In our work, we design a process for mapping heterogeneous textual data based on their spatiality. In this article, we present the results returned by this process on data produced in Madagascar as part of the BVLAC project, led by CIRAD. Based on a set of four quality criteria, we obtain a good spatial correspondence between these documents.
In a very short time (1999-present), data warehouse (DW) technology has gone through all the phases of a technological product's life cycle: introduction on the market, growth, maturity, and decline, the latter signaled by the appearance of Big Data. In the big data landscape, the arrival of Linked Open Data (LOD) transforms the Big Data threat into an opportunity for DWs, because LOD brings added value and knowledge not found in the internal sources feeding a DW. However, taking LOD into account increases the variety of sources, which must be managed effectively. In this paper, we present a new value- and variety-driven approach to DW design, which we apply to a case study in the Humanities and Social Sciences (SHS) domain.