Ing. Jan Slifka

Publications

The FAIR Data Point: Interfaces and Tooling

Authors
Benhamed, O.M.; Burger, K.; Kaliyaperumal, R.; Bonino da Silva Santos, L.O.; Suchánek, M.; Slifka, J.; Wilkinson, M.D.
Year
2022
Published
Data Intelligence. 2022, 1-18. ISSN 2641-435X.
Type
Article
Annotation
While the FAIR Principles do not specify a technical solution for ‘FAIRness’, it was clear from the outset of the FAIR initiative that it would be useful to have commodity software and tooling that would simplify the creation of FAIR-compliant resources. The FAIR Data Point is a metadata repository that follows the DCAT(2) schema, and utilizes the Linked Data Platform to manage the hierarchical metadata layers as LDP Containers. There has been a recent flurry of development activity around the FAIR Data Point that has significantly improved its power and ease-of-use. Here we describe five specific tools—an installer, a loader, two Web-based interfaces, and an indexer—aimed at maximizing the uptake and utility of the FAIR Data Point.
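As an illustration of the kind of metadata a FAIR Data Point serves, the sketch below builds a minimal DCAT-style catalogue record with rdflib in Python. The URIs and titles are invented for the example, and the snippet is not the FAIR Data Point's own API; it only suggests the shape of a DCAT record that such a repository would expose.

```python
# Minimal, illustrative sketch of a DCAT-style metadata record such as a
# FAIR Data Point exposes. The URIs and titles below are invented examples.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

catalog = URIRef("https://example.org/fdp/catalog/demo")   # hypothetical URI
dataset = URIRef("https://example.org/fdp/dataset/demo")   # hypothetical URI

g.add((catalog, RDF.type, DCAT.Catalog))
g.add((catalog, DCTERMS.title, Literal("Demo catalogue")))
g.add((catalog, DCAT.dataset, dataset))

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Demo dataset")))
g.add((dataset, DCTERMS.description, Literal("Example dataset description.")))

print(g.serialize(format="turtle"))
```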

User Interface Modelling Languages for Normalised Systems: Systematic Literature Review

Year
2022
Published
Information Systems and Technologies. Springer, Cham, 2022. p. 349-358. ISSN 2367-3370. ISBN 978-3-031-04828-9.
Type
Proceedings paper
Annotation
Normalised Systems Theory provides a theoretical foundation for building software that can accommodate change over time. The NSX company has built an advanced development platform for creating Normalised Systems in practice, covering everything from modelling tools to implementation. However, the platform lacks support for modelling user interfaces, so any non-default requirements call for manual customisations, which can introduce combinatorial effects and thus harm evolvability. At the same time, research and development of modelling languages for user interfaces has been a continuous effort ever since software began to have user interfaces. In this study, we therefore aim to find recent UI modelling languages, define criteria of suitability for modelling the UI of Normalised Systems, and evaluate the languages against those criteria. The results can be used to implement UI modelling for Normalised Systems.

Laying the Foundation for Design System Ontology

Year
2020
Published
Trends and Innovations in Information Systems and Technologies. Springer, Cham, 2020. p. 778-787. ISSN 2194-5357. ISBN 978-3-030-45687-0.
Type
Proceedings paper
Annotation
There is a growing need for more client applications across different platforms that maintain a consistent appearance. Managing this usually requires a lot of tedious manual work. In this paper, we explored what a design system should include based on real-world needs, how to represent and formalise it using semantic web technologies to achieve evolvability and interoperability, and how to convert it into code automatically by leveraging Normalised Systems theory. Our solution already provides a foundation for an ontology representing the design system and a working prototype of a code generator that uses the ontology.
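To make the ontology-to-code idea concrete, the hypothetical sketch below stores a single design token as RDF triples and generates CSS from it with a SPARQL query. The ds: vocabulary, property names, and token values are assumptions made for illustration; they are not the ontology developed in the paper.

```python
# Hypothetical sketch: a design token stored as RDF triples and converted to
# CSS custom properties. The ds: vocabulary is invented for this example.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

DS = Namespace("https://example.org/design-system#")  # assumed vocabulary

g = Graph()
g.bind("ds", DS)

token = DS["primary-color"]
g.add((token, RDF.type, DS.ColorToken))
g.add((token, RDFS.label, Literal("primary-color")))
g.add((token, DS.hexValue, Literal("#0055aa")))

# Code generation step: query the graph and emit platform-specific code (CSS).
query = """
    PREFIX ds: <https://example.org/design-system#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?name ?value WHERE {
        ?token a ds:ColorToken ;
               rdfs:label ?name ;
               ds:hexValue ?value .
    }
"""
rules = [f"  --{name}: {value};" for name, value in g.query(query)]
print(":root {\n" + "\n".join(rules) + "\n}")
```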

Evolvable and Machine-Actionable Modular Reports for Service-Oriented Architecture

Year
2019
Published
Enterprise and Organizational Modeling and Simulation. Springer, Cham, 2019. p. 43-59. ISSN 1865-1348. ISBN 978-3-030-35645-3.
Type
Proceedings paper
Annotation
Independent and preferably atomic services that send messages to each other are a significant application of the Separation of Concerns principle, and standardised formats and protocols already exist that make such an implementation easy. In this paper, we go deeper and introduce evolvable and machine-actionable reports that can be sent between services. The design is not just a way of encoding reports and composing them together; it also links in semantics using technologies from the semantic web and ontology engineering, mainly JSON-LD and Schema.org. We demonstrate our design on the Data Stewardship Wizard project, where reports from evaluations are a crucial functionality, but thanks to its versatility and extensibility it can be used in any message-oriented software system or subsystem.
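As a rough illustration of what such a report might look like on the wire, the snippet below assembles a small JSON-LD document whose terms are mapped to Schema.org. The report structure, field names, and values are invented for the example and do not reproduce the report format defined in the paper.

```python
# Illustrative only: a small JSON-LD report message with terms mapped to
# Schema.org. The structure and values are invented, not the paper's format.
import json

report = {
    "@context": {
        "@vocab": "https://schema.org/",
        "sections": "hasPart",          # map our field to schema:hasPart
    },
    "@type": "Report",
    "name": "Evaluation report",        # hypothetical report title
    "dateCreated": "2019-05-01",
    "sections": [
        {
            "@type": "CreativeWork",
            "name": "Findability",
            "text": "All datasets have globally unique, resolvable identifiers.",
        }
    ],
}

print(json.dumps(report, indent=2))
```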

FAIR Convergence Matrix: Optimizing the Reuse of Existing FAIR-Related Resources

Authors
Pergl Šustková, H.; Pergl, R.; Slifka, J.
Year
2019
Published
Data Intelligence. 2020, 2(1-2), 158-170. ISSN 2641-435X.
Type
Article
Annotation
The FAIR Principles articulate the behaviors expected from digital artifacts that are Findable, Accessible, Interoperable and Reusable by machines and by people. Although by now widely accepted, the FAIR Principles by design do not explicitly consider actual implementation choices enabling FAIR behaviors. As different communities have their own, often well-established implementation preferences and priorities for data reuse, coordinating a broadly accepted, widely used FAIR implementation approach remains a global challenge. In an effort to accelerate broad community convergence on FAIR implementation options, the GO FAIR community has launched the development of the FAIR Convergence Matrix. The Matrix is a platform that compiles, for any community of practice, an inventory of their self-declared FAIR implementation choices and challenges. The Convergence Matrix is itself a FAIR resource, openly available, and encourages voluntary participation by any self-identified community of practice (not only the GO FAIR Implementation Networks). Based on patterns of use and reuse of existing resources, the Convergence Matrix supports the transparent derivation of strategies that optimally coordinate convergence on standards and technologies in the emerging Internet of FAIR Data and Services.

“Data Stewardship Wizard”: A Tool Bringing Together Researchers, Data Stewards, and Data Experts around Data Management Planning

Year
2019
Published
Data Science Journal. 2019, 18(1), 1-8. ISSN 1683-1470.
Type
Article
Annotation
The Data Stewardship Wizard is a tool for data management planning that is focused on getting the most value out of data management planning for the project itself rather than on fulfilling obligations. It is based on FAIR Data Stewardship, in which each data-related decision in a project acts to optimize the Findability, Accessibility, Interoperability and/or Reusability of the data. The background to this philosophy is that the first reuser of the data is the researcher themselves. The tool encourages the consulting of expertise and experts, can help researchers avoid risks they did not know they would encounter by confronting them with practical experience from others, and can help them discover helpful technologies they did not know existed. In this paper, we discuss the context and motivation for the tool, explain its architecture, and present key functions such as knowledge model evolvability and migrations, assembling data management plans, and metrics and evaluation of data management plans.