ESWC 2017 Tutorials & Workshops
Tutorial: How and Why Computers Read the Web
Estevam Hruschka Junior
Machine Reading systems aim to produce language-understanding technology that automatically processes text in a reasonable amount of time. This tutorial explores the idea of automatically reading the Web using Machine Reading techniques. Four of the most successful Machine Reading approaches to reading the Web (namely the DBpedia, YAGO, OIE (Open Information Extraction) and NELL projects) will be presented and discussed. The principles, the subtleties, and the current results of each approach will be addressed; online resources from each approach will be explored, and future directions of each project will be pointed out. DBpedia, YAGO, OIE and NELL are not the only research efforts focusing on "Reading the Web". They were selected for this tutorial because they represent different and highly relevant approaches to the problem, not because they are the only relevant ones. While focusing mainly on these four projects, the tutorial will also mention other independent contributions to the Read the Web idea as related work. In addition, two industrial projects, Google Knowledge Vault and IBM Watson, will be covered in summary form.
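The pattern-based extraction at the heart of Open Information Extraction can be illustrated with a deliberately simplified sketch. The single regular-expression pattern below is hypothetical and far cruder than what systems like OIE or NELL actually use; it only shows the general idea of mapping free text to (subject, relation, object) triples.

```python
import re

# Toy Open IE pattern: "X is a Y" -> (X, isA, Y).
# Real systems use many learned patterns, syntactic parses,
# and confidence scores; this is a one-pattern illustration.
ISA_PATTERN = re.compile(
    r"(?P<subj>[A-Z][\w ]*?) is an? (?P<obj>[\w ]+)\.?$"
)

def extract_isa(sentence):
    """Return an (subject, 'isA', object) triple, or None if no match."""
    m = ISA_PATTERN.match(sentence)
    return (m.group("subj"), "isA", m.group("obj")) if m else None

print(extract_isa("Berlin is a city."))  # ('Berlin', 'isA', 'city')
```

Scaling this idea up — many patterns, run over billions of pages, with coupled constraints between the extracted facts — is essentially what the projects above do.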
Tutorial: Modular Ontology Modeling with Ontology Design Patterns
Pascal Hitzler, Karl Hammar, Adila A. Krisnadhi, Agnieszka Lawrynowicz and Monika Solanki
Data sharing, integration and reuse remain key issues in today's information society. While the amounts of publicly available data on the Web, including linked data, keep growing, there is currently also a revival of the understanding that high-quality data organization principles, using ontologies, will be of rising importance. Ontology Design Patterns (ODPs) address the need for high-quality ontology modeling by providing reusable solutions to recurrent modelling problems: they have proved effective for learning ontology design, for improving the quality of ontologies, for decreasing the rate of errors commonly made by designers, and generally for the design of versatile and modular ontologies. Furthermore, they now have a significant history of usage in linked data publishing projects, which has allowed us to collect experiences and lessons learned, making methodological guidelines increasingly stable. This tutorial targets ontology designers, data publishers and practitioners (including generally skilled web users).
Tutorial: Link Discovery - Algorithms, Approaches and Benchmarks
Irini Fundulaki, Axel-Cyrille Ngonga Ngomo and Mohamed Ahmed Sherif
Link Discovery is a task of central importance when creating Linked Datasets. Three challenges need to be addressed when carrying out link discovery: First, the quadratic a-priori runtime complexity of this task demands the development of time-efficient approaches for linking large datasets. Second, the need for accuracy demands the development of generic approaches that can detect correct and complete sets of links. Third, the development of benchmarks that test the ability of instance matching techniques and tools is crucial for identifying and addressing technical difficulties and challenges in this domain. In this tutorial, we aim to help the audience when faced with all three challenges. First, we provide an overview of existing solutions for link discovery. Then, we look into some of the state-of-the-art algorithms for the rapid execution of link discovery tasks. In particular, we focus on algorithms which guarantee result completeness. We also present algorithms for the detection of complete and correct sets of links with a focus on supervised, active and unsupervised machine learning algorithms. Last, we discuss existing instance matching benchmarks for Linked Data. We will conclude the tutorial by providing hands-on experience with one of the most commonly used link discovery frameworks, LIMES.
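To make the runtime challenge concrete, here is a minimal Python sketch (not taken from the tutorial; all names are illustrative) contrasting the quadratic baseline with a simple token-based blocking strategy. Because a Jaccard score above zero requires at least one shared token, this particular blocking scheme loses no links, which illustrates the kind of result-completeness guarantee mentioned above.

```python
from itertools import product

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercase token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def naive_link_discovery(source, target, threshold=0.5):
    """Quadratic baseline: compare every source/target pair."""
    return [(s, t) for s, t in product(source, target)
            if similarity(s, t) >= threshold]

def blocked_link_discovery(source, target, threshold=0.5):
    """Token blocking: only compare entities sharing a token,
    reducing the number of pairwise comparisons without
    losing any link whose Jaccard score is above zero."""
    index = {}
    for t in target:
        for tok in set(t.lower().split()):
            index.setdefault(tok, set()).add(t)
    links = set()
    for s in source:
        candidates = set().union(*(index.get(tok, set())
                                   for tok in set(s.lower().split())))
        links.update((s, t) for t in candidates
                     if similarity(s, t) >= threshold)
    return sorted(links)

source = ["Berlin City", "Paris"]
target = ["City of Berlin", "Paris France", "London"]
print(naive_link_discovery(source, target))
```

Frameworks like LIMES use far more sophisticated space-tiling and filtering schemes, but the trade-off they navigate — fewer comparisons without dropping correct links — is the one shown here.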
Tutorial: Linked Open Data for Semantics-aware Recommender Systems
Pierpaolo Basile, Cataldo Musto, Tommaso Di Noia and Paolo Tomeo
Linked Open Data provides a valuable source of information for making textual data machine-readable. Such information can be tremendously useful for data-intensive applications such as Recommender Systems (RS), since both the preferences of users and the representation of items can be improved by using such data. However, only a few applications really exploit its potential power. In this tutorial, we show how the information available in the Linked Open Data cloud can be used to develop a particular class of Recommender Systems called "semantics-aware". We will show several methodologies for introducing semantics into recommender systems, ranging from entity linking to distributional semantics models, and we will describe how such representations are used to provide users with personalized suggestions of items that may be of interest to them. Moreover, we will also sketch some preliminary work on using Linked Open Data to generate personalized explanations supporting the recommendations.
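As an illustration of how LOD-derived item descriptions can drive recommendations, the following is a minimal content-based sketch. The feature sets mimic DBpedia category URIs but are invented for the example, and the ranking function is our own simplification, not a method from the tutorial.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user_liked, catalog, features, k=2):
    """Content-based recommendation over LOD-style item features:
    rank unseen items by average feature overlap with liked items."""
    scores = {}
    for item in catalog:
        if item in user_liked:
            continue
        scores[item] = sum(jaccard(features[item], features[liked])
                           for liked in user_liked) / len(user_liked)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical features in the style of DBpedia categories.
features = {
    "Matrix":    {"dbc:Science_fiction", "dbc:Action_films"},
    "Inception": {"dbc:Science_fiction", "dbc:Thrillers"},
    "Notebook":  {"dbc:Romance_films"},
}
print(recommend({"Matrix"}, list(features), features))
```

Replacing the hand-written feature sets with categories, types and abstracts fetched from the LOD cloud is precisely the step that makes such a recommender "semantics-aware".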
Tutorial: Getting Started With Knowledge Graphs
Knowledge graphs are large networks of entities and their semantic relationships. They are a powerful tool that changes the way we do data integration, search, analytics, and context-sensitive recommendations. Knowledge graphs have been successfully utilized by the large Internet tech companies, with prominent examples such as the Google Knowledge Graph. Open knowledge graphs such as Wikidata make community-created knowledge freely accessible. In this tutorial, we will cover the fundamentals of knowledge graphs and also present specific examples of application areas. We explain how organizations can create their own knowledge graphs and utilize them in novel applications. In hands-on exercises we will create a small, but real knowledge graph, covering the entire lifecycle including integration and interlinking of existing sources, authoring, visualization, querying and search. The practical hands-on examples will be performed using the metaphacts Knowledge Graph platform.
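The querying step of that lifecycle can be illustrated with a minimal in-memory triple store — a toy stand-in for a real platform such as metaphacts or a SPARQL engine; the class and method names below are our own.

```python
class KnowledgeGraph:
    """Minimal in-memory triple store.
    `None` in a query position acts as a wildcard,
    mimicking a variable in a SPARQL triple pattern."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

kg = KnowledgeGraph()
kg.add("Wikidata", "type", "KnowledgeGraph")
kg.add("GoogleKG", "type", "KnowledgeGraph")
kg.add("Wikidata", "license", "CC0")
print(kg.query(p="type"))  # all entities typed as KnowledgeGraph
```

A real knowledge graph platform adds persistence, indexing, full SPARQL, and visualization on top of this basic pattern-matching core.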
Tutorial: Rule-based Processing of Dynamic Linked Data
Andreas Harth and Tobias Käfer
The goal of the tutorial is to introduce, motivate, and detail techniques for carrying out rule-based data processing on web data. Inspired by the growth in data available adhering to the Linked Data principles, our tutorial aims to educate researchers and practitioners about how to access and process such data using rules. We start by explaining how to access data and follow links, while applying reasoning over the collected data to answer queries. We also explain how such queries can be processed over changing data. We then focus on how to actually change data accessible via a Read-Write Linked Data interface, given that many emerging areas, such as the Linked Data Platform, Social Linked Data, or the Web of Things, require write access. We make the connection from the presented topics to related work in the area of Linked Data, such as link traversal query answering approaches. We also point out related work in the area of dynamical systems, especially around Read-Write Linked Data, where practitioners are currently building systems but little work has been done on the underlying principles. We conclude with a set of still unresolved research problems and open issues.
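The core idea of applying rules over collected Linked Data can be sketched as naive forward chaining to a fixpoint. This is a toy illustration with a single hand-written rule, not the tutorial's actual system.

```python
def apply_rules(triples, rules):
    """Naive forward chaining: apply each rule until no new facts appear."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in list(rule(facts)):
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

def transitive_subclass(facts):
    """Rule: (?a subClassOf ?b) and (?b subClassOf ?c) => (?a subClassOf ?c)."""
    for (a, p1, b) in list(facts):
        if p1 != "subClassOf":
            continue
        for (b2, p2, c) in list(facts):
            if p2 == "subClassOf" and b2 == b:
                yield (a, "subClassOf", c)

data = {("Dog", "subClassOf", "Mammal"),
        ("Mammal", "subClassOf", "Animal")}
closed = apply_rules(data, [transitive_subclass])
print(("Dog", "subClassOf", "Animal") in closed)  # True: inferred fact
```

In the Read-Write Linked Data setting discussed in the tutorial, the input facts would be fetched by dereferencing URIs, and derived facts could in turn trigger writes back to the data sources.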
Workshop: Querying the Web of Data (QuWeDa 2017)
Muhammad Saleem, Ricardo Usbeck, Ruben Verborgh and Axel-Cyrille Ngonga Ngomo
The size and growth of Linked Open Data (LOD) on the Web open new challenges for querying such a massive amount of publicly available data. LOD datasets are available through various interfaces, such as data dumps, SPARQL endpoints, Triple Pattern Fragments, etc. In addition, various sources produce streaming data. Efficiently querying these sources is of central importance for the scalability of Linked Data and Semantic Web technologies. The trend of publicly available and interconnected data is shifting the focus of Web technologies towards new paradigms of Linked Data querying. To exploit the massive amount of LOD data to its full potential, people should be able to query and combine this data easily and effectively. This workshop at the Extended Semantic Web Conference (ESWC) seeks original articles describing theoretical and practical methods and techniques for fostering, querying, and consuming the Data Web.
Workshop: Managing the Evolution and Preservation of the Data Web - MEPDaW 2017
Jeremy Debattista, Javier D. Fernández and Jürgen Umbrich
This workshop targets one of the emerging and fundamental problems in the Semantic Web, specifically the preservation of evolving linked datasets. This topic is of particular relevance to ESWC since it raises awareness of the many research challenges for preserving and managing dynamic linked datasets. Fostering active usage of such evolving datasets requires further research advances on topics such as storage, synchronisation, change representation and querying over evolving graphs.
Apart from researchers and practitioners, the target audience comprises data publishers and consumers. Publishers will benefit from attending this workshop by learning about ways and best practices to publish their evolving datasets. Consumers benefit by being able to discuss their expectations, requirements and experiences with current systems for handling and processing changing datasets in efficient ways.
The third edition of this workshop is organised differently from the previous two. We invited a number of experts in the field of Linked Data and Data Evolution and Preservation to suggest and advise on the topics our workshop should cover this year. In addition to focusing on the verticals of the workshop themes, we also aim to widen the audience by considering a broader view of interdisciplinary themes. The last edition had around 20 participants and a keynote by Dr. Axel Polleres (Vienna University of Economics), with 3 full papers and 1 industry paper presented. This year, we plan a keynote by Prof. Dr. Maria-Esther Vidal (Universidad Simon Bolivar, Fraunhofer IAIS) and expect around 25-30 participants.
Workshop: SALAD – Services and Applications over Linked APIs and Data
Maria Maleshkova, Ruben Verborgh, Laura Daniele and Felix Leif Keppmann
The World Wide Web has undergone significant changes, developing from a collection of a few interlinked static pages into a global ubiquitous platform for sharing, searching and browsing dynamic and customizable content in a variety of different media formats. This transformation was triggered by the ever-growing number of users and websites and continues to be supported by current developments such as the increased use and popularity of Linked Data and Web APIs. Unfortunately, despite some initial efforts and progress towards integrated use, these two technologies remain mostly disjoint in terms of developing solutions and applications. To this end, SALAD aims to explore the possibilities of facilitating a better fusion of Web APIs and Linked Data, thus enabling the harvesting and provisioning of data through applications and services on the Web. In particular, we focus on investigating how both static and dynamic resources (for example, sensor data or streams), exposed via interfaces on the Web, can be used together with semantic data as a means of enabling shared use and providing a basis for developing rich applications. This year we encourage the submission of research work that employs Linked Data and Web API solutions to address challenges in the area of the Internet of Things (IoT).
Workshop: Semantic Web solutions for large-scale biomedical data analytics (SeWeBMeDA)
Ali Hasnain, Amit Sheth, Michel Dumontier and Dietrich Rebholz-Schuhmann
The life sciences domain has been an early adopter of linked data, and a considerable portion of the Linked Open Data cloud is composed of life-sciences data sets. The deluge of incoming biomedical data, partially driven by high-throughput gene sequencing technologies, is a key driver of these developments. The available data sets require integration according to international standards and large-scale distributed infrastructures, call for specific techniques for data access, and offer data analytics benefits for decision support.
Especially in combination, Semantic Web and Linked Data technologies promise to enable the processing of large as well as semantically heterogeneous data sources and the capture of new knowledge from them. In this workshop we invite papers on life sciences and biomedical data processing, as well as its amalgamation with Linked Data and Semantic Web technologies for better data analytics, knowledge discovery and user-targeted applications. These contributions should provide useful information for the knowledge acquisition research community as well as the working data scientist.
Workshop: Scientometrics Workshop
Sabrina Kirrane, Aliaksandr Birukou, Paul Buitelaar, Javier D. Fernández, Anna Lisa Gentile, Paul Groth, Pascal Hitzler, Ioana Hulpus, Krzysztof Janowicz, Elmar Kiesling, Andrea Nuzzolese, Francesco Osborne, Axel Polleres, Marta Sabou and Harald Sack
Scientometrics is a field of research that analyses and measures science and technology research and innovation. When it comes to scientometrics, the Semantic Web community has both much to offer and much to gain. On the one hand, we are very well positioned to integrate and analyse the complex data that is needed to push the field forward. On the other, we need to demonstrate the impact that we as researchers and as the Semantic Web community have on academia, business and society. With this workshop we have identified a critical mass of people interested in scientometrics and aim to build a community whose function is to identify research challenges and opportunities, align our research efforts, and encourage the broader Semantic Web community to apply their existing tools and technologies to the field of scientometrics.
Workshop: 2nd RDF Stream Processing Workshop
Jean-Paul Calbimonte, Minh Dao-Tran, Daniele Dell'Aglio and Danh Le Phuoc
Data streams are an increasingly prevalent source of information in a wide range of domains and applications, e.g. environmental monitoring, disaster response, or smart cities. The RDF model is based on a traditional persisted-data paradigm, where the focus is on maintaining a bounded set of data items in a knowledge base. This paradigm does not fit the case of data streams, where data items flow continuously over time, forming unbounded sequences of data. To date, several stream processing engines have been proposed to enable such applications, and the Semantic Web community has been active in this area. However, each engine has defined its own extensions to RDF and its query language for modelling streaming data.
In this context, the W3C RDF Stream Processing (RSP) Community Group has taken on the task of exploring the existing technical and theoretical proposals that incorporate streams into the RDF model and its query language, SPARQL. To this end, the RSP Group is fostering a community effort to define a common but extensible core model for RDF stream processing. This core model can serve as a starting point for RSP engines to talk to each other and interoperate.
The goal of this workshop is to bring together interested members of the community to:
- (1) Demonstrate their latest advances in stream processing systems for RDF;
- (2) Foster discussion for proposing novel RDF stream processing techniques and language extensions, including Complex Event Processing as well as stream reasoning and machine learning over streams;
- (3) Involve and attract people from related research areas to actively participate in the RSP Community Group;
- (4) Discuss and propose usage of RSP in different application domains, including IoT, smart cities, social networks and personalized health, among others;
- (5) Share implementation experience of RSP engines via publication of data streams, and hackathon activities.
Each of these objectives shall intensify interest and participation in the community to ultimately broaden its impact and allow for a wider use of RSP technologies.
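The unbounded-sequence problem described above is usually tamed with windows over the stream. The following is a minimal sketch of a time-based sliding window over timestamped triples, with a continuous query against the window's contents; it is illustrative only, and real RSP engines (e.g. C-SPARQL, CQELS) offer much richer window and query semantics.

```python
from collections import deque

class StreamWindow:
    """Sliding time window over a timestamped RDF-style triple stream.
    Only triples newer than `width` time units are retained."""

    def __init__(self, width):
        self.width = width
        self.buffer = deque()  # (timestamp, triple), in arrival order

    def push(self, timestamp, triple):
        self.buffer.append((timestamp, triple))
        # Evict triples that have fallen out of the window.
        while self.buffer and self.buffer[0][0] <= timestamp - self.width:
            self.buffer.popleft()

    def query(self, predicate):
        """Continuous query: current window contents with this predicate."""
        return [t for _, t in self.buffer if t[1] == predicate]

w = StreamWindow(width=10)
w.push(1, ("sensor1", "temperature", 21))
w.push(5, ("sensor2", "temperature", 19))
w.push(14, ("sensor1", "humidity", 40))   # evicts the t=1 triple
print(w.query("temperature"))
```

Defining exactly this kind of window semantics — when triples enter and leave, and what a query observes — in a common, engine-independent way is what the RSP core model is about.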
Workshop: Third International Workshop at ESWC on Emotions, Modality, Sentiment Analysis and the Semantic Web
Mauro Dragoni and Diego Reforgiato
As the Web rapidly evolves, people are becoming increasingly enthusiastic about interacting, sharing, and collaborating through social networks, online communities, blogs, wikis, and the like. In recent years, this collective intelligence has spread to many different areas, with particular focus on fields related to everyday life such as commerce, tourism, education, and health, causing the size of the social Web to expand exponentially.
This has raised growing interest both within the scientific community, by providing it with new research challenges, and in the business world, as applications in areas such as marketing and financial prediction stand to gain remarkable benefits.
This workshop intends to be a discussion forum gathering researchers and industry practitioners from Cognitive Linguistics, NLP, the Semantic Web, and related areas to present their ideas on the relation between the Semantic Web and the study of emotions and modalities.
Workshop: Applications of Semantic Web technologies in Robotics - ANSWER 17
Emanuele Bastianelli, Mathieu d'Aquin and Daniele Nardi
ANSWER is a half-day workshop on the use of Semantic Web formalisms and technologies in robotic applications. It offers an opportunity to trigger and strengthen the dialogue between the Semantic Web and Robotics communities, and to compare and debate the problems that have so far been tackled separately by two communities working on overlapping topics.
Workshop: 1st Workshop on Semantics and Distributed Ledgers
Luis Daniel Ibáñez, Elena Simperl, Fabien Gandon and John Domingue
The workshop aims to explore emerging research topics of interest in applying Semantic Web technologies to the emerging domain of Distributed Ledgers. Distributed Ledgers are being used in a variety of scenarios where independent DLs are deployed in response to different requirements of trust, privacy and decentralisation, raising concerns about yet another rise of data silos and prompting action to advance their interoperability.
The main objective of this workshop is to stimulate and foster active exchange, interaction and comparison of approaches on vocabularies, ontologies and methods for the semantic enrichment, representation, management and querying of both DLs and the data and contracts they store.
Workshop: Enabling Decentralised Scholarly Communication
Sarven Capadisli, Amy Guy and David De Roure
The Web is increasingly being used to enable fair access to scholarly work, but bringing this to its full potential requires understanding of, and change in, a number of interrelated areas. Platforms for authoring, publishing, and linking research are only one part of a bigger picture, which also includes feedback and commentary, reputation and impact, searching and linking across projects and domains, and long-term archival of work.
This workshop focuses on how academic researchers can leverage the Web as a technical platform for academic publishing, using existing Web technologies and standards, as well as take advantage of contemporary cultural norms around interacting, sharing and linking through social media. We aim to bring together researchers in Web science and related fields to explore how, and discuss the latest efforts and challenges in creating coherent and interoperable solutions.
We invite contributions with strong emphasis on interoperability, decentralisation, and open access.
Workshop: 3rd international workshop on Semantic Web for Scientific Heritage, SW4SH 2017
Catherine Faron Zucker, Isabelle Draelants, Alexandre Monnin and Arnaud Zucker
The purpose of the SW4SH workshop series is to provide a forum for discussion about methodological approaches to the specificity of annotating "scientific" texts (in the wide sense of the term, including disciplines such as history, architecture, or rhetoric), and to support a collaborative reflection on possible guidelines or specific models for building historical ontologies. A key goal of the workshop, focusing on research issues related to pre-modern scientific texts, is to emphasize, through concrete projects and up-to-date investigation in the digital humanities, the benefit of multidisciplinary research for creating and operating on relevantly structured data. One of the main interests of the very topic of pre-modern historical data management lies in historical semantics, and the opportunity to jointly consider how to identify and express lexical, theoretical and material evolutions.
Workshop: LDQ: 4th Workshop on Linked Data Quality
Amrapali Zaveri, Anisa Rula, Anastasia Dimou and Wouter Beek
The focus of this workshop is to reveal novel methodologies and frameworks in assessing, monitoring, maintaining, and improving the quality of Linked Data as well as to highlight tools and user interfaces which can effectively assist in its assessment and repair. In addition, the workshop seeks methodologies that help to identify the current impediments in building real-world Linked Data applications leveraging data and ontology quality, as well as use cases that reveal success stories or aspects that have been neglected so far. The benefits of addressing Linked Data quality issues will not only help in detecting inherent data quality problems currently plaguing Linked Data, but also provide the means to fix these problems and maintain the quality in the long run.
Workshop: Semantic Deep Learning, SemDeep-17
Georg Heigold, Dagmar Gromann and Thierry Declerck
This workshop aims to bring together Semantic Web resources and deep learning. Semantic Web technologies and deep learning share the goal of creating intelligent artifacts that emulate human capacities such as reasoning, validating, and predicting. Both fields have considerably impacted data and knowledge analysis as well as representation. Deep learning comprises a set of machine learning algorithms that learn data representations by means of transformations with multiple processing layers; these algorithms have frequently been applied to feature learning tasks such as morphological tagging or speaker verification. Semantic Web technologies and knowledge representation boost the reuse and sharing of knowledge in a structured and machine-readable fashion. Semantic resources, such as Wikidata or DBpedia, and semantic methods have been successfully applied to semantic data mining.
Machine learning has been successfully applied to (semi-automated) ontology learning, ontology alignment, ontology annotation, duplicate recognition, and ontology prediction. Ontologies have been repeatedly utilized as background knowledge to machine learning tasks. Hybrid approaches, such as knowledge graph embeddings, hold the potential of improving the effectiveness of knowledge-related tasks. This workshop offers a platform for discussing such hybrid approaches and for fostering future collaborations between those two fields.
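One hybrid approach mentioned above, knowledge graph embeddings, can be illustrated with a TransE-style score: a triple (h, r, t) is considered plausible when the tail embedding lies close to head + relation. The vectors below are hand-crafted for illustration, not trained, and the example is ours rather than the workshop's.

```python
import math

def transe_score(h, r, t):
    """TransE-style plausibility: negative L2 distance ||h + r - t||.
    Scores closer to 0 indicate a more plausible triple."""
    return -math.sqrt(sum((hi + ri - ti) ** 2
                          for hi, ri, ti in zip(h, r, t)))

# Toy hand-crafted 2-d embeddings (illustrative, not trained).
entity = {"Paris":  [1.0, 0.0],
          "France": [1.0, 1.0],
          "Berlin": [0.0, 0.0]}
relation = {"capitalOf": [0.0, 1.0]}

plausible = transe_score(entity["Paris"], relation["capitalOf"], entity["France"])
implausible = transe_score(entity["Berlin"], relation["capitalOf"], entity["France"])
print(plausible > implausible)  # True: Paris capitalOf France scores higher
```

Training replaces the hand-crafted vectors with ones learned from known triples, after which the same score can rank candidate facts — exactly the kind of knowledge-related task the hybrid approaches above aim to improve.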
We thus invite submissions that illustrate how deep learning tasks can benefit from Semantic Web resources and technologies. At the same time, we are interested in submissions that show how knowledge representation can assist in deep learning tasks and how knowledge representation systems can build on top of deep learning results. We believe that now is the right moment for this multi-disciplinary workshop since an increased mutual interest in hybrid approaches from both communities could be observed this year, e.g. with a special issue on machine learning for ontology population by the Semantic Web Journal.