The past decade has seen enormous growth in research on the data engineering underpinnings of digital food and cooking recipes, as food is essential to human life and health. The ability to collect, store, process, and evaluate cooking recipes has advanced immeasurably, and data science-driven methods have had an unprecedented impact on food experience sharing and recommendation at large, mainly because of their success in analysing and predicting human cooking expectations, flavour, and taste preferences. Data engineering stands to benefit from the food computing and cooking recipe revolution in similar ways, but realizing this vision requires thoughtful and concerted effort. The 4th International Workshop on Data Engineering meets Intelligent Food and COoking Recipes (DECOR@ICDE2021) aims to accelerate research in data science by providing a forum for the latest innovations at the intersection of data engineering and intelligent food and cooking recipes. The workshop focuses specifically on data science innovations that accelerate the organization, integration, access, and sharing of digital objects in support of the intelligent food and cooking recipes domain. This domain comprises not only the process of cooking but also intelligent methods for enhancing human-food interactions, ranging from devising technology, playful interactions, and multisensory experience design to understanding cross-cultural eating habits and perception, as well as food choices and their connections to health. The ultimate goal is to increase the ability to influence eating habits and food choices in ways that simultaneously promote healthful eating decisions and creative new human-food interaction experiences.
Organizers: F. Andres (National Institute of Informatics, JP), G. Ghinea (Brunel University, UK), W. Grosky (University of Michigan-Dearborn, US), M. Leite (University of South Florida St. Petersburg, US)
HardBD: Data properties and hardware characteristics are two key aspects for efficient data management. A clear trend in the first aspect, data properties, is the increasing demand to manage and process Big Data in both enterprise and consumer applications, characterized by the fast evolution of Big Data Systems, such as Key-Value stores, Document stores, Graph stores, Spark, MapReduce/Hadoop, Graph Computation Systems, Tree-Structured Databases, as well as novel extensions to relational database systems. At the same time, the second aspect, hardware characteristics, is undergoing rapid changes, imposing new challenges for the efficient utilization of hardware resources. Recent trends include massive multi-core processing systems, high performance co-processors, very large main memory systems, persistent main memory, fast networking components, big computing clusters, and large data centers that consume massive amounts of energy. Utilizing new hardware technologies for efficient Big Data management is of urgent importance.
Active: Existing approaches to solving data-intensive problems often require data to be moved near the computing resources for processing. These data movement costs can be prohibitive for large data sets. One promising solution is to bring virtualized computing resources closer to the data, whether it is at rest or in motion. The premise of active systems is a new holistic view of the system in which every data medium and every communication channel become compute-enabled. The Active workshop aims to study different aspects of the active systems' stack, understand the impact of active technologies (including but not limited to hardware accelerators such as SSDs, GPUs, FPGAs, and ASICs) on different application workloads over the lifecycle of data, and revisit the interplay between algorithmic modeling, compilers and programming languages, virtualized runtime systems and environments, and hardware implementations, for effective exploitation of active technologies.
HardBD & Active21: Both HardBD and Active are interested in exploiting hardware technologies for data-intensive systems. The aim of this one-day joint workshop is to bring together researchers, practitioners, system administrators, and others interested in this area to share their perspectives on exploiting new hardware technologies for data-intensive workloads and big data systems, and to discuss and identify future directions and challenges in this area. The workshop aims at providing a forum for academia and industry to exchange ideas through research and position papers.
Organizers: Shimin Chen (Chinese Academy of Sciences, email@example.com), Stefan Manegold (CWI), Mohammad Sadoghi (UC Davis)
Blockchain has emerged as a technology with the potential to disrupt traditional database systems. Over the past decade, it has found numerous applications in finance, IoT, healthcare, supply chains, e-commerce, and beyond, and it has become a hot research area. In contrast to traditional database systems, blockchain is decentralized, immutable, and cryptographically secured, with no single entity having full control of the system. This new technology has significantly changed the way data is accessed, stored, retrieved, and discovered, and it presents many fundamental research challenges for the management of data in blockchains.
The aim of this workshop is to bring together researchers and practitioners working on blockchain systems from different data management aspects, including storage management, fault tolerance, query processing, information discovery, transaction management, and security and privacy. We encourage papers that apply ideas and techniques from different areas to understand the problems and challenges in blockchain systems and propose innovative solutions. We also welcome papers that report novel systems and applications built with blockchain technology.
Organizers: Yuzhe Tang (Syracuse University, USA), Jianliang Xu (Hong Kong Baptist University, HK), Demetris Zeinalipour (University of Cyprus, Cyprus)
During the last forty years, data management systems have grown in scale, complexity, and number of installations. At the same time, administration of these systems has become very expensive with the human factor dominating the total cost of ownership. Current trends like cloud computing make this situation even more problematic for service providers who have to configure and manage thousands of database nodes.
There has been a significant amount of research addressing this problem by providing autonomic or self-* features in database systems to support complex administrative tasks like physical database design, problem diagnosis, and performance tuning. However, new challenges arise from trends like cloud and cluster computing, virtualization, and Software-as-a-Service (SaaS). A major challenge is the need to scale self-management capabilities to the level of hundreds to thousands of nodes while taking economic factors into account.
Autonomic, or self-managing, systems are a promising approach to achieving the goal of systems that are easier to use and maintain. A system is considered autonomic if it possesses the capabilities to be self-configuring, self-optimizing, self-healing, and self-protecting. The aim of the SMDB workshop is to provide a forum for researchers from both industry and academia to present and discuss ideas related to self-management and self-organization in data management systems, ranging from classical databases to data stream engines to large-scale cloud environments, that utilize advanced AI, machine learning, and data mining and analysis.
We plan to follow the successful format of previous instances of this workshop: approximately 10 presentations of accepted papers, a keynote address by a well-known speaker and subject matter expert in self-managing database systems, as well as a panel discussion involving experts from industry and academia.
Organizers: Panos K. Chrysanthis (University of Pittsburgh), Meichun Hsu (Oracle Corporation), Herodotos Herodotou (Cyprus University of Technology), Yingjun Wu (Amazon Web Services), Constantinos Costa (University of Pittsburgh)