Research

Trusted data processing in untrusted environments

Distributed database systems store and process data collectively across many nodes. The traditional assumption is that these nodes fully trust each other at all times and that all nodes behave uniformly. While this assumption was reasonable in the past, it is outdated in today's era of ubiquitous networking between independent companies, organizations, and digital devices. Since the number of potentially insecure environments will grow considerably as society continues to digitize, technical solutions for data management that behave robustly in such environments are needed; these are generally grouped under the term 'blockchain systems'. This research forms the foundation for designing new concepts that expand the field of trustworthy data processing in terms of speed, functionality, and applicability.
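One core building block behind such robust, blockchain-style systems is tamper evidence: records are chained by cryptographic hashes so that a node cannot silently alter history. The following is a minimal illustrative sketch (not any specific system's implementation) of such a hash chain; the function and field names are chosen for this example only.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def make_entry(prev_hash, payload):
    # Each entry commits to its predecessor's hash, so changing any
    # earlier record invalidates every hash that follows it.
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    return {"prev": prev_hash, "data": payload, "hash": digest}

def build_chain(payloads):
    chain, prev = [], GENESIS
    for p in payloads:
        entry = make_entry(prev, p)
        chain.append(entry)
        prev = entry["hash"]
    return chain

def verify_chain(chain):
    # Recompute every hash; any tampered record breaks the check.
    prev = GENESIS
    for entry in chain:
        expected = make_entry(prev, entry["data"])["hash"]
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

In a real distributed setting, such chains are combined with replication and consensus protocols so that honest nodes can detect and out-vote a misbehaving one; the sketch only shows the tamper-evidence layer.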

System-level data management and analysis

Classically, there is a strict separation between the data management system and the remaining system components, such as the hardware and the operating system. In principle this makes sense, as it protects the individual layers from each other and simplifies the use of components. In the context of data management and analysis, however, this strict separation often prevents the overall system from reaching its full potential in terms of processing speed. Specifically, the operating system's shielding of memory management, together with limited access to hardware-near components, makes it difficult for the database system to carry out its internal processing optimally. The goal of this research is to further break down the boundaries between individual system components in order to dramatically increase the speed of data processing and analysis. In particular, the operating system must be adapted so that the data processing layer can access and manipulate certain components, such as memory management, more directly. At the same time, it must be ensured that the security guarantees of the system remain intact.
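A small taste of this boundary, at the level of standard OS interfaces: instead of reading a data file through buffered system calls, a database engine can map it directly into its address space and pass access-pattern hints to the kernel. The sketch below is a generic illustration using Python's mmap module, not the mechanism developed in this research; the madvise hint is guarded because it is only available on some platforms (e.g. Linux, Python 3.8+).

```python
import mmap
import os
import tempfile

def scan_file_mapped(path):
    # Map the file into this process's address space instead of
    # copying it through read() into user-space buffers.
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        # Where supported, tell the kernel the scan is sequential so it
        # can prefetch pages aggressively -- a simple example of the
        # database handing knowledge down to the OS layer.
        if hasattr(mm, "madvise") and hasattr(mmap, "MADV_SEQUENTIAL"):
            mm.madvise(mmap.MADV_SEQUENTIAL)
        total = sum(mm[:])  # trivial stand-in for a table scan
        mm.close()
        return total
```

Real OS/DBMS co-design goes much further (custom page-fault handling, direct I/O, bypassing the page cache entirely), but even these portable hints show how much performance-relevant knowledge the database holds that the operating system normally never sees.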

Parallel and scalable algorithms and data structures in the context of data management

In an era of highly parallel processors, which are in turn interconnected in clusters of hundreds of machines, highly parallelized and scalable algorithms and data structures are essential to exploit the available hardware. Especially when dealing with very large data sets, parallel data processing is often the only way to achieve significant speedups. The goal of this research is therefore to adapt data management and analysis procedures to highly parallel hardware. Such adaptation matters both for individual system components and for the transaction processing of the overall system.
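A classic example of such a scalable data structure is a lock-striped (partitioned) hash table: keys are sharded across independent partitions, each guarded by its own lock, so concurrent writers rarely contend on the same lock. The sketch below is a generic illustration of the technique, not a structure from this research; note that in CPython the global interpreter lock limits true thread parallelism, while the partitioning idea itself carries over directly to natively parallel runtimes.

```python
import threading
from collections import Counter

class StripedCounter:
    """A counter partitioned into independently locked stripes."""

    def __init__(self, stripes=16):
        # One (lock, partition) pair per stripe; a key always maps
        # to the same stripe, so per-key counts stay consistent.
        self._stripes = [(threading.Lock(), Counter())
                         for _ in range(stripes)]

    def _stripe(self, key):
        return self._stripes[hash(key) % len(self._stripes)]

    def add(self, key, n=1):
        lock, part = self._stripe(key)
        with lock:  # contention only among writers of the same stripe
            part[key] += n

    def total(self, key):
        lock, part = self._stripe(key)
        with lock:
            return part[key]
```

With a single global lock, every writer serializes on one mutex; with striping, throughput scales with the number of stripes as long as the key distribution is not heavily skewed, which is exactly the kind of trade-off (scalability vs. skew handling) that parallel data management systems must navigate.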