Laura Bassi Center of Expertise
Living Models for Cooperative Systems
Model-Based Security Testing of Clouds
In recent years, Cloud computing has become one of the most successful computing paradigms. It has changed the way we consume IT by unlocking novel uses of software and hardware, resulting in a growing rate of outsourcing of hardware and software infrastructures.
However, as a 2011 study by the Ponemon Institute shows, security is still a requirement that is neglected most of the time. This is also confirmed by a 2013 Cloud Security Alliance report listing the top nine threats to Cloud computing, among them well-known threats like data breaches, account hijacking, and insecure application interfaces. This variety of threats results from Cloud computing’s openness and diversity of usage. Thus, security is a core requirement for Cloud services. Moreover, assuring the security of a Cloud computing environment is not a one-time task; it has to be performed during the complete lifespan of the Cloud. This is motivated by the fact that Clouds undergo daily changes in terms of newly deployed applications and offered services. Tracking such changes at a central point is crucial for assuring security. It enables the involved parties, i.e., service providers and service consumers, to accurately test their cloud infrastructure (in the case of service providers) or their process integration (in the case of service consumers). Model-based approaches are particularly promising as they can accommodate different technologies and a high degree of evolution. However, so far, this potential has not been unlocked. Additionally, since negative security requirements of Cloud applications are typically left unspecified, properly evaluating their security remains a precarious task.
The core goal of MOBSTECO is to develop a novel security testing method for cloud deployments, applicable to both cloud customers and cloud service providers. Our approach will be model-based, to provide as much independence as possible from frequently changing technologies and to support continuous testing. By using models, we also define a central point where all information concerning the Cloud Under Test coalesces. In addition, MOBSTECO will be risk- and knowledge-based to address the problem of negative requirements testing. The approach will incorporate automated risk analysis based on a scalable vulnerability knowledge base to prioritize tests, and model analysis to guarantee high-quality test models through tool-supported reviewing and checking techniques. The high-level system and security models will be transformed into an executable test model that is directly executed and annotated with test results. For generating effective test data we plan to use a custom fuzzer that supports the generation of different kinds of test data, depending on the specific attacks executed within a negative test.
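The attack-specific fuzzing step could be sketched as follows. This is a minimal illustration with invented payload lists and function names, not the actual MOBSTECO fuzzer: depending on the attack class of a negative test, a different mutation strategy is applied to otherwise valid input data.

```python
import random

# Hypothetical payload catalog keyed by attack class; a real fuzzer
# would draw on a much larger, curated vulnerability knowledge base.
PAYLOADS = {
    "sql_injection": ["' OR '1'='1", "'; DROP TABLE users;--"],
    "xss": ["<script>alert(1)</script>", "\"><img src=x onerror=alert(1)>"],
}

def fuzz(seed: str, attack: str, rng: random.Random) -> str:
    """Mutate a valid input value into attack-specific test data."""
    payload = rng.choice(PAYLOADS[attack])
    # Insert the payload at a random position in the seed value.
    pos = rng.randrange(len(seed) + 1)
    return seed[:pos] + payload + seed[pos:]

rng = random.Random(42)
print(fuzz("alice@example.com", "sql_injection", rng))
```

Keying the generation on the attack class is what distinguishes this kind of security fuzzing from plain random mutation: each negative test only receives data relevant to the vulnerability it probes.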
MOBSTECO will deliver a generic and systematic risk-driven model-based security testing approach for cloud-based applications configurable via fuzzing and a vulnerability knowledge base employing logic programming.
Investigating the Process of Process Modeling
While process modeling has gained increasing importance for documenting business operations and automating workflow execution, process models display a wide range of quality problems impeding their comprehensibility and consequently hampering their maintainability. Literature reports, for example, error rates between 10% and 20% in industrial process model collections. Moreover, non-intention-revealing or inconsistent naming, redundant process fragments, and overly large and unnecessarily complex process models are typical quality problems which can be observed in existing process model collections. These problems have resulted in active research on the quality of process models with the goal of obtaining a better understanding of the factors influencing it. While existing research mostly focuses on the product or outcome of process modeling, the Nautilus project aims at taking a closer look at how process models are created (i.e., the process of process modeling). Obviously, factors influencing this modeling process eventually have an impact on the quality of its outcome (i.e., the resulting process model) and the cost incurred in its creation.
The major goal of the Nautilus project is to systematically investigate the process of process modeling with the ultimate aim of improving the product of process modeling. To achieve our research objectives we will rely on a novel method for capturing and analyzing the process of process modeling. In particular, the Nautilus project will systematically investigate which modeling strategies are applied by both novice and experienced process modelers, trace these strategies back to process model quality, and analyze how novice and experienced process modelers differ in this respect. Moreover, Nautilus will investigate the impact of tool support (i.e., provision of high-level change patterns, refactoring support, and automatic layout support) on the process of process modeling. The results of this investigation will then be used to develop novel methods and tools that support process modelers in creating process models through recommendations, thereby improving the product of process modeling. Moreover, the insights into the process of process modeling obtained from Nautilus are expected to provide important benefits for teaching.
Modeling Error Analysis and Resolution
Although process modeling has gained increasing importance for documenting business operations and automating workflow execution, process models still display a wide range of quality problems impeding their comprehensibility and consequently hampering their maintainability. Literature reports, for example, error rates between 10% and 20% in industrial process model collections. These problems have resulted in active research on the quality of process models with the goal of obtaining a better understanding of the factors influencing it. So far, existing research has mostly focused on the product or outcome of process modeling. Recently, a new stream of research emerged that aims at obtaining a general understanding of the process followed to create process models, the so-called process of process modeling (PPM). Even though it is known that quality issues frequently arise during the PPM, it is not clear at what point quality issues are introduced, how they can be discovered, and in what way they can be resolved by process modelers.
In its newly acquired project ModErARe, the BPM Research Cluster aims at closing this research gap by systematically investigating quality issues that occur during the process of process modeling. More specifically, ModErARe investigates why quality issues occur, how they are discovered, and how they are resolved by looking at the PPM. ModErARe not only provides a better understanding of typical quality issues during the PPM, but also of their occurrence (e.g., problem patterns frequently resulting in quality issues or reasons for quality issues). As a further outcome, ModErARe provides methods and techniques for predicting quality issues and hence for preventing them. In addition, enabled by a better understanding of the processes involved in the discovery and resolution of quality issues, ModErARe contributes methods and techniques that guide process modelers during the PPM in discovering and resolving quality issues. Ultimately, this leads to improved modeling outcomes through error prevention as well as support for error discovery and resolution.
ModErARe is funded by the Austrian Science Fund (FWF) with approximately EUR 300,000. This allows the BPM Research Cluster to employ Stefan Zugal as a post-doc researcher and to hire one PhD student for the project duration of three years. The ModErARe project will start at the beginning of 2014.
Behavior Patterns in Process Modeling
Considering the intense usage of business process modeling in all types of business contexts, the relevance of process models has become obvious. Yet, industrial process models display a wide range of quality problems. These problems have resulted in active research on the quality of process models with the goal of obtaining a better understanding of the factors influencing it. So far, existing research has mostly focused on the product of process modeling, i.e., the process model. Recently, a new stream of research emerged that aims at obtaining a general understanding of the process followed to create process models, the so-called process of process modeling (PPM). Even though the PPM is a highly flexible process and the PPM instances of modelers differ, existing research on the PPM suggests the existence of patterns of re-occurring behavior (PPM behavior patterns). However, a comprehensive understanding of PPM behavior patterns is missing. Moreover, it is unclear how these patterns relate to process model quality, how the different patterns combine into modeling styles, and which factors determine the occurrence of PPM behavior patterns.
The Modeling Mind project aims to close this research gap by identifying a comprehensive set of PPM behavior patterns, considering the modeler’s interactions with the modeling environment, verbalizations of the modeler’s thoughts, and the modeler’s eye movements while creating a process model. Further, the relation of these patterns to process model quality is examined. In addition, the Modeling Mind project aims at deriving a set of modeling styles by investigating the co-occurrence of PPM behavior patterns. Moreover, the project aims to understand the factors determining the occurrence of PPM behavior patterns, covering modeler-specific factors, e.g., working memory capacity and personality, and task-specific factors, e.g., specific model elements and task complexity. A better understanding of PPM behavior patterns and their influencing factors will not only allow giving advice on the design of better (personalized) modeling environments, but also facilitate the development of tailored training materials, leading to process models of higher quality.
The Modeling Mind is an interdisciplinary project together with Pierre Sachse and Marco Furtner of the Institute of Psychology at the University of Innsbruck. The Modeling Mind is funded by the Austrian Science Fund (FWF). The project allows the BPM Research Cluster to employ Jakob Pinggera as a post-doc researcher for the project duration of three years. The Modeling Mind project will start in July 2014.
Policy and Security Configuration Management
EU – FP7 – Project
The PoSecCo project proposes new methods and tools for configuring a service landscape in such a way that security requirements are met. At design time, the Security Decision Support System (SDSS) can be used to refine high-level security requirements into low-level configuration settings that enforce them. The configuration settings generated by the SDSS are stored in the central MoVE repository. Such configurations are “abstract” in the sense that they use vendor- and product-independent syntax and formats. They are defined as instances of the Configuration meta-model, in the form of sets of rules that depend on the control’s features and directly use the functionality available at the target control. The last step in the enactment of such configurations is their deployment on the actual system.
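The final deployment step could be illustrated roughly as follows. This is a hypothetical sketch with invented rule fields and target controls, not PoSecCo's actual Configuration meta-model: the same vendor-independent rule is rendered into the concrete syntax of each target control.

```python
# Hypothetical abstract, vendor-independent filtering rule
# (field names are invented for illustration).
ABSTRACT_RULE = {"action": "deny", "src": "10.0.0.0/8", "dst_port": 23}

def render_iptables(rule: dict) -> str:
    """Render the abstract rule as a Linux iptables command."""
    target = "-j DROP" if rule["action"] == "deny" else "-j ACCEPT"
    return (f"iptables -A FORWARD -s {rule['src']} "
            f"-p tcp --dport {rule['dst_port']} {target}")

def render_cisco_acl(rule: dict) -> str:
    """Render the same abstract rule as a Cisco-style ACL entry."""
    return (f"access-list 101 {rule['action']} tcp "
            f"{rule['src']} any eq {rule['dst_port']}")

print(render_iptables(ABSTRACT_RULE))
print(render_cisco_acl(ABSTRACT_RULE))
```

The point of the abstraction is that the SDSS only has to reason about one rule format; adding a new kind of target control means adding a renderer, not changing the policy refinement.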
Security Engineering for Lifelong Evolvable Systems
EU – FP7 – Project
Software-based systems are becoming increasingly long-living. This was demonstrated strikingly by the year 2000 bug, which occurred because software had been in use for far longer than its expected lifespan. At the same time, software-based systems are becoming increasingly security-critical, since software now pervades critical infrastructures dealing with critical data of both nations and private individuals. There is therefore a growing demand for more assurance and more verified security properties of IT systems, both during development and at deployment time, in particular for long-living systems. Yet a long-lived system also needs to be flexible, to adapt to changes and adjust to evolving requirements, usage, and attack models. However, using today’s system engineering techniques, we are forced to trade flexibility for assurance or vice versa.
“Real” software development cycle
Our objective is thus to develop techniques and tools that ensure “lifelong” compliance to evolving security, privacy and dependability requirements for a long-running evolving software system. This is challenging because these requirements are not necessarily preserved by system evolution.
The project will develop techniques, tools, and processes that support design techniques for evolution, testing, verification, re-configuration, and local analysis of evolving software. The project results will be applied and evaluated in the industrial application domains of mobile devices, digital homes, and large-scale air traffic management, which all offer both great research challenges and long-term business opportunities.
An Initiative of the Austrian Computer Science Universities for School Students
You can make IT is an initiative of the Austrian computer science universities: the universities of Innsbruck, Klagenfurt, Linz, Salzburg, and Vienna, as well as TU Graz, TU Wien, and WU Wien. Together, they want to draw young people’s attention to computer science as a field of study and to improve the image of computer science. The initiative is financed from special higher-education funds of the Austrian Federal Ministry of Science within the call “MINT und Masse”.
The GS1 Sync database is currently being built up as a knowledge base of standardized food product data. Such a centralized knowledge base constitutes a novel and extremely valuable data source, from which new IT-supported services in areas such as health and gastronomy will be developed for the benefit of consumers.
An important prerequisite for the development of such services is the quality of the product data in the GS1 Sync database. To develop reliable services for target groups such as people with allergies or diabetes, for example, the data must be of high quality.
Since the data is maintained in a decentralized fashion by the product manufacturers, a data maintenance process is required. Such a process defines, on the one hand, the responsibilities and the activities to be performed; on the other hand, it supports the necessary manual quality checks with automated checks and monitors and evaluates the overall quality status of the database.
The goal of the project is to refine the existing data maintenance process at GS1 Austria, to support it with a tool, and to define a measure for the quality of the data. Particular attention is paid to the development of automated quality checks of the product data. Alongside these activities, prototypical services based on the product data will be developed in consultation with the partners.
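Such an automated quality check could be sketched as follows. Field names and plausibility thresholds are invented for illustration and do not reflect the actual GS1 Sync schema.

```python
# Hypothetical sketch of an automated product data quality check
# (field names and thresholds are invented, not the GS1 Sync schema).
def check_product(record: dict) -> list[str]:
    """Return a list of quality issues found in a product record."""
    issues = []
    gtin = record.get("gtin")
    if not gtin or len(gtin) != 13:
        issues.append("GTIN missing or not 13 digits")
    if "allergens" not in record:
        issues.append("allergen declaration missing")
    kcal = record.get("energy_kcal_per_100g")
    if kcal is not None and not 0 <= kcal <= 900:
        issues.append("energy value implausible")
    return issues

sample = {"gtin": "9001234567890", "energy_kcal_per_100g": 52}
print(check_product(sample))  # → ['allergen declaration missing']
```

Checks of this kind complement, rather than replace, the manual reviews in the data maintenance process: they flag records that cannot possibly be correct, so that reviewers can focus on content-level questions.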
Living Safety & Security Cases for Cyber-Physical Systems Certification
Within SALSA, our goal is to develop a novel tool-supported method of “living” safety & security cases, enabling efficient compliance management in settings characterized by heterogeneity, cross-organizational structures, certification with respect to multiple standards, and short release cycles.
Core concepts within SALSA are a Workflow-enhanced Knowledge Base supporting collaborative maintenance of safety & security evidence chains, coordination of tasks in multi-standard contexts, and efficient handling of system releases.
The SALSA framework will be evaluated in the context of autonomous driving.