
Short Summary

Machine Learning and simulations are key technologies for Industrie 4.0. How can they be combined to provide even more benefit? This article illustrates different use cases for the combination of both technologies. 

 

Article

1. Introduction

Simulation techniques have a long tradition in manufacturing and are also a key to success for Industrie 4.0 [1]. Exemplary applications are the prediction of process properties or the simulation of distributed supply chains. With the recent advances in machine learning and computing power, it is natural to ask how simulations can be improved by machine learning. 

 

Machine learning and simulation have a similar goal: to predict the behavior of a system (e.g. its energy consumption) with the help of data analysis and physical modeling. However, the approaches to achieving that goal are quite different. The most widespread approach in the simulation community is the manual creation of behavior models using mathematical-physical modeling, i.e. a theoretical modeling of the subsystems and processes involved. Even when using preconfigured libraries, model building for complex systems can be very time-consuming, depending on the desired level of detail. The greater the required model accuracy, the higher the demands on the prior knowledge of the modeler. The modeler must consider the physical interactions of the system and its environment (e.g. other systems or subsystems) in order to identify the relevant influences on the system properties. 

 

Another possibility is an experimental approach, where the input and output variables of a system are measured on a test bench and the captured data is used to parametrize mathematical equations describing the behavior of the system. 
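To make the experimental approach concrete, here is a minimal sketch of parametrizing an algebraic behavior equation by least squares. The linear model P = a + b·v and the test-bench readings below are invented purely for illustration:

```python
# Hypothetical test-bench parametrization: assume the device's power draw
# follows the algebraic model P = a + b * v, and fit a and b to measured
# data by ordinary least squares (closed-form solution for one regressor).

def fit_linear(v_samples, p_samples):
    """Return (a, b) minimizing sum((a + b*v - p)^2)."""
    n = len(v_samples)
    mean_v = sum(v_samples) / n
    mean_p = sum(p_samples) / n
    cov = sum((v - mean_v) * (p - mean_p) for v, p in zip(v_samples, p_samples))
    var = sum((v - mean_v) ** 2 for v in v_samples)
    b = cov / var
    a = mean_p - b * mean_v
    return a, b

# Fictitious test-bench readings: (spindle speed, measured power in W)
speeds = [100, 200, 300, 400]
powers = [52, 101, 153, 198]

a, b = fit_linear(speeds, powers)
predict = lambda v: a + b * v   # the parametrized behavior model
```

In practice the chosen equations are usually differential rather than linear-algebraic, but the principle is the same: the human expert supplies the model structure, the test-bench data supplies the parameters.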

 

For both approaches (theoretical and experimental model building), a human expert has to choose, based on their domain knowledge, the equations (e.g. differential or algebraic equations) that are suitable for the problem at hand. 

 

In contrast, the machine learning approach is based on choosing an appropriate inference machine (such as a neural network) and providing enough training material to train the underlying model. The following table gives a qualitative comparison of the advantages and disadvantages of the three approaches to behavior modeling, in the specific application domain of virtual validation of autonomous and highly automated driving functions. 

 

| Aspect                           | Theoretical | Experimental | DL-Based |
| -------------------------------- | ----------- | ------------ | -------- |
| Domain-Knowledge                 | high        | middle       | low      |
| Manual effort for model creation | high        | middle       | low      |
| Amount of Empirical Data         | –           | middle       | high     |
| Model preciseness                | high        | middle       | middle   |
| Execution Speed of Model         | low         | high         | high     |
| IP Protection                    | low         | middle       | high     |
| Effort for automation            | high        | low          | low      |
| Effort for model training        | –           | middle       | high     |

As the table shows, simulation and machine learning approaches each have pros and cons, so it is natural to consider cases where both can be combined. In this article we therefore discuss generic use cases with potential applications that combine simulation and machine learning. 

 

2. Combining Machine Learning & Simulation 

2.1 Integrating Machine Learning Models into Simulations 

World energy consumption has continued to increase in recent years. Industrial activities are a major consumer, accounting for about one third of global energy use over the last few decades. With renewable energies, it is beneficial to shift energy-intensive production processes to times when photovoltaics and wind turbines provide enough energy. Hence, energy analysis and optimization are essential topics within a sustainable manufacturing strategy. 

 

However, as outlined in the introduction, predicting the (physical) behavior of individual systems with physical simulation models is already challenging, and giving precise forecasts for whole factories is a notoriously difficult and time-consuming task. Hence, methods based purely on statistics have often been the only means to achieve that goal. For energy forecasts, time series analysis with approaches like SARIMA (Seasonal Autoregressive Integrated Moving Average) is an established method that is already used in industry. Nonetheless, as the name suggests, autoregression assumes that previous observations in the time series provide a good estimate of future observations. This might be appropriate in a traditional manufacturing environment that produces the same product (with slight modifications) all the time; in the context of Industrie 4.0, with changeable production and customized products, it seems completely inappropriate. Moreover, a naive approach using machine learning alone might be inappropriate as well: on the one hand, a whole factory simply has too many parameters for them to be learned efficiently, even with the newest advances in deep neural networks, so a careful pre-selection of the parameters to be learned has to be done by a human expert. On the other hand, even after that, it will be hard to provide enough training data, given the changeable production. As a consequence, the machine learning algorithm might suffer from overfitting: it is very well suited to predicting the energy consumption of the products it has seen before, but it will perform badly on new products. 
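The autoregressive assumption can be illustrated with a minimal sketch (plain Python on synthetic data; a real SARIMA model additionally handles seasonality, trends and noise). The fitted model can only extrapolate patterns already present in the historical series:

```python
# Minimal sketch of the autoregressive idea behind SARIMA-style forecasting:
# an AR(1) model x[t] = phi * x[t-1] + noise, with phi fitted by least
# squares on historical consumption data. The forecast is only as good as
# the assumption that future load behaves like past load.

def fit_ar1(series):
    """Least-squares estimate of phi in x[t] ~= phi * x[t-1]."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def forecast(series, phi, steps):
    """Iterate the fitted model forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out

# Synthetic, noise-free history that decays by a factor of 0.8 per step
history = [10.0, 8.0, 6.4, 5.12, 4.096]
phi = fit_ar1(history)           # recovers 0.8 on this toy series
```

If the production program changes, `history` no longer describes the future, and any forecast built this way degrades — which is exactly the Industrie 4.0 problem described above.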

 

Hence, neither simulation nor machine learning alone will be sufficient for accurate energy estimation of Industrie 4.0 plants. However, when we combine both approaches, we might achieve a highly accurate simulation model for the energy consumption of a factory. 

 

Clearly, the first step of such a procedure is data acquisition. With FabOS and the asset administration shell, this data acquisition becomes feasible, as outlined in simplified form in Figure 1. Here, the MES controls the production in the plant by invoking services (such as drilling) which are provided by the asset administration shell. Moreover, relevant properties of the device, like its energy consumption, are fed back to the MES via the asset administration shell. The executed services, together with the energy consumption of the device, can be collected by a data acquisition system and stored, e.g., in a time series database. The acquired data can afterwards be used to train a machine learning algorithm to generate an energy model, e.g. as a neural network. The crux of this approach is that we estimate the energy consumption based on the executed services, which provides a much more fine-grained energy model than evaluating the energy consumption of the plant as a whole. 

 

Hence, the “device energy simulation twin” has to offer the same service interface as the original device and, based on the invoked service, predicts the current energy consumption. 

 

Note that this encapsulated machine learning model will not suffer from overfitting in the presence of new products, as we can now estimate the energy consumption of a new product based on the recipe (i.e. the service invocations) used to actually produce it.[1]
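A minimal sketch of the idea, with hypothetical service names ("drilling", "milling") and the simplest possible per-service model (the sample mean) standing in for a trained neural network:

```python
# Sketch of a "device energy simulation twin": per-service energy samples
# collected via the asset administration shell are turned into a simple
# per-service model (here: the sample mean), and the twin mirrors the
# device's service interface. Service names are illustrative.

from collections import defaultdict
from statistics import mean

class EnergyTwin:
    def __init__(self):
        self._samples = defaultdict(list)   # service name -> energy readings

    def record(self, service, energy_wh):
        """Feed acquired (service, energy) data, e.g. from a time series DB."""
        self._samples[service].append(energy_wh)

    def invoke(self, service):
        """Same interface as the real device, but returns predicted energy."""
        return mean(self._samples[service])

twin = EnergyTwin()
for e in (4.9, 5.1, 5.0):
    twin.record("drilling", e)        # hypothetical service
twin.record("milling", 12.0)          # hypothetical service

# A new product's recipe is just a sequence of service invocations, so its
# energy consumption can be estimated without ever having produced it:
recipe = ["drilling", "drilling", "milling"]
estimate = sum(twin.invoke(s) for s in recipe)
```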

 

Figure 1: Generating an energy simulation twin 

 

Using these simulation digital twins, we can build up a holistic system-of-systems simulation for our Industrie 4.0 factory, as depicted in Figure 2: 

 

Figure 2: Simulating the Energy Consumption of a Factory 

 

In this setting, the real ERP system is replaced by a data generator that triggers the production process by issuing production orders to the simulated twin of the MES system, which controls the simulated production process. Here our machine-learned energy models come into play, reporting the predicted energy consumption based on the executed services. 

 

2.2 Machine Learning of existing Simulation Models  

Another branch where simulation and machine learning can be combined is the learning of existing behavior simulation models (sometimes also called learning of white-box models). As discussed in the introduction, behavior models are often created manually by means of mathematical-physical modeling (theoretical modeling). This can lead to highly accurate, but also highly complex models with long simulation runtimes. For example, the simulation of Dynamic Random Access Memories (DRAMs) requires highly accurate models due to the complex timing and power behavior of DRAMs. However, cycle-accurate DRAM models often become the bottleneck for the overall simulation time. With machine learning, we can achieve significant simulation acceleration with little loss of model accuracy, as discussed in [2], where neural networks are used to speed up DRAM simulations. Another field where machine learning is applied for simulation speedup is the simulation of physics. For example, simulators for particle physics describe the low-level interactions of particles with matter, which are very computationally intensive and consume a significant amount of simulation time. As discussed in the survey article [3], generative adversarial networks, among others, have been successfully applied for simulation acceleration. Further examples are given in [4], where different machine learning models are trained to predict the flow over an airfoil using data from a large-scale computational fluid dynamics (CFD) simulation. Another use case in that paper is the reduction of a high-fidelity finite element model to a low-dimensional machine learning model. The machine learning models used there are neural networks, polynomial linear regression, k-nearest neighbors (kNN) and decision trees. All of them compute the result much faster than the compute-intensive fluid dynamics simulation, but at the price of sometimes very inaccurate results. 
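The surrogate idea can be sketched in a few lines. The "simulator" below is a toy stand-in for an expensive high-fidelity code, and the polynomial fit stands in for the neural networks used in the cited work:

```python
# Sketch of learning a surrogate for an existing (white-box) simulation
# model: sample the expensive model, fit a cheap data-driven model to the
# samples, then use the cheap model in place of the original, trading some
# accuracy for speed.

import numpy as np

def simulator(x):
    """Toy stand-in for an expensive high-fidelity simulation model."""
    return 0.5 * x**2 - 2.0 * x + 1.0

# 1) Generate training data by running the slow model
xs = np.linspace(0.0, 10.0, 50)
ys = simulator(xs)

# 2) Fit a cheap surrogate (degree-2 polynomial least squares)
coeffs = np.polyfit(xs, ys, deg=2)
surrogate = np.poly1d(coeffs)

# 3) On this toy problem the surrogate reproduces the simulator closely;
#    real surrogates (e.g. neural networks for DRAM or CFD models) incur
#    a genuine accuracy/speed trade-off.
max_err = float(np.max(np.abs(surrogate(xs) - ys)))
```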

  

Besides simulation acceleration, the transformation into a neural network additionally provides intellectual property (IP) protection. This is gaining importance due to the increasing number of behavioral models exchanged between companies, especially in the automotive sector. 

 

3. Summary 
In this article, we discussed the improvement of simulations using machine learning techniques by means of generic use cases. In FabOS, we will evaluate these generic use cases in concrete applications. Additionally, we will provide the necessary components and interfaces to support the FabOS user in applying the aforementioned techniques. Obviously, there are more conceivable ways in which simulation and machine learning can be combined. For example, given the large number of simulations used in the manufacturing domain, it is desirable to support the selection of proper simulation tools or the parametrization of simulations using machine learning techniques. Another topic that is currently getting a lot of attention is the validation of machine learning components using simulation techniques. However, discussing both topics in detail is beyond the scope of this article.

 

[1]  Gunal, Murat M. Simulation for Industry 4.0. Basel, Switzerland: Springer Nature Switzerland AG, 2019. 

[2]  Feldmann, Johannes, et al. "Fast and accurate dram simulation: Can we further accelerate it?." 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2020. 

[3]  Guest, Dan, Kyle Cranmer, and Daniel Whiteson. "Deep learning and its application to LHC physics." Annual Review of Nuclear and Particle Science 68 (2018): 161-181. 

[4] Swischuk, Renee, et al. "Projection-based model reduction: Formulations for physics-based machine learning." Computers & Fluids 179 (2019): 704-717. 

 

[1] Obviously, depending on the device under investigation and the accuracy needed, even the service alone might not be sufficient. However, the important point here is that all needed information will be available via the asset administration shell, so we can acquire the appropriate training data for machine learning. 

 

Authors: Frank Schnicke, Andreas Morgenstern, Oliver Bleisinger, Florian Balduf

Company: Fraunhofer IESE

 

 

Created by l.demes94 27.04.2022 09:12.

Modified by l.demes94 27.04.2022 09:16.

Article: One Stop Shop: From initial information to use in the factory 

FabOS is a highly complex, versatile operating system for production consisting of numerous individual components. The One Stop Shop supports interested parties with the configuration and offers them orientation, e.g. for the following questions:

  • Which components are the right ones to cover my use case?
  • Are all components compatible?
  • Which requirements have to be met?
  • With which configuration can my production be optimized best?
  • How does it all get into my factory?

Central point  

 

The One Stop Shop is the central point of contact: from initial information to commissioning and ongoing optimization of the system. In the first step, users enter information about their production processes, their factory, and their goals. The configurator matches these with the software and hardware catalogue in which all FabOS components are listed. From this, the One Stop Shop creates an individual proposal for a FabOS overall system, considering the compatibility and the circumstances in the respective production.

The entire purchase process is mapped via the shop – from product selection and configuration to the shopping cart and the actual ordering and payment process. The delivery and commissioning are carried out both by the shop and by the FabOS partners.

 

One Stop Shop enables continuous improvement of the user's own production 

On a voluntary basis, FabOS users can share and visualize data with each other and with the One Stop Shop. This offers numerous opportunities for optimizing their own production. In addition, the shared data can continuously improve both the configurator and their own FabOS: for example, the configurator suggests suitable new services. This mechanism opens up new potential for users of FabOS, which can be exploited with the help of suitable FabOS components. The development partners of FabOS recognize new needs based on the data released to them and develop new software services in a targeted manner.

 

FabOS-ecos - More than just a shop 

FabOS-ecos, the companion platform where the One Stop Shop resides, includes:

  • Configuration
  • Handling of the purchase process
  • Continuous implementation
  • Optimization
  • Blog
  • Catalog of all partners in the consortium with the respective core competencies that they bring to the project

Classification into the overall FabOS project

 

The One Stop Shop is the central information and sales channel for the result of the overall project. It brings the results of the research project to industry and helps to make them usable in practice.

The One Stop Shop offers its users low-threshold access to FabOS – in-depth IT knowledge is not required. In addition, it is able to individually optimize the configuration of the operating system for production based on data.

 

Article: FabOS-ecos: Services for users, software and AI-provider 

FabOS-ecos is a comprehensive portal with services for users of FabOS, but also for the providers of software, AI components and FabOS itself. It enables networking, the development of a community and more.  

 

innoecos as base for FabOS-ecos  

As the base for FabOS-ecos, a software is used that has already been employed successfully in numerous clusters and funding projects in industry and companies: innoecos. The flexible collaboration platform is particularly characterized by the fact that it enables groups to work together and network. Thanks to a consistent and comprehensive role and rights concept, it offers the possibility of sharing data and information securely and in a differentiated manner. In addition, the platform can be adapted to customer requirements and various use cases.  

 

Optimized for IIoT-applications  

innoecos has been developed further for IIoT applications and enables communication between humans and machines as well as between machines, without restricting the basic functions mentioned above, such as data security and the role and rights concept. This opens up valuable opportunities for FabOS users to optimize their own business processes.  

 

Use Case Inline-quality control  

If the systems of a production are operated with FabOS, the data collected by the systems about tools and workpieces can initially simply be transferred to our secure, independent IIoT cloud via an interface. The physical tools and workpieces receive their digital counterpart - the so-called digital twin (asset administration shell). This data record can be continuously filled with data over the life cycle of the tool or workpiece (e.g. geometry data, process data, type data, etc.). Ideally, this happens not just in a single company, but across all parties involved in a value chain - for example the manufacturer of a tool and the user who uses the tool to manufacture their own products.  

In this example, both parties would first collect data, feed it into the cloud and decide individually which of the data the other party is allowed to use for its own further processing. This way, the company's know-how remains protected and each partner retains sovereignty of their own data. 

 

Analyse data – improve processes  

The existing data can then be analyzed using machine learning and AI components, which enables a large number of optimizations in planning and production, but also e.g. in customer service and maintenance. For example, a tool manufacturer would now find out what exactly their customers do with the tools and could use this to improve their own products or even implement a new business model such as pay-per-use. The user of the tools could check their quality directly on the basis of the data and gain knowledge about how long the tools can actually be used; thus, maintenance of the systems could be planned more precisely. If an end customer later has a complaint, the data would immediately provide information on whether a product left the factory in perfect condition or not.  

 

This is an overview of the advantages:  

  • Increase of productivity  
  • More efficient use of resources and personnel as well as improvement of product development and production  
  • Strengthening the customer service and reducing the effort in quality assurance  
  • Access to data along the entire value chain or from the value network 
  • Access to data from application technology  
  • Simplified data exchange thanks to the use of a standardized interface across all companies involved  

 

Usage of FabOS as entry point to IIoT  

With FabOS, users become part of a growing community and create the conditions to benefit from the possibilities of the Industrial Internet of Things. FabOS-ecos as a platform paves the way for this.  

 

If you are further interested in IIoT applications, you can find out more here.  

 

Author: Theresa Höhn

Company: inno-focus digital gmbh

Created by l.demes94 13.04.2022 10:13.

Modified by t.hoehn2 29.04.2022 13:46.

Short Summary

This article compares popular workflow orchestration frameworks based on their way of defining workflows, supported programming languages, available toolbox and scalability. 

 

Article

Introduction

A workflow is loosely defined as an organized pattern of activities. There are many related terminologies, depending on the domain the workflow is applied to. In the following, the elements of a workflow are called tasks. An example workflow is presented in the following figure. 

The figure shows a workflow for packaging and shipping a product, or recycling said product, based on the results of a measurement (and the subsequent processing of the measurement data). Some of these tasks can be carried out by automated services, others manually by humans. Such workflows (in some domains also called processes) are necessary in order to have a standardized approach to decision making in an organization. 

 

Orchestration vs. Choreography

If we consider the implementation of automated workflows by software components, workflow orchestration and workflow choreography are two patterns which aim to assure the interaction of components and build a workflow, i.e., a sequence of tasks implemented as software components which accomplishes a given goal. Orchestration is a centralized approach in which one entity orchestrates the interaction between the components. The central orchestrator manages the interaction between the components explicitly. The components implement logic which can be seen as one working step, or task, in the workflow, but only interact with the orchestrator, not with each other. In this case, in order to make changes to the workflow, only the logic of the orchestrator has to be changed. Orchestration is represented below. 

 

 

Choreography is a decentralized approach where the components of a system implement not only the working steps, or tasks, but also the logic by which the workflow is defined. The components interact with each other, and parts of the workflow are implemented implicitly, scattered across the components. Choreography is represented below.

 

 

Orchestration and choreography are the two patterns used in decentralized (container-based) microservice architectures in order to assure modularity and ease of extensibility. Choreography is not necessarily an explicit design decision: it is currently the most used communication pattern between microservices and is in some cases adopted as the default solution without an explicit design decision. 

Both orchestration and choreography have their advantages and disadvantages. The central orchestrator allows an a priori overview (in many cases even a graphical one) of the workflow, and changes to the workflow need to be made only at this central place. On the other hand, the orchestrator can be a single point of failure, and this approach can lead to more communication overhead. Choreography allows for an easier extension of functionality, as new components can be added to the system with only localized changes (or in some cases no changes at all). These systems tend to have less overhead and do not rely on a central single point of failure; however, using a message broker somewhat erodes this advantage. There are tools which can create a graphical overview of a choreographed workflow, but only after the fact, reconstructed from traces or logs. 
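As a toy illustration of the two patterns (task and event names are invented): the orchestrator owns the control flow explicitly, while in the choreographed variant the same sequence emerges from publish/subscribe wiring alone:

```python
# --- Orchestration: one central function owns the sequence of tasks ---
def measure(item):  return dict(item, measured=True)
def package(item):  return dict(item, packaged=True)
def ship(item):     return dict(item, shipped=True)

def orchestrator(item):
    # The workflow is explicit and changeable in exactly one place.
    item = measure(item)
    item = package(item)
    return ship(item)

# --- Choreography: tasks react to events on an in-memory broker ---
class Broker:
    def __init__(self):
        self.subs = {}
    def subscribe(self, topic, handler):
        self.subs.setdefault(topic, []).append(handler)
    def publish(self, topic, payload):
        for handler in self.subs.get(topic, []):
            handler(payload)

broker = Broker()
log = []
# Each component knows only which event it consumes and which it emits;
# the overall workflow is implicit in these subscriptions.
broker.subscribe("order.created", lambda i: broker.publish("item.measured", dict(i, measured=True)))
broker.subscribe("item.measured", lambda i: broker.publish("item.packaged", dict(i, packaged=True)))
broker.subscribe("item.packaged", lambda i: log.append(dict(i, shipped=True)))

broker.publish("order.created", {"id": 1})
```

Both variants produce the same processed item; the difference is where the workflow logic lives.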

Choreography can be found in many modern microservice infrastructures. The decentralized nature of choreographed systems correlates well with the autonomous-small-teams approach to the development of microservices; besides the technical characteristics listed above, this may be one reason for its popularity. Orchestration, on the other hand, is the basis of most DevOps systems. Furthermore, it is also the basis of other xOps approaches like MLOps, DataOps or AIOps, which are all essentially workflow orchestration systems with pipelines specialized for machine learning, data science and AI related tasks. In the following, general-purpose orchestration tools are analyzed (even if some, like Argo, have their roots in *Ops). Specialized orchestrators for MLOps (e.g. Kubeflow) and DevOps (like GitHub Actions) are not considered in this section. 

Choreography in practice, when implemented in a microservice architecture, in most cases relies on some type of publish-subscribe communication pattern. As this is a decentralized approach, it is inherently characterized by the lack of a framework for explicit workflow management or of a central software component enabling the pattern; therefore, it is hard to compare solutions and implementations. Choreographed microservices, in most cases, use a messaging system (like MQTT, Redis Pub/Sub, AMQP, ZeroMQ, ROS or Kafka) to communicate with each other. 

Orchestration has one distinctive central component, the orchestrator, which explicitly manages the workflow. Examples of workflow orchestration systems are Apache Airflow, Argo, Uber Cadence, Camunda BPM, Netflix Conductor, Lyft Flyte, Apache NiFi and Camunda Zeebe. Since there is an explicit workflow definition, the way this workflow is described is important. (It is interesting to note that companies famous for their microservice-based approach, like Uber and Netflix, have chosen to develop orchestrators for their microservices.) Some orchestrators, like Camunda, NiFi and Zeebe, use a visual programming language (VPL) to define the workflow (these visual languages are, of course, serializable). Some of them, like Camunda and Zeebe, use standardized languages, in this case BPMN; others, like NiFi, use a proprietary VPL. Orchestrators which do not use a VPL define workflows in a programming language like Python (e.g. Airflow, Flyte) or Java (e.g. Cadence, Flyte), or in a JSON/YAML DSL (e.g. Argo, Conductor). 

In terms of horizontal scalability, most orchestrators support a (micro)services approach to scaling, but there is a difference in the level at which this happens. At one end of the spectrum is Apache NiFi, which distributes at the workflow instance level, meaning that the orchestrator does not make network calls directly (the implemented tasks might, but not the orchestrator) and scaling happens per workflow instance. At the other end of the spectrum are Zeebe, Conductor and Cadence, which distribute at the task level: the orchestrator only makes network calls, and each task is expected to be implemented independently of the orchestrator. Scaling happens both at the task level, as many workers can register to complete the same task, and at the orchestration level; Zeebe, for example, supports a built-in load-balanced scaling model at the orchestrator level. 
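The task-level distribution model can be sketched as follows (a toy in-memory queue, not any specific engine's API): the orchestrator only enqueues task activations, and any number of workers registered for a task type pull work from it:

```python
# Sketch of task-level scaling: the orchestrator publishes task
# activations to per-type queues; workers poll their task type, so
# "scaling out" a task simply means starting more workers for it.

from collections import deque

class TaskQueue:
    def __init__(self):
        self.queues = {}
    def push(self, task_type, payload):
        self.queues.setdefault(task_type, deque()).append(payload)
    def poll(self, task_type):
        q = self.queues.get(task_type)
        return q.popleft() if q else None

queue = TaskQueue()
results = []

def worker(name, task_type):
    """A worker polls its task type once; real workers would loop."""
    payload = queue.poll(task_type)
    if payload is not None:
        results.append((name, task_type, payload))

# The orchestrator publishes two activations of the same task type ...
queue.push("drill", {"order": 1})
queue.push("drill", {"order": 2})
# ... and two independent workers share the load.
worker("worker-a", "drill")
worker("worker-b", "drill")
```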

 

Workflow types: Pipeline vs. DAG vs. Generic Flow

In a strict sense, a pipeline can be considered the simplest form of a workflow, in which the components are chained together in a sequence without any option for bifurcation. Pipelines are popular in DevOps, as in many cases a pipeline is expressive enough to describe the intended workflow: if something goes wrong in a DevOps pipeline, notifying and stopping the workflow instance is the only reasonable action. (Some workflow engines which call their workflow a pipeline, e.g. Azure DevOps, are more permissive with the pipeline definition and allow some bifurcations.) 

A more complex approach to workflow definition than a simple pipeline is a DAG (Directed Acyclic Graph). All pipelines can also be expressed as DAGs. This approach models the workflow as a directed graph which, however, is not allowed to include cycles. This is the main limitation of DAGs: they are very popular for stream processing, where a cycle is not needed, but they cannot express complex workflows which include cycles. 
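Execution-wise, a DAG engine essentially performs a topological sort over the task graph. The following minimal sketch (plain Python, illustrative only) runs a task once all of its upstream dependencies have completed, and rejects cycles, which is exactly the limitation noted above:

```python
# Minimal DAG executor using Kahn's algorithm: tasks become ready when
# their in-degree drops to zero; if tasks remain unexecuted at the end,
# the graph contained a cycle and is rejected.

def run_dag(tasks, edges):
    """tasks: {name: callable}; edges: list of (upstream, downstream)."""
    indegree = {t: 0 for t in tasks}
    downstream = {t: [] for t in tasks}
    for up, down in edges:
        indegree[down] += 1
        downstream[up].append(down)
    ready = [t for t, d in indegree.items() if d == 0]
    order = []
    while ready:
        node = ready.pop()
        tasks[node]()                 # execute the task
        order.append(node)
        for nxt in downstream[node]:  # unlock dependents
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("cycle detected: not a DAG")
    return order

ran = []
dag = {name: (lambda n=name: ran.append(n)) for name in "ABC"}
order = run_dag(dag, [("A", "B"), ("B", "C")])   # a pipeline is a DAG too
```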

As there is no special naming convention for workflows which include cycles, we call them generic workflows in this article. A generic workflow can describe all workflows which can be described by pipelines or DAGs; furthermore, it can include cycles in its definition. BPMN (Business Process Model and Notation) is a standard for defining workflows (or processes, in BPMN terminology) in a visual manner which can be represented as an XML file. BPMN workflows can include cycles. 

 

Activiti 

Activiti is a “light-weight workflow and Business Process Management (BPM) Platform”. The project website can be found here 

 

Workflow definition 

The workflows are defined in BPMN, according to the BPMN standard, in XML format. There is no special tool provided for Activiti to define, view or edit a workflow visually (Activiti Designer has been deprecated).

Pre-defined Toolbox of tasks 

There is no predefined toolbox of ready-made tasks which can be used, other than the gateway logic defined by BPMN (OR, XOR, PARALLEL, …). 

Task Implementation 

In Activiti, tasks are implemented directly in Java and invoked as Java function calls from the BPMN engine instance. 

Microservices compatibility 

Activiti itself can be packaged as a microservice. However, as task implementations have to be done in Java, Activiti tasks are not out-of-the-box compatible with microservices. One could call different microservices from the Java code implementing the task. 

License 

Apache-2.0 License

 

Apache Airflow 

Apache Airflow is a platform to “programmatically author, schedule and monitor workflows”. The project website can be found here. 

 

Workflow definition 

The workflows are defined as a DAG using a Python SDK or a REST API. There is no visual editor of the DAG available. 

Pre-defined Toolbox of tasks 

There is a large selection of predefined tasks (instantiated using Operators or Sensors in Airflow) which deal with various jobs like reading/writing from/to databases, reading/writing to streams, applying transformations to the data, etc. 

Task Implementation 

In Airflow, tasks can be implemented as configurations for operators (e.g. the command for a Bash operator, which will be executed as a Bash script) or implemented in Python and called as a Python function call from the Airflow executor instance. 

Microservices compatibility 

Airflow itself can be packaged as a microservice. However, as task implementations have to be done in Python, Airflow tasks are not out-of-the-box compatible with microservices. One could call different microservices from the Python code implementing the task. 

License 

Apache-2.0 License 

 

Apache NiFi 

Apache NiFi is not explicitly aimed at workflow orchestration. It is defined as a “system to process and distribute data”. However, looking at its capabilities and features, it comes very close to a workflow engine. The project website can be found here. 

 

Workflow definition 

The workflows are defined as a directed graph, called a dataflow, using a graphical user interface and a custom visual programming language. The dataflow does not have to be acyclic; loops are explicitly permitted in NiFi. However, the purpose of a loop seems to be error correction rather than custom logic. 

Pre-defined Toolbox of tasks 

There is a large selection of predefined tasks (called Components or Processors in NiFi) which deal with various jobs like reading/writing from/to databases, reading/writing to streams, applying transformations to the data, etc. 

Task Implementation 

In NiFi, tasks are implemented in Java. The Java archive files (.nar files), written according to a detailed specification, along with a metadata description of the task, have to be made available to the NiFi instance on the system path. 

Microservices compatibility 

NiFi itself can be packaged as a microservice. However, as task implementations have to be done in Java, NiFi tasks are not out-of-the-box compatible with microservices. One could call different microservices from the Java code implementing the task. 

License 

Apache 2.0 License 

 

Netflix Conductor 

“Conductor is a workflow orchestration engine that runs in the cloud.” The project website can be found here

 

Workflow definition 

Workflows are defined in a JSON DSL, as DAGs. There is no visual editor for defining workflows. 

Pre-defined Toolbox of tasks 

There is no predefined toolbox of ready-made tasks which can be used. 

Task Implementation 

In Conductor, tasks are implemented as Java or Python applications (or over HTTP REST calls). The workers implementing a task are decoupled from the workflow engine itself and communicate with it over HTTP REST. 

Microservices compatibility 

Conductor itself can be packaged as a microservice. Furthermore, as task implementations are meant to be decoupled from the workflow engine, it is out-of-the-box compatible with microservices. 

License 

Apache 2.0 License 

 

Node-Red 

Node-Red is not explicitly a workflow orchestration engine; it describes itself as “low-code programming for event-driven applications”. However, looking at its capabilities and features, it comes very close to a workflow engine. The project website can be found here 

 

Workflow definition 

Although Node-Red is not explicitly a workflow orchestration system, it is very popular and its features bring it very close to one. Workflows are defined in a proprietary visual programming language and are stored as JSON files. 

Pre-defined Toolbox of tasks 

There is a large pre-defined toolbox and many community developed extensions. 

Task Implementation 

The Tasks in Node-Red are implemented as JavaScript code and run in Node.js. 

Microservices compatibility 

Node-Red itself can be packaged as a microservice. However, as task implementations have to be done in JavaScript, Node-Red Tasks are not out-of-the-box compatible with microservices. One could, however, call different microservices from the JavaScript code implementing the Task. 

License 

Apache License 2.0

 

Uber Cadence 

“Cadence is a distributed, scalable, durable, and highly available orchestration engine”. The project website can be found here 

 

Workflow definition 

Workflows are defined using a Java SDK. There is no visual editor for defining workflows, and workflows are defined as DAGs. 

Pre-defined Toolbox of tasks 

There is no predefined toolbox of ready-made tasks which can be used. 

Task Implementation 

In Cadence, Tasks are implemented as Java or Go applications (Python and .NET are under development). The Workers implementing a Task are decoupled from the Workflow engine itself and communicate with the Workflow Engine over HTTP REST. 

Microservices compatibility 

Cadence itself can be packaged as a microservice. Furthermore, as task implementations are meant to be decoupled from the Workflow engine, it is out-of-the-box compatible with microservices. 

License 

MIT License 

 

Argo 

The project website can be found here 

 

Workflow definition 

The workflows are defined as a DAG using YAML configuration files. There is no visual editor for the DAG available, but the DAG can be visualized at runtime. A special element of the Argo DAG is the option to iterate over a list of items as a loop; recursion is also supported (although the workflow is explicitly called a DAG).
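A minimal sketch of such a loop, loosely following the Argo `withItems` pattern (all names are illustrative):

```yaml
# Illustrative Argo Workflow: a step expanded once per item in a list.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: loop-example-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: print-item
            template: whalesay
            arguments:
              parameters:
                - name: msg
                  value: "{{item}}"
            withItems: ["a", "b", "c"]   # the step runs once per item
    - name: whalesay
      inputs:
        parameters:
          - name: msg
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["{{inputs.parameters.msg}}"]
```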

Pre-defined Toolbox of tasks 

There is no predefined toolbox of ready-made tasks which can be used. 

Task Implementation 

In Argo Tasks are implemented as (Kubernetes) services. 

Microservices compatibility 

Argo is by design intended for use in a Kubernetes environment and therefore supports containerization. The tasks (which in Argo are called steps) are implemented by Kubernetes services, hence Argo is out-of-the-box compatible with tasks implemented as microservices. Interestingly, the input and output of the tasks are passed via environment variables and the standard input/output. 
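A task container following this input/output pattern can be as simple as the following sketch (the parameter name is hypothetical):

```python
import os
import sys


def run_step(env: dict) -> str:
    """Compute this step's result from an input passed as an environment variable."""
    # Hypothetical input parameter injected by the workflow engine.
    part_id = env.get("PART_ID", "unknown")
    return f"processed:{part_id}"


if __name__ == "__main__":
    # The engine captures standard output as the step's result,
    # which downstream steps can then consume.
    sys.stdout.write(run_step(dict(os.environ)))
```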

License 

Apache 2.0 License 

 

Zeebe 

The project website can be found here 

 

Workflow definition 

The workflows are defined in BPMN, according to the BPMN standard, in XML format. There was a Zeebe Modeler tool for visually editing BPMN diagrams for Zeebe, but it was deprecated and replaced by a more generic BPMN modeler from the same company. 

Pre-defined Toolbox of tasks 

There is no predefined toolbox of ready-made tasks which can be used.

Task Implementation 

In Zeebe, Tasks are implemented as Java, Go, .NET or Python applications (or over gRPC calls). The Workers implementing a Task are decoupled from the Workflow engine itself and communicate with the Workflow Engine over gRPC. 

Microservices compatibility 

Zeebe itself can be packaged as a microservice. Furthermore, as task implementations are meant to be decoupled from the Workflow engine, it is out-of-the-box compatible with microservices. 

License 

Zeebe Community License Version 1.1 

 

Comparison

Name      | Workflow Definition | Toolbox | Task Implementation                | Tasks as Microservices out of the box | License
----------|---------------------|---------|------------------------------------|---------------------------------------|----------------------------
Activiti  | BPMN                | No      | Java                               | No                                    | Apache 2.0
Airflow   | Python              | Yes     | Python or Config                   | No                                    | Apache 2.0
NiFi      | Proprietary VPL     | Yes     | Java                               | No                                    | Apache 2.0
Conductor | JSON DSL            | No      | Java                               | Yes                                   | Apache 2.0
Node-Red  | Proprietary VPL     | Yes     | JavaScript                         | No                                    | Apache 2.0
Cadence   | Java                | No      | Java or Go                         | Yes                                   | MIT
Argo      | YAML                | No      | Any Kubernetes Service             | Yes                                   | Apache 2.0
Zeebe     | BPMN                | No      | Java, Python, Go, .NET or any gRPC | Yes                                   | Zeebe Community License 1.1

 

Conclusions

Looking at the comparison table, it becomes clear that most of the workflow orchestration engines are meant for fully automated processes. Workflows defined in configuration files are not targeted at generic use cases. It is interesting to observe the low-code movement and how it is gaining traction, with Node-Red and NiFi as flagship examples. Visual programming languages, as no-code or low-code platforms, are intended to let non-programming-savvy users achieve similar results as their programming-savvy counterparts. This is where BPMN excels, as one of the few standardized ways to represent generic business processes. It serves as a visual programming language to define a workflow in the case of Activiti and Zeebe; however, in contrast to NiFi and Node-Red, there is no predefined toolbox to interact with other systems. All implementations have to be custom-made, and without these implementations, Activiti and Zeebe have very limited applicability. This can be seen as a cooperation mechanism, where the roles of the people implementing the services and those defining the workflows are decoupled. Furthermore, the workflows are defined in a way which is standardized and used throughout enterprises everywhere, and the visual representation of the workflow is coupled with the actual workflow which the workflow engine executes. When compared to Activiti, Zeebe stands out through its treatment of microservices as first-class citizens. The drawback of Zeebe is its licensing model, which prohibits commercial exploitation in the form of a workflow orchestration service provider. 

 

Author: Dr.-Ing. Akos Csiszar

Firma: ISW Uni Stuttgart

Created by l.demes94 30.03.2022 08:05.

Modified by l.demes94 30.03.2022 08:12.

Short Summary

This article discusses the concepts of virtualization for digital representations and how the Industrie 4.0 Asset Administration Shell (AAS) can be used as a Digital Twin for the enablement of dynamic software deployment in FabOS.

 

Article

FabOS is based on the principles of an operating system (OS). An important task of operating systems is the management of system resources and the uniform access to these resources via standardized interfaces using a so-called Hardware Abstraction Layer (HAL). 

The IT infrastructure used on a factory floor can basically be divided into compute components and communication components. In addition, hybrid components resulting from the convergence of IT and OT components are already not uncommon today and will be common practice in the future. The FabOS domain model primarily distinguishes between hardware nodes (the IT and OT components) and software services (the production services, which encapsulate business functionality and data-driven technologies). Hardware nodes, which provide a runtime environment for software services, can take different basic forms in FabOS. In the following figures, software services are represented as dark blue dots, schematically showing their relationship to hardware nodes. One of the goals of FabOS is to create standardized patterns and libraries for this, and to utilize emerging standards that form a corresponding HAL. 

The resources and systems used in this way in networked production constitute cyber-physical production systems (CPPS). CPPS possess Self-X capabilities, which have already been discussed in a previous blog entry [1]. Schematically, a CPS or CPPS can be represented as in the following figure, with the layers and colors of the model as depicted. 

Machines thus possess more and more of their own computing capacity or have access to outsourced services in an edge or cloud infrastructure. 

 

Since the available resources are finite, various strategies are used to extend the system boundaries and to use external, distributed resources temporarily. In the industrial context, however, there are increased requirements and limits for this approach in order to ensure the reliability and security of the systems [4][5]. 

The key question here is how resources can be used optimally. Optimal use requires a trade-off between the available resources. The computing and communications infrastructure within the factory is not arbitrarily scalable and must also be optimally designed from a business point of view, i.e., it must have high utilization and availability. Infrastructure in "the cloud" is in turn dynamically scalable, but on the flip side is also subject to limitations. On the one hand, these can be the costs, which scale with the resources used. On the other hand, technical factors are relevant here, such as latencies and bandwidths, which may not be maintained in sufficient quality. The specific deployment-relevant software requirements and resource properties are discussed in detail in [6]. 

 

Flexible architectures of modern systems are primarily enabled by virtualization [4][5]. 

 

Virtualization can be understood in the following two ways: 

 

1. Virtualization as a virtual representation of an entity. A digital twin is a digital representation that is sufficiently detailed to meet the requirements of, and enable, one or more use cases. 

 

2. Virtualization through the use of virtualization technologies. Software-defined Everything (SdX) concepts are becoming more prevalent and are also appealing in industrial applications. We will address the specific concepts and use cases in future publications. 

 

This article focuses on the first point, which relates to the virtual representation of resources. Virtualization in the sense of a digital representation corresponds conceptually to a digital twin. 

 

The technical approach of a virtual representation, which FabOS chooses here among others, is the Asset Administration Shell (AAS), which continues the developments around RAMI 4.0. The AAS functions in this case as a digital representation and thus enables the implementation of an interoperable digital twin [2]. 

Interoperability is one of the most important aspects in today's complex landscape of distributed information technology systems. It can be broken down into four levels, which build on each other and affect different areas of a system or the interfaces between systems: 

 

  • Technical Interoperability 
    • Describes the ability to transfer data from one system to another 
    • Connectivity 
    • Hardware Interfaces 
    • Service Interfaces & API 
    • Communication protocols 
    • This part is mostly solved by established technical standards, which FabOS is adhering to and integrating.
    • Part of these standards are the technical specifications of the AAS, especially its architectural principles (see functional view) and service infrastructure 

 

  • Syntactical Interoperability 
    • Designates the ability to identify individual (semantically definable) information units and data structures in the transferred data and extract them for further processing. 
    • Here FabOS is also using established approaches 
    • The AAS provides a meta information model and defines the structure for AAS submodels

 

  • Semantic Interoperability 
    • Indicates the ability to interpret the extracted units of information semantically correctly. 
    • This level of interoperability is primarily targeted by the AAS concept and will be described in the following

 

  • Organisational & Process Interoperability 
    • Describes the ability to organize interacting processes effectively and efficiently. 
    • The processes in FabOS are centered around: 
      • Holistic management of the factory infrastructure (OT + IT) 
        • Onboarding and identity management
        • Resource (compute, storage, bandwidth) management 
        • Monitoring of resource state and health (QoS) 
      • (AI) Service Lifecycle Management 
        • Development 
        • Deployment 
        • Operation 
      • Lifecycle Management of Datasets and AI models 
        • Information about which data has been used and under which circumstances an AI model has been trained needs to be traceable 
        • Models need to be retrained and versioned, which also has to be traceable and possibly follow specific regulations depending on the use case or area of application (e.g., GxP) 

The AAS focuses on the level of semantic interoperability with the goal of enabling the fourth level of organizational and process interoperability. Here, machines and services provide their self-description. 

For this purpose, the AAS provides a uniform meta information model [2] that defines a common information structure on the basis of which the self-description is implemented. In addition, a standardized API serves to provide uniform interaction patterns and access mechanisms for data and functions [3]. 
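To make this concrete, a self-description submodel can be pictured as a uniformly structured set of semantically annotated elements. The following sketch mimics that structure with plain Python dicts; the identifiers, property names, and semantic IDs are illustrative, not taken from a standardized submodel template:

```python
# Illustrative self-description of a hardware node, loosely mimicking the
# AAS notion of a submodel with semantically annotated elements.
# All identifiers below are hypothetical.
node_capabilities_submodel = {
    "idShort": "NodeCapabilities",
    "semanticId": "https://example.com/semantics/node-capabilities",
    "submodelElements": [
        {"idShort": "CpuCores", "valueType": "int", "value": 8,
         "semanticId": "https://example.com/semantics/cpu-cores"},
        {"idShort": "MemoryGiB", "valueType": "int", "value": 16,
         "semanticId": "https://example.com/semantics/memory-gib"},
        {"idShort": "GpuAvailable", "valueType": "boolean", "value": True,
         "semanticId": "https://example.com/semantics/gpu-available"},
    ],
}


def get_value(submodel: dict, id_short: str):
    """Look up an element's value by its idShort."""
    for element in submodel["submodelElements"]:
        if element["idShort"] == id_short:
            return element["value"]
    return None
```

The semantic IDs are what lifts this beyond syntactic interoperability: two parties agreeing on the same semantic reference can interpret the value identically without prior bilateral coordination.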

 

Once the first two layers of interoperability have been overcome, the standardized and freely extensible submodels can provide uniformly structured and unambiguously semantically defined information. In this way, the capabilities of hardware nodes and the requirements of software services for these and other factors can be described in an interoperable manner. 
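Once capabilities and requirements are uniformly structured, matching a service against candidate nodes becomes a simple comparison. A naive sketch, in which all names, values, and the matching rule are assumptions for illustration:

```python
# Naive matching of a software service's requirements against the
# described capabilities of hardware nodes (illustrative values only).
nodes = {
    "edge-node-1":  {"cpu_cores": 4,  "memory_gib": 8,   "gpu": False},
    "edge-node-2":  {"cpu_cores": 8,  "memory_gib": 16,  "gpu": True},
    "cloud-node-1": {"cpu_cores": 32, "memory_gib": 128, "gpu": True},
}

service_requirements = {"cpu_cores": 8, "memory_gib": 16, "gpu": True}


def suitable_nodes(requirements: dict, candidates: dict) -> list:
    """Return the names of all nodes whose capabilities satisfy the requirements."""
    result = []
    for name, caps in candidates.items():
        if (caps["cpu_cores"] >= requirements["cpu_cores"]
                and caps["memory_gib"] >= requirements["memory_gib"]
                and (not requirements["gpu"] or caps["gpu"])):
            result.append(name)
    return result
```

A real deployment decision would of course weigh additional factors such as latency, utilization, and cost, as discussed above; the point here is only that interoperable descriptions make such logic straightforward to automate.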

This information can then be passed on to the fourth level of interoperability, i.e., processes and business logics, in order to network more easily across sites and companies and to be able to exchange information in a way that adds value. The focus of FabOS here, however, is primarily within the factory and in the optimal provision of services, taking into account the business and technical requirements of a particular production service of AI-supported applications. The infrastructure required for this is based in part on the formal standards and the informal technical documents of the Industrie 4.0 Platform, which are currently being translated into international standards. 

In terms of use cases, FabOS therefore primarily focuses on the infrastructure and the data and information that can be obtained from the underlying OT and IT components. This is intended to facilitate the development of use case-specific business applications so that human resources can devote themselves to creative and value-adding tasks. 

 

[1] Self-X and autonomy of intelligent production systems 

[2] Barnstedt E, Bedenbender H, Billman M, et al. Details of the Asset Administration Shell - Part 1 - The exchange of information between partners in the value chain of Industrie 4.0 (Version 3.0RC01). Federal Ministry for Economic Affairs and Energy (BMWi), Berlin; 2020.

[3] Bader S, Berres B, Boss B, et al. Details of the Asset Administration Shell - Part 2 - Interoperability at Runtime – Exchanging Information via Application Programming Interfaces (Version 1.0RC01). Federal Ministry for Economic Affairs and Energy (BMWi); 2018.

[4] Stock D, Schneider M, Bauernhansl T. Towards Asset Administration Shell-Based Resource Virtualization in 5G Architecture-Enabled Cyber-Physical Production Systems. Procedia CIRP. vol. 104. pp. 945–950. Jan. 2021. 

[5] Stock D, Bauernhansl T, Weyrich M, et al. System Architectures for Cyber-Physical Production Systems enabling Self-X and Autonomy. 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vienna, Austria. 2020. vol. 1. pp. 148–155.

[6] Schneider M, Sophia M, Stock D, Bauernhansl T. Software Deployment in Future Manufacturing Environments: A requirements analysis. Submitted to CIRP CMS 2022 - 55th edition of the CIRP International Conference on Manufacturing Systems. 2022. 

 

Author: Daniel Stock

Firma: Fraunhofer IPA 

 

Created by l.demes94 16.03.2022 08:51.

Modified by l.demes94 18.03.2022 10:02.

Short Summary

This article describes the procedure for eliciting requirements in the form of use cases. In addition, the use of morphological matrices per use case is described. The resulting typification enables better handling of the complex overall topic. 

 

Article

Goal and purpose

Without requirements elicitation, we move disoriented through a labyrinth in which requirements are scattered: we obtain neither complete clarity and transparency of the requirements nor a view of a common, coherent content map. Particularly in IT projects with a wide variety of technological approaches, such as FabOS, requirements elicitation lays the foundation for discussion between the partners, for structuring, and for joint alignment of the individual projects. 

The chosen method of use cases offers a number of advantages: They are easy to understand as well as relatively simple to create and compare using a uniform scheme.  

The description schema focuses on the representation of the user's view of a system. This understanding is relevant so that the future system fulfills its purpose in a very specific context and supports the system user in his tasks. All relevant scenarios that are of importance in the handling of a user task are specified. Concrete technical solutions are not described. 

Owing to the large number of use cases created, each use case is supplemented by a process-, data-, and technology-related typification based on a morphological matrix, which enables simple and quick orientation within this use case landscape. 

 

Approach

The following approach was taken for FabOS: 

Figure 1: Requirements elicitation process

 

Use case description - structure and summary 

@1 Uniform schema for the documentation of use cases 

Goal 

Consistent description of use cases for FabOS requirements backlog. 

Approach 

Definition of a uniform scheme for describing the content of a use case 

Result 

Uniform description scheme for FabOS use cases 

 

Figure 2: Description structure for use case

 

For further detail, user stories are added to the use cases as needed. 

 

@2 Description of the partner-specific use cases 

Goal 

Description and summary of all use cases for FabOS requirements backlog. 

Approach 

Description of the use cases by FabOS project partners 

Result 

Uniformly described use cases and summarized presentation of the provided use case descriptions 

 

Figure 3: Summary representation of the use cases 

 

Morphological matrix for characterization of use cases 

@3 Development of a morphological matrix to characterize the use cases 

Goal 

Brief characterization of the use cases to identify similar use cases and to quickly convey the content focus and interrelationships of a use case 

Approach 

Compilation of characteristics and characteristic values (incl. their definition) that sufficiently represent the FabOS subject context. 

Result 

Matched morphological matrix 

 

Figure 4: Features characterizing a use case 

 

Data and process-related features characterize the use case from a process and end-user perspective. 

Technology-related features characterize the use case in terms of which technological enablers are planned / necessary for this. 
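The idea of comparing use cases via their morphological characterization can be sketched as follows; the feature names and values are invented for illustration and do not reproduce the actual FabOS matrix:

```python
# Each use case is characterized by one value per feature of the
# morphological matrix (features and values are illustrative).
use_cases = {
    "predictive_maintenance": {"process_type": "production",
                               "data_source": "sensor_stream",
                               "technology": "machine_learning"},
    "quality_inspection":     {"process_type": "production",
                               "data_source": "camera_images",
                               "technology": "machine_learning"},
    "order_scheduling":       {"process_type": "planning",
                               "data_source": "erp_data",
                               "technology": "optimization"},
}


def similarity(a: dict, b: dict) -> int:
    """Number of features for which two use cases share the same value."""
    return sum(1 for feature in a if a[feature] == b[feature])


def most_similar(name: str, cases: dict) -> str:
    """Find the use case sharing the most feature values with the given one."""
    others = [n for n in cases if n != name]
    return max(others, key=lambda n: similarity(cases[name], cases[n]))
```

Such a shared-value count is the simplest possible comparison; it already supports the kind of quick orientation and grouping by process type described above.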

 

@4 Characterization of the partner-specific use cases 

Goal 

Collection of the data-related, process-related and technology-related characterizations of all use cases 

Approach 

Characterizations of the use cases by FabOS project partners 

Result 

Characterized use cases 

 

Figure 5: Example of data-, process- and technology-related characterization of a use case 

 

Comparison and evaluation of the provided short characterizations of the use cases. 

@5 Analysis and documentation of the use cases and their requirements 

Goal 

Collection of the data-related, process-related and technology-related characterizations of all use cases 

Approach 

Characterizations of the use cases by FabOS project partners 

Result 

Characterized use cases 

 

Figure 6: Cross-use case representation of process-, data-, and technology-related characteristic designations

 

Figure 7: Evaluation of the characteristic features of the use cases using the example of the process type: Processes for developing and planning products and production with regard to process- and data-related features. 

 

Summary of all results 

@6 Summary of functional and non-functional requirements 

Goal 

Summary of the functional and non-functional requirements as well as the characterization of all use cases 

Approach 

Creating deliverable 

Result 

Deliverable - systematically presented collection of requirements 

 

Conclusion  

This paper describes the procedure for eliciting requirements in the form of use cases. In addition, the use of morphological matrices per use case is described. The resulting typification enables better handling of the complex overall topic.  

 

The result is a value-free characterization of the elaborated use cases that maps all relevant aspects regarding the process context, the data used and the enabling technologies for a comparative view. This provides the viewer with a quick orientation with regard to the process-, data- and technology-specific focal points in the use cases. 

 

Another goal was to enable the creators to complete these morphological matrices for their use cases as quickly and with as little effort as possible. 

 

Based on these morphological descriptions of the use cases, analyses can be performed on a process-type-specific basis (planning, production, logistics, supporting processes) and differences can be identified with regard to the use of data and the respective underlying technologies. 

 

Author: Jürgen Matthes

Firma: ASCon

Created by l.demes94 02.03.2022 09:10.

Modified by l.demes94 18.03.2022 10:03.

Short Summary

This article is about Open Source as driver for innovation in the domain of Industrie 4.0 and some important aspects of software licenses. 

 

Article

Since the last century when computers first found their way onto shop floors, the basic concepts and computational paradigms used in industrial automation haven’t changed much. Most innovations have been confined to vendor-specific, proprietary ecosystems. With the advent of the internet and the digital economy, supply chains and markets did change and consequently industrial automation is in the midst of a transformational process, often characterized by the term Industrie 4.0. Relying on closed and proprietary environments may not be the best way to move forward with this transformation. In fact, it would limit the new production capabilities and business models that become possible with open and highly interconnected systems. Those who are first to recognize the benefits of open collaboration and innovation will gain a competitive advantage through the flexibility that results from open source platforms and participation in active ecosystems of collaboration. 

  

The platform economy is built on Open Source 

Platforms are central in today’s digital economy; in fact, seven out of the ten most valuable companies in the world are big players in the platform economy [1]. It is interesting to note that five out of ten of these companies are strong contributors to Open Source [2]. Unfortunately, none of them is European.  

We believe that the potential for European leadership in domains such as Industrial Automation, Robotics, and AI is highly connected to investments in Open Source Software and Open Collaboration. Open, industry-grade software platforms will allow organisations to collaborate on core technologies and compete on value-added products and services building on Open Source. Open Source enables Open Innovation and Open Collaboration and creates a huge opportunity to strengthen European leadership in Industrial Automation.  

  

Doing Open Source 

The definition of Open Source (OS) implies that source code is distributed under a license in which the copyright holders grant others the power to run, access, modify and re-distribute the software to anyone and for any purpose [3], thus enabling development under an open, collaborative model. Over the last decades, the Open Source Software (OSS) development model has gained more and more popularity around the globe. Nowadays, Open Source components are the core building blocks of application software in most innovative domains, providing developers with an ever-growing selection of off-the-shelf possibilities that they can use for assembling their products faster and more efficiently [4]. Whether you consume or contribute to open source, you need to have a sound knowledge of licensing and IP (Intellectual Property) management. If you are using Open Source in your commercial products, you will need to understand the license conditions of all the third-party libraries you are linking to. Things get even more complicated when contributing to Open Source projects or open sourcing a whole project. Depending on your business model you have to carefully consider which license is the best fit for your plans. 

  

The License Spectrum 

The license spectrum ranges from permissive licenses (e.g. MIT, BSD, Apache) to proprietary licenses which typically don’t allow modification or distribution of the software. 

In between, there are the “Copyleft Licenses”. In contrast to permissive licenses, these are considered protective or reciprocal as they impose more constraints on the users or integrators of the software. Within this share of the spectrum we find both strong (e.g. GPL, AGPL) and weak (e.g. EPL, MPL) copyleft licenses. Both allow free distribution and modification of the software. The devil is in the details: using strong copyleft licensed code usually forces you to put your own code under that same license whereas weak copyleft requires you to only publish changes to the original code under the original license [5].  

In other words, weak copyleft licenses allow free distribution and modification of the software (also in proprietary products) but require that changes made to the original code stay under the original license. Thus, weak copyleft licenses foster collaboration and innovation by ensuring that improvements to the open source project stay open while still allowing its use in your commercial product or providing added-value services on top. 
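These license-category consequences can be illustrated with a deliberately simplified dependency check. The categorization below is coarse and the rule is a caricature of real legal analysis (linking semantics, license versions, and exceptions all matter in practice), not legal advice:

```python
# Deliberately simplified license categories; real analysis is far more nuanced.
LICENSE_CATEGORY = {
    "MIT": "permissive",
    "BSD-3-Clause": "permissive",
    "Apache-2.0": "permissive",
    "MPL-2.0": "weak-copyleft",
    "EPL-2.0": "weak-copyleft",
    "GPL-3.0": "strong-copyleft",
    "AGPL-3.0": "strong-copyleft",
}


def flag_dependencies(dependencies: dict) -> list:
    """Return dependencies whose strong-copyleft license may force
    relicensing of a proprietary product that incorporates them."""
    return [name for name, lic in dependencies.items()
            if LICENSE_CATEGORY.get(lic) == "strong-copyleft"]
```

For example, `flag_dependencies({"libfoo": "GPL-3.0", "libbar": "MIT"})` would flag only `libfoo`; the weak-copyleft and permissive dependencies merely oblige you to keep their own code (and, for weak copyleft, your changes to it) under the original license.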

 

Open Source in FabOS 

The FabOS consortium is committed to contributing to an open Industry 4.0 ecosystem, i.e. a common platform that provides the infrastructure for AI-enabled industrial automation solutions and allows project partners to exploit this platform for their added-value applications and services.  

 

[1] https://www.linkedin.com/pulse/world-becoming-platform-economy-erich-joachimsthaler-ph-d-/ 

[2] https://www.infoworld.com/article/3253948/who-really-contributes-to-open-source.html 

[3] "The Open Source Definition (Annotated)." Open Source Initiative, https://opensource.org/osd-annotated 

[4] "The Ultimate Guide to Open Source Security." WhiteSource, https://resources.whitesourcesoftware.com/white-papers/the-complete-guide-on-open-source-security 

[5] https://en.wikipedia.org/wiki/Copyleft#Strong_and_weak_copyleft 

 

Author: Marco Jahn

Firma: Eclipse Foundation

Created by l.demes94 16.02.2022 08:43.

Modified by l.demes94 18.03.2022 10:04.

Short Summary

Autonomous production is enabled by data-driven technologies, partly in the form of AI. The autonomous capabilities of Cyber-Physical Production Systems (CPPS) are determined by their self-x capabilities. This article gives an overview of the self-x capabilities in CPPS and puts them into relation to defined levels of autonomy.

 

Article:

Cyber-physical production systems (CPPS) are CPS, which are applied in manufacturing environments to carry out production-related tasks. One of the first definitions of CPS has been framed by Lee, who defined CPS already in 2006 as integrations of computation with physical processes [1]. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. This definition is still very much related to industrial control systems, but already reflects the networked nature of CPS components. With further technical progress, the definition of CPS also evolved further. The integrated research agenda Cyber-Physical Systems (agendaCPS) additionally puts emphasis on the global connectivity through the internet, societal impact, a partly autonomous nature, context awareness, and cross-domain applications of CPS, like medical applications, transportation, or smart buildings [2].

 

CPS underwent and still undergo a constant evolution, partly driven by technological progress. The VDI has discussed the opportunities and benefits of CPS in automation based on the inherent nature and abilities of CPS [3], which goes beyond the definitions of Lee [1] and Geisberger and Broy [2] and attributes additional capabilities regarding local intelligence to CPS. This emphasizes the autonomous nature based on the self-x capabilities of CPS needed to adapt to unforeseen environmental conditions and requirements, known as emergence. A refined evaluation model is also available for CPS [4]. The model utilizes a set of system characteristics that define specific abilities and performance indicators. It is suitable for the characterization of cyber-physical technologies, thereby enabling a technological assessment.

 

Reference [5] provides a comprehensive survey on various concepts that affect the context and evolution of CPPS and lists robustness, autonomy, self-organization, self-maintenance, self-repair, or generally self-x, among others, as the expected capabilities of CPPS. These self-x capabilities of CPPS derive from the self-CHOP principles of the autonomic computing paradigm described by IBM. Originally, self-CHOP encompassed the eponymous properties of self-configuration, self-healing, self-optimization, and self-protection [6]. Organic computing [7], which takes inspiration from the biological paradigm of self-organization of organisms, extends these primary self-x capabilities with self-explaining, respectively self-descriptive, abilities of systems and their components, but is not restricted to only these self-x properties.

 

A selective analysis of the available CPS-centric literature allows a comprehensive overview of the essential self-x capabilities that an ideal CPS should provide [7]–[12]. These capabilities can be put in a hierarchy emphasizing the dependency of self-x capabilities building upon each other while growing in complexity, as depicted in the following figure:

 

The self-x capabilities are grouped into functional blocks and ordered by increasing self-x complexity. The complexity and capabilities of the various self-x stages increase progressively while building upon each other. Self-x capabilities are directly related to the autonomous features of systems. The working paper “Technology Scenario ‘Artificial Intelligence in Industrie 4.0’” proposes five levels of autonomy for manufacturing systems [13]. This classification is inspired by the SAE J3016 standard, which defines six levels of driving automation, going from Level 0 (no automation) to SAE Level 5 (full vehicle autonomy), with the human driver progressively transferring increasingly complex control tasks to the vehicle [14]. Similarly, a human operator delegates more and more control and decision tasks to the machine at increasing autonomy levels.
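The notion of self-x capabilities building on one another can be sketched as a small dependency graph with a transitive prerequisite check. The concrete edges below are a plausible illustration, not taken verbatim from the cited literature:

```python
# Illustrative dependency graph: each self-x capability builds on others.
# The edges are a plausible sketch, not an authoritative model.
PREREQUISITES = {
    "self-description": [],
    "self-diagnosis": ["self-description"],
    "self-configuration": ["self-description"],
    "self-healing": ["self-diagnosis", "self-configuration"],
    "self-optimization": ["self-diagnosis", "self-configuration"],
    "self-organization": ["self-healing", "self-optimization"],
}


def required_capabilities(capability: str) -> set:
    """Transitively collect every capability the given one builds upon."""
    needed = set()
    stack = list(PREREQUISITES[capability])
    while stack:
        dep = stack.pop()
        if dep not in needed:
            needed.add(dep)
            stack.extend(PREREQUISITES[dep])
    return needed
```

Checking a CPPS against such a graph makes explicit that, for instance, a system cannot claim self-organization without first providing the monitoring and configuration abilities lower in the hierarchy.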

 

Recent applications of autonomous systems are moving into levels 2 and 3. Levels 4 and 5 are not yet attainable because current data-driven technologies, in particular AI algorithms, do not satisfy industrial requirements regarding safety and reliability. This is mainly due to the lack of explainability of AI algorithms [15].

 

According to [13], the levels of autonomy for CPPS depicted in Fig. 1 can be described as follows:


•    Level 0 – No autonomy – human beings have full control without any assistance.
  
 

Self-description is the ability to (formally) describe oneself using a defined language L.
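As a minimal sketch of such a formal self-description, the snippet below serializes a component's identity, capabilities, and interfaces into a machine-readable document. The schema, identifiers, and endpoint are hypothetical; real systems would use a standardized language such as the Asset Administration Shell rather than an ad-hoc JSON structure:

```python
import json

# Hypothetical, minimal self-description schema: a component states its
# identity, capabilities, and interfaces in a machine-readable form.
def self_description(component_id, capabilities, interfaces):
    """Serialize a formal self-description of a system component."""
    return json.dumps({
        "id": component_id,
        "capabilities": sorted(capabilities),
        "interfaces": interfaces,
    }, indent=2)

desc = self_description(
    "drill-station-07",                                    # hypothetical id
    capabilities={"drilling", "self-monitoring"},
    interfaces={"opcua": "opc.tcp://drill-station-07:4840"},  # hypothetical endpoint
)
print(desc)
```

Any other component that understands the shared schema can parse this document and reason about what the machine offers, which is the point of a defined language L.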

 

 

•    Level 1 – Assistance with respect to select functions – human beings have full responsibility and make all decisions.
  
 

The basis for a deeper perception of one's own state is to record and know that state by means of monitoring, which sets the state of the system's own resources (reflection) in relation to the environment or to other systems. Part of this self-reflection is the ability to diagnose and to assess the consequences of an action. In this context, self-reflection on self-reflections, in connection with a system's ability to remember, is also described as the ability to develop consciousness.
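The monitoring-plus-memory idea can be sketched as follows. This is an illustrative toy, not an implementation from the article; the class name, the load threshold, and the trend metric are all invented for the example:

```python
# Hypothetical self-reflection sketch: the component records its own state
# over time (memory) and relates it to an environmental reading.
class SelfReflectingComponent:
    def __init__(self, nominal_load):
        self.nominal_load = nominal_load
        self.history = []  # remembered observations ("ability to remember")

    def observe(self, own_load, ambient_temp):
        # Diagnosis: relate the state of one's own resources to the environment.
        overload = own_load > self.nominal_load
        self.history.append((own_load, ambient_temp, overload))
        return overload

    def assess(self):
        # Reflection on past reflections: overload trend across remembered states.
        if not self.history:
            return 0.0
        return sum(1 for _, _, o in self.history if o) / len(self.history)

c = SelfReflectingComponent(nominal_load=100.0)
c.observe(80.0, 21.5)
c.observe(120.0, 23.0)
print(f"overload ratio: {c.assess():.2f}")
```

The `assess` step is the "reflection of reflections": it does not look at raw sensor values but at the component's own earlier diagnoses.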

 

 

•    Level 2 – Partial autonomy in clearly defined areas – human beings have full responsibility and define (some) goals.
 
 

Self-control and self-regulation, triggered by a recognized need for action, are primarily responsible for ensuring that the system can maintain a stable state. The ability to self-configure defines the scope of action within which self-regulation is possible.
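A classic minimal example of self-regulation is a two-point (hysteresis) controller: the self-configured band `[low, high]` is the scope of action, and switching the actuator is the recognized necessary action that keeps the state stable. All names and thresholds below are illustrative:

```python
# Hypothetical self-regulation sketch: a two-point (hysteresis) controller
# keeps a process variable inside a stable band. Self-configuration defines
# that band, i.e., the scope within which self-regulation may act.
class SelfRegulator:
    def __init__(self, low, high):
        self.low, self.high = low, high  # self-configured scope of action
        self.heating = False

    def step(self, temperature):
        # Recognized necessary action: switch the actuator to stay in the band.
        if temperature < self.low:
            self.heating = True
        elif temperature > self.high:
            self.heating = False
        return self.heating

r = SelfRegulator(low=18.0, high=22.0)
states = [r.step(t) for t in (17.0, 19.0, 23.0, 20.0)]
print(states)  # → [True, True, False, False]
```

Note the hysteresis: inside the band the actuator keeps its previous state, which avoids rapid switching around a single set point.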

 

 

•    Level 3 – Delimited autonomy in larger sub-areas – system warns if problems occur, human beings confirm solutions recommended by the system or functions at a fallback level.
 
 

The ability to self-adapt serves to reach an optimal operating state (self-optimization) under continually changing conditions and requirements, using self-generated instructions for action. The operating state can be optimized system-wide or locally. By acquiring new information or capabilities, the system can continuously undergo an evolutionary process through which it equips itself with new capabilities.
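Self-optimization under changing conditions can be illustrated with a greedy local search that re-adapts a setting whenever the cost landscape shifts. The cost functions and parameters here are invented for the sketch:

```python
# Hypothetical self-optimization sketch: the system adapts a setting by
# local search ("self-generated instructions for action") and re-optimizes
# whenever the operating conditions, i.e., the cost landscape, change.
def self_optimize(cost, x0, step=0.1, iters=100):
    """Greedy local search for a setting x that minimizes cost(x)."""
    x = x0
    for _ in range(iters):
        x = min((x - step, x, x + step), key=cost)
    return x

# Changing conditions shift the optimum from 2.0 to 3.0; the system
# re-adapts from its current setting instead of keeping a fixed one.
x = self_optimize(lambda v: (v - 2.0) ** 2, x0=0.0)
x = self_optimize(lambda v: (v - 3.0) ** 2, x0=x)
print(round(x, 2))  # → 3.0
```

The second call starts from the previously found optimum, mirroring continuous adaptation rather than one-off configuration.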

 

 

•    Level 4 – System functions autonomously and adaptively within defined boundaries – human beings can supervise or intervene in emergency situations.
 
 

The self-protection of a system is its ability to arm itself against threats or adverse effects that did not exist at design time. To this end, the system uses self-servicing, self-healing, or self-repair to restore individual system components or their interconnections in the event of unexpected disruption or failure, or to prevent such events altogether.
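A common self-healing pattern is a supervisor that detects failed components and restarts them. The sketch below is a deliberately simplified, hypothetical version of that pattern; in practice the repair action would be far more involved than flipping a flag:

```python
# Hypothetical self-healing sketch: a supervisor scans its components and
# restarts (repairs) any that report failure, restoring the system state.
class Component:
    def __init__(self, name):
        self.name, self.healthy = name, True

    def restart(self):
        self.healthy = True  # simplified repair action

def self_heal(components):
    """Restart every failed component; return the names of repaired ones."""
    repaired = []
    for c in components:
        if not c.healthy:
            c.restart()
            repaired.append(c.name)
    return repaired

parts = [Component("conveyor"), Component("gripper")]
parts[1].healthy = False     # unexpected failure at runtime
print(self_heal(parts))      # → ['gripper']
```

Run periodically, such a loop turns unexpected failures into transient disturbances rather than permanent outages.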

 

 

•    Level 5 – Autonomous functions in all areas including in fluctuating system boundaries – human beings need not be present.
 
 

The self-organization of a system is its ability to alter its internal structure without being driven by external control elements. This mainly serves to counteract emergence effects, i.e., the occurrence of unforeseen interactions in complex systems. External influences remain possible, but they are handled at a higher level: the system hides its internal complexity from the outside, and only steering influences are admitted.
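The flavor of self-organization can be hinted at with a decentralized coordinator election: every node applies the same local rule, and a structure emerges without any external controller. The rule (adopt the highest id, reminiscent of a bully election) and the ids are purely illustrative:

```python
# Hypothetical self-organization sketch: nodes agree on an internal
# structure (electing a coordinator) from a shared local rule alone,
# without an external control element.
def self_organize(node_ids):
    """Every node independently adopts the highest known id as coordinator."""
    coordinator = max(node_ids)
    return {node: coordinator for node in node_ids}

print(self_organize([3, 7, 2]))  # → {3: 7, 7: 7, 2: 7}
```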
 

Finally, and beyond the defined autonomy levels, lies the ability for replication and reproduction. Self-replication or self-reproduction is the ability to duplicate oneself or individual system components. A system that goes beyond these stages of autonomy is currently not achievable and would possibly meet the requirements of the technological singularity, a hypothetical point in time at which technological growth becomes uncontrollable and irreversible. According to the most popular version of the singularity hypothesis, called the intelligence explosion, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence [16].

 

[1]    E. A. Lee, “Cyber-Physical Systems - Are Computing Foundations Adequate?,” presented at the NSF Workshop On Cyber-Physical Systems:Research Motivation, Techniques and Roadmap, 2006.
[2]    E. Geisberger and M. Broy, “Living in a networked world - Integrated research agenda Cyber-Physical Systems (agendaCPS),” acatech - Deutsche Akademie der Technikwissenschaften e. V., Berlin, 2015.
[3]    K. D. Bettenhausen and S. Kowalewski, “Cyber-Physical Systems: Opportunities and Benefits from the Viewpoint of Automation,” VDI - The Association of German Engineers, Düsseldorf, 2013.
[4]    M. Weyrich, M. Klein, J.-P. Schmidt, N. Jazdi, K. D. Bettenhausen, et al., “Evaluation model for assessment of cyber-physical production systems,” in Industrial Internet of Things: Cybermanufacturing Systems, Cham: Springer, 2017.
[5]    L. Monostori, “Cyber-physical production systems: roots from manufacturing science and technology,” at - Automatisierungstechnik, vol. 63, no. 10, Jan. 2015.
[6]    P. Lalanda, J. A. McCann, and A. Diaconescu, Autonomic Computing: Principles, Design and Implementation. London: Springer, 2013.
[7]    R. P. Würtz, Organic Computing. Berlin, Heidelberg: Springer, 2008.
[8]    S. Jeschke, C. Brecher, H. Song, and D. B. Rawat, Eds., Industrial Internet of Things: Cybermanufacturing Systems. Cham: Springer, 2017.
[9]    D. Burmeister, B. Gerlach, and A. Schrader, “Formal Definition of the Smart Object Matching Problem,” Procedia Comput. Sci., vol. 130, pp. 302–309, Jan. 2018.
[10]    L. Gurgen, O. Gunalp, Y. Benazzouz, and M. Gallissot, “Self-aware cyber-physical systems and applications in smart buildings and cities,” in 2013 Design, Automation Test in Europe Conference Exhibition (DATE), 2013, pp. 1149–1154.
[11]    J. Bakakeu, F. Schäfer, J. Bauer, M. Michl, and J. Franke, “Building Cyber-Physical Systems - A Smart Building Use Case,” in Smart Cities, vol. 10, H. Song, R. Srinivasan, T. Sookoor, and S. Jeschke, Eds. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2017, pp. 605–639.
[12]    L. Monostori, B. Kádár, T. Bauernhansl, S. Kondoh, S. Kumara, et al., “Cyber-physical systems in manufacturing,” CIRP Annals - Manufacturing Technology, vol. 65, no. 2, pp. 621–641, Jan. 2016.
[13]    K. Ahlborn, G. Bachmann, F. Biegel, J. Bienert, S. Falk, et al., “Technology Scenario ‘Artificial Intelligence in Industrie 4.0,’” Bundesministerium für Wirtschaft und Energie (BMWi), Berlin, 2019.
[14]    On-Road Automated Driving (ORAD) committee, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles,” SAE International, 400 Commonwealth Drive, Warrendale, PA, United States, Jun. 2018.
[15]    B. Kosch, M. Heintel, D. Houdeau, W. Klasen, M. Ruppert, et al., “Handling security risks in industrial applications due to lack of explainability of AI results,” Federal Ministry for Economic Affairs and Energy (BMWi), 2019.
[16]    “Technological singularity,” Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/Technological_singularity

 

 

Author: Daniel Stock

Company: Fraunhofer IPA

Created by l.demes94 02.02.2022 09:12.

Modified by l.demes94 18.03.2022 10:01.