
Short Summary

Machine Learning (ML) applications and services are what make trained ML models such as deep neural networks useful. They can be hosted in the cloud or on individual devices at the edge. Building, deploying and maintaining these smart applications differs from standard application management because the evolving nature of AI enables the software to learn and improve over time. The following article presents the specific problems addressed by applied Machine Learning from an industrial perspective. The main guiding ideas are nowadays grouped under the common term "MLOps". Yes, but at the edge, please.

 

Article

When we talk about Machine Learning, we often refer to the process of training a model (e.g. a deep neural network) with batches of data that have been thoughtfully gathered, analysed, pre-processed and occasionally labelled: the training datasets. When training is complete, the trained model is evaluated against metrics specifically selected for the use case and measured on a dedicated test dataset. These help the user assess the quality of the model and the accuracy to expect from it on the task it has been trained for. If expectations are met, the model is usually serialised and stored on disk, ideally versioned in a model registry where future consumers can retrieve it along with all useful information there is to know about it. If you've made it that far, well done! But that isn't yet the end of the ML lifecycle.

 

In order to be used and perform their mission in the real world, ML models need applications to run in. Those applications are the pieces of code that operationalise the model and, in generic terms, consist of the following steps (see the sketch after this list):

 

  1. collect input data from sensors, cameras, microphones, databases, the internet, etc.
  2. pre-process the data into the shape required by the model
  3. run the inference, i.e. run the model with the previously prepared input data
  4. gather the model results or predictions
  5. do something with them, e.g. display or expose the prediction results or trigger some action.
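
The following minimal sketch illustrates these five steps end to end. The sensor, model and action are hypothetical stand-ins (a random frame, a dummy classifier), not real FabOS components:

```python
import numpy as np

# Hypothetical stand-ins for the real components; in production these
# would wrap a camera driver, a deserialised model, an actuator, etc.
def collect_input() -> np.ndarray:                 # 1. acquire raw data
    return np.random.rand(224, 224, 3)             # e.g. one camera frame

def preprocess(frame: np.ndarray) -> np.ndarray:   # 2. shape it for the model
    return frame[np.newaxis].astype(np.float32)    # add a batch dimension

def infer(batch: np.ndarray) -> np.ndarray:        # 3. run the model (dummy)
    return np.array([[0.1, 0.9]])                  # 4. predictions

def act_on(prediction: np.ndarray) -> None:        # 5. use the result
    print("predicted class:", int(prediction.argmax()))

act_on(infer(preprocess(collect_input())))
```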

 

The development of such applications commonly follows a DevOps path of updates, unit tests and releases. Well-known platforms such as GitHub or GitLab support collaborative work and provide clear, safe management of the application source code across the developer community. Yet this is to be dissociated from the ML model training and evaluation path, which the ML community has analogously named MLOps. Unlike developing source code, training ML models involves dealing with large amounts of data that Git-based platforms aren't designed to work with. However, some tools dedicated to handling the ML case have emerged. Based on the fundamentals of Git code versioning, they can also track all modifications and updates brought along by the ML training process, from dataset generation to model evaluation batches. One example experimented with in FabOS is Data Version Control (see https://dvc.org/).
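
As a flavour of how a DVC-versioned dataset can be consumed programmatically, here is a minimal sketch using the DVC Python API. The repository URL, file path and revision tag are hypothetical placeholders:

```python
import dvc.api

# Hypothetical repository, path and revision; DVC resolves the actual
# data location (e.g. object storage) from the Git-tracked metadata.
with dvc.api.open(
    "data/train/labels.csv",
    repo="https://example.com/ml-project.git",
    rev="v1.2",                       # Git tag, branch or commit
) as f:
    header = f.readline()
```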

 

Ultimately, the DevOps and MLOps processes need to hook into each other in order to create a ready-to-deploy ML application. We can easily see two obvious cases in which it is time to rebuild an ML app: after a new release of the application source code, or after a new release of the ML model. Things become a little more mixed up when the user decides, as is their right, to operate different models with the same generic application. Having an HTTPS REST API able to generically serve any TensorFlow image classification model is a wise approach to centralising the application development effort around a common source code base. This also means that the best time to join a trained model and the application code is at deployment time.

 

The FabOS MLOps initiative proposes to solve that problem by designing a deployment service based on Docker container technology. In essence, it consists of mounting the serialised ML model into a pre-built Docker base image of the application. Both components, the model and the base application image, are picked up from their respective stores and registries; the very nature of Docker technology then makes it easy to deploy and orchestrate the newly built ML application container on any system hosting a container runtime such as Docker, Docker Compose or Kubernetes.
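
A minimal sketch of this mounting step with the Docker SDK for Python follows. The image name, registry and paths are illustrative assumptions, not the actual FabOS service:

```python
import docker

client = docker.from_env()

# Hypothetical image and model-store paths: the serialised model is
# mounted read-only into the pre-built application base image.
container = client.containers.run(
    "registry.example.com/ml-app-base:1.0",
    detach=True,
    volumes={
        "/srv/model-store/classifier/v3": {"bind": "/app/model", "mode": "ro"}
    },
    environment={"MODEL_PATH": "/app/model"},
)
```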

 

 

 

Several options then present themselves regarding deployment:

  • make the ML app available to potential users, for instance in a shared container registry or a marketplace; the users are then responsible for running it on their own infrastructure
  • deploy the ML app in the cloud, for example as a hosted web application
  • deploy the ML app at the edge, remotely and over-the-air, to target devices managed from a centralised administration layer.

 

The latter case corresponds most closely to the industrial scenario for which FabOS aims to provide a solution. It requires, however, a robust, secure and scalable device management system able to administer the fleet of connected devices. Additionally, it shall grant the user the possibility to know at any time where which ML model and application versions are deployed, what the status of each deployment is, and how its ML performance evolves over time.

 

Author: Rémi Ang

Company: SOTEC GmbH & Co KG


Short Summary

This article describes how Machine Learning can be used in productive systems on the shopfloor by combining edge devices specialised for AI with a robust and versatile software stack for embedding Machine Learning applications.

 

Article

It has been several years since Artificial Intelligence (AI) and its subfield Machine Learning (ML) moved beyond the status of a potentially useful technology to that of a proven strategic technology for production environments. Driven by considerable progress in developing more complex and more versatile ML algorithms, as well as by hardware components that enable the resource-intensive execution of those algorithms, FabOS intends to close the gap and enable AI for the widest range of industrial use cases.

Making AI useful is not just about evaluating any type of data structure; it requires the interaction of many services and components that need to communicate efficiently on robust (edge) devices.

In the case of SOTEC, the answer to these challenges is the ML-stack, running on hardware specifically designed to execute complex ML algorithms: the CloudPlug edge+ (CPe+).

The CPe+ is a powerful edge device equipped with an Edge TPU that can execute inference with complex ML models within a few milliseconds. Furthermore, the device comes with a range of interfaces that can directly interact with production systems in order to gather and evaluate data, and also push it to the cloud if desired.

 

 

Combining all those functionalities within one monolithic application is not in line with modern software development and is furthermore not suitable for the highly dynamic production environment. Reusability and exchangeability of software components are the foundation for serving a broad range of Machine Learning use cases and adapting to changing conditions.

As a perfect fit for this versatile edge device, SOTEC developed the ML-stack within the scope of FabOS. Connected through suitable interfaces, containerised microservices communicate with each other, serving the end-to-end use case of Machine Learning within the production environment.

Starting with the acquisition of both structured and unstructured data through sensor systems, the data is evaluated by the ML model, whose complex mathematical operations are mapped to the integrated Edge TPU. After the inference results have been obtained from the ML model, they are fed into a logic unit that decides which further actions are to be taken by an actuator, finally closing the loop between the production machine and the ML service. Thanks to MQTT-based communication within the ML-stack, it can be flexibly extended by adding further services or removing unnecessary services from the stack (see the sketch below).
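
To illustrate the MQTT-based coupling between services, here is a minimal sketch of one microservice in such a stack. It assumes the paho-mqtt 1.x client API; the broker address, topic names and threshold are invented for the example:

```python
import json
import paho.mqtt.client as mqtt

# Hypothetical topics and broker address; assumes the paho-mqtt 1.x API.
def on_message(client, userdata, msg):
    sample = json.loads(msg.payload)                  # one sensor reading
    result = {"anomaly": sample["vibration"] > 0.8}   # placeholder "inference"
    client.publish("mlstack/inference/result", json.dumps(result))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.local", 1883)
client.subscribe("mlstack/sensors/vibration")
client.loop_forever()
```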

 

 

By combining a suitable edge device with a generically applicable software stack, FabOS and SOTEC close the gap between the mere research of complex algorithms in laboratory environments and the actual use of those algorithms within production environments, thereby making Artificial Intelligence accessible to a broad spectrum of use cases and users.

 

Author: Daniel Hartmann, Rémi Ang

Company: SOTEC GmbH & Co KG


During production processes, tool wear is a constantly occurring factor influencing the quality of the workpiece. Tool wear affects manufacturing costs, workpiece quality and process safety. At the current state, tools are replaced either at specific service intervals or based on the machine operator's experience [1]. As the wear of the tool leads to constant changes in the manufacturing behavior, the machine operator needs to attend to the manufacturing process almost continually. By its nature, tool wear is one of the cost drivers of the machining industry. In addition to the cost of replacement tools, further costs emerge from equipment downtime for tool changes, the rejection of already produced workpieces due to the tool's state, and the personnel costs of the machine operator [2].

 

From a technical point of view, tool wear depends on many different process parameters, including cutting velocity, chip thickness and the material properties of workpiece and tool. Due to the strongly varying machining conditions of different processes, general solutions for tool wear detection and prediction are difficult to develop [2].

 

For measuring tool wear, two different methods can be distinguished: direct and indirect measurement [3]. Indirect methods usually require the integration of additional sensor systems into the machine tool. These so-called retrofit solutions often include dynamometers, accelerometers, acoustic emission sensors and current/voltage sensors. In the field of machine tool condition monitoring, these solutions are already well established. Using Deep Learning techniques on the acquired sensor data, the prediction of tool wear has already been implemented in different research studies [2]. One of the biggest flaws of these techniques is their applicability to only one experimental setup. When process parameters change, such as the geometry of the workpiece to be manufactured, the developed model is rarely still applicable. A general solution for varying manufacturing processes has not been developed yet.

 

Direct measurement methods use optical sensors for tool wear detection. The biggest flaw of these solutions tends to be the evaluation of the pixel data by human operators. They are responsible for choosing the relevant wear form and taking measurements "by hand", i.e. by evaluating the digital images. An exemplary measure is the width of the flank wear land [4]. The combination of classical computer vision techniques, which are efficient, transparent and optimized for their specific purposes, with methods from the area of deep learning presents a promising approach to integrating reliable tool wear detection into machine tools. This approach offers both generality, via deep learning, and specialization, via computer vision [5].

 

Tool wear detection is a texture-based recognition task rather than an object identification task. In industrial surroundings, different influences must be taken into consideration, for example: changing light exposure, different coating colors, changes in tool orientation, dirt on the camera lens caused by cooling lubricants, and different refractions because of varying tool geometries.

 

A frequently used technique in computer vision is feature detection. Features are parts of the image that contain information about its content; examples are single points that exhibit a specific property, such as a color, or whole edges and objects recognizable in the image. Examples of feature detection techniques in image processing are the Sobel operator, the Canny detector and the active contour method [6-8]. These techniques are often used in tool wear detection by extracting the contour edges of the cutting tool and measuring the differences between different states of the tool's lifecycle. To give a more in-depth technical view, the following figures show a milling tool that was used during the experiments for a first version of a FabOS tool wear prediction model.

 

Figure 1 shows the lower end of the milling tool used, captured by a camera integrated into the machine room. The depicted red frame marks the edge of the cutting tool that is relevant for tool wear detection.

 

 

The following images show the enlarged tool edge and the resulting contour after applying the Sobel technique to the image. This is an early version that is not yet optimized; there is therefore still recognizable noise around the contour of the tool edge. A minimal sketch of such an edge extraction is shown below.
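
The sketch applies the Sobel operator with OpenCV to extract an edge map from a tool image. The file names are placeholders, and the real pipeline is more elaborate:

```python
import cv2
import numpy as np

# Sobel-based edge extraction on a (placeholder) tool image.
img = cv2.imread("milling_tool.png", cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
edges = cv2.magnitude(gx, gy)                   # gradient magnitude
edges = np.uint8(255 * edges / edges.max())     # normalise for display
cv2.imwrite("tool_edges.png", edges)
```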

 

 

 

The next paragraphs deal with the detailed integration of the tool wear prediction application in the context of FabOS.

 

With a focus on the trend of digital transformation in production technology, FabOS aims to exploit the potential to make machines intelligent through real-time sensor connectivity and data processing. To define the requirements and test the FabOS platform, use cases such as online monitoring of tool wear are implemented in the FabOS project. 

 

 

In Figure 4, the integration of the machine and the vBox [9] into the FabOS ecosystem is depicted. The vBox serves here as an edge system, complementary to the machine tool, to connect the machine and sensors to the FabOS platform via uniform interfaces. The platform is used to offload data- and computation-intensive applications related to machine learning methods to off-premise or cloud compute facilities. At the same time, latency-critical applications, such as control algorithms, can be provided on edge hardware close to the process with short transmission paths. Cloud integration provides the ability to build cross-site databases, so that analytics, in conjunction with their results, can be transferred to other sites, production steps and plants. In addition, FabOS offers the possibility of providing services for data-based analysis on the most suitable hardware, depending on their requirements profile. FabOS offers networked system solutions for this purpose and manages the software and hardware components with the help of Asset Administration Shell based self-descriptions.

 

These effective connectivity solutions allow the building of complex and computationally challenging AI solutions such as tool wear detection. The service-based architecture enables fast and uncomplicated deployment of new or improved models.

 

[1] Schwenzer M., Miura K., Bergs T., 2019- Machine Learning for Tool Wear Classification in Milling based on Force and Current Sensors, IOP Conf. Series: Materials Science and Engineering 520 

[2] Bergs T., Holst C., Gupta P., Augspurger T., 2020. Digital image processing with deep learning for automated cutting tool wear detection, 48th SME American Manufacturing Research Conference, NAMRC 48 

[3] Jeon, J.U., Kim, S.W., 1988. Optical flank wear monitoring of cutting tools by image processing. Wear 127 (2), 207–217. 

[4] International Standard. ISO 8688-2: Tool Life Testing in Milling - Part 2: End Milling. International Organization for Standardization, 32 pp. 

[5] O'Mahony, N., Campbell, S., Carvalho, A., Harapanahalli, S., Velasco-Hernandez, G., Krpalkova, L., Riordan, D., Walsh, J., 2020. Deep Learning vs. Traditional Computer Vision. Advances in Intelligent Systems and Computing 943. 

[6] Canny, J., 1986. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8 (6), 679–698. 

[7] Kanopoulos, N., Vasanthavada, N., Baker, R.L., 1988. Design of an image edge detection filter using the Sobel operator. IEEE J. Solid-State Circuits 23 (2), 358–367. 

[8] Kass, M., Witkin, A., Terzopoulos, D., 1988. Snakes: Active contour models. Int J Comput Vision 1 (4), 321–331. 

[9] Fraunhofer vBox [Online] https://www.ipt.fraunhofer.de/de/kompetenzen/Produktionsmaschinen/praezisionstechnik-und-kunststoffreplikation/vbox.html 

 

Author: Pierre Kehl, Tim Geerken

Company: Fraunhofer IPT


Short Summary

This article covers the use of modern machine learning techniques in industrial manufacturing processes. It describes the concept of anomaly detection in production and introduces the first steps of integrating these solutions into the FabOS environment.

 

Article

Today's industrial production faces various tasks and challenges, such as increasing quality requirements and product complexity, constant cost and innovation pressure, and the change from mass production to customer-specific products. An efficient method to address these tasks is the use of artificial intelligence (AI) [1] [2] [3]. AI can already be used sensibly in today's production: for condition monitoring tasks, especially in predictive maintenance applications, and to support decision making for adaptive process optimization, for example through the integration of pattern recognition algorithms or neural networks. However, for AI to be used effectively, the production infrastructure must fulfil various requirements. First, data availability needs to be ensured [4]: access to sensors, machines and processes must be granted, reliable and synchronized. Furthermore, the data must be available in high quality, which is why semantic descriptions are needed for an easy integration of new data sources. This is where standardized interfaces and data structures form the backbone for a wider use of AI in industrial production. Within this blog post, a short introduction to one of the industrial applications of FabOS shall be presented. This use case comprises anomaly detection within an industrial milling process.

 

A possible definition of an anomaly is given by Douglas Hawkins: "an [anomaly] is an observation which deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism." [5] The identification of these deviations is a central problem for machine learning techniques in the field of industrial applications.

 

Since the occurring anomalies are highly connected to the corresponding applications, it is nearly impossible to find consistent definitions or create universal models across different tasks, domains or machines; this aspect sets anomaly detection apart from other machine learning problems [6]. Additionally, the noise inherent in the data is a common difficulty in the application of anomaly detection.

 

Within industrial use cases, the most common type of anomaly detection is so-called point anomaly detection, where anomalies occur as points within the data that do not conform to the accepted normal behavior. In the context of Machine Learning, different methods exist to implement anomaly detection procedures. The most common techniques for unsupervised anomaly detection problems, i.e. problems where the ground truth for training the models is not known, are nearest neighbor-based, clustering-based and statistical methods. Nearest neighbor techniques use neighborhood properties of the data points to assign an anomaly score; the basic assumption is that normal data points lie in dense neighborhoods, while anomalies or outliers find themselves in sparse neighborhoods. Clustering methods learn clusters from the given datasets and assign an anomaly score based on the relationship to the nearest cluster; the general assumption here is that anomalous points do not belong to a cluster or are very distant from the nearest cluster representative. Statistical methods estimate a model from the data and apply statistical evaluations of the probability; these methods are applicable if the normal instances, without anomalies, can be modeled via statistical distributions [6].
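
As a toy illustration of the nearest-neighbor assumption, the following sketch applies scikit-learn's Local Outlier Factor to synthetic data; it is not the use-case model itself:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Dense "normal" process data with a few injected outliers (synthetic).
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=0.1, size=(200, 2))
outliers = rng.uniform(low=-1.0, high=1.0, size=(5, 2))
X = np.vstack([normal, outliers])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)   # -1 marks points in sparse neighborhoods
print("detected anomalies:", int(np.sum(labels == -1)))
```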

 

If process knowledge is already available, classification methods have proven to be very effective techniques for learning classifiers from the training data and applying labels or scores to test data. A distinction can be made between one-class methods, which assign points either to the one known class or to none if an anomaly is detected, and multi-class models, where points that do not belong to any of the normal classes are classified as anomalous. Basic models for classification are support vector machines (SVMs), neural networks (e.g. autoencoders), Bayesian models and rule-based systems [6]. A one-class method can be sketched in a few lines, as shown below.
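
Again, the data in this sketch is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# One-class sketch: the model is fit only on (synthetic) data from normal
# operation and then labels deviating test points with -1.
rng = np.random.default_rng(1)
normal_train = rng.normal(0.0, 0.1, size=(200, 2))

clf = OneClassSVM(kernel="rbf", nu=0.05).fit(normal_train)
print(clf.predict([[0.0, 0.05], [0.9, -0.8]]))  # typically [ 1 -1 ]
```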

 

Modern machine tools for metal cutting are used in industrial production chains for turning, milling and drilling operations. Depending on the respective area of application, they appear in different degrees of automation. To produce single parts and small series, a standard CNC machine with automatic tool change is usually sufficient. As the number of pieces increases, further expansion stages, for example to a machine tool center or cell, become economical. Multi-machine systems, i.e. flexible manufacturing systems, are mostly used in mass production, as they offer significant economic advantages due to their high degree of automation: they can produce workpieces efficiently in 24/7 series production. However, the higher the degree of automation, the less flexibly the systems can respond to changes. This means that, especially in mass production, the need for data-driven systems is high in order to react autonomously to process-dependent changes across machines. Figure 1 shows an example of a milling tool. Sensor-based systems that use recorded process data for condition monitoring or adaptive optimization purposes add significant value to automatic process analysis and manufacturing, thus contributing to the increased productivity needed to meet the requirements of today's production processes.

 

 

The state-of-the-art machine tools described above are designed for productivity, functionality and accuracy. The mechanical design as well as the machine control is technically advanced; it is designed for functionality, but not for adaptivity and connectivity. This allows only limited retrofitting of adaptive solutions or control of further complementary solutions. For automation, usually only a few solutions are available, which is why in-depth expert knowledge of production is still required. Finally, this is also because the machine-integrated sensor technology is usually only rudimentary, can hardly be addressed or read out externally, and is subject to severe limitations in terms of sampling rates and accuracy.

 

The use case presented within the FabOS project is a three-axis machine tool of type DMG HSC 55 at Fraunhofer IPT. It is equipped with additional vibration sensors, acoustic emission sensors and an industrial microphone. The following figure shows the positions of the exemplary vibration and acoustic emission sensors on the machine's spindle axis.

 

 

For data acquisition, the Fraunhofer vBox is used. This is a sensor data acquisition unit that provides different connectors for various sensor types. Its internal electronics support sampling rates of up to 100 kHz. The sampled sensor data is transferred to an additionally connected IPC, which is housed within the machine cabinet.

 

As the specific use case, the manufacturing process of a small turbine-blade-like model was chosen. By its nature, this process tends to show high vibrations when it is not optimized. This represents an ideal use case for the application of anomaly detection, since the manufacturing of normal parts, which fulfil the required workpiece quality, and the manufacturing of "bad" parts can easily be adjusted.

 

For the creation of the machine learning model for anomaly detection, already acquired data from the manufacturing processes will be used. Over the course of the FabOS project, the model shall be integrated into an ML pipeline to provide online anomaly detection during running manufacturing processes. In an initial version, active process interaction will not be possible; detected anomalies will therefore only be shown via a warning message to the operator on the screen mounted next to the machine. At the current stage, the model is still under development and is trained using offline data.

 

[1] K. Ahlborn, G. Bachmann, F. Biegel, J. Bienert, S. Falk, A. Fay, T. Gamer, K. Garrels, J. Grotepass, A. Heindl and J. Heizmann, "Technology Scenario 'Artificial Intelligence in Industrie 4.0'," 2019. [Online]. Available: https://www.plattform-i40.de/IP/Redaktion/EN/Downloads/Publikation/AI-in-Industrie4.0.pdf?__blob=publicationFile&v=5. 

[2] T. Wuest, D. Weimer, C. Irgens and K.-D. Thoben, "Machine learning in manufacturing: advantages, challenges, and applications," Production & Manufacturing Research, vol. 4, no. 1, pp. 23-45, 2016. 

[3] A. Diez-Olivan, J. Del Ser, D. Galar and B. Sierra, "Data fusion and machine learning for industrial prognosis: Trends and perspectives towards Industry 4.0," Information Fusion, vol. 50, no. 2, pp. 92-111, 2019. 

[4] S. Jeschke, C. Brecher, H. Song and D. B. Rawat, eds., Industrial Internet of Things, Cham: Springer, 2017, p. 715. 

[5] D. M. Hawkins, “Identification of Outliers” – Monographs on Statistics and Applied Probability, p.1, 1980 

[6] V. Chandola, A. Banerjee, V. Kumar, “Anomaly Detection”, (ed) Encyclopedia of Machine Learning and Data Mining, 2016 

 

Author: Pierre Kehl, Tim Geerken

Company: Fraunhofer IPT


Short Summary

A wizard that helps you automate the generation of Data Driven Services. 

 

Article

Shortage of skilled workers in mechanical and plant engineering  

According to Handelsblatt, HR managers in mechanical and plant engineering complain about a shortage of academics in 81% of cases and a shortage of skilled workers in 90% of cases. Technological change, driven by digitization and the mobility transition, is said to create attractive jobs, while many employees are going to retire [1].

 

One approach to keeping the burden of the skilled worker shortage on companies' competitiveness and ability to act to a minimum is the automated monitoring of production processes and the prediction of maintenance work. For this purpose, machine data is recorded, and models are trained on its sensor values: the so-called Data Driven Services.

 

What can Data Driven Services do?  

Data Driven Services can monitor the condition of a machine and thus detect tool wear at an early stage based on abnormalities in the sensor values, in order to minimize defective production, ensure quality and protect the machine. Worn cutting tools and faulty components can thus be detected automatically. With a Predictive Maintenance Service, maintenance work can even be predicted, in order to avoid downtimes and to plan maintenance in such a way that production is hindered as little as possible. With the Predictive Quality Service, the production parameters are monitored and, if necessary, optimized; in this way, the productivity of a machine can be increased. In addition, predictions can be made as to how the machine condition will affect product quality.

 

With the increasing number of sensors that have been installed in industrial machines for several years, enormous amounts of data are already accumulating, which all too often are not used at all or must first be prepared for emerging questions.

 

Will the problem of the shortage of skilled workers simply be shifted from mechanical and plant engineering to the fields of data science and data engineering, where qualified personnel are also desperately sought? This is exactly where we start with our goals for FabOS:

 

Automated generation of Data Driven Services  

With our wizard, we bring the AI to your data, virtually. The aim of the wizard is that you can use your data profitably even without a data science department. You will also not need additional support from IT. The wizard is designed in such a way that it allows your domain experts, i.e., machine builders or machine operators, to independently create high-quality Data Driven Services and then put them directly into operation.  

 

Data integration via the FabOS operating system  

You can use the FabOS operating system to connect your machines and integrate the data. The wizard offers a graphical user interface via which the user first selects the respective machine and the desired Data Driven Service. The Data Driven Services are described in the user interface to make it easier for you to choose one. All relevant data sets are then suggested. The user is a subject matter expert for his machine and therefore knows which sensor data is necessary to monitor the condition of the machine for the respective application, and selects it accordingly. In addition, the length of the history can be adjusted to exclude characteristic changes in the data set; these can be caused, for example, by changed environmental conditions or changes in production. This is a crucial step in data cleansing. After the data has been selected, further automated preprocessing takes place, exploiting conventions that result from the data integration via FabOS.

  

Wizard powered by Auto-ML  

Meta-learning analyzes the available data and preselects the algorithms that have provided promising results for similar data sets. Thus, the actual AutoML procedure is able to build complex pipelines for the given problem very efficiently [2]. It determines the best possible model with the ideal hyperparameters after just a few iterations across different algorithms. If you are interested, all steps carried out in AutoML can be traced using the XAutoML tool newly developed by USU [3]. Additional transparency into the reasoning of the model is provided by an automatically generated decision tree that describes the overall model. In addition, it is possible to explain predictions for individual data points. These measures increase the human acceptance of the Data Driven Service and reduce the risk of misbehavior of the algorithm.
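
The wizard's AutoML engine is of course far more sophisticated, but the underlying idea of searching over candidate algorithms and hyperparameters can be sketched with plain scikit-learn. The dataset and the candidate grids are invented for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Compare candidate algorithms and hyperparameters by cross-validated score.
X, y = make_classification(n_samples=500, random_state=0)

candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200]}),
]
best = max(
    (GridSearchCV(est, grid, cv=5).fit(X, y) for est, grid in candidates),
    key=lambda search: search.best_score_,
)
print(best.best_estimator_, round(best.best_score_, 3))
```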

 

Service Lifecycle Management  

The user can now choose whether to accept the model with the best metric or opt for another one. In order to reduce the time to go-live of the Data Driven Service, the wizard offers further assistance functions. It supports the user during deployment, with a choice between cloud and edge. Finally, the live data of the machine is fed into the deployed model. The predictions are visualized on a dashboard, where explanations for the predictions are also available to the user. The service is monitored by an AI supervisor, which can provide extra resources under load. In addition, the AI supervisor issues an alert if the prediction quality drops; this can very often be due to changes in environmental conditions or a change in production control. In that case the model needs to be retrained, and here the wizard can support again (see the sketch below).
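
The kind of quality check behind such an alert can be sketched as follows; the window size and threshold are illustrative assumptions:

```python
from collections import deque

# Rolling accuracy check: alert when prediction quality degrades.
window = deque(maxlen=100)   # last 100 prediction-vs-ground-truth checks
THRESHOLD = 0.9              # assumed acceptable accuracy

def record(correct: bool) -> None:
    window.append(correct)
    if len(window) == window.maxlen and sum(window) / len(window) < THRESHOLD:
        print("ALERT: prediction quality degraded - consider retraining")
```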

 

[1] https://www.handelsblatt.com/politik/konjunktur/nachrichten/fachkraeftemangel-maschinenbauer-wollen-personal-aufstocken/27843750.html?ticket=ST-340117-T6GmkIysmEDH4uXTcWeg-ap6  

[2] Zöller, M.-A., Nguyen, T.-D., & Huber, M. F. (2021). Incremental Search Space Construction for Machine Learning Pipeline Synthesis. International Symposium on Intelligent Data Analysis, 103–115. https://doi.org/10.1007/978-3-030-74251-5_9  

 [3] Zöller, M.-A., Titov W., Schlegel T. & Huber, M. F. (2022). XAutoML: A Visual Analytics Tool for Establishing Trust in Automated Machine Learning. https://doi.org/10.48550/arXiv.2202.11954 

 

Author: Carolin Walter

Company: USU Software AG


Article

With Industry 4.0, intelligent and connected factories, so-called smart factories, are a bright vision of the future. Currently, however, companies are still often faced with the challenge of digitizing workflows and processes in a way that promotes efficiency without sacrificing flexibility and usability.

 

This problem is easily explained using components from the manufacturing industry as an example. At the moment, many companies are not yet talking about smart machines and processes. However, flexible production systems are often operated, which means that the components produced by a system change on a daily, hourly or even minute-by-minute basis. What initially sounds good and flexible, however, also harbors problems.

 

On the one hand, employees often do not know which component they are dealing with because the construction plans change so quickly, which significantly increases the risk of confusion. On the other hand, the finished components then have to be assigned to the correct customers or projects in a time-consuming process using paper lists, or markings have to be applied to the components from the outside (e.g. using a laser) in order to be able to recognize afterwards which component it is. This procedure is also error-prone and should no longer be necessary. Artificial intelligence should enable part identification that recognizes the respective parts in real time and assigns them to the appropriate blueprint.

 

Currently, there is no solution on the market that allows the identification of arbitrary free-form parts. At the moment, there are only partial solutions for specific, pre-defined components. These, however, have the disadvantage that adding and training further components is very time-consuming and cost-intensive, which makes these solutions extremely inflexible. A solution that remains flexible and can identify any free-form part without further training does not yet exist on the market, but it is essential for the concept of full digitization or the smart factory, and an important point for the competitiveness of German companies.

 

Content based similarity comparison for component identification 

This innovation is being developed by COMPAILE Solutions GmbH within the FabOS research project. The aim of the project is the content-based (not merely optical) similarity comparison of unknown components based on neural networks (see the sketch below). This enables the assignment of arbitrary components to their corresponding construction plans, as well as the use of this and similar technologies with networked edge and cloud computing. It makes production more flexible and less prone to errors, and enables individual components to be manufactured much more cheaply and quickly. A machine that is flexible and can act in real time as a result saves companies valuable warehouse space, as components are created when they are needed and there is no need for pre-production. Furthermore, this technology can also be extended to quality assurance issues.
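
Conceptually, such a content-based comparison boils down to a nearest-neighbour search over learned embeddings. The sketch below assumes a neural network (not shown) has already mapped a part image and the candidate construction plans to vectors; all names, vectors and dimensions are invented placeholders:

```python
import numpy as np

# Assign a component to the most similar construction plan by cosine
# similarity of (placeholder) embedding vectors.
def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
part_embedding = rng.random(128)                  # stand-in network output
plan_embeddings = {"plan_A": rng.random(128),
                   "plan_B": rng.random(128)}

best_plan = max(plan_embeddings,
                key=lambda name: cosine(part_embedding, plan_embeddings[name]))
print("assigned blueprint:", best_plan)
```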

 

 

Increased quality through AI in industry 

With the help of AI, more reliable quality checks can be performed during and after production. The advantage compared to currently used camera systems or manual quality control is speed and flexibility. Unlike a classic camera system, an AI is not dependent on a specific position or orientation of the components in order to detect defects. Even complex or new components can easily be taught to the AI and subsequently analyzed by it within seconds. In conjunction with fully automated production, the necessary steps can be taken directly to rectify detected defects before the affected parts are processed further and major damage occurs.

 

Author: Kaja Wehner

Company: COMPAILE Solutions GmbH

 


Short Summary

This article gives a short overview of the topic of safety in industrial applications, highlighting legal aspects and corresponding requirements for products and their development process. Furthermore, problems of Industry 4.0 and machine learning regarding adherence to current legal regulations are discussed, and possible solutions under development as part of the FabOS project are introduced.

 

Article

Since its very beginning, industrialization has been characterized by steady change. Important milestones were the change from pure manual labor to the use of simple machines and of, for example, steam power; the beginning of mass production, driven by electrification and the assembly line; and the still ongoing automation through the use of computer systems. However, the next industrial milestone is already at hand with Industry 4.0. It is characterized by the increasing interconnection of production assets, human-robot collaboration and the use of artificial intelligence.

  

 
Human and robot collaborating on an assembly task ©KIT 

  

The serious changes to production itself during these 'industrial revolutions' came with changing risks for the factory workers. Especially the latter half of the 19th century is painfully remembered for its lack of occupational safety, which led to frequent and severe injuries of workers and child labourers alike. Consequently, occupational safety became an increasingly important subject of discussion at the end of the 19th century, and the foundation for today's strict occupational safety laws was laid. Nowadays, there is a broad range of laws and standards regarding factory and machine safety (e.g. Machinery Directive 2006/42/EC, IEC 61508, ISO 13849) with the goal of avoiding injuries to workers. With the upcoming changes induced by Industry 4.0, it will be a demanding challenge to keep fulfilling the requirements of these laws and standards - a purpose FabOS is going to contribute to.

 

As mentioned in the previous section, a wide variety of requirements has to be fulfilled to achieve safety for workers in an industrial environment. To avoid potential danger from machinery - that is, to achieve machine safety - it is necessary to obey the European Machinery Directive, which has been transferred into German national law in the form of the Product Safety Act ('ProdSG') and the Machinery Ordinance ('9. ProdSV'). To simplify adherence to the safety-related laws, so-called harmonised standards can be applied during the development and deployment of machines. In simple words, these standards can be seen as manuals that describe the requirements and necessary actions for the development and deployment process of safety-related machines. The most general safety standard is IEC 61508, from which more specific standards like ISO 13849 ('Safety of machinery - Safety-related parts of control systems') are derived. When a harmonised standard is applied successfully (keep in mind: not all standards are harmonised), it can be assumed that the legal regulations are met. This simplifies the development of safe machines significantly.

  

Safety-related laws and selection of safety-related standards, together with their relations ©KIT 

  

It is important to note that safety is not an isolated property, but rather a holistic process that spans the whole product life cycle. It already starts with the selection of suitable and qualified individuals for the development and with requirements for the organizational structure. The primary goal of this process is (among others) that the developed product will be highly reliable, meaning the almost complete absence of dangerous malfunctions. The reliability of the product is measured with the 'Probability of dangerous Failure per Hour' (PFH), which applies to dangerous malfunctions of all safety-relevant functionalities. As an example, a PFH value below 10⁻⁶ is required for applications from the field of human-robot collaboration; this equals a maximum of one dangerous failure per 1,000,000 hours of operation. Even when the development and production of a machine is finished and all safety requirements have been met, there is still more to obey, as the final application context is also relevant for safety. Thus, the final deployment in a production cell must undergo a risk assessment, and suitable countermeasures for identified risk factors must be employed before commissioning can take place.

 

Particularly in view of the main characteristics of Industry 4.0, it becomes increasingly difficult to adhere to current safety regulations while embracing recent technological advancements. Today's AI applications have no provably correct behaviour; they function based on large amounts of data and associated desired results, making mathematical proofs almost impossible. Significant differences between the available training data and the data captured at the final operation site can also lead to unpredictable behaviour. Thus, AI cannot be applied to safety-relevant functionalities without additional safety measures, as humans could be injured due to an AI malfunction (e.g. when the AI is controlling a robot). Furthermore, the desired modularity and changeability of Industry 4.0 conflicts with current safety regulations: today, a risk assessment is needed for each specific production cell configuration, and even the slightest change makes a repeated risk assessment necessary. This contrasts with the demand for a freely changeable production.

 

FabOS is going to contribute to the solution of these problems by investigating possibilities for the application of AI in safety-related functionalities as well as for the operation of a changeable production without repeated risk assessments. To enable the application of AI, a so-called safety supervisor shall become part of FabOS. The supervisor will be tasked with validating AI decisions against fixed safety criteria, based upon the current state of the production devices. In the case of an AI that controls a robot, the safety supervisor would monitor the current position of all parts of the robot and make sure that it does not leave its predefined working space, to avoid collisions with humans. If the AI decided to leave the working space, the safety supervisor would either discard the move command before it is executed by the robot, or cause a safety stop of the robot shortly before it leaves the working space (see the sketch below). Thus, the responsibility for human safety is transferred from the AI to the safety supervisor, which makes decisions based on clear rules whose correctness can be proven. To ensure safety in a changeable production, so-called Conditional Safety Certificates (ConSerts [1]) will be used. The basic idea behind this approach is to abstract from specific hardware and software by formulating requirements for the functionalities offered by these devices. The risk assessment is lifted to the functionality level by formulating requirements for functionalities and for the devices offering them. Using a laser scanner as an example, such requirements could be installation at position X, a sampling frequency of at least 50 Hz, and at least performance level d (a measure for the reliability of a safety-relevant device or function, defined in EN ISO 13849). When the laser scanner is exchanged, an automated or partially automated check is performed to verify that the safety requirements formulated in the associated ConSert are fulfilled, making a repeated risk assessment unnecessary.
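
The supervisor rule described above can be sketched as a simple, provable check. The workspace bounds, the command format and the two reactions are illustrative assumptions, not the actual FabOS component:

```python
# A motion command proposed by the AI is only forwarded if the target
# position stays inside a predefined working space.
WORKSPACE = {"x": (0.0, 1.2), "y": (0.0, 0.8), "z": (0.0, 0.5)}  # metres

def is_inside(target: dict) -> bool:
    return all(lo <= target[axis] <= hi
               for axis, (lo, hi) in WORKSPACE.items())

def supervise(target: dict) -> str:
    if is_inside(target):
        return "forward command to robot controller"
    return "discard command / trigger safety stop"

print(supervise({"x": 0.5, "y": 0.3, "z": 0.2}))  # inside the working space
print(supervise({"x": 2.0, "y": 0.3, "z": 0.2}))  # outside: vetoed
```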

 

It is the goal of the research on the safety supervisor and ConSerts to highlight possibilities and approaches for dealing with the different aspects of safety in future Industry 4.0 applications. Obviously, fully safety-certified development of components is too costly for research projects like FabOS. However, prototypes will be developed, which can serve as a starting point for further safety-related work in FabOS and Industry 4.0 alike.

 

Author: Patrick Schlosser

Company: KIT


Short Summary

An efficient lot size of one requires a changeable factory. However, safeguarding a changeable factory needs a new approach to safety engineering. In this article, we explore the possibility of employing a modular and model-driven approach to safety engineering by leveraging Conditional Safety Certificates.

 

Article

1. Introduction 

Industrie 4.0 promises to revolutionize production by making plants more open and flexible, which subsequently allows the manufacture of highly individualized products as well as a large number of product variants, directly tailored to the customer's needs. This individualization and alteration potential of Industrie 4.0 also contributes to the UN Sustainable Development Goal 9: "Build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation" [1]. The high degree of flexibility in Industrie 4.0 is partially achieved through the employment of human-robot collaboration, which marks a step away from the rigid separation of automation and manual labour. Humans and robots now share a common workspace, where the robot, for example, assists the human worker in physically demanding and simple, repetitive tasks, while the human performs the product individualization and more challenging tasks. Though this cooperation increases the flexibility of production while maintaining high productivity, it comes with its own challenges to worker safety: not only does no separation between human and robot exist, but the production environment and the devices involved also change frequently to account for the currently assembled product variant. Therefore, regulations for machine and worker safety in production environments, like EU Directive 2006/42/EC, remain highly relevant in Industrie 4.0.

  

2. The Safety Engineering Process in Industrie 4.0 

Based on these challenges, FabOS work package 3.2 investigates how a changeable production plant can be safeguarded. A safety engineer is traditionally tasked with manually inspecting and approving the plant (e.g. along checklists). Changeability of the plant would either (a) cause regular extensive audits or (b) require irregular, complex audits to check a consolidated safety concept for all plant variants. To avoid both, we plan to employ the concept of "Conditional Safety Certificates" (ConSerts) [2], which allows safety arguments to be model-driven and modular. A production planner and a safety engineer could change the plant and check its safety at the same time in a model-driven fashion. The use of Asset Administration Shells (AAS) and their submodels makes it possible to develop manufacturer-independent safety concepts and to integrate them into production in a semi-automated manner to create a safe plant.

  

3. Modular Safety Assurance for a Robotic Bin-Picking Use Case 

Fig. 1 Two scanners (black circle-cuts) with two safety zones (yellow and red) inform a robot to slow down or stop if a human approaches it (© 2020 KIT). 

  

Not all safety concepts are static and inherently safe; instead, technical protection systems are used to transmit, evaluate and use safety-critical information at runtime, e.g. to trigger an emergency stop. ConSerts can be used at runtime to abstract from the specific protection system by synthesizing runtime monitors on their basis. For example, a set of ConSerts describes a safety concept to safeguard a pick-and-place application consisting of a robot and laser scanners (cf. Fig. 1). The scanner provides the occupation of different safety zones, and the robot demands that a certain space around it is free, depending among other factors on its current speed. The ConSert-based monitors (not depicted) act as mediators at this point, collecting information from the laser scanner and providing guarantees to the robot about the occupation of the workspace (see the sketch below).
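
The mediator role of such a monitor can be sketched as a small rule mapping zone occupation to a speed guarantee. The zone names and speed levels are illustrative, not the actual ConSert format:

```python
# Map the scanner's zone occupation to a guarantee the robot can act on.
def speed_guarantee(yellow_occupied: bool, red_occupied: bool) -> str:
    if red_occupied:
        return "STOP"      # demanded free space around the robot is violated
    if yellow_occupied:
        return "REDUCED"   # human approaching: slow down
    return "FULL"          # workspace guaranteed free

print(speed_guarantee(yellow_occupied=True, red_occupied=False))  # REDUCED
```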

Fig. 2 Each component comes with a Platform I4.0 [3] compatible ConSert submodel and a runtime monitor, containing structure and logic of the safety argumentation. 

  

The structure of the safety concept and the system components is visible in Fig. 2. Here it also becomes clear what role asset administration shells and submodels play: they allow runtime data (such as safety zone occupancy) to be used to make safe decisions (such as allowing robot movements). The "Safe Robot Application" submodel can be seen as a system service with a safety concept that can be developed independently of specific components. Similarly, the production assets (robots, laser scanners) are developed independently of the safety strategy and merely provide their guarantees and requirements in the form of jointly defined ConSerts. 
It should be mentioned at this point that the focus of the work package is on modeling the safety concepts and making these concepts usable at runtime. The safety of runtime environments (OS, containers) and communication networks (protocols, transmission methods) is considered, since it contributes to a safe overall system, but it is not the focus. Here, technologies such as Time-Sensitive Networking (TSN) and real-time-capable containers are to be seen as potential enablers [4]. 

  

4. Explainable AI as an Enabler for Dynamic Risk Management 

Closely related to the topic of ensuring worker safety is the challenge of avoiding property damage through the employed robot systems. In contrast to worker safety, this involves avoiding collisions of the robot with its environment and inappropriate robot actions that could damage the handled workpiece. Especially AI applications, an integral part of Industry 4.0, with their sometimes unpredictable behavior, can be seen as a major risk factor here. For example, using an AI to determine grasp positions for workpieces can lead to severe property damage when the determined position is incorrect: the robot may drop the workpiece when the grasp is not stable, or collide with the environment when the grasp position is predicted inside part of the surroundings. To detect and/or avoid such defective and potentially harmful AI decisions, extensive research is performed on the explainability of AI and the determination of uncertainties of AI decisions [5].

 

In this use case, the above-mentioned grasp position algorithm is extended by a component for determining uncertainty. This then serves as an additional input for the ConSert-based monitor and thus influences the robot speed. In this way, both classical safety mechanisms (a human in proximity causes an emergency stop) and mechanisms of dynamic risk management are combined. In the latter case, the approach speed is coupled to the uncertainty of the decision, in order to keep the risk of material damage low while at the same time ensuring efficiency (see the sketch below).
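
A minimal sketch of this coupling, assuming an uncertainty estimate in [0, 1] (e.g. from an uncertainty wrapper [5]) and invented speed limits and mapping:

```python
# Dynamic risk management sketch: the approach speed is scaled down as
# the uncertainty of the AI grasp-position estimate grows.
V_MAX = 0.5   # m/s at zero uncertainty (illustrative)
V_MIN = 0.05  # m/s creep speed for highly uncertain predictions

def approach_speed(uncertainty: float) -> float:
    """uncertainty in [0, 1]; simple linear scaling for illustration."""
    return max(V_MIN, V_MAX * (1.0 - uncertainty))

print(approach_speed(0.1))  # confident grasp: near full speed
print(approach_speed(0.8))  # uncertain grasp: creep speed
```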

 

5. Outlook 

In work package 3.2 of the FabOS project, these concepts are going to be implemented by the end of the project and integrated into demonstrators. As concrete results, various safety submodels will be implemented and a safety supervisor component will be developed. In addition, the delivery of asset administration shells and submodels linked with runtime behavior is considered. Through this combination, an easily manageable AASX file can be provided and directly integrated into the off-the-shelf components of the I4.0 middleware BaSyx [6]. This provides a contribution to linking safety engineering with asset administration shells, so that future open, distributed and flexible Industry 4.0 plants do not have to compromise on safety.

 

[1] https://sdgs.un.org/goals/goal9 

[2]  Daniel Schneider and Mario Trapp. „Engineering conditional safety certificates for open adaptive systems.” IFAC Proceedings Volumes 46.22 (2013): 139-144. 

[3]  https://drops.dagstuhl.de/opus/volltexte/2020/12001/pdf/OASIcs-Fog-IoT-2020-7.pdf 

[4] https://drops.dagstuhl.de/opus/volltexte/2020/12001/pdf/OASIcs-Fog-IoT-2020-7.pdf 

[5] Kläs, Michael, and Lena Sembach. "Uncertainty wrappers for data-driven models." International Conference on Computer Safety, Reliability, and Security. Springer, Cham, 2019. 

[6]  https://www.eclipse.org/basyx/ 

 

Author: Andreas Schmidt (Fraunhofer IESE), Denis Uecker (Fraunhofer IESE), Tom Huck (KIT), Christoph Ledermann (KIT), Frank Schnicke (Fraunhofer IESE)

Company: Fraunhofer IESE, KIT


Short Summary

This article introduces the concept of collaborative robots ("cobots") and discusses the advantages of cobots compared to traditional industrial robots, as well as open challenges for cobot deployment. The main focus lies on safety as a central issue in collaborative robotics. Various safety challenges are discussed, especially with regard to the combination of artificial intelligence and collaborative robotics. Finally, the article highlights how FabOS can contribute to the safe deployment of cobots in a production environment.

 

Article

“Cobots”: Collaborative robots 

  

 

Cobots are specially designed to interact with humans, for example as seen here, through hand guidance.  ©KIT 

  

If you have visited any automation fair within the last few years, you have probably seen a “Cobot” (short for “collaborative robot”). Almost all robot manufacturers, from established companies to newcomers, have introduced robots specifically designed for human-robot collaboration (HRC). The trend towards collaborative robotics is driven by a number of factors that are crucial for the production of the future: Due to individualized products and smaller lot sizes, traditional fully-automated robot systems are often not flexible enough. Due to increasing labor costs, on the other hand, manual labor is also unattractive. HRC presents a trade-off between these two opposites and offers a low-cost entry into the world of robotic automation, while maintaining some of the flexibility that is inherent to manual labor. Furthermore, collaborative robots can assist human workers with tasks that are physically or ergonomically challenging - a point that is especially important when considering demographic trends in many industrialized countries. 

 

Safety: A major challenge in HRC 

 

However, all these advantages should not hide the fact that deploying a collaborative robot is far from easy: several issues need to be addressed before commissioning the system. The most important issue is safety: robots, even relatively small and lightweight cobots, can move fast and exert great forces in case of a collision. Thus, safety is paramount when humans and robots share a workspace. However, ensuring safety is not trivial. Even though cobots usually come with a wide variety of safety functions such as velocity limitation, workspace limitation or collision detection, they are not inherently safe: safety is not a property of the robot alone, but also depends on the application and on the system environment in which the robot is used. Thus, the robot safety standard ISO 10218 requires that a risk assessment is performed to identify and assess potential hazards and to configure the robot's safety functions accordingly. The risk assessment procedure itself is specified by ISO 12100. Nowadays, risk assessments are typically performed on the basis of expert knowledge, experience and simple tools such as checklists. However, current research aims to develop support tools based on simulation and intelligent expert systems.

 

Although a proper risk assessment is important, it is not the only challenge. HRC is also very demanding with regard to the components and communication channels used. Safety-critical robot functions (e.g. measuring the human-robot distance and transmitting that information to the robot) must fulfil the safety requirements expressed by Performance Level (PL) d according to ISO 13849 or Safety Integrity Level (SIL) 2 according to IEC 62061. This requirement significantly reduces the choice of components and communication channels and increases costs. System designers need to consider carefully which functions are safety-critical and which are not. To avoid errors and keep costs low, safety-critical functions should, if possible, be separated and implemented locally. When it is not possible to implement safety-critical functions locally and a network has to be used to transmit safety-critical signals, users should assess carefully whether the network infrastructure can fulfil the strict safety requirements.

  

Safety-rated components do not necessarily make a safe system. Safety is a system-wide property. ©KIT 

  

Collaborative robots and artificial intelligence: A good combination? 

  

At first glance, artificial intelligence (AI) and HRC are a perfect combination: machine learning can enable robots to adapt to their human counterparts or to changes in the production environment. But again, one must consider the safety challenge: in current industrial practice, programs are typically hard-coded on programmable logic controllers (PLCs). In contrast, AI adapts its behavior by learning from data. Thus, AI-driven components might act in unforeseen ways, which makes it very hard to provide the safety guarantees required by the aforementioned safety standards. AI-based systems with safety guarantees are an active research topic. Although there are promising approaches, it will probably take quite a while to deploy "safe" AI in a real-world industrial environment.

 

Thus, in a short- to medium-term timeframe, the most practical solution will be to deploy AI only in non-safety-critical robot functions. If an AI system nevertheless requires access to safety-critical robot functions (such as motion control), there should be another, non-AI-based component to supervise AI decisions with respect to certain boundaries (e.g. workspace or velocity limits).

 

How can FabOS support the deployment of collaborative robots in a production environment? 

 

As we have made clear in this article, safety is a crucial challenge for HRC applications in a production environment, especially when AI components are involved. Planning and risk assessment are time-intensive, the costs of safety-rated components are relatively high, and the choice of components must be considered carefully.

 

Motivated by this challenge, FabOS conducts several research activities related to the safety of AI-based systems, including HRC applications. The FabOS Safety Supervisor will provide a generic component to supervise AI-driven systems at runtime, which - among other benefits - is expected to simplify the deployment of collaborative robots with AI-components. 

 

Furthermore, FabOS will investigate the integration of so-called "Conditional Safety Certificates" (ConSerts) [1]. The use of ConSerts will simplify the process of checking whether the current safety configuration of a production system is valid and whether the components used fulfil the relevant safety requirements. After exchanging or modifying a component that is part of the safety configuration, for instance, ConSerts help to determine whether the system still fulfils the original safety requirements. More details about the safety supervisor and ConSerts can be found in our blog article about functional safety.

 

Finally, FabOS will also simplify collaborative robot use by providing interfaces to various simulation tools. These interfaces are beneficial because an increasing number of HRC applications are planned and tested in simulation. The integration of simulation tools in FabOS is expected to facilitate the building of digital twins, simulation-based risk assessment, and virtual commissioning of HRC applications. 

 

[1] Schneider, Daniel, and Mario Trapp. "Engineering conditional safety certificates for open adaptive systems." IFAC Proceedings Volumes 46.22 (2013): 139-144. 

 

Author: Patrick Schlosser

Company: KIT
