
Short Summary

Introducing a new device to an industrial plant system is costly, as it involves a lot of planning as well as a lot of human resources. Additionally, the plant, or at least the production line, has to be stopped during the modification of the plant system in order to perform the device integration. The longer an integration takes, the more expensive it becomes overall.

In this blog post, we want to exemplify the application of Asset Administration Shells to facilitate the commissioning of Industry 4.0 system components by means of evidence-based engineering through integrative simulation. Additionally, we will showcase the benefit that the components developed in FabOS can provide for virtual commissioning.

 

Article

In order to successfully adapt a running industrial system, possible issues need to be identified and addressed beforehand. If an unforeseen issue emerges during device integration, the cost of tackling it increases tremendously compared to resolving it upfront. For instance, overlooking a possible networking issue may lead to a safety-off of the plant if timing constraints can no longer be met. The same issues may pop up during the (re-)deployment of applications due to potential changes in network requirements as a result of the change. Today, these kinds of issues are typically addressed by the experience of the various plant engineers involved. However, this still leaves gaps through which critical issues may be missed. In contrast, evidence-based engineering provides facts and measures to identify where issues may arise and can give recommendations on how to tackle them.

 

Creating confidence with network simulations in FabOS 

A plant simulation can provide comprehensive evidence and enable the virtual commissioning of industrial devices with the required level of confidence. Thus, challenges can be addressed before the changes are carried out in the real plant. Additionally, a plant simulation is a central building block for enabling Continuous Engineering (CE) practices in the context of Industry 4.0 [1,2]. However, there exists practically no tooling tailored to the specific concepts of CE for Industry 4.0, which, first of all, requires integration with the Industry 4.0 Asset Administration Shell (AAS).

In FabOS, we will address this gap by creating a simulation base service based on the data provided by AASs. Since the developed architecture enables the integration of different kinds of tooling, this simulation base service will be able to support multiple simulation approaches such as network simulation or the simulation of physical interactions. Additionally, multiple submodel types will be defined; the data they provide will be used as input for simulation tooling.

As previously described, overlooked networking issues may lead to the degradation of system qualities and, in the worst case, to safety-offs. Thus, identifying these issues beforehand using, for instance, a network simulation is key. However, such a network simulation has to be supplied with the necessary system details. These include, but are not limited to:

 

  • Plant network topology: How are devices interconnected? 
  • Application requirements: What are the application requirements in regard to quality criteria such as real-time support? 
  • IT system capabilities: Which bandwidth do interconnection devices like hubs, switches, or routers provide? 
  • Data source behavior: What is the data rate at the data sources in terms of message size and transmission frequency? 

 

In Industry 4.0, the AAS and its submodels are used to describe core aspects of entities. Thus, FabOS is creating submodel templates detailing the information described above. The simulation component of FabOS will be able to access these predefined submodels and thus automatically retrieve the information needed for creating evidence and, in turn, confidence in the successful outcome of a change.

Integrating the FabOS simulator component will therefore only require describing the necessary information in submodels. 

 

Asset Administration Shells and submodels for network simulation 

For the specification of a network simulation scenario, corresponding AASs and AAS submodels are defined. For each existing (and desired) device or application in the production plant, there is at least one AAS and one AAS submodel describing the properties of this device or application. In a scenario where the network is to be simulated, properties such as “data rate” and “network address” need to be defined for each device and application that is part of the network.

 

Additionally, the relationship between two nodes has to be specified. In this scenario, a relationship between nodes means specifying which two devices, two applications, or one device and one application exchange data. Such a relationship is defined in a Topology submodel. In order to specify service quality attributes, one could also utilize a specific submodel in which expected quality values are defined together with the corresponding assets. For example, one could determine that the maximum delay accepted by a SCADA application for sending a packet to a PLC is 10 milliseconds, as illustrated in Figure 1. In this way, the simulated values could be compared after the simulation with the expected values. Therefore, simulated scenarios could support quality engineers in identifying possible ways of improving, for example, the performance of a network. A minimal sketch of what the content of such a submodel could look like is given below.
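The following Python snippet illustrates the kind of content such a quality submodel could carry. The structure and property names are illustrative assumptions on our part, not the normative AAS meta-model or a finished FabOS submodel template.

```python
# Hypothetical content of a quality submodel; all names are illustrative only.
scada_quality_submodel = {
    "idShort": "NetworkQuality",
    "properties": {
        "dataRate": {"value": 250, "unit": "kbit/s"},
        "networkAddress": {"value": "192.168.0.42"},
    },
    "relationships": [
        {
            "first": "SCADA_Application",  # source node, as in the Topology submodel
            "second": "PLC_Device",        # target node
            "maxDelayMs": 10,              # expected quality value (cf. Figure 1)
        }
    ],
}

def meets_expectation(relationship: dict, simulated_delay_ms: float) -> bool:
    """Compare a simulated delay against the expected quality value."""
    return simulated_delay_ms <= relationship["maxDelayMs"]
```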

 

 

Figure 1 – AAS submodel of a SCADA application including quality attributes 

 

Integrating simulation tools with the FabOS simulation components 

The benefits of the simulation component developed in FabOS will not be limited to the integration of submodels. The FabOS component and its interfaces will be defined with reusability in mind. Thus, partners can provide their own simulation tooling and integrate it with corresponding application programming interfaces. 

For the realization of the Simulation Base Service (SBS) and the required architecture-relevant service facilities it will be based on, we will deal with the following work items in the context of FabOS:

 

1. Definition of the interfaces between the SBS and the simulator component 

2. Design of a data model for exchanging required information between the SBS and the simulator component 

3. Concept for the unified description of quality properties and application requirement criteria 

4. Concept for the integration of and access to executable simulation models in AASs (e.g., FMU) 

 

Based on the corresponding specifications, we will implement and test the SBS in combination with simulator components in actual application contexts as provided by our FabOS industry partners. This allows us to consider real-world data produced by industrial plants for the purpose of evaluating quality aspects. 

With respect to an SBS-compatible simulator component, there are some general requirements that need to be met (a hypothetical adapter interface is sketched after the list):

 

  • The simulator must support the inclusion of simulation configurations that describe the architecture, parameters, and quality properties of the system. 
  • The simulator must support the execution of simulation scenarios based on variable configurations in order to perform simulation runs to produce simulation results. 
  • The simulator must support the generation and provision of results in a portable format in order to enable the simulation base service to retrieve them for post-processing. 
  • The simulator can support the integration of heterogeneous simulation models provided by external entities such as co-simulators in order to perform holistic simulation scenarios. 
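As an illustration only, a simulator adapter satisfying these requirements could look roughly like the following Python sketch. The class and method names are assumptions on our part, since the actual SBS interfaces are still being defined (see work item 1 above).

```python
from abc import ABC, abstractmethod

class SimulatorAdapter(ABC):
    """Hypothetical adapter contract mirroring the requirements above."""

    @abstractmethod
    def load_configuration(self, config: dict) -> None:
        """Accept a simulation configuration describing the architecture,
        parameters, and quality properties of the system."""

    @abstractmethod
    def run_scenario(self, scenario_id: str) -> None:
        """Execute a simulation run for the given scenario."""

    @abstractmethod
    def export_results(self) -> dict:
        """Return results in a portable format so the simulation base
        service can retrieve them for post-processing."""
```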

 

Conclusion 

The simulation base service will enable users of FabOS to leverage simulations for evidence-based engineering. To support the SBS, various submodel types will be defined in FabOS, describing relevant properties of the system to be analyzed. Additionally, the SBS and its interface will be defined in order to enable FabOS users to integrate their own simulation solutions with the help of adapters.

 

Author: Frank Schnicke, Adam Bachorek, Tagline Treichel

Firma: Fraunhofer IESE


Short Summary

This article describes a method for the development of business models (BM) within a platform-based ecosystem. The method was developed as part of the FabOS research project and validated in workshops with the project partners. The focus of this article is on the application of the developed approach within the industrial research project, which combines several already established business model methods.

 

Article

Introduction 

The successes of large platform providers illustrate the enormous economic potential of platforms in the business-to-customer (B2C) area. Measured in terms of company value, the seven largest listed platform companies are more valuable than the sum of all listed companies in the Euro Stoxx 50 [1-3]. Traditionally product-oriented BM are increasingly evolving into data-centric approaches enabled by digitalization technologies [4, 5].

Classically, the value proposition consisted of the transfer of ownership of the physical product by selling it to the customer. The customer was responsible for the costs of operating and maintaining the machine or system. The networking and collection of product data in smart products is enabling a radical transformation of this long-standing business model. With the additional information, manufacturers can predict and reduce failures and optimize operation, creating entirely new product performance and holistic service offerings, such as the product-as-a-service model.

The platform economy is considered key to the implementation of data-centric approaches in a digitalized economy and thus enables this transformation process by building digital ecosystems [1, 4].  

A special feature of the FabOS project is the high number of 25 project partners in the interdisciplinary consortium. The consortium consists of potential users from mechanical and plant engineering, various research institutions, and software development companies. Already in the development phase, the FabOS project aims to establish continuity and a future economic operation. For this reason, the question arises as to how the later economic operation of the FabOS platform can be brought into line with the individual utilization possibilities of the partners. To answer this question, a structured and customer-oriented approach was developed. It takes into account the development of BM both at the specific product and service level and at the generic, overarching industry level.

 

Common definitions describe a BM as the way a company creates value for its customers and makes money in the process. In the literature, there are a variety of methods and approaches for BM development. For example, Osterwalder [6] uses nine elements to describe a BM in the so-called Business Model Canvas (BMC), while Gassmann's Business Model Navigator [7] reduces these to four elements. 

Due to limitations of the BMC, such as the exclusion of external forces and market factors from ecosystems or the strong focus on only one dimension of the BM (the value proposition), this tool alone is not sufficient for platform business model development. The Business Model Navigator by Gassmann [7] also has limitations: its focus is strongly concentrated on the customer, and other stakeholders that are necessary in platform ecosystems are not considered in BM development. For this reason, this tool alone is likewise not sufficient. The limitations of both instruments at the time of method development call for a new, separate procedure, which is explained in the following part.

 

Method 

The method consists of two parts: first, the development of the individual BM for the products of the individual ecosystem participants, and second, the development of an operator model and BM for the future FabOS platform. The elaborated results of each phase are used to drive the content of the respective next phase and to adapt it individually to the partners.

 

1. Individual interviews 

In the first phase, a questionnaire is developed to record the requirements of the individual partners and to capture the individual product and service ideas. Partners have the opportunity to contribute targeted content to the BM workshop. 

 

2. Individual BM workshops for partners - customers and BM patterns 

The individual participants may have different functions in their companies and different knowledge backgrounds. In order to create a common basis for understanding BM, the first part of the workshop provides an introduction to the topic based on the work on BM levels [8] and the Business Model Canvas [6], and presents BM examples from industry. Subsequently, the participants discuss which role they occupy in the ecosystem and who their potential customers are. After this classification, the customer value proposition method [9] is used to discuss the partner’s own product and its added value for the target customer. Subsequently, the participants discuss which BM they consider implementable. The 55 BM patterns of the Business Model Navigator [7], extended by five additional BM patterns for the digital market, serve as a basis for discussion here [10]. The results of the individual interviews from no. 1 support the pre-selection of 10 BM patterns, which are discussed in dialog with the participants regarding their feasibility. Finally, the pre-selected BM patterns are evaluated in a structured way with respect to the fulfillment of the requirements of the Business Model Navigator [7].

 

3. BM Workshop (FabOS) including the legal operator model - Part 1. 

The operator model of an ecosystem describes the organization of the ecosystem participants into one or more legal entities, e.g., associations or cooperatives. These legal entities perform various tasks, such as license management of the products, standardization of technical solutions, organization of further work, etc. Thus, the operator model of the ecosystem is to be seen as the basis of the BM of both the individual partners and the platform of the ecosystem. Therefore, the operator model must first be developed for the ecosystem before finalizing the ecosystem's BM. The results from no. 2 are used to identify which BM patterns are of interest to the individual partners of the ecosystem. Based on this, different operator models and legal forms for the ecosystem are described. The exemplary operator models are presented to the ecosystem participants. For each example, advantages, disadvantages, and obstacles of a possible implementation of the operator model for the ecosystem participants are discussed.

 

4. Individual BM workshops for partners - further dimensions 

As an introduction, the findings of the first workshop from no. 2 are discussed and compared with how the products, services, or customer requirements have changed in the meantime. The evaluated list of possible BM patterns from no. 2 serves as a basis for discussion. Subsequently, the missing dimensions of the Business Model Canvas [6] are worked out with the participants. Based on these missing dimensions of the BMC, five topic blocks (Necessary Resources, Cost Calculation, Partner Selection, Sales Concept, and Pricing and Revenue Models) are presented in modular form as short keynotes and finally discussed with the help of guiding questions. For each participant, the focus is on different modules. The selection of focus topics is based on the results of the individual interviews from no. 1 and the first part of the workshop from no. 2. Finally, the BMC is completed together with the participants.

 

5. BM Workshop (FabOS) including the legal operator model - Part 2. 

The second part of the workshop for the ecosystem participants on the topic of operator model and BM builds on the results of the workshops from no. 2 and no. 3. Within this workshop, a self-defined operator model canvas is to be completed within the consortium. This canvas provides assistance in considering four central elements for the definition of an operator model: offer & task (purpose), actors & competencies, statutes, and financing. A superordinate BM for the overall project will be created based on this. In addition, a concept will be developed for how the BM of the individual partners can integrate into the final operator model and BM. The operator model and BM will be presented to the consortium in a workshop, and final discrepancies between the requirements of the individual partners and the proposal will be discussed in a subsequent moderated discussion.

 

Conclusion 

The method presented in this article is used for the development of individual business models for a wide range of partners in the consortium and for the project itself. The first part of the method shows that individualized BM development for single ecosystem participants and for the whole platform is possible even in heterogeneous research projects, through the modularized combination of common BM development methods. Thus, the advantage of the developed method lies in linking the individual BM requirements of the participating companies with the overall business model for FabOS. The modular approach to discussing the BMC dimensions and the priority given to the customer perspective made it possible to respond more intensively to the needs of the participants. The initial individual interviews proved to be an essential part of the methodology; overall, the method provides a structured approach and integrates all partners in a targeted manner.

 

[1] Hildebrandt, A.; Landhäußer, W.: CSR und Digitalisierung. Der digitale Wandel als Chance und Herausforderung für Wirtschaft und Gesellschaft. Berlin, Heidelberg: Springer Gabler 2021 

[2] Statista: Größte Internetunternehmen nach Marktwert weltweit 2019. Internet: https://de.statista.com/statistik/daten/studie/217485/umfrage/marktwert-der-groessten-internet-firmen-weltweit/. Zugriff am 04.03.2022 

[3] finanzen.net: EURO STOXX 50 Marktkapitalisierung Liste. Internet: https://www.finanzen.net/index/euro_stoxx_50/marktkapitalisierung. Zugriff am 04.03.2022 

[4] Pflaum, A.; Schulz, E.: Auf dem Weg zum digitalen Geschäftsmodell. HMD Praxis der Wirtschaftsinformatik 55 (2018) 2, S. 234–251 

[5] Michael E. Porter; James E. Heppelmann: Wie smarte Produkte den Wettbewerb verändern. Harvard Business Manager (2014) 12 

[6] Alexander Osterwalder: The Business Model Ontology: a proposition in a design science approach 2004 

[7] Gassmann, O.; Frankenberger, K.; Choudury, M.: Geschäftsmodelle entwickeln. 55 innovative Konzepte mit dem St. Galler Business Model Navigator. München: Hanser 2017 

[8] Schallmo, D. R. A.: Theoretische Grundlagen der Geschäftsmodell-Innovation – Definitionen, Ansätze, Beschreibungsraster und Leitfragen. In: Schallmo, D. R. (Hrsg.): Kompendium Geschäftsmodell-Innovation. Wiesbaden: Springer Fachmedien Wiesbaden 2014, S. 1–30 

[9] Osterwalder, A.; Pigneur, Y.; Bernarda, G. et al.: Value proposition design. How to create products and services customers want. Hoboken: John Wiley & Sons 2014 

[10] Altenfelder, K.; Schönfeld, D.; Krenkler, W. (Hrsg.): Services Management und digitale Transformation. Wiesbaden: Springer Fachmedien Wiesbaden 2021

 

Author: Stephan Nebauer

Firma: Fraunhofer IPA 


Short Summary

In the "FabOS" research project, Fraunhofer IPA is working with partners to develop, among other things, a bin picking application that enables improved detection, gripping, and defined placement of sheet metal parts. 

 

Article

Bin picking is considered the supreme discipline of robotics and is a sought-after capability in many production facilities. However, the challenges are considerable, and the application is often not implemented. There are two typical reasons for this. In many cases, bin picking cells are the first link in a linked production or assembly line, so they must provide a guaranteed cycle time. Often, however, the robot system does not detect all the parts, so employees have to remove the remnants manually. This throws the line out of sync. In addition, the emptier the crate, the longer it often takes the robot system to detect and grip the parts in it. The fluctuations in cycle time can be compensated for either by worst-case design or by buffers. Neither is ideal.

 

In order to solve these problems, Fraunhofer IPA has been advancing the technologies surrounding bin picking for many years. The researchers are particularly focusing on solutions for workpieces that are difficult for the robot system's image processing to detect. The newly emerging demonstrator therefore implements the bin picking application with sheet metal parts. The use case was defined with the industrial partner in the project, the company Trumpf, which also provides the workpieces. The IPA experts are developing the solution together with the company Compaile. This demonstrator is part of the "FabOS" research project.

 

Sheet metal parts in view 

The task of the IPA experts is to adapt their algorithms for object localization in bin picking to the challenges of sheet metal part detection. To do this, they are using the existing bp3™ software, which is already in use in several production facilities in three-shift operation and which companies can purchase via a license. In order to recognize the flat sheet metal parts well, cameras are first used to generate 3D data of the workpieces. The algorithms then focus on surfaces and edges to better recognize the workpieces and handle them more robustly and quickly overall. This also includes defined placement so that the component can be fed directly to the next process step.

 

In the future, it is planned to use AI methods to enable continuous learning. This means, for example, that the software would learn from missteps. If there are several bin picking cells, the data from all the cells could be processed centrally and insights from this fed back to the cells. There are also plans to train the robot system using simulated data in a virtual environment. 

 

Identifying workpieces automatically 

Compaile complements the application with AI-based workpiece identification. This is not based on conventional image processing, but on a content-based similarity comparison of the workpieces. Based on neural networks, the workpieces can be mapped to existing drawings. In addition, the neural networks indicate how likely it is that their estimation is correct. By using this technology, the system can adjust itself fully automatically to a new production batch without a worker having to specify the current workpiece. In contrast to the usual classification with neural networks, the content-based similarity comparison does not require any adjustments for new, previously unknown workpieces. The sketch below illustrates the general principle.
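While Compaile's concrete implementation is not detailed here, the general idea of a content-based similarity comparison can be sketched as follows: images and drawings are mapped by a neural network into a common feature space, and identification reduces to finding the most similar drawing embedding. The function names and the use of cosine similarity are illustrative assumptions, not Compaile's actual method.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_workpiece(image_embedding: np.ndarray,
                       drawing_embeddings: dict) -> tuple:
    """Match a workpiece image against known drawings by similarity.

    `image_embedding` and the values of `drawing_embeddings` are feature
    vectors assumed to come from the same (unspecified) neural network.
    Returns the best-matching drawing id and its similarity score.
    """
    scores = {
        drawing_id: cosine_similarity(image_embedding, emb)
        for drawing_id, emb in drawing_embeddings.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]
```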

 

The described technology of AI-based workpiece identification was already on display at Hannover Messe 2022. Furthermore, the project partners plan to present the entire demonstrator with all associated technologies from Fraunhofer IPA and Compaile in the coming year.  

 

Author: Dr.-Ing. Dipl.-Inf. Felix Spenrath

Firma: Fraunhofer IPA 


Short Summary

Machine Learning (ML) applications and services are what make trained ML models such as deep neural networks useful. They can be hosted in the cloud or on individual devices, at the edge. Building, deploying, and maintaining those smart applications drifts away from standard application management because of the evolving nature of AI, which enables the software to learn and improve over time. The following article presents the specific problems addressed by applied Machine Learning from the industrial perspective. The main guiding ideas are nowadays grouped under the common term “MLOps”. Yes, but at the edge, please.

 

Article

When we talk about Machine Learning, we often refer to the process of training a model (e.g., a deep neural network) with batches of data thoughtfully gathered, analysed, pre-processed, and occasionally labelled: the training datasets. When the training is complete, the trained model is evaluated against metrics specifically selected for the use case and measured on a dedicated test dataset. These help the user to assess the quality of the model and the accuracy to expect from it on the task it has been trained for. If expectations are met, the model is usually serialised and stored on disk, ideally versioned in a model registry where future consumers can retrieve it along with all the useful information there is to know about it. If you’ve come that far, well done! But that isn’t yet the end of the ML lifecycle.

 

In order to be used and perform their mission in the real world, ML models need applications to run in. Those applications are the pieces of code that operationalize the model and, generically, do the following (a minimal sketch follows the list):

 

  1. collect input data from sensors, cameras, microphones, databases, the internet, etc. 
  2. pre-process the data into the shape required by the model 
  3. run the inference, a.k.a. run the model with the previously prepared input data 
  4. gather the model results or predictions 
  5. do something with them, e.g., display or expose the prediction results or trigger some action. 
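As a minimal sketch, the five steps could look like this in Python; `read_sensor`, `model`, and `act` are placeholders for the concrete data source, trained model, and downstream action, and the pre-processing is an arbitrary example.

```python
import numpy as np

def preprocess(raw: np.ndarray) -> np.ndarray:
    """Step 2: bring the raw input into the shape the model expects
    (placeholder: scale to [0, 1] and add a batch dimension)."""
    return (raw.astype("float32") / 255.0)[np.newaxis, ...]

def serve_forever(read_sensor, model, act):
    """Steps 1-5 as a loop over placeholder components."""
    while True:
        raw = read_sensor()               # 1. collect input data
        x = preprocess(raw)               # 2. pre-process the data
        prediction = model.predict(x)     # 3. run the inference
        label = int(prediction.argmax())  # 4. gather the model result
        act(label)                        # 5. trigger some action
```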

 

The development of such applications commonly follows a DevOps path of updates, unit tests, and releases. Well-known platforms such as GitHub or GitLab support the collaborative work and give clear, safe management of the application source code among the developer community. Yet this is to be distinguished from the ML model training and evaluation path, which the ML community has analogously named MLOps. Unlike developing source code, training ML models involves dealing with large amounts of data that Git-based platforms aren’t designed to work with. Yet some tools dedicated to handling the ML case have emerged. Based on the fundamentals of git code versioning, they are also able to track all modifications and updates brought along by the ML training process, from dataset generation to model evaluation batches. One example experimented with within FabOS is Data Version Control (see https://dvc.org/).

 

Ultimately, the DevOps and MLOps processes need to hook into each other in order to create a ready-to-deploy ML application. We can easily see two obvious cases when it would be time to rebuild an ML app: after a new release of the application source code, or after a new release of the ML model. Things become a little more mixed up when the user decides, as is their right, to operate different models with the same generic application. Having an HTTPS REST API able to generically serve any TensorFlow-based image classification model is a wise approach to centralise the application development effort around a common source code base. This also means for us that the best time to join a trained model and application code is at deployment time.

 

The FabOS MLOps initiative proposes to solve that problem by designing a deployment service based on Docker container technology. In essence, it consists of mounting the serialised ML model into a pre-built Docker base image of the application. Both components, the model and the base application image, are picked up from their respective stores and registries; the very nature of Docker technology then makes it easy to deploy and orchestrate the newly built ML application container on any system hosting a container runtime such as Docker, docker-compose, or Kubernetes. The sketch below illustrates the idea.
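A rough sketch of this idea using the Docker SDK for Python is shown below; the image name, registry, and mount paths are placeholders, not the actual FabOS artifacts.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Hypothetical names: base image, registry, and paths are placeholders.
container = client.containers.run(
    "registry.example.com/ml-app-base:1.0",  # pre-built application image
    detach=True,
    volumes={
        # mount the serialised model from its store into the container
        "/models/classifier/v3": {"bind": "/app/model", "mode": "ro"},
    },
    environment={"MODEL_PATH": "/app/model"},
)
print("started ML application container:", container.short_id)
```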

 

 

 

Several options now present themselves regarding the deployment:

  • make the ML app available to potential users, for instance in a shared container registry or a marketplace, who are then responsible for running it on their infrastructure 
  • deploy the ML app in the cloud, for example as a hosted web application 
  • deploy the ML apps at the edge, remotely and over-the-air, to target devices managed from a centralised administration layer. 

 

The latter case best corresponds to the industrial scenario that FabOS aims to provide a solution for. It requires, however, a robust, secure, and scalable device management system able to administer and manage the fleet of connected devices. Additionally, it shall grant the user the possibility to know at any time where which ML model and application versions are deployed, what the status of the deployment is, and how the ML performance evolves over time.

 

Author: Rémi Ang

Firma: SOTEC GmbH & Co KG


Short Summary

This article describes how Machine Learning can be used in productive systems on the shopfloor by combining edge devices specialised for AI with a robust and versatile software stack to embed Machine Learning applications.

 

Article

It has been several years since Artificial Intelligence (AI) and its subfield Machine Learning (ML) outgrew the status of a potentially useful technology and became a proven strategic technology for production environments. Driven both by considerable progress in developing more complex and more versatile ML algorithms and by the development of hardware components that enable the resource-intensive execution of those algorithms, FabOS intends to close the gap and enable AI for the widest range of industrial use cases.

Making AI useful is not just about evaluating any type of data structure; it is about the interaction of many services and components that need to communicate efficiently on robust (edge) devices.

In the case of SOTEC, the answer to those challenges is the ML-stack running on hardware specifically designed to execute those complex ML algorithms: the CloudPlug edge+ (CPe+).

The CPe+ is a powerful edge device equipped with an edge TPU that can execute inference with complex ML models within a few milliseconds. Furthermore, this device comes with a range of interfaces through which it can directly interact with production systems in order to gather and evaluate data, and also push data to the cloud if desired.

 

 

Combining all those functionalities within one monolithic application is not in line with modern software development and is, furthermore, not suitable for the highly dynamic production environment. Reusability and exchangeability of software components are the foundation for serving a broad range of Machine Learning use cases and adapting to changing conditions.

As a perfect fit for this versatile edge device, SOTEC developed the ML-stack within the scope of FabOS. Connected through suitable interfaces, containerised microservices communicate with each other, serving the end-to-end use case of Machine Learning within the production environment.

Starting with the acquisition of both structured and unstructured data through sensor systems, the data are evaluated by the ML model, whose complex mathematical operations are mapped to the integrated edge TPU. After the inference results have been obtained from the ML model, they are fed into a logic unit that decides which further actions are to be taken by an actuator, finally closing the loop between the production machine and the ML service. Thanks to the MQTT-based communication within the ML-stack, the stack can be flexibly extended by adding further services or removing unnecessary ones; a minimal sketch of such a microservice follows.
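The following sketch shows how one such containerised microservice could look in Python with paho-mqtt. The broker address, topic names, and the trivial threshold "inference" are illustrative assumptions; the real ML-stack's topics and edge-TPU inference are not published here.

```python
import json
import paho.mqtt.client as mqtt

# Illustrative names only; the actual ML-stack configuration differs.
BROKER = "localhost"
SENSOR_TOPIC = "sensors/raw"
RESULT_TOPIC = "ml/results"

def on_message(client, userdata, msg):
    """Consume a sensor sample, run a (placeholder) evaluation, publish."""
    sample = json.loads(msg.payload)
    # stand-in for the edge-TPU inference of the real ML service
    result = {"anomaly": sample.get("vibration", 0.0) > 0.8}
    client.publish(RESULT_TOPIC, json.dumps(result))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(SENSOR_TOPIC)
client.loop_forever()  # each microservice in the stack runs such a loop
```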

 

 

By combining a suitable edge device with a generically applicable software stack, FabOS and SOTEC close the gap between the mere research of complex algorithms in laboratory environments and the actual use of those algorithms within production environments, thereby making Artificial Intelligence accessible to a broad spectrum of use cases and users.

 

Author: Daniel Hartmann, Rémi Ang

Firma: SOTEC GmbH & Co KG

Article

During production processes, tool wear is a constantly occurring factor influencing the quality of the workpiece. Tool wear affects manufacturing costs, workpiece quality, and process safety. At the current state, tools are replaced either at specific service intervals or based on the machine operator’s experience [1]. As the wear of the tool leads to constant changes in the manufacturing behavior, the machine operator needs to attend to the manufacturing process almost continuously. By its nature, tool wear is one of the cost drivers of the machining industry. In addition to the cost of replacement tools, further costs emerge from equipment downtime for tool changes, the rejection rate of already produced workpieces due to the tool’s state, and the personnel costs of the machine operator [2].

 

From a technical point of view tool wear depends on many different process parameters, which include cutting velocity, chip thickness and the properties of workpiece and tool material. Due to the strongly varying machining conditions for different processes, general solutions for tool wear detection/prediction are difficult to develop [2]. 

 

For measuring tool wear, two different methods can be distinguished: direct and indirect measurement [3]. Indirect methods usually require the integration of additional sensor systems into the machine tool. These so-called retrofit solutions often include dynamometers, accelerometers, acoustic emission sensors, and current/voltage sensors. In the field of machine tool condition monitoring, these solutions are already well established. Using deep learning techniques on the acquired sensor data, the prediction of tool wear has already been implemented in various research studies [2]. One of the biggest flaws of these techniques is their applicability to only one experimental setup: when process parameters such as the geometry of the workpiece to be manufactured change, the developed model is rarely still applicable. A general solution for varying manufacturing processes has not been developed yet.

 

Direct measurement methods use optical sensors for tool wear detection. The biggest flaw of these solutions tends to be the evaluation of the pixel data by human operators. They are responsible for choosing the relevant wear form and for taking measurements “by hand”, i.e., by evaluating the digital images. An exemplary measurement quantity is the width of the flank wear land [4]. The combination of classical computer vision techniques, which are efficient, transparent, and optimized for their specific purposes, with methods from the area of deep learning presents a promising approach to integrating reliable tool wear detection into machine tools. This approach offers both generality via deep learning and specialization via computer vision [5].

 

Tool wear detection is a texture-based recognition task rather than an object identification task. In industrial surroundings, different influences must be taken into consideration, for example: changing light exposure, different coating colors, changes in tool orientation, dirt on the camera lens caused by cooling lubricants, and different refractions due to varying tool geometries.

 

A frequently used class of techniques for detection in computer vision are feature detectors. Features are parts of the image which contain some information about its content. Examples of features are single points which exhibit a specific property, like a color, or whole edges and objects which are recognizable in the image. Examples of feature detection techniques in image processing are “Sobel”, “Canny”, and the “active contour method” [6-8]. These techniques are often used in tool wear detection by extracting the contour edges of the cutting tool and measuring the differences between different states of the tool’s lifecycle; a minimal edge-extraction sketch is given below. To give a more in-depth technical view, the following figures show a milling tool which was used during the experiments for a first version of a FabOS tool wear prediction model.
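For illustration, a minimal edge-extraction sketch with OpenCV could look like this; the image path is a placeholder, and the thresholds are arbitrary example values rather than the tuned parameters of the FabOS model.

```python
import cv2
import numpy as np

# "tool_edge.png" is a placeholder for a grayscale image of the cutting edge.
gray = cv2.imread("tool_edge.png", cv2.IMREAD_GRAYSCALE)

# Sobel gradients in x and y; their magnitude highlights the tool contour.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.convertScaleAbs(np.sqrt(gx**2 + gy**2))

# Canny as an alternative detector with (example) hysteresis thresholds.
edges = cv2.Canny(gray, 50, 150)

# Comparing such contours between different states of the tool's lifecycle
# yields a measure of wear progression.
```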

 

Figure 1 shows the lower end of the milling tool used, captured by a camera integrated into the machine room. The depicted red frame marks the edge of the cutting tool relevant for tool wear detection.

 

 

The following images show the enlarged tool edge and the resulting contour after applying the Sobel technique to the image. This is an early version which is not yet optimized; therefore, there is still recognizable noise around the contour of the tool edge.

 

 

 

The next paragraph deals with the detailed use case integration of the tool wear prediction application in the context of FabOS.

 

With a focus on the trend of digital transformation in production technology, FabOS aims to exploit the potential to make machines intelligent through real-time sensor connectivity and data processing. To define the requirements and test the FabOS platform, use cases such as online monitoring of tool wear are implemented in the FabOS project. 

 

 

In Figure 4, the integration of the machine and the vBox [9] into the FabOS ecosystem is depicted. The vBox serves here as an edge system, complementary to the machine tool, to connect the machine and sensors to the FabOS platform via uniform interfaces. The platform is used to offload data- and computation-intensive applications related to machine learning methods to off-premise or cloud compute facilities. At the same time, latency-critical applications, such as control algorithms, can be provided on edge hardware close to the process with short transmission paths. Cloud integration provides the ability to build cross-site databases, so that analytics can be transferred, together with their results, to other sites, production steps, and plants. In addition, FabOS offers the possibility of providing services for data-based analysis on the hardware best suited to their requirements profile. FabOS offers networked system solutions for this purpose and manages the software and hardware components with the help of Asset Administration Shell based self-descriptions.

 

These effective connectivity solutions allow the building of complex and computationally challenging AI solutions like tool wear detection. The service-based architecture enables fast and uncomplicated deployment of new or improved models.

 

[1] Schwenzer, M.; Miura, K.; Bergs, T., 2019. Machine Learning for Tool Wear Classification in Milling based on Force and Current Sensors. IOP Conf. Series: Materials Science and Engineering 520

[2] Bergs T., Holst C., Gupta P., Augspurger T., 2020. Digital image processing with deep learning for automated cutting tool wear detection, 48th SME American Manufacturing Research Conference, NAMRC 48 

[3] Jeon, J.U., Kim, S.W., 1988. Optical flank wear monitoring of cutting tools by image processing. Wear 127 (2), 207–217. 

[4] International Standard. ISO 8688-2: Tool Life Testing in Milling - Part 2: End Milling. International Organization for Standardization, 32 pp.

[5] O'Mahony, N., Campbell, S., Carvalho, A., Harapanahalli, S., Velasco-Hernandez, G., Krpalkova, L., Riordan, D., Walsh, J., 2020. Deep Learning vs. Traditional Computer Vision. 2194-5357 943.

[6] Canny, J., 1986. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8 (6), 679–698. 

[7] Kanopoulos, N., Vasanthavada, N., Baker, R.L., 1988. Design of an image edge detection filter using the Sobel operator. IEEE J. Solid-State Circuits 23 (2), 358–367. 

[8] Kass, M., Witkin, A., Terzopoulos, D., 1988. Snakes: Active contour models. Int J Comput Vision 1 (4), 321–331. 

[9] Fraunhofer vBox [Online] https://www.ipt.fraunhofer.de/de/kompetenzen/Produktionsmaschinen/praezisionstechnik-und-kunststoffreplikation/vbox.html 

 

Author: Pierre Kehl, Tim Geerken

Firma: Fraunhofer IPT


Short Summary

This article describes the use of modern machine learning techniques in industrial manufacturing processes: it introduces the concept of anomaly detection in production and presents first steps of integrating these solutions into the FabOS environment.

 

Article

Today's industrial production faces various tasks and challenges, such as increasing quality requirements and product complexity, constant cost and innovation pressure, and the change from mass products to customer-specific products. An efficient method to tackle these tasks is the use of artificial intelligence (AI) [1-3]. AI can already be used sensibly in today's production: for condition monitoring tasks, especially in predictive maintenance applications, and for decision support in adaptive process optimization, for example through the integration of pattern recognition algorithms or neural networks. However, for AI to be used effectively, the production infrastructure must fulfil various requirements. First, data availability needs to be ensured [4]: access to sensors, machines, and processes must be granted, reliable, and synchronized. Furthermore, the data must be available in high quality; therefore, semantic descriptions are needed for an easy integration of new data sources. This is where standardized interfaces and data structures form the backbone for a wider use of AI in industrial production. Within this blog post, a short introduction to one of the industrial applications of FabOS shall be presented. This use case comprises anomaly detection within an industrial milling process.

 

A possible definition of an anomaly is given by Douglas Hawkins: “an [anomaly] is an observation which deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism.” [5] The identification of these deviations describes a central problem in machine learning techniques within the field of industrial applications.  

 

Since the occurring anomalies are highly connected to the corresponding applications, it is nearly impossible to find consistent definitions or create universal models across different tasks, domains, or machines; this aspect sets anomaly detection apart from other machine learning problems [6]. Additionally, the noise inherent in the data is a common difficulty in the application of anomaly detection.

 

Within the industrial use cases, the most common type of anomaly detection is so-called point anomaly detection. Here, anomalies occur as points within the data which do not conform to the accepted normal behavior. In the context of machine learning, there exist different methods to implement anomaly detection procedures. The most common techniques for unsupervised anomaly detection problems, i.e., problems where the ground truth for training the models is not known, are nearest neighbor-based, clustering-based, and statistical methods. Nearest neighbor techniques use neighborhood properties of the data points to assign an anomaly score; the basic assumption is that normal data points lie in dense neighborhoods, while anomalies/outliers lie in sparse neighborhoods (see the sketch after this paragraph). Clustering methods learn clusters from the given datasets and assign an anomaly score based on the relationship to the nearest cluster; the general assumption here is that anomalous points do not belong to a cluster or are very distant from the nearest cluster representative. Statistical methods estimate a model from the data and apply statistical evaluations of the probability. These kinds of methods are applicable if the normal instances without anomalies can be modeled via statistical distributions [6].
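As a small illustration of the nearest-neighbor idea, the following sketch uses scikit-learn's LocalOutlierFactor on toy data; the real use case would operate on the acquired sensor features, and the parameters here are arbitrary example values.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Toy stand-in for process data: a dense cluster of "normal" samples
# plus a few sparse outliers; real data would be sensor features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.normal(6, 1, size=(5, 2))])

# Nearest-neighbor-based detection: points in sparse neighborhoods are
# labeled -1 (anomaly), points in dense neighborhoods are labeled 1.
lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)
print("detected anomalies at indices:", np.where(labels == -1)[0])
```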

 

If process knowledge is already available, classification methods have proven to be very effective techniques for learning classifiers from the training data and applying labels or scores to test data. These methods can be divided into one-class methods, which classify points either into one class or into none if an anomaly is detected, and multi-class models, where points which do not belong to the normal classes are classified as anomalous. Basic models for classification are support vector machines (SVM), neural networks (e.g., autoencoders), Bayesian models, or rule-based systems [6].

 

Modern machine tools for metal cutting are used in industrial production chains for turning, milling, and drilling operations. Depending on the respective area of application, they appear in different degrees of automation. To produce single parts and small series, a standard CNC machine with automatic tool change is usually sufficient. As the number of pieces increases, further expansion stages, for example to a machine tool center or cell, become economical. Multi-machine systems, i.e., flexible manufacturing systems, are mostly used in mass production, as they offer significant economic advantages due to a high degree of automation. These can produce workpieces efficiently in 24/7 series production. However, the higher the degree of automation, the less flexibly the systems can respond to changes. This means that especially in mass production, the need for data-driven systems is high in order to be able to react autonomously to process-dependent changes across machines. Figure 1 shows an example of a milling tool. Sensor-based systems that use recorded process data for condition monitoring or adaptive optimization purposes add significant value to automatic process analysis and manufacturing, thus contributing to the increased productivity needed to meet the requirements of today's production processes.

 

 

The state-of-the-art machine tools described are designed for productivity, functionality, and accuracy. The mechanical design as well as the machine control is technically advanced; it is designed for functionality, but not for adaptivity and connectivity. This allows only limited retrofitting of adaptive solutions or control of further complementary solutions. For automation, usually only a few solutions are available, which is why in-depth production expertise is still required. Finally, this is also because the machine-integrated sensor technology is usually rudimentary, can largely not be addressed or read out externally, and is subject to severe limitations in terms of sampling rates and accuracy.

 

The presented use case within the FabOS project is a three-axis machine tool of the type DMG HSC-55 at Fraunhofer IPT. It is equipped with additional vibration sensors, acoustic emission sensors, and an industrial microphone. The following figure shows the positions of the exemplary vibration and acoustic emission sensors on the machine’s spindle axis.

 

 

For data acquisition, the Fraunhofer vBox is used. This is a sensor data acquisition unit which provides different connectors for various sensor types. Its internal electronics support sampling rates of up to 100 kHz. The sampled sensor data is transferred to an additionally connected IPC, which is housed within the machine cabinet.

 

As the specific use case, the manufacturing process of a small turbine-blade-like model was chosen. By its nature, this process tends to show high vibrations when the manufacturing process is not optimized. This makes it an optimal use case for the application of anomaly detection, since the manufacturing of normal parts, which fulfill the required workpiece quality, and the manufacturing of “bad” parts can easily be adjusted.

 

For the creation of the machine learning model for anomaly detection, already acquired data from the manufacturing processes will be used. During the course of the FabOS project, the model shall be integrated into an ML pipeline to provide online anomaly detection during running manufacturing processes. In an initial version, active process interaction will not be possible; therefore, detected anomalies will only be shown as a warning message to the operator on the screen mounted next to the machine. At the current stage, the model is still under development and is trained using offline data.

 

[1]  K. Ahlborn, G. Bachmann, F. Biegel, J. Bienert, S. Falk, A. Fay, T. Gamer, K. Garrels, J. Grotepass, A. Heindl und J. Heizmann, „Technology Scenario ‘Artificial Intelligence in Industrie 4.0',“ 2019. [Online]. Available: https://www.plattform-i40.de/IP/Redaktion/EN/Downloads/Publikation/AI-in-Industrie4.0.pdf?__blob=publicationFile&v=5. 

[2] T. Wuest, D. Weimer, C. Irgens und K.-D. Thoben, „Machine learning in manufacturing: advantages, challenges, and applications,“ Production & Manufacturing Research, Bd. 4, Nr. 1, p. 23–45, 2016. 

[3] A. Diez-Olivan, J. Del Ser, D. Galar und B. Sierra, „Data fusion and machine learning for industrial prognosis: Trends and perspectives towards Industry 4.0,“ Information Fusion, Bd. 50, Nr. 2, p. 92–111, 2019. 

[4] S. Jeschke, C. Brecher, H. Song und D. B. Rawat, Hrsg., Industrial internet of things, Cham: Springer, 2017, p. 715. 

[5] D. M. Hawkins, “Identification of Outliers” – Monographs on Statistics and Applied Probability, p.1, 1980 

[6] V. Chandola, A. Banerjee, V. Kumar, “Anomaly Detection”, (ed) Encyclopedia of Machine Learning and Data Mining, 2016 

 

Author: Pierre Kehl, Tim Geerken

Firma: Fraunhofer IPT


Short Summary

A wizard that helps you automate the generation of Data Driven Services. 

 

Article

Shortage of skilled workers in mechanical and plant engineering  

According to Handelsblatt, HR managers in mechanical and plant engineering complain about a shortage of academics in 81% of cases and a shortage of skilled workers in 90% of cases. Technological change, driven by digitization and the mobility transition, is said to create attractive jobs, while many employees are going to retire [1].

 

One approach to keeping the burden of the skilled worker shortage on the competitiveness and ability of companies to act to a minimum is the automated monitoring of production processes and the prediction of maintenance work. For this purpose, machine data is recorded and models are trained with the sensor values: the so-called Data Driven Services.

 

What can Data Driven Services do?  

Data Driven Services can monitor the condition of a machine and thus detect tool wear at an early stage based on abnormalities in the sensor values, in order to minimize defective production, ensure quality, and protect the machine. Worn cutting tools and faulty components can thus be detected automatically. With a Predictive Maintenance Service, maintenance work can even be predicted in order to avoid downtimes and to plan maintenance in such a way that production is hindered as little as possible. With the Predictive Quality Service, the production parameters are monitored and, if necessary, optimized. In this way, the productivity of a machine can be increased. In addition, predictions can be made as to how the machine condition will affect product quality.

 

With the increasing number of sensors that have been installed in industrial machines for several years, enormous amounts of data are already accumulating, which all too often are not used at all or must first be prepared for emerging questions.

 

Will the problem of the shortage of skilled workers simply be shifted from mechanical and plant engineering to the field of data science and data engineering, where qualified personnel are also desperately sought? This is exactly where we start with our goals for FabOS:

 

Automated generation of Data Driven Services  

With our wizard, we bring the AI to your data, virtually. The aim of the wizard is that you can use your data profitably even without a data science department. You will also not need additional support from IT. The wizard is designed in such a way that it allows your domain experts, i.e., machine builders or machine operators, to independently create high-quality Data Driven Services and then put them directly into operation.  

 

Data integration via the FabOS operating system  

You can use the FabOS operating system to connect your machines and integrate the data. The wizard offers a graphical user interface via which the user first selects the respective machine and the desired Data Driven Service. The Data Driven Services are described in the user interface to make it easier for you to choose one. All relevant data sets are then suggested. The user is a subject matter expert for his machine and therefore knows which sensor data is necessary to monitor the condition of the machine for the respective application, and selects it accordingly. In addition, the length of the history can be adjusted to exclude characteristic changes in the data set. These can be caused, for example, by changed environmental conditions or changes in production. This is a crucial step in data cleansing. After selecting the data, further automated preprocessing takes place. Here, conventions resulting from the data integration via FabOS are exploited.

  

Wizard powered by Auto-ML  

Meta-learning analyzes the available data and preselects the algorithms that have provided promising results for similar data sets. Thus, the actual AutoML procedure is able to build complex pipelines for the given problem very efficiently [2]. It will determine the best possible model with the ideal hyperparameters after just a few iterations over different algorithms. If you are interested, all steps carried out in AutoML can be traced using the XAutoML tool newly developed by USU [3]. Additional transparency into the reasoning of the model is provided by an automatically generated decision tree that describes the overall model. In addition, it is possible to explain predictions for individual data points. These measures increase the human acceptance of the Data Driven Service and reduce the risk of misbehavior of the algorithm. A generic sketch of the underlying search idea follows below.
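The wizard's actual AutoML procedure is described in [2]; purely as an illustration of the underlying idea of automated model and hyperparameter search, a generic stand-in with scikit-learn could look like this (toy data, arbitrary search space, and no meta-learning):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Toy data as a stand-in for integrated machine data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Randomized search over hyperparameters; the real wizard additionally
# builds whole pipelines and preselects candidates via meta-learning [2].
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [50, 100, 200],
                         "max_depth": [3, 5, None]},
    n_iter=5, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```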

 

Service Lifecycle Management  

The user can now choose whether to accept the model with the best metric or opt for another one. In order to reduce the time to go-live of the Data Driven Service, the wizard offers further assistance functions. It supports the user during deployment, where you can choose between cloud and edge. Finally, the live data of the machine is fed into the deployed model. The predictions are visualized on a dashboard, where the user also has explanations for the predictions available. The service is monitored by an AI supervisor that can provide extra resources under load. In addition, the AI supervisor issues an alert if the prediction quality drops. This can very often be due to changes in environmental conditions or a change in production control. In that case, the model needs to be retrained, whereby the wizard can support again.

 

[1] https://www.handelsblatt.com/politik/konjunktur/nachrichten/fachkraeftemangel-maschinenbauer-wollen-personal-aufstocken/27843750.html?ticket=ST-340117-T6GmkIysmEDH4uXTcWeg-ap6  

[2] Zöller, M.-A., Nguyen, T.-D., & Huber, M. F. (2021). Incremental Search Space Construction for Machine Learning Pipeline Synthesis. International Symposium on Intelligent Data Analysis, 103–115. https://doi.org/10.1007/978-3-030-74251-5_9  

 [3] Zöller, M.-A., Titov W., Schlegel T. & Huber, M. F. (2022). XAutoML: A Visual Analytics Tool for Establishing Trust in Automated Machine Learning. https://doi.org/10.48550/arXiv.2202.11954 

 

Author: Carolin Walter

Firma: USU Software AG


Article

With Industry 4.0, intelligent and connected factories, so-called smart factories, are a bright vision of the future. Currently, however, companies are still often faced with the challenge of digitizing workflows and processes in a way that promotes efficiency without sacrificing flexibility and usability.

 

This problem is easily explained using the example of components from the manufacturing industry. At the moment, many companies are not yet talking about smart machines and processes. However, flexible production systems are often operated, meaning that the components produced by a system change on a daily, hourly, or even minute-by-minute basis. What initially sounds good and flexible, however, also harbors problems.

 

On the one hand, employees often do not know which component they are dealing with, because the construction plans change so quickly; this significantly increases the risk of confusion. On the other hand, the finished components then have to be assigned to the correct customers or projects in a time-consuming process using paper lists, or markings have to be applied to the components from the outside (e.g., using a laser) in order to be able to recognize afterwards which component it is. This procedure is error-prone and should no longer be necessary. Artificial intelligence should enable part identification that recognizes the respective parts in real time and assigns them to the appropriate blueprint.

 

Currently, there is no solution on the market that allows the identification of arbitrary free-form parts. At the moment, there are only partial solutions for specific, pre-defined components. However, these have the disadvantage that it is very time-consuming and cost-intensive to add and train further components, which makes these solutions extremely inflexible. A solution that remains flexible and can identify arbitrary free-form parts without further training on the data does not yet exist on the market, but it is essential for the concept of full digitization or the smart factory and an important point for the competitiveness of German companies.

 

Content based similarity comparison for component identification 

This innovation is being developed by Compaile Solutions GmbH within the FabOS research project. The aim of the project is the content-based (not optical) similarity comparison of unknown components based on neural networks. This enables the assignment of arbitrary components to the corresponding construction plans as well as the use of this and similar technologies with networked edge and cloud computing. It makes production more flexible and less prone to errors and enables individual components to be manufactured much more cheaply and quickly. A machine that is flexible and can act in real time as a result saves companies valuable warehouse space, as components are created when they are needed and there is no need for pre-production. Furthermore, this technology can also be extended to quality assurance issues.

 

 

Increased quality through AI in industry 

With the help of AI, more reliable quality checks can be performed during and after production. The advantage compared to currently used camera systems or manual quality control is speed and flexibility. Unlike a classic camera system, an AI is not dependent on a specific position or orientation of the components in order to detect defects. Even complex or new components can easily be taught to the AI and subsequently analyzed by it within seconds. In conjunction with fully automated production, the necessary steps can be taken directly to rectify detected defects before the affected parts are processed further and major damage occurs.

 

Author: Kaja Wehner

Firma: COMPAILE Solutions GmbH

 
