Modeling Randomness
Updated: Jun 22, 2020


We are all presented with new technologies, or new uses for existing ones, every day. As engineers, manufacturers, or decision makers, we must decide whether a technology will enhance what we do and whether it is worth the investment. In our last post we discussed virtual commissioning, a vertical look at a technology with very specific applications. Today we will take a horizontal look across the spectrum of the Digital Twin and discuss how randomness impacts many aspects of our manufacturing view.
Modeling Randomness
As engineers, we abhor randomness, but at the same time we realize it is one of the reasons we all have jobs. If everything fit neatly into our spreadsheets or equations, the solutions would be much simpler and we would need fewer engineers. Given that, much of our job is trying to limit randomness and understand how to manage its impact. Examples of randomness include the direction and magnitude of the forces that can be exerted on a part, the random occurrence of equipment breakdowns in our manufacturing systems, and the variability of purchase demand from our customers.
As our modeling of these situations improves, we can provide more efficient solutions rather than relying on safety factors. In short, we are trying to minimize and manage the uncertainty in our processes. To do this, we must have data whose quality we can quantify and trust.
History
Earlier I alluded to finite element analysis (FEA), discrete event simulation, and production scheduling as examples of engineers dealing with randomness. Each has a long history in engineering: FEA was first referenced in the 1950s, discrete event simulation gained prominence in the late 1960s, and production scheduling methods go back over one hundred years. Each of these disciplines requires years of study, and they differ greatly in their methods. The important thing to realize is that these fields existed largely separate from one another until the last decade, when new toolsets emerged that allow consistent, quality data to exist in a common framework.
How do I Model Randomness?
It is easy to think of a Digital Twin as a static replica of a planned future or existing system, but it is not static at all. It is an ever-changing model of our design and manufacturing concepts. I like to think of these systems as alive and breathing, with the depth and frequency of the breathing analogous to the amount of change. Think of a production line that is required to make 60 parts per hour. At its best it can make 70 parts per hour, but there are occasional interruptions in production.
We introduce some in-process buffers to let portions of the line run while others are being repaired. This will not only increase our production but also make it more consistent. The consistency of the production is like its pace of breathing. If I monitor the hourly production in a simulation, I may see a range from 52 to 68 parts per hour with an average of 60 across 1000 simulated hours. In a different configuration of the same process, I could see the same average of 60 jobs per hour, but the range could be a much tighter 58 to 62 jobs per hour. In our analogy, the first system is more frantic, breathing harder, while the second system is strolling through the park.
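To make that comparison concrete, here is a minimal Python sketch (the hourly counts are made up for illustration, not data from a real line) that summarizes hourly production the way the paragraph above describes: two configurations with the same average but very different spreads.

```python
import statistics

def summarize(hourly_counts, label):
    """Print the mean, range, and spread of hourly production counts."""
    print(f"{label}: mean {statistics.mean(hourly_counts):.1f} jobs/hour, "
          f"range {min(hourly_counts)}-{max(hourly_counts)}, "
          f"std dev {statistics.pstdev(hourly_counts):.2f}")

# Hypothetical hourly job counts from two simulated configurations.
frantic = [52, 68, 55, 64, 61, 58, 66, 54, 63, 59]  # breathing hard
steady  = [60, 61, 59, 60, 58, 62, 60, 61, 59, 60]  # strolling along
summarize(frantic, "Configuration A")
summarize(steady, "Configuration B")
```

Both lists average exactly 60 jobs per hour; only the spread differs.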
We use simulations to measure the variation and how likely it is. These models drive us to the points in our systems where a reduction in variation will help us the most. Whether it is the size of a buffer that levels out hourly variation, the production batch size used to help us hit a scheduled delivery date, or the property of a specific material that helps dampen a wider range of vibration, we use simulation models to measure and reduce variation by understanding the random events and forces that act upon our systems.

The Reality
What are the challenges with modeling randomness? Often the errors we make in engineering are not related to the exactness of our answers but rather to the inaccuracy of our assumptions. We are so enamored with how cool our software is that we forget the basic mantra of garbage in, garbage out. This leads us to make what I call exact mistakes.
Our detailed analysis is correct. We can quantify our level of confidence in how many forklifts we need, we know the amount of force required for the product to fail, and we can say what day that large order of custom parts will be done. What we have failed to realize is that the forklift specifications have changed, the material used to construct the part has been updated, and the customer changed the size of the custom part order. We succeed at the micro level but often fail with respect to the big picture. For the carpenters out there, it's like cutting a piece of wood and being off by exactly one inch.
Having a common data model or database helps us avoid data expiration. The data we are given to build our models starts to age from the instant we get it. If we take too long or do not stay in touch with our sources, the likelihood of the data being inaccurate increases. Having everyone work from the same data means any data update has the potential to help everyone. A material change in a product design ripples through a manufacturing process that now requires more time for welding; this dictates the need for an additional welding robot and reduces the floor space available for in-process buffering. A design cadence that used to take weeks to wend its way through different engineering disciplines is now near instantaneous. This efficient data communication is what drives the productivity all companies desire.
The Practical View
To tie the conversation together, here is an example of a basic simulation model with five serial processors, each trying to make 60 jobs per hour, with a buffer size of one between the operations.

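For readers who want to experiment, below is a minimal Python sketch of such a line. It is not the model behind the results that follow, just an illustrative serial-line recursion with blocking that assumes normally distributed cycle times, an infinite supply of raw material, and no breakdowns; the function simulate_line and its parameters are our own naming for illustration.

```python
import random

def simulate_line(n_stations=5, buffer_size=1, mean_cycle=60.0,
                  sigma=3.0, n_jobs=60_000, seed=None):
    """Serial line with finite buffers and blocking-after-service.

    Cycle times are drawn from a normal distribution, truncated at a
    small positive value so a sample can never go negative.
    Returns average throughput in jobs per hour.
    """
    rng = random.Random(seed)
    # depart[k][i] = time job i leaves station k (stations are 1-indexed;
    # station 0 acts as an infinite source of raw material).
    depart = [[0.0] * (n_jobs + 1) for _ in range(n_stations + 1)]

    for i in range(1, n_jobs + 1):
        for k in range(1, n_stations + 1):
            cycle = max(0.1, rng.gauss(mean_cycle, sigma))
            # Job i starts at station k once the station is free
            # and job i has left the upstream station.
            start = max(depart[k][i - 1], depart[k - 1][i])
            finish = start + cycle
            if k < n_stations:
                # Blocking: job i can only leave station k once job
                # i - buffer_size - 1 has left station k + 1.
                j = i - buffer_size - 1
                blocked_until = depart[k + 1][j] if j >= 1 else 0.0
                depart[k][i] = max(finish, blocked_until)
            else:
                depart[k][i] = finish  # the last station is never blocked

    hours = depart[n_stations][n_jobs] / 3600.0
    return n_jobs / hours

if __name__ == "__main__":
    # Roughly a 1,000-hour run at a 5% standard deviation (3 s on 60 s).
    print(f"{simulate_line(sigma=3.0, seed=1):.1f} jobs per hour")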
To study the effect of randomness, we ran a series of 1,000-hour simulations in which we varied each process's cycle time using a normal distribution (bell curve). Each scenario was run 10 times, for a total of 100 simulations. The width of the curve is defined by its standard deviation, which in this case we varied from 1% to 10% of the mean cycle time.
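A sketch of that experiment loop, reusing the simulate_line function from the sketch above (again, our own illustrative code, not the tool that produced the chart), might look like this:

```python
import statistics

# Vary the standard deviation from 1% to 10% of the 60-second mean
# cycle time, with 10 replications at each level.
for pct in range(1, 11):
    sigma = pct / 100 * 60.0
    reps = [simulate_line(sigma=sigma, seed=rep) for rep in range(10)]
    print(f"std dev {pct:2d}% -> mean {statistics.mean(reps):5.2f} jobs/hour "
          f"(min {min(reps):5.2f}, max {max(reps):5.2f})")
```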
The chart below shows the dramatic effect on throughput caused by varying the standard deviation. For reference, a 5% standard deviation equals 3 seconds on a 60-second cycle, which equates to a range of 54 to 66 seconds at two standard deviations and covers roughly 95% of the sampled cycle times in the simulation.

As you can see, even though every simulation had the identical mean cycle time, the higher the variation, the lower the throughput. Our 5% standard deviation scenario made 58.9 jobs per hour versus the target of 60 jobs per hour with no variation.
Summary
Randomness challenges us to understand our data and to stay current with the data we are using in our analysis. The Digital Twin environment naturally keeps data up to date and makes sure our efforts are not wasted. Using a common data model for very different forms of analysis may seem unnecessary, but the fact that we are all sharing a single source of truth gives us the confidence to make more aggressive and efficient designs.
If you would like more information, please reach out to us to continue the detailed conversation. #DigitalTwin #ForwardVision #Pseudorandom
