Risk Management

Risk Management, or 'Asset Integrity Management', is a mature concept, but it is not always practised with modern, automated methods that produce objective and optimised solutions.

Risk Management is becoming a more complex topic as organisations must consider health & safety, societal & environmental impact, and brand & share-price effects alongside traditional asset management.

Clients have invested heavily in ERP solutions with bolted-on AI/ML features. However, the underlying systems and processes cannot provide true Dynamic Digital Twin capability or help explore the uncertainties challenging the organisation.

Our consultancy can guide you to solutions that deliver cost-optimal operational risk profiles, implementing a modern practice that removes subjectivity and heavy manual intervention, achieves corporate risk targets, and exceeds legislative expectations.
 

Risk Management with a Dynamic Digital Twin

Organisations that operate safety-critical systems, such as petrochemicals, aviation and rail, typically also have a regulator looking over their shoulder to ensure safety is properly managed. However, this creates a difficult dilemma, as operations are focussed on throughput and revenue: expenditure is targeted at revenue generation and maintenance becomes reactive, even where investment has been made in state-of-the-art Enterprise Resource Planning (ERP) systems with built-in predictive maintenance.

What goes wrong?

Risk Management and holistic Asset Integrity Management are not well supported by operationally focussed ERP systems and simplistic predictive maintenance executed per asset. In addition, where Risk Management is implemented, events often override attention to the many smaller assets in favour of the perceived safety-critical asset, especially where that asset is also critical to revenue.

A solution?

1. It’s a probabilistic model: Risk = Probability × Consequence

  • Capture probabilistic elements as well as deterministic elements
  • Capture system/process connections – systems in series fail differently from those with backups, redundancy or parallel paths (see the sketch after this list)
  • Use existing KPIs/measurement tools to ensure the business understands the outputs (KPI framework)
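
As a minimal sketch of point 1 – how series and parallel configurations combine, and how probability and consequence multiply into risk – the following uses entirely hypothetical probabilities and costs:

```python
# Minimal sketch of Risk = Probability x Consequence for a small system.
# All probabilities and consequence values below are hypothetical examples.

def series_failure(probs):
    """A series system fails if ANY element fails."""
    p_survive = 1.0
    for p in probs:
        p_survive *= (1.0 - p)
    return 1.0 - p_survive

def parallel_failure(probs):
    """A parallel/redundant system fails only if ALL elements fail."""
    p_fail = 1.0
    for p in probs:
        p_fail *= p
    return p_fail

pumps = [0.05, 0.05]                  # annual failure probability of each pump

p_series = series_failure(pumps)      # no backup: ~0.0975
p_parallel = parallel_failure(pumps)  # duty/standby pair: 0.0025

consequence = 250_000                 # hypothetical cost of system failure
print(f"Series risk:   {p_series * consequence:,.0f}")
print(f"Parallel risk: {p_parallel * consequence:,.0f}")
```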

2. Probability analysis

  • Use AI/ML to identify lifetime characteristics – probability of failure vs likely time to failure, or duty/use-to-failure curves
  • Use expert opinion and Markov chain/transition matrix models where data does not exist (see the sketch below)
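
Where data is poor, an expert-elicited transition matrix can stand in for a fitted curve. A minimal sketch, with illustrative (not calibrated) states and probabilities:

```python
import numpy as np

# Condition states: 0 = Good, 1 = Fair, 2 = Poor, 3 = Failed.
# Annual transition probabilities elicited from expert opinion (hypothetical).
T = np.array([
    [0.90, 0.08, 0.02, 0.00],   # Good  -> Good/Fair/Poor/Failed
    [0.00, 0.85, 0.12, 0.03],   # Fair
    [0.00, 0.00, 0.75, 0.25],   # Poor
    [0.00, 0.00, 0.00, 1.00],   # Failed (absorbing state)
])

state = np.array([1.0, 0.0, 0.0, 0.0])  # asset starts in Good condition

for year in range(1, 11):
    state = state @ T                   # propagate one year forward
    print(f"Year {year:2d}: P(failed) = {state[3]:.3f}")
```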

3. Consequence analysis

  • Asset database plus expert opinion to define consequence (KPI framework)

4. Build operational digital twin

  • Asset database or Asset Digital Twin populates model
  • Assign each asset its Probability & Consequence from analysis
  • ‘Age’ each asset to forecast future risk and probability of failure (see the sketch below)
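
A minimal sketch of the ageing step, assuming each asset's probability of failure follows a Weibull curve from the probability analysis (the parameters, asset names and consequence values are hypothetical):

```python
import math

def weibull_pof(age_years, shape, scale):
    """Cumulative probability of failure by a given age (Weibull CDF)."""
    return 1.0 - math.exp(-((age_years / scale) ** shape))

# Hypothetical asset records: (name, current age, Weibull shape, scale, consequence)
assets = [
    ("Pump-A",  8.0, 2.5, 15.0, 250_000),
    ("Valve-7", 3.0, 1.8, 20.0,  40_000),
]

horizon = 5  # forecast risk five years ahead
for name, age, shape, scale, consequence in assets:
    pof_now = weibull_pof(age, shape, scale)
    pof_future = weibull_pof(age + horizon, shape, scale)
    # Probability of failing within the horizon, given survival to date
    pof_horizon = (pof_future - pof_now) / (1.0 - pof_now)
    print(f"{name}: P(fail in next {horizon}y) = {pof_horizon:.2f}, "
          f"risk = {pof_horizon * consequence:,.0f}")
```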

5. Test

  • Monte Carlo analysis to ensure the model is stable (see the sketch below)
  • Optimisation with the Tridyn Operational Digital Twin to plan interventions
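
A minimal sketch of the Monte Carlo stability check, using a stand-in portfolio model rather than a full twin (all figures hypothetical):

```python
import random
import statistics

def simulate_portfolio(pofs, consequences, rng):
    """One Monte Carlo trial: sample which assets fail and sum the consequences."""
    return sum(c for p, c in zip(pofs, consequences) if rng.random() < p)

pofs = [0.10, 0.04, 0.25]                 # hypothetical annual PoF per asset
consequences = [250_000, 40_000, 10_000]  # hypothetical failure costs

rng = random.Random(42)
trials = [simulate_portfolio(pofs, consequences, rng) for _ in range(20_000)]

mean = statistics.mean(trials)
sem = statistics.stdev(trials) / len(trials) ** 0.5
print(f"Expected annual risk cost: {mean:,.0f} +/- {sem:,.0f}")
# If the standard error is still large relative to the mean, run more trials
# (or revisit the model) before trusting the outputs.
```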
     

Data rich and data poor

 

Know which KPIs are important 

When creating predictive and prescriptive models, the foundations need to be solid. This means a clear understanding of the KPIs and value drivers within the organisation, as well as which data feeds those KPIs. These need to be expressed, and their impact understood, at each level of the organisation: staff performing operations need their KPIs linked back to the strategic and tactical KPIs used by Directors and Managers. Data collection and analytics then need to capture data at all of these KPI levels so that progress can be monitored and predictive models populated.
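
One lightweight way to make these linkages explicit is to record, for each KPI, its level and the KPIs it feeds. A sketch with hypothetical KPI names:

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    level: str                                  # "operational", "tactical" or "strategic"
    feeds: list = field(default_factory=list)   # KPIs this one rolls up into

# Hypothetical hierarchy linking field work to a board-level measure
availability = KPI("Asset availability", "strategic")
backlog      = KPI("Maintenance backlog", "tactical", feeds=[availability])
jobs_done    = KPI("Jobs completed per day", "operational", feeds=[backlog])
first_fix    = KPI("First-time fix rate", "operational", feeds=[backlog])

def upward_chain(kpi):
    """Trace an operational KPI up to the strategic KPIs it supports."""
    for parent in kpi.feeds:
        yield parent
        yield from upward_chain(parent)

print([k.name for k in upward_chain(jobs_done)])
# ['Maintenance backlog', 'Asset availability']
```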

Historic data analysis for predictive allocation

Analysis of historic data using AI/ML techniques exposes insights hidden within large data sets. For predictive models to be useful, it is typical to join many data sets together and generate multiple outputs. Expressed as charts and graphs on dashboards, these outputs are only descriptive. To make them predictive they must be 'run forward' in some manner: the descriptive output is converted into a mathematical model which, when applied to an asset, job or resource and recalculated, predicts the likely future state - how long a job is likely to take, when an asset is likely to fail, how many resources are needed.
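
A minimal sketch of 'running forward': fit a simple model to hypothetical historic job records, then apply it to an unseen job (real work would use richer AI/ML models than a straight line):

```python
import numpy as np

# Hypothetical historic records: job size (units of work) vs hours taken
job_size  = np.array([2, 3, 5, 6, 8, 10, 12, 15])
job_hours = np.array([3.1, 4.0, 6.2, 7.1, 9.5, 11.8, 13.9, 17.5])

# Descriptive: a scatter chart would merely show the trend. Predictive:
# capture the trend as a mathematical model that applies to unseen jobs.
slope, intercept = np.polyfit(job_size, job_hours, deg=1)

new_job_size = 9
predicted_hours = slope * new_job_size + intercept
print(f"Predicted duration for a size-{new_job_size} job: {predicted_hours:.1f} h")
```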

Dynamic model

A predictive model by itself only answers what will happen to a KPI if we 'carry on as we are'. The organisation's problem is 'what can we do' to improve - especially acute when resources, time or money are limited. This is where a Dynamic Digital Twin comes in. It takes the mathematical outputs of the predictive models and the descriptions of resources, assets, costs and jobs to be done, linked together to describe their dependencies and the processes that deliver them. The Dynamic Digital Twin can then simulate the business processes and calculate how many jobs can be done, what resource is needed where, how much it will cost and what any shortfall may be - for the current business load, or for different loads and conditions.
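
A minimal sketch of the simulation loop at the heart of such a twin, with hypothetical crews, arrival rates and costs (a production twin models far more dependencies):

```python
import random

rng = random.Random(7)

crews = 4                 # field crews available each day (hypothetical)
cost_per_crew_day = 600   # hypothetical fully loaded daily cost
backlog = 0
total_cost = 0
done = 0

for day in range(1, 21):            # simulate four working weeks
    arrivals = rng.randint(2, 7)    # predictive model output: jobs arriving
    backlog += arrivals
    completed = min(backlog, crews) # each crew completes one job per day
    backlog -= completed
    done += completed
    total_cost += crews * cost_per_crew_day

print(f"Jobs done: {done}, backlog remaining: {backlog}, cost: {total_cost:,}")
# Re-running with different crew counts or arrival rates answers
# 'what resource is needed where' and 'what any shortfall may be'.
```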
 

Optimisation and scenarios

The Dynamic Digital Twin enables the organisation to understand the levers it can pull to achieve its aims. Can resource be moved around, or retrained? What lag is there on retraining, and how much will back-filling cost? What happens if we prioritise fixing jobs differently from maintaining assets? These questions, and more 'what if' scenarios, can be asked of the Twin using optimisation AIs and Monte Carlo/Latin Hypercube analysis. These techniques iteratively search through the available actions, with the question defined as a set of goals or targets. This prescriptive analysis builds on all the previous steps, so it reports the predicted KPIs for the scenario, which jobs to address and in what order, and how much that set of actions would cost or generate in revenue. By running several scenarios, different strategies can be compared and the sensitivity of the outputs presented. This provides a rich and powerful picture for Managers to make their decisions. A minimal sketch of such a scenario search follows.
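
The sketch below uses simple random search over candidate intervention plans against a budget; the interventions, costs and risk-reduction figures are hypothetical, and a production system would use dedicated optimisation AIs and Latin Hypercube sampling:

```python
import random

# Hypothetical interventions: (name, cost, risk removed if done)
options = [
    ("Replace Pump-A",    50_000, 110_000),
    ("Refurbish Valve-7",  8_000,  15_000),
    ("Re-line Main-3",    30_000,  60_000),
    ("Survey Tank-2",      5_000,   9_000),
]
budget = 60_000
baseline_risk = 250_000

rng = random.Random(1)
best_plan, best_risk = [], baseline_risk

for _ in range(5_000):                     # random search over action sets
    plan = [o for o in options if rng.random() < 0.5]
    cost = sum(c for _, c, _ in plan)
    if cost > budget:
        continue                           # scenario violates the constraint
    residual = baseline_risk - sum(r for _, _, r in plan)
    if residual < best_risk:
        best_plan, best_risk = plan, residual

print("Best plan:", [name for name, _, _ in best_plan])
print(f"Residual risk: {best_risk:,}")
```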

Operational Digital Twin

How can this be used in an operational setting?

Firstly, the time horizon of the simulations and predictions can be anything: minutes, hours, days, months or years. Work our founders have been involved with has ranged from predicting the impact of rainfall in the coming hours to the impact of water availability over 120 years.

To operationalise the Dynamic Digital Twin we need to:

  1. Understand the operational levers
  2. Ensure we have a regular supply of up-to-date data
  3. Run the Twin to match the pace of the business
  4. Use the outputs, or record where we have deviated

For example, a Twin that manages the scheduling of vans to visit properties would need to be run weekly to understand overall load, and two to four times a day to deal with intra-day issues. It would be linked to live journey times to help re-schedule visits based on traffic. Manual overrides fed in through the day would show where subsequent days need more support, or different skills in different areas. Integrated with operational dispatch systems, automatic re-prioritisation of visits can be managed. The result is an optimal deployment of limited resource and forewarning of likely shortfalls, so that operational managers can take key resourcing decisions earlier.

What are all these acronyms anyway?

Digital Twins, Artificial Intelligence/Machine Learning and Analytics technologies are not off-the-shelf, pre-packaged solutions. Their very definitions are unclear, ranging from 3D building visualisation through complex simulation to dashboards. Providing clarity is part of our mission.


© Tridyn Consulting LLP. All rights reserved.