A Project Manager's Guide

Implementing a Governed OT/IT Data Pipeline for Norwegian Offshore Assets
AUTHOR: Google Gemini 2.5 Preview Deep Research
EDITOR - HARALD BLIKØ - Digitalisation Specialist
Introduction
Executive Summary
The strategic integration of Operational Technology (OT) and Information Technology (IT) represents a pivotal transformation for the Norwegian offshore industry. This guide provides a comprehensive framework for project managers tasked with implementing a real-time OT/IT data pipeline, designed to stream process data from offshore assets to cloud-based services for advanced analytics. The primary objective is to unlock significant business value by enhancing operational efficiency, enabling predictive maintenance, improving safety outcomes, and reducing the environmental footprint of offshore operations.

The success of such a high-stakes initiative hinges on a strategy built upon several foundational pillars.

  • First and foremost is an unwavering commitment to industry standards, with OPC Unified Architecture (OPC UA) serving as the core communication and information modeling backbone. This ensures interoperability, security, and scalability.
  • Second is the establishment of a robust governance framework that defines clear ownership, roles, and processes for managing the infrastructure, systems, and the data itself, thereby building digital trust.
  • Third, the architecture must be designed for high availability and reliability, incorporating redundancy at every layer, from the OT data source to the cloud infrastructure. This is complemented by a comprehensive Business Continuity and Disaster Recovery (BCDR) plan to ensure operational resilience.
  • Finally, a proactive change management program is essential to navigate the human and organisational shifts inherent in such a transformation.
This report is structured to guide the project manager through the entire project lifecycle. It begins by establishing the strategic context and the critical importance of a robust governance and quality management foundation, grounded in the regulatory landscape of the Norwegian Continental Shelf, including directives from the Petroleum Safety Authority (PSA) and adherence to NORSOK and DNV recommended practices. It then moves into the practical phases of project initiation, planning, architectural design, and data modeling. Subsequent sections provide detailed guidance on implementing a defense-in-depth cybersecurity strategy, a comprehensive BCDR plan, and a structured approach to operational assurance and continual improvement. By following this guide, project managers can navigate the complexities of OT/IT convergence, mitigate risks, and deliver a secure, reliable, and value-generating data pipeline that serves as a cornerstone for the future of digital asset management.
Part I
Foundational Strategy and Governance
1.1 From Process Data to Business Value

The fundamental objective of bridging the OT/IT divide is to transform raw, real-time process data into a strategic asset that drives tangible business value. This initiative is not merely a technological upgrade; it is a strategic enabler that facilitates a shift from reactive and preventative operational models to proactive and predictive ones. By establishing a seamless, secure, and contextualised flow of data from offshore assets to cloud analytics platforms, an organisation can unlock significant improvements across its entire value chain.

The key value drivers for this project are:
  • Production Optimisation: Real-time visibility into process parameters allows for continuous analysis and optimisation, maximising throughput and efficiency. Advanced models can identify subtle performance deviations and recommend adjustments to maintain optimal production levels.
  • Predictive Maintenance: Moving beyond scheduled maintenance, the data pipeline enables the use of machine learning models to predict equipment failures before they occur. This reduces costly unplanned downtime, minimises unnecessary maintenance activities, and extends asset life. This aligns with the strategic goals of operators like Aker BP, which explicitly targets "less maintenance" as a key outcome of its digital twin initiatives for assets like Yggdrasil.1
  • Enhanced Safety and Barrier Management: The ability to detect process anomalies and equipment degradation in real-time provides an early warning system for potential safety incidents. This directly supports the core mandate of the Petroleum Safety Authority (PSA) to ensure and improve safety levels through the adoption of new technology.2 Analytics can be used to monitor the health and performance of safety-critical barriers, providing a new layer of assurance.
  • Reduced Emissions: Process optimisation directly correlates with energy efficiency. By fine-tuning operations based on real-time data analytics, an organisation can significantly reduce its energy consumption and, consequently, its carbon footprint. This is a primary strategic goal for major Norwegian operators like Equinor and Aker BP.4
The successful implementation of this strategy is demonstrated by leading operators on the Norwegian Continental Shelf. Equinor's OMNIA platform, built on Microsoft Azure, serves as a powerful blueprint. Their core principle is the creation of a common data platform to break down historical data silos, emphasising that "data without any context is quite useless".4 This highlights the absolute necessity of the data modeling phase of this project, ensuring that raw tag values are enriched with semantic meaning before they reach the cloud. Equinor's use of OPC UA as a standard for contextualisation and their aggregation of 19 OPC UA servers on the Johan Sverdrup field into a single gateway validate the architecture proposed in this guide.4

Similarly, Aker BP's digital twin initiatives for the Yggdrasil and Ærfugl fields, developed in partnership with Cognite and Aize, showcase the end-state application this data pipeline will empower. Their strategy involves creating a "single source of truth" by integrating static engineering data (like 3D models and P&IDs) with the live, real-time OT data stream.5 This living digital representation of the physical asset enables advanced visualisation, collaboration, and data-driven decision-making, fulfilling the ambition of creating a safer, more efficient, and lower-emission operation.1 The explicit use of DNV-RP-A204 in the Yggdrasil project further validates the adoption of the assurance frameworks outlined in this guide as industry best practice.1

1.2 Navigating the Regulatory and Standards Landscape
Executing a project of this nature within the Norwegian offshore sector requires strict adherence to a multi-layered framework of regulations and standards. This framework is not an obstacle but a guide to ensuring the final solution is safe, reliable, and robust. The project plan must be built upon this foundation to ensure compliance and quality from its inception.

  • Petroleum Safety Authority (PSA) Norway: The PSA's regulatory regime is performance-based and technology-neutral, meaning it does not prescribe specific technologies but sets functional requirements for safety. A key principle is that any digitalisation initiative must contribute to improving safety, and the operator must take full "ownership of risk" associated with the new technology.3 For this project, this means demonstrating through a rigorous risk management process how the OT/IT data pipeline enhances safety and how potential new risks (e.g., cybersecurity, data integrity) are controlled. The PSA has a strong focus on ICT safety and expects companies to have robust processes for qualifying and adopting new digital solutions.2
  • NORSOK Standards: The NORSOK standards represent the collective best practices of the Norwegian petroleum industry and are a cornerstone of offshore engineering. While no single standard governs the entire OT/IT data pipeline, several are relevant to the source systems. Standards such as NORSOK I-002 (Safety and Automation Systems), I-005 (System Control Diagrams), and Z-008 (Risk-based maintenance) define the design, operation, and safety context of the OT environment from which the data originates.17 The project's data acquisition strategy must respect the integrity and performance of these safety and control systems, ensuring that data extraction is non-intrusive and does not compromise their primary functions.
  • DNV Recommended Practices: DNV provides the critical frameworks for assurance and quality in the digital domain.
  • DNV-RP-A204 (Assurance of digital twins): This document provides a comprehensive framework for building "digital trust" in complex systems like the one proposed. It outlines auditable requirements for governance, data quality, computational model qualification, and operational assurance.38 Its adoption by Aker BP for the Yggdrasil project confirms its status as a key industry guideline.1
  • DNVGL-RP-0497 (Data quality assessment): This practice provides the methodology for defining, measuring, and managing data quality across syntactic, semantic, and pragmatic dimensions. It is the foundation for the project's data quality assurance program.38
  • ISO Standards: The International Organisation for Standardisation provides the overarching frameworks for quality and data management.
  • ISO 9000/9001: These standards define the principles of a Quality Management System (QMS), including the process approach, risk-based thinking, and continual improvement, which will govern the management of this project.38
  • ISO 8000: Referenced within the DNV practices, this standard provides the formal definitions for data quality, underpinning the entire data quality framework.38
The combined regulatory landscape, encompassing directives from the PSA, recommended practices from DNV, and quality frameworks from ISO, establishes a stringent and non-negotiable requirement for a formal, auditable governance structure. This is not merely a matter of best practice but a foundational component for demonstrating risk ownership and maintaining a license to operate in the Norwegian digital offshore environment.

1.3 Establishing a Robust Governance Framework
A robust governance framework is the cornerstone of this digital strategy, ensuring that data is managed as a critical asset throughout its lifecycle. It provides the structure for accountability, quality control, and security. The project SHALL implement a formal governance model based on the requirements of DNV-RP-A204, Section 10.3.38

1.3.1 Data Governance Policies and Roles
The project charter must include the establishment of formal data governance policies and procedures dedicated to managing the data pipeline.38 Central to this is the definition of clear roles and responsibilities, which bridges the organisational gap between OT and IT:

  • Data Owner: This role is accountable for a specific data domain (e.g., data from a specific process unit or asset). This accountability must reside within the OT/Operations department, as the Data Owner is ultimately responsible for the physical asset and the operational decisions made based on its data. They are accountable for the data's fitness for purpose.
  • Data Steward: Reporting to the Data Owner, the Data Steward is responsible for the day-to-day management of the data. This includes defining data quality rules, establishing the semantic context of data points (e.g., what a specific tag represents, its engineering units), and managing metadata. This is a crucial, hands-on role that requires deep domain expertise.
  • Data Custodian: This is a traditional IT role responsible for the technical management of the data infrastructure, such as the cloud storage, databases, and transport mechanisms. They ensure the infrastructure is secure, available, and performs as required, but they are not accountable for the data content itself.
  • DataOps Unit: This is a cross-functional team, comprising members from OT, IT, and analytics, responsible for the end-to-end operational management of the data pipeline. Their duties include monitoring data flow and quality, responding to incidents, and managing the deployment of new data sources or analytics applications.38

1.3.2 Formalised Procedures
The governance plan must formalise procedures to manage the entire data lifecycle, as required by DNV-RP-A204 (ID 10.3-4).38 These procedures will form the core of the operational run books for the data pipeline:

  • Dataset Configuration: A standardised process for onboarding new data sources. This includes how to control and verify that a new dataset is properly configured in the data platform, from the edge publisher to the cloud consumer; a minimal configuration-record sketch follows this list.
  • Data Quality Assessment: A continuous process for monitoring, reporting, and remediating data quality issues. This involves establishing data quality dashboards with notification functionality to alert the DataOps unit when quality metrics for a dataset drop below predefined thresholds.38
  • Application Integration: A controlled process for integrating new cloud services or analytics applications. This ensures that new consumers are correctly integrated without compromising the data quality or security of the platform.
  • Verification and Validation (V&V): A formal V&V plan to test and approve any changes to the data sources, pipeline components, or consuming applications before they are put into operation. This prevents unintended consequences and maintains the integrity of the production environment.
  • Cybersecurity: The data management procedures must include specifications for cybersecurity at every stage of implementation.38
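As a concrete illustration of the dataset configuration procedure, the following Python sketch shows one possible shape of an onboarding record that captures the governance roles and quality thresholds agreed before a dataset goes live. All field names, role titles, endpoints, NodeIds, and threshold values are illustrative assumptions, not requirements taken from DNV-RP-A204.
```python
from dataclasses import dataclass, field

@dataclass
class DatasetConfig:
    """Onboarding record for one dataset, capturing the governance roles
    and quality thresholds agreed before the dataset goes live."""
    dataset_id: str
    description: str
    data_owner: str          # accountable role in OT/Operations
    data_steward: str        # defines quality rules and semantic context
    data_custodian: str      # IT role managing the hosting infrastructure
    source_opcua_server: str
    node_ids: list[str] = field(default_factory=list)
    max_latency_s: float = 5.0         # pragmatic quality threshold
    min_completeness_pct: float = 99.0
    cybersecurity_zone: str = "DMZ-broker"
    approved_by_vv: bool = False       # set True only after V&V sign-off

# Example onboarding entry (all values illustrative)
sep_gas_flow = DatasetConfig(
    dataset_id="ASSET-A.SEP-01.GAS_FLOW",
    description="Gas outlet flow from 1st stage separator",
    data_owner="Operations Manager, Asset A",
    data_steward="Senior Process Engineer, Asset A",
    data_custodian="Cloud Platform Team",
    source_opcua_server="opc.tcp://edge-gw-a:4840",
    node_ids=["ns=2;s=SEP-01.FT-1001.PV"],
)
```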

1.4 Integrating Quality Management and Assurance
To ensure the project delivers a solution that is fit for purpose and meets all stakeholder requirements, a formal Quality Management System (QMS) must be integrated into the project management methodology. This QMS will be based on the principles of the ISO 9000 family of standards.38

The project will be managed according to the core principles of ISO 9001:2015, which are highly synergistic with modern project management practices.38

  • Process Approach: The entire project lifecycle, from planning and design to implementation and operation, will be viewed as a set of interrelated processes. Each process will have defined inputs, outputs, controls, and resources, allowing for systematic management and improvement.
  • Risk-Based Thinking: The project will proactively identify, analyse, and mitigate risks at every stage. This is not a one-time activity at the project's start but an ongoing process integrated into all decision-making. This aligns perfectly with the risk management frameworks of both the PSA and DNV.
  • Continual Improvement (PDCA): The Plan-Do-Check-Act cycle will be applied to all project activities. This iterative approach fosters organisational learning, allowing the project team to refine processes, improve outcomes, and adapt to changing requirements throughout the project's duration.38
A critical aspect of the QMS is treating documentation as a key quality deliverable. All project artefacts, from requirement specifications and architectural designs to test plans, V&V reports, and operational procedures, will be formally controlled documents. They will be subject to versioning, review, and approval cycles. This approach, which aligns with the documentation requirements in DNV-RP-A204, Section 12, ensures full traceability from requirements to implementation and provides the body of evidence necessary for final system assurance and regulatory acceptance.38
Part II
Project Initiation and Planning
2.1 Organisational Maturity and Readiness Assessment

Before embarking on a complex and transformative project such as this, it is imperative to establish a clear and objective baseline of the organisation's current capabilities. A formal Organisational Maturity Assessment provides this baseline, enabling the identification of potential gaps, risks, and areas requiring targeted investment. This proactive step is crucial for setting realistic expectations and ensuring the organisation is prepared for the journey ahead. The project SHALL conduct this assessment using the structured framework provided in Appendix A of DNV-RP-A204.38

The assessment process, as defined in the DNV framework, involves four key steps 38:
  1. Define Scope: The scope of the assessment is the end-to-end process of developing, implementing, and operating the OT/IT data pipeline. This scope spans multiple departments, including offshore operations, maintenance, IT infrastructure, and the new data analytics function.
  2. Assess Core Capability Areas: The project team, including subject matter experts from each affected domain, will evaluate the organisation's current maturity level on a scale of 1 (Initial) to 5 (Optimised). This evaluation will be conducted against the detailed descriptions for the six core capability areas provided in DNV-RP-A204, A.6.38
  3. Identify Gaps: The current maturity level for each capability area will be compared against a pre-defined target maturity level. For a project of this strategic importance and complexity, a minimum target of Level 3 (Defined) is recommended. An organisation operating below this level lacks the formalised processes and governance required for predictable success.
  4. Create Remediation Plan: For each identified gap, a formal remediation plan SHALL be created. This plan must detail the specific actions required to close the gap, assign clear responsibilities, establish a timeline, and incorporate the necessary change management activities to ensure successful adoption.38
To facilitate this process, the following checklist, derived from the DNV framework, should be used as a key project deliverable.
Table 1: Organisational Maturity Assessment Checklist, based on DNV-RP-A204, Appendix A.38
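The checklist content itself is defined in DNV-RP-A204, Appendix A. As a minimal sketch of how the assessment scores and the resulting gaps might be captured for tracking, the snippet below assumes illustrative capability-area names and scores; the authoritative six areas are those described in A.6.
```python
# Minimal sketch of capturing maturity scores and deriving remediation items.
# Capability-area names and scores are illustrative, not quoted from the RP.
TARGET_LEVEL = 3  # "Defined" - recommended minimum target for this project

current_scores = {
    "Strategy and governance": 2,
    "Data management": 2,
    "Technology and architecture": 3,
    "Competence and culture": 2,
    "Processes and ways of working": 3,
    "Security and risk management": 3,
}

gaps = {area: TARGET_LEVEL - score
        for area, score in current_scores.items()
        if score < TARGET_LEVEL}

for area, gap in gaps.items():
    print(f"Remediation required: '{area}' is {gap} level(s) below target "
          f"(current {current_scores[area]}, target {TARGET_LEVEL})")
```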
2.2 Holistic Risk Management for OT/IT Convergence

A systematic and holistic approach to risk management is fundamental to the success of this project and a core requirement of the PSA.3 The project will adopt the formal risk management methodology detailed in Appendix D of DNV-RP-A204, which is aligned with the internationally recognised ISO 31000 standard.38

This process moves beyond simple risk registers to a comprehensive analysis of threats, consequences, and the barriers designed to control them. The process begins with Establishing Context, where the project boundaries, objectives, and system interfaces are clearly defined. This is crucial for understanding the potential for cascading failures between the OT and IT domains.38

The core of the process is the Risk Assessment, which should be conducted as a series of workshops involving a multi-disciplinary team of OT, IT, safety, and business experts. The Bow-Tie Model is a highly effective tool for this analysis, as it visually structures the relationship between causes and consequences 38:

  • Threats: The potential causes of failure. For this project, threats include technical failures (e.g., sensor drift, network outage), process failures (e.g., misconfigured data model), and malicious actions (e.g., cybersecurity attack on the edge gateway).
  • Top Event: The critical failure that the barriers are designed to prevent. A key top event for this project is: “Corrupted, incomplete, or untimely real-time data is delivered to and used by the cloud analytics platform.”
  • Consequences: The ultimate business impact of the top event. These can be severe, including flawed production optimisation leading to financial loss, incorrect predictive maintenance advice causing catastrophic equipment failure, or a loss of trust in the entire digital system, rendering it useless.
  • Preventive Barriers: These are the controls implemented to prevent the top event from occurring. Examples include implementing OPC UA server redundancy, data quality validation routines at the edge gateway, robust cybersecurity controls on the OT/IT boundary, and rigorous V&V of data models.
  • Mitigating Barriers: These are the controls designed to reduce the impact after the top event has occurred. Examples include automated alerts to the DataOps team for data quality drops, a well-rehearsed BCDR plan for the cloud platform, and clear manual override procedures for operators to disregard analytics-based advice if the data feed is suspect.
To guide the risk treatment process, the project must establish formal Risk Acceptance Criteria. This involves creating a risk matrix (Likelihood vs. Consequence) and defining what constitutes an unacceptable (High), acceptable with mitigation (Medium/ALARP), and broadly acceptable (Low) risk. This provides an objective basis for prioritising risk reduction efforts and allocating resources.38
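A minimal sketch of such acceptance criteria, expressed as a likelihood-versus-consequence matrix, is shown below. The 5x5 ranking and the banding into High, Medium (ALARP), and Low are illustrative assumptions and must be replaced by the project's own approved criteria.
```python
# Sketch of risk acceptance criteria as a 5x5 likelihood/consequence matrix.
# The banding (High / Medium-ALARP / Low) is illustrative only.
def classify_risk(likelihood: int, consequence: int) -> str:
    """likelihood and consequence are ranked 1 (lowest) to 5 (highest)."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("ranks must be between 1 and 5")
    score = likelihood * consequence
    if score >= 15:
        return "High - unacceptable, risk reduction required"
    if score >= 6:
        return "Medium - acceptable only with mitigation (ALARP)"
    return "Low - broadly acceptable"

# Example: credible cyber attack on the edge gateway (likelihood 2) leading to
# corrupted data driving a bad maintenance decision (consequence 5).
print(classify_risk(2, 5))  # -> Medium - acceptable only with mitigation (ALARP)
```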

The outputs of the maturity assessment must serve as a direct input to this risk assessment. A capability gap identified in the maturity assessment (e.g., lack of defined processes for data quality monitoring) represents a significant vulnerability. When combined with a credible threat (e.g., a critical sensor starting to drift), this vulnerability creates a high-risk scenario. The remediation plan from the maturity assessment (e.g., "Develop and implement a data quality monitoring process") thus becomes a formal and mandatory risk treatment action in the risk management plan. This linkage creates a powerful, evidence-based argument for securing resources for foundational activities like process definition and training, framing them as essential risk mitigation activities.

2.3 A Proactive Change Management Strategy
The introduction of an OT/IT data pipeline is as much an organisational and cultural change as it is a technological one. It will alter workflows, introduce new responsibilities, and require new skills. A proactive change management strategy is therefore not an optional add-on but a critical success factor for ensuring the project's benefits are realised. The strategy must address the "people side" of the change to foster adoption, build competence, and minimise resistance.

The first step is a thorough Stakeholder Analysis to identify all individuals and groups who will be affected by the project. This includes offshore operators and maintenance technicians who will interact with data-driven recommendations, onshore process engineers who will use new analytics tools, IT infrastructure teams managing the new pipeline, data scientists developing the models, and all levels of management who will be making decisions based on the new insights.
Following this, a formal Impact Assessment, as part of a Management of Change (MOC) process, must be conducted to analyse how the new system will change existing work processes, roles, and responsibilities.38 This assessment will identify specific changes, such as maintenance technicians needing to trust and act on predictive alerts rather than fixed schedules, or process engineers needing to understand the confidence levels of AI-driven recommendations.

Based on this analysis, a multi-faceted change management plan will be developed.

Communication Plan: A targeted and continuous communication plan will be executed to share the project's vision, benefits, and progress with all stakeholder groups. The messaging will be tailored to address the specific concerns and perspectives of each group, building buy-in and managing expectations.

Training and Competence Development: A comprehensive training plan is essential to close skill gaps and build confidence. This is a direct expectation of the PSA, which requires companies to ensure employees have the necessary expertise to operate new technology safely and effectively.3 The plan must be twofold:
  • Upskilling OT Personnel: Training offshore operators and engineers on their new data-centric responsibilities, including how to interpret analytics dashboards and understand data quality indicators.
  • Domain Education for IT/Analytics Personnel: Educating IT and data science teams on the context and criticality of offshore process data, ensuring they understand the operational environment their models are intended to support.
This proactive approach ensures that the organisation is prepared for the new ways of working, transforming potential resistance into engaged support and ensuring the long-term success of the digital strategy.
Part III
Architectural Design and Data Modeling
3.1 Architecture for High Availability and Reliability
The data pipeline must be designed for exceptional availability and reliability, as it will underpin mission-critical operational decisions. A failure in data flow could lead to significant production losses or compromise safety monitoring. Therefore, the architecture must be resilient to single points of failure at every layer, from the data source in the OT environment to the consuming applications in the IT cloud.

3.1.1 Source Redundancy (OT Layer)
The first line of defense is ensuring the availability of the data at its source. The primary mechanism for this is the native server redundancy feature defined within the OPC UA standard.39 This involves deploying a redundant pair or set of OPC UA servers that can fail over automatically. The project must select an appropriate redundancy mode based on the criticality of the data and the tolerance for failover time.40
Table 2: Comparison of OPC UA Redundancy Modes.40
A critical requirement for any redundancy mode is that all servers in the redundant set SHALL have an identical AddressSpace. This includes identical NodeIds and browse paths. This ensures that when a client fails over, it can seamlessly continue its operations without needing to re-browse or re-map data points, which is essential for rapid and reliable recovery.40
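The client-side behaviour this enables is sketched below, assuming the open-source asyncua Python library and placeholder endpoint URLs and NodeIds: because the AddressSpace is identical on both servers, the client can fail over and keep reading the same NodeId without re-browsing. A production client would also resubscribe and retry the primary; this sketch only shows the priority-ordered failover.
```python
# Sketch of a client-side failover loop across a redundant OPC UA server pair,
# assuming the asyncua library and placeholder endpoints / NodeIds.
import asyncio
from asyncua import Client

ENDPOINTS = [
    "opc.tcp://sas-opcua-a:4840",   # primary (placeholder)
    "opc.tcp://sas-opcua-b:4840",   # backup (placeholder)
]
NODE_ID = "ns=2;s=SEP-01.FT-1001.PV"  # identical on both servers

async def read_with_failover() -> None:
    for url in ENDPOINTS:                      # try servers in priority order
        try:
            async with Client(url=url) as client:
                node = client.get_node(NODE_ID)
                while True:                    # normal operation loop
                    value = await node.read_value()
                    print(f"{url}: {NODE_ID} = {value}")
                    await asyncio.sleep(1.0)
        except (OSError, asyncio.TimeoutError) as exc:
            print(f"Connection to {url} lost ({exc}); failing over")
    raise RuntimeError("No OPC UA server in the redundant set is reachable")

if __name__ == "__main__":
    asyncio.run(read_with_failover())
```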

3.1.2 Transit and Cloud Redundancy (IT Layer)
Resilience must extend beyond the OT source into the network and cloud infrastructure.
  • Network Path Redundancy: The communication path from the offshore asset to the onshore data center should utilise physically diverse network links (e.g., separate fiber optic cables, redundant satellite links) to mitigate the risk of a single path failure.
  • Cloud Geo-Redundancy: The cloud architecture must leverage the provider's native resilience features. This involves deploying infrastructure across multiple, geographically separate locations to protect against data center or even entire regional outages. Key strategies include continuous data replication and automated failover between regions.44
3.2 Blueprint for the Real-Time OT/IT Data Pipeline
The data pipeline architecture is a multi-stage system designed to securely and efficiently transport, contextualise, and process data from the offshore asset to the cloud.

Edge Architecture (Offshore)
  • Data Acquisition: Data originates from OPC UA servers, which may be embedded directly within modern Process Control Systems (PCS) or act as wrappers for older systems.
  • Edge Gateway/Publisher: A hardened industrial PC or server located on the asset will function as an Edge Gateway. This gateway will run software (e.g., Prediktor MAP Gateway, as used by Equinor4) to aggregate data from multiple source OPC UA servers. This central gateway will act as the single OPC UA Publisher for the asset, responsible for creating and sending NetworkMessages to the cloud. This architecture simplifies management and security by creating a single, controlled egress point for OT data. The gateway can also perform initial data buffering, quality checks, and timestamping.
Communication Protocol - OPC UA Publish-Subscribe (PubSub):
  • The project will utilise the OPC UA PubSub communication model, as defined in OPC 10000-14.38 This model is fundamentally different from the traditional Client/Server model and is purpose-built for scalable, one-to-many data distribution. Its primary advantage is the decoupling of publishers and subscribers through a Message Oriented Middleware (MOM). The OT Publisher sends its data to the MOM without any knowledge of who will consume it, which is ideal for bridging the OT/IT security boundary and enabling cloud-native, event-driven architectures.38
  • The recommended transport for bridging the OT/IT divide is a broker-based approach using a standard messaging protocol like MQTT or AMQP.38 A message broker, either self-hosted in a secure network zone (DMZ) or, preferably, a managed cloud service like Azure IoT Hub or AWS IoT Core, will serve as the ingestion point. This architecture provides critical benefits for offshore connectivity, including queuing, store-and-forward capabilities to handle intermittent network disruptions, and a secure, authenticated endpoint for the Publisher.38
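To make the broker-based transport concrete, the simplified sketch below queues samples locally (store-and-forward) and publishes them over TLS with QoS 1. It uses JSON over MQTT via the paho-mqtt library (1.x constructor style) as a stand-in for a full OPC UA PubSub UADP encoder; the broker address, topic, and credentials are placeholders.
```python
# Simplified edge-publisher sketch: values gathered at the gateway are queued
# locally (store-and-forward) and published over TLS to a broker endpoint.
import json
import queue
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"          # e.g. IoT hub / DMZ broker (placeholder)
TOPIC = "asset-a/pubsub/networkmessages"
buffer: "queue.Queue[dict]" = queue.Queue(maxsize=10000)  # rides out short outages

client = mqtt.Client(client_id="edge-gw-asset-a")  # paho-mqtt 1.x constructor
client.tls_set()                                   # server certificate validation
client.username_pw_set("edge-gw-asset-a", "***")   # or client certificates
client.connect(BROKER, port=8883)
client.loop_start()                                # background network thread

def enqueue_sample(node_id: str, value: float, status: str = "Good") -> None:
    """Buffer a timestamped sample; drop the oldest if the buffer overflows."""
    sample = {"nodeId": node_id, "value": value,
              "status": status, "sourceTimestamp": time.time()}
    if buffer.full():
        buffer.get_nowait()
    buffer.put_nowait(sample)

def drain_buffer() -> None:
    """Publish buffered samples with QoS 1 so the broker acknowledges receipt."""
    while not buffer.empty():
        payload = json.dumps(buffer.get_nowait())
        client.publish(TOPIC, payload, qos=1)

# In production a scheduler would call enqueue_sample() from the OPC UA reader
# and drain_buffer() whenever connectivity to the broker is confirmed.
enqueue_sample("ns=2;s=SEP-01.FT-1001.PV", 432.1)
drain_buffer()
```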
Cloud Architecture (Onshore/Cloud):
  • Ingestion: The message broker securely forwards the OPC UA PubSub NetworkMessages to a cloud-native ingestion service (e.g., Azure Event Hubs).
  • Stream Processing: A stream processing engine (e.g., Azure Stream Analytics, Apache Flink) subscribes to the ingestion service. It is here that the binary UADP payload is decoded, data is enriched with context from other sources (e.g., maintenance systems), transformations are applied, and real-time alerts can be generated.48
  • Storage: The processed data is routed to multiple storage destinations based on its intended use:
  • Data Lake (e.g., Azure Data Lake Storage): The raw, contextualised data is stored here for long-term archival and for use by data scientists for historical analysis and training machine learning models.
  • Time-Series Database (e.g., Azure Data Explorer, InfluxDB): A high-performance database optimised for storing and querying time-stamped data, used to power real-time dashboards and operational applications.48
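The sketch below illustrates the stream-processing stage in plain Python: each message is decoded, enriched with context from the asset model, and routed to a hot (time-series) and a cold (data lake) path. In production this logic would be implemented in the chosen engine (for example a Flink job or a Stream Analytics query); all names and sinks here are illustrative placeholders.
```python
# Sketch of the stream-processing stage: decode, enrich with asset context,
# and route to hot/cold paths. Names and sinks are illustrative only.
import json
from typing import Any

ASSET_CONTEXT = {  # in practice loaded from the asset information model / CMMS
    "ns=2;s=SEP-01.FT-1001.PV": {
        "tag": "FT-1001", "unit": "Sm3/h",
        "equipment": "1st stage separator", "area": "Process Area 10",
    },
}

def process_message(raw: bytes) -> dict[str, Any]:
    """Decode one JSON-encoded sample and attach semantic context."""
    sample = json.loads(raw)
    context = ASSET_CONTEXT.get(sample["nodeId"], {})
    return {**sample, **context}

def route(enriched: dict[str, Any]) -> None:
    """Hot path for dashboards, cold path for the data lake / ML training."""
    write_to_timeseries_db(enriched)   # placeholder sinks, not real SDK calls
    append_to_data_lake(enriched)

def write_to_timeseries_db(record: dict[str, Any]) -> None:
    print("hot path  ->", record["tag"], record["value"], record["unit"])

def append_to_data_lake(record: dict[str, Any]) -> None:
    print("cold path ->", json.dumps(record))

route(process_message(
    b'{"nodeId": "ns=2;s=SEP-01.FT-1001.PV", "value": 432.1,'
    b' "status": "Good", "sourceTimestamp": 1750600000.0}'))
```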
3.3 Data Modeling: The Key to Contextualised Analytics
The ultimate value of the data pipeline is determined not by the volume of data it can transport, but by the quality of the information it delivers. Raw data points, such as tag names and numerical values, are insufficient for advanced analytics. They must be enriched with semantic context to become meaningful information. The OPC UA Information Model, defined in OPC 10000-5, provides the standardised framework to achieve this.38 As demonstrated by Equinor's success, this is the "most powerful game changer" for providing context.4

The data modeling process involves creating a digital representation of the offshore asset within the OPC UA Server's AddressSpace:
  • Source Identification: The process begins by analysing engineering documents such as Piping and Instrumentation Diagrams (P&IDs), instrument lists, and control narratives to identify all critical data points and understand their functional relationships.
  • Type Definition: Instead of representing assets as a flat list of tags, the model will use object-oriented principles. Custom ObjectTypes will be defined to represent classes of equipment (e.g., a PumpType, a CompressorType, a WellheadType). Each ObjectType will contain a standardized set of VariableTypes (e.g., Flow, Pressure, Temperature, Vibration, RunningState) and MethodTypes (e.g., Start, Stop). This ensures that every pump on every asset is represented in a consistent way.
  • Instantiation: The OPC UA Server's AddressSpace will be populated with instances of these defined types, creating a structured, digital representation of the physical plant. For example, the physical pump "P-101" will be represented by an Object instance of PumpType.
  • Hierarchy and Relationships: OPC UA References are used to build a browsable hierarchy that mirrors the physical or functional structure of the asset. A HasComponent reference can link a compressor skid to its individual components (motor, cooler, scrubber). An Organizes reference can group all assets within a specific process area. This rich, self-describing model allows any consuming application to understand not just the value of a data point, but what it is, where it is, and how it relates to the rest of the system.
Where possible, the project should leverage existing OPC UA Companion Specifications. These are pre-defined information models for specific industries or equipment types (e.g., pumps, analysers), which further promote industry-wide interoperability.
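As a minimal illustration of the type-definition and instantiation steps, the sketch below uses the open-source asyncua library (an implementation choice assumed here for illustration, not mandated by this guide) to define a PumpType with a small set of mandatory variables and to instantiate it as the pump P-101. The endpoint and namespace URI are placeholders.
```python
# Minimal information-model sketch with asyncua: define a custom PumpType
# ObjectType once, then instantiate it for the physical pump P-101.
import asyncio
from asyncua import Server, ua

async def main() -> None:
    server = Server()
    await server.init()
    server.set_endpoint("opc.tcp://0.0.0.0:4840/asset-a/")   # placeholder
    idx = await server.register_namespace("http://example.com/asset-a")

    # Type definition: every pump exposes the same standard variables.
    base = server.get_node(ua.ObjectIds.BaseObjectType)
    pump_type = await base.add_object_type(idx, "PumpType")
    for name, initial in [("Flow", 0.0), ("Pressure", 0.0), ("RunningState", False)]:
        var = await pump_type.add_variable(idx, name, initial)
        await var.set_modelling_rule(True)   # mandatory on every instance

    # Instantiation: the physical pump P-101 becomes an Object of PumpType.
    p101 = await server.nodes.objects.add_object(
        idx, "P-101", objecttype=pump_type.nodeid)
    flow = await p101.get_child(f"{idx}:Flow")

    async with server:
        while True:
            await flow.write_value(123.4)    # in practice fed from the PCS
            await asyncio.sleep(1.0)

if __name__ == "__main__":
    asyncio.run(main())
```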

3.4 Ensuring Data Quality by Design
High-quality analytics depend on high-quality data. Therefore, data quality assurance cannot be an afterthought; it must be designed into the pipeline from the very beginning. The project will implement the framework provided by DNVGL-RP-0497 to define, measure, and manage data quality.38

For each critical data point in the model, quality metrics will be defined across three dimensions:
  • Syntactic Quality: This measures conformance to the defined format. In the context of OPC UA, this means verifying that the data value matches its defined DataType in the information model (e.g., a value for a Float Variable is indeed a floating-point number).38
  • Semantic Quality: This measures the degree to which the data accurately represents the real-world state. Checks can include range validation (e.g., a temperature reading is within its expected engineering limits) and rate-of-change validation (e.g., a pressure value is not changing faster than is physically possible).38
  • Pragmatic Quality: This assesses whether the data is suitable for its intended use. This includes metrics for timeliness (is the data arriving with sufficiently low latency?), completeness (are there gaps in the data stream?), and freshness (is the timestamp recent?).
These quality checks will be implemented as a distributed function. Basic syntactic and semantic checks can be performed at the edge gateway before data is even published. More complex pragmatic and cross-variable consistency checks can be performed in the cloud stream processing layer. The results of these checks will generate their own metadata stream, allowing consuming applications to understand the quality and trustworthiness of the data they are using.
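A minimal sketch of these checks is shown below, with one function per quality dimension. The engineering limits, rate-of-change limit, and latency threshold are illustrative values that would in practice be defined by the Data Steward for each data point.
```python
# Sketch of the three quality dimensions as edge/stream checks.
# Thresholds and engineering limits are illustrative only.
import time

def syntactic_ok(value) -> bool:
    """Value must match the declared DataType (here: a float)."""
    return isinstance(value, float)

def semantic_ok(value: float, prev: float, dt_s: float,
                low: float = 0.0, high: float = 250.0,
                max_rate: float = 5.0) -> bool:
    """Value must lie inside engineering limits and change at a plausible rate."""
    in_range = low <= value <= high
    rate_ok = abs(value - prev) / max(dt_s, 1e-6) <= max_rate
    return in_range and rate_ok

def pragmatic_ok(source_ts: float, now: float = None,
                 max_latency_s: float = 5.0) -> bool:
    """Sample must be fresh enough for the consuming application."""
    now = time.time() if now is None else now
    return (now - source_ts) <= max_latency_s

sample = {"value": 87.2, "prev": 86.9, "dt_s": 1.0, "sourceTimestamp": time.time()}
quality = {
    "syntactic": syntactic_ok(sample["value"]),
    "semantic": semantic_ok(sample["value"], sample["prev"], sample["dt_s"]),
    "pragmatic": pragmatic_ok(sample["sourceTimestamp"]),
}
print(quality)   # published as a metadata stream alongside the data itself
```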
Part IV
Security, Business Continuity, and Disaster Recovery
4.1 A Defense-in-Depth Cybersecurity Strategy for Converged Systems

The convergence of OT and IT systems introduces new attack vectors and elevates the importance of a comprehensive, multi-layered cybersecurity strategy. A "defense-in-depth" approach is required, implementing controls at every stage of the data pipeline to protect against unauthorised access, manipulation, or disruption.

  • OT Network Security: The foundation of security is robust network segmentation. Process Control Networks (PCN) must be strictly isolated from business and enterprise networks using industrial firewalls. All traffic between these zones must be explicitly permitted, following the principle of "deny by default." Where possible, unidirectional gateways should be considered to enforce a one-way flow of data out of the most critical control networks.
  • Secure Gateway: The OT/IT edge gateway is a critical security chokepoint. It must be a physically and logically hardened device. Its operating system must be securely configured, all unnecessary ports and services must be disabled, and it must be subject to rigorous patch and vulnerability management. Physical access to the gateway on the offshore asset must be strictly controlled.49
  • Cloud Security: The project will leverage the cloud provider's native security services to protect the IT portion of the pipeline. This includes the use of Network Security Groups (NSGs) or Virtual Private Cloud (VPC) firewalls to control traffic flow, robust Identity and Access Management (IAM) policies, and advanced threat detection services that monitor for anomalous activity.49
  • Access Control: The principle of least privilege must be enforced throughout the system. This applies to human users (administrators, operators, data scientists) and machine identities (applications, services). Each entity should only be granted the minimum permissions necessary to perform its function.49 Role-Based Access Control (RBAC) will be used to manage these permissions systematically.
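A minimal sketch of least-privilege enforcement through RBAC is shown below. The role names and permission strings are illustrative assumptions; in practice they would map onto the cloud provider's IAM model and the OPC UA user-authorisation configuration.
```python
# Sketch of least-privilege enforcement via role-based access control.
# Roles and permissions are illustrative placeholders.
ROLE_PERMISSIONS = {
    "edge-publisher":   {"pubsub:publish"},
    "dataops-engineer": {"pipeline:read", "pipeline:restart", "quality:ack-alert"},
    "data-scientist":   {"datalake:read"},
    "cloud-admin":      {"pipeline:read", "pipeline:deploy", "iam:manage"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: only explicitly granted permissions are allowed."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("edge-publisher", "pubsub:publish")
assert not is_allowed("data-scientist", "pipeline:deploy")   # least privilege
```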
4.2 Securing the Data Stream with OPC UA PubSub
In addition to network and infrastructure security, the data itself must be protected in transit. OPC UA PubSub provides a robust, built-in security model for this purpose, offering end-to-end protection from the publisher to the subscriber, independent of the underlying transport protocol.38
The project SHALL mandate the use of the SignAndEncrypt security mode for all NetworkMessages published from the offshore asset. This ensures both:
  • Integrity (Signing): A digital signature guarantees that the message has not been tampered with in transit.
  • Confidentiality (Encryption): The message payload is encrypted, preventing unauthorised parties from viewing the sensitive process data.
The management and distribution of the cryptographic keys required for this process are handled by a critical component of the PubSub architecture: the Security Key Service (SKS).38
  • Role of the SKS: The SKS is a dedicated service responsible for generating, managing, and securely distributing the symmetric keys used for signing and encrypting NetworkMessages. It acts as a central point of trust for the PubSub security domain.38
  • Deployment: For a large-scale deployment, a central, highly available SKS should be established. This SKS can be an OPC UA server itself, providing the standard SKS methods.
Key Management Process: The process for key acquisition and use is standardised:
  1. A system administrator defines a SecurityGroup on the SKS. A SecurityGroup is a logical grouping that defines a common security policy (e.g., Aes256_Sha256_RsaPss) and manages a set of security keys for a specific set of Publishers and Subscribers.38
  2. The administrator sets permissions on the SecurityGroup object within the SKS's AddressSpace. This controls which applications (identified by their application instance certificates) are authorised to retrieve the keys for that group.51
  3. The Publisher and Subscriber applications are configured with the EndpointUrl of the SKS and the SecurityGroupId they need to use.
  4. Upon startup, the Publisher and Subscriber establish a secure OPC UA Client/Server session with the SKS. They authenticate themselves using their application instance certificates.
  5. They then invoke the standard GetSecurityKeys method on the SKS, requesting the keys for their configured SecurityGroupId.38
  6. If authorised, the SKS returns the current security keys. The Publisher uses these keys to sign and encrypt outgoing messages, and the Subscriber uses them to decrypt and verify incoming messages. This process can be further secured by integrating with an Authorisation Service to issue access tokens for the SKS.51
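A minimal Subscriber-side sketch of steps 4 and 5 is shown below, assuming the asyncua library. The SKS endpoint, certificate paths, and SecurityGroupId are placeholders, and the browse path to the PublishSubscribe object and the GetSecurityKeys argument list follow the OPC UA Part 14 definitions but should be verified against the chosen SKS product.
```python
# Sketch of a Subscriber retrieving keys from the SKS via GetSecurityKeys,
# assuming asyncua; endpoint, certificates, and SecurityGroupId are placeholders.
import asyncio
from asyncua import Client, ua

SKS_URL = "opc.tcp://sks.example.com:4840"        # placeholder
SECURITY_GROUP_ID = "AssetA_ProcessData"          # placeholder

async def fetch_keys() -> None:
    client = Client(url=SKS_URL)
    # Authenticate with the application instance certificate (step 4 above).
    await client.set_security_string(
        "Basic256Sha256,SignAndEncrypt,subscriber_cert.der,subscriber_key.pem")
    async with client:
        # Browse to the PublishSubscribe object and call GetSecurityKeys (step 5).
        pubsub = await client.nodes.objects.get_child(
            ["0:Server", "0:PublishSubscribe"])
        result = await pubsub.call_method(
            "0:GetSecurityKeys",
            SECURITY_GROUP_ID,
            ua.Variant(0, ua.VariantType.UInt32),   # StartingTokenId (current)
            ua.Variant(3, ua.VariantType.UInt32),   # RequestedKeyCount
        )
        policy_uri, first_token_id, keys, time_to_next, lifetime = result
        print(f"Received {len(keys)} keys for policy {policy_uri}, "
              f"token {first_token_id}, key lifetime {lifetime} ms")

if __name__ == "__main__":
    asyncio.run(fetch_keys())
```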
4.3 The Business Continuity and Disaster Recovery (BCDR) Plan
The BCDR plan is essential to ensure that the data pipeline and the critical analytics services it supports can withstand and quickly recover from disruptive events, ranging from a single component failure to a full regional cloud outage. The plan must be built around clearly defined objectives and leverage cloud-native DR strategies.

First, the project must define the RTO and RPO for the system47:
  • RTO - Recovery Time Objective: The maximum acceptable downtime for the data pipeline and its consuming services. For real-time operational analytics, this should be measured in minutes or hours, not days.
  • RPO - Recovery Point Objective: The maximum acceptable amount of data loss. For a real-time OT data stream, the RPO should be near-zero, ideally measured in seconds.
The BCDR plan will be a multi-faceted strategy leveraging cloud-native capabilities44:
  • Continuous Data Replication: The core of the DR strategy is the real-time replication of data and critical infrastructure to a secondary, geographically separate cloud region. This geo-redundancy ensures that if the primary region fails, an up-to-date copy of the data is available elsewhere.44
  • Automated Failover: The plan will utilise cloud provider services like Azure Site Recovery (ASR) or AWS Elastic Disaster Recovery (DRS) to orchestrate and automate the failover process. In the event of a disaster, these services automatically switch traffic and processing to the secondary region, minimising the RTO.44
  • Backup and Restore: While replication handles infrastructure failure, backup and restore is critical for recovering from data corruption or logical errors (e.g., a bug in an application corrupts a dataset). Regular, immutable backups of critical configurations (data models, analytics models) and key data snapshots will be stored in a separate, secure location.44
Table 3: Comparison of Cloud DR Strategies.47
For the OT/IT data pipeline, a Warm Standby approach is recommended as a balance between cost and performance, providing a low RPO and a manageable RTO.

The choice of the PubSub architecture provides a significant advantage for disaster recovery. In a DR event where the primary cloud region fails over to the secondary, the offshore OT Publisher is completely unaffected. It continues to publish messages to the same, single broker endpoint. The cloud infrastructure and the broker are responsible for re-routing these messages to the newly active subscriber applications in the secondary region. This decoupling means the OT systems require zero reconfiguration during a cloud DR event, which dramatically reduces the complexity and recovery time of the entire system. This inherent resilience is a key architectural benefit that should be explicitly documented in the BCDR plan.
Part V
Implementation, Operation, and Assurance
5.1 Project Execution and Phased Rollout
A project of this scale and complexity should not be implemented in a single "big bang" approach. A phased, agile rollout is essential to manage risk, incorporate learning, and demonstrate value incrementally.

The project will begin with a Pilot Phase focused on a single, well-understood, and non-critical offshore asset or process system. The goals of this pilot are to:
  • Validate the Architecture: Implement the end-to-end data pipeline on a small scale to prove the technical feasibility of the chosen components and their integration.
  • Test the Data Pipeline: Verify data flow, measure latency, and test the performance of the OPC UA Publisher, message broker, and cloud ingestion services.
  • Refine Data Models and Governance: Use the pilot as a practical exercise to build the initial OPC UA Information Models and test the data governance processes (e.g., data quality monitoring, role assignments).
The maturity of the pilot solution will be tracked using the Technology Readiness Level (TRL) scale defined in DNV-RP-A204, Appendix C.38 This provides a structured way to measure progress from an unproven concept (TRL 0) to a fully field-proven system (TRL 7). The pilot phase is successfully concluded when it reaches a TRL of 6 (System prototype demonstration in an operational environment).

Following a successful pilot, the project will move into the Scaling Phase. The validated architectural patterns, data models, and governance procedures developed during the pilot will be used as templates to accelerate the rollout to other assets. This factory-like approach ensures consistency and quality while reducing the time and effort required for subsequent deployments.

5.2 Qualification and Assurance of Analytical Models
The data pipeline's ultimate purpose is to feed data into computational and analytical models that provide business insights. The trustworthiness of these insights is directly dependent on the quality and reliability of the models themselves. Therefore, the project SHALL include a formal process for the qualification of all computational models, including any machine learning or AI models, based on the requirements of DNV-RP-A204, Section 11.38

This qualification process ensures that models are fit for purpose and that their outputs can be trusted for decision-making:
  1. Model Specification: Before development begins, a formal specification document must be created for each model. This document SHALL detail the model's objective, its system boundary (inputs and outputs), underlying assumptions, potential risks, and the criteria for its verification and validation.38
  2. Baseline Model: For complex models, a simple baseline model SHALL be developed first. This baseline serves as a benchmark against which the performance of the more advanced model can be compared. If the advanced model does not significantly outperform the baseline, its additional complexity may not be justified.38
  3. Qualification Program: Each model must undergo a rigorous testing program in an environment that is as realistic as possible. This involves testing with both synthetic data (to cover edge cases) and historical real-world data. The model's performance will be measured against the quality metrics defined in its specification.
  4. Evaluation and Uncertainty Quantification: The results of the qualification program must be formally evaluated to confirm that the model meets its requirements. Crucially, this process must also quantify the model's uncertainty. No model is perfect, and understanding its margin of error or confidence level is essential for making risk-informed decisions based on its output.38
To ensure proper governance and facilitate reuse, all qualified models, along with their specifications, version history, and qualification reports, SHALL be stored and maintained in a central model library. This ensures that the lineage and quality of every model in production are fully documented and auditable.38
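The sketch below illustrates the baseline comparison and a simple uncertainty quantification on synthetic data: a persistence baseline is compared against a stand-in "advanced" model using mean absolute error, and the advanced model's error is bootstrapped to give a confidence interval. All data, models, and metrics are illustrative; the real qualification programme follows each model's specification.
```python
# Sketch of baseline-versus-advanced comparison with a bootstrap uncertainty
# estimate, on synthetic numbers. Models, data, and metrics are illustrative.
import random
import statistics

random.seed(42)
actual = [80 + random.gauss(0, 2) for _ in range(200)]          # observed values
baseline_pred = [actual[i - 1] for i in range(1, 200)]          # persistence model
advanced_pred = [a + random.gauss(0, 0.8) for a in actual[1:]]  # stand-in ML model

def mae(pred, truth) -> float:
    return statistics.mean(abs(p - t) for p, t in zip(pred, truth))

baseline_mae = mae(baseline_pred, actual[1:])
advanced_mae = mae(advanced_pred, actual[1:])

# Bootstrap the advanced model's MAE to quantify uncertainty in the estimate.
errors = [abs(p - t) for p, t in zip(advanced_pred, actual[1:])]
boot = sorted(statistics.mean(random.choices(errors, k=len(errors)))
              for _ in range(1000))
ci_low, ci_high = boot[25], boot[974]   # ~95% confidence interval

print(f"Baseline MAE: {baseline_mae:.2f}, advanced MAE: {advanced_mae:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f})")
# Qualification gate: the advanced model must beat the baseline by a margin
# larger than its own uncertainty before it is admitted to the model library.
```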

5.3 Operational Monitoring and Maintenance
Once the data pipeline is in production, it transitions from a project to an operational service that requires continuous monitoring and maintenance to ensure its long-term health and reliability. The DataOps unit will be responsible for these activities.

  • Monitoring Dashboards: A set of comprehensive dashboards will be created to provide real-time visibility into the status of the pipeline. These dashboards will monitor:
  • Pipeline Health: Key performance indicators such as end-to-end data latency, message throughput at each stage (publisher, broker, consumer), and the operational status of all pipeline components.
  • Data Quality: Real-time visualisation of the data quality metrics defined during the design phase. These dashboards SHALL include notification functionality to automatically alert the DataOps team when the quality of a data stream drops below its defined threshold, enabling rapid investigation and remediation.38 A minimal alerting sketch is provided after this list.
  • Maintenance Procedures: A full set of operational run books will be developed, documenting the standard operating procedures for:
  • Routine Maintenance: Scheduled activities such as software patching and certificate renewals.
  • Incident Response: Step-by-step procedures for diagnosing and resolving common failure scenarios.
  • BCDR Execution: The detailed plan for executing the failover and failback procedures defined in the BCDR plan.
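As a minimal sketch of the threshold-based alerting described above, the snippet below evaluates one stream's health and quality metrics against configured limits and raises notifications for any breach. Metric names, thresholds, and the notification hook are illustrative placeholders.
```python
# Sketch of a DataOps monitoring check: compare pipeline-health and
# data-quality metrics for one stream against thresholds and alert on breach.
THRESHOLDS = {
    "end_to_end_latency_s": ("max", 5.0),
    "completeness_pct": ("min", 99.0),
    "good_quality_pct": ("min", 98.0),
}

def evaluate_stream(stream_id: str, metrics: dict[str, float]) -> list[str]:
    """Return a list of human-readable breach messages for this stream."""
    breaches = []
    for name, (kind, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            breaches.append(f"{stream_id}: metric '{name}' is missing")
        elif kind == "max" and value > limit:
            breaches.append(f"{stream_id}: {name}={value} exceeds {limit}")
        elif kind == "min" and value < limit:
            breaches.append(f"{stream_id}: {name}={value} below {limit}")
    return breaches

def notify_dataops(messages: list[str]) -> None:
    for msg in messages:         # placeholder for e-mail / chat / ticket hooks
        print("ALERT ->", msg)

notify_dataops(evaluate_stream(
    "ASSET-A.SEP-01.GAS_FLOW",
    {"end_to_end_latency_s": 7.2, "completeness_pct": 99.6,
     "good_quality_pct": 97.1},
))
```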
5.4 Documentation and Continual Improvement
Comprehensive documentation is not just a project close-out activity; it is a critical deliverable that forms the basis for operational assurance and future development. The project SHALL produce a complete documentation package that is structured, version-controlled, and maintained throughout the system's lifecycle, as per the requirements of DNV-RP-A204, Section 12.38

This package will include:
  • Digital Twin Platform Documentation: This covers the asset information model (the OPC UA models), the data management and governance plan, and all security documentation, including penetration test results and user access credentials.
  • Computational Model Documentation: This includes the specification and qualification results for every analytical model that consumes data from the pipeline.
  • Operational Assurance Documentation: This contains the procedures for the ongoing assurance activities, such as the process for periodic reassessment of the system's fitness for purpose and the MoC process for managing the impact of physical asset modifications on the digital system.
Finally, the project must establish a framework for Continual Improvement, in line with ISO 9001 principles.38 This is a feedback loop where insights from operations, including data from the monitoring dashboards, data quality reports, and the performance of the analytics models, are systematically collected and analysed. This analysis will identify opportunities to improve the data pipeline, refine the data models, enhance the analytics, and update the governance processes. Periodic reassessments of the organisation's maturity will also be conducted to identify and prioritise new areas for improvement, ensuring that the digital system evolves and continues to deliver value as the business and its environment change. This ongoing process ensures that the "digital trust" established during the project is not a one-time achievement but a sustained state, actively maintained throughout the asset's lifecycle.
Part VI
Conclusion and Key Recommendations
The implementation of a real-time OT/IT data pipeline for Norwegian offshore assets is a foundational step in the digital transformation of the energy sector. It moves beyond simple data collection to create a strategic capability that enables data-driven decision-making, predictive operations, and enhanced safety. However, the success of this endeavour is contingent upon a disciplined, structured, and holistic approach that balances technological innovation with rigorous governance, security, and quality assurance.

This guide has outlined a comprehensive roadmap for project managers, grounded in the world-class standards and recommended practices of OPC UA, DNV, and ISO, and informed by the real-world experiences of industry leaders like Equinor and Aker BP.

The key recommendations for a successful implementation are:
  1. Establish Governance as the Foundation: Do not treat governance as an afterthought. The formal establishment of data governance policies and the clear definition of roles—particularly the Data Owner within the OT organisation—are the most critical first steps. This structure is the primary mechanism for managing risk and demonstrating compliance with PSA regulations.
  2. Embrace Standards Holistically: The project's architecture, data models, and security must be built upon the OPC UA framework. Specifically, the use of OPC UA Information Models is essential for providing data context, and the OPC UA PubSub model is the superior architectural choice for securely and scalably bridging the OT/IT divide. This standards-based approach ensures interoperability and future-proofs the investment.
  3. Prioritise Digital Trust through Assurance: The frameworks provided by DNV-RP-A204 and DNVGL-RP-0497 should be adopted as the core methodology for project assurance. This includes conducting a formal Organisational Maturity Assessment at the outset to identify and mitigate capability gaps, and implementing a formal Qualification Process for all data and analytical models to ensure their outputs are trustworthy.
  4. Design for Resilience from Day One: High availability and disaster recovery cannot be bolted on. The architecture must be designed for resilience at every layer, combining OPC UA server redundancy at the OT source with geo-redundant, automated failover strategies in the cloud. The decoupled nature of the PubSub architecture should be leveraged as a key enabler of a simplified and robust BCDR plan.
  5. Manage the Human Element Proactively: The transition to data-driven operations is a significant cultural shift. A proactive and well-resourced change management program that includes targeted communication and comprehensive training is essential for building competence, gaining buy-in from all stakeholders, and ensuring the long-term adoption and success of the solution.
By adhering to these principles, project managers can navigate the technical and organisational complexities of OT/IT convergence. The result will be more than just a data pipeline; it will be a foundational digital asset that enhances competitiveness, improves safety, and positions the organisation to lead in the next generation of offshore operations.
Appendix
References
  1. Aker BP seeks operational success on Yggrasil through an advanced digital twin - DNV, accessed on June 22, 2025, https://www.dnv.com/cases/aker-bp-seeks-operational-success-on-yggrasil-through-an-advanced-digital-twin/
  2. Cybersecurity Barrier Management - SINTEF, accessed on June 22, 2025, https://www.sintef.no/en/projects/2021/cybersecurity-barrier-management/
  3. Digitalisation - International Regulators' Forum, accessed on June 22, 2025, https://irfoffshoresafety.com/wp-content/uploads/2020/07/IRF-July-2020-Article-from-Norway-Digitalization.pdf
  4. STRATEGIES FOR DATA-DRIVEN DIGITAL TRANSFORMATION ..., accessed on June 22, 2025, https://2478981.fs1.hubspotusercontent-na1.net/hubfs/2478981/OPCF%20Prediktor%20OPC%20UA%20Success%20Story%20Equinor%20Microsoft.pdf
  5. Digital twins streamline operations - Aker BP, accessed on June 22, 2025, https://akerbp.com/en/digital-twins-streamline-operations/
  6. Ærfugl - Aker BP, accessed on June 22, 2025, https://akerbp.com/en/project/aerfugl-2/
  7. Aker Solutions' Digital Twin Provider — Aize - YouTube, accessed on June 22, 2025, https://www.youtube.com/watch?v=NnvWHTyTuxY
  8. Aker BP bolsters digitalization arsenal by streamlining brownfield asset management with help of digital twins - Offshore-Energy.biz, accessed on June 22, 2025, https://www.offshore-energy.biz/aker-bp-bolsters-digitalization-arsenal-by-streamlining-brownfield-asset-management-with-help-of-digital-twins/
  9. NOA and Fulla – Delivering on the Promise of Digital Transformation - Aker Solutions, accessed on June 22, 2025, https://www.akersolutions.com/what-we-do/projects/noa-and-fulla-delivering-on-the-promise-of-digital-transformation/
  10. Securing Safety in the Norwegian Petroleum Industry with Digital Twins - NTNU Open, accessed on June 22, 2025, https://ntnuopen.ntnu.no/ntnu-xmlui/bitstream/handle/11250/3024592/no.ntnu%3Ainspera%3A107093487%3A30052606.pdf?sequence=1&isAllowed=y
  11. Meld. St. 12 (2017–2018) – Health, safety and environment in the petroleum industry, accessed on June 22, 2025, https://www.regjeringen.no/en/dokumenter/meld.-st.-12-20172018/id2595598/?ch=3
  12. The PSA, AI and ICT safety - Havtil, accessed on June 22, 2025, https://www.havtil.no/en/explore-technical-subjects2/technical-competence/news/2023/the-psa-ai-and-ict-safety/
  13. Following Up on the Digitalization Initiatives in the Norwegian Petroleum Activity: Regulatory Perspective | Request PDF - ResearchGate, accessed on June 22, 2025, https://www.researchgate.net/publication/343482319_Following_Up_on_the_Digitalization_Initiatives_in_the_Norwegian_Petroleum_Activity_Regulatory_Perspective
  14. SAFETY AND RESPONSIBILITY UNDERSTANDING THE NORWEGIAN REGIME - Havtil, accessed on June 22, 2025, https://www.havtil.no/contentassets/c194536bd96f4db58fc696985c4722ff/safety-an-responsibility-2024-nett.pdf
  15. Cybersecurity risks of automated (and autonomous) offshore oil and gas units—the IMO cybersecurity rules framework | The Journal of World Energy Law & Business | Oxford Academic, accessed on June 22, 2025, https://academic.oup.com/jwelb/article/17/6/444/7822176
  16. Cyber threats and the petroleum industry - International Regulators' Forum, accessed on June 22, 2025, https://irfoffshoresafety.com/wp-content/uploads/2018/10/PS5-Cyber-threats-and-the-petroleum-industry-Ueland-PSA.pdf
  17. NORSOK standards - Standard Norge, accessed on June 22, 2025, https://standard.no/en/sectors/petroleum/norsok-standards/
  18. NORSOK STANDARDS AND DATA SHEETS | Study Guides, Projects, Research Design | Docsity, accessed on June 22, 2025, https://www.docsity.com/en/docs/norsok-standards-and-data-sheets/8995818/
  19. NORSOK STANDARD I-001 Field instrumentation - Cloudinary, accessed on June 22, 2025, https://res.cloudinary.com/forlagshuset/image/upload/v1558345462/Learning%20Resources/Automatisering/A2/Kap%201/1_Field_instrumentation_2010.pdf
  20. NORSOK STANDARD R-002 Lifting equipment - American Crane School, accessed on June 22, 2025, https://americancraneschool.com/wp-content/uploads/2025/05/Norway_Crane_Regulations_NORSOK-Standard-R002-Lifting-Equipment.pdf
  21. Control, Electrical, Fire & Gas, Instrument and Safety Instrumentation Systems Standards - Iceweb - Engineering Institute of Technology, accessed on June 22, 2025, https://iceweb.eit.edu.au/about-iceweb/standards/control_electrical_standards.html
  22. NORSOK STANDARD Z-013 Risk and emergency preparedness assessment - Regulations.gov, accessed on June 22, 2025, https://downloads.regulations.gov/BSEE-2018-0002-46042/attachment_20.pdf
  23. NORSOK STANDARD I-002 Safety and automation system (SAS) - Cloudinary, accessed on June 22, 2025, https://res.cloudinary.com/forlagshuset/image/upload/v1558345462/Learning%20Resources/Automatisering/A2/Kap%201/2_Safety_and_automation_system_SAS.pdf
  24. New edition of NORSOK P-002:2023 Process system design, accessed on June 22, 2025, https://standard.no/globalassets/fagomrader-sektorer/petroleum/p-002/norsok-p-002-rev-2023-info-external_sn.pdf
  25. Norsok Design Principles Coding System | PDF | Hvac | Natural Gas - Scribd, accessed on June 22, 2025, https://www.scribd.com/document/109451375/NORSOK-DESIGN-PRINCIPLES-CODING-SYSTEM
  26. NORSOK - downloads - InterDam Downloads, accessed on June 22, 2025, https://downloads.interdam.com/files/E-Book%20InterDam-NORSOK%20Standards%20and%20requirements.pdf
  27. NORSOK R-005 has been revised, and is now published in Norwegian - Standard Norge, accessed on June 22, 2025, https://standard.no/en/news/norsok-r-005-has-been-revised-and-is-now-published-in-norwegian/
  28. NORSOK Standards Explained - Onix, accessed on June 22, 2025, https://onix.com/blog/the-norsok-standards-r-002-r-003-r-005-explained
  29. NORSOK STANDARD R-005 Safe use of lifting and transport equipment in onshore petroleum plants - American Crane School, accessed on June 22, 2025, https://americancraneschool.com/wp-content/uploads/2025/05/Norway_Crane_Regulations_NORSOK-Standard-R005-Safe-Use-Of-Lifting-And-Transport-Equipment-In-Onshore-Petroleum-Plants.pdf
  30. NORSOK STANDARD_L-005.docx - SlideShare, accessed on June 22, 2025, https://www.slideshare.net/KCMSpecialSteel/norsok-standardl005docx
  31. Norsok Standard I-005u3 1 | PDF | System - Scribd, accessed on June 22, 2025, https://www.scribd.com/document/177035978/Norsok-Standard-I-005u3-1
  32. norsok z-008:2024 - Standard Norge, accessed on June 22, 2025, https://online.standard.no/norsok-z-008-2024
  33. NORSOK Z-008 Risk Based Maintenance & Consequence Classification | PDF - Scribd, accessed on June 22, 2025, https://www.scribd.com/document/525977895/NORSOK-Z-008-Risk-Based-Maintenance-Consequence-Classification
  34. The Implementation of Norsok Z-008 for Equipment Criticality Analysis of Gas Central Processing Plant - Iptek ITS, accessed on June 22, 2025, https://iptek.its.ac.id/index.php/ijmeir/article/download/4863/3672
  35. The Implementation of Norsok Z-008 for Equipment Criticality Analysis of Gas Central Processing Plant | Priyanta | International Journal of Marine Engineering Innovation and Research - Iptek ITS, accessed on June 22, 2025, https://iptek.its.ac.id/index.php/ijmeir/article/view/4863/0
  36. PMS RCM (Planned maintenance system - reliability centred) - DNV, accessed on June 22, 2025, https://www.dnv.com/services/pms-rcm-planned-maintenance-system-reliability-centred--12199/
  37. The Implementation of Norsok Z-008 for Equipment Criticality Analysis of Gas Central Processing Plant - ResearchGate, accessed on June 22, 2025, https://www.researchgate.net/publication/334175803_The_Implementation_of_Norsok_Z-008_for_Equipment_Criticality_Analysis_of_Gas_Central_Processing_Plant
  38. OPC 10000-12 - UA Part 12: Discovery and Global Services, Version 1.05.02, 2022-11-01, OPC Foundation (ua-part-12-discovery-and-global-services-1.05.02-2022-11-01.pdf)
  39. UA Part 1: Overview and Concepts - 6.4 Redundancy, accessed on June 22, 2025, https://reference.opcfoundation.org/Core/Part1/v104/docs/6.4
  40. UA Part 4: Services - 6.6.2 Server Redundancy, accessed on June 22, 2025, https://reference.opcfoundation.org/Core/Part4/v104/docs/6.6.2
  41. How to Simplify OPC Server Redundancy with Cogent DataHub - Blog, accessed on June 22, 2025, https://blog.softwaretoolbox.com/how-to-simplify-opc-server-redundancy-with-cogent-datahub
  42. OPC Redundancy - OPC Expert, accessed on June 22, 2025, https://opcexpert.com/opc-redundancy/
  43. OPC UA Redundancy - Apis Foundation 8 - Prediktor Docs, accessed on June 22, 2025, https://docs.prediktor.com/docs/foundation8/APIS_Hive/UAServerComm/OPC_UA_Redundancy.html
  44. 8 Cloud Disaster Recovery Solutions to Know in 2025 - N2W Software, accessed on June 22, 2025, https://n2ws.com/blog/cloud-disaster-recovery-solutions
  45. Azure storage disaster recovery planning and failover - Microsoft Learn, accessed on June 22, 2025, https://learn.microsoft.com/en-us/azure/storage/common/storage-disaster-recovery-guidance
  46. Disaster Recovery in Azure: Architecture and Best Practices - Cloudian, accessed on June 22, 2025, https://cloudian.com/guides/disaster-recovery/disaster-recovery-in-azure-architecture-and-best-practices/
  47. Disaster Recovery on AWS: 4 Strategies and How to Deploy Them - Cloudian, accessed on June 22, 2025, https://cloudian.com/guides/disaster-recovery/disaster-recovery-on-aws-4-strategies-and-how-to-deploy-them/
  48. Modernizing OT Middleware: The Shift to Open Industrial IoT Architectures with Data Streaming - Kai Waehner, accessed on June 22, 2025, https://www.kai-waehner.de/blog/2025/03/17/modernizing-ot-middleware-the-shift-to-open-industrial-iot-architectures-with-data-streaming/
  49. Secure manufacturing data - Security Best Practices for Manufacturing OT, accessed on June 22, 2025, https://docs.aws.amazon.com/whitepapers/latest/security-best-practices-for-manufacturing-ot/secure-manufacturing-data.html
  50. Cloud Disaster Recovery for DevOps Team: Best Practices - ControlMonkey, accessed on June 22, 2025, https://controlmonkey.io/blog/cloud-disaster-recovery/
  51. UA Part 14: PubSub - 5.4.3 Security Key Service, accessed on June 22, 2025, https://reference.opcfoundation.org/Core/Part14/v104/docs/5.4.3
  52. UA Part 14: PubSub - 8 PubSub Security Key Service Model, accessed on June 22, 2025, https://reference.opcfoundation.org/Core/Part14/v104/docs/8
  53. (PDF) Strategies for Data Backup and Disaster Recovery in Google Cloud Platform, accessed on June 22, 2025, https://www.researchgate.net/publication/390663117_Strategies_for_Data_Backup_and_Disaster_Recovery_in_Google_Cloud_Platform
  54. Backup and Disaster Recovery | Microsoft Azure, accessed on June 22, 2025, https://azure.microsoft.com/en-us/solutions/backup-and-disaster-recovery
  55. Back up and restore OT network sensors from the sensor console - Microsoft Defender for IoT, accessed on June 22, 2025, https://learn.microsoft.com/en-us/azure/defender-for-iot/organizations/back-up-restore-sensor
  56. Microsoft Azure Cloud Backup & Recovery Solution for MSP and Business - Acronis, accessed on June 22, 2025, https://www.acronis.com/en-eu/solutions/backup/cloud/azure/
  57. Disaster recovery options in the cloud - AWS Documentation, accessed on June 22, 2025, https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
  58. Disaster Recovery on AWS Cloud.pdf - SlideShare, accessed on June 22, 2025, https://www.slideshare.net/slideshow/disaster-recovery-on-aws-cloudpdf/258036814
  59. AWS: Elastic Disaster Recovery Review - Enterprise Storage Forum, accessed on June 22, 2025, https://www.enterprisestorageforum.com/backup/aws-elastic-disaster-recovery-review/
  60. Data Replication Tools: Securing Your Data Against Loss - Acronis, accessed on June 22, 2025, https://www.acronis.com/en-sg/blog/posts/the-importance-of-data-replication-tools/