1.1 From Process Data to Business Value
The fundamental objective of bridging the OT/IT divide is to transform raw, real-time process data into a strategic asset that drives tangible business value. This initiative is not merely a technological upgrade; it is a strategic enabler that facilitates a shift from reactive and preventative operational models to proactive and predictive ones. By establishing a seamless, secure, and contextualised flow of data from offshore assets to cloud analytics platforms, an organisation can unlock significant improvements across its entire value chain.
The key value drivers for this project are:
- Production Optimisation: Real-time visibility into process parameters allows for continuous analysis and optimisation, maximising throughput and efficiency. Advanced models can identify subtle performance deviations and recommend adjustments to maintain optimal production levels.
- Predictive Maintenance: Moving beyond scheduled maintenance, the data pipeline enables the use of machine learning models to predict equipment failures before they occur. This reduces costly unplanned downtime, minimises unnecessary maintenance activities, and extends asset life. This aligns with the strategic goals of operators like Aker BP, which explicitly targets "less maintenance" as a key outcome of its digital twin initiatives for assets like Yggdrasil.1
- Enhanced Safety and Barrier Management: The ability to detect process anomalies and equipment degradation in real-time provides an early warning system for potential safety incidents. This directly supports the core mandate of the Petroleum Safety Authority (PSA) to ensure and improve safety levels through the adoption of new technology.2 Analytics can be used to monitor the health and performance of safety-critical barriers, providing a new layer of assurance.
- Reduced Emissions: Process optimisation directly correlates with energy efficiency. By fine-tuning operations based on real-time data analytics, an organisation can significantly reduce its energy consumption and, consequently, its carbon footprint. This is a primary strategic goal for major Norwegian operators like Equinor and Aker BP.4
The successful implementation of this strategy is demonstrated by leading operators on the Norwegian Continental Shelf. Equinor's OMNIA platform, built on Microsoft Azure, serves as a powerful blueprint. Their core principle is the creation of a common data platform to break down historical data silos, emphasising that "data without any context is quite useless".4 This highlights the absolute necessity of the data modelling phase of this project, ensuring that raw tag values are enriched with semantic meaning before they reach the cloud. Equinor's use of OPC UA as a standard for contextualisation and their aggregation of 19 OPC UA servers on the Johan Sverdrup field into a single gateway validates the architecture proposed in this guide.4
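To make the contextualisation requirement concrete, the following Python sketch shows how an edge gateway might enrich a raw tag sample with semantic metadata before publishing it to the cloud. The tag name, engineering unit, and asset path are illustrative assumptions rather than any operator's actual data model, and the tag dictionary stands in for whatever metadata registry the Data Steward maintains.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RawSample:
    """A raw tag value as read from an OPC UA server at the edge."""
    tag: str                # plant tag name or OPC UA NodeId (illustrative)
    value: float
    source_timestamp: str

@dataclass
class ContextualisedSample(RawSample):
    """The same sample enriched with semantic context before it leaves the edge."""
    unit: str               # engineering unit
    description: str        # human-readable meaning of the tag
    asset_path: str         # position in the asset hierarchy
    quality: str            # data quality flag propagated from the source

# Hypothetical tag dictionary maintained by the Data Steward (see Section 1.3.1).
TAG_CONTEXT = {
    "20PT1234": {
        "unit": "barg",
        "description": "1st stage separator inlet pressure",
        "asset_path": "Field/Platform-A/Separation/Separator-20VA001",
    },
}

def contextualise(raw: RawSample, quality: str = "Good") -> ContextualisedSample:
    """Attach semantic metadata to a raw sample; reject unknown tags at the edge."""
    meta = TAG_CONTEXT.get(raw.tag)
    if meta is None:
        raise KeyError(f"Tag {raw.tag!r} has no registered context - hold at the edge")
    return ContextualisedSample(**asdict(raw), **meta, quality=quality)

if __name__ == "__main__":
    raw = RawSample(
        tag="20PT1234",
        value=18.7,
        source_timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # The JSON payload carries its context with the value, so cloud consumers
    # never receive a bare, meaningless tag number.
    print(json.dumps(asdict(contextualise(raw)), indent=2))
```

The design point of the sketch is simply that the payload leaving the edge already carries its meaning; a raw tag number and value are never sent to the cloud on their own.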
Similarly, Aker BP's digital twin initiatives for the Yggdrasil and Ærfugl fields, developed in partnership with Cognite and Aize, showcase the end-state application this data pipeline will empower. Their strategy involves creating a "single source of truth" by integrating static engineering data (such as 3D models and P&IDs) with the live, real-time OT data stream.5 This living digital representation of the physical asset enables advanced visualisation, collaboration, and data-driven decision-making, fulfilling the ambition of creating a safer, more efficient, and lower-emission operation.1 The explicit use of DNV-RP-A204 in the Yggdrasil project further validates the adoption of the assurance frameworks outlined in this guide as industry best practice.1
1.2 Navigating the Regulatory and Standards Landscape
Executing a project of this nature within the Norwegian offshore sector requires strict adherence to a multi-layered framework of regulations and standards. This framework is not an obstacle but a guide to ensuring the final solution is safe, reliable, and robust. The project plan must be built upon this foundation to ensure compliance and quality from its inception.
- Petroleum Safety Authority (PSA) Norway: The PSA's regulatory regime is performance-based and technology-neutral, meaning it does not prescribe specific technologies but sets functional requirements for safety. A key principle is that any digitalisation initiative must contribute to improving safety, and the operator must take full "ownership of risk" associated with the new technology.3 For this project, this means demonstrating through a rigorous risk management process how the OT/IT data pipeline enhances safety and how potential new risks (e.g., cybersecurity, data integrity) are controlled. The PSA has a strong focus on ICT safety and expects companies to have robust processes for qualifying and adopting new digital solutions.2
- NORSOK Standards: The NORSOK standards represent the collective best practices of the Norwegian petroleum industry and are a cornerstone of offshore engineering. While no single standard governs the entire OT/IT data pipeline, several are relevant to the source systems. Standards such as NORSOK I-002 (Safety and Automation Systems), I-005 (System Control Diagrams), and Z-008 (Risk-based maintenance) define the design, operation, and safety context of the OT environment from which the data originates.17 The project's data acquisition strategy must respect the integrity and performance of these safety and control systems, ensuring that data extraction is non-intrusive and does not compromise their primary functions.
- DNV Recommended Practices: DNV provides the critical frameworks for assurance and quality in the digital domain.
- DNV-RP-A204 (Assurance of digital twins): This document provides a comprehensive framework for building "digital trust" in complex systems like the one proposed. It outlines auditable requirements for governance, data quality, computational model qualification, and operational assurance.38 Its adoption by Aker BP for the Yggdrasil project confirms its status as a key industry guideline.1
- DNVGL-RP-0497 (Data quality assessment): This practice provides the methodology for defining, measuring, and managing data quality across syntactic, semantic, and pragmatic dimensions; the sketch after this list illustrates how these dimensions can translate into concrete checks. It is the foundation for the project's data quality assurance programme.38
- ISO Standards: The International Organisation for Standardisation provides the overarching frameworks for quality and data management.
- ISO 9000/9001: These standards define the principles of a Quality Management System (QMS), including the process approach, risk-based thinking, and continual improvement, which will govern the management of this project.38
- ISO 8000: Referenced within the DNV practices, this standard provides the formal definitions for data quality, underpinning the entire data quality framework.38
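As a rough illustration of how the three DNVGL-RP-0497 dimensions referenced above might translate into executable checks, the Python sketch below validates a single measurement syntactically (structure and types), semantically (physical plausibility against an assumed instrument range), and pragmatically (freshness for real-time use). The tag, range limits, and latency budget are invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Illustrative checks mapping one measurement onto the three DNVGL-RP-0497
# data quality dimensions. Field names and limits are assumptions.

def syntactic_check(sample: dict) -> bool:
    """Syntactic: does the record have the expected structure and types?"""
    return (
        isinstance(sample.get("tag"), str)
        and isinstance(sample.get("value"), (int, float))
        and isinstance(sample.get("timestamp"), datetime)
    )

def semantic_check(sample: dict, low: float = 0.0, high: float = 250.0) -> bool:
    """Semantic: is the value physically plausible, e.g. within the instrument's calibrated range?"""
    return low <= sample["value"] <= high

def pragmatic_check(sample: dict, max_age: timedelta = timedelta(seconds=30)) -> bool:
    """Pragmatic: is the data fresh enough to be fit for its intended real-time use?"""
    return datetime.now(timezone.utc) - sample["timestamp"] <= max_age

sample = {
    "tag": "20PT1234",
    "value": 18.7,
    "timestamp": datetime.now(timezone.utc) - timedelta(seconds=5),
}
print({
    "syntactic": syntactic_check(sample),
    "semantic": semantic_check(sample),
    "pragmatic": pragmatic_check(sample),
})
```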
The combined regulatory landscape, encompassing directives from the PSA, recommended practices from DNV, and quality frameworks from ISO, establishes a stringent and non-negotiable requirement for a formal, auditable governance structure. This is not merely a matter of best practice but a foundational component for demonstrating risk ownership and maintaining a license to operate in the Norwegian digital offshore environment.
1.3 Establishing a Robust Governance Framework
A robust governance framework is the cornerstone of this digital strategy, ensuring that data is managed as a critical asset throughout its lifecycle. It provides the structure for accountability, quality control, and security. The project SHALL implement a formal governance model based on the requirements of DNV-RP-A204, Section 10.3.38
1.3.1 Data Governance Policies and Roles
The project charter must include the establishment of formal data governance policies and procedures dedicated to managing the data pipeline.38 Central to this is the definition of clear roles and responsibilities, which bridges the organisational gap between OT and IT:
- Data Owner: This role is accountable for a specific data domain (e.g., data from a specific process unit or asset). This accountability must reside within the OT/Operations department, as the Data Owner is ultimately responsible for the physical asset and the operational decisions made based on its data. They are accountable for the data's fitness for purpose.
- Data Steward: Reporting to the Data Owner, the Data Steward is responsible for the day-to-day management of the data. This includes defining data quality rules, establishing the semantic context of data points (e.g., what a specific tag represents, its engineering units), and managing metadata. This is a crucial, hands-on role that requires deep domain expertise.
- Data Custodian: This is a traditional IT role responsible for the technical management of the data infrastructure, such as the cloud storage, databases, and transport mechanisms. They ensure the infrastructure is secure, available, and performs as required, but they are not accountable for the data content itself.
- DataOps Unit: This is a cross-functional team, comprising members from OT, IT, and analytics, responsible for the end-to-end operational management of the data pipeline. Their duties include monitoring data flow and quality, responding to incidents, and managing the deployment of new data sources or analytics applications.38
1.3.2 Formalised Procedures
The governance plan must formalise procedures to manage the entire data lifecycle, as required by DNV-RP-A204 (ID 10.3-4).38 These procedures will form the core of the operational run books for the data pipeline:
- Dataset Configuration: A standardised process for onboarding new data sources. This includes how to control and verify that a new dataset is properly configured in the data platform, from the edge publisher to the cloud consumer.
- Data Quality Assessment: A continuous process for monitoring, reporting, and remediating data quality issues. This involves establishing data quality dashboards with notification functionality to alert the DataOps unit when quality metrics for a dataset drop below predefined thresholds; a minimal sketch of this check follows this list.38
- Application Integration: A controlled process for integrating new cloud services or analytics applications. This ensures that new consumers are correctly integrated without compromising the data quality or security of the platform.
- Verification and Validation (V&V): A formal V&V plan to test and approve any changes to the data sources, pipeline components, or consuming applications before they are put into operation. This prevents unintended consequences and maintains the integrity of the production environment.
- Cybersecurity: The data management procedures must include specifications for cybersecurity at every stage of implementation.38
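The data quality assessment procedure above can be backed by a simple threshold-and-notification mechanism. The sketch below is a minimal illustration: the dataset names, metric names, and threshold values are invented, and in practice the notify callback would post to the DataOps unit's alerting channel rather than print to the console.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityThreshold:
    dataset: str
    metric: str        # e.g. "completeness" or "timeliness" (illustrative)
    minimum: float     # lowest acceptable value, expressed as a fraction

def evaluate(measured: dict[str, float],
             thresholds: list[QualityThreshold],
             notify: Callable[[str], None]) -> None:
    """Compare measured quality metrics against thresholds and raise notifications."""
    for t in thresholds:
        value = measured.get(f"{t.dataset}/{t.metric}")
        if value is not None and value < t.minimum:
            notify(
                f"Data quality breach: {t.dataset} {t.metric}="
                f"{value:.2%} (minimum {t.minimum:.2%})"
            )

if __name__ == "__main__":
    thresholds = [
        QualityThreshold("separator-train-A", "completeness", 0.98),
        QualityThreshold("separator-train-A", "timeliness", 0.95),
    ]
    # Metrics as they might be computed by the pipeline's monitoring job.
    measured = {
        "separator-train-A/completeness": 0.991,
        "separator-train-A/timeliness": 0.912,   # below threshold -> alert
    }
    evaluate(measured, thresholds, notify=print)
```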
1.4 Integrating Quality Management and Assurance
To ensure the project delivers a solution that is fit for purpose and meets all stakeholder requirements, a formal Quality Management System (QMS) must be integrated into the project management methodology. This QMS will be based on the principles of the ISO 9000 family of standards.38
The project will be managed according to the core principles of ISO 9001:2015, which are highly synergistic with modern project management practices.38
- Process Approach: The entire project lifecycle, from planning and design to implementation and operation, will be viewed as a set of interrelated processes. Each process will have defined inputs, outputs, controls, and resources, allowing for systematic management and improvement.
- Risk-Based Thinking: The project will proactively identify, analyse, and mitigate risks at every stage. This is not a one-time activity at the project's start but an ongoing process integrated into all decision-making. This aligns with the risk management frameworks of both the PSA and DNV.
- Continual Improvement (PDCA): The Plan-Do-Check-Act cycle will be applied to all project activities. This iterative approach fosters organisational learning, allowing the project team to refine processes, improve outcomes, and adapt to changing requirements throughout the project's duration.38
A critical aspect of the QMS is treating documentation as a key quality deliverable. All project artefacts, from requirement specifications and architectural designs to test plans, V&V reports, and operational procedures, will be formally controlled documents, subject to versioning, review, and approval cycles. This approach, which aligns with the documentation requirements in DNV-RP-A204, Section 12, ensures full traceability from requirements to implementation and provides the body of evidence necessary for final system assurance and regulatory acceptance.38