Sensor fusion: Delivering a paradigm shift in the industrial Internet of Things

Pedro Vale, Chief Technology Officer at LYTT, August 18, 2022

Sensor fusion not only transforms how companies interact with their sensor data; it also represents a paradigm shift in digital acceleration across sectors. This ground-breaking technology blends multiple data streams of different types to deliver more accurate and varied insights, addressing a greater set of operational challenges than ever before. At the heart of this digital gear change are industrial Internet of Things (IoT) platform providers offering real-time sensor fusion, which can drive holistic operational decisions.

However, not all sensor fusion is created equal. Some industrial IoT platform providers can only incorporate simple sensor types due to a lack of processing capability and insufficiently robust algorithms. Intelligently extracting data from complex sensor types – such as Distributed Acoustic Sensing (DAS) and Distributed Temperature Sensing (DTS) – and transforming it into contextualized insights has proven exceedingly difficult.

As part of my vision for LYTT’s platform evolution, my team and I have focused on building resilient algorithms and establishing sensor fusion capability within our cloud analytics engine. This enables us to transform all types of sensor data, including DTS and DAS, into combined and actionable insights for customers across oil & gas (O&G), water, and carbon capture storage (CCS).

In the sections below, I’ll go into greater depth about the technical challenges of sensor fusion, describing the changes made to our processes and digital infrastructure that are driving meaningful, real-time insights for customers. I will also discuss how enhanced Machine Learning (ML) models hold the key to future innovation.

The challenge: Combining simple and complex sensor types systematically

Combining multiple data types, such as temperature and acoustic, means additional data can be supplied to processing algorithms, allowing them to perform more accurate analyses. The enhanced insight provided by the combined data can unlock new ways to answer critical operational questions.

Previously, one of our greatest barriers to sensor fusion had been desynchronized data streams. In ‘live’ systems, there are often delays in data acquisition. For example, a model might require the last 30 seconds of DAS data plus readings from a pressure sensor as inputs. The fiber-optic data may arrive first, but the ML model cannot be engaged until the pressure data is also available. LYTT overcame this barrier by developing enhanced processing capabilities that allow disparate data types to be merged methodically into an orderly flow.
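To make the idea concrete, here is a minimal Python sketch of the kind of alignment logic described above. It is purely illustrative rather than LYTT’s actual implementation: the StreamAligner class, the Reading structure, and the 30-second window are assumptions chosen for the example.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Reading:
    timestamp: float  # seconds since epoch
    value: object     # e.g. a DAS window (array) or a single pressure sample


class StreamAligner:
    """Buffers two desynchronized streams and releases a fused sample
    only once both sides have data covering the same time window."""

    def __init__(self, window_seconds: float = 30.0):
        self.window_seconds = window_seconds
        self.das_buffer = deque()
        self.pressure_buffer = deque()

    def push_das(self, reading: Reading):
        self.das_buffer.append(reading)

    def push_pressure(self, reading: Reading):
        self.pressure_buffer.append(reading)

    def pop_fused(self):
        """Return (das_window, pressure_reading) for the oldest complete
        window, or None if one of the streams is still lagging behind."""
        if not self.das_buffer or not self.pressure_buffer:
            return None
        window_end = self.das_buffer[0].timestamp + self.window_seconds
        # Wait until both streams have progressed past the window end.
        if self.das_buffer[-1].timestamp < window_end:
            return None
        if self.pressure_buffer[-1].timestamp < window_end:
            return None
        das_window = [r for r in self.das_buffer if r.timestamp <= window_end]
        pressure = min(self.pressure_buffer,
                       key=lambda r: abs(r.timestamp - window_end))
        # Drop consumed DAS readings so the next window starts cleanly.
        while self.das_buffer and self.das_buffer[0].timestamp <= window_end:
            self.das_buffer.popleft()
        return das_window, pressure
```

In this pattern, the model is only invoked once pop_fused returns a complete pair, which is one simple way to avoid acting on a half-arrived picture of the asset.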

The solution: Building a next-generation digital architecture to expand data processing capability

Cracking sensor fusion in its truest form required a targeted upgrade of the LYTT LYVE platform architecture. Originally, LYTT’s data processing pipelines were executed via bespoke software services, which meant that any new functionality had to be constructed from scratch.

By transitioning the platform to an enhanced engine, LYTT can not only offer more functionality, such as the ability to acquire data from multiple sources and operate across several databases, but can also provide a field-tested execution environment for our data processing pipelines that builds in performance, reliability, and flexibility.

As a result of these enhancements, LYTT LYVE now has the capability to merge desynchronized variables across datasets and instantaneously funnel the relevant information for further processing and analysis. This allows for a single, consistent operational view, keeping insights concise and connected.
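As an illustration only, and not a description of the LYVE engine itself, the snippet below sketches one common way to merge desynchronized variables: an as-of join that pairs each acoustic feature with the most recent pressure reading within a tolerance. The column names and values are invented for the example.

```python
import pandas as pd

# Illustrative only: two desynchronized feeds with different sampling rates.
das_features = pd.DataFrame({
    "timestamp": pd.to_datetime(["2022-08-18 10:00:00",
                                 "2022-08-18 10:00:30",
                                 "2022-08-18 10:01:00"]),
    "acoustic_energy": [0.42, 0.55, 0.61],
})
pressure = pd.DataFrame({
    "timestamp": pd.to_datetime(["2022-08-18 09:59:58",
                                 "2022-08-18 10:00:29",
                                 "2022-08-18 10:00:58"]),
    "pressure_bar": [101.2, 101.5, 101.9],
})

# As-of join: pair each DAS feature row with the most recent pressure
# reading at or before it, within a 5-second tolerance.
fused = pd.merge_asof(
    das_features.sort_values("timestamp"),
    pressure.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("5s"),
)
print(fused)
```

The output is a single table in which each row carries both sensing perspectives for the same moment in time, which is the kind of consistent operational view described above.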

The opportunity: Empowering innovation with enhanced Machine Learning models

Machine Learning has always been a core enabler for LYTT. It creates enormous potential to process and analyze huge volumes of data of all types.

Sensor fusion unlocks the opportunity to develop much more contextualized ML models, which can interpret several real-time perspectives of the assets being monitored. This empowers ML models to generate much richer insights about the assets, such as enhanced understanding of multi-phase flow measurements across a wider geographical range and timescale. The process is similar to how human brains combine the senses of touch, taste, and smell to interpret the full experience of eating.
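As a toy illustration of this idea, and not a description of LYTT’s actual models, the sketch below trains a single classifier on fused feature vectors that combine a hypothetical DAS-derived acoustic feature, a DTS temperature reading, and a pressure reading, so one model sees all three perspectives at once. All feature names, values, and labels are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical fused feature vectors: each row combines a DAS-derived
# acoustic feature, a DTS temperature reading and a pressure reading
# taken over the same time window.
X = np.array([
    [0.42, 85.1, 101.2],   # [acoustic_energy, temperature_C, pressure_bar]
    [0.55, 85.4, 101.5],
    [0.95, 90.2, 98.7],
    [0.61, 85.9, 101.9],
])
y = np.array([0, 0, 1, 0])  # illustrative labels: 1 = anomalous flow event

# A single model now sees all three sensing perspectives at once,
# rather than one model per sensor type.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

new_window = np.array([[0.90, 89.8, 99.1]])
# Likely predicts 1 here, given its similarity to the anomalous row.
print(model.predict(new_window))
```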

ML models are inherently agile and flexible, so there are endless opportunities to refine them. LYTT’s Data Scientists are continually adapting and evolving our ML workflow to accommodate additional data sources and respond to new challenges as they arise. Sensor fusion advances this process by unlocking the opportunity for multiple input combinations and streamlining the development and deployment of advanced ML models.

The future of sensor fusion

The unlocking of sensor fusion has allowed LYTT to create an evolved IoT platform that provides more contextualized and accurate insights across an even wider range of use cases. The ability to integrate additional data sources and to apply strategies that account for out-of-order or late data is pivotal to achieving reliable and dynamic insights that inform operational decision-making across sectors.
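One widely used strategy for out-of-order or late data is a watermark: hold readings briefly, release them in event-time order once the watermark passes them, and flag anything that arrives after it. The sketch below is a minimal, assumed illustration of that pattern, not LYTT’s implementation; the WatermarkBuffer class and its parameters are made up for the example.

```python
class WatermarkBuffer:
    """Minimal event-time buffer: readings are held until a watermark
    (the latest timestamp seen minus an allowed lateness) passes them,
    then released in timestamp order; anything arriving after the
    watermark is flagged as late instead of being silently dropped."""

    def __init__(self, allowed_lateness_s: float = 10.0):
        self.allowed_lateness_s = allowed_lateness_s
        self.pending = []            # list of (timestamp, value) tuples
        self.max_seen = float("-inf")

    def push(self, timestamp: float, value):
        self.max_seen = max(self.max_seen, timestamp)
        watermark = self.max_seen - self.allowed_lateness_s
        if timestamp < watermark:
            # Too late for normal processing; surface it for special handling.
            return [("late", (timestamp, value))]
        self.pending.append((timestamp, value))
        # Release everything the watermark has passed, in event-time order.
        ready = sorted((item for item in self.pending if item[0] <= watermark),
                       key=lambda item: item[0])
        self.pending = [item for item in self.pending if item[0] > watermark]
        return [("on_time", item) for item in ready]


buffer = WatermarkBuffer(allowed_lateness_s=10.0)
buffer.push(1.0, "a")
buffer.push(20.0, "b")        # advances the watermark to 10, releasing "a"
print(buffer.push(5.0, "c"))  # -> [('late', (5.0, 'c'))]
```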

Sensor fusion offers boundless opportunities, and at LYTT we are always looking ahead to the next step. For us, this is centered around a ‘Plug and Play’ approach, which enables our customers to consume merged datasets while allowing them to deploy their own ML models onto LYTT’s enterprise platform.
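To make that idea tangible, here is one hypothetical shape such a ‘Plug and Play’ interface could take: a small contract that a customer-supplied model implements so the platform can hand it fused records and collect its insights. Both the FusedModel contract and the toy ThresholdFlowModel below are assumptions for illustration, not part of LYTT’s platform.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class FusedModel(ABC):
    """Hypothetical contract a customer-supplied model could implement so
    the platform can feed it fused sensor records and collect insights."""

    @abstractmethod
    def predict(self, fused_record: Dict[str, Any]) -> Dict[str, Any]:
        """Map one fused record (e.g. DAS features + DTS + pressure)
        to an insight payload."""


class ThresholdFlowModel(FusedModel):
    """Toy example: flag a possible flow anomaly when acoustic energy
    is high while pressure drops below a set point."""

    def __init__(self, energy_threshold: float = 0.8,
                 pressure_floor: float = 100.0):
        self.energy_threshold = energy_threshold
        self.pressure_floor = pressure_floor

    def predict(self, fused_record):
        anomaly = (fused_record["acoustic_energy"] > self.energy_threshold
                   and fused_record["pressure_bar"] < self.pressure_floor)
        return {"flow_anomaly": anomaly}


# The platform side would only need to know about the FusedModel interface:
model = ThresholdFlowModel()
print(model.predict({"acoustic_energy": 0.9, "pressure_bar": 98.7}))
# -> {'flow_anomaly': True}
```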

LYTT exists to empower our customers to solve their operational challenges through the implementation of innovative cloud solutions. If you would like to join our team, check out our careers page and see how you can get involved in pushing technological boundaries.
