Among the significant cost-saving measures sparked by the 2014 oil price downturn has been the industry’s increased application of digital technologies. Real-time data, condition monitoring, and the Industrial Internet of Things (IIoT) have helped cut costs, but the volumes of data to be processed and interpreted go far beyond what human examination and interpretation can handle. Energy Northern Perspective spoke with Jane Ren, CEO and a founder of Atomiton, to find out how her company’s work with artificial intelligence offers a digital decision-making solution that saves time and ultimately reduces the cost of doing business.
A medical doctor by training, Jane Ren has experience stretching from Sutter Health and Intel Digital Health to Cisco and GE Global Software. That background contributed to the development of Atomiton’s Thing Query Language – TQL. The TQL system drives the Atomiton Stack, which gives developers a tool to bring together data from a wide range of sources, including sensors, cameras and equipment, to develop intelligent IIoT applications that employ artificial intelligence.
When asked about how the oil and gas industry has responded to Atomiton’s offerings, Ren explains, “We’ve seen the industry responding with a lot of enthusiasm. There was quite a bit of introspection after the 2014 oil price crash, and the industry realised that just discovering more oil was not going to give them more margin, so the crisis pushed them to do something. But that’s not to say that they knew what to do and how to do it.”
Among the industry efforts to regroup during the oil price downturn has been digitisation. And Atomiton has been at work to create a programming environment that makes it possible for operators to deal with the overwhelming amounts of data collected. “Machine intelligence, which people often call artificial intelligence, is the ability of computers to look at statistics, patterns and associations and be able to make judgments and conclusions, very often more accurately than people can,” explains Ren.
She continues, “In oil and gas we can use different categories of artificial intelligence. One is machine learning, which means feeding multiple sources of sampled data and giving the computer hours of training to detect certain conditions. But other types can be combined, such as images and optics – being able to recognise patterns based on what’s captured by cameras. For example, AI can recognise that a person is present in an unsafe zone without a hard hat.”
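The condition-detection idea Ren describes can be sketched in miniature. The Python example below uses invented sensor readings and a deliberately simple nearest-centroid classifier – not Atomiton’s actual algorithm – to show how labelled samples let a computer learn to flag a fault condition:

```python
# Minimal sketch of training-by-example: feed labelled sensor samples,
# then classify a fresh reading. All names and numbers are illustrative.
from statistics import mean

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns per-label centroids."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {
        label: tuple(mean(f[i] for f in vecs) for i in range(len(vecs[0])))
        for label, vecs in by_label.items()
    }

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Hypothetical training data: (vibration_rms, temperature) labelled by condition.
training = [
    ((0.2, 60.0), "normal"), ((0.3, 62.0), "normal"),
    ((1.8, 95.0), "fault"),  ((2.1, 99.0), "fault"),
]
model = train_centroids(training)
print(classify(model, (1.9, 97.0)))  # a reading near the fault cluster
```

In practice a real system would use far richer models and far more data, but the shape is the same: hours of training on labelled samples, then fast classification of incoming readings.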
Beyond Big Data
Ren emphasises that today’s AI is different from simply gathering Big Data: “When you think about Big Data, it was all about the quantity of data, which made a difference in how the computer could crunch those data to reach better decisions. But Big Data can require as much as three months for the data scientists to arrive at a decision. And that is often too late for taking action in the timeframe of the operations.”
“In drilling operations, for example, the crew makes decisions in a matter of hours or minutes, or even less. The key difference for us and the Atomiton Stack technology is the speed of making decisions from data that is freshly generated – what we call fresh data – within minutes or even seconds. Quantity is important, but speed is absolutely essential for the industry to make use of the data,” Ren explains.
Cleaning the data
Atomiton software is designed to help bring order to what Ren calls “dirty data”, a label that fits much of the overwhelming information gathered by sensors, cameras and other systems. She explains, “The raw data generated by these operations we call ‘dirty’ for three reasons. The first is that the data comes from many different vendors, each of which may follow a different format, and the computer can’t always tell whether these different formats actually refer to the same thing. The second is that dirty data is noisy, which means that although a sensor may ‘talk’ five times a second, not every time it ‘speaks’ is there meaningful data. And the third is that dirty data arrives uncorrelated, from different sources.”
“Atomiton provides the ability to process dirty data into clean data within a second, inline. That means the intelligence on top of it is derived inline as well,” Ren adds.
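The three clean-up steps Ren names – unifying formats, removing noise, and correlating sources – can be illustrated with a minimal Python sketch. The vendor payload formats and field names below are invented for illustration and are not Atomiton’s:

```python
# Sketch of the dirty-data pipeline: normalise formats, denoise, correlate.
def normalise(raw):
    """Map differing vendor payloads onto one schema (step 1: formats)."""
    if "temp_c" in raw:                       # hypothetical vendor A
        return {"ts": raw["ts"], "sensor": raw["id"], "value": raw["temp_c"]}
    if "temperature" in raw:                  # hypothetical vendor B
        return {"ts": raw["time"], "sensor": raw["device"],
                "value": raw["temperature"]}
    raise ValueError("unknown vendor format")

def denoise(readings, min_interval=1.0):
    """Keep at most one reading per sensor per interval (step 2: noise)."""
    last_seen, kept = {}, []
    for r in sorted(readings, key=lambda r: r["ts"]):
        if r["ts"] - last_seen.get(r["sensor"], float("-inf")) >= min_interval:
            kept.append(r)
            last_seen[r["sensor"]] = r["ts"]
    return kept

def correlate(readings, window=1.0):
    """Group readings from different sensors into time windows (step 3)."""
    groups = {}
    for r in readings:
        groups.setdefault(int(r["ts"] // window), []).append(r)
    return groups

raw = [
    {"ts": 0.0, "id": "T1", "temp_c": 80.1},
    {"ts": 0.1, "id": "T1", "temp_c": 80.1},              # noisy repeat
    {"time": 0.4, "device": "P7", "temperature": 80.3},   # other vendor
]
clean = denoise([normalise(r) for r in raw])
print(correlate(clean))
```

A production system would do this over streaming data rather than lists, but the inline idea is the same: each reading is standardised, filtered, and time-aligned as it arrives, rather than months later in a batch.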
With equipment from multiple vendors, many of which use different data formats, making sense of all the data in the heterogeneous environment of a drilling rig or refinery can be a challenge. “There is definitely a steep curve in the beginning to integrate them, and this is where software makes a difference, in terms of being able to abstract away the differences in the messages and automatically convert or standardise them,” explains Ren.
“For example, in Atomiton Stack, when standardising to the same format, there is one metric that we call ‘sensor on-boarding’: how much time it takes to add a new sensor to the system. It makes a big difference whether it’s a couple of hours, which is what it takes us, or one or two weeks, because with one or two weeks you have to stop a lot of other things running to be able to do that.”
“It’s not just the initial learning curve, but also the downstream – are you future-proof? In the future, you’re going to change your pumps, you’re going to change your motor, and will that affect your artificial intelligence software? The answer is that it should not – it should be plug-and-play.”
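The plug-and-play property Ren describes is essentially an adapter layer: application logic talks to an abstract interface, and swapping a pump means only registering a new adapter. A minimal Python sketch, with hypothetical vendor payloads and units:

```python
# Application code depends on PumpAdapter, never on a vendor's raw format,
# so replacing equipment does not touch the intelligence layer above it.
from abc import ABC, abstractmethod

class PumpAdapter(ABC):
    @abstractmethod
    def flow_rate(self) -> float:
        """Flow rate in cubic metres per hour, whatever the vendor reports."""

class VendorAPump(PumpAdapter):
    def __init__(self, raw): self.raw = raw
    def flow_rate(self): return self.raw["flow_m3h"]          # already m³/h

class VendorBPump(PumpAdapter):
    def __init__(self, raw): self.raw = raw
    def flow_rate(self): return self.raw["gpm"] * 0.2271      # US gal/min → m³/h

def total_flow(pumps):
    """Application logic: unchanged when pumps are swapped or added."""
    return sum(p.flow_rate() for p in pumps)

pumps = [VendorAPump({"flow_m3h": 10.0}), VendorBPump({"gpm": 44.0})]
print(round(total_flow(pumps), 2))
```

Onboarding a third vendor here means writing one more small adapter class; `total_flow` and everything built on it stay untouched, which is the sense in which the system is future-proof.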
Thing Query Language
At the heart of the Atomiton Stack is TQL – Thing Query Language – reminiscent of the SQL (Structured Query Language) that revolutionised database management. Ren says, “The reason we compare TQL to SQL is that SQL introduced a higher-level language which removed the complexity of all the hardware and computer-memory interactions. That made it much easier for many more people to develop applications based on SQL. The general goal of TQL is the same, except that it hides the details and complexities of interacting with machines, equipment and sensors.”
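To make the SQL analogy concrete, here is an illustrative query-style interface in Python – not actual TQL syntax – in which the caller declares what thing data it wants while the device-level details stay hidden behind the abstraction:

```python
# A toy declarative query over "things": filters are declared, not wired
# to any device protocol. Field names and data are invented for illustration.
class ThingQuery:
    def __init__(self, things):
        self._things = things          # e.g. sensors, cameras, equipment
        self._filters = []

    def where(self, predicate):
        """Add a declarative filter; returns self for chaining."""
        self._filters.append(predicate)
        return self

    def select(self, field):
        """Return the requested field from every thing passing all filters."""
        return [t[field] for t in self._things
                if all(f(t) for f in self._filters)]

things = [
    {"type": "temp_sensor", "zone": "rig-1", "value": 81.2},
    {"type": "temp_sensor", "zone": "rig-2", "value": 79.8},
    {"type": "camera",      "zone": "rig-1", "value": None},
]
hot = (ThingQuery(things)
       .where(lambda t: t["type"] == "temp_sensor")
       .where(lambda t: t["value"] > 80)
       .select("zone"))
print(hot)  # zones with a temperature reading above 80
```

The point of the analogy is the level of abstraction: just as SQL developers never manage disk pages, a thing-query developer would never hand-code each sensor’s wire protocol.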
“So it faces more towards the industry, so that IT departments’ developers have better tools to develop these applications and AI use cases much more rapidly. And today we intend to offer TQL as an open standard, which means that others can also contribute to its specifications. We will work with developers and universities and train a lot of people in many different countries to accomplish that goal. We think it’s going to benefit the industry.”