Machine Sensors – Pipeline
The following pipelines provide the complete workflow for equipment monitoring and anomaly detection.
In the first part of the workflow, the data fetched from the shared event is processed in parallel in three ways:
Prediction using the DS Lab Runner component
Detection using the Python component
Storage of raw data in a Mongo collection [Collection Name: adnoc_raw_data]
From the shared event, we fetch the data and send it to the Python component, which reformats the data structure and forwards only the value field to the DS Lab Runner.
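A minimal sketch of what this formatting step might look like, assuming the shared event delivers JSON records that contain a value field (the key name is an assumption, not confirmed by the pipeline):

```python
import json

def format_for_ds_lab_runner(event_message: str) -> dict:
    """Reshape a shared-event record and keep only the value field.

    The incoming key name "value" is an assumption about the record shape;
    the actual pipeline may use a different field name.
    """
    record = json.loads(event_message)
    return {"value": record["value"]}
```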
This component runs the prediction on the sensor data received from the Python component and sends the predictions to the out event.
This component writes the predicted data to a Mongo collection. [Collection Name: og_anamoly_data]
From the shared event, we fetch the data and send it to the Python component. Using the lookup collection already stored in the bi_testing database, the component checks whether the generated data is an anomaly or normal data and sends it to the output event with a flag field named is_alert whose value is “yes” or “no”.
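A hedged sketch of how this lookup-based check could work, assuming the lookup collection stores per-sensor limits in the bi_testing database (the collection name sensor_lookup and the field names tag_id, low_limit, and high_limit are illustrative):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
lookup = client["bi_testing"]["sensor_lookup"]      # lookup collection name is an assumption

def flag_record(record: dict) -> dict:
    """Mark a record as anomaly or normal using the stored lookup limits."""
    limits = lookup.find_one({"tag_id": record["tag_id"]})  # field names are illustrative
    is_anomaly = (
        limits is not None
        and not (limits["low_limit"] <= record["value"] <= limits["high_limit"])
    )
    record["is_alert"] = "yes" if is_anomaly else "no"
    return record
```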
This component writes the raw data generated in the Data Generator pipeline to the Mongo collection. [Collection Name: adnoc_raw_data]
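A minimal sketch of the Mongo write using pymongo (the connection string and database name are placeholders; the collection name comes from the pipeline). The same pattern applies to the og_anamoly_data and adnoc_alert_tag_values writers described elsewhere in this workflow:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")         # placeholder connection string
raw_collection = client["bi_testing"]["adnoc_raw_data"]   # database name is an assumption

def write_raw_record(record: dict) -> None:
    """Persist one raw sensor record as produced by the Data Generator pipeline."""
    raw_collection.insert_one(record)
```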
In the second part of the workflow, the data passes through the Rule Engine component, which detects anomalies and adds a flag column called is_alert, and then through the Rule Splitter, which segregates normal and anomaly data into separate workflows:
The normal data goes through the Python component, which changes the data structure and sends it to the Web Socket producer.
The anomaly data is further divided into three branches and processed in parallel, as explained below:
The anomaly data goes through the Python component, which changes the data structure and sends it to the Web Socket Producer, which in turn sends the data to the dashboard to raise an alert.
The anomaly data triggers the Jira Python component to create a Jira ticket.
The alert data is stored in the Mongo collection [Collection Name: adnoc_alert_tag_values].
This component segregates the anomaly and normal data using the flag field “is_alert”, dividing the workflow into two branches.
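A simple sketch of this segregation logic, assuming each record carries the is_alert flag with values “yes” or “no”:

```python
def split_by_alert(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Segregate records into anomaly and normal branches using the is_alert flag."""
    anomalies = [r for r in records if r.get("is_alert") == "yes"]
    normal = [r for r in records if r.get("is_alert") == "no"]
    return anomalies, normal
```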
This component formats the normal data into the Web Socket structure and sends it to the Web Socket producer.
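A hedged sketch of this reshaping step, assuming the dashboard expects a JSON envelope with a type field and the record as the payload (the envelope keys are assumptions). The anomaly branch described below uses the same pattern with the alert flag set:

```python
import json

def to_websocket_message(record: dict, alert: bool = False) -> str:
    """Wrap a record in the envelope assumed for the Web Socket producer.

    The envelope keys ("type", "payload") are assumptions about the
    dashboard's message format.
    """
    return json.dumps({
        "type": "alert" if alert else "normal",
        "payload": record,
    })
```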
This component sends the normal data to the Web Socket client.
This component formats the anomaly data into the Web Socket structure and sends it to the Web Socket producer.
This component sends the anomaly data to the Web Socket client, from where it is used in a dashboard for the live alert.
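A minimal sketch of pushing a formatted message to the dashboard over a Web Socket, using the third-party websockets package (the endpoint URL is a placeholder):

```python
import asyncio
import websockets

async def push_to_dashboard(message: str) -> None:
    """Send one formatted message to the dashboard's Web Socket endpoint."""
    uri = "ws://dashboard.example.com/alerts"   # placeholder endpoint
    async with websockets.connect(uri) as ws:
        await ws.send(message)

# Example: asyncio.run(push_to_dashboard(to_websocket_message(record, alert=True)))
```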
This component creates a ticket for a specific sensor alert in Jira.
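A hedged sketch of the ticket creation using Jira's REST API through the requests library (the base URL, project key, issue type, credentials, and record field names are placeholders):

```python
import requests
from requests.auth import HTTPBasicAuth

def create_jira_ticket(record: dict) -> str:
    """Open a Jira issue for a sensor alert via the Jira REST API and return its key."""
    payload = {
        "fields": {
            "project": {"key": "SENSOR"},   # placeholder project key
            "summary": f"Anomaly detected on sensor {record.get('tag_id')}",
            "description": f"Reading {record.get('value')} was flagged with is_alert = yes.",
            "issuetype": {"name": "Task"},  # placeholder issue type
        }
    }
    response = requests.post(
        "https://your-domain.atlassian.net/rest/api/2/issue",   # placeholder Jira URL
        json=payload,
        auth=HTTPBasicAuth("user@example.com", "api_token"),    # placeholder credentials
    )
    response.raise_for_status()
    return response.json()["key"]
```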
This component writes the alert data to a Mongo collection. [Collection Name: adnoc_alert_tag_values]