If I understand correctly, gross errors are detected when the 'quality of data reconciliation', a summary value given in the output, becomes zero.
Your last example in the video also shows that even a flow with a small k-value can be a gross error. In that case, the tolerance of the value was estimated far too low, forcing the calculation to spread the error over the other flows. I guess a missing flow would give a similar result.
Where could I find more information on how quality of data reconciliation is calculated, and how flows with likely gross errors are selected?
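For reference, this is how I currently picture the calculation, as a rough Python sketch: weighted least-squares reconciliation of the measured flows against the balance equations, followed by a chi-square "global test" on the residuals (the textbook approach, e.g. Narasimhan & Jordache, 2000). The function names, the three-flow example and the idea that the reported quality corresponds to the p-value of that test are my assumptions, not the software's actual code.

import numpy as np
from scipy import stats

def reconcile_and_test(y, A, sigma):
    """y: measured flows; A: balance (incidence) matrix, so A @ x = 0 holds
    for the true flows x; sigma: measurement standard deviations."""
    V = np.diag(sigma ** 2)                      # measurement covariance
    r = A @ y                                    # balance residuals of the raw data
    S = A @ V @ A.T                              # covariance of those residuals
    x_hat = y - V @ A.T @ np.linalg.solve(S, r)  # reconciled (balanced) flows
    gt = float(r @ np.linalg.solve(S, r))        # global test statistic
    dof = np.linalg.matrix_rank(A)               # chi-square degrees of freedom
    p_value = 1.0 - stats.chi2.cdf(gt, dof)      # ~0 when a gross error is likely
    return x_hat, gt, p_value

# One node with the balance F1 - F2 - F3 = 0; the measurements miss it by 15
y = np.array([100.0, 60.0, 55.0])
A = np.array([[1.0, -1.0, -1.0]])
sigma = np.array([2.0, 2.0, 2.0])
x_hat, gt, p = reconcile_and_test(y, A, sigma)
print("reconciled flows:", x_hat)   # [105. 55. 50.]
print("global test:", gt, "p:", p)  # about 18.75, p about 1.5e-05

Note how the single imbalance of 15 units gets spread over all three reconciled flows, which is exactly the "spreading over other flows" behaviour I mentioned above.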
In my example, I get 8 flows with likely gross errors.
If I understand correctly, it might be that only one or two flows cause the whole error, and not necessarily the flows with the highest k-values.
Is there a systematic procedure to get to the root of the gross error(s)?
Note that a missing flow could never be identified as the cause of the problem, because it is simply not included in the model!
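To illustrate the kind of systematic procedure I have in mind, here is a crude "serial elimination" sweep following the textbook idea: exclude one measured flow at a time and check whether the global test becomes acceptable again. Whether the software does anything like this internally is pure speculation on my part, and as noted above it can never point at a flow that is missing from the model. The example data and helper names are made up.

import numpy as np
from scipy import stats

def global_test(y, A, sigma):
    """Chi-square global test p-value for the balances A @ x = 0."""
    V = np.diag(sigma ** 2)
    r = A @ y
    S = A @ V @ A.T
    gt = float(r @ np.linalg.solve(S, r))
    return 1.0 - stats.chi2.cdf(gt, np.linalg.matrix_rank(A))

def serial_elimination(y, A, sigma):
    """Rank flows by the p-value obtained when each one is effectively removed
    (given a huge standard deviation). The degrees of freedom are not reduced,
    so the p-values are conservative, but the ranking is still informative."""
    ranking = []
    for i in range(len(y)):
        sigma_i = sigma.copy()
        sigma_i[i] = 1e4                 # flow i no longer constrains the fit
        ranking.append((i, global_test(y, A, sigma_i)))
    return sorted(ranking, key=lambda t: -t[1])

# Two nodes: F1 - F2 - F3 = 0 and F3 - F4 - F5 = 0; F3 carries the gross error
y = np.array([100.0, 40.0, 75.0, 35.0, 25.0])   # a consistent F3 would be ~60
A = np.array([[1.0, -1.0, -1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, -1.0, -1.0]])
sigma = np.full(5, 2.0)
for i, p in serial_elimination(y, A, sigma):
    print(f"excluding flow F{i + 1}: p-value = {p:.4f}")
# only excluding F3 restores an acceptable fit, so F3 is the prime suspect

Applied to my case, such a sweep over the 8 suspect flows might show that excluding just one or two of them already restores the quality, which would at least narrow down where to look.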
Here is a selection of books that cover the topics of data reconciliation and gross error detection:
- Chemical Process Structure and Information Flows (Mah, 1990)
- Process Plant Performance (Madron, 1992)
- Material and Energy Balancing in the Process Industries (Veverka, Madron, 1997)
- Data Processing and Reconciliation (Romagnoli, Sanchez, 2000)
- Data Reconciliation and Gross Error Detection (Narasimhan, Jordache, 2000)
- Process Plant Instrumentation (Bagajewicz, 2001)
- Smart Process Plants (Bagajewicz, 2010)