Process Plant Instrumentation Design and Upgrade

Miguel J. Bagajewicz, Ph.D.
School of Chemical Engineering and Materials Science
University of Oklahoma

Boca Raton London New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2000 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed on acid-free paper
Version Date: 20161019
International Standard Book Number-13: 978-1-56676-998-3 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

To all those people from my generation who pursued a country in which it would be worth living. Some of them disappeared in the process.

CONTENTS

Preface
Acknowledgments

Chapter 1. Plant Data Management
    Introduction
    Plant Information and Operations Management
    Model-Based Monitoring
    Quality of Data
    References

Chapter 2. Instrumentation Design Goals
    Introduction
    Measured and Key Variables
    Selection of Monitoring Variables
    Selection of Key Variables in Control
    Selection of Measured Variables for Fault Diagnosis
    Instrumentation Design Goals
    Upgrading of Instrumentation
    Nomenclature
    References

Chapter 3. Instrumentation
    Introduction
    Flow Rate Instrumentation
    Level Measurement
    Temperature Measurement
    Pressure Measurement
    Density Measurement
    On-Line Process Analyzers
    Transmission and Transformation of Signals
    Nomenclature
    References

Chapter 4. Errors in Measurement
    Introduction
    Instrument Properties
    Measurement Quality
    Sensitivity and Speed of Response
    Hysteresis and Dead Band
    Calibration Curves
    Accuracy of Different Instruments
    Nomenclature
    References

Chapter 5. Variable Classification
    Introduction
    Model
    Measurement Equation
    Graphs and Flowsheets
    Connectivity of Systems
    Observability
    Redundancy
    Linear Systems
    Canonical Representation of Linear Systems
    Other Methods
    System Degree of Redundancy
    Quantification of Observability and Redundancy
    Graphs and Canonical Matrices
    Nonlinear Systems
    Full Nonlinear Systems
    Nomenclature
    References

Chapter 6. Design and Upgrade of Nonredundant and Redundant Sensor Networks
    Introduction
    Upgrade and/or Design Goals
    Design for Estimability
    Design for Estimability Efficiency
    Compulsory Measurements and the Upgrade Case
    Sensor Networks for Bilinear Systems
    Nomenclature
    References

Chapter 7. Data Reconciliation
    Data Reconciliation
    Background
    Linear Data Reconciliation
    Steady-State Linear Data Reconciliation
    Nonlinear Steady-State Data Reconciliation
    Dynamic Data Reconciliation
    Nomenclature
    References

Chapter 8. Design of Precise Sensor Networks
    Introduction
    Cost-Optimal Design
    Multiple Instruments and Hardware Redundancy
    Maximum Precision Models
    Generalized Maximum Precision Model
    Relation Between Sensor Network Models
    Solution Procedures for Linear Systems
    Parameter Estimation in Nonlinear Systems
    Nomenclature
    References

Chapter 9. Precision Upgrade of Sensor Networks
    Introduction
    Upgrade Options
    Cost Benefit Analysis
    Upgrade Models Based on Addition of Sensors
    Model for Resource Reallocation
    Generalized Model for Resource Reallocation and Upgrade
    Nomenclature
    References

Chapter 10. Reliability of Nonrepairable Sensor Networks
    Introduction
    Sensor Service Availability
    Sensor Service Reliability
    Failure Density and Failure Rate
    Markovian Model
    Mean Time to Failure
    Estimation Availability and Reliability of Variables
    Determination of Estimation Reliability
    Estimation Reliability in Nonredundant Systems
    Availability, Reliability and Degree of Estimability
    System Availability and Reliability
    Nomenclature
    References

Chapter 11. Design of Reliable Linear Nonrepairable Sensor Networks
    Introduction
    Nonredundant Networks Featuring Maximum Reliability
    Redundant Networks Featuring Maximum Reliability
    Redundant Networks Featuring Maximum Reliability and Hardware Redundancy
    Redundant and Restricted Networks
    Nomenclature
    References

Chapter 12. Design of Reliable Bilinear Nonrepairable Sensor Networks
    Introduction
    Bilinear Multicomponent Systems
    Energy Networks
    Nomenclature
    References

Chapter 13. Design of Reliable and Cost-Efficient Nonrepairable Sensor Networks
    Introduction
    Minimum Cost Model
    Minimum Number of Sensors Model
    Solution Procedure
    Relation to Other Models
    Limitations of Previous Models
    Generalized Maximum Reliability Model
    Nomenclature
    References

Chapter 14. Design of Repairable Sensor Networks
    Introduction
    Failure Intensity
    Repair Intensity
    Expected Number of Repairs
    Maintenance and Total Cost
    Residual Precision
    Minimum Cost Model
    Nomenclature
    References

Chapter 15. Design of Robust Sensor Networks
    Introduction
    Origin of Gross Errors
    Gross Error Handling
    Test for Gross Error Presence
    Gross Error Detection in Dynamic Data Reconciliation
    Inaccuracy in Gross Error Detection
    Multiple Gross Error Identification
    Gross Error Size Estimation
    Sensor Network Error Detectability
    Sensor Network Gross Error Resilience
    Robust Sensor Networks
    Minimum Cost Model for Robust Networks
    Nomenclature
    References

Chapter 16. Genetic Algorithms
    Introduction
    Genetic Algorithms
    Nomenclature
    References

Chapter 17. Design of Sensors for Process Fault Diagnosis
    Introduction
    Fault Detection, Diagnosis and Alarms
    Fault Observability
    Fault Resolution
    Sensor Network Design
    Sensor Location for Fault Observability
    Sensor Location for Fault Resolution
    Nomenclature
    References

Index

PREFACE

The process of obtaining good estimates of process variables that conform to basic laws has received the name estimation. An example of this is the Kalman filter. Estimators based on the assumption of steady state have existed since the seminal article by Kuehn and Davidson [Chem. Eng. Prog., 57, 44 (1961)], and are known as data reconciliation methods. Recently, estimation methods for dynamic systems, which have received the name dynamic data reconciliation, have been developed. In reality, these latter methods can be classified as optimal estimation methods. Names aside, the real issue at stake is not the presence of noise, but the detection of gross errors, that is, biased instruments and leaks.

There is a vast amount of literature devoted to the selection and good maintenance of instruments. This literature covers the selection of the right instrument for a particular range and system, but only after the desired accuracy and reliability of measurement have been established. Little has been written on how to systematically determine the right accuracy and reliability needed when selecting an instrument, much less on how much redundancy is needed for a particular system. The key variables that need estimation come from control requirements, as well as monitoring needs for safety, quality control and production accounting. These are the starting points of the design methodology. This book concentrates on determining the optimal accuracy and reliability of instruments and their location. To determine this, certain desired properties of the system of instruments are used as constraints while the cost is minimized. These properties, among others, are variable observability, system reliability and precision of certain variables.

This book is not a textbook. Rather, it is intended to be an organized collection of the most relevant work in this area, published through the middle of 1999. It may, therefore, seem a book for specialists. However, it has been written with the intention of making it readable by regular engineers with some background in linear algebra, mathematical optimization and graph theory. It is organized so that the complexity of sensor network design is addressed step by step. Thus, after the issues of errors and estimability have been covered in Chapters 4 and 5, Chapter 6 covers the design of sensor networks that fulfill redundancy goals. Chapter 7 presents a review of data reconciliation, so that the design of sensor networks with accuracy/precision goals can be presented in Chapter 8. Chapter 9 presents models for the upgrade of sensor networks. Chapter 10 covers new concepts of reliability associated with sensor networks. Chapters 11 through 13 are devoted to the design of reliable and cost-efficient systems. Chapter 14 incorporates maintenance, that is, deals with repairable systems. Chapter 15 reviews the theory of gross error detection and presents techniques that help design sensor networks capable of filtering gross errors appropriately. Finally, Chapter 16 covers the use of genetic algorithms to solve some of the problems posed in the book, and Chapter 17 is devoted to the incipient work on instrumentation location for fault detection. To avoid appendices, the book blends new concepts with the presentation of known background material. For example, well-known concepts from graph theory, such as spanning trees, cutsets and cycles, are blended with the new concept of estimability in Chapter 5.

One of my former advisors (I had three) taught me some elements of the art of dreaming. He had a sign posted outside his office. It said something like this: The great projects are

  • dreamt by the crazy
  • executed by the audacious
  • enjoyed by the happy

He was crazy and audacious, as required. However, he did not have the tenacity to hold on to his project. He has, nonetheless, instilled in me what he was preaching and doing. On a much smaller scale, I also had a dream, worked with tenacity, and am enjoying the result, despite the sour taste of having to let this manuscript go to the publisher without being absolutely sure it is perfect, as I would have preferred. Oscar Wilde said it very well: "In this world there are only two tragedies. One is not getting what one wants, and the other is getting it."

I spent endless hours away from the family and from myself. Neither my family nor I would have benefited from my not writing this book, simply because I would be shamelessly pursuing some other lovely chemical engineering endeavors. My wife has asked me not to apologize for the time stolen from the family, so I will not. At least, tradition states, I should dedicate it to her and my children. I am sure they will agree that the dedication I have chosen brings back many buried ghosts from our common past, and pays tribute to those victims of state terrorism executed during the summary and barbaric killings of the 1970s in my country, Argentina. There are other more civilized ways to handle dissent of any kind, even dissent that is perceived as evil. My native country is finally finding the strength to remember. I hope it is also acquiring the wisdom to prevent these tragedies from being repeated.

ACKNOWLEDGMENTS

When I accepted this challenge, I naively thought that writing a book was just a matter of knowing the material and putting together a nice story about it. It is, after all, my first book, so I should be forgiven. Very soon I realized what nobody had told me, or what, if I had heard it, I had ignored: knowledge cannot be put together so easily, especially when you are the first one writing about an issue, because there is nowhere to look. After I started to organize the material, I realized that there were many holes in the story, many unanswered questions that made the story lose continuity. I postponed, as is probably customary, the end date of the book, so I would be able to answer some of these questions.

Mabel Sánchez has been a remarkable brain to have available for this task. Although Qiyou Jiang did not work directly on sensor location matters, his outstanding work on data reconciliation has influenced my thinking to an immeasurable extent. I regret not having him around anymore. Both of them have contributed to what have been, so far, the most academically productive years of my life. My deep thanks are also extended to my students Mariano Savelski, Hernán Rodera and José Soto, who patiently proofread some chapters of the manuscript and provided constructive criticism. My student Margiori Rivas patiently read the entire book and warned me of many mistakes. For this I am very grateful. Finally, Dr. Shankar Narasimhan provided valuable feedback on sections of the book.

The great staff of the School of Chemical Engineering and Materials Science at the University of Oklahoma, who make my office life so livable, have proven, once again, that intelligence is the ability to laugh at oneself without regret. They have facilitated more things than I can enumerate.


CHAPTER 1

Plant Data Management

INTRODUCTION

Instrumentation is needed in process plants to obtain data that are essential to perform several activities. Among the most important are control, the assessment of the quality of products, production accounting (sometimes called yield accounting) and the detection of failures related to safety. In addition, certain parameters that cannot be measured directly, such as heat exchanger fouling or column efficiencies, are of interest. Finally, new techniques, such as on-line optimization, require the construction of reliable computer models for which the estimation of process parameters is essential.

This book concentrates on the tasks of determining the optimal set of measured variables and selecting the accuracy and reliability of the corresponding instruments. The goal is to obtain sufficiently accurate and reliable estimates of variables of interest while filtering bad data due to possible instrument malfunction. An additional goal is to observe and diagnose single and multiple process faults.

First, this chapter highlights the different activities in which data play an important role. Next, model-based monitoring and quality of data, including an overview of the different ways of guaranteeing the desired quality, are discussed. This background material provides the foundation for the goals of instrumentation design and upgrade.

PLANT INFORMATION AND OPERATIONS MANAGEMENT

In the last decade, the chemical process industry has been incorporating new technologies in the form of several computer programs to help gather, filter, organize and use plant information for several technical and management activities. Through these different software packages, quality of products and cost containment have improved considerably. Efficiency of operations, which in the past relied on the expertise of local unit operators, can now rely on interactive and computer-based supervisory control activities at the plant-wide level. In addition, yield accounting, operations planning and maintenance scheduling benefit from more reliable and accurate data. In view of these new advances in data processing, even management procedures are undergoing revision.

Most process plants are designed to run at steady-state conditions. In practice, these conditions are not strictly met because plants are subject to minor unpredictable changes. However, the assumption of steady state is still successfully used to perform most of the gathered data analysis, with the obvious exception of control. This analysis, followed by decision making, covers several activities of plant operation. Briefly, some of these activities are:

  • Operation monitoring: This is a short-term activity in which the data are used either as part of control loops or as a means to change control settings.

  • Fault detection: This includes the detection of instrument malfunction and equipment failure and the assessment and quantification of leaks.

  • Performance analysis: This activity is typically performed on a daily basis and covers what in the oil-refining industry is called yield accounting, oil accounting or oil movement.

  • Process modeling: Simulation has proven to be an efficient tool for the process engineer. It provides the means to analyze alternative processing conditions, evaluate retrofit changes and detect operational problems.

  • Operations planning: Typical activities, such as scheduling, fouling monitoring, heat exchanger cleaning and catalyst activation, rely on data from the process and its adequate modeling.

  • Production planning: This is performed on a larger time scale. Several software packages have been developed to address this problem, especially for the oil industry.

  • Maintenance planning: Production losses increase with poor maintenance, but as the frequency of a given maintenance plan is increased, its cost increases. Naturally, a trade-off is obtained where the total cost of production loss and maintenance is minimum. If the maintenance plan can be modified by using techniques that allow early identification of potential problems, the overall cost is reduced.

  • Parameter estimation: The estimation of parameters is of paramount importance in practice. Typically, these parameters are column efficiencies, heat transfer coefficients, flash vaporization efficiencies, etc. None of them can be measured directly. In many cases, the precision and location of the existing instrumentation are insufficient to obtain good-quality estimates. A typical example is the prediction of fouling in crude fractionation preheating trains, which is information of vital importance for the optimization of cleaning cycles.

  • On-line optimization: Data from the plant are used to "tune" the parameters of a simulation model, which in turn is used as the basis for an optimization of the plant operation. As a result, a new steady-state condition that maximizes profit is obtained. This technology was introduced in industry in the 1980s and has resulted in millions of dollars of savings in operating costs (from 2% to 20%) in different applications: crude fractionation plants (Mullick, 1993); distillation (Smith, 1996); ethylene plants (Lauks et al., 1992); olefin plants, catalytic cracking, hydrocrackers, bisphenol-A plants, FCC units, and other cases referenced by Zhang et al. (1995) and Brydges et al. (1998). The cycle of Figure 1.1 depicts how real-time optimization cycles operate in plants (Forbes and Marlin, 1996). Not only are hierarchical optimization and control loops already producing millions of dollars of savings in practice, but there are also plans to perform closed-loop optimization, that is, automatic update of control setpoints.

  • Enterprise resource planning: This term was coined recently to refer to a planning activity, based mainly on optimization procedures, that comprises several of the above-described planning activities (operations, production, maintenance) in addition to finance and management planning (Badell et al., 1998; Bunch, 1998; Grosdidier, 1998). Thus, a vertical integration between business and plant operations using optimization models is starting to take place (Badell and Puigjaner, 1998).

FIGURE 1.1. Real-time optimization (cycle: plant and controllers → filtering of gross errors and data reconciliation → parameter estimation → model update → optimization and update of controller set points → back to the plant and controllers).

An integration of all these activities, from operation and monitoring to long-term planning, is also known as Plant-Wide Management and Control (Swanson and Stewart, 1994; Pelham and Pharris, 1996). Some companies have started to integrate management and enterprise resource planning with the above activities (Benson, 1995; Natori and Tjoa, 1998; Harkins, 1999). The idea of operating plants in a "hands-off" mode was for decades a dream of plant managers and plant decision-making teams. With reliable data already available and with the help of ongoing research, this dream is starting to become a reality (Koolen, 1994).

It is clear that quality of data is a key aspect of all the above activities. Some of them, such as on-line optimization, cannot be successfully performed without accurate and consistent data. This challenges designers to determine the best sensor network, in terms of number of instruments, location and quality, that is needed for this whole cycle to work properly. The choice of instrumentation has therefore ceased to be a localized and independent problem in the plant and has become one that is global and multiobjective.
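The real-time optimization cycle of Figure 1.1 can be caricatured in a few lines of Python. Everything below is an invented toy — a one-parameter plant model, a quadratic operating cost and noise-free measurements — meant only to show the shape of the loop (measure, estimate the parameter, update the model, optimize, move the setpoint):

```python
# Toy real-time optimization cycle: measure -> estimate the model
# parameter -> update the model -> optimize -> move the setpoint.
# The plant, prices and model are all hypothetical.

a_true = 2.0                 # "true" plant gain, unknown to the optimizer
price, cost = 10.0, 4.0      # product price and quadratic operating cost

def plant(u):
    """Plant measurement at operating point u (noise-free for clarity)."""
    return a_true * u

u = 1.0                      # initial controller setpoint
for _ in range(5):
    y = plant(u)                       # gather (already reconciled) data
    a_hat = y / u                      # parameter estimation step
    # optimizer: maximize price*a_hat*u - cost*u**2 => u* = price*a_hat/(2*cost)
    u = price * a_hat / (2 * cost)     # update the controller setpoint

u_opt = price * a_true / (2 * cost)    # true optimum, for comparison
print(u, u_opt)                        # setpoint converges to the optimum: 2.5 2.5
```

With measurement noise, each pass would first filter and reconcile the data, and the setpoint would hover around the optimum instead of hitting it exactly.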

MODEL-BASED MONITORING

Model-based monitoring consists of the use of a combination of models (formal descriptions of process behavior) and on-line measurements to achieve the following goals:

a. Produce estimates of measured and unmeasured variables
b. Identify malfunctioning instrumentation
c. Identify unsafe and/or faulty operating conditions and their origin
d. Identify events that can impact efficiency and quality of products

Kramer and Mah (1993), in an excellent overview of this issue, discussed several scenarios in which good estimates of data can be obtained. Data reconciliation and gross error detection are techniques that help to accomplish tasks (a) and (b) and constitute a particular case of the concept of data rectification. Although data reconciliation relies on analytical constraints and mostly on least-squares estimation, data rectification can obtain these estimates by using techniques such as Kalman filtering, pattern recognition, neural networks, principal component analysis and partial least squares. Likewise, the detection of faults relies on techniques based on statistics, but several other techniques can be used. Tasks (c) and (d) are direct consequences of using a model.

The field of sensor network design and upgrading has traditionally relied on model-based concepts. Almost all work addressing monitoring goals includes:

  • the ability of the sensor network to provide estimates of variables of interest
  • the ability to guarantee certain accuracy through data reconciliation
  • reliability
  • the capability of identifying gross errors through statistical model-based techniques
  • the capability of identifying process faults

QUALITY OF DATA

Quality of data is a loose term that is often used to refer to several properties that a set of data should have. These properties can be condensed into the following three broad attributes:

  • accuracy
  • precision or reproducibility
  • reliability

Accuracy is the ability of an instrument to measure the correct or "true" value. In turn, precision can be defined as the ability of a sensor to reproduce a value within a certain interval. Thus, an instrument can be precise and not accurate. This happens when repeated measurements of the same variable fall within a small interval that does not contain the true value. Conversely, an accurate instrument may have poor reproducibility: the individual readings scatter widely, but the mean of many of them is close to the true value. Figure 1.2 illustrates these concepts. Although these concepts are fairly simple, they are sometimes confused. Indeed, it is common to see engineers refer to precise systems as accurate and vice versa. Moreover, some books and manuals use the concept of precision as being part of a more general concept of accuracy that includes both the variance of the measurements and the deviation of the mean from the true value. Unfortunately, instead of proposing a new name, accuracy is still used, increasing the confusion.

Reliability of data is defined as the probability that the data will not be absent during a certain period of time. In turn, availability of data is the probability of data not being absent at a certain instant in time. In this regard, reliability is a more stringent requirement.

FIGURE 1.2. Accuracy and precision (one instrument accurate but imprecise, another inaccurate but precise, both compared with the true value).

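The contrast drawn in Figure 1.2 can be reproduced with two simulated instruments; all numbers here are invented for illustration. Instrument A is precise but inaccurate (small spread around a biased value), while instrument B is accurate but imprecise (large spread, unbiased mean):

```python
import random
import statistics

random.seed(0)
true_value = 100.0

# Instrument A: precise but inaccurate -- small noise plus a constant bias.
readings_a = [true_value + 2.0 + random.gauss(0, 0.1) for _ in range(1000)]
# Instrument B: accurate but imprecise -- large noise, no bias.
readings_b = [true_value + random.gauss(0, 2.0) for _ in range(1000)]

mean_a, std_a = statistics.mean(readings_a), statistics.stdev(readings_a)
mean_b, std_b = statistics.mean(readings_b), statistics.stdev(readings_b)

# A reproduces its value tightly (std near 0.1) but misses the true value
# by about 2; B scatters widely (std near 2) but its mean is near 100.
print(mean_a, std_a)
print(mean_b, std_b)
```

Reading only the standard deviation would declare A the better instrument; reading only the mean would declare B the better one — which is exactly why the two attributes must be specified separately.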

Summarizing, precision is a desired property related to the quality of the instrument and the noise existing in the system. Accuracy and reliability are more related to the probability of instrumentation failure: if the failure is not total but only constitutes a bias, accuracy is compromised; when a sensor goes off-line, the failure is total and the signal is no longer present.

Software or Analytical Redundancy

In addition to direct measurements, there are indirect ways of determining process variables. Therefore, we further classify data acquisition as follows:

  • data acquisition supported solely by instrument readings
  • data acquisition enhanced by software

When data acquisition is solely supported by instrument measurements, each particular value of a variable of the system is directly associated with its source, the instrument that measures it. It is well known that in a world not constrained by cost, precise and failure-proof instrumentation would suffice to obtain precise and reliable estimates. However, because in almost all cases the probability of a sensor failure is not negligible, redundancy is used as a means of guaranteeing data availability. With redundancy comes discrepancy between instruments measuring the same variable. This might not matter if the noise associated with the signal is small enough. However, because the level of noise is usually not small, readings have to be reconciled. Thus, redundancy is classified as:

  • hardware redundancy
  • software or analytical redundancy

Hardware redundancy is a term used when two or more sensors are used to measure the same variable. This is the case of two thermocouples measuring the temperature inside a vessel or a pipe in an environment where this temperature is the same (for example, a steam boiler). Another example is the use of two flowmeters installed to measure the same flow. Software or analytical redundancy is a term used when a set of measurements of different variables is supposed to satisfy a mathematical model. A simple example of this is a unit with several input and output streams. If one measures the flow rate of every stream with a single instrument per flow, there is no hardware redundancy. However, because the sum of the input flow rates has to equal the sum of the output flow rates, redundancy is now analytical. In other words, two estimates are available for each flow rate, one from its direct measurement and the other obtained by using the material balance equations, and this constitutes a conflict that needs to be resolved. Data reconciliation deals with the task of determining the best statistical estimates of all these variables.
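The conflict just described can be worked through with a tiny numerical example (all readings invented): a unit with two measured inputs and one measured output, one instrument per stream. For equal measurement variances, least squares resolves the conflict by spreading the imbalance over the three readings according to the balance coefficients:

```python
# Hypothetical readings: two inputs (f1, f2) and one output (f3) of a unit,
# one instrument per stream -- no hardware redundancy.
f1, f2, f3 = 10.1, 5.2, 15.0

# Analytical redundancy: the balance f1 + f2 = f3 provides a second
# estimate of the output flow, conflicting with its direct reading.
f3_from_balance = f1 + f2          # ~15.3 vs. the measured 15.0

# Residual (imbalance) of the raw readings:
r = f1 + f2 - f3                   # ~0.3

# For equal measurement variances, least squares distributes the residual
# according to the constraint coefficients (+1, +1, -1), i.e. r/3 each:
f1_hat = f1 - r / 3
f2_hat = f2 - r / 3
f3_hat = f3 + r / 3

print(round(f1_hat, 3), round(f2_hat, 3), round(f3_hat, 3))  # 10.0 5.1 15.1
print(round(abs(f1_hat + f2_hat - f3_hat), 9))               # 0.0 -- balance closes
```

The reconciled values change every reading a little instead of discarding one of them, which is the essential difference between data reconciliation and the heuristic "trust the operator's favorite meter" approach.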

Data Reconciliation

Because instrument readings are inaccurate and do not even obey basic conservation laws, there is a need to determine the best estimates out of a conflicting set of readings. However, in many facilities, plant operators still assume that readings are accurate enough for monitoring and control purposes. Because imbalances create conflicts with production accounting clerks, balances have been forced by the use of several heuristic techniques; the most popular has been the elimination or manual correction of the measurements that, in the experience of the operator, are less trustworthy. Data reconciliation techniques were introduced in the 1980s, particularly in the oil industry. Not only are these techniques able to enhance the accuracy of plant data, but they are also capable of detecting and filtering instrument bias and identifying leaks. Several books cover various aspects of data reconciliation (Mah, 1990; Madron, 1992; Veverka and Madron, 1997; Romagnoli and Sánchez, 1999; Narasimhan and Jordache, 2000). In addition, hundreds of articles have been devoted to the problem.

Data have to be accurate and consistent. Accuracy is obtained by proper instrument selection and calibration. Consistency is tested at a first level by the plant operator and later by the process engineer. Underneath each of these evaluations is a model based on empirical knowledge of the plant or on basic laws of nature (mass and energy conservation). In the case of total mass conservation, the mathematical form of this model is a simple linear set of equations. When component balances are made, the set of equations is bilinear. The full energy balances involving temperatures, compositions, flow rates and pressures as variables are also nonlinear, especially because they include the evaluation of thermodynamic properties.
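In the simplest (total mass) case just described, the model is a linear system Ax = 0, and the weighted least-squares reconciliation problem reviewed in Chapter 7 — minimize (x − y)ᵀΣ⁻¹(x − y) subject to Ax = 0, with y the measurements and Σ their covariance — has the standard closed form x̂ = y − ΣAᵀ(AΣAᵀ)⁻¹Ay. The flowsheet, readings and variances below are invented; the linear algebra is written out longhand for a two-node network so the sketch needs no libraries:

```python
# Hypothetical flowsheet: stream 1 splits into streams 2 and 3, which
# rejoin as stream 4.  Balance model A x = 0, one row per node:
A = [[1.0, -1.0, -1.0,  0.0],   # node 1: x1 - x2 - x3 = 0
     [0.0,  1.0,  1.0, -1.0]]   # node 2: x2 + x3 - x4 = 0

y   = [101.9, 68.4, 31.0, 100.1]   # raw measurements
var = [4.0, 1.0, 1.0, 4.0]         # measurement variances (diagonal Sigma)

# Closed form: x_hat = y - Sigma A' (A Sigma A')^-1 A y
r = [sum(A[i][j] * y[j] for j in range(4)) for i in range(2)]    # A y
S = [[sum(A[i][k] * var[k] * A[j][k] for k in range(4))          # A Sigma A'
      for j in range(2)] for i in range(2)]

det = S[0][0] * S[1][1] - S[0][1] * S[1][0]                      # 2x2 inverse
lam = [( S[1][1] * r[0] - S[0][1] * r[1]) / det,                 # (A Sigma A')^-1 A y
       (-S[1][0] * r[0] + S[0][0] * r[1]) / det]

x_hat = [y[j] - var[j] * sum(A[i][j] * lam[i] for i in range(2))
         for j in range(4)]

residuals = [sum(A[i][j] * x_hat[j] for j in range(4)) for i in range(2)]
print([round(v, 2) for v in x_hat])      # [100.2, 68.8, 31.4, 100.2]
print(max(abs(v) for v in residuals))    # both balances close (roundoff only)
```

Note how the imprecise flowmeters (variance 4) absorb larger adjustments than the precise ones (variance 1) — the reconciliation weights each correction by how trustworthy the instrument is.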
Thus, data reconciliation is a systematic way of performing this assessment, taking into account the precision of each contributing measurement and making use of statistical tools. In addition to providing the estimate of each measured variable, data reconciliation is also capable of providing estimates for unmeasured variables. When these unmeasured variables are process parameters, the technique receives the name of parameter estimation. Various ways have been proposed to validate data without the need of using data reconciliation. In particular, the term "soft sensor" was coined for the use of existing process and laboratory data to infer the value of a certain measurement (Martin, 1997; Hong et al., 1999). These soft sensors are based on neural networks and wavelet decomposition.

Precision and Software/Analytical Redundancy

Software or analytical redundancy increases the precision of estimates. It is well known that the precision of the estimates produced by data reconciliation is higher than that of the data provided by the redundant measurements. In other words, the standard deviations associated with the estimates are lower than those of the individual measurements. For production accounting and planning, this reduction in the level of uncertainty is important. In the oil industry, a small percentage gained in the uncertainty can lead to different financial decisions, especially nowadays when more and more sophisticated computer programs are used to make predictions and decisions.

Reliability and Software/Analytical Redundancy

If data are to be reliable, the instrumentation used has to be reliable too. However, because balance equations and other more complex models can be used for the estimation of variables, redundancy provides a way of enhancing reliability. Through the use of data reconciliation, one can guarantee that when a particular instrument measuring a variable directly fails, an estimate of this variable is still available through the model. Thus, one can afford to use less reliable instrumentation and reduce cost, as long as a scheme is in place to perform the estimation through software.

Detection of Gross Errors in Instrumentation

In the absence of gross errors, data reconciliation typically reduces to a simple optimization problem in which the adjustments to the measurements are minimized while requiring that the adjusted values satisfy a plant model, usually material and energy balances. It is precisely the presence of gross errors or outliers that makes the data reconciliation task difficult, because they need to be identified and eliminated. Instrument malfunction is a general term that covers situations ranging from miscalibration to total failure. In the absence of redundancy, miscalibration or bias cannot be detected, unless the deviation is so large that it becomes obvious.
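As a toy illustration of the software fallback mentioned above under reliability (stream names and flow values are hypothetical): in a flowsheet where stream 2 runs from unit A to unit B, the balance around each unit yields an estimate of flow 2 even when its own meter is out of service.

```python
# Hypothetical series flowsheet: stream 1 -> [unit A] -> stream 2 ->
# [unit B] -> stream 3. If the stream-2 meter fails, both unit balances
# still provide an estimate of F2 from the surviving meters.
F1_meas, F3_meas = 100.3, 99.8

F2_from_A = F1_meas                        # balance around unit A: F2 = F1
F2_from_B = F3_meas                        # balance around unit B: F2 = F3
F2_est = 0.5 * (F2_from_A + F2_from_B)     # equal-variance average: 100.05
```

With unequal meter variances, the plain average would be replaced by an inverse-variance weighted one.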
Redundancy, and especially software or analytical redundancy, is the only way to contrast data and determine possible malfunctions of this sort by filtering the data to detect and/or estimate or eliminate these biases. In addition, process systems often have leaks that can also be detected through software or analytical redundancy. This approach was called gross error detection by one group of researchers (Romagnoli and Stephanopoulos, 1980; Mah and Tamhane, 1982; Crowe et al., 1983; Crowe, 1994; Bagajewicz and Jiang, 1997, 1998) and lately sensor/data validation or signal reconstruction by another group (Dunia et al., 1996; Qin et al., 1997; Tay, 1996; Tham and Parr, 1996; and others).

Identification of gross errors is performed by using different techniques. The most popular techniques are those rooted in statistical hypothesis testing. In these tests, the null hypothesis is that there are no gross errors, whereas the alternative hypothesis is that at least one gross error exists in the set of measurements considered. The set under analysis could be the whole system (global tests), a particular measurement (measurement tests) and/or a particular unit (nodal or constraint tests). The different contributions to this problem are briefly reviewed next. Reilly and Carpani (1963) were the first to propose the global test and the nodal test. The measurement test was proposed by Mah and Tamhane (1987) and Crowe et al. (1983). Almasy and Sztano (1975) proposed the maximum power measurement test, which was the object of several follow-up articles. Madron (1985) proposed an alternative method of about the same power as the maximum power test. The nodal test was introduced by Reilly and Carpani (1963) and discussed later by Mah et al. (1976). New techniques relying on the generalized likelihood ratio were proposed by Narasimhan and Mah (1987, 1988), which were later proven equivalent to the measurement test. Johnston and Kramer (1995) proposed a Bayesian approach to the whole problem of reconciliation and gross error detection. Later, principal component analysis (PCA) was applied to gross error testing by Tong and Crowe (1995). Since then, a large number of PCA-based sensor validation techniques have been presented (Dunia et al., 1996; Qin et al., 1997; Tay, 1996; and others). Finally, Rollins and Davis (1992) introduced an unbiased estimation technique that uses Bonferroni tests. In an effort to depart from the statistically based techniques, neural networks have also been proposed as tools for gross error detection (Gupta and Narasimhan, 1993; Himmelblau, 1994; Karjala and Himmelblau, 1994, 1996; Reddy and Mavrovouniotis, 1998). Each of the aforementioned methods has its own problems, especially when several gross errors are present at the same time.

Estimation of Gross Errors

Once a gross error or a set of gross errors is detected, the following two questions need to be answered:

• Should the corresponding measurements be eliminated or should they be independently estimated?

• How is a gross error related to a bias, especially when averages of several measurements are reconciled? How can process variations be filtered from gross errors? Should historical data be used?

In trying to answer these questions, several methods have been developed throughout the past years. Three kinds of strategies help identify multiple gross errors:

• Serial elimination (Ripps, 1965; Serth and Heenan, 1986; Rosenberg et al., 1987), which identifies one gross error at a time by using some test statistic and eliminates the corresponding measurement until no gross error is detected. Several existing commercial software packages use this technique, based on the measurement test and, more recently, on principal component tests.
• Serial compensation (Narasimhan and Mah, 1987), which identifies the gross error and its size, compensates the measurement and continues until no error is found.
• Simultaneous or collective compensation (Keller et al., 1994; Kim et al., 1997; Sánchez and Romagnoli, 1994; Sánchez, 1996; Sánchez et al., 1999), which proposes the estimation of all gross errors simultaneously.

In addition, Jiang and Bagajewicz (1999) proposed a serial identification with collective compensation (SICC) strategy for dynamic systems and steady-state cases. Finally, the unbiased estimation technique (UBET), proposed by Rollins and Davis (1992), performs the identification first and then a simultaneous estimation.

Many researchers have evaluated the performance of these approaches. Serial elimination is simple but has the drawback of losing redundancy and is not applicable to gross errors that are not directly associated with measurements, e.g., leaks (Mah, 1990). Serial compensation is applicable to all types of gross errors; however, its results depend completely on the accuracy of the estimation of the size of the gross errors (Rollins and Davis, 1992). Collective compensation is considered to be more correct but computationally too intensive and impractical (Keller et al., 1994). Nevertheless, the results from simultaneous/collective compensation methods seem to be more accurate. For example, the simultaneous estimation method developed by Sánchez (1996) and later modified by Sánchez et al. (1999) is very accurate. However, it is still not suitable for large systems, because it becomes combinatorially expensive. Commercial software vendors claim that the introduction of data reconciliation technology has produced millions of dollars of savings.
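The mechanics of detection followed by serial elimination can be sketched on a toy problem. The two-unit series flowsheet, unit variances and the +8 bias planted on the second meter are all hypothetical; the chi-square critical values are standard tabulated constants, and the test statistics are the textbook global and measurement tests, not any vendor's implementation:

```python
import numpy as np

# Balances A x = 0 for the series flowsheet stream 1 -> [A] -> stream 2
# -> [B] -> stream 3; the stream-2 reading carries a planted +8 gross error.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
y = np.array([100.3, 107.9, 99.8])
CHI2_95 = {1: 3.841, 2: 5.991}        # tabulated 95% chi-square points

def reconcile(Am, Qm, ym):
    """Least-squares reconciliation plus global and measurement tests."""
    S = Am @ Qm @ Am.T
    lam = np.linalg.solve(S, Am @ ym)
    d = Qm @ Am.T @ lam               # adjustments y - x_hat
    gamma = (Am @ ym) @ lam           # global test statistic r^T S^-1 r
    W = Qm @ Am.T @ np.linalg.solve(S, Am @ Qm)
    z = np.abs(d) / np.sqrt(np.diag(W))   # measurement test statistics
    return ym - d, gamma, z

x_hat, gamma, z = reconcile(A, np.eye(3), y)
assert gamma > CHI2_95[2]             # global test fires (gamma ~ 41.2)
suspect = int(np.argmax(z))           # measurement test points at index 1

# Serial elimination: drop the suspect meter. With stream 2 unmeasured,
# the two balances collapse (by projection) into x1 - x3 = 0 for the
# remaining meters; re-reconciling shows no further gross error.
x2_hat, gamma2, _ = reconcile(np.array([[1.0, -1.0]]), np.eye(2), y[[0, 2]])
assert gamma2 <= CHI2_95[1]           # gamma2 ~ 0.125: clean
```

Note the smearing effect visible in z: all three statistics are inflated by the single bias, which is why identification uses the largest one rather than every statistic exceeding its threshold.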
Oil losses come from inaccurate computations of transactions and from tank evaporation and actual spills. Imbalances between input and output measurements in refineries can be of the order of 0.5–1% (Miles and Jelffs, 1988) or even higher, depending on the quality of the instrumentation. In a typical refinery processing 150,000 barrels of crude per day, this represents close to a million dollars per year in oil that is unaccounted for. Any project aiming at the identification of these losses will realize this type of savings. However, losses are not the only issue. Biased instrumentation can not only lead to underestimation of produced goods but can also affect monitoring, thus reducing the efficiency of plant operation. At the same time, proper identification of biased instrumentation allows better maintenance and prevents accidents. Some of these features, such as leak detection, are not available commercially and would represent a novel technological application.


Detection of Process Faults

Faults propagate throughout the process, altering the readings of instruments (pressures, temperatures, flow rates, etc.). Thus, sensors should be able to determine departures from normal operation. The quality of data needed for this task is, therefore, different. Although in normal operation the focus is on the accuracy of the estimation of key variables, in abnormal situations it shifts to observability and proper diagnosis of faults. Thus, one of the problems is to prevent false diagnoses caused by faulty instrumentation.

REFERENCES

Almasy G. A. and T. Sztano. Checking and Correction of Measurements on the Basis of Linear System Model. Problems of Control and Information Theory, 4, 57 (1975).
Badell M. and L. Puigjaner. Short-Term Planning from the Business Level in ERP Systems with Vertical Integration. Paper 240j. Annual AIChE Meeting, Miami (1998).
Badell M., A. Espuna and L. Puigjaner. Using ERP Systems with Budgeting Optimization Tools for Investment Decision Making. Paper 239b. Annual AIChE Meeting, Miami (1998).
Bagajewicz M. and Q. Jiang. An Integral Approach to Dynamic Data Reconciliation. AIChE J., 43, pp. 2546 (1997).
Bagajewicz M. and Q. Jiang. Gross Error Modeling and Detection in Plant Linear Dynamic Reconciliation. Comp. & Chem. Eng., 22, 12, pp. 1789–1810 (1998).
Benson R. S. Computer Integrated Management: An Industrial Perspective on the Future. Comp. Chem. Eng., 19, Suppl., pp. S543–S551 (1995).
Brydges J., A. Hrymak and T. Marlin. Real Time Optimization of a FCC Recovery Section. Proceedings of FOCAPO (1998).
Bunch P. R. Integration of Planning and Scheduling Systems with Manufacturing Processes. Paper 235g. Annual AIChE Meeting, Miami (1998).
Crowe C. M. Data Reconciliation. Progress and Challenges. Proceedings of the Process Systems Engineering (PSE) International Symposium, Kyongju, Korea (1994).
Crowe C. M., Y. A. Garcia Campos and A. Hrymak. Reconciliation of Process Flow Rates by Matrix Projection. I. The Linear Case. AIChE J., 29, pp. 818 (1983).
Dunia R., J. Qin, T. F. Edgar and T. J. McAvoy. Identification of Faulty Sensors Using Principal Component Analysis. AIChE J., 42, pp. 2797–2812 (1996).
Forbes J. F. and T. E. Marlin. Design Cost: A Systematic Approach to Technology Selection for a Model-Based Real-Time Optimization Systems. Comp. & Chem. Eng., 20, No. 6/7, 717–734 (1996).
Grosdidier P. Understand Operation Information Systems. Hydrocarbon Processing, Sept. (1998).
Gupta G. and S. Narasimhan. Application of Neural Networks for Gross Error Detection. Ind. Eng. Chem. Res., 32, 8, pp. 1651 (1993).
Harkins B. Turning Knowledge into Profit. Chemical Engineering, March (1999).
Himmelblau D. M. Rectification of Data in a Dynamic Process Using Artificial Neural Networks. Proceedings of the Process Systems Engineering (PSE) International Symposium, Kyongju, Korea (1994).
Hong S. J., J. H. Jung and C. Han. A Design Methodology of a Soft Sensor Based on Local Models. Comp. & Chem. Eng. Suppl., pp. S351–S354 (1999).
Jiang Q. and M. Bagajewicz. On a Strategy of Serial Identification with Collective Compensation for Multiple Gross Error Estimation in Linear Data Reconciliation. Ind. & Eng. Chem. Res., 38, 5, pp. 2119–2128 (1999).
Johnston L. P. M. and M. A. Kramer. Maximum Likelihood Data Rectification. Steady State Systems. AIChE J., 41, 11 (1995).
Karjala T. W. and D. M. Himmelblau. Dynamic Data Rectification by Recurrent Neural Networks vs. Traditional Methods. AIChE J., 40, 11, pp. 1865 (1994).
Karjala T. W. and D. M. Himmelblau. Dynamic Rectification of Data via Recurrent Neural Nets and the Extended Kalman Filter. AIChE J., 42, pp. 22–25 (1996).
Keller J. Y., M. Darouach and G. Krzakala. Fault Detection of Multiple Biases or Process Leaks in Linear Steady State Systems. Comp. & Chem. Eng., 18, pp. 1001 (1994).
Kim I., M. S. Kang, S. Park and T. F. Edgar. Robust Data Reconciliation and Gross Error Detection: The Modified MIMT Using NLP. Comp. & Chem. Eng., 21, 7, pp. 775–782 (1997).
Koolen J. L. A. Plant Operation in the Future. Comp. & Chem. Eng., 18, pp. S477–S481 (1994).
Kramer M. A. and R. S. H. Mah. Model-Based Monitoring. FOCAPO Proceedings, Crested Butte (1993).
Lauks U. E., R. J. Vanbinder, P. J. Valkenburg and C. van Leeuwen. On-Line Optimization of an Ethylene Plant. ESCAPE-1, Supplement to Comp. & Chem. Eng., 16, Suppl., pp. S213–S220 (1992).
Madron F. A New Approach to the Identification of Gross Errors in Chemical Engineering Measurements. Chem. Eng. Sci., 40, pp. 1855 (1985).
Madron F. Process Plant Performance, Measurement Data Processing for Optimization and Retrofits. Ellis Horwood, West Sussex, England (1992).
Mah R. S. H. Chemical Process Structures and Information Flows. Butterworths, Stoneham (1990).
Mah R. S. H., G. M. Stanley and D. M. Downing. Reconciliation and Rectification of Process Flow and Inventory Data. Ind. Eng. Chem., Process Des. Dev., 15, 1 (1976).
Mah R. S. H. and A. C. Tamhane. Detection of Gross Errors in Process Data. AIChE J., 28, 828 (1982).
Mah R. S. H. and A. Tamhane. Detection of Gross Errors in Process Data. AIChE J., 33, pp. 1514 (1987).
Martin G. Consider Soft Sensors. Chem. Eng. Prog., July (1997).
Miles J. and P. A. M. Jelffs. Computer Aided Loss Investigation and Monitoring. The Second Oil Loss Conference. P. A. M. Jelffs, editor. John Wiley & Sons, Ltd. (1988).
Mullick S. Rigorous On-Line Model (ROM) for Crude Unit Planning, Scheduling, Engineering and Optimization. AIChE Spring Meeting (1993).
Narasimhan S. and C. Jordache. Data Reconciliation & Gross Error Detection. An Intelligent Use of Process Data. Gulf Publishing Company, Houston, TX (2000).
Narasimhan S. and R. S. H. Mah. Generalized Likelihood Ratio Method for Gross Error Identification. AIChE J., 33, 9, pp. 1514–1521 (1987).
Narasimhan S. and R. S. H. Mah. Generalized Likelihood Ratios for Gross Error Identification in Dynamic Processes. AIChE J., 34, pp. 1321 (1988).
Natori Y. and I. B. Tjoa. To Innovate Chemical Plant Operation by Applying Advanced Technology and Management. Proceedings of FOCAPO (1998).
Pelham R. and C. Pharris. Refinery Operations and Control: A Future Vision. Hydrocarbon Processing, July (1996).
Qin S. J., H. Yue and R. Dunia. Self-Validating Inferential Sensors with Application to Air Emission Monitoring. Ind. Eng. Chem. Res., 36, 5, pp. 1675–1685 (1997).
Reddy V. N. and M. L. Mavrovouniotis. An Input-Training Neural Network Approach for Gross Error Detection and Sensor Replacement. Trans. IChemE, 76, Part A, May (1998).
Reilly P. M. and R. E. Carpani. Application of Statistical Theory of Adjustment to Material Balances. 13th Can. Chem. Eng. Conf., Montreal, Que. (1963).
Ripps D. L. Adjustment of Experimental Data. Chem. Eng. Progr. Symp. Ser. 55, 61, 8–13 (1965).
Rollins D. K. and J. F. Davis. Unbiased Estimation of Gross Errors in Process Measurements. AIChE J., 38, 4, pp. 563–572 (1992).
Romagnoli J. and M. Sánchez. Data Processing and Reconciliation for Chemical Processes Operations. Academic Press (1999).
Romagnoli J. and G. Stephanopoulos. On the Rectification of Measurement Errors for Complex Chemical Plants. Chem. Eng. Sci., 35, 5, pp. 1067–1081 (1980).
Rosenberg J., R. S. H. Mah and C. Iordache. Evaluation of Schemes for Detecting and Identifying Gross Errors in Process Data. Ind. Eng. Chem. Res., 26, pp. 555 (1987).
Sánchez M. Ph.D. Thesis. Universidad Nacional del Sur, Argentina (1996).
Sánchez M. and J. Romagnoli. Monitoreo de Procesos Continuos: Análisis Comparativo de Técnicas de Identificación y Cálculo de Bias en los Sensores. AADECA 94-XIV Simposio Nacional de Control Automático, Argentina (1994).
Sánchez M., J. Romagnoli, Q. Jiang and M. Bagajewicz. Simultaneous Estimation of Biases and Leaks in Process Plants. Comp. & Chem. Eng., 23, 7, 841–858 (1999).
Serth R. and W. Heenan. Gross Error Detection and Data Reconciliation in Steam Metering Systems. AIChE J., 32, pp. 733 (1986).
Smith O. Development and Benefits of On-Line Process Modeling Applications. AIChE Spring Meeting (1996).
Swanson I. and R. Stewart. Towards Full Plant-Wide Management and Control. PetrochemAsia 94 International Conference, Singapore, Nov. (1994).
Tay M. E. Keeping Tabs on Plant Energy and Mass Flows. Chem. Eng., September (1996).
Tham M. T. and A. Parr. Succeed at On-Line Validation and Reconstruction of Data. G. F. Nalven, editor. AIChE, New York (1996).
Tong H. and C. M. Crowe. Detection of Gross Errors in Data Reconciliation by Principal Component Analysis. AIChE J., 41, 7, pp. 1712 (1995).
Veverka V. V. and F. Madron. Material and Energy Balances in the Process Industries. Elsevier (1997).
Zhang Z., P. R. Pike and T. Hertwig. An Approach to On-Line Optimization of Chemical Plants. Comp. Chem. Eng., 19, Suppl., pp. S305–S310 (1995).

CHAPTER 2

Instrumentation Design Goals

INTRODUCTION

In this chapter, the goals of instrumentation design are presented in further detail. First, the concept of key variables is introduced. Then, different types of criteria for the selection of key variables are presented. Finally, the sensor network design goals are discussed in detail.

MEASURED AND KEY VARIABLES

Traditionally, the fields of monitoring and process control relied heavily on an a priori selection of measured variables. The same can be said about the emerging field of parameter estimation in on-line optimization. The traditional concept is to directly measure the variable whose value one desires to estimate. When one departs from this concept, a new view of the field of sensor network design emerges. Software or analytical redundancy can provide better and more reliable estimates through the use of steady-state and dynamic models. When the variable of interest is measured, its estimate is improved. Thus, in general, one can state that the measured variables are not necessarily the same as the key variables. This prompts the following definitions:

Definition: Key variables are the variables whose estimated values are of interest for control, monitoring and parameter estimation.

Definition: Measured variables are the variables whose measurements allow reliable and accurate estimates of key variables.


Thus, the optimal choice of measurements can now be performed by making use of software or analytical redundancy while guaranteeing reliability/availability and good gross error filtering properties. This is the objective of this book. In the next sections, the selection of key variables for process monitoring, control and alarms is reviewed.

SELECTION OF MONITORING VARIABLES

Monitoring of a process is performed for several purposes. Table 2.1 summarizes some of the monitoring purposes and the variables of interest. No comprehensive systematic methodology exists to optimally select these variables. Nevertheless, singular value decomposition was used for this purpose in the context of parameter estimation in on-line optimization (Krishnan et al., 1992a, b).

TABLE 2.1. Key Variables in Monitoring.

Safety: Fault tree analysis, process hazard analysis (PHA) and hazard and operability studies (HAZOP) determine chains of events and variables to monitor to prevent accidents. This information is, in turn, used to assess fault occurrence on-line.
Production accounting: Flow rate and composition of input and output process streams.
Quality control: Composition of streams, especially products, or variables that indirectly influence them.
Parameter estimation: Parameters are normally not possible to measure directly. Examples are heat exchanger fouling and tray efficiencies that are used in on-line optimization.
Environmental impact: Environmental consciousness and regulations dictate that certain variables should be monitored on a regular basis to assess possible damage to the environment.

Parameter estimation is based on a model. For example, column efficiencies, heat exchanger heat transfer coefficients and flash vaporization efficiencies can only exist if a model of the associated equipment exists. Therefore, mathematical expressions derived from these models are used to estimate the parameters. In this regard, as long as these equations can be solved, parameter estimation poses no other challenges than its sensitivity to the measurement of some variables. These problems can be mitigated by using software or analytical redundancy.

In on-line optimization, there is an additional problem: the sensitivity of the optimum to the parameters. This has, in turn, an effect on the desired accuracy of the estimates of these parameters. Krishnan et al. (1992a, b) presented an analysis of which parameters should be used. They also discuss in detail how plant-wide models can be used to estimate the needed parameters, thus presenting an alternative to the traditional estimation methods that focus on measurements around the units where the parameters are used. In addition, they introduce a methodology to select measurements. This methodology is based on a combination of singular value analysis (SVA) and sensitivity analysis. The procedure, however, does not rely on cost analysis.

SELECTION OF KEY VARIABLES IN CONTROL

Although most of the design methods of instrumentation networks for monitoring purposes rely on steady-state models, the design of control schemes is based on dynamic models. The accuracy needed in the variables of interest is related to control loop performance, which many times supersedes cost. There are some guidelines and models to determine these variables. Although they are typically suggested by the designer, some heuristic methods exist (Seborg et al., 1989; Ogunnaike and Ray, 1994). Mathematical programming techniques exist for particular cases, such as the decentralized discrete regulator problem with output feedback (Oloomi and Sawan, 1989).

To design a control system, an appropriate number of controlled and manipulated variables need to be selected. The number of manipulated variables cannot exceed the degrees of freedom, defined mathematically as the number of process variables minus the number of independent equations. In addition, if offsets cannot be tolerated, the number of manipulated variables should be at least as large as the number of controlled variables. In the following sections, guidelines to select manipulated and key variables are first discussed. The technique of singular value analysis to determine key variables is reviewed next, followed by an overview of the concept of inferential control.
Classification of Process Variables in Control Theory

In control system design theory, process variables are classified as either inputs or outputs, as shown in Figure 2.1. The output variables are process variables that ordinarily are associated with exit streams of a process or conditions inside units, such as compositions, temperatures, levels and flow rates. To satisfy plant and control objectives, a subset of the output variables is selected as variables to be controlled (i.e., controlled variables). Typically, the inputs are associated with inlet streams (e.g., feed composition or feed flow rate) or environmental conditions (e.g., ambient temperature). In general, controlled variables are measured on-line, and their measurements are used for feedback control. However, this is not always the practice. The case in which measured and controlled variables do not coincide has received the name of inferential control (Brosilow and Tong, 1978), which is primarily used in distillation to infer composition from flow rate and temperature measurements. Measured variables that are not controlled variables received the name of secondary measurements (Weber and Brosilow, 1978) in the control literature. As discussed at the beginning of this chapter, these concepts are generalized in this book, so that all measurements contribute to obtaining good estimates of key variables.

[FIGURE 2.1. Process variables: input variables x1, x2, ..., xM enter the process, and output variables y1, y2, ..., yN leave it.]

Selection of Controlled Variables

The consideration of control objectives has led to a number of guidelines for the selection of controlled variables from the available output variables. We now reproduce a list presented by Seborg et al. (1989).

• Guideline 1: Select variables that are not self-regulating. A common example is liquid level in a storage vessel with a pump in the exit line.

• Guideline 2: Choose output variables that may exceed equipment and operating constraints (e.g., temperatures, pressures and compositions).

• Guideline 3: Select output variables that are a direct measure of product quality (e.g., composition and refractive index) or that strongly affect it (e.g., temperature and pressure).
• Guideline 4: Choose output variables that seriously interact with other controlled variables. The steam header pressure for a plant boiler that supplies several downstream units is an example of this type of output variable.
• Guideline 5: Choose output variables that have favorable dynamic and static characteristics. Ideally, there should be at least one manipulated variable that has a significant, direct and rapid effect on each controlled variable.

Key variables for control may not be state variables but, rather, some combination of them that indicates the system performance, such as product recovery ratios, pressure ratios in compressors, process loads, etc.


Selection of Manipulated Variables

Based on the plant and control objectives, a number of guidelines have been proposed for the selection of manipulated variables from among the input variables:

• Guideline 6: Select inputs that have large effects on the controlled variables. Typically, the steady-state gain between the manipulated and controlled variable should be as large as possible.
• Guideline 7: Choose inputs that rapidly affect the controlled variables. In other words, avoid time delays.
• Guideline 8: The manipulated variables should affect the controlled variables directly rather than indirectly. For example, in a system heated by steam injection in a jacket, the exit temperature of the process stream is better controlled by manipulating the inlet steam flow rather than the exit condensate, because the steam rate has a more direct effect on the pressure of the shell and, consequently, on the saturation temperature.
• Guideline 9: Avoid recycling disturbances. That is, it is preferable not to manipulate an inlet or a recycle stream; this can be avoided by manipulating a utility stream instead.

These guidelines may be in conflict, especially when a variable has a larger steady-state gain (Guideline 6) but slower dynamics (Guideline 7), which calls for a trade-off analysis.

Selection of Measured Variables

The recommendation of control practitioners is that output variables used as controlled variables are the obvious choice. They also recommend additionally measuring selected input and output variables, because recorded measurements of manipulated inputs provide useful information for tuning controllers and troubleshooting control loops. Finally, measurements of disturbances in inputs can be used in feed-forward control schemes. In choosing outputs to measure and in locating measurement points, both static and dynamic considerations are important. Seborg et al. (1989) suggest:

• Guideline 10: Reliable, accurate measurements should be considered.
• Guideline 11: Select measurement points that have an adequate degree of sensitivity. For example, in distillation columns a product composition is often controlled indirectly by regulating a temperature near the end of the column if an analyzer is not available. For high-purity separations, the location of the temperature measurement point is very important.


• Guideline 12: Select measurement points that minimize time delays and time constants. Reducing dynamic lags and time delays associated with process measurements improves closed-loop stability and response characteristics.
• Guideline 13: Select measurements that can filter disturbances appropriately. In distillation columns, when temperature is used to assess composition, disturbances due to the presence of components that are not anticipated in the feed can make control inappropriate (Downs and Ogunnaike, 1994).

In addition, Friedmann and Moore (1972) pointed out that if key variables cannot be measured directly, or the available instruments do not provide the desired accuracy, then other measurements are needed to provide an estimate of the key variables through software or analytical redundancy.

Singular Value Analysis

Singular value decomposition has been suggested as a more systematic way of selecting controlled and manipulated variables. This procedure allows the determination of the best multiloop technique by capturing the "decoupled" open-loop gains of the system (Arkun and Ramkrishnan, 1984; Lau et al., 1985; Moore, 1986, 1987; Skogestad and Postlethwaite, 1996, Chapter 10). Consider a process that has n controlled and m manipulated variables. We assume that a steady-state process model is available and that it has been linearized as follows:

C = K M    (2.1)

where C is the vector of n controlled variables, M is the vector of m manipulated variables (both expressed as deviation variables), and K is the steady-state gain matrix. Normally, n = m, and one requirement is that K should be nonsingular if the controlled variables are to be independently regulated. A small eigenvalue of K indicates that larger changes in one or more manipulated variables are needed to control the process. The singular values of K are nonnegative numbers defined as the positive square roots of the eigenvalues of K^T K. The first r singular values are positive numbers, where r is the rank of matrix K^T K. The remaining n − r singular values are zero. Finally, the condition number (CN) is defined as the ratio of the largest and smallest nonzero singular values (Moore, 1986, 1987). If K is singular, then it is ill-conditioned and, by convention, CN = ∞. The concept of a condition number can also be extended to nonsquare matrices (Klema and Laub, 1980). Thus, the condition number is a positive number that provides a measure of how ill-conditioned the gain matrix is. Processes with poorly conditioned K matrices tend to require large changes in the manipulated variables to influence the controlled variables. A justification of this statement is given by Seborg et al. (1989, p. 691).
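For instance (the 2 × 2 gain matrix below is hypothetical, chosen only so that its columns are nearly collinear), the singular values and condition number can be computed directly:

```python
import numpy as np

# Hypothetical steady-state gain matrix K for C = K M, with two controlled
# and two manipulated variables; the columns are nearly collinear.
K = np.array([[2.0, -1.8],
              [1.9, -1.7]])

sing_vals = np.linalg.svd(K, compute_uv=False)   # returned in descending order
cond_number = sing_vals[0] / sing_vals[-1]       # CN = sigma_max / sigma_min

# np.linalg.cond computes the same ratio; a CN in the hundreds warns that
# some output directions require very large manipulated-variable moves.
assert np.isclose(cond_number, np.linalg.cond(K))
```

Here CN comes out in the hundreds, so one would look for a better pairing or a different set of manipulated variables, exactly as in the column example that follows.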

Selection of Key Variables in Control

79

L Distillate D=1,277 lb-mole/hr zD=93 % C3

73 1,763 lb-mole/hr

Sidedraw F64=828 lb-mole/hr z64=90 % i-C4

64

882 lb-mole/hr

38 Sidedraw F15=883 lb-mole/hr z15=90 % n-C4

15

1,764 lb-mole/hr

21

5

V

Bottoms B=1,421 lb-mole/hr zB=95 % i-C5

FIGURE 2.2. Four components column. Adapted from Seborg et al., 1989.

Roat et al. (1986) analyzed the choice of manipulated variables for a complex, four-component distillation column (Figure 2.2). The choices are taken from Seborg et al. (1989) and reproduced in Table 2.2. The four components are propane, isobutane, n-butane, and isopentane. There are six possible manipulated variables. In addition, ratios of these variables are also permissible. Table 2.2 shows the condition numbers for six schemes that have been evaluated for the column. Because three of the strategies have roughly the same low CN , these were subjected to further evaluation using dynamic simulation. Based on simulation results, the best control strategy was number 4. When the process gain matrix is nonsquare, the singular value decomposition (SVD) is obtained as follows: K = U 6V T

(2.2)

where U is a n × m orthonormal matrix. Its columns are called “left singular vectors.” Matrix V is also an m × m orthonormal matrix, with columns called “right singular vectors.” Finally, 6 is an m × m diagonal matrix of scalars called the “singular values,” which are organized in descending order.

22

INSTRUMENTATION DESIGN GOALS

TABLE 2.2. Condition Numbers for the Gain Matrices. Controlled Variables z D = Mole fraction of propane in distillate D z 64 = Mole fraction of isobutane in tray 64 side-draw z 15 = Mole fraction of n-butane in tray 15 sidedraw z B = Mole fraction of isopentane in bottoms B Possible Manipulated Variables L = Reflux flow rate D = Distillate flow rate V = Steam flow rate Strategy Number 1 2 3 4 5 6

B = Bottoms flow rate F64 = Sidedraw flow rate at tray 64 F15 = Sidedraw flow rate at tray 15

Manipulated Variables

Condition Number

L/D, F64 , F15 , V V/L , F64 , F15 , V D/V, F64 , F15 , V D, F64 , F15 , V L , F64 , F15 , B L , F64 , F15 , V

9030 60100 116000 51.5 57.4 53.8

Note: In each control strategy, the first controlled variable is paired with the first manipulated variable and so on. Thus, for strategy 1, x D is paired with L/D and x B is paired with V .

Moore (1987) presented an ethanol column example, where two manipulated variables are chosen (distillate flow rate D and condenser heat load Q). The process gain matrix is given by: 

∂ T1 ∂D  ∂T  2  K =  ∂ .D  .  . ∂T n ∂D

 ∂ T1 ∂Q  ∂ T2   ∂Q  ..   .  ∂ Tn  ∂Q

(2.3)

and describes the temperature sensitivity on each tray with respect to each of the two manipulated variables. In his example, U is a 50 × 2 matrix, V is a 2 × 2 matrix and the condition number is CN = 66.4. The article also suggests two methods to locate the sensors needed. In this case, two temperature measurements in trays need to be chosen. One of these methods is based on the location of the principal components in the left singular vectors. The other is a more detailed analysis based on a global search. The most sensitive element (the largest) in each column of the left singular matrix is chosen as the measured variable. After the sensor locations have been selected, it is important to always check the singular value analysis (SVA) of the smaller system consisting of the m × m system of manipulated and measured

Selection of Key Variables in Control

23

variables selected in the previous step. The objective is to determine if the condition number and the singular values are still roughly the same magnitude. The condition number for this system can be worse or better than predicted by the overall analysis including all the variables. If the condition number from the partial analysis is considerably worse, a more detailed global method of sensor selection may need to be considered. This is called the intersensitivity method. The intersensitivity method is based on a global search of all possible sensor combinations. A partial singular value analysis can be performed for each combination of sensors that make physical sense. According to Moore (1987), this procedure is more time consuming and in most cases does not yield results that are significantly different from the principal component method. Singular value analysis has also been suggested as part of the selection of optimization variables in on-line optimization schemes (Krishnan et al., 1992a,b; Glemmestad et al., 1997). Glemmestad et al. (1997) also suggested the use of an alternative procedure based on the study of the influence of the disturbances in the objective function of on-line optimization for each measured set. Inferential Control In inferential control, process measurements that can be obtained rapidly are used to infer the value of the controlled variable. For example, as analyzed in the previous section, if the overhead product stream in a distillation column cannot be analyzed on-line, sometimes measurement of the top tray temperature can be used to infer the actual composition. For a binary mixture, the Gibbs phase rule indicates that there is a unique relation between composition and temperature if pressure is constant. Therefore, a thermodynamic equation can be used to relate the temperature of the top tray to the overhead composition. For the separation of multicomponent mixtures, approximate methods to estimate compositions must be used. 
Based on process models and plant data, simple algebraic correlations can be developed to relate the mole fraction of the heavy key component to several different tray temperatures (usually near the top of the column). Then, the overhead composition can be inferred from available temperature measurements and used in the control algorithm (Moore, 1986). If necessary, the parameters in the correlation may be updated when new composition measurements become available. For example, if samples are sent to the analytical laboratory once per hour, the correlation parameters can be adjusted so that the predicted values agree with the measured values. The concept of inferential control can be used for other process operations, such as chemical reactors where composition is normally the controlled variable. Selected temperature measurements can be used to estimate the outlet composition when it cannot be measured on-line. However, when inferential control does not perform satisfactorily, incentive exists to introduce other online measurements for feedback control. Consequently, there is considerable

24

INSTRUMENTATION DESIGN GOALS

interest in the development of new instrumentation, such as process analyzers, which can be used on-line and which exhibit very short response times. Havre and Skogestad (1996) and Cao et al. (1997) introduced the concept of output effectiveness (OE) for measurement selection. This OE is defined as the square root of the diagonal elements of scaled open loop transfer function multiplied by its generalized (Moore-Penrose) inverse. The details of this procedure and its justification can be found in the original article and in Cao et al. (1998), who extended the use of the OE to secondary measurements. Finally, practitioners and researchers are starting to realize that techniques such as multiple regression, principal component analysis, partial least squares and neural networks, which are popular in monitoring, can be used effectively in control (Deshpande et al., 1996; Soroush, 1998). Interaction of Design and Control It is the current state of the art that process design precedes the design of the control system and they constitute separate activities. In systems control theory, it is a well-known fact that this approach has serious limitations because the plant design determines the process dynamic characteristics. To ameliorate these deficiencies, Luyben and Floudas (1994) proposed singular value analysis, as well as other measures to construct a multiobjective framework for the design of process systems and their control schemes simultaneously. Downs and Ogunnaike (1994) also discuss the design for controllability and operations. Covering all these design techniques is outside the intended scope of this book. Nevertheless, as this field continues to develop, it will produce improved lists of key variables for which a sensor network will have to be built. SELECTION OF MEASURED VARIABLES FOR FAULT DIAGNOSIS Alarms are important elements in monitoring. They can be broadly classified as quality loss alarms and safety alarms. 
In the first case, alarms are set to monitor if a particular variable is within an expected range. Hardware and software redundancy of measurements is thus of importance. In addition, data reconciliation and, in particular, gross error detection become a must. In the case of safety alarms, the problem is similar, but the goals are not quite the same. If quality is lost, the system can continue functioning, and this quality loss can be measured by a monetary loss. However, when safety is compromised, then the equipment or even the plant should be shut down. Faults originate somewhere in the process, and the disturbances propagate to places where a sensor can detect them. Several techniques have been developed to establish the existence of a fault (observability), determine its nature (diagnosis) and take appropriate action (alarm and or shutdown), assuming that the instrumentation is free of biases and is well calibrated. A few books cover this

Instrumentation Design Goals

25

field in detail (Himmelblau, 1978; Pau, 1981; Gertler, 1998). Several techniques of diverse origins are used nowadays: fault trees, cause-effect digraphs, neural networks, knowledge-based systems and model-based statistical approaches such as principal component analysis and partial least squares. The field is very much intertwined with the data reconciliation field to the extent that the same techniques are used sometimes in both fields under different names. Fault diagnosis poses an even more ambitious goal than simple control and instrument malfunction diagnosis. Indeed, for each pattern of signals that are suspect of process malfunction, one needs to be able to sort if the pattern is being created by instrument malfunction or if it is a genuine process fault, or if it is a combination of both. With valuable production at stake, a shutdown based on false alarms is an issue of increasing concern. For this reason, there is increasing interest in the use of simultaneous instrument and process fault detection. INSTRUMENTATION DESIGN GOALS Several goals exist for every designer of a set of instruments in a plant.

r Cost: The simplest objective function used in design has been the sum of the annualized capital investment cost and the operating cost. The simplest design objective in the case of sensor networks has been the overall annualized investment cost. Operating costs that are related to electricity consumption have always been neglected, leaving maintenance costs as the primary components. r Estimability: This term was coined to designate the ability to estimate a variable using hardware or software, and it will be formally introduced in Chapter 5. In the absence of more sophisticated goals, estimability can be used as a substitute goal. The connection between redundancy and estimability will be explored in detail in Chapter 5. r Precision: In many cases, specific levels of precision can be developed for particular variables in a system. For example, accounting might be interested in closing the material balance within a certain percentage or even request that the precision of each stream crossing battery limits has a particular value. Parameter estimation is also a software-based activity for which the parameter precision depends on the set of sensors used. r Reliability: If information on the frequency of instrumentation failure is available, it is possible to set as a design goal that the software availability or reliability of variables be larger than prespecified thresholds. r Gross error robustness: A robust sensor network is defined as a network that has good performance in the presence of gross errors. This includes: — Gross error detectability: This term is coined after the ability (or more precisely the statistical power) to detect gross errors.

26

INSTRUMENTATION DESIGN GOALS

— Residual precision: After gross errors are detected, the level of precision associated with the estimates deteriorates, usually because the variables with gross errors are eliminated from the set. Precision availability is a level of precision achieved when data reconciliation is performed after a certain number of gross errors are eliminated. — Gross error resilience: Undetected gross errors have a smearing effect on the estimates obtained by data reconciliation. This effect can be quantified, and a robust network is capable of containing this smearing to a desired small level. r Maintenance: Until recently, the costs associated with maintenance have not been considered part of the design of sensor networks. Two aspects are of importance: the cost of preventive and corrective maintenance. r Fault diagnosis and alarms: Only a few articles addressing alarm needs in this context have been published. Tsai and Chang (1997) proposed to design alarm logic minimizing the expected loss, which is assessed in terms of probabilities. This line of work builds up on the early analysis performed by Inoue et al. (1982) and Kohda et al. (1983). Lately, Raghuraj et al. (1999) used signed directed graphs to propose optimal sensor networks that would guarantee fault observability. Although all these goals seem equally important, cost has been the traditional objective function used in design. However, in some cases, other goals, such as precision, have also been used as objective functions in design. This book concentrates on the minimum cost model. The simplest of these models is: Minimize {Investment Cost} s.t. Desired Level of Estimability of Key Variables The unknowns of this problem are the number and location of measurements as well as the type, precision, and reliability of each instrument. Throughout the book, other constraints will be added until a complete minimum cost model is presented. This model is the following: Minimize {Total Cost} s.t. 
 Desired Level of Estimability of Key Variables     Desired Level of Precision of Key Variables  Desired Level of Reliability of Key Variables    Desired Level of Gross-Error Robustness

Upgrading of Instrumentation

27

where the total cost includes the maintenance cost, which regulates the availability of variables, a concept that substitutes reliability when the system is repairable. The above complete model covers one important aspect of monitoring, which is to filter relatively good data for monitoring and eventually control purposes. However, it does not cover the optimal design of a sensor networks and a logic for alarm systems. Even though important attempts are being made to address this issue (Tsai and Chang, 1997; Raghuraj et al., 1999), a model based on cost-efficient alarm design is yet to be produced. Likewise, the aforementioned direct incorporation of control performance measures as additional constraints of a cost-optimal model is a matter that has not been investigated yet. Finally, methods for cost-optimal instrumentation design corresponding to the implementation of several other monitoring procedures, like principal component analysis (PCA), projection to latent structures (PLS), wavelet analysis, and neural networks, among others, have not been yet proposed. While the above model seems complete for process monitoring in the absence of severe faults, once this latter is considered a more ambitious problem arises. From the exclusive point of view of fault detection, the problem of the design of instrumentation is as follows: Minimize {Total Cost} s.t.  Desired Observability of Faults     Desired Level of Resolution of Faults  Desired level of Reliability of Fault Observation    Desired level of Gross-Error Robustness in the Sensor Network The combination of both goals, that is, the design of a sensor network, capable of performing estimation of key variables for production accounting, parameter estimation for on-line optimization as well as fault detection, diagnosis and alarm (all of this in the context of detecting and assessing instrument biases and process leaks) is perhaps the long term goal of this field.

UPGRADING OF INSTRUMENTATION When reconciliation methods and new control strategies gain acceptance in practice by process and instrumentation engineers, the issue of instrument revamping becomes critical. Replacing an existing measurement device with a more accurate one is the first solution offered to the problem of increasing data accuracy. However, this is not always the most economical option.

28

INSTRUMENTATION DESIGN GOALS

Industry wants to determine how a limited amount of resources (typically fixed capital) can be allocated to maximize certain goals, typically accuracy. In addition, certain resources can be reallocated, as is the case of laboratory time and chemicals for analysis, thermocouples, pressure gauges and even flowmeters. There is increasing interest for this type of problems. A commercial software (DATACON from Simulation Sciences, 1993) includes tools to perform (albeit rudimentary) some of these studies. Recently, researchers from British Petroleum and Imperial College studied how to perform optimal reallocation (Alh´eriti`ere et al., 1997, 1998a,b). This issue will be covered throughout the book and with especial emphasis in Chapter 9.

NOMENCLATURE c: CN : D: Fi : m: Q: U: V: xi : yi : zi :

vector of controlled variables condition number distillate flow rate Flow rate vector of manipulated variables condenser heat load matrix of left-singular vectors matrix of right-singular vectors input variable output variable composition

Greek Letters 6:

matrix of singular values

REFERENCES Alh´eriti`ere C., N. Thornhill, S. Fraser and M. Knight. Evaluation of the Contribution of Refinery Process Data to Performance Measures. AIChE Annual Meeting, Los Angeles (1997). Alh´eriti`ere C., N. Thornhill, S. Fraser and M. Knight. Cost Benefit Analysis of Refinery Process Data: Case Study. Comp. & Chem. Eng., 22, Suppl., pp. S1031–S1034 (1998a). Alh´eriti`ere C., N. Thornhill, S. Fraser and M. Knight. Cost Benefit Analysis of Process Data in Plant Performance Analysis. AIChE Annual Meeting, Miami (1998b). Arkun Y. and S. Ramkrishnan. Structural Sensitivity Analysis in the Synthesis of Process Control Systems. Chem. Eng. Sci., 39, 7 & 8, pp. 1167 (1984). Brosilow C. and M. Tong. Inferential Control of Processes. II. The Structure and Dynamics of Inferential Control Systems. AIChE J., 24, pp. 492–500 (1978).

References

29

Cao Y., D. Rossiter and D. H. Owens. Screening Criteria for Input and Output Selection. ECC97 CDROM, number 773, Brussels, Belgium (1997). Cao Y., D. H. Owens and D. Rossiter. Output Effectiveness and Scaling Sensitivity for Secondary Measurement Selection. Trans IChemE, 76, Part A, October (1998). DATACON Manual. Application: Optimal Location of Instrument. Simulation Sciences (1993). Desphande P. B., S. S. Yerrapragada, E. F Jacob, R. N. Mohanty and S. L. Joshi. Improve Control with Software Monitoring Technologies. Hydrocarbon Processing, Sept. (1996). Downs J. J. and B. Ogunnaike. Design for Control and Operability: An Industrial Perspective. Proceedings of FOCAPO 94 (1994). Friedmann P. G. and J. A. Moore. For Process Control. . . Select the Key Variable. Chemical Engineering, June (1972). Gertler J. J. Fault Detection and Diagnosis in Engineering Systems. Marcel Dekker, New York (1998). Glemmsatd B., S. Skogestad and T. Gundersen. On-Line Optimization and Choice of Optimization Variables for Control of Heat Exchanger Networks. Comp. & Chem. Eng., 21, Suppl., pp. S379– S384 (1997). Havre K. and S. Skogestad. Input-Output Selection and Partial Control. Proc. IFAC’96. 13th World Congress, San Francisco, CA, Vol. M, pp. 181–186 (1996). Himmelblau D. Fault Detection and Diagnosis in Chemical and Petrochemical Processes. Elsevier, Amsterdam (1978). Inoue K, T. Kohda, H. Kumamoto, I. Takami. Optimal Structure of Sensor Systems with Two Failure Modes. IEEE Transac. on Reliability, Vol. R-31, 1 (1982). Klema V. C. and A. J. Laub. The Singular Value Decomposition: Its Computation and Some Applications. IEEE Trans. Auto. Control, AC-35, 164 (1980). Kohda T., H. Kumamoto, K Inoue. Optimal Shutdown Logic for Protective Systems. IEEE Transac. on Reliability, Vol. R-32, 1 (1983). Krishnan S., G. Barton and J. Perkins. Robust Parameter Estimation in On-line Optimization. Part I. Methodology and Simulated Case Study. Comp. & Chem. Eng., 16, pp. 545–562 (1992a). 
Krishnan S., G. Barton and J. Perkins. Robust Parameter Estimation in On-line Optimization. Part II. Application to an Industrial Process. Comp. & Chem. Eng., 17, pp. 663–669 (1992b). Lau H., J. Alvarez and K. F. Jensen. Synthesis of Control Structures by Singular Value Analysis. AIChE J., 31, 3, pp. 427 (1985). Luyben M. L. and C. A. Floudas. Analyzing the Interaction of Design and Control. 1. A Multiobjective Framework and application to Binary Distillation Synthesis. Comp. & Chem. Eng., 18, 10, pp. 933–969 (1994). Moore C. Application of Singular Value Decomposition to the Design, Analysis and Control of Industrial Processes. Proc. Am. Control Conf., pp. 643 (1986). Moore C. Determining Analyzer Location and Type for Distillation Column Control. 14th Annual Meeting of the Federation of Analytical Chemistry and Spectroscopy Societies. Detroit (1987). Ogunnaike B. A, W. H. Ray. Process Dynamics, Modeling and Control. Oxford University Press (1994). Oloomi H. and M. E. Sawan. Optimal Sensor Placement in Decentralized Control Systems. Int. J. Systems Sci., 20, 6, pp. 939–943 (1989). Pau L. F. Failure Diagnosis and Performance Monitoring. Dekker, New York (1981).

30

INSTRUMENTATION DESIGN GOALS

Raghuraj R., M. Bhushan and R. Rengaswamy. Locating Sensors in Complex Chemical Plants Based on Fault Diagnostic Observability Criteria. AIChE J., 45, 2, pp. 310–322 (1999). Roat S. D., J. J. Downs, E. F. Vogel and J. E. Doss. The Integration of Rigorous Dynamic Modeling and Control System Synthesis for Distillation Columns: An Industrial Approach. Chemical Process Control CPC-III, Edited by M. Morari and J. J. McAvoy. CACHE-Elsevier, New York (1986). Seborg D. E., T. F. Edgar, D. A. Mellichamp. Process Dynamics and Control. John Wiley (1989). Skogestad S. and I. Postlethwaite. Multivariable Feedback Control, Analysis and Design. John Wiley & Sons (1996). Soroush M. State and Parameter Estimation and their Applications in Process Control. Comp. & Chem. Eng., 23, pp. 229–245 (1998). Tsai C. S. and C. T. Chang. Optimal Alarm Logic Design for Mass-Flow Networks. AIChE J., 43, 11, pp. 3021 (1997). Weber R. and Brosilow C. The Use of Secondary Measurement to Improve Control. AIChE J., 24, pp. 485–492 (1978).

CHAPTER 3

Instrumentation

INTRODUCTION The purpose of this chapter is to provide an overview of the different types of instrumentation used in process plants. The chapter focuses primarily on flow rate, level, temperature and pressure instruments. Although density, concentration and pH measurements, among others, are sometimes performed on-line, they are more often made in the laboratory. However, some on-line density measurements are discussed. It is also customary to include valves as part of instrumentation, but because the book does not discuss valve selection and location, their analysis is omitted. Finally, a brief discussion on new emerging tendencies in signal processing is added. A complete survey of instrumentation can be found in the handbooks edited by Liptak (1997) and Webster (1999).

FLOW RATE INSTRUMENTATION Flowmeters are used to monitor and control fluid flow. They are divided into five basic categories based on the method of measurement:

r differential pressure meters r velocity meters r positive displacement meters r mass meters r variable area meters Several factors play an important role in determining what type of meter is best suited for a particular operation. Most often, these factors include 31

32

INSTRUMENTATION

Flow

Venturi Tube Concentric

Segmental

Flow

Eccentric

Orifice plate

Flow

Flow nozzle

FIGURE 3.1. Differential pressure flowmeters.

application, fluid properties, desired performance, method and/or ease of installation, safety, environmental impact, being many times the cost an overriding factor. Differential Pressure Meters Differential pressure meters measure the difference in pressure between the two sides of a restriction in a confined stream. There are three general types (Figure 3.1):

r orifice meters r venturi meters r flow nozzles They are based on a restriction to the flow of fluid produced by reducing the diameter of the pipe. The equations governing these devices can be derived using the mechanical energy balance (Bernoulli equation). P +ρ

v2 + gh = constant 2

(3.1)

where P is the pressure, v the velocity and ρ the density. This form of the Bernoulli equation assumes that the density is constant and friction losses are neglected. Applying this equation to an orifice or a venturi one obtains: P1 − P2 = ρ

v22 − v12 2

(3.2)

33

Flow Rate Instrumentation

where subscripts 1 and 2 refer to upstream and downstream conditions, respectively. Now, the total flow rate is related to the velocity through the continuity equation, W = ρ A 1 v1 = ρ A 2 v2

(3.3)

one obtains: · P1 − P2 =

ρv22

1 − (A2 /A1 ) 2

¸ (3.4)

After introducing the diameters of the pipe and the restriction, this equation can be further rearranged using Equation (3.3) as follows: Wtheo

√ p 2π 2 1 = (P1 − P2 )ρ D2 p 4 4 1 − (D2 /D1 )

(3.5)

This equation provides a theoretical relationship when density is assumed constant. It applies well for liquids. When gases are considered, then the mechanical energy equation is rewritten as follows: v22



v12

Z = −2

P2 P1

dP ρ

(3.6)

where the change of height is assumed zero and friction losses are again ignored. This integral can be evaluated by assuming that the gas undergoes an adiabatic expansion. This is expressed by: µ ¶k 1 = constant P ρ

(3.7)

Upon integration, and after a few manipulations involving the continuity equation, one obtains: Wtheo

√ p 2π 2 Y1 = (P1 − P2 )ρ1 D2 p 4 1 − (D2 /D1 )4

(3.8)

where Y1 is the adiabatic gas expansion factor. The expression for Y1 is à £ ¤ !1/2 [1 − (D2 /D1 )4 ][k/(k − 1)](P2 /P1 )2/k 1 − (P2 /P1 )(k−1)/k £ ¤ Y1 = 1 − (D2 /D1 )4 (P2 /P1 )2/k [1 − (P2 /P1 )] (3.9)

34

INSTRUMENTATION

Thus, the equation for gases has been put into the form of Equation (3.5) for incompressible fluids with the inclusion of a correcting term (Y1 ). This additional factor is only needed for the compressible case. Deviations from the theoretical behavior predicted by Equations (3.5) and (3.8) are rooted in the measurement of the pressure and the corresponding diameter of the vena contracta. For this reason, a discharge coefficient C is introduced. C=

Wtrue Wtheo

(3.10)

The discharge coefficient is calculated at testing conditions, that is, at standard temperature and pressure. Further corrections need to be performed when actual conditions are different from the testing conditions. First, Y1 departs from the value at testing conditions and affects the value of the discharge coefficient. This variation in C is typically presented as a function of the Reynolds number (Figure 3.2). In addition, variations in the temperature of the fluid induce changes in the discharge coefficient. For compressible fluids, laboratory tests are conducted by using steam or air as the working fluid. As the fluid changes, the gas expansion factor also changes. Therefore, several equations exist to correct Y1 (Miller, 1996).

FIGURE 3.2. Variation of the discharge coefficient with the Reynolds number. Adapted from Upp, 1993.

Flow Rate Instrumentation

35

Testing and Base Conditions The conditions under which the manufacturer calculates the discharge coefficient are called testing conditions, whereas the typical average conditions at which the meter measures the flow are called base conditions. The deviations produced by conditions that are different from the testing or base conditions (temperature, pressure and fluid composition), which affect the discharge coefficient and the fluid density, are a source of errors. The correction of these deviations between actual and base or testing conditions are called compensation and is performed by measuring temperature and pressure so that the changes in the discharge coefficient can be inferred. Certain resistance exists to perform flow compensation to the signal that the operator sees. Rather, it is preferred to perform this compensation on plant historians and production accounting databases. Velocity Meters Velocity meters measure fluid velocity and use the relationship Q = v A to calculate volumetric flow rates, where Q is the volumetric flow rate, v the fluid velocity and A the meter cross-sectional area. There are five basic types of velocity meters: electromagnetic, vortex, turbine, ultrasonic and pitot. The signal produced by the meter is linear with fluid velocity; therefore, the errors associated with square-root extraction that are typical of differential pressure meters are eliminated. Electromagnetic meters, or magmeters (Figure 3.3), are governed by Faraday’s Law of Electromagnetic Induction, V = k0 BLv, where V is the induced voltage, B the magnetic field, L the conductor length through the fluid (usually the diameter) and v the fluid velocity. Thus, the volumetric flow rate is obtained by: Q = k1

π V L 4 B

(3.11)

where k1 is a constant that is determined during testing or calibration. These meters operate by producing a magnetic field perpendicular to the fluid flow. The resulting voltage, which is proportional to the fluid velocity, is measured by two electrodes. Fluids used in magmeters must be electrically conductive and nonmagnetic. Consequently, they can be used in most waterbased operations. Turbine meters consist of a bladed turbine rotor that spins as a result of the action the fluid exerts on it (Figure 3.4). Because the fluid induces an angular velocity to the rotor, the rotation is proportional to the flow rate and a linear relationship is obtained.

36

INSTRUMENTATION

FIGURE 3.3. Electromagnetic meter. (a) Reprinted from Miller, 1996, reproduced with permission of The McGraw-Hill Companies.

In a Vortex meter (Figure 3.5), the flow is split into two streams forcing the fluid to roll up into a well-defined vortex. After the vortex is formed, it sheds, and a second vortex begins to form on the opposite side (downstream) of the element. The formation time of these vortices is proportional to the velocity. The vortex meter consists of a vortex-generating element spanning across the diameter of the pipe and a detector to convert the shedding energy into an

FIGURE 3.4. Turbine meter. Reprinted from Miller, 1996, reproduced with permission of The McGraw-Hill Companies.

Flow Rate Instrumentation

37

FIGURE 3.5. Vortex meter. (a) Reprinted from Dolenc, 1996, with permission from Rosemount Measurment, Eden Prairie, MN.

electric signal. The pressure fluctuates as a result of the vortex shedding, and the detector picks up its frequency. Thus, the frequency of shedding of the vortices from the bluff body is given by (Connel, 1996):

f =

St v ω

(3.12)

where St is the Strouhal number (known to remain constant for Reynolds numbers from 10,000 to 1,000,000), f is the frequency of shedding (Hz), v the stream velocity (m/sec) and ω the width of the bluff body (m). Both vortex and turbine meters are called linear flowmeters of the pulse-frequency type. Because the equations are linear, one can obtain a mean meter coefficient (K factor), which is the quotient of the pulse frequency and the volumetric flow rate. Therefore, the governing equation for these flowmeters is:

Q = f / K    (3.13)

where f is the frequency in pulses per second (Hz). The K factor is plotted versus flow rate over a range. Typical signature curves for a turbine and a vortex flowmeter are shown in Figure 3.6. Because the K factor varies, a mean value between the maximum and minimum is chosen. In ultrasonic meters, sound waves are used to determine the velocity of the fluid. In a Doppler meter, a constant-frequency acoustic wave is transmitted into a fluid. Entrained solids or bubbles reflect back the sound; therefore, the frequency of the reflected wave changes by an amount proportional to the


FIGURE 3.6. Turbine meter and vortex meter signature curves (K factor versus flow rate, showing the linearity envelope about the mean K; the lower limit of the linear range is usually given by Re = 10,000). Adapted from Miller, 1996.
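Equation (3.13) amounts to a one-line conversion from pulse frequency to volumetric flow. A minimal sketch, with a hypothetical mean K factor (a real value would be read off the meter's calibration signature curve):

```python
def flow_from_pulses(frequency_hz, k_factor):
    """Eq. (3.13): volumetric flow Q = f / K for a pulse-frequency meter."""
    return frequency_hz / k_factor

# Hypothetical turbine meter: 250 pulses/s with a mean K of 50 pulses/litre
q = flow_from_pulses(250.0, 50.0)
print(q)  # 5.0 litres/s
```

Because the mean K is chosen between the maximum and minimum of the signature curve, readings near the ends of the linear range carry the linearity error shown in Figure 3.6.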

velocity. In a transit time meter (Figure 3.7), an acoustic pulse is introduced diagonally in both upstream and downstream directions. The difference in time for pulses to travel through the fluid is used to calculate the velocity. As can be inferred from Figure 3.8, the times the signal takes to travel forward and backward are given by the following equations:

tA = d / (s − v cos α)    (3.14)
tB = d / (s + v cos α)    (3.15)

where s is the velocity of sound in the fluid. Therefore, inverting and taking the difference of both expressions, one gets:

v = d (fB − fA) / (2 cos α)    (3.16)

where f A and f B are the frequencies at each receiver. In other words, the flow rate is proportional to the difference in frequencies.
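Equations (3.14)–(3.16) can be verified numerically: the frequencies are the reciprocals of the measured transit times, and the velocity follows directly. A sketch with illustrative (not vendor) numbers:

```python
import math

def velocity_from_transit_times(t_a, t_b, d, alpha):
    """Eq. (3.16): v = d (fB - fA) / (2 cos(alpha)), with f = 1/t."""
    f_a, f_b = 1.0 / t_a, 1.0 / t_b
    return d * (f_b - f_a) / (2.0 * math.cos(alpha))

# Synthetic check: water (s = 1480 m/s), v = 2 m/s, d = 0.1 m, alpha = 45 deg
s, v, d, alpha = 1480.0, 2.0, 0.1, math.radians(45.0)
t_a = d / (s - v * math.cos(alpha))  # upstream pulse, Eq. (3.14)
t_b = d / (s + v * math.cos(alpha))  # downstream pulse, Eq. (3.15)
print(velocity_from_transit_times(t_a, t_b, d, alpha))  # recovers ~2.0 m/s
```

Note that the speed of sound s cancels out of Equation (3.16), which is one reason transit time meters tolerate temperature changes better than Doppler meters.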

FIGURE 3.7. Transit time ultrasonic meter. Reprinted from Dolenc, 1996. © Panametrics, Inc., 1996.

FIGURE 3.8. Schematics of ultrasonic meter (transducers A and B separated by distance d, at angle α to the flow of velocity v).

The Pitot tube (Figure 3.9) consists of a small tube inserted horizontally in the pipe. This tube creates a stagnation point in the fluid flow. At this point, the pressure exerted by the fluid is larger than the pressure at static holes located in the wall. Applying the Bernoulli equation between the stagnation point and the static holes, one obtains

Ps − P0 = ρ vs² / 2    (3.17)

where Ps and vs are the pressure and velocity at the stagnation point, whereas P0 is the static pressure. From this equation, one can derive an expression for the velocity:

vs = √[2(Ps − P0)/ρ]    (3.18)
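Equation (3.18) is the basic pitot computation for incompressible flow; a minimal sketch with illustrative values:

```python
import math

def pitot_velocity(delta_p, rho):
    """Eq. (3.18): v_s = sqrt(2 (Ps - P0) / rho), incompressible flow."""
    return math.sqrt(2.0 * delta_p / rho)

# Water (rho = 1000 kg/m^3) with a 500 Pa stagnation-to-static differential
print(pitot_velocity(500.0, 1000.0))  # 1.0 m/s
```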

In the case of gases (typically above 200 ft/sec), compressibility becomes important and the fluid is assumed to be isentropically compressed at the point

FIGURE 3.9. Pitot tube schematics. (a) Side-wall static tap. (b) Static tube.


of impact. Thus, the following equation is derived:

vs = C √{ [2k/(k − 1)] (P0/ρ0) [ (Ps/P0)^((k−1)/k) − 1 ] }    (3.19)

The coefficient C is close to 1. We now express the flow rate as a function of the pressure drop and density for the incompressible case as follows:

Q = vs A = Kpitot Δpitot A √[2(Ps − P0)/ρ0]    (3.20)

where Kpitot is the pitot coefficient, which takes care of the departures from testing conditions, and Δpitot is the compressibility factor. Because of the form of this equation and the fact that a pressure difference is measured, pitot meters are sometimes also classified as differential pressure meters.

Positive Displacement or Volumetric Meters

Positive displacement (PD) meters, also called linear meters, directly measure volumetric flow rates by letting the flow pass through compartments of known volume. The fluid passage is then counted and multiplied by the individual volume of each segment. Measurements are taken for a given amount of time, and the output is transmitted on a volume/time basis using a counter. There are several types of PD meters, including the rotary sliding vane, the lobed impeller and the gear meter (Figure 3.10). In the rotary sliding vane meter, spring-loaded vanes move in and out of the moving rotor. A known amount of fluid is trapped between the vanes and the outer wall, which allows calculation of the volume/revolution ratio. In lobed impeller and gear meters, fluid is trapped between the impellers or gears and the wall. Thus, the same linear relationship between frequency and flow is established, as in Equation (3.13). These meters are used for product transactions at the battery limits (usually called cash transactions) and are the subject of several ANSI standards, which makes them expensive. A typical signature curve for these types of flowmeters is shown in Figure 3.11. Some detailed features of positive displacement meters are discussed by May (1971).

Mass Meters

Mass flowmeters are classified into four types: Coriolis, thermal, heated element and temperature rise.
The Coriolis meter (Figure 3.12) is based on Newton’s Second Law of Motion (F = ma). In this meter, fluid flows through


FIGURE 3.10. Positive displacement meters. Reprinted from Dolenc, 1996, with permission from Brooks Instrument, Hatfield, PA U.S.A.

two flow tubes, causing them to twist. The amount of twist is proportional to the mass flow rate, which is measured by magnetic sensors attached to the tubes. As the fluid rotates due to direction changes with angular velocity Ω, a force F is exerted on the fluid. Thus, the governing equation of these meters is

Q = F / (2LΩ)    (3.21)

where L is the length of the pipe. In practice, the pipe is oscillated by electromagnetic forces at the natural frequency of the structure. Magnetic sensors on

FIGURE 3.11. Positive displacement meter signature curve (ratio K of true to theoretical volume versus flow rate, showing slip and the operating range). Adapted from Miller, 1996.


FIGURE 3.12. Coriolis flowmeter. Reprinted from Pomroy, 1996. © Rosemount Inc., 2000.

the tubes measure the displacement. The twist angle relates to the mass flow rate, whereas the frequency of vibration indicates the density of the fluid. The thermal mass-flow meter operates by measuring thermal properties of the fluid (Figure 3.13). One type of thermal mass-flow meter, the thermal anemometer, measures the amount of heat removed from an electrically heated probe. As the fluid flows past the probe, the probe temperature changes and, consequently, a variation of resistance is observed. To maintain the temperature constant, the power is varied, and it is this power variation that is used to infer the flow rate.

FIGURE 3.13. Immersible thermal flowmeter (temperature sensor and velocity sensor probes).

FIGURE 3.14. Bypass thermal flowmeter (sensor tube, carrying Q2, in parallel with a laminar bypass carrying Q1).

A second sensor is often introduced to measure the temperature and adjust the power on the first sensor. The mass flow rate is given by the following relationship:

Q = [ (1/k1) (Ev²/Rv)/(Tv − T) − k2/k1 ]^(1/m)    (3.22)

where Ev²/Rv is the electrical power dissipated in the sensor, Rv is the resistance, (Tv − T) is the temperature difference, which is maintained constant, and k1, k2 and m are empirical constants. The temperature-rise flowmeter measures the temperature of the fluid upstream and downstream of a heater device and calculates the flow rate from the change in temperature (Figure 3.14). Typically, a bypass of the flow is arranged so that these temperature measurement elements can be properly installed. A laminar flow element induces a linear pressure drop, ensuring that the flow rates through the two fluid paths are proportional. Thus, the ratio of the flows is known. These flowmeters are governed by the following expression:

Q = (1 + Q2/Q1) q / (Cp ΔT)    (3.23)

where Q2 and Q1 are the mass flow rates through the two paths, q is the heat added to the fluid, Cp is the fluid heat capacity and ΔT is the temperature difference. A recent review of mass flowmeters is offered by Gibson (1998). He points out the emerging popularity of Coriolis and thermal mass meters, prompted by the increased precision requirements in the process industry.
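The temperature-rise relation (3.23) can be sketched directly; the bypass ratio Q2/Q1 is fixed by the laminar flow element, and the numbers below are illustrative only:

```python
def temperature_rise_flow(q_heat, cp, delta_t, bypass_ratio):
    """Eq. (3.23): Q = (1 + Q2/Q1) * q / (Cp * dT) for a bypass thermal meter."""
    return (1.0 + bypass_ratio) * q_heat / (cp * delta_t)

# 10 W of heating, Cp = 1000 J/(kg K), a 2 K temperature rise, Q2/Q1 = 4
print(temperature_rise_flow(10.0, 1000.0, 2.0, 4.0))  # 0.025 kg/s
```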


FIGURE 3.15. Rotameter. (a) Schematics. (b) Force balance schematics.

Variable Area Meters

Area flowmeters use a float inserted in a vertical tube, exposing a variable area to the flow (Figure 3.15). The float stays at a certain height by virtue of a balance between buoyancy, weight and pressure forces. Assuming that the downward pressure is just the static pressure, whereas the upward pressure force is the total pressure (static + dynamic), one can write a force balance to obtain the following relation for the volumetric flow rate:

Q = KR Aa √[ 2g (VF/AF) (ρF/ρ − 1) ]    (3.24)

where KR is the correction factor, which is one under ideal conditions, Aa is the free area left by the float, AF the area of the float, VF the volume of the float and ρF the density of the float. For compressible fluids, there is a compressibility factor that must be included.

Applicability and Selection Criteria

Differential pressure meters are the most widely used flowmeters. To maintain accuracy, they require 5–40 straight pipe diameters upstream of the meter. Orifices are the most widely used differential pressure meters, and their advantages include durability, low cost for large pipe diameters and the ability to be used at a variety of temperatures. However, these flowmeters have high sensitivity to changes in density and viscosity. In addition, they have a tendency to erode. Venturi meters are used in high flow rate applications and are more


expensive. However, in such applications, they may result in significant permanent pressure losses and diminished accuracy. Flow nozzles are used when capacity is the important issue. They work well in high-velocity, high-temperature and high-turbulence situations. They are also capable of handling larger solids than orifice meters. Although flow nozzles are less expensive than venturis, they result in higher permanent pressure losses and are less accurate. Pitot tubes are used in streams that carry clean liquids or gases. These tubes are sensitive to changes in the velocity profile of the fluid; therefore, several ports are placed across the diameter of the pipe and an average value for the differential pressure is obtained. The small openings of these meters tend to become plugged if particles are present in the fluid. In addition, they produce a low differential pressure, which makes repeating measurements a problem. Positive displacement (PD) meters are most often used in hydrocarbon batching, blending and custody transfer operations. They perform well with viscous fluids but are not recommended for gases, because of a lack of sealing, or for dirty fluids, due to the presence of gears and vanes. Advantages of PD meters include their accuracy with viscous fluids and their high flow range. Disadvantages include the lack of sealing around the gears, the loss of pressure due to the removal of energy by the meter, and the coating of gears by extremely viscous fluids, which leads to a volume reading higher than the actual flow. Coriolis meters are the most accurate instruments (Pomroy, 1996) and, as of 1999, were mainly used in critical control loops and in the management of high-value fluids. Benefits of these meters include their high degree of accuracy, large flow range and tight control ability. In addition, they do not need recalibration, do not obstruct the fluid flow and can be used with different types of fluids.
Thermal mass-flow meters are effective with clean, low-density gases. They are used to measure gas flow in various applications such as ducts, pilot plants, purge streams and leak testing and are most commonly used in dopant gas flows for semiconductor production. Benefits include their ability to accurately measure gas flows. Disadvantages include the coating of probes by dirty fluids and the need for fluid uniformity. Variable area meters, such as rotameters and purge meters, can be used in both gas and liquid applications. Because their operation is governed by equilibrium of forces, their benefits include a small pressure loss, low cost and ease of installation. One disadvantage is the wide fluctuation in accuracy. Because there are no mechanical parts that impede fluid flow, magmeters are excellent for streams containing particles or corrosive chemicals. Other advantages include their ability to provide accurate voltage signals in both laminar and turbulent flow and their low sensitivity to changes in density, pressure and viscosity. A disadvantage of these meters is the potential of the electrodes to become coated, thereby producing errors in the voltage reading. However, this problem can be virtually eliminated by maintaining a fluid velocity between 8 and 15 ft/sec.


Table 3.1 summarizes some of the features of flowmeters (Dolenc, 1996; Furness, 1998; McMillan et al., 1998; Pomroy, 1996; Smith, 1978; Upp, 1993; Zientara, 1972). Accurate measurement of mass flows requires the measurement of density, or of the pressure and temperature from which the density or density changes can be inferred. The most direct solution is to install a density transmitter. However, this is expensive. For liquids, a density/temperature correlation can be made, so that if the concentration is fairly constant, corrections can be made on the basis of temperature variations. In the case of gases, knowledge of the absolute pressure is also needed for such calculations. Other criteria, related to the range of measurement, linearity and accuracy, will be discussed in Chapter 4.

LEVEL MEASUREMENT

Level measurement is essential, even when one assumes steady state. It provides one more element of flow rate data redundancy, because the difference in levels can be representative of a steady flow leaving or feeding a tank. Level meters are divided into five basic categories based on the method used:

• differential pressure meters
• displacer-based meters
• signal-reflection-type meters
• radio-frequency admittance meters
• radiation-based meters

Level instruments based on floats and switches are omitted. The former are being abandoned, and the latter do not offer continuous monitoring. Nevertheless, meters based on switches are still used for safety purposes.

Differential Pressure Meters

This is the most frequently used device. It consists of measuring the difference in pressure between two points in a vessel (Figure 3.16). The difference in pressure is given by:

ΔP = ρl g h + ρv g (H − h)    (3.25)

where h is the difference in level between the liquid-vapor interface and the lower arm of the meter and H is the total height difference between the arms. Inverting the equation, one gets:

h = (ΔP − ρv g H) / [(ρl − ρv) g] ≈ ΔP / (ρl g)    (3.26)
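Equation (3.26) and its simplification can be sketched as follows; neglecting the vapor terms, a water column producing 19.62 kPa corresponds to 2 m (values are illustrative):

```python
G = 9.81  # gravitational acceleration, m/s^2

def level_from_dp(delta_p, rho_l, rho_v=0.0, height=0.0):
    """Eq. (3.26): h = (dP - rho_v g H) / ((rho_l - rho_v) g).

    With rho_v = 0 this reduces to the approximation h = dP / (rho_l g).
    """
    return (delta_p - rho_v * G * height) / ((rho_l - rho_v) * G)

print(level_from_dp(19620.0, 1000.0))  # 2.0 m of water
```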

TABLE 3.1. Applicability of the Main Types of Flowmeters.

Technology                | Clean/Dirty Liquid | Clean/Dirty Gas | Corrosive Liquid | Slurries Part./Fibrous | Steam | Max Press. (psig) | Temp. Range (°F)
Differential pressure
  Nozzle                  | (+++)/(+)          | (+++)/(++)      | (++)             | (−)/(−)                | (+++) | 4000              | ≤1000
  Orifice                 | (+++)/(+)          | (+++)/(++)      | (+)              | (−)/(−)                | (+++) | 4000              | ≤1000
  Venturi                 | (+++)/(++)         | (+++)/(++)      | (++)             | (−)/(−)                | (++)  | 4000              | ≤1000
Velocity
  Electromagnetic         | (+++)/(+++)        | (−)/(−)         | (+++)            | (+++)/(+++)            | (−)   | 5000              | −40–350
  Turbine                 | (+++)/(−)          | (++)/(−)        | (−)              | (−)/(−)                | (−)   | 6000              | −450–600
  Vortex                  | (+++)/(++)         | (+++)/(++)      | (+)              | (+)/(−)                | (+++) | 1500              | −330–800
  Ultrasonic Doppler      | (−)/(+++)          | (−)/(++)        | (+)              | (++)/(+)               | (−)   | 1500              | −50–900
  Ultrasonic transit time | (+++)/(−)          | (+++)/(−)       | (+)              | (−)/(−)                | (+)   | 1500              | −50–900
Volumetric
  Positive displacement   | (+++)/(+)          | (−)/(−)         | (++)             | (+)/(−)                | (−)   | 300–1500          | −30–600
Mass
  Coriolis                | (+++)/(+++)        | (+)/(+)         | (+++)            | (+)/(+)                | (+)   | 1400–5700         | −400–800
  Thermal mass            | (−)/(−)            | (+++)/(+++)     | (−)              | (−)/(−)                | (−)   | 4500              | 32–150
  Temp. rise              | (+++)/(+++)        | (+)/(+)         | (++)             | (++)/(++)              | (+)   | 4500              | 32–150
Variable area             | (+++)/(−)          | (+++)/(−)       | (++)             | (−)/(−)                | (+)   | 200–6000          | ≤1000

Note: (+++) = acceptable; (++) = normally acceptable; (+) = sometimes acceptable; (−) = not acceptable.


FIGURE 3.16. Differential pressure level measurement.

In this formula, one can neglect the terms containing the vapor density without incurring any gross error.

Displacement-Based Meters

This type of level meter operates on the basis of buoyancy. The displacer (Figure 3.17) is immersed in a side arm chamber, sometimes inside the vessel, and is restrained by an elastic device whose motion or distortion is directly proportional to the buoyant force. The reading is taken either directly, by measuring the vertical position, or indirectly, through torque devices.

Signal Reflection-Type Meters

These meters are based on the concept of sending a signal and measuring its reflection from the surface (Figure 3.18). Two types of signals can be used: ultrasonic and electromagnetic. Based on the speed of the signal, the time of


FIGURE 3.17. Displacer.


FIGURE 3.18. Ultrasonic level meters.

response is directly proportional to the distance from the source/receiver to the liquid-gas interface.

Radio Frequency Admittance Meters

In radio frequency (RF) meters, the change of impedance between two capacitor electrodes is used to measure the level (Figure 3.19). The first electrode is typically the wall of the vessel, whereas the second is a rod immersed in the fluid. Once a voltage is imposed between the two electrodes, one can measure the current and the capacitance. The change of level in the vessel results in a change of impedance. In RF devices, capacitance increases as the level in the vessel increases, because the air, which is a low dielectric, is replaced by a


FIGURE 3.19. Radio frequency level meters.


FIGURE 3.20. Nuclear radiation-based level meters.

higher dielectric liquid. A constant voltage is applied to a rod or cable (sensing element) in the process. The radio frequency current that results is monitored to infer the level of the process material. Admittance systems are capacitance devices with the ability to ignore coatings on the sensor.

Radiation-Based Meters

These meters consist of a source of gamma rays located on one side of the vessel and a detector installed on the opposite side (Figure 3.20). Because the liquid has lower transmittance than the vapor, the detector is able to determine its level.

Strapping Tables

Many vessels are not cylindrical, and they have a varying cross-sectional area. Therefore, although the level can be measured by several of the above-described devices, the actual volume of material inside the vessel is not a linear function of it. Instead of making use of such a nonlinear analytical relation, industry actually makes use of a table. One can, of course, create a function by simply fitting the table values, and in many cases this has been done. Strapping tables are a vivid example of old practices to which industry holds on unnecessarily.

Applicability and Selection Criteria

A primary benefit of differential pressure level meters is that they can be readily installed on a vessel and can be easily removed using block valves. However, these meters are subject to errors because of density variations of the liquid, which are typically caused by product composition or product temperature changes. If accurate measurements are to be made, these variations must be compensated for. Therefore, their main application is for clean liquids (Connell, 1996).
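The strapping-table practice described above reduces to a table lookup with interpolation between entries. A minimal sketch with a hypothetical table (real tables come from the tank fabricator's calibration):

```python
import bisect

# Hypothetical strapping table: measured level (m) -> contained volume (m^3).
LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0]
VOLUMES = [0.0, 1.2, 3.0, 5.5, 8.4]

def volume_from_level(level):
    """Linear interpolation in a strapping table."""
    if not LEVELS[0] <= level <= LEVELS[-1]:
        raise ValueError("level outside table range")
    i = bisect.bisect_right(LEVELS, level)
    if i == len(LEVELS):       # level exactly at the top entry
        return VOLUMES[-1]
    lo, hi = i - 1, i
    frac = (level - LEVELS[lo]) / (LEVELS[hi] - LEVELS[lo])
    return VOLUMES[lo] + frac * (VOLUMES[hi] - VOLUMES[lo])

print(volume_from_level(0.75))  # 2.1 m^3
```

Fitting a polynomial to the table values, as the text mentions, simply replaces this piecewise-linear lookup with a smooth function.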


Displacement type level meters work well with clean liquids, and they are accurate and applicable over a wide range of fluid densities. However, they are also affected by changes in product density. Because the displacer is in the process fluid, solids can deposit on it and cause a calibration shift. Displacers should only be used for relatively nonviscous, clean fluids and perform better when used for short spans. One disadvantage shared by these devices and the differential pressure level devices is that two penetrations (nozzles) into the vessel are required (Sholette, 1996).

Ultrasonic level meters do not have moving parts, and they do not come in contact with the process material. In addition, calibration is relatively easy. However, dust, heavy vapors, surface turbulence, foam and even ambient noise can affect their performance. Moreover, because they are based on the velocity of sound in the vapor phase, temperature and pressure can also affect their performance.

Radio frequency (RF) technology produces highly accurate measurements in storage tanks and some process vessels. Its primary disadvantage is cost, which can be justified for tank gauging and inventory control. In addition, RF admittance is fairly application specific. Indeed, temperature, pressure, chemical compatibility and electrical properties affect measurement accuracy and need to be carefully considered. Capacitance measurements are also subject to large errors caused by conductive coatings. Admittance technology avoids this problem by correcting the output for coatings: a second sensor is added to monitor the dielectric constant, and the calibration is then compensated based on this information.

Because nuclear systems require a radioactive source mounted on one side of the vessel and a radiation detector or receiver on the other side, no tank penetration is needed to make a successful level measurement.
Typical process concerns, such as very high temperatures or pressures, or corrosiveness of the liquid to be measured, are irrelevant. High cost, estimated at two to four times the cost of other technologies, prevents this technology from becoming widespread. In addition, licenses, approvals and periodic inspections are required. Accuracy, linearity and speed of response are also of concern. Finally, because they rely on signal attenuation, these devices require the density to be relatively constant (McMillan et al., 1998). Table 3.2 summarizes the application suitability of level instrumentation.

TEMPERATURE MEASUREMENT

Temperature-measuring devices are divided into three groups:

• thermal-expansion thermometers
• electrical devices
• radiation-based devices


TABLE 3.2. Applicability of the Main Types of Level Measuring Devices.

Technology                | T > 300°F | P > 600 psi | Liquids | Vapors | Changing Density
Differential pressure     | Y         | Y           | Y       | Y      | N
Displacer                 | Y         | Y           | Y       | Y      | N
RF admittance/capacitance | Y         | Y           | Y       | Y      | Y
Ultrasonic                | N         | N           | Y       | N      | Y
Electromagnetic           | N         | N           | Y       | Y      | Y

Source: Parker (1998), Carsella (1998).

Thermal Expansion Thermometers

These temperature-measuring devices are based on the thermal expansion of either fluids or solids. In the case of fluids, the sensitive element (a bulb) contains a liquid or gas that changes its volume or pressure with temperature. Those systems based on volume changes are completely filled with liquid, whereas those based on pressure changes are filled with gas. Liquid-in-glass thermometers are not used in automatic control systems, because readings are typically manual. Therefore, the so-called filled-system thermometers have been used more extensively (Figure 3.21). Changes in gas pressure are carried through a capillary, sometimes over large distances (up to 250 ft), to a bourdon tube. The motion of the bourdon is used directly or amplified by a mechanical linkage to drive a pen in a recorder.

FIGURE 3.21. Filled system. Reprinted from Perry, 1984, reproduced with permission of The McGraw-Hill Companies.


FIGURE 3.22. Bimetallic thermometers. Reprinted from Perry, 1984, reproduced with permission of The McGraw-Hill Companies.

Another type of thermal expansion thermometer is the bimetallic thermometer (Figure 3.22), made of strips of two metals that have different expansion rates. Because these strips are bonded at one end, the free end deflects. Typically, they are wound in a spiral or helix arrangement in which the loose end rotates with temperature changes.

Electrical Devices

Thermocouples are made of two different metallic wires that are connected. When the two junctions are at different temperatures, an electric current flows (Figure 3.23).

FIGURE 3.23. Thermocouple.


However, when the circuit is open, no current flows and an electromotive force (E) is measured. The thermal electromotive force E is a measure of the difference in temperature through the following relation:

E ≈ a(T1 − T2) + b(T1 − T2)²    (3.27)

where a and b are constants. In control systems, the reference junction (cold junction) is usually located at the emf measuring device. This junction may be held at a constant temperature, such as in an ice bath or a thermostat oven, or it may be at ambient temperature but electrically compensated so that it appears to be held at a constant temperature. A thermocouple is an inherently inaccurate device (Wightman, 1972), because it is not normally possible to maintain good quality control on the wire used. Resistance thermometers are based on the change in electrical resistance when a metal undergoes a change in temperature. Industrial resistance thermometers are usually constructed of platinum, copper or nickel, but semiconducting materials such as thermistors are in increasing use. Different forms of bridge circuits are used in industrial resistance thermometry, the most common being the Wheatstone bridge. The temperature coefficient of resistance of these metals is positive and essentially constant over large portions of their range. Typical resistance versus temperature curves for platinum, copper and nickel are given in Figure 3.24. Thermistors operate on the same principle, but there is no metal involved. They are made of semiconductors such as mixtures of oxides of nickel,

FIGURE 3.24. Typical relative resistance (Rt/R0) curves for platinum, copper and nickel (−200 to 800°C). Adapted from Perry, 1984.
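The quadratic relation (3.27) between emf and temperature difference can be inverted by taking the physical root of the quadratic. A sketch with hypothetical coefficients a and b, not those of any standard thermocouple type:

```python
import math

def delta_t_from_emf(emf, a, b):
    """Invert Eq. (3.27), E = a*dT + b*dT**2, taking the positive root for dT."""
    return (-a + math.sqrt(a * a + 4.0 * b * emf)) / (2.0 * b)

# Hypothetical coefficients (V/K and V/K^2); dT = 100 K gives E = 4.02 mV
a, b = 4.0e-5, 2.0e-9
print(delta_t_from_emf(4.02e-3, a, b))  # recovers ~100 K
```

In practice, published reference tables or polynomials are used for each thermocouple type rather than the two-term approximation of Equation (3.27).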



manganese, copper, iron, magnesium, titanium and other metals sintered at high temperatures. Finally, integrated circuit sensors are electronic devices that provide a linear output of voltage as a function of temperature in a certain range.

Radiation-Based Devices

Pyrometers measure the temperature of an object by determining the quantity and characteristics of the energy that it radiates. There are two groups: (1) optical pyrometers, in which the brightness of a hot object is compared with that of a source of standard brightness, and (2) radiation pyrometers, which either measure the rate of energy emission per unit area over a relatively broad range of wavelengths or compare the radiation at two wavelengths.

Applicability and Selection Criteria

Many thermal expansion thermometers are relatively inexpensive, but they are not very amenable to modern computer-based data acquisition systems. Although possible, conversion from mechanically based indicators to electronic signals requires an investment that makes thermocouples and thermoresistance thermometers more competitive for this purpose. Among the latter, the use of thermocouples is less expensive and more popular. Table 3.3 illustrates some of the applicability criteria and ranges of use.

TABLE 3.3. Applicability of the Main Types of Temperature Measuring Devices.

Technology          | Temp. Range (°F) | Advantages                    | Disadvantages
Thermal expansion
  Filled thermal    | −300–1000        | Simple; low cost              | Factory repair only
  Bimetallic        | −80–800          | Simple                        | Changing calibration with handling
Electrical
  Thermocouple      | −440–5000        | Small size; wide range        | Cold junction can affect calibration; drift
  Thermoresistance  | −430–1800        | Small size; accurate          | Self-heating; expensive
  Thermistor        | −150–600         | Narrow span; no cold junction | Nonlinear response; fragile
Optical pyrometer   | 0–7000           | No contact; wide range        | Nonlinear scale

Source: Johnson (1998), Zientara (1972).


PRESSURE MEASUREMENT

Process pressure measuring devices are divided into three groups:

• liquid column devices
• elastic element devices
• electrical sensing devices

Liquid Column Devices

In liquid column devices, pressure is measured by determining the height of a liquid column. If the density of the liquid is known, this height is a measure of the pressure. Most forms of liquid column pressure measuring devices are commonly called manometers. When the liquid columns are contained in glass or other transparent tubes, the liquid height can be observed visually. Depending on the pressure range, water and mercury are the liquids most frequently used. Because the density of the liquid varies with temperature, this fact must be taken into account for accurate pressure measurements.

Elastic Element Devices

Elastic element pressure measuring devices rely on the deformation of an elastic material (usually metallic). The magnitude of the deformation is proportional to the applied pressure. There are three types: bourdon tube, bellows and diaphragm.

• Bourdon-tube elements: Probably the most frequently used process pressure-indicating device is the C-spring bourdon-tube pressure gauge (Figure 3.25). As pressure increases, the bourdon tube end moves, resulting in different positions of the pointer.
• Bellows elements: These are axially elastic cylinders with folds. The bellows can be used unopposed, or they can be restrained by an opposing spring (Figure 3.26). The pressure to be measured may be applied either to the inside or to the space outside the bellows, whereas the other side is exposed to atmospheric pressure.
• Diaphragm elements: There are two types of diaphragm elements: unopposed and opposed by a spring. The first type (Figure 3.27a) consists of one or more capsules, each composed of two flat or corrugated diaphragms bonded together by soldering, brazing or welding. The second type of diaphragm is used for containing the pressure and exerting a force on the opposing elastic element. The diaphragm is a flexible or slack membrane made of rubber, leather, impregnated fabric or plastic. The movement of the diaphragm is opposed by a spring, which determines the deflection for a given pressure


FIGURE 3.25. C-type bourdon manometer. Reprinted from Perry, 1984, reproduced with permission of The McGraw-Hill Companies.

(Figure 3.27b). This type of diaphragm is used for the measurement of extremely low pressure, vacuum or differential pressure.

Electrical Sensing Devices

These devices take advantage of the fact that the electrical resistance of a conducting solid changes with its diameter and length. As pressure is exerted on the wires, these dimensional changes result in an increase in the electrical resistance of the conductor. These devices are called strain gauges. Bonded strain gauges are those that are bonded directly to the surface of the elastic element whose strain is measured (Figure 3.28).

FIGURE 3.26. Bellows elements. (a) Unopposed bellows. (b) Spring-loaded bellows. Reprinted from Perry, 1984, reproduced with permission of The McGraw-Hill Companies.


FIGURE 3.27. Diaphragm pressure devices. (a) Unopposed. (b) Opposed. Reprinted from Perry, 1984, reproduced with permission of The McGraw-Hill Companies.

Applicability and Selection Criteria

Although elastic elements are used in practice to measure pressure locally, direct pressure readings in electric form are only possible if there is a way to transform the pressure reading into a voltage signal. Strain gauge pressure transducers provide a number of advantages. They cover the range from 0.15 to 660 bar and temperatures from −160°C to 350°C and are capable of providing 0–15 mV signals by proper use of electrical bridges. Piezoelectric devices operate in a more or less similar fashion. Cost is of course an issue, irrespective of the transducer selected. Problems such as overpressure, vibration and thermal cycling affect their performance. A good guideline for selection is offered by Omega (1998).

DENSITY MEASUREMENT

Liquid density is typically measured in the laboratory by means of floating-type densitometers and weighing techniques. More sophisticated measurements can be achieved on line by the use of radiation-based gravitometers or continuous weighing U-tubes based on force balance restoration.

FIGURE 3.28. Bonded strain gauge. Reprinted from Perry, 1984, reproduced with permission of The McGraw-Hill Companies.


FIGURE 3.29. Continuous weight densitometers. (a) Direct weight. (b) Vibrating tube. Reprinted with permission from Wightman, 1972. Copyright © CRC Press, Boca Raton, Florida.

Radiation-based densitometers are based on the same principle as the radiation-based level measuring devices. A gamma ray is passed through the pipe, which is filled with the substance whose density is measured. The absorption rate is measured and related to the density. Continuous weighing U-tubes exist in two types: direct weight measurement and vibrating frequency measurement; both types are depicted in Figure 3.29. The direct weight measurement is performed by allowing the tube to pivot horizontally on a cross-leaf suspension, as in the weights of a conventional beam balance. Any deviation in weight is detected optically and amplified. In the vibrating tube densitometer, a drive coil and a pickup coil are suspended between the two tubes, which in turn are mounted on a pair of antivibration mounts. Once the drive coil has exerted a force, the frequency of vibration is measured. This frequency is given by the following relation:

f ≈ (1/2) √(S/I)    (3.28)

where S is the stiffness and I is the inertia. The stiffness is a constant related to the Young's modulus of the sampling tube. The inertia is a function of the volume and the density of the fluid.

Applicability and Selection Criteria

Continuous weighing densitometers and radiation-based densitometers are expensive. Because density measurements are only critical to determine flow


TABLE 3.4. Applicability of the Main Types of On-Line Analyzers.

Technology                         Sample                           Range
Gas chromatography                 Gas, vapor or volatile liquid
  Thermal conductivity detector                                     0.1–100%
  Ionization detector                                               ppm to 100%
Infrared analyzer                  Usually gas or vapor             0.1–100%
Thermal conductivity               Usually used for hydrogen        0.1–100%
Ultraviolet absorption             Liquids (especially aromatics)   0.01–100%
Colorimeters                       Usually liquids                  0.1–100%

Source: Foster (1975).

rate, unless density needs to be monitored for quality control purposes, industry relies on temperature and pressure measurements to compensate flow rate-measuring devices.

ON-LINE PROCESS ANALYZERS

Typical on-line analyzers include pH meters, spectrophotometers, oxygen analyzers, mass spectrometers, moisture analyzers, and liquid and gas chromatographs. The application of these analyzers to processes is discussed by Utterback (1976). Usually, these instruments can be calibrated to measure the concentration of one component directly. The advantages of using process analyzers are that information about off-spec products is available sooner, environmental monitoring is done in situ, and, of course, laboratory costs are reduced (Foster, 1975). However, they present maintenance challenges that have prevented them from becoming more popular. Table 3.4 summarizes their applications.

TRANSMISSION AND TRANSFORMATION OF SIGNALS

Pneumatic transmission used to be the main method of conveying measurement signals to control rooms. In recent decades, industry has undergone a transformation to electronic transmission equipment. Devices that perform this operation are called transducers. For example, differential pressure transducers transform the pressure difference into an output voltage. Ideal transducers are linear, but in practice they are affected by zero offsets, hysteresis, etc.


FIGURE 3.30. Computation of mass flow for gases. (Block diagram: the differential pressure signal DP is multiplied by the quotient P/T formed from the pressure and temperature signals; the product DP·(P/T) is scaled by a constant K and square-rooted to yield the true mass flow.)
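The scheme of Figure 3.30 can be sketched as follows (a rough illustration; the constant K and all signal values are hypothetical, and real implementations include many additional correction factors):

```python
import math

# Sketch of the Figure 3.30 scheme for a gas, where density is roughly
# proportional to P/T: mass flow W = K * sqrt(DP * P / T). The constant K
# and the signal values below are hypothetical illustration values.
def mass_flow(dp, p_abs, t_abs, k=1.0):
    return k * math.sqrt(dp * p_abs / t_abs)

w_base = mass_flow(dp=25.0, p_abs=500.0, t_abs=300.0, k=2.0)
w_high = mass_flow(dp=50.0, p_abs=500.0, t_abs=300.0, k=2.0)
# Doubling the differential pressure raises the mass flow by sqrt(2):
print(round(w_high / w_base, 4))  # 1.4142
```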

In addition, many signals have to be electronically manipulated. For example, the square root of the signal from differential pressure flowmeters and pitot tubes must be taken to obtain the volumetric flow rate. Thus, one should expect errors to accumulate through all these transformations. This process is illustrated in Figure 3.30. The assumption is that the density is given by a gas law of choice (ρ = PM/ZRT) and is therefore roughly proportional to the quotient P/T. In the case of liquids, there is a direct relationship between density and inverse temperature, almost linear in the range of interest. Thus, the same type of scheme as in Figure 3.30 can be used to obtain the final signal. A detailed formula for the mass flow rate containing several correction factors was developed several years ago by L. Spink of Foxboro (Connel, 1996).

Smart Sensors and Digital Protocols

In the last decade, digital communications protocols have been introduced. These protocols allow transmitters and other equipment to communicate with control systems, valves, etc. Communication in these systems is two-way, from the sensor to the control room and vice versa. The name smart sensors has been coined for instruments that incorporate these new technologies. Examples of these protocols are HART (Highway Addressable Remote Transducer) and Fieldbus. HART is an open (nonproprietary) standard protocol. Fieldbus is an all-digital, serial, two-way communications system, serving as a Local Area Network (LAN) for instruments used in process control and manufacturing automation applications. The demand for fieldbus technology is being driven by the desire of manufacturing end users to move away from older, centralized plant control strategies to distributed control in the field. Fieldbus is neither owned by any individual company nor regulated by a


single nation or standards body. The technology is controlled by the Fieldbus Foundation, a not-for-profit organization consisting of more than 100 of the world's leading controls and instrumentation suppliers and end users. In addition, with smart sensors, the number of trips to the field, of which only 36% result in corrective action (Masterson, 1999), can be drastically reduced, and calibration can be performed from the shop. Finally, these sensors prompted changes in maintenance practice, as "just in time" (predictive) maintenance can be performed. Masterson (1999) provided several examples of companies that have achieved annual savings of anywhere from 50,000 to 250,000 U.S. dollars. The use of these technologies is eliminating the need for the digital-to-analog conversion of signals, and vice versa, performed nowadays. An increase in the reliability of data obtained from flowmeters based on digital outputs of pulse counts and frequency is also observed (McMillan et al., 1998).

NOMENCLATURE

A: cross-sectional area
B: magnetic field
C: discharge coefficient
Cp: heat capacity at constant pressure
Cv: heat capacity at constant volume
d: distance
D: diameter
Ev: electrical power
E: electromotive force
f: frequency of shedding in vortex meters (Hz)
F: force
H: height
h: height difference of level
I: inertia
k: ratio of gas heat capacities (Cp/Cv)
ki: constants
K: mean meter coefficient
KR: correction factor
L: length
P: pressure
q: heat
Q: volumetric flow rate
Rv: resistance
S: stiffness
s: velocity of sound in fluid
St: Strouhal number
t: time
T: temperature
v: velocity
V: induced voltage
W: mass flow rate
Y1: adiabatic gas expansion factor

Greek Letters

ρ: density
ΔPitot: pitot compressibility factor
Ω: angular velocity
ω: width of bluff body

REFERENCES

Carsella B. Popular Level-Gauging Methods. Chemical Processing, Dec. (1998).
Connel B. Process Instrumentation Applications Manual. McGraw Hill (1996).
Dolenc J. W. Choose the Right Flow Meter. Chemical Engineering Progress, pp. 22, Jan. (1996).
Foster R. A. Guidelines for Selecting Online Process Analyzers. Chemical Engineering, March (1975).
Furness R. A. Flowmeter Classifications and Applications. Chemical Processing, Fluid Flow Annual, pp. 72 (1998).
Gibson W. D. Taking the Measure of Mass Flow. Chemical Engineering, Sept. (1998).
Johnson F. L. Temperature Measurement and Control Fundamentals. Chemical Processing, June (1998).
Liptak B., editor. Instrumentation Handbook. Third Edition. Chilton (1997).
Masterson J. S. Reduce Maintenance Costs with Smart Field Devices. Hydrocarbon Processing, January (1999).
May D. L. Accurate Flow Measurements with Turbine Meters. Chemical Engineering, March (1971).
McMillan G. K., G. E. Mertz and V. L. Trevathan. Trouble-Free Instrumentation. Chemical Engineering, Nov. (1998).
Miller R. W. Flow Measurement Engineering Handbook. McGraw Hill (1996).
Omega. Force Related Measurements. Transactions in Measurement and Control. Vol. 3, Putnam Publishing and Omega Press LLC (1998).
Parker S. Selecting a Level Device Based on Application Needs. Chemical Processing, Fluid Flow Annual, pp. 75 (1998).
Perry R. H. and D. Green. Perry's Chemical Engineers' Handbook, 6th Edition. McGraw-Hill, Inc. New York (1984).
Pomroy J. Selecting the Right Flowmeter. Chemical Engineering, May (1996).


Sholette W. Pick the Proper Level Measurement Technology. Chemical Engineering Progress, October (1996).
Smith C. L. Liquid Measurement Technology. Chemical Engineering, April (1978).
Upp E. L. Fluid Flow Measurement. Gulf Publishing Company (1993).
Utterback V. C. Online Process Analyzers. Chemical Engineering, June (1976).
Webster J. G., editor. Measurement, Instrumentation and Sensors Handbook. CRC Press, LLC (1999).
Wightman E. J. Instrumentation in Process Control. CRC Press, Butterworth & Co, London (1972).
Zientara D. E. Measuring Process Variables. Chemical Engineering, Sept. (1972).

CHAPTER 4

Errors in Measurement

INTRODUCTION

This chapter is devoted to analyzing the basic aspects of errors in instrumentation measurements. A few general definitions related to instrument properties are offered first. In the second part, elements contributing to measurement quality, such as precision, systematic errors, hysteresis, dead band, sensitivity and speed of response, are covered. The terminology and concepts used in calibration curves follow. Finally, in the last part, a brief review of the accuracy of different instruments is offered. A more complete coverage of the material discussed in this chapter can be found in the handbooks on instrumentation edited by Liptak (1997) and Webster (1999).

INSTRUMENT PROPERTIES

Range, Span and Rangeability

The interval within which a certain variable is measured or transmitted is called the range. It is expressed by stating the lower and upper range values. The span is simply the difference between the upper and lower range values. Finally, the rangeability is defined as the ratio of the highest to the lowest value that an instrument can measure with the same accuracy.

Influence Variables

Many variables influence the performance of an instrument. Ambient temperature and humidity, for example, affect the readings and introduce bias and/or


alter the precision. These are called influence quantities (Miller, 1996), and their effects are not linearly additive.

Readability

This is defined as the smallest scale increment to which the reading may be determined, expressed as a percent of the full scale (Perry, 1984). This property is mostly related to visual reading, but it also appears in digital or analog reading devices.

MEASUREMENT QUALITY

Measurements are subject to errors, no matter how much the conditions and the apparatus used to make them are improved. As was briefly discussed in Chapter 1, precision and accuracy are terms that are often confused. In this section, the elements contributing to their definition are discussed.

Precision


Precision of an instrument is defined as the closeness of agreement among a number of consecutive measurements of a variable that maintains its value static. It bears no relation to the true value of the variable being measured. Figure 4.1 illustrates the concept with a set of measurements of the temperature of boiling water. Figure 4.1a corresponds to one thermometer, and Figure 4.1b corresponds to another that has a similar average but larger deviations from the average. Statistical theory is used to define precision. If the distribution of errors is assumed to be Gaussian, then the variance of this distribution is estimated by


FIGURE 4.1. Precision.


using the standard deviation of a sample:

s = √( Σi=1..n (xi − x̄)² / n )    (4.1)

where xi are the sample values, x̄ the average and s the standard deviation. A statistically unbiased estimate of the variance of a normal distribution σ² is the modified variance ŝ², where

ŝ = s √( n/(n − 1) )    (4.2)

As n measurements are taken, statistical estimation theory states that the population mean µ has p% probability of lying in the interval x̄ ± z_p ŝ/√n. The parameter z_p is called the confidence coefficient and is obtained by determining the value of x such that the area under the curve in the interval [−x, x] is equal to p. In turn, p is called the confidence level, or confidence limit. For example, for p = 0.95 (95%), the confidence coefficient is z_p = 1.96, as illustrated in Figure 4.2. Because in practice the number of measurements is finite, the use of the t-student distribution is more appropriate. The confidence coefficient of this distribution (t_p) is the p% confidence coefficient for (n − 1) degrees of freedom. When n ≥ 30, the t-student and the normal distributions are practically the same. Precision (σ_P) is thus defined as half of the confidence

FIGURE 4.2. Normal distribution.


interval when one measurement is taken, that is:

σ_P = t_p ŝ    (4.3)
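As a numerical illustration of Equations (4.1)–(4.3), a short sketch in Python (the repeated measurements are made up; the normal-distribution coefficient z_p is used in place of t_p, which, as noted above, is adequate only for large n):

```python
import math
from statistics import NormalDist

# Numerical sketch of Equations (4.1)-(4.3) on made-up repeated measurements.
samples = [100.2, 99.8, 100.5, 99.9, 100.1, 100.3, 99.7, 100.0, 100.2, 99.8]
n = len(samples)
mean = sum(samples) / n

s = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)  # Eq. (4.1)
s_hat = s * math.sqrt(n / (n - 1))                        # Eq. (4.2)

# 95% confidence coefficient from the normal distribution (z_p = 1.96);
# for a small sample like this one, the t-student coefficient t_p would
# be somewhat larger.
z_p = NormalDist().inv_cdf(0.975)
precision = z_p * s_hat      # Eq. (4.3), with z_p in place of t_p
print(round(z_p, 2), round(precision, 2))
```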

Precision is more formally called repeatability or reproducibility by the International Society for Measurement and Control (ISA) and other literature sources. Repeatability is defined by ISA standards as the closeness of agreement among a number of consecutive measurements of the output for the same value of the input under the same operating conditions (ambient temperature, ambient pressure, voltage, etc.), approaching from the same direction. Reproducibility is defined the same way, but approaching from both directions. Thus, repeatability does not include hysteresis, dead band and drift effects (discussed below), whereas reproducibility does. Precision is thus a loose term coined by practitioners that does not have a standard definition. Therefore, its usage is also ambiguous, because it can refer to either repeatability or reproducibility. From now on, unless indicated to the contrary, it will stand for reproducibility.

Origin of Fluctuations

Fluctuations in measurements have varied sources. For example, pressure measurements are affected by small fluctuations originating in pumps and/or vibrations of compressors and other factors. Process temperatures are affected by ambient temperature fluctuations, etc. Turbulent flow is by definition full of fluctuations, and even laminar flows are subject to such variations, because flow is driven by pressure differences and depends on density. Thus, there are two kinds of fluctuations: (1) those inherent to the process, and hence to the variable measured, and (2) external disturbances on the measurement process.

Assumption of Normal Distribution

Random errors are assumed to be normally distributed. This assumption is based on the theory that errors are the product of innumerable sources, and, consequently, the central limit theorem of statistical theory applies. This theorem states that a sum of a large number of disturbances, each having its own distribution, tends to give a disturbance with a normal distribution.
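This effect is easy to demonstrate numerically. The sketch below sums many uniform disturbances and checks that the sums behave approximately like a normal distribution (about 68% of them fall within one standard deviation of the mean); the number of sources and samples are arbitrary choices:

```python
import random
from statistics import fmean, pstdev

# Illustration of the central limit theorem: each simulated "error" is the
# sum of 50 small disturbances, each uniform on [-1, 1]. Although each
# disturbance is flat-distributed, their sums cluster in a bell shape.
random.seed(0)

def error_sample(n_sources=50):
    return sum(random.uniform(-1.0, 1.0) for _ in range(n_sources))

errors = [error_sample() for _ in range(20000)]
mu, sd = fmean(errors), pstdev(errors)

# For a normal distribution, about 68% of the values fall within one
# standard deviation of the mean; the simulated fraction should be close.
within_1sd = sum(1 for e in errors if abs(e - mu) < sd) / len(errors)
print(round(within_1sd, 2))
```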
Measured values are related to the state variables that they measure through a series of signal transformations. These transformations involve, among others, the use of measurement devices, transducers, electronic amplifiers and final reading instruments. Thus, signals are distorted when these nonlinear transformations are performed. One example is the square rooting performed when differential pressure signals are used to measure flow rate. Even if the original distribution is gaussian, the resulting distribution is skewed. Bagajewicz (1996) proved that this effect can have an impact of sizable proportions in data reconciliation


FIGURE 4.3. Systematic error.

practice. In addition, noise is usually assumed to have a normal distribution. However, this is sometimes not true. For example, when noise has a narrow frequency bandwidth, a Rayleigh distribution is more appropriate (Brown and Glazier, 1964). Finally, it is a well-known fact that oscillating signals have probability distributions that are not normal (Himmelblau, 1970). The consequences of this nonnormal noise probability distribution are not explored here. Despite these counterexamples, the underlying assumption that all errors are Gaussian is common to all the theories of data reconciliation and gross error detection in process plants.

Systematic Error (Bias)

Consider a variable that maintains its value static, for example, the boiling temperature of a pure fluid. Assume that a number of consecutive measurements of this variable are made and their mean is calculated. The systematic error of the instrument is defined as the deviation of the mean value of the measurements from the true value of the variable. It is also called bias. Figure 4.3 illustrates the concept with a set of measurements of the temperature of boiling water. The measurements in this figure have a systematic error of about +2°C. When the true value is known, the amount of bias (δ) can be estimated by subtracting the true value (x̂) from the mean of all measurements (x̄):

δ = x̄ − x̂    (4.4)

Thus, when the measurements are higher than the true value, the bias is positive and the reading is said to be high. Conversely, a negative bias corresponds to a low reading. When true values are not known, other instruments are needed to determine a good estimate of these “true values.” This process is called calibration. For example, temperature meters can be calibrated by using systems with well-known values of temperature, such as the boiling or melting point of


pure substances. Flowmeters, in turn, need to be calibrated with the aid of other, more precise instruments.

Classification of Systematic Errors (Biases)

Biases can be classified in two major categories: constant and variable biases. Sources of constant biases are:
—use of incorrect assumptions in the calibration procedure (the assumption of ideal gas behavior in a gas flowmeter calibration is one example)
—corrections not performed in the calibration procedure
—unknown errors in reference standards
—incorrect installation of the instrument (for example, installing a flowmeter close to an upstream elbow)
—zero shift

Sources of variable biases are:
—drift in the voltage supply to the instrument
—span shift
—wear of the instrument (for example, the orifice edge of a flowmeter can be affected by particles in a dirty stream)

Drift is defined as a change in output over a specified period of time for a constant input. Shifts are, in turn, independent of time and correspond to errors in the measurement range. Span and zero shifts are illustrated in Figure 4.4.
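The difference between a zero shift (a constant bias) and a span shift (a bias that grows with the reading) can be sketched on a linear instrument characteristic (hypothetical numbers; the ideal reading is y = x over 0–100% of span):

```python
# Zero shift vs. span shift on a linear instrument characteristic
# (hypothetical numbers; the ideal reading is y = x over 0-100% of span).
def reading(x, zero_shift=0.0, span_error=0.0):
    # A zero shift offsets the whole curve; a span error grows with the input.
    return zero_shift + (1.0 + span_error) * x

print(round(reading(50.0, zero_shift=2.0), 2))   # 52.0: constant bias of 2
print(round(reading(50.0, span_error=0.04), 2))  # 52.0: bias of 2 at mid-span
print(round(reading(100.0, span_error=0.04), 2)) # 104.0: bias of 4 at full span
```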


FIGURE 4.4. Span and zero shift. Adapted from Liptak, 1997.


FIGURE 4.5. Gaussian signal.

Outliers

An outlier is defined as a measurement that can never be explained, calculated, estimated or anticipated. Human errors, electrical voltage surges and wiring problems are typical sources of outliers. In Figure 4.5, a signal without outliers is presented, whereas in Figure 4.6, a signal with small variance and outliers is presented. Figure 4.7 shows the effect of a high signal-wire resistance, which causes erratic behavior and introduces a large number of outliers. In turn, Figure 4.8 shows a well-wired instrument with an occasional outlier.

Sensitivity and Speed of Response

Sensitivity of an instrument is the smallest change in the measured variable to which the instrument reacts with a change in reading. Speed of response, or lag, is a dynamic characteristic that describes the reaction of an instrument to a measured variable that changes with time. Few sensitivity and lag problems are


FIGURE 4.6. Gaussian signal with outliers.


FIGURE 4.7. Effect of signal-wire resistance; badly wired instrument. Adapted from McMillan et al., 1998.

encountered because most measuring devices have acceptably good response. Excessively fine sensitivity causes cycling and results in poor control. For example, thermocouples are often put in thermowells to protect them from the process fluid. The presence of these thermowells creates a time lag in the response of the instrument.

Hysteresis and Dead Band

Hysteresis, illustrated in Figure 4.9, is a phenomenon in which the output corresponding to an increasing input varies through a certain path that differs from the path the output follows when the input is decreased from the maximum reached back to the original value. The discrepancy between the paths is typically larger if the interval of change is larger. Dead band, illustrated

250

FIGURE 4.8. Effect of signal-wire resistance; properly wired instrument. Adapted from McMillan et al., 1998.



FIGURE 4.9. (a) Hysteresis. (b) Dead band. (c) Hysteresis and dead band.

in the same figure (and in combination with hysteresis) is a range within which the input can be changed without observing a change in the output. This is typically observed when the input direction of change is suddenly reversed.
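A minimal numerical model of dead band (a backlash element, one common idealization; the band width and input path below are made up) reproduces the flat reversal segments of Figure 4.9b:

```python
# Minimal dead-band (backlash) model, as a sketch: the output does not move
# until the input has traversed the dead band, so reversing the input
# direction produces flat segments like those of Figure 4.9b.
def dead_band_response(inputs, band=1.0):
    out, outputs = 0.0, []
    half = band / 2.0
    for u in inputs:
        if u > out + half:      # input pulled ahead of the output
            out = u - half
        elif u < out - half:    # input reversed past the dead band
            out = u + half
        outputs.append(out)     # otherwise the output does not move
    return outputs

# Ramp up to 5, then back down: the output lags by half the band each way,
# and stays flat for one step right after the reversal.
path = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
print(dead_band_response(path))
# -> [0.0, 0.5, 1.5, 2.5, 3.5, 4.5, 4.5, 3.5, 2.5, 1.5, 0.5]
```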

Linearity

When the expected output is the same variable as the input, the expected characteristic is a straight 45° line. This is, for example, the case when the flow measured by a flowmeter is plotted against the real flow, after all compensations and conversions have been performed. This diagram is shown in Figure 4.10. The conformity between the actual measured value and the characteristic line is called independent linearity and is obtained by making the opposite-sign


FIGURE 4.10. Independent linearity. Adapted from Liptak, 1997.


maximum deviations equal. These deviations from linearity are called static errors.

Accuracy

The accuracy of a measurement is defined as the degree of conformity with a standard or true value. Therefore, both precision and systematic errors contribute to the accuracy of an instrument; a device is said to be accurate if it is unbiased and precise. Accuracy (σ_a) is often reported numerically as follows:

σ_a = δ + σ_P    (4.5)
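Combining the definitions above, a small sketch of the accuracy estimate (the data are made up, echoing the +2°C bias of Figure 4.3; the precision value is assumed known for the sensor):

```python
from statistics import fmean

# Sketch of the accuracy estimate sigma_a = delta + sigma_P (made-up data
# echoing the +2 degC bias of Figure 4.3; the precision value is assumed).
true_value = 100.0                   # e.g., boiling water at 1 atm, degC
samples = [102.1, 101.8, 102.3, 101.9, 102.0]

delta = fmean(samples) - true_value  # bias: positive, so the reading is high
sigma_p = 0.4                        # precision, taken as known for this sensor
sigma_a = delta + sigma_p            # accuracy
print(round(delta, 2), round(sigma_a, 2))
```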

In many other textbooks, accuracy is defined in the same way as precision, ignoring (or assuming the absence of) bias. Precision, bias and accuracy are also often reported in relative terms, that is, as a percentage of the value measured. Some typical expressions are given in terms of:
—the measured variable: ±1°C
—a percentage of span: ±0.5%
—a percentage of the upper range value: ±0.5%
—a percentage of actual output reading: ±0.3%

The International Society for Measurement and Control (ISA) and the American National Standards Institute (ANSI) define accuracy as the sum of linearity, hysteresis and repeatability, compared with two fixed points. As was pointed out by Lewis (1998), "nothing is said whether the two fixed endpoints are in the right place," and, therefore, the definition he proposes may be thought of as "relative accuracy." Modern nomenclature, not reflected in ISA standards, uses the term "inaccuracy" instead of "accuracy" to indicate the degree of departure from standard or ideal values. Although the former term is preferred here, "accuracy" is so deeply rooted in the sensor location and data reconciliation literature that it will nonetheless be used throughout this book.

CALIBRATION CURVES

Instruments are used over a certain range. Therefore, one might expect the precision, bias and accuracy to vary over the range of the measurement. A graph that shows the ideal values of an output variable as a function of the input variable is called a specified characteristic curve. For example, an orifice flowmeter measures the pressure drop as a function of the volumetric flow rate. Thus, we expect the characteristic curve of such a device to be a square root function. The calibration procedure consists of obtaining the actual value of the output



FIGURE 4.11. Independent conformity. Adapted from Liptak, 1997.

variable by averaging the upscale and downscale readings. The comparison between the actual and characteristic curves is called independent conformity. This is illustrated in Figure 4.11. To construct such curves, another instrument is needed to determine the "true" value of the input. In practice, the device used for this purpose must have at least five times better accuracy than the device being calibrated and must be free from systematic errors (Miller, 1996). Although its shape is known, in many cases the characteristic curve cannot be constructed because many parameters are not known. For example, in the case of the orifice flowmeter, one might not be able to calculate the actual value but can anticipate that the ideal behavior is proportional to a square root function. Then, to determine conformity, one needs to adjust the characteristic curve. When this adjustment is done by using the upper range value, the conformity is called terminal-based conformity. On the other hand, when the initial value is used, the conformity is called zero-based conformity (Figure 4.12). Thus, only in a few intervals within the instrument range can one say that a bias is negligible. One can therefore think of conformity as the deviation of the actual measurement from an expected theoretical characteristic curve. In practice, electronic transformations, such as taking the square root of signals coming from differential flowmeters, are made to obtain a linear output-input characteristic curve. In such a case, linearity establishes the difference between the actual and expected values.

Flow Compensation

Measurements are typically reported to the Distributed Control System (DCS) as percentages of the full span. In the case of differential flowmeters, there


FIGURE 4.12. (a) Terminal based conformity. (b) Zero-based conformity. Adapted from Liptak, 1997.

should be concern about the accumulation of errors, despite the accuracy of the flowmeter itself. For example, after the square root of the signal is taken, the resulting value is multiplied by the so-called meter factor, which is a constant for the particular flowmeter and fluid running through it and is supposed to contain the effect of density, temperature, etc. Moreover, the meter factor is the result of certain base conditions, which are nothing more than the assumptions used to calculate it, including the flow conditions (temperature and pressure). In many cases, the DCS performs automatic adjustment of the meter factor to compensate for deviations from the base conditions. This procedure is known as flow compensation. For example, in the case of gases, knowledge of the differences in pressure and temperature can be used to make this compensation (assuming that the concentration has not changed). In the case of liquids, temperature suffices. However, when the concentration is not known or varies, the density needs to be inferred. In the case of petroleum liquids, the Watson K factor is a useful tool to transform petroleum fraction laboratory distillation curves into densities. However, the corrections for liquids are not satisfactory, because the lab information is not available in real time. With the introduction of databases (historians), this situation is changing, because the data can be adjusted later for monitoring and accounting calculations.
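For gases, the compensation described above can be sketched as a correction of the indicated flow by the square root of a density ratio (a simplified illustration with hypothetical base and operating conditions; real DCS compensation includes additional correction factors):

```python
import math

# Sketch of gas flow compensation: the meter factor is established at base
# conditions (P_b, T_b); when the actual conditions differ, the indicated
# flow from a differential meter is corrected by sqrt((P/T)/(P_b/T_b)),
# since gas density is roughly proportional to P/T. All values below are
# hypothetical; real DCS compensation includes further corrections.
def compensated_flow(w_indicated, p, t, p_base, t_base):
    return w_indicated * math.sqrt((p / t) / (p_base / t_base))

# Meter calibrated at 500 kPa / 300 K, now running at 550 kPa / 310 K:
print(round(compensated_flow(100.0, 550.0, 310.0, 500.0, 300.0), 2))  # 103.18
```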

ACCURACY OF DIFFERENT INSTRUMENTS

Table 4.1 summarizes the accuracy and rangeability one can expect from the different types of flowmeters described in Chapter 3.


TABLE 4.1. Accuracy and Rangeability of Flowmeters.

Technology                                      Accuracy (%)   Rangeability
Differential pressure
  Orifice and multivariable flow transmitter    0.5–1.0 R      8:1
  Orifice and differential pressure transmitter 1.0–5.0 R      4:1
Velocity
  Electromagnetic                               0.25–1.0 R     30:1
  Turbine                                       0.15–0.5 R     10:1
  Vortex                                        0.65–1.35 R    25:1
  Ultrasonic Doppler                            2.0–5.0 R      5–25:1
  Ultrasonic transit time                       1.0–5.0 R      10–2750:1
Volumetric
  Positive displacement                         0.15–0.5 R     10:1
Mass
  Coriolis                                      0.10 R         80:1
  Thermal mass (gases)                          0.15 R         20:1
  Thermal mass (liquids)                        0.3 R          10:1
Variable area (metal tube)                      2–10 R         10:1

Source: Pomroy (1996), Dolenc (1996).

Pressure measurements can be as accurate as their theoretical limit (0.01%). However, in practice, they approach the range of 0.1–1%. Strain gauge pressure transducers can achieve accuracies of 0.5%. Solid-state pressure transducers have a similar range of accuracy (Wightman, 1972). The accuracy of temperature and density measurements is given in Tables 4.2 and 4.3, respectively.

TABLE 4.2. Accuracy and Range of Temperature Measurement Devices.

Technology           Accuracy                     Range (°F)
Thermal expansion
  Filled thermal     0.5–2.0°F                    –300 to 1000
  Bimetallic         1–20°F                       –80 to 800
Electrical
  Thermocouple       0.4–0.75% (0.2°F, best)      –260 to 2320
  Thermoresistance   0.01–0.12% (0.1°F, best)     –250 to 850
  Thermistor         1–15%                        –80 to 150
Optical
  Pyrometer          0.5–1%                       –50 to 3000

Source: Johnson (1998), Zientara (1972).


TABLE 4.3. Accuracy and Range of Densitometers.

Technology                  Accuracy (sp. gr.)
Buoyancy                    0.0001
U-tube (vibration type)     0.001
U-tube (force balance)      0.0005
Radiation (nuclear)         0.0001

Source: Wightman (1972).

NOMENCLATURE

s: standard deviation
ŝ: modified standard deviation
t_p: confidence coefficient
p: confidence level
x̄: mean value of all measurements of x
x̂: true value of x

Greek Letters

σ_a: accuracy
σ_P: precision
δ: bias

REFERENCES

Bagajewicz M. On the Probability Distribution and Reconciliation of Process Plant Data. Proceedings of the Process Systems Engineering (PSE) International Symposium, Kyongju, Korea (1994). Comp. & Chem. Eng., 20, N 6/7, p. 813 (1996).
Brown J. and E. V. D. Glazier. Signal Analysis. Reinhold, New York (1964).
Dolenc J. W. Choose the Right Flow Meter. Chemical Engineering Progress, pp. 22, Jan. (1996).
Himmelblau D. M. Process Analysis by Statistical Methods. Wiley (1970).
Johnson F. L. Temperature Measurement and Control Fundamentals. Chemical Processing, June (1998).
Lewis C. W. More Precise Temperature Measurements. Chemical Engineering, May (1998).
Liptak B., editor. Instrumentation Handbook. Third Edition. Chilton (1997).
McMillan G. K., G. E. Mertz and V. L. Trevathan. Trouble-Free Instrumentation. Chemical Engineering, Nov. (1998).
Miller R. W. Flow Measurement Engineering Handbook. McGraw Hill (1996).


Perry R. H. and D. Green. Perry's Chemical Engineers' Handbook, 6th Edition. McGraw-Hill, Inc., New York (1984).
Pomroy J. Selecting the Right Flowmeter. Chemical Engineering, May (1996).
Webster J. G., editor. Measurement, Instrumentation and Sensors Handbook. CRC Press, LLC (1999).
Wightman E. J. Instrumentation in Process Control. Butterworth & Co, London (1972).
Zientara D. E. Measuring Process Variables. Chemical Engineering, Sept. (1972).

CHAPTER 5

Variable Classification

INTRODUCTION

This chapter presents the classification of variables into their different categories. These categories are related to the ability to observe the system, that is, to obtain an estimate of its state variables, especially when they are not measured. The concepts of observability and redundancy, and the methods to determine observable and redundant variables, are reviewed first. The unifying concept of estimability is presented next. Connections to graph theory are then made; these graph concepts will be used in the design procedures of future chapters. The extension of all these concepts to nonlinear systems is covered in the last part of the chapter.

MODEL

Usually, a model for a process plant is given by a mathematical relation, typically a set of autonomous differential algebraic equations (DAE) such as the following one:

    dx₁/dt = g₁(x₁, x₂)        (5.1)

    g₂(x₁, x₂) = 0             (5.2)

For convenience, the set of state variables x has been partitioned into (x₁, x₂). In addition, x is usually divided into measured variables x_M and unmeasured variables x_U. In using this model, we depart from the traditional input-output, or input-state-output, representations so popular in control theory.


MEASUREMENT EQUATION

State variables describe the behavior of a system. However, these variables may not be the ones that are directly measured. Therefore, we introduce the measurement equation:

    z = h(x) + ε        (5.3)

where h(x) is a measurement function and ε is the measurement random error. Thus, the objective of monitoring is to obtain the value of a subset of the state variables x by using the knowledge of the measurement values z. In the simplest case, h(x) is linear. Thus, an estimator of x (calibration graphs, filters, data reconciliation, etc.) performs the inversion of Equation (5.3) as follows:

    x̃ = g(z)        (5.4)
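When h is linear, z = Hx + ε, one concrete choice of g(z) is a least-squares inverse. The following is a minimal numerical sketch; the matrix H, the noise level and the state values are invented purely for illustration:

```python
import numpy as np

# Hypothetical linear measurement function z = H x + eps:
# two state variables observed through three instruments.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])

rng = np.random.default_rng(0)
z = H @ x_true + rng.normal(0.0, 0.01, size=3)   # noisy measurements

# Least-squares inversion: one realization of the estimator g(z)
x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
print(x_hat)   # close to (2, 3)
```

With three readings for two states, the fit averages out part of the measurement noise, which previews the role redundancy plays later in this chapter.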

In the rest of the book, the distinction between x and its estimate x̃ will be omitted, with the only exception of Chapter 7, where it is used again.

GRAPHS AND FLOWSHEETS

A graph is defined as a set of vertices connected by edges. A flowsheet is a graphical representation of a process; it consists of units (nodes) and streams (edges). A graph is called directed (a digraph) when the edges (streams) have an established direction from one node to the other. Undirected graphs do not have any preferred direction; therefore, the edges can be traveled in either direction.

[FIGURE 5.1. Simplified cyclohexane production process network. (a) Flowsheet (hydrogen and benzene feeds, heater-reactor-cooler, flash, compressor, purge). (b) Corresponding graph.]

Figure 5.1 illustrates

the relationship between flowsheets and graphs. Groups of units are sometimes represented by a single node in the graph, as happens with the heater-reactor-cooler set, which is represented by node U2. Because, by definition, an edge has to connect two vertices, an environment node is sometimes added when building the graph of a flowsheet. This type of graph is called the augmented graph and is shown in Figure 5.1(b).

CONNECTIVITY OF SYSTEMS

A system is connected if, in the corresponding undirected graph, there is a path from every unit to every other unit. Consider the system in Figure 5.2. This system is connected. However, removal of unit U5 turns it into a disconnected system composed of two connected subsystems. Although the connectivity of small systems can be determined by simple visual inspection, in large flowsheets this might not be so obvious. There are two ways to determine whether a system is connected: using graph theory or manipulating the incidence matrix. Both are equivalent and are based on the same principle.

Graph Approach to the Determination of Connectivity

Take a node (unit). Merge it with any unit to which it is connected through an edge (stream). Repeat the procedure until no merging is possible. Then count the number of merged units left. This number is the number of connected systems. This process is shown in Figure 5.3. Figure 5.3(a) shows the graph corresponding to Figure 5.2. In Figure 5.3(b), nodes U2, U3, U7 and U8 are merged with the splitters in front of them. In Figure 5.3(c), mixer U4 is merged with splitter U1, and the second splitter, U6, is merged with node U5. The two nodes left are merged in Figure 5.3(d) to leave only one node. Suppose now that unit U5 is not originally present. Clearly, the process will end in a graph similar to the one of Figure 5.3(c), with two nodes (U1 and U6) not connected to each other.

[FIGURE 5.2. A connected system. ( ): measured flow rates.]


FIGURE 5.3. Steps for the determination of connectivity.

Incidence Matrix Approach to the Determination of Connectivity

The incidence matrix is constructed as follows:

• Each row represents a unit and each column represents a stream.
• In each row, streams entering the corresponding unit are represented by a 1. Streams leaving the unit are represented by a −1.

The incidence matrix C for the process in Figure 5.2 is:

              S1  S2  S3  S4  S5  S6  S7  S8  S9  S10 S11
        U1  [  1  −1  −1                                  ]
        U2  [      1      −1                              ]
        U3  [          1      −1                          ]
    C = U4  [              1   1  −1                      ]        (5.5)
        U5  [                      1  −1                  ]
        U6  [                          1  −1  −1          ]
        U7  [                              1      −1      ]
        U8  [                                  1      −1  ]

The method to determine connectivity based on the incidence matrix is:

• Take the first row of the incidence matrix. Find a column with a nonzero element.
• Search in that column for another row with a nonzero element. By construction, this element will have the opposite sign.
• Add the row found to the first row, and eliminate that row and the column.
• Repeat the procedure for the remaining rows (if any).
• Count the number of rows left. This number is the number of connected systems.

This procedure is illustrated next for the system shown in Figure 5.2. Successive application of the procedure to the first row leads to a new matrix C′:

           S1  S10  S11
    C′ = [  1   −1   −1 ]        (5.6)

which proves that the system is connected. The reader can verify that, in the case where unit U5 is not present, the process ends with the following matrix:

           S1  S6  S7  S10  S11
    C′ = [  1  −1   0    0    0 ]        (5.7)
         [  0   0   1   −1   −1 ]

In the analysis that follows and in the rest of the chapters of the book, we assume that all systems are connected.
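The row-merging procedure can be automated directly on the incidence matrix. A sketch follows; the stream-unit connections are as read off Figure 5.2 here, so treat the matrix entries as an inferred reconstruction of (5.5):

```python
import numpy as np

def n_connected(C):
    """Row-merging procedure: two rows that share a nonzero column (a
    common stream) are added together and collapsed into one row; the
    rows left at the end are the connected systems."""
    rows = [np.array(r, dtype=int) for r in C]
    merged = True
    while merged:
        merged = False
        for i in range(len(rows)):
            for j in range(i + 1, len(rows)):
                if np.any((rows[i] != 0) & (rows[j] != 0)):
                    rows[i] = rows[i] + rows[j]   # shared streams cancel
                    del rows[j]
                    merged = True
                    break
            if merged:
                break
    return len(rows)

# Incidence matrix (5.5) for Figure 5.2: (unit, stream, sign) triples
C = np.zeros((8, 11), dtype=int)
for u, s, v in [(0,0,1),(0,1,-1),(0,2,-1),(1,1,1),(1,3,-1),(2,2,1),(2,4,-1),
                (3,3,1),(3,4,1),(3,5,-1),(4,5,1),(4,6,-1),
                (5,6,1),(5,7,-1),(5,8,-1),(6,7,1),(6,9,-1),(7,8,1),(7,10,-1)]:
    C[u, s] = v

print(n_connected(C))                        # 1: the system is connected
print(n_connected(np.delete(C, 4, axis=0)))  # drop U5's row: 2 subsystems
```

For the full matrix, the single surviving row is S1 − S10 − S11, which is exactly the merged balance shown in Equation (5.6).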

OBSERVABILITY

Observability of a variable can be defined in a broad sense as the ability to produce an estimate of that variable by using certain measurements performed on the system. When discussing the measurement equation, it was made clear that variables are often not measured directly; rather, some other parts of the system are measured. Thus, in simple terms, determining the observability of a variable consists of determining whether the function g(z) exists. Kalman (1960) introduced the concept of observability for linear dynamic systems. The seminal articles by Griffith and Kumar (1971), Kou et al. (1973) and Singh (1975) discussed observability in nonlinear systems. Stanley and Mah (1981) discussed the observability of systems described by steady-state models in depth. In particular, they discuss conditions under which observability can be attained, conditions that are related to the nature and structure of the mappings h(x) and g(z). We now concentrate on a variable classification for linear steady-state systems.


The system shown in Figure 5.2 consists of eight units and 11 streams. Consider that the hold-ups of units U1 through U8 are negligible. Thus, the balances in the system contain no derivatives and are represented by equations of the form shown in Equation (5.2). Notice first that none of the flow rates after the first split (S2, S3, S4, S5) can be calculated, because there is no material balance that can be used to obtain them. These variables are called unobservable variables. The rest of the unmeasured variables (S6, S9, S10) can be obtained from material balances by using the measured values. We call these variables observable variables. This leads to our first classification:

    Variables:
        Measured (M)
        Unmeasured (U):
            Observable (O)
            Unobservable (UO)

Thus, we formally define observability as follows:

Definition: A nonmeasured variable is observable if it can be calculated in at least one way from the measurements.

REDUNDANCY

Consider now the measured streams in Figure 5.2. If the flow rate of stream S1 were not measured, it could be estimated in one way, that is, by using the value of S7. However, there is also an additional way: by adding S8 and S11. In this case, we thus say that the system {S1, S7, S8, S11} is redundant. Assume now that streams S7 and S8 are not measured. Then S1 could not be estimated by any balance equation, and the removal of its measurement would make it unobservable: we call such a measurement nonredundant. Thus, the following definitions follow:

Definition: A measurement is redundant if the corresponding variable can be calculated in at least one way using the remaining measurements.

Definition: A set of measurements of a system is redundant if all measurements are redundant.

Definition: A measurement of a variable is nonredundant if, after removing this measurement, the variable is unobservable, that is, it cannot be calculated by using a balance equation involving the other measurements of the system.


We now complete our classification of variables as follows:

    Variables:
        Measured (M):
            Redundant (R)
            Nonredundant (NR)
        Unmeasured (U):
            Observable (O)
            Unobservable (UO)

Redundancy is therefore a desirable property of a system because, when an instrument fails, its variable can still be estimated through balances. Moreover, if the number of different balances that can be used increases, there will be additional ways to calculate the variable. When more redundant measurements are involved in calculating a variable, the reliability of the system and the precision of the estimate increase. There is, therefore, a need to distinguish these different levels of redundancy.

Hardware Redundancy

We consider now the case where more than one instrument is used to measure the same variable. This has traditionally been the way redundancy has been understood. Hardware redundancy has no effect on observability, but it does affect the accuracy and the reliability, as well as other properties of the sensor network. This will be discussed in Chapter 8.

LINEAR SYSTEMS

The dynamic model of a material balance in a process plant can be represented by the following differential-algebraic system of equations (DAE):

    dw/dt = A f        (5.8)

    C f = 0            (5.9)

where f are the flows of the different streams connecting the units, w are the hold-ups of the respective units, and A and C are system matrices.

CANONICAL REPRESENTATION OF LINEAR SYSTEMS

A mathematical representation is now developed to obtain an analytical way to determine observable, unobservable and redundant variables. Consider first the partition of the above system of equations into measured and unmeasured variables. Let

    D = [ A  −I ]        (5.10)
        [ C   0 ]

and

    x = [   f   ]        (5.11)
        [ dw/dt ]

Thus, the system of differential algebraic Equations (5.8) and (5.9) can be rewritten as follows:

    D x = 0        (5.12)

By simple rearrangement of columns in matrix D, the system can be rewritten in the following way:

    [ D_U  D_M ] [ x_U ] = 0        (5.13)
                 [ x_M ]

where x_U corresponds to unmeasured variables and x_M corresponds to measured variables. Madron (1992) proposed a simple way of determining observable variables: as in Gauss-Jordan elimination for solving linear systems, a linear combination of rows combined with column rearrangements is performed to obtain a system that has the following form:

    [ D_O  D_UO  D_R  D_NR ] [ x_O  ]
                             [ x_UO ] = 0        (5.14)
                             [ x_R  ]
                             [ x_NR ]

where D_O, D_UO, D_R and D_NR have the following structure:

    D_O = [ I ]        (5.15)
          [ 0 ]
          [ 0 ]

    D_UO = [  0   ]        (5.16)
           [ G_UO ]
           [  0   ]

    D_R = [ −G_RO ]        (5.17)
          [ G_RUO ]
          [  G_R  ]

    D_NR = [ −G_NRO ]        (5.18)
           [ G_NRUO ]
           [   0    ]

The column rearrangement that leads to the identity matrix in Equation (5.15) allows the identification of the observable variables x_O. Matrix G_UO typically has more columns than rows and rank equal to its number of rows. Thus, it forms an underdetermined system involving x_UO, which will be ignored because it is insoluble. The final form of the system of equations is:

        Observable  Unobservable   Redundant  Nonredundant
       [    I            0           −G_RO       −G_NRO    ] [ x_O  ]
       [    0          G_UO          G_RUO       G_NRUO    ] [ x_UO ] = 0        (5.19)
       [    0            0            G_R           0      ] [ x_R  ]
        (---- unmeasured ----)       (----- measured -----)  [ x_NR ]

This is called the canonical form of D. The last set of variables is called nonredundant; their measurements have to be taken at face value. In the context of dynamic data reconciliation, these variables have been called self-redundant, because their set of measurements can still be fitted to a curve. Rewriting the system, one has:

    x_O = G_RO x_R + G_NRO x_NR        (5.20)

    G_R x_R = 0        (5.21)

Remark 1: G_R represents a graph that corresponds to a redundant system only. We call this graph the redundant graph of the system. In addition, the redundant system matrix is of the form

    G_R = [ I  G*_R ]        (5.22)

Finally,

    G_RO = [ 0  G*_RO ]        (5.23)

Remark 2: When there are redundant measured hold-ups, matrix G_R has the following structure:

    G_R = [ A_R  −B_R ]        (5.24)
          [ C_R    0  ]

Therefore, Equations (5.8) and (5.9) are rewritten in terms of their specific parts as follows:

    B_R dw_R/dt = A_R f_R        (5.25)

    C_R f_R = 0                  (5.26)

Remark 3: Knowledge of x_R and x_NR allows the determination of the observable values. This is called co-optation. However, because measurements do not comply with Equation (5.21), adjustments are needed. The determination of estimates of x_R such that Equation (5.21) is satisfied is called data reconciliation.

Illustration

Consider the system of Figure 5.2. Its incidence matrix is given by Equation (5.5). The canonical form of this matrix, obtained by Gauss-Jordan factorization, is:

              S6  S9  S10 | S2  S3  S4  S5 | S1  S7  S8  S11
            [  1          |                |     −1          ]
            [      1      |                |             −1  ]
            [          1  |                |         −1      ]
    C′ =    [             |  1   1         | −1              ]        (5.27)
            [             |  1      −1     |                 ]
            [             |      1      −1 |                 ]
            [             |                |  1  −1          ]
            [             |                |      1  −1  −1  ]

              observable     unobservable       redundant


[FIGURE 5.4. A system with tanks (U1 through U4) and a splitter (U5). ( ): measured flow rates and hold-ups.]

If, in addition, variable S3 is measured, then the result is:

              S6  S9  S10  S2  S4  S5 | S1  S7  S8  S11 | S3
            [  1                      |     −1          |    ]
            [      1                  |             −1  |    ]
            [          1              |         −1      |    ]
    C′ =    [              1          | −1              |  1 ]        (5.28)
            [                  1      | −1              |  1 ]
            [                      1  |                 | −1 ]
            [                         |  1  −1          |    ]
            [                         |      1  −1  −1  |    ]

                  observable               redundant     nonredundant
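The classification produced by the canonical form can also be reproduced with simple rank tests: an unmeasured variable is observable exactly when its unit vector lies in the row space of D_U, and a measurement is redundant when the variable stays observable after that measurement is dropped. The following sketch applies this to Figure 5.2; the incidence matrix entries are the ones inferred for that figure:

```python
import numpy as np

def observable(D, measured, i):
    """Variable i (not in `measured`) is determinable from the balances
    iff e_i lies in the row space of D restricted to unmeasured columns."""
    unmeas = [j for j in range(D.shape[1]) if j not in measured]
    DU = D[:, unmeas]
    e = np.zeros(len(unmeas))
    e[unmeas.index(i)] = 1.0
    return np.linalg.matrix_rank(np.vstack([DU, e])) == np.linalg.matrix_rank(DU)

def classify(D, measured):
    labels = {}
    for i in range(D.shape[1]):
        if i in measured:
            labels[i] = "redundant" if observable(D, measured - {i}, i) else "nonredundant"
        else:
            labels[i] = "observable" if observable(D, measured, i) else "unobservable"
    return labels

# Incidence matrix of Figure 5.2; measured flows {S1, S7, S8, S11}
C = np.zeros((8, 11))
for u, s, v in [(0,0,1),(0,1,-1),(0,2,-1),(1,1,1),(1,3,-1),(2,2,1),(2,4,-1),
                (3,3,1),(3,4,1),(3,5,-1),(4,5,1),(4,6,-1),
                (5,6,1),(5,7,-1),(5,8,-1),(6,7,1),(6,9,-1),(7,8,1),(7,10,-1)]:
    C[u, s] = v

print(classify(C, {0, 6, 7, 10}))
# S6, S9, S10 come out observable; S2..S5 unobservable; all four
# measured flows redundant, matching the labels of Equation (5.27).
```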

We now turn to another example that illustrates changes in unit hold-up. Consider the system depicted in Figure 5.4. It contains nine streams, four tanks and one splitter. Assume that all hold-ups and flow rates are measured except the hold-up of tank U3 and the flow rates of S2 and S6. Matrix D is then given by:

              S1  S2  S3  S4  S5  S6  S7  S8  S9 | U1  U2  U3  U4
        U1  [  1  −1               1             | −1             ]
        U2  [      1  −1                   1     |     −1         ]
    D = U3  [          1  −1      −1             |         −1     ]        (5.29)
        U4  [              1  −1      −1         |             −1 ]
        U5  [                  1          −1  −1 |                ]

where the last four columns correspond to the hold-up derivatives dw1/dt through dw4/dt.


The canonical form of D is:

              S2  S6  U3 | S4  S5  S7  S8  S9  U4 | U1  U2  S1  S3
            [  1         |              1         |     −1      −1 ]
            [      1     |              1         | −1  −1   1  −1 ]
    D′ =    [          1 |  1          −1         |  1   1  −1     ]        (5.30)
            [            |  1      −1  −1  −1  −1 |                ]
            [            |      1      −1  −1     |                ]

              observable          redundant          nonredundant

As a result of the procedure, only two equations (the last two rows) represent the redundant system. In addition, all variables are in this case observable. The corresponding matrices are:

    G_R = [ 1  0  −1  −1  −1  −1 ]        (5.31)
          [ 0  1   0  −1  −1   0 ]

    G_RO = [  0  0  0  −1  0  0 ]        (5.32)
           [  0  0  0  −1  0  0 ]
           [ −1  0  0   1  0  0 ]

    G_NRO = [  0   1   0  1 ]        (5.33)
            [  1   1  −1  1 ]
            [ −1  −1   1  0 ]

Thus, U4, S4, S5, S7, S8 and S9 have been identified as redundant variables. In particular:

    A_R = [ 1  0  −1  −1  −1 ]
    C_R = [ 0  1   0  −1  −1 ]        (5.34)
    B_R = [ 1 ]

which correspond to the following system of equations:

    dw4/dt = f4 − f7 − f8 − f9        (5.35)

    f5 − f8 − f9 = 0                  (5.36)

The graph of the redundant system is depicted in Figure 5.5.


[FIGURE 5.5. Redundant system. ( ): measured flow rates.]

Thus, the nonredundant (self-redundant) variables are U1, U2, S1 and S3, and the observable variables are U3, S2 and S6, as indicated by the first three columns of the canonical form of D.

OTHER METHODS

A few other methods exist for the observability analysis of linear systems. The pioneering work in this field was performed by Vaclavek (1969) and Romagnoli and Stephanopoulos (1980). More recently, matrix projection and QR decomposition have been proposed. We briefly illustrate these two methods next.

Matrix Projection

This method was proposed by Crowe et al. (1983), and it consists of determining a matrix P such that P D_U = 0. Thus, by premultiplying Equation (5.13) by P one obtains:

    P D_U x_U + P D_M x_M = P D_M x_M = 0        (5.37)

which leaves a system of equations in terms of the measured variables only. Matrix P is obtained as follows. Assume D_U is partitioned as follows:

    D_U = [ D_U,1  D_U,3 ]        (5.38)
          [ D_U,2  D_U,4 ]

Without loss of generality, we assume that D_U,1 is the largest nonsingular submatrix of D_U. Then P is given by:

    P = [ −D_U,2 D_U,1⁻¹   I ]        (5.39)


Indeed,

    P D_U = [ 0   −D_U,2 D_U,1⁻¹ D_U,3 + D_U,4 ]        (5.40)

However, the right-half columns of D_U are linear combinations of the left half. Thus, −D_U,2 D_U,1⁻¹ D_U,3 + D_U,4 = 0.

QR Decomposition

This method was proposed by Swartz (1989). Let r = rank(D_U). The method consists of multiplying matrix D_U by a permutation matrix Π such that the first r columns are linearly independent. Then D_U Π can be uniquely written as follows:

    D_U Π = [ Q1  Q2 ] [ R11  R12 ]        (5.41)
                       [  0    0  ]

where Q1 and Q2 are orthonormal and R11 is upper triangular (Stewart, 1973; Sánchez and Romagnoli, 1996). These matrices can be obtained with one code instruction in MATLAB. We rewrite Equation (5.41) as follows:

    D_U = [ Q1  Q2 ] [ R11  R12 ] Πᵀ        (5.42)
                     [  0    0  ]

Thus, premultiplying x_U by Πᵀ one obtains a permutation of the unmeasured variables, that is, Πᵀ x_U = [x_U,1  x_U,2]ᵀ. Therefore, we premultiply Equation (5.13) by Qᵀ to obtain:

    [ R11  R12  Q1ᵀ D_M ] [ x_U,1 ]
    [  0    0   Q2ᵀ D_M ] [ x_U,2 ] = 0        (5.43)
                          [  x_M  ]

One equation derived from Equation (5.43) is:

    x_U,1 = −R11⁻¹ [ Q1ᵀ D_M x_M + R12 x_U,2 ]        (5.44)

It follows that x_U,2 is always unobservable. However, the set x_U,1 contains observable and unobservable variables. Indeed, a variable in x_U,1 is observable if the corresponding row of R11⁻¹ R12 is zero. Finally, Q2ᵀ spans the null space of D_Uᵀ; therefore Q2ᵀ D_U = 0, indicating that Q2ᵀ works in the same way as the projection matrix P. The use of this method for observability analysis will be illustrated later in this chapter for nonlinear cases.
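The QR route can be sketched with NumPy. Since numpy.linalg.qr offers no column pivoting, the permutation Π is built here explicitly by greedily selecting independent columns; Q2ᵀ then annihilates D_U and plays the role of the projection matrix P. The incidence matrix below is the one inferred for Figure 5.2, with x_M = {S1, S7, S8, S11}:

```python
import numpy as np

# Incidence matrix of Figure 5.2 and the measured stream set
C = np.zeros((8, 11))
for u, s, v in [(0,0,1),(0,1,-1),(0,2,-1),(1,1,1),(1,3,-1),(2,2,1),(2,4,-1),
                (3,3,1),(3,4,1),(3,5,-1),(4,5,1),(4,6,-1),
                (5,6,1),(5,7,-1),(5,8,-1),(6,7,1),(6,9,-1),(7,8,1),(7,10,-1)]:
    C[u, s] = v
measured = [0, 6, 7, 10]
unmeas = [j for j in range(11) if j not in measured]
DU, DM = C[:, unmeas], C[:, measured]

# Build the permutation: put r independent columns of DU first
r = np.linalg.matrix_rank(DU)
perm, rank = [], 0
for j in range(DU.shape[1]):
    if np.linalg.matrix_rank(DU[:, perm + [j]]) > rank:
        perm.append(j)
        rank += 1
perm += [j for j in range(DU.shape[1]) if j not in perm]

Q, R = np.linalg.qr(DU[:, perm], mode="complete")
Q2 = Q[:, r:]

print(np.allclose(Q2.T @ DU, 0.0))       # True: Q2' annihilates DU
print(np.linalg.matrix_rank(Q2.T @ DM))  # 2 independent redundant equations
```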


SYSTEM DEGREE OF REDUNDANCY

We are now in a position to give a mathematical interpretation of the degree of redundancy. First, note that matrix G_R is of full rank: if G_R were not of full rank, one balance could be written as a linear combination of the others; however, by construction, D has as many rows as units and, therefore, all balances are linearly independent. Thus, since the degree of redundancy is the number of linearly independent balance equations that can be written with measured quantities, it is equal to the rank of G_R. This suggests the following definition:

Definition: A system has degree of redundancy k when at least k linearly independent balance equations can be written by using measured variables only.

In the case of Figure 5.2, the degree of redundancy is two, as follows from Equation (5.27). If S3 is measured, the degree of redundancy is still two.

Property: The maximum degree of redundancy of a system is equal to the number of units.

The property is self-evident from the fact that this is the maximum number of linearly independent material balances that can be written. This prompts the following definition:

Definition: A system has full redundancy when all measurements are redundant.

In addition, the system degree of redundancy is an indication of the appropriateness of the measurement locations. For example, two systems can have the same number of measurements but different system degrees of redundancy, as in one case too many variables are redundant and in the other too many are nonredundant. Thus, the concept of system degree of redundancy does not fully capture the richness of the different types of system redundancy that one can find. For example, systems can have the same number of measurements and the same number of units, but different degrees of redundancy. Consider the system of three units depicted in Figure 5.6(a), and assume all streams are measured. It contains three units and four measurements, resulting in a degree of redundancy of three. Consider now the system in Figure 5.6(b), and assume that all but S5 and S6 are measured. As in the case of Figure 5.6(a), this system contains three units and four measurements. However, its degree of redundancy is one. If the set of measured streams is x_M = {S1, S2, S3, S6}, then the system degree of redundancy is two. Thus, the degree of redundancy of a system reflects how effectively a certain number of measurements is distributed throughout the system. However, we have not found much connection with specific goals, such as reliability, gross error detectability, resilience or the residual precision


[FIGURE 5.6. Illustration of system degree of redundancy: (a) and (b) are two three-unit systems. ( ): measured flow rates.]

under sensor failure, other than the fuzzy statement of saying that “the more redundancy, the better.” For this reason, one needs to be more specific and talk about attributes of specific variables at specific locations. This will be discussed in later chapters.
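The system degree of redundancy can be computed as rank(D) minus the rank of the unmeasured part D_U: that difference is exactly the number of independent balance equations left after the unmeasured variables are projected out. A sketch follows; the balances written for Figure 5.6(b) are an inferred topology, chosen to be consistent with the degrees stated in the text, so treat them as an assumption:

```python
import numpy as np

def degree_of_redundancy(D, measured):
    """k = (independent balances) - rank of the unmeasured columns."""
    unmeas = [j for j in range(D.shape[1]) if j not in measured]
    if not unmeas:
        return int(np.linalg.matrix_rank(D))
    return int(np.linalg.matrix_rank(D) - np.linalg.matrix_rank(D[:, unmeas]))

# Figure 5.6(a): three units in series, streams S1..S4 (indices 0..3)
Da = np.array([[1., -1., 0., 0.],
               [0., 1., -1., 0.],
               [0., 0., 1., -1.]])
print(degree_of_redundancy(Da, {0, 1, 2, 3}))   # 3

# Figure 5.6(b), hypothetical balances consistent with the text:
# U1: S1 - S2 - S3,  U2: S2 - S4 - S5,  U3: S3 + S4 + S5 - S6
Db = np.array([[1., -1., -1., 0., 0., 0.],
               [0., 1., 0., -1., -1., 0.],
               [0., 0., 1., 1., 1., -1.]])
print(degree_of_redundancy(Db, {0, 1, 2, 3}))   # 1
print(degree_of_redundancy(Db, {0, 1, 2, 5}))   # 2
```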

QUANTIFICATION OF OBSERVABILITY AND REDUNDANCY

We now resort to a few definitions that will allow the quantification of observable and redundant variables. A unification of both concepts under a single notion of estimability is presented next.

Estimability

A generalized definition of observability was attempted by Ali and Narasimhan (1993) to denote as observable any variable, measured or unmeasured, for which an estimate can be produced. The term estimability is preferred here to avoid confusion and to reserve the name observable for unmeasured variables, since that usage has become popular in the literature. The definition of estimability is formally presented next.

Definition: A variable S_i is estimable if it is measured, or unmeasured but observable.

Degree of Observability of Variables

For convenience, we denote Θ(p) as the set of all possible combinations of p measurements. We call Θ_j(p) the jth element (combination) of this set. We are now ready for the following definition:

[FIGURE 5.7. Illustration of the concept of degree of observability.]

Definition: An unmeasured variable S_i has degree of observability O_i if:
a. it remains observable after the elimination of any combination Θ_j(O_i − 1) ∈ Θ(O_i − 1), and
b. it becomes unobservable when at least one set Θ_j(O_i) ∈ Θ(O_i) is eliminated.

Notice the words "any" and "at least" in this definition. They are crucial for the understanding of the concepts. In addition, notice that the two conditions need to hold simultaneously. Consider the system of Figure 5.7 and assume that x_M = {S1, S2}. Variable S6 has degree of observability O_6 = 1, because just the elimination of the measurement of S1 makes it unobservable. Part (a) of the definition does not apply, because O_i − 1 = 0. If, for example, S3 is also measured, that is, x_M = {S1, S2, S3}, then S6 has degree of observability O_6 = 2, because elimination of one measurement at a time (S1, S2 or S3) does not make it unobservable, whereas deletion of either of the two sets (S1, S2) or (S1, S3) would render it unobservable. Note, however, that the elimination of the set (S2, S3) would not make S6 unobservable.

Degree of Redundancy of Variables

For convenience, we denote ϑ(p, S_i) as the set of all possible combinations of p measured variables, not including the measured stream S_i. We call ϑ_j(p, S_i) the jth element (combination) of this set. The following definition is then presented.

Definition: A redundant measured variable S_i has degree of redundancy R_i if:
a. it remains redundant after the elimination of any combination ϑ_j(R_i − 1, S_i) ∈ ϑ(R_i − 1, S_i), and
b. it becomes nonredundant when at least one set ϑ_j(R_i, S_i) ∈ ϑ(R_i, S_i) is eliminated.


Remark 4: Accordingly, the degree of redundancy of a nonredundant measurement is zero.

In the system of Figure 5.7, for x_M = {S1, S2}, variable S1 has degree of redundancy R_1 = 0 because it is already nonredundant. If S3 is measured, that is, if x_M = {S1, S2, S3}, then S1 has degree of redundancy R_1 = 1, because it is sufficient to eliminate S2 or S3 to make it nonredundant. A redundant measurement is such that the variable becomes observable when the measurement is eliminated. A nonredundant variable, in turn, becomes unobservable if its measurement is eliminated. Thus, if a variable has degree of redundancy R_i, the elimination of its measurement will make it a variable with degree of observability O_i = R_i. For example, if for Figure 5.7 x_M = {S1, S2, S3}, then the elimination of S1 makes it a nonmeasured variable with degree of observability O_1 = 1, because it is enough to eliminate S2 or S3 to make it unobservable. These definitions of degree of observability and redundancy were first introduced by Maquin et al. (1991, 1995). The version presented above is the result of slight modifications made by Bagajewicz and Sánchez (1999). In addition, Maquin et al. (1991, 1995) introduced the concept of degree of redundancy of unmeasured variables. This concept is not used in this book, because it is preferred to leave the concept of redundancy confined to measured variables. The concept of estimability, which unifies degree of observability and degree of redundancy into a single concept, is presented next. This will become important when the design of sensor networks is attempted, as requirements can be more easily established mathematically.

Degree of Estimability of Variables

Definition: A variable S_i (measured or not) has degree of estimability E_i if:
a. it remains estimable after the elimination of any combination Θ_j(E_i − 1) ∈ Θ(E_i − 1), and
b. it becomes unobservable when at least one set Θ_j(E_i) ∈ Θ(E_i) is eliminated.

To illustrate the above definition, consider the process graph in Figure 5.8, which includes the environment as a node, and assume all flow rates are measured, that is, x_M = {S1, S2, S3, S4}. In this case, ϑ(1, S2) = {(S1), (S3), (S4)}. Therefore, stream S2 has degree of redundancy R_2 = 1, because just the elimination of S1 makes S2 nonredundant. Because the elimination of S1 makes S2 nonredundant, and the elimination of either S3 or S4 does not alter its redundancy status, its degree of estimability is larger than 1. Consequently, it is necessary to evaluate the elements of the set Θ(2) = {(S1, S2), (S1, S3), (S1, S4), (S2, S3), (S2, S4), (S3, S4)}. From


[FIGURE 5.8. Illustration of degree of estimability: two units, U1 and U2, and the environment node ENV, connected by streams S1 through S4.]

this analysis, it can be easily seen that the elimination of (S1, S2) makes S2 unobservable, thus E_2 = 2. In a similar way, it can be shown that R_3 = 2 because, if all the elements of ϑ(1, S3) = {(S1), (S2), (S4)} are individually eliminated, variable S3 stays redundant, but it becomes nonredundant if the elements (S1, S4) or (S2, S4) from ϑ(2, S3) = {(S1, S2), (S1, S4), (S2, S4)} are deleted. The inspection of Θ(2) and Θ(3) = {(S1, S2, S3), (S1, S2, S4), (S1, S3, S4), (S2, S3, S4)} helps to conclude that E_3 = 3, because the elimination of (S2, S3, S4) renders S3 unobservable.

The following properties are natural consequences of the definition.

Property: The degree of estimability of a nonmeasured variable is equal to its degree of observability.

Property: The degree of estimability of a measured redundant variable is its degree of redundancy plus one.

Property: A measured nonredundant variable has degree of redundancy zero and degree of estimability one.

The degree of estimability is an important property because it does not distinguish whether a variable is measured or not. Systems in which all variables have degree of estimability one are nonredundant and, as we shall see later, feature a minimum number of sensors.

Lemma 5.1: A system where all variables have degree of estimability equal to one is a system of nonredundant measured variables and observable unmeasured ones.

Proof: If a variable is measured and has degree of estimability one, then it is nonredundant. If, in turn, it is unmeasured, it is observable (because its estimability is one) and its observability is given by nonredundant variables. Q.E.D.
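Degrees of estimability can be found by brute-force enumeration of measurement-deletion sets, using the same rank test for observability as before. A sketch for Figure 5.8 follows; the balance equations written below are inferred from the worked results in the text, so treat the topology as an assumption:

```python
import numpy as np
from itertools import combinations

def is_observable(D, measured, i):
    """Rank test: e_i must lie in the row space of the unmeasured part."""
    unmeas = [j for j in range(D.shape[1]) if j not in measured]
    DU = D[:, unmeas]
    e = np.zeros(len(unmeas))
    e[unmeas.index(i)] = 1.0
    return np.linalg.matrix_rank(np.vstack([DU, e])) == np.linalg.matrix_rank(DU)

def estimability(D, measured, i):
    """Smallest k such that deleting some k measurements leaves variable i
    unmeasured and unobservable (brute force over all deletion sets)."""
    for k in range(1, len(measured) + 1):
        for combo in combinations(sorted(measured), k):
            left = measured - set(combo)
            if i not in left and not is_observable(D, left, i):
                return k
    return len(measured)

# Figure 5.8 with the environment node (inferred balances):
#   U1: S3 - S1 - S2 = 0,  U2: S1 + S2 - S4 = 0,  ENV: S4 - S3 = 0
D = np.array([[-1., -1., 1., 0.],
              [1., 1., 0., -1.],
              [0., 0., -1., 1.]])
M = {0, 1, 2, 3}   # all of S1..S4 measured
print([estimability(D, M, i) for i in range(4)])   # [2, 2, 3, 3]
```

The result reproduces E_2 = 2 and E_3 = 3 from the worked example (S1 and S4 follow by symmetry).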


Property: When the degree of estimability of a variable is larger than one, the number of sensors that need to fail before the variable becomes unobservable is equal to its degree of estimability.

The property follows from the definition, and it has connections with the concept of reliability, as will become apparent in Chapters 10 and 11.

GRAPHS AND CANONICAL MATRICES

In this section, we review a few graph theory concepts related to the canonical matrix. Most of this material has been extracted from textbooks (Harary, 1972; Even, 1979) and adapted to our case.

Spanning Trees

Definition: A cycle of a graph is a set of connected edges (streams) such that the last edge is connected to the first.

Definition: A tree of a graph is a set of connected edges (streams) that does not form a cycle.

Definition: A spanning tree T of a graph is a tree that connects all vertices (units) of the graph.

Lemma 5.2: The identity part of D′, the canonical representation of matrix D, corresponds to a set of variables that form a spanning tree.

Proof: Note first that we refer here to a canonical form that does not distinguish measured from unmeasured variables, that is, D′ = [I D*]. The identity part of D′ corresponds to a tree because there are no cycles and, because it has a nonzero value in every row, it is connected to all vertices (units). Q.E.D.

Definition: A branch of a spanning tree is any edge (stream) that belongs to the spanning tree.

Definition: A chord of a spanning tree is any edge (stream) that does not belong to the spanning tree.

Lemma 5.3: A system where all variables have degree of estimability E_i = 1 corresponds to a system where the unmeasured streams form a spanning tree.

Proof: If all variables have estimability E_i = 1, then all unmeasured variables are observable and all measured variables are nonredundant. If all unmeasured variables are observable, then the identity matrix contains all of them and, because all the measured variables are nonredundant, there


S2 S1

U2

U1

S4 S6

U4

S3

U3

101

S5

ENV

FIGURE 5.9. Augmented graph for Figure 5.7.

is no row with zeros corresponding to the columns of the unmeasured variables. Thus, the identity matrix corresponds to all unmeasured variables, which form a spanning tree. Q.E.D.

Adding one measurement to a system can have a different effect on the estimability of the rest of the variables, depending on which measurement is added. Consider the system of Figure 5.7 and assume the measured variables are given by xM = {S1, S2}. The augmented graph for this system is shown in Figure 5.9. The set of unmeasured variables is a spanning tree, as can be observed in Figure 5.10. The degree of estimability of all variables is one, because they are all observable and xM is a set of nonredundant measurements. The canonical form of the incidence matrix is:

          S3   S4   S5   S6   S1   S2
        [  1    0    0    0   -1    1 ]
        [  0    1    0    0    0   -1 ]          (5.45)
        [  0    0    1    0   -1    1 ]
        [  0    0    0    1   -1    0 ]

FIGURE 5.10. One spanning tree of the system of Figure 5.9.


VARIABLE CLASSIFICATION

Assume now that S4 becomes measured. Then, the new canonical form is:

          S3   S5   S6   S4   S2   S1
        [  1    0    0    0    1   -1 ]
        [  0    1    0    0    1   -1 ]          (5.46)
        [  0    0    1   -1    1   -1 ]
        [  0    0    0    1   -1    0 ]

The set {S2 , S4 } is now redundant, whereas S1 remains nonredundant. However, S3 , S5 and S6 still have a degree of estimability E3 = E5 = E6 = 1, because it is enough to delete S1 to make any one of them unobservable. In turn, S2 and S4 increased their degree of estimability to E2 = E4 = 2. Consider now that instead of S4 , stream S5 is measured, that is, xM = {S1 , S2 , S5 }. Then, the new canonical matrix is:

          S3   S4   S6   S5   S1   S2
        [  1    0    0    0   -1    1 ]
        [  0    1    0    0    0   -1 ]          (5.47)
        [  0    0    1   -1    0   -1 ]
        [  0    0    0    1   -1    1 ]

The set {S1 , S2 , S5 } is now redundant, and their degree of estimability is E1 = E2 = E5 = 2. There are no nonredundant variables, and the degree of estimability of all the unmeasured variables is E3 = E4 = E6 = 2. We now introduce the notion of distance between spanning trees.

Definition: The distance between two spanning trees is given by the number of streams that need to be removed from one of the spanning trees and replaced by streams of the other.

Consider the spanning trees of Figure 5.11. The spanning tree of Figure 5.11(a) and the one of Figure 5.10 have distance one, because it is only necessary to eliminate S3 from Figure 5.10 and add S2 to obtain Figure 5.11(a). The spanning trees shown in Figures 5.11(a) and 5.11(b) also have distance one from each other. However, the spanning tree of Figure 5.11(b) and the spanning tree of Figure 5.10 have distance two. As we shall see in future chapters, spanning trees can be used to find sensor networks featuring a minimum number of instruments. Such systems have all variables with degree of estimability equal to one. It is tempting to think that enumerating all spanning trees and later picking the cheapest is a sound procedure to design such systems. Unfortunately, this could be computationally


FIGURE 5.11. Two spanning trees of the system of Figure 5.9.

costly. The number of spanning trees of a given graph is given by the determinant of L = A A^T (Chen, 1971). Using this, Madron (1992) shows several examples of how this number can climb to very large values, as large as of the order of 10^9 for regular plants (17 nodes and 49 streams).

Cutsets

Definition: Given a graph, a cutset is a set of edges (streams) that, when eliminated, separates the graph into two disjoint subgraphs.

Consider for example the system of Figure 5.9. The set {S1 , S6 } is a cutset. When these streams are eliminated, two subgraphs are left, the original flowsheet (Figure 5.7) and the environmental node (Figure 5.12). The set {S2 , S4 } is another cutset. It separates node U2 from all the other nodes. Finally, the set {S2 , S5 } is not a cutset because it does not separate the graph into two subgraphs; that is, after elimination of these two streams, all the units of the graph remain connected.
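Both notions above can be checked computationally. The Python sketch below assumes the stream-unit incidence of Figure 5.9 implied by the cutsets discussed in this section (the orientation of the streams is an assumption). It verifies that the unmeasured set {S3, S4, S5, S6} of Figure 5.10 is a spanning tree, and counts all spanning trees through det(A A^T), where A is the incidence matrix with the environment row deleted (the matrix-tree theorem):

```python
import numpy as np

# Augmented graph of Figure 5.9: stream -> (from node, to node), orientation assumed.
EDGES = {"S1": ("ENV", "U1"), "S2": ("U1", "U2"), "S3": ("U1", "U3"),
         "S4": ("U2", "U4"), "S5": ("U3", "U4"), "S6": ("U4", "ENV")}
NODES = ["U1", "U2", "U3", "U4", "ENV"]

def is_spanning_tree(streams):
    """A spanning tree has |V| - 1 edges and no cycle (union-find check)."""
    if len(streams) != len(NODES) - 1:
        return False
    parent = {n: n for n in NODES}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for s in streams:
        a, b = (find(x) for x in EDGES[s])
        if a == b:                 # the edge would close a cycle
            return False
        parent[a] = b
    return True

# Reduced incidence matrix A: one row per unit (ENV row deleted), one column per stream.
A = np.zeros((len(NODES) - 1, len(EDGES)))
for j, (s, (u, v)) in enumerate(EDGES.items()):
    for node, sign in ((u, 1.0), (v, -1.0)):
        if node != "ENV":
            A[NODES.index(node), j] = sign

n_trees = round(np.linalg.det(A @ A.T))   # matrix-tree theorem
print(is_spanning_tree({"S3", "S4", "S5", "S6"}), n_trees)
```

For this small graph the count is 12; Madron's point is that for industrial flowsheets the same determinant explodes, which is why direct enumeration of spanning trees is avoided.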

FIGURE 5.12. Cutsets of the system in Figure 5.9.


FIGURE 5.13. A cut that is not a cutset.

In turn, the set {S1 , S3 , S5 , S6 } (Figure 5.13) is not a cutset, because the elimination of all these streams leaves three disjoint sets of units, {ENV}, {U3 } and {U1 , U2 , U4 }. In fact, this set is the union of two cutsets, {S1 , S6 } and {S3 , S5 }.

Remark 5: A cutset corresponds to a set of variables with which a material balance involving a certain number of units can be written. This was pointed out by Kretsovalis and Mah (1987b) in the context of process systems. A cutset is, by construction, a set of streams that connects two subsystems of the graph. Thus, because no stream other than the ones of the cutset leaves or enters these subsystems, the sum of all the flows of the cutset (with appropriate signs) should be equal to zero. This is none other than a material balance.

Definition: A cutset is estimable if all but at most one of its streams are measured.

This definition relates to the estimability of the variables in the cutset. If all the variables in the cutset are measured, then all of them are redundant, with degree of redundancy one. If exactly one variable is unmeasured, then it is observable, and all the measured variables are nonredundant. Thus, an estimable cutset is a cutset in which all the variables have a degree of estimability of at least one.

Definition: An estimable cutset is redundant if all its streams are measured.

Property: The number of cutsets containing a variable Si is equal to the number of material balances that can be written involving variable Si. This property is self-evident.

Lemma 5.4: The maximum possible degree of estimability of an unmeasured variable Si is given by the number of estimable cutsets containing Si.

Proof: Consider all the cutsets containing variable Si, and consider any nonestimable cutset among them. If only such a cutset is considered, the degree of estimability of Si is zero. Thus, the elimination of any variable of this cutset, except Si,


has no effect on the estimability of Si. In fact, the estimability of Si is due only to estimable cutsets. Assume now that Si belongs to n estimable cutsets K1(i), K2(i), . . . , Kn(i). Assume first that K1(i) ∩ K2(i) ∩ ··· ∩ Kn(i) = {Si}. Then, the elimination of a measurement belonging to K1(i) makes this cutset nonestimable, but it has no effect on the observability of Si, because observability will still be guaranteed by the rest of the estimable cutsets. In fact, one variable per cutset will have to be eliminated to render Si unobservable. Thus, Si has degree of estimability n. However, if the intersection of some subset of estimable cutsets contains at least one more variable in addition to Si, then this subset of cutsets can be made nonestimable by eliminating just one measured variable belonging to such an intersection. Thus, if Si is not measured, the maximum possible degree of estimability cannot be larger than the number of estimable cutsets. Q.E.D.

Lemma 5.5: The maximum possible degree of estimability of a measured variable Si is given by the number of redundant cutsets containing Si plus one.

Proof: Assume first that, out of the n estimable cutsets, the first m are redundant. Consider again the case where K1(i) ∩ K2(i) ∩ ··· ∩ Kn(i) = {Si}. The elimination of a measurement belonging to Kj(i) (j ≤ m) makes this cutset nonredundant, and therefore the observability of Si does not change. The elimination of Si itself also has no effect on the observability of this variable, because it will still be guaranteed by the rest of the measurements of the cutsets. In fact, two variables per redundant cutset will have to be eliminated to render Si unobservable. In general, if the measurement of Si is eliminated, then the m redundant cutsets become nonredundant, and to make Si unobservable one additional measurement per redundant cutset will have to be eliminated. Q.E.D.
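For small systems the degree of estimability itself can be obtained by brute force: it is the size of the smallest set of sensor failures that leaves the variable unobservable. A minimal Python sketch, using as data the five cutsets that contain stream S1 in the graph of Figure 5.9 (they are listed later in this section):

```python
from itertools import combinations

# Cutsets containing S1 in the augmented graph of Figure 5.9.
CUTSETS_S1 = [{"S1", "S6"}, {"S1", "S2", "S3"}, {"S1", "S2", "S5"},
              {"S1", "S4", "S3"}, {"S1", "S4", "S5"}]

def observable(measured):
    # S1 (unmeasured) is observable if some cutset containing it has all
    # of its other streams measured (an estimable cutset).
    return any(k - {"S1"} <= measured for k in CUTSETS_S1)

def degree_of_estimability(measured):
    # Smallest number of lost measurements that makes S1 unobservable.
    for n in range(1, len(measured) + 1):
        for lost in combinations(sorted(measured), n):
            if not observable(measured - set(lost)):
                return n
    return len(measured) + 1  # cannot be made unobservable by failures

print(degree_of_estimability({"S2", "S3", "S4", "S5", "S6"}))  # -> 3
print(degree_of_estimability({"S2", "S3", "S6"}))              # -> 2
```

The two printed values reproduce the examples worked out below: three failures (S2, S4, S6) suffice when everything but S1 is measured, and two when xM = {S2, S3, S6}.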
Remark: The maximum degree of estimability is an upper bound on the degree of estimability. As the examples below show, this does not mean that this degree of estimability can actually be achieved for a particular flowsheet.

Property: The degree of estimability of an unmeasured variable Si whose estimable cutsets have only Si as their intersection is equal to the number of these estimable cutsets. If the variable is measured, then its degree of estimability is the number of estimable and redundant cutsets plus one.

This property is actually a corollary of the two previous lemmas. To illustrate the above properties, consider variable S1 in the system of Figure 5.9. All the cutsets containing S1 are: K1(S1) = {S1 , S6 }, K2(S1) = {S1 , S2 , S3 }, K3(S1) = {S1 , S2 , S5 }, K4(S1) = {S1 , S4 , S3 }, K5(S1) = {S1 , S4 , S5 }. All these cutsets are shown in Figure 5.14. Assume all variables except S1 are measured. This set of measurements makes all the cutsets containing S1 estimable. In particular, a deletion


FIGURE 5.14. Cutsets containing stream S1.

of one variable per cutset will certainly make S1 unobservable. However, the minimum number of variables needed to render S1 unobservable is even smaller: the elimination of the measurements of variables S2 , S4 and S6 renders S1 unobservable. Thus, in this case, the degree of estimability of S1 is E1 = 3, lower than the bound of five given by the number of estimable cutsets. However, if xM = {S2 , S3 , S6 }, then only K1(S1) and K2(S1) are estimable cutsets. Thus, in this last case, the bound on the degree of estimability of S1 is two, and indeed this is the minimum number of variables required to render it unobservable. If in the same example all variables are measured, then all cutsets are redundant, and the bound on the degree of estimability is six. However, the degree of estimability of S1 is four: elimination of S1 , S2 , S4 and S6 is enough to render S1 unobservable. However, if xM = {S1 , S2 , S3 , S6 }, then only K1(S1) and K2(S1) are redundant cutsets. Now the maximum degree of estimability is three, which is indeed the degree of estimability of S1.

Fundamental Cutsets

Consider the canonical form of the incidence matrix, D0 = [I D*]. This matrix contains as many rows as there are units in the system. Thus, it represents as many linearly independent balance equations as it is possible to write for the system. In other words, it contains nU cutsets, where nU is the number of units. We call these fundamental cutsets.

Definition: The nonzero entries of each row of the canonical matrix represent the fundamental cutsets of the system.

The relation between fundamental cutsets and spanning trees is that a fundamental cutset contains exactly one branch of the spanning tree. This property is well known.
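The bounds just illustrated are easy to verify computationally. A small Python sketch, again taking the five cutsets of S1 as data, counts the estimable cutsets for two of the measurement sets above:

```python
# Cutsets containing S1 in the graph of Figure 5.9 (from the text).
CUTSETS_S1 = [{"S1", "S6"}, {"S1", "S2", "S3"}, {"S1", "S2", "S5"},
              {"S1", "S4", "S3"}, {"S1", "S4", "S5"}]

def n_estimable(measured, target="S1"):
    # A cutset containing the unmeasured target is estimable when every
    # one of its other streams is measured.
    return sum(all(s in measured for s in k - {target}) for k in CUTSETS_S1)

print(n_estimable({"S2", "S3", "S4", "S5", "S6"}))  # bound of five
print(n_estimable({"S2", "S3", "S6"}))              # bound of two
```

The counts (5 and 2) are only upper bounds; as the text shows, the actual degrees of estimability are 3 and 2, respectively.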


Determination of Cutsets

Because the rows of the canonical matrix represent a set of linearly independent cutsets of the system, they are a basis of the linear space of vectors representing linear combinations of balance equations. Because a cutset is a balance equation, all cutsets are included in this space. However, cutsets are represented by vectors whose entries are restricted to +1, −1 or 0 and whose elimination leaves exactly two disjoint subgraphs. Thus, to find all the cutsets we resort to the following procedure:

(1) Create the cutset list by putting all fundamental cutsets in the list. Set the counter k = 2.
(2) Create all linear combinations of k fundamental cutsets, restricting the coefficients of such combinations to the numbers 1 and −1. These linear combinations have to be such that the absolute values of the entries of the resulting vector are binary (0,1). If the result has any other existing cutset as a subset, eliminate this result. Otherwise, include it in the list of cutsets.
(3) k = k + 1
(4) If k < nU, perform step (2) (nU is the number of units).

To find all the cutsets that contain a specific variable, step (1) is modified to include only the row that contains the variable of interest. A canonical form of the incidence matrix can always be constructed such that the variable of interest has only one nonzero entry in its corresponding column, that is, it is included in the identity matrix. Other methods to obtain cutsets exist (Tsukiyama and Verma, 1980; Fong and Buzacott, 1987).

Consider the system of Figure 5.9. Assume that one wants to determine all the cutsets containing stream S3. Then we start from Equation (5.45). The following combinations of rows have to be explored: (1 ± 2), (1 ± 3), (1 ± 4), (1 ± 2 ± 3), (1 ± 2 ± 4), (1 ± 3 ± 4). The fundamental cutset containing S3 is the first row in the cutset list, that is, K1(S3) = {S1 , S2 , S3 }.
If the fundamental cutset containing the variable of interest is not in the first row, then a simple permutation of rows accomplishes the objective. Of all the linear combinations of two rows the following are successful.

• Row one plus row two (1 + 2). The new cutset is K2(S3) = {S1 , S3 , S4 }.
• Row one minus row three (1 − 3). The new cutset is K3(S3) = {S3 , S5 }.
• Row one minus row four (1 − 4). The new cutset is K4(S3) = {S2 , S3 , S6 }.

As one proceeds to perform all the linear combinations of three cutsets, one obtains five combinations whose entries are all 0, 1 or −1. They are: (1 + 2 − 3), (1 − 2 − 3), (1 + 2 − 4), (1 − 3 + 4), and (1 − 3 − 4). All of these,


FIGURE 5.15. Cutsets containing S3.

except (1 + 2 − 4), contain cutset K3(S3) as a subset and, therefore, should be disregarded. The combination (1 + 2 − 4) renders the cutset K5(S3) = {S3 , S4 , S6 }. These cutsets are illustrated in Figure 5.15.

Efficiency of Estimability of Variables

We are interested in the ability of the sensor network to provide the value of a certain variable even when a certain number of sensors fail, regardless of their position. As described above, this ability is given by the degree of estimability, which is equal to the number of estimable cutsets whose intersection is the variable in question. Assume now that no other cutset exists that could be made estimable such that its intersection with the existing estimable cutsets is only variable Si. In such a case, we say that the maximum efficiency of the installed measurements has been obtained. Any other cutset made estimable will increase the number of measurements, but it will not increase the estimability of the variable. We now formally define this efficiency.

Definition: The cutset efficiency of estimability of a variable is given by the quotient of the degree of estimability and the maximum degree of estimability.

For example, consider again the case of Figure 5.7, where S1 is unmeasured and xM = {S2 , S3 , S6 }. Then, only two estimable cutsets contain variable S1, namely K1(S1) = {S1 , S6 } and K2(S1) = {S1 , S2 , S3 } (Figure 5.16). The intersection of these two cutsets is only S1; therefore, its degree of estimability is two. The maximum number of estimable cutsets one can obtain with three measurements is two. Thus, the maximum degree of estimability is two and the efficiency is one. If one adds measurements in S4 and S5, the number of estimable cutsets is now five (see Figure 5.14). However, the degree of estimability of S1 is three. In other words, the efficiency of estimability is now 3/5, smaller than before.

FIGURE 5.16. Fundamental cutsets of the system in Figure 5.10. ( ): measured flow rates.

Thus, when a variable has an efficiency of estimability of one, no unnecessary estimable cutsets have been used. In other words, any additional cutset made estimable by the addition of measurements may contribute to other goals, such as an increase in precision, but will not affect the degree of estimability.

We are now in a position to discuss another type of efficiency. Consider the case in which a certain unmeasured variable has a certain degree of estimability. Furthermore, assume that cutsets exist whose intersection with all the estimable cutsets that contain Si is {Si}. Then one can eliminate the measurements of one estimable cutset and introduce measurements in another of these nonintersecting cutsets to make it estimable and, therefore, maintain the same degree of estimability. This exchange of cutsets can continue until the minimum number of measurements is used. This motivates the following definition.

Definition: The minimum estimation cardinality of degree k of an unmeasured variable is the smallest number of measurements that are needed to obtain a degree of estimability k for the variable.

The minimum cardinality can thus be obtained by finding a combination of k cutsets whose intersection is only variable Si and whose union has the minimum number of variables. Once the cutsets that include the variable in question are known, this is a simple task: it is only required to enumerate the cutsets in increasing order of cardinality, pick the first k sets and count all the measurements involved. An efficiency, defined as the quotient between the actual number of measurements involved and the minimum needed, can also be introduced. Consider again the case of Figure 5.9, where S1 is unmeasured and xM = {S2 , S3 }. Then, only one estimable cutset exists, namely K2(S1) = {S1 , S2 , S3 }, and the degree of estimability of S1 is E1 = 1.
However, since K1(S1) ∩ K2(S1) = {S1}, the set xM = {S6} achieves the same degree of estimability; therefore, the minimum estimation cardinality of degree one for S1 is one (Figure 5.17).
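This search can be sketched by brute force in Python; the cutset list is the one for S1 in Figure 5.9, and for k = 1 the intersection condition is vacuous, so any single cutset qualifies:

```python
from itertools import combinations

CUTSETS_S1 = [{"S1", "S6"}, {"S1", "S2", "S3"}, {"S1", "S2", "S5"},
              {"S1", "S4", "S3"}, {"S1", "S4", "S5"}]

def min_estimation_cardinality(cutsets, target, k):
    """Smallest measurement set giving degree of estimability k for `target`:
    k cutsets intersecting only in `target`, with the smallest union."""
    best = None
    for combo in combinations(cutsets, k):
        inter = set.intersection(*combo)
        if target not in inter or (k > 1 and inter != {target}):
            continue
        union = set.union(*combo) - {target}
        if best is None or len(union) < len(best):
            best = union
    return best

print(min_estimation_cardinality(CUTSETS_S1, "S1", 1))  # -> {'S6'}
print(min_estimation_cardinality(CUTSETS_S1, "S1", 2))
```

For k = 1 it returns the single measurement {S6}, as in the text; for k = 2 the minimum union has three measurements.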


FIGURE 5.17. Estimable cutsets. ( ): measured flow rates.

NONLINEAR SYSTEMS

Component Balances

In addition to material balances, component balances can be performed. Assume there are p components. Then the system equations are now:

      dw_j^c / dt = A f_j^c          j = 1, . . . , p          (5.48)

      C f_j^c = 0                    j = 1, . . . , p          (5.49)

      Σ_{j=1}^{p} f_j^c = f                                    (5.50)

      Σ_{j=1}^{p} w_j^c = w                                    (5.51)

Typically, concentrations of species, and not component flows, are measured; thus, the relation between component flows and molar (or weight) fractions is needed. This relation is expressed as follows:

      f_j = f ⊗ c_j^f                                          (5.52)

where ⊗ indicates the Hadamard (element-by-element) product of two vectors, that is, a ⊗ b = [a1 b1, a2 b2, . . . , an bn]^T. Now that concentrations have been introduced, note that the set of Equations (5.49) contains equations corresponding to splitters. These equations can be substituted by equations stating the equality of concentrations across each splitter. The component


balance model becomes:

      dw/dt = A f                                              (5.53)

      C f = 0                                                  (5.54)

      d(w ⊗ c_j^w)/dt = A ( f ⊗ c_j^f )      j = 1, . . . , p  (5.55)

      C1 ( f ⊗ c_j^f ) = 0                   j = 1, . . . , p  (5.56)

      C2 c_j^f = 0                           j = 1, . . . , p  (5.57)

Equation (5.57) now represents the equality of concentrations in splitters. In addition, note that Equation (5.56) can also be written as follows:

      C1 ( f ⊗ c_j^f ) = (C1 ∗ f ) c_j^f = (C1 ∗ c_j^f ) f = 0      j = 1, . . . , p      (5.58)

where ∗ represents the Hadamard product of a matrix and a vector, defined row by row: if a_i is the i-th row of A, then A ∗ b is the matrix whose i-th row is a_i ⊗ b. In the case of steady state, the above system reduces to:

      M f = 0                                                  (5.59)

      M1 [ f ⊗ c_j ] = 0                     j = 1, . . . , p  (5.60)

      C2 c_j = 0                             j = 1, . . . , p  (5.61)

where the superscript f in c_j has been dropped, as it is no longer needed to distinguish concentrations of streams from concentrations in units. Also:

      M1 = [ A  ]                                              (5.62)
           [ C1 ]

      M  = [ A ]                                               (5.63)
           [ C ]

To illustrate the form of these matrices, we now resort to the systems in Figures 5.18(a) and 5.18(b). These systems represent a flash unit followed by a mixer in Figure 5.18(a), and by a splitter in Figure 5.18(b). The matrices M for both systems are:

             S1   S2   S3   S4   S5                S1   S2   S3   S4   S5
      Ma = [  1   -1   -1    0    0 ]       Mb = [  1   -1   -1    0    0 ]       (5.64)
           [  0    1    0   -1    1 ]            [  0    1    0   -1   -1 ]


FIGURE 5.18. A flash and a mixer.

For the system in Figure 5.18(a) we have M1 = Ma, and matrix C2 does not exist because there is no splitter. However, for the system of Figure 5.18(b), matrices M1 and C2 are:

             S1   S2   S3   S4   S5                S1   S2   S3   S4   S5
      M1 = [  1   -1   -1    0    0 ]       C2 = [  0    1    0   -1    0 ]       (5.65)
                                                 [  0    1    0    0   -1 ]

Energy Balances

In addition to material and component balances, energy balances can be performed. The energy balance equations are:

      d(w ⊗ h^w)/dt = A [ f ⊗ h^f ]                            (5.66)

      C1 [ f ⊗ h^f ] = 0                                       (5.67)

      C2 h^f = 0                                               (5.68)

where h^f is the vector of stream specific enthalpies. Typically, temperatures are measured. Thus, the relation between enthalpy flows and temperature is needed. This relation is sometimes expressed, assuming constant and known specific heats, as follows:

      h^f = cp ⊗ T                                             (5.69)

Thus, the same expressions can be used for the steady-state version of component balances and energy balances. In particular, for steady-state systems we have:

      M f = 0                                                  (5.70)

      M1 [ f ⊗ (cp ⊗ T) ] = 0                                  (5.71)

      C2 T = 0                                                 (5.72)


The above analysis does not include heat exchangers, because they obey different balances: for each heat exchanger one has to write two algebraic material balance equations (one per side) and one algebraic heat balance. Thus, assume

      M = [ M_D ]                                              (5.73)
          [ M_I ]

where M_I corresponds to heat exchangers and any other equipment where heat is exchanged indirectly, and M_D corresponds to the rest of the units in the system, where heat is exchanged directly, except splitters. We assume for simplicity that the balance equations for units where heat is indirectly transferred do not contain accumulation terms. Then the material and energy equations corresponding to units where indirect heat transfer takes place are written as follows:

      M_I f = 0                                                (5.74)

      M_{1,I} [ f ⊗ h^f ] = 0                                  (5.75)

where matrix M_{1,I} contains one row per heat exchanger, that is, half the rows of M_I. We illustrate the structure of this matrix through the following small example. Consider one heat exchanger (Figure 5.19), with S1 and S3 the inlet and outlet of one side and S2 and S4 those of the other. In this case the corresponding matrices are:

                S1   S2   S3   S4
      M_I   = [  1    0   -1    0 ]                            (5.76)
              [  0    1    0   -1 ]

      M_{1,I} = [ 1   -1   -1    1 ]                           (5.77)

which will render one energy balance.

FIGURE 5.19. A heat exchanger.


Observability Analysis of Bilinear Systems

We will consider, for simplicity, the steady-state case. The dynamic case can be treated the same way as in the case of linear systems, that is, by performing the same manipulations on D instead of M. Several articles have studied this type of system in detail. A simple method, based on Gauss-Jordan factorization, is presented first; a discussion of other methods follows. Consider a canonical representation for the bilinear case. The analysis is restricted to a single component. It also applies to the multicomponent case, provided that the measurement of concentrations, when performed, is done on all components of interest. The case of heat exchange will be analyzed as an extension. Before proceeding with the analysis, a preliminary classification is required. We classify the set of streams into the following sets:

      S = S^MU ∪ S^UU ∪ S^UM ∪ S^MM                            (5.78)

where S^MU is the set of streams where the concentration is measured and the flow rate is unmeasured, S^UU is the set of streams where neither flow rate nor concentration is measured, S^UM is the set of streams where the concentration is unmeasured and the flow rate is measured, and S^MM is the set of streams where both the concentration and the flow rate are measured. We also consider as measured the concentrations of all streams connected to a splitter where at least one concentration is measured. We now partition matrices M, M1 and C2 into four parts following the same classification:

      M1 = [ M1^MU   M1^UU   M1^UM   M1^MM ]                   (5.79)

      C2 = [ C2^MU   C2^UU   C2^UM   C2^MM ]                   (5.80)

We now write the system of Equations (5.59) through (5.61) as follows:

      M^MU f^MU + M^UU f^UU = -( M^UM f^UM + M^MM f^MM )                                              (5.81)

      (M1^MU ∗ c^MU) f^MU + M1^UU ( f^UU ⊗ c^UU ) + (M1^UM ∗ f^UM) c^UM = -M1^MM ( f^MM ⊗ c^MM )      (5.82)

      C2^UM c^UM + C2^UU c^UU = -( C2^MU c^MU + C2^MM c^MM )                                          (5.83)

In these equations, the measured quantities have been placed on the right-hand side, leaving the unmeasured quantities on the left-hand side. We rewrite these


equations in matrix form as follows:

      [ M^MU           M^UU   0       0             0     ] [ f^MU         ]       [ M^UM f^UM + M^MM f^MM     ]
      [ M1^MU ∗ c^MU   0      M1^UU   M1^UM ∗ f^UM  0     ] [ f^UU         ] = -   [ M1^MM ( f^MM ⊗ c^MM )     ]     (5.84)
      [ 0              0      0       C2^UM         C2^UU ] [ f^UU ⊗ c^UU  ]       [ C2^MU c^MU + C2^MM c^MM   ]
                                                            [ c^UM         ]
                                                            [ c^UU         ]

Thus, the system will be observable if the rank of the above matrix is equal to the dimension of the vector of unknowns. Otherwise, some variables will not be observable. Gauss-Jordan factorization, much in the same way as described above for linear systems, can be performed on the system matrix obtained from Equation (5.84). The result needs further analysis. Indeed, among the unknowns there will be a set of unobservable variables, some of which are products ( f^UU ⊗ c^UU ). However, some values of f^UU and/or c^UU in these products may be observable, which will render the other factor observable as well. Therefore, an additional analysis has to be performed to obtain the final results.

To illustrate the procedure, consider a flash unit followed by a mixer, as shown in Figure 5.18(a). Measured and unmeasured flow rates and compositions of one component are shown in Table 5.1.

TABLE 5.1. Measured Variables in Figure 5.18.

              Flow Rate     Composition
      S1          M              M
      S2          M              M
      S3          —              —
      S4          —              M
      S5          —              M

Matrix M = M1 is now decomposed into the following matrices (matrix M^UM does not exist):

               S4   S5              S3                S1   S2
      M^MU = [  0    0 ]   M^UU = [ -1 ]   M^MM = [    1   -1 ]     (5.85)
             [ -1    1 ]          [  0 ]          [    0    1 ]


Thus, the observability analysis can be done by analyzing the following matrix,

            S4    S5    S3    S3⊗
         [   0     0   -1     0   |  f2 - f1      ]
         [   1    -1    0     0   |  f2           ]     (5.86)
         [   0     0    0     1   |  f1c1 - f2c2  ]
         [  c4   -c5    0     0   |  f2c2         ]

In this matrix, S3⊗ indicates the column that corresponds to matrix M1^UU, that is, the column corresponding to the product ( f3 ⊗ c3 ), whereas the last column is the right-hand side of Equation (5.84). The canonical form is:

            S3    S5    S3⊗   S4
         [   1     0     0     0   |  f1 - f2                  ]
         [   0     1     0     0   |  f2 (c2 - c4)/(c4 - c5)   ]     (5.87)
         [   0     0     1     0   |  f1c1 - f2c2              ]
         [   0     0     0     1   |  f2 (c2 - c5)/(c4 - c5)   ]

which gives the following correct results:

      f3 = f1 - f2                              (5.88)

      f5 = f2 (c2 - c4)/(c4 - c5)               (5.89)

      f3 c3 = f1 c1 - f2 c2                     (5.90)

      f4 = f2 (c2 - c5)/(c4 - c5)               (5.91)
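As a numeric sanity check, the linear system of Equation (5.86) can be solved directly. The values below are hypothetical measurements (not from the text), chosen to be consistent with the flowsheet of Figure 5.18(a):

```python
import numpy as np

# Hypothetical measured values for Figure 5.18(a).
f1, c1 = 10.0, 0.5        # S1: flow and concentration measured
f2, c2 = 4.0, 0.3         # S2: flow and concentration measured
c4, c5 = 4.0 / 15.0, 0.2  # S4, S5: only concentrations measured

# Coefficient matrix of Equation (5.86); unknowns are [f4, f5, f3, f3*c3].
A = np.array([[0.0,  0.0, -1.0, 0.0],
              [1.0, -1.0,  0.0, 0.0],
              [0.0,  0.0,  0.0, 1.0],
              [c4,  -c5,   0.0, 0.0]])
b = np.array([f2 - f1, f2, f1 * c1 - f2 * c2, f2 * c2])

f4, f5, f3, f3c3 = np.linalg.solve(A, b)
print(f4, f5, f3, f3c3 / f3)   # c3 is recovered from the product, as in the text
```

The solution agrees with Equations (5.88)-(5.91): f3 = 6, f4 = 6, f5 = 2, and c3 = (f1c1 - f2c2)/f3.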


The analysis can be extended further by realizing that from Equations (5.90) and (5.88) one can calculate c3. We now repeat the same analysis for the case of Figure 5.18(b) and the same measurements as in the previous case. The submatrices of M, M1 and C2 are:

               S4   S5              S3                S1   S2
      M^MU = [  0    0 ]   M^UU = [ -1 ]   M^MM = [    1   -1 ]     (5.92)
             [ -1   -1 ]          [  0 ]          [    0    1 ]

                S4   S5               S3                  S1   S2
      M1^MU = [  0    0 ]   M1^UU = [ -1 ]   M1^MM = [     1   -1 ]     (5.93)

                S4   S5               S1   S2
      C2^MU = [ -1    0 ]   C2^MM = [  0    1 ]                         (5.94)
              [  0   -1 ]           [  0    1 ]

Note that C2^UU, M1^UM and C2^UM do not exist. Thus, the observability analysis can be done by analyzing the following matrix:

            S4    S5    S3    S3⊗
         [   0     0   -1     0   |  f2 - f1      ]
         [  -1    -1    0     0   |  -f2          ]
         [   0     0    0    -1   |  f2c2 - f1c1  ]     (5.95)
         [   0     0    0     0   |  c4 - c2      ]
         [   0     0    0     0   |  c4 - c5      ]

which has the following canonical form:

            S3    S3⊗   S4    S5
         [   1     0     0     0   |  f1 - f2      ]
         [   0     1     0     0   |  f1c1 - f2c2  ]
         [   0     0     1     1   |  f2           ]     (5.96)
         [   0     0     0     0   |  c4 - c2      ]
         [   0     0     0     0   |  c4 - c5      ]


FIGURE 5.20. Two heat exchangers in series.

From this matrix, we conclude that only f 3 and f 3 c3 are observable, whereas, f 4 and f 5 are unobservable. A final analysis renders c3 observable. In addition, the last two rows of the matrix indicate that the concentrations of S4 and S5 are redundant. Presence of Heat Exchangers In the case where energy balances are being performed surrounding heat exchangers, the procedure can be applied, but it requires a slight modification. The equations equivalent to Equations (5.81) and (5.82) are: ¡ ¢ M IMU f MU + M IUU f UU = − M IUM f 2UM + M IMM f 2MM ¡ UM ¢ ¢ ¡ MU UU ( f UU ⊗ h UU + M1,I ∗ f UM h UM M1,I ∗ h MU f MU + M1,I MM = −M1,I ( f MM ⊗ h MM )

(5.97)

(5.98)

We illustrate these equations through an example. Consider the two heat exchangers in Figure 5.20. Table 5.2 summarizes which variables are measured. TABLE 5.2. Measured Variables in Figure 5.20.

S1 S2 S3 S4 S5 S6 S7

Flow Rate

Temperature

M

M M

M

M M M M

Nonlinear Systems

119

The material and energy balance equations are: F1 = F2 = F3

(5.99)

F4 = F5

(5.100)

F6 = F7

(5.101)

F1 (h 2 − h 1 ) = F4 (h 4 − h 5 )

(5.102)

F2 (h 3 − h 2 ) = F6 (h 6 − h 7 )

(5.103)

Equations (5.102) and (5.103) can be rewritten in a form of products of the flow rates and enthalpies of the same streams. Matrix M I and M1,I for this system are:  S1 S2 1 −1  1 M1 =  

M1,I

· S1 S2 1 −1 = 1

S3 −1

S3 −1

S4 1

S5 −1

S4 S5 −1 1

S6

S7    

1

(5.104)

−1

S6

S7 ¸

−1

1

(5.105)

Application of the above procedure renders all variables in the system observable. The theory of observability of bilinear systems started with the articles by Vaclavek (1969), Vaclavek et al. (1976a, 1976b) and Vaclavek and Loucka (1976) who first offered the definitions of redundancy and observability and provided a graph-based necessary and sufficient conditions of unobservability. All this work has some limitations because of the simplifying assumptions made (Crowe, 1989). Romagnoli and Stephanopoulos (1980), proposed the first classification method based on the solvability of the nodal balance equations by using the output set assignment algorithm (Stadherr et al., 1974). Joris and Kalitventzeff (1987) also proposed a structural rearrangement of the incidence matrix, but their method may fail when loops occur leading to singularities Later, Crowe et al. (1986) proposed the application of matrix projection, and Kretsovalis and Mah (1987b, 1988a, 1988b) made use of graph theory. Finally, Maquin et al. (1991) and S´anchez and Romagnoli (1996) make use of orthogonal transformations, specifically QR decomposition.

120

VARIABLE CLASSIFICATION

FULL NONLINEAR SYSTEMS A general nonlinear system of equations can be classified by using reduction to canonical forms too. One way that also works for bilinear systems consists of linearizing the system around the design or operating point. Once this is performed, a Gauss-Jordan factorization or a Q-R decomposition can be done easily. We concentrate only on steady state systems. Consider the model f (x) = 0. Linearization around its expected operation point yields Jx ∼ =d

(5.106)

where matrix J represents the Jacobian of f (x) around x0 and d is the corresponding constant. Matrix J may be partitioned in submatrixes JM and JU , which are related to the vector of measured variables (x M ) and unmeasured parameters and state variables (xU ), respectively ·

[JM

¸ xM JU ] =d xU

(5.107)

We will illustrate the use of Q-R factorization in the following example. The flash tank model of Figure 5.21 is taken from Van Winkle (1967). F1 = F2 + F3 X

F1 yi1 = F2 yi2 + F3 yi3 X X yi2 = yi3 = 1 yi1 = yi3 = ηi yi2 Pi (sat)/P

(5.108) (5.109) (5.110) (5.111)

Assume the following structure for vector z: x T = [F1 y11 y21 y31 F2 y12 y22 y32 F3 y13 y23 y33 P η1 η2 η3 ] F3, y3 T, P, h F1, y1

F2, y2 FIGURE 5.21. A flash unit.

(5.112)

121

Full Nonlinear Systems

TABLE 5.3. Data for Figure 5.21.

                                      Feed     Vapor    Liquid
    Stream flow (mol/hr)              100      49.50    50.50
    Composition (molar fraction)
      Ethanol                         0.2      0.233    0.167
      1-Propanol                      0.5      0.436    0.563
      2-Propanol                      0.3      0.331    0.270
    Temperature (°C)                  137
    Pressure (mmHg)                   3600
    Vaporization efficiency (%)       95

    Saturation pressures (mmHg): Ethanol 5287, 1-Propanol 2932, 2-Propanol 4651.
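The tabulated data can be checked against the model equations directly; the short script below (illustrative only) verifies the overall balance of Equation (5.108) and the vaporization-efficiency relation of Equation (5.111) for the three components of Table 5.3.

```python
# Consistency check of Table 5.3 against Equations (5.108) and (5.111).
P, eta = 3600.0, 0.95                 # pressure (mmHg), vaporization efficiency
psat  = {"ethanol": 5287.0, "1-propanol": 2932.0, "2-propanol": 4651.0}
y_liq = {"ethanol": 0.167,  "1-propanol": 0.563,  "2-propanol": 0.270}
y_vap = {"ethanol": 0.233,  "1-propanol": 0.436,  "2-propanol": 0.331}
F1, F2, F3 = 100.0, 50.50, 49.50      # feed, liquid and vapor flows (mol/hr)

assert abs(F1 - (F2 + F3)) < 1e-9     # Eq. (5.108): F1 = F2 + F3
for c in psat:                        # Eq. (5.111): yi3 = eta * yi2 * Pisat / P
    predicted = eta * y_liq[c] * psat[c] / P
    print(f"{c}: predicted {predicted:.3f}, tabulated {y_vap[c]:.3f}")
```

Each predicted vapor composition reproduces the tabulated value to within rounding, so the data set is internally consistent with the efficiency relation.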

We now assume that the following variables are measured:

    xM = {F1, F2, y12, F3, y13, P}                               (5.113)

Then, the corresponding matrices (JM, JU and d) of the linearized model around the nominal operation point are the following:

(columns of JM ordered as xM = [F1 F2 y12 F3 y13 P]; rows ordered as Equations (5.108)-(5.111))

    JM = [   1      −1       0      −1       0      0 ]
         [  0.199  −0.167  −50.5   −0.233  −49.5    0 ]
         [  0.533  −0.595    0     −0.47     0      0 ]
         [  0.267  −0.238    0     −0.297    0      0 ]
         [   0       0       1       0       0      0 ]          (5.114)
         [   0       0       0       0       1      0 ]
         [   0       0      −1.395   0       1      0 ]
         [   0       0       0       0       0      0 ]
         [   0       0       0       0       0      0 ]

(columns of JU ordered as xU = [y11 y21 y31 y22 y32 y23 y33 η1 η2 η3])

    JU = [   0     0     0     0      0      0      0      0      0      0    ]
         [ 100     0     0     0      0      0      0      0      0      0    ]
         [   0   100     0   −50.5    0    −49.5    0      0      0      0    ]
         [   0     0   100     0    −50.5    0    −49.5    0      0      0    ]
         [   0     0     0     1      1      0      0      0      0      0    ]   (5.115)
         [   0     0     0     0      0      1      1      0      0      0    ]
         [   0     0     0     0      0      0      0     −0.245  0      0    ]
         [   0     0     0   −0.79    0      1      0      0     −0.485  0    ]
         [   0     0     0     0    −1.247   0      1      0      0     −0.307]

    dT = [ 0.033  0.001  0.001  0.001  1  1  0  0  0 ]           (5.116)


Using Q-R orthogonal factorization of JU, that is, JU Π = [Q1 Q2] [R11 R12; 0 0], one obtains the (9 × 8) matrix Q1 [Equation (5.117)], the (9 × 1) column

    Q2 = [−0.0082  −0.0082  −0.0082  0.8164  −0.412  −0.404  0  0  0]T   (5.118)

the (8 × 8) upper triangular matrix R11, whose leading diagonal entries equal −100.005 [Equation (5.119)], and the (8 × 2) matrix R12 [Equation (5.120)].

The (10 × 10) permutation matrix Π [Equation (5.121)] reorders the unmeasured variables so that the columns associated with the rank-deficient part of JU appear last. After using the permutation matrix one discovers that y33 and η3 are unobservable, that is, xU,2 = {y33, η3}. We now inspect the matrix R11⁻¹ R12 to determine which other variables are unobservable:

    R11⁻¹ R12 = [  0.9     −0.1246 ]
                [ −0.9      0.1246 ]
                [ −0.802    0.2467 ]
                [  1        0      ]                             (5.122)
                [  0.802   −0.2467 ]
                [  0.7561   0.4023 ]

Thus, the only two nonzero rows correspond to variables y11 and η1. These are the only two observable variables. The rest are either measured or unobservable.

A word of caution must be included concerning the above procedures. The Gauss-Jordan factorization method, presented first for bilinear systems, requires additional manipulations after the canonical form of the matrix is obtained. In addition, in the case of component balances, the use of the normalization equation for multicomponent systems was not included, and the analysis for such systems was restricted to a special type of concentration measurements where all components of interest are measured. Extensions to complete systems have not been produced, although some discussion has occurred (Sánchez and Romagnoli, 1996). For fully nonlinear systems, the above-presented technique of linearization followed by observability analysis has been questioned (Sánchez et al., 1992), and alternatives have been proposed (Ponzoni et al., 1997).

NOMENCLATURE

A: incidence matrix
AR: incidence matrix for redundant systems
B: incidence matrix
BR: incidence matrix for redundant systems
cic: concentration of component i in all streams
C: incidence matrix
CR: incidence matrix for redundant systems
D: incidence matrix for linear dynamic systems
Di: submatrices of D
Ei: estimability of variable i
f: vector of flow rate of streams
fR: vector of redundant flow rate of streams
fic: flow of component c in stream i
Fi: flow rate of stream i
Gi: submatrices of the canonical matrix
h(x): measurement function
hf: enthalpy of material in streams
hw: enthalpy of material in units
I: identity matrix
J: Jacobian matrix
Kj: jth cutset of a graph
Kj(i): jth cutset containing variable i
M: incidence matrix in bilinear systems
Oi: degree of observability of variable i
P: projection matrix
P: pressure
Q: matrix in Q-R decomposition
Ri: degree of redundancy of variable i
Rij: submatrices of Q-R decomposition
Si: stream i
SMU: set of streams where the concentration is measured and the flow rate is unmeasured
SUU: set of streams where no flow rate or concentration is measured
SUM: set of streams where the concentration is unmeasured and the flow rate is measured
SMM: set of streams where both the concentration and the flow rate are measured
T: spanning tree
Ui: unit i
w: vector of hold-up of units
wic: hold-up of component c in unit i
x: vector of state variables
x̃: estimates of state variables x
xO: set of observable unmeasured variables
xUO: set of unobservable variables
xM: set of measured variables
xU: set of unmeasured variables
xR: set of measured and redundant variables
xNR: set of measured and nonredundant variables
yij: molar fraction of component i in stream j
z: measurement values corresponding to x

Greek Letters

ϑ(p, Si): set of all possible combinations of p measured variables, not including Si
ϑj(p, Si): jth element (combination) of ϑ(p, Si)
Π: permutation matrix in Q-R decomposition
Θ(p): set of all possible combinations of p measurements
Θj(p): jth element (combination) of Θ(p)

REFERENCES

Ali Y. and S. Narasimhan. Sensor Network Design for Maximizing Reliability of Linear Processes. AIChE J., 39, 5, 2237–2249 (1993).
Bagajewicz M. and M. Sánchez. Design and Upgrade of Nonredundant and Redundant Linear Sensor Networks. AIChE J., 45, 9, 1927–1939 (1999).
Chen W. K. Applied Graph Theory. North Holland Publ., Amsterdam (1971).
Crowe C. M. Reconciliation of Process Flow Rates by Matrix Projection. II. The Nonlinear Case. AIChE J., 32, 616–623 (1986).
Crowe C. M. Observability and Redundancy of Process Data for Steady State Reconciliation. Chem. Eng. Sci., 44, 12, 2909–2917 (1989).
Crowe C. M., Y. A. Garcia Campos and A. Hrymak. Reconciliation of Process Flow Rates by Matrix Projection. I. The Linear Case. AIChE J., 29, 818 (1983).
Even S. Graph Algorithms. Computer Science Press (1979).
Fong C. and J. Buzacott. An Algorithm for Symbolic Reliability Computation with Pathsets and Cutsets. IEEE Trans. Rel., R-36, 34 (1987).
Griffith E. W. and K. S. P. Kumar. J. Math. Anal. Appl., 35, 135 (1971).
Harary F. Graph Theory. Addison-Wesley (1972).
Joris P. and B. Kalitventzeff. Process Measurement Analysis and Validation. XVIII Congress on the Use of Computers in Chemical Engineering, CEF'87, 41–46 (1987).
Kalman R. E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng., ASME, 82D, 35 (1960).
Kou S. R., D. L. Elliot and T. J. Tarn. Inf. Contr., 22, 89 (1973).
Kretsovalis A. and R. S. H. Mah. Effect of Redundancy on Estimation Accuracy in Process Data Reconciliation. Chem. Eng. Sci., 42, 2115 (1987a).
Kretsovalis A. and R. S. H. Mah. Observability and Redundancy Classification in Multicomponent Process Networks. AIChE J., 33, 70–82 (1987b).
Kretsovalis A. and R. S. H. Mah. Observability and Redundancy Classification in Generalized Process Networks. I. Theorems. Comp. & Chem. Eng., 12, 7, 671–687 (1988a).
Kretsovalis A. and R. S. H. Mah. Observability and Redundancy Classification in Generalized Process Networks. II. Algorithms. Comp. & Chem. Eng., 12, 7, 689–703 (1988b).
Madron F. Process Plant Performance, Measurement Data Processing for Optimization and Retrofits. Ellis Horwood, West Sussex, England (1992).
Maquin D., G. Bloch and J. Ragot. Data Reconciliation of Measurements. Revue Diagnostic et Sûreté de Fonctionnement, 1, 2, 145–181 (1991).
Maquin D., M. Luong and J. Paris. Dependability and Analytical Redundancy. IFAC Symposium on On-Line Fault Detection in the Chemical Process Industries, Newcastle, UK (1995).
Ponzoni I., M. C. Sánchez and N. B. Brignole. A New Partitioning Algorithm for Classification of Variables in Process Plant Monitoring. AIChE Annual Meeting, Los Angeles, USA (1997).
Romagnoli J. and G. Stephanopoulos. On the Rectification of Measurement Errors for Complex Chemical Plants. Chem. Eng. Sci., 35, 5, 1067–1081 (1980).
Sánchez M. C., A. J. Bandoni and J. A. Romagnoli. PLADAT: A Package for Process Variable Classification and Plant Data Reconciliation. Comp. & Chem. Eng., S499–S506 (1992).
Sánchez M. and J. Romagnoli. Use of Orthogonal Transformations in Classification/Data Reconciliation. Comp. & Chem. Eng., 20, 483–493 (1996).
Singh S. N. Int. J. Syst. Sci., 6, 723 (1975).
Stadherr M., W. Gifford and L. Scriven. Efficient Solution of Sparse Sets of Design Equations. Chem. Eng. Sci., 29, 1025–1034 (1974).
Stanley G. M. and R. S. H. Mah. Observability and Redundancy in Process Data Estimation. Chem. Eng. Sci., 36, 259–272 (1981).
Stewart G. W. Introduction to Matrix Computations. Academic Press, New York (1973).
Swartz C. L. E. Data Reconciliation for Generalized Flowsheet Applications. American Chemical Society National Meeting, Dallas, TX (1989).
Tsukiyama R. and M. Verma. An Algorithm to Enumerate All Cutsets of a Graph in Linear Time per Cutset. J. Assoc. Comput. Mach., 27, 619 (1980).
Vaclavek V. Studies on System Engineering. III. Optimal Choice of the Balance Measurements in Complicated Chemical Systems. Chem. Eng. Sci., 24, 947–955 (1969).
Vaclavek V. and M. Loucka. Selection of Measurements Necessary to Achieve Multicomponent Mass Balances in Chemical Plants. Chem. Eng. Sci., 31, 1199–1205 (1976).
Vaclavek V., M. Kubicek and M. Loucka. Calculation of Material Balances for Chemical Engineering Systems with Due Allowance for Measurement Errors. Theor. Found. Chem. Eng., 9, 242–245 (1976a).
Vaclavek V., M. Kubicek and M. Loucka. Calculation of Material Balances for Chemical Engineering Systems with Due Allowance for Measurement Errors. Classification of Stream Parameters. Theor. Found. Chem. Eng., 10, 256–260 (1976b).
Van Winkle M. Distillation. McGraw-Hill, New York (1967).

CHAPTER 6

Design and Upgrade of Nonredundant and Redundant Sensor Networks

INTRODUCTION

The purpose of this chapter is to present methodologies for the grassroots design and the upgrade of sensor networks, with the goal of achieving a certain degree of estimability for specific variables. After reviewing sensor location goals, the first part of the chapter discusses the design of linear networks. Methods to design networks for bilinear systems are discussed afterward.

UPGRADE AND/OR DESIGN GOALS

When the value of all variables is of interest, the objective becomes to design a system for which all variables have a degree of estimability of at least one, that is, all unmeasured variables are observable and all measured variables are nonredundant. But one may not necessarily be interested in the value of all variables. Some may remain unobservable. Thus, a design goal should be estimability of order at least one in only the variables of interest. In the absence of other goals, estimability of order one should be sufficient. However, sensors may fail too often, and some reassurance that a variable will remain observable when sensors fail may be desired. Requesting estimability of higher order is one simple way of addressing this concern. It might also seem that it is not necessary to distinguish whether a variable needs to be measured or not, as long as it has a certain degree of estimability. However, certain variables cannot be measured because there is no space to install the instrument, there is a lack of proper access for calibration, etc. Cost is excluded, because this issue will be handled by the design procedure itself. In some other cases, the measurement of some variables may be made compulsory.


Aside from the economic reasons, which will be handled by the design procedure, reasons for such a compulsory choice can be related to ease and/or low frequency of maintenance. They can even be political, because many production accounting personnel feel uneasy about not measuring certain variables they consider of importance. These fears may not be unfounded, because the degree of estimability does not take into account the frequency of failure of each sensor, and, therefore, one very reliable instrument may fail less often than, for example, a set of two instruments. These issues will be addressed in detail in future chapters, where they can be assessed mathematically using the concept of instrument reliability.

DESIGN FOR ESTIMABILITY

If for each variable xi there is only one potential measuring device with associated cost ci, then the total cost is given by:

    C(q) = Σ∀i ci qi                                             (6.1)

where q is a vector of binary variables defined by:

    qi = { 1 if xi is measured
         { 0 otherwise                                           (6.2)

Then, the design of the sensor network is an optimization problem that can be written as follows (Bagajewicz and Sánchez, 1999):

    Min  Σ∀i∈M1 ci qi
    s.t. Ek(q) ≥ Ek*      ∀k ∈ ME                                (6.3)
         qi ∈ {0,1}       ∀i ∈ M1

where Ek is the degree of estimability of variable k, Ek* is the minimum degree of estimability imposed, ME is the set of variables with a required degree of estimability and M1 is the set of variables where measurements can be placed. The inequality in the constraint of estimability is essential, because sometimes, to achieve a certain degree of estimability in one variable, a larger degree of estimability than the threshold may be required in others. If equalities are used instead of inequalities, then the problem may be overly constrained and become infeasible.


One needs to use the above model with caution. Even though a variable can have a high degree of estimability, its value may be obtained through differences of large numbers, and, therefore, the precision of such a variable may not be satisfactory. Such shortcomings come from the fact that estimability cannot directly replace precision goals, nor can it fully replace reliability targets. Thus, in the context of the above model, if these problems arise, one can increase the degree of estimability required for variables that exhibit inadequate precision. This will result in the activation of more cutsets and the consequent improvement of precision and/or reliability. Different types of problems arise depending on the degrees of estimability required for the variables. All these aspects will be explored next. In addition, if for some reason a measurement of a certain variable should be forbidden (because of safety, space and other constraints), the corresponding binary variable qi can be set to zero a priori. Similarly, if the variable must be measured compulsorily, then the corresponding binary variable qi can be set to one a priori. This is actually one case of instrumentation upgrade in which the existing instrumentation is not changed and only additions of new instrumentation are considered.

Several approaches have been proposed to address this problem. In principle, the problem is a mixed integer nonlinear programming problem (MINLP), but it has the added inconvenience that the functions Ei(q) cannot be explicitly represented by formal mathematical expressions in terms of q, so one has to determine estimability by special ad hoc enumeration procedures. The use of graph theory for special cases is presented first. In general, however, this problem can be solved by using tree search procedures (Meyer et al., 1994; Bagajewicz, 1997). Chmielewski et al. (1999) proved that the problem can in principle be reduced to a traditional MINLP formulation when precision constraints are used. The tree searching procedure proposed by Bagajewicz (1997) is presented in this book. Finally, genetic algorithms are presented in Chapter 16.

Minimal Networks

In this section we present networks that feature the minimum possible number of sensors, hence the name. These networks arise from requesting estimability of order one for all the variables. Indeed, consider a network where all the variables have estimability of order one. The deletion of any one measurement will cause the loss of observability of at least one unmeasured variable. It was shown in Chapter 5 that the identity part of the canonical matrix corresponds to a set of variables that form a spanning tree. It was also shown that a system where all variables have degree of estimability Ei = 1 corresponds to a system where all unmeasured streams are given by a spanning tree. Thus,


the task of designing such a system consists of determining which spanning tree has the largest cost. Once that is picked, the rest of the variables will be nonredundant and will carry the lowest cost. In the special case of equal costs, any spanning tree suffices. Direct inspection, however, is costly. As pointed out in Chapter 5, a given graph can have a large number of spanning trees, and, therefore, direct inspection can be prohibitive, even when using powerful computers.

The number of sensors of a minimal network is equal to the number of chords of a spanning tree. Thus, the number of sensors is obtained by subtracting the number of branches of a spanning tree (nU − 1) from the total number of streams:

    nmin = nS − nU + 1                                           (6.4)

The following algorithm, which avoids the enumeration of all spanning trees, was presented by Madron (1992) and is based on the notion of a Minimum Spanning Tree (Even, 1979):

(1) Pick the edge with the largest cost. This is the first edge of the tree.
(2) Make a list of all the remaining streams (edges) that form a tree when added to the list of chosen variables. Pick the one with the largest cost.
(3) Repeat step (2) until the nU − 1 branches of a spanning tree have been picked.

The proof that the above procedure guarantees optimality relies on picking the stream with the largest cost first (Even, 1979). This is a necessary and sufficient condition for optimality. To explain this, consider a spanning tree that does not contain the stream with the largest cost. Because there is a spanning tree of distance 1 (distance between spanning trees was defined in Chapter 5) that contains this stream, it is clear that the spanning tree without the largest-cost stream is not optimal. The same argument can be made for any of the rest of the elements of the spanning tree.

Example 6.1

To illustrate the procedure, we pick again the system of Figure 5.7. Consider the following costs: c = [20 10 40 10 30 20].

(1) Stream S3 is picked first, because it has the largest cost.
(2) Streams S1, S2 and S5 form a tree with S3. S5 is selected because it has the largest cost.
(3) Streams S1, S2, S4 and S6 form a tree with {S3, S5}. Streams S1 and S6 have the highest costs, so either can be included in the spanning tree. Stream S1 is arbitrarily selected.

FIGURE 6.1. Minimal network. ( ): measured flow rates.

(4) Streams S2 and S4 form a tree with {S3, S5, S1} (S6 does not form a tree with {S3, S5, S1} because it would result in a cycle). Both have equal cost; stream S4 is selected.

The final spanning tree (unmeasured variables) is xU = {S1, S3, S4, S5}. The measured variables are xM = {S2, S6} and are shown in Figure 6.1.

Example 6.2

Consider the simplified process of hydrodealkylation (HDA) of toluene (Douglas, 1988) shown in Figure 6.2. Instrumentation costs are indicated between parentheses after the label of the stream. By applying the above procedure, the set of unmeasured variables obtained is xU = [S1 S5 S14 S12 S13 S8 S7], in which the streams are listed in the order in which they were selected. The measured variables are xM = [S2 S3 S4 S6 S9 S10 S11]. The solution is shown in Figure 6.3. The unmeasured variables form a spanning tree of the undirected graph that corresponds to the more expensive sensors. The canonical form of the incidence

FIGURE 6.2. Simplified hydrodealkylation of toluene (adapted from Douglas, 1988). Units U1–U8; streams and instrumentation costs: S1 (300), S2 (150), S3 (180), S4 (100), S5 (200), S6 (160), S7 (230), S8 (250), S9 (130), S10 (160), S11 (150), S12 (270), S13 (250), S14 (270).


FIGURE 6.3. Minimal network for the HDA process. ( ): measured flow rates.

matrix is the following:

          S1 S5 S7 S8 S12 S13 S14 | S4  S9 S10 S11  S2  S6  S3
    C0 = [ 1  0  0  0   0   0   0 |  0   0   0   0  −1   0  −1 ]
         [ 0  1  0  0   0   0   0 | −1   0   0   0   1   0   1 ]
         [ 0  0  1  0   0   0   0 | −1   0   0   0   0  −1   0 ]
         [ 0  0  0  1   0   0   0 | −1  −1   0   0   0  −1   0 ]
         [ 0  0  0  0   1   0   0 | −1  −1   1   1   0  −1   0 ]
         [ 0  0  0  0   0   1   0 |  0   0   1   0  −1   0   0 ]
         [ 0  0  0  0   0   0   1 |  0  −1   0   1   0  −1   0 ]
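Madron's greedy procedure can be sketched in a few lines. The graph below is the small network of Example 6.1 with stream incidences as inferred from the figures (the exact topology of Figure 5.7 is an assumption here), and ties are broken by dictionary order, so the routine may return the alternative optimum noted in the example rather than the specific tree chosen there.

```python
# Greedy maximum-cost spanning tree (Madron, 1992; Even, 1979): the tree
# holds the unmeasured streams; the chords become the measured ones.
def max_cost_spanning_tree(nodes, edges):
    # edges: name -> (node_a, node_b, cost)
    first = max(edges, key=lambda e: edges[e][2])   # step (1): costliest edge
    tree, covered = {first}, set(edges[first][:2])
    while len(covered) < len(nodes):
        # step (2): edges touching the current tree at exactly one end
        frontier = [e for e in edges if e not in tree and
                    (edges[e][0] in covered) != (edges[e][1] in covered)]
        best = max(frontier, key=lambda e: edges[e][2])
        tree.add(best)
        covered |= set(edges[best][:2])
    return tree

nodes = {"E", "U1", "U2", "U3", "U4"}               # E = environment node
edges = {  # topology assumed for illustration; costs from Example 6.1
    "S1": ("E", "U1", 20), "S2": ("U1", "U2", 10), "S3": ("U1", "U3", 40),
    "S4": ("U2", "U4", 10), "S5": ("U3", "U4", 30), "S6": ("U4", "E", 20),
}
unmeasured = max_cost_spanning_tree(nodes, edges)
measured = set(edges) - unmeasured
print(sorted(unmeasured), "cost of sensors:",
      sum(edges[e][2] for e in measured))
```

With the costs of Example 6.1 the tree always collects 100 units of unmeasured cost and leaves sensors worth 30 on the two chords, matching the minimal network of Figure 6.1 up to the arbitrary tie-breaks mentioned in the text.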

Subminimal Networks

When only certain variables are of interest, estimability of order at least one should be requested for these variables, whereas the rest of the variables can remain unobservable. These types of networks are here called subminimal because fewer than the minimum number of sensors will be needed. Note, however, that this may not be true if estimability of order larger than one is requested in a subset of variables. The term minimal is reserved for networks in which the minimum number of sensors is installed to achieve the lowest possible estimability of all variables. Unfortunately, no equivalent methodology based on a concept similar to the spanning tree has yet been developed. Thus, mathematical programming seems


to be the only existing choice to this date. Bagajewicz (1997) proposed an efficient tree-searching algorithm.

Tree-Searching Algorithm

We now concentrate again on the mathematical programming form of this design problem, given by Equation (6.3). Because the constraints on estimability cannot be put in explicit analytic form, a relaxation of the integer constraints is not possible, and, consequently, lower bounds cannot be generated. Therefore, Branch and Bound procedures cannot be implemented. Although implicit tree-type enumeration is impractical for fairly large systems, in the case of this problem its special characteristics allow a fairly efficient search. Consider the tree depicted in Figure 6.4. This tree has some important properties:

(1) Node q = 0 is trivially infeasible.
(2) As measurements are added to form a branch, the nodes are infeasible because the key variables are unobservable, or not enough estimability has been achieved in them. Eventually, as measured variables are added, a node will become feasible. The node level at which feasibility is attained varies from branch to branch.
(3) Cost increases from one level to the next. Therefore, the first feasible node in each branch is the one with the lowest cost in the subtree it spans.
(4) Finally, retrofit problems, where instrumentation is simply added, can be handled easily by freezing the corresponding elements of q. In future chapters, the notion of reallocation will be included, and solution procedures will be presented.

FIGURE 6.4. Tree of solutions: the root node q = (0,0,0,...) at level 0; nodes such as q = (1,0,0,...) and q = (0,1,0,...) at level 1; q = (1,1,0,...) and q = (1,0,1,...) at level 2; and so on.


Consider now the following tree enumeration scheme:

• Start with a root node containing no variables being measured (q = 0).
• Use the branch-first rule, that is, construct the tree by developing each branch (making one element of q active) until a feasible node is found. Then back up one level and develop the next branch.

Constraint Checking

Assume that the vector q is given. The following procedure checks the feasibility of constraint Ek(q) ≥ Ek*:

(1) Identify all the estimable cutsets for variable Sk. Assume the number of these cutsets is s and the total number of variables involved in all these cutsets is b, excluding Sk.
(2) Construct the (s × b) matrix Mk by including all the cutsets, one per row. The entries of these rows are binary values (1, 0): 1 if a stream belongs to the cutset and zero otherwise. Variable Sk is excluded.
(3) Determine the minimum number of measurements nk whose deletion makes Sk an unobservable variable. This can be accomplished by determining the minimum number of columns that add up to a column of ones, that is, solving the following problem:

        Min  Σp=1..b αpk
        s.t. Σp=1..b αpk mpk ≥ 1                                 (6.5)

    where αpk is a binary variable (0, 1); αpk = 1 implies that measurement p participates in the set of measurements whose deletion makes Sk unobservable, and αpk = 0 otherwise; mpk is the pth column of the (s × b) estimable cutset matrix Mk; and 1 is an (s × 1) vector of ones. The problem is solved by inspecting combinations of an increasing number of measurements until the constraint of Equation (6.5) is satisfied.
(4) Determine the degree of estimability of variable Sk. If Sk is unmeasured, its degree of estimability is nk, but if it is measured its degree of estimability is Ek = nk + 1.
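The covering problem of Equation (6.5) is small enough in practice to solve by direct inspection of combinations of increasing size, as step (3) prescribes. A minimal sketch, using as data the cutset matrices M3 and M5 that appear in Example 6.3:

```python
from itertools import combinations

# Step (3): smallest set of measurement columns of Mk that adds up to >= 1
# in every row, i.e., whose deletion disables every estimable cutset of Sk.
def min_deletions(M):
    n_cols = len(M[0])
    for size in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), size):
            if all(sum(row[c] for c in cols) >= 1 for row in M):
                return size
    return 0   # no deletion set disables the variable

M3 = [[1, 0, 0, 0, 0, 0, 0],   # cutset K3, columns S1 S2 S4 S5 S6 S7 S8
      [0, 1, 0, 0, 0, 0, 0]]   # cutset K4
M5 = [[0, 1, 0, 1, 0, 0, 0],   # cutset K10, columns S1 S2 S3 S4 S6 S7 S8
      [1, 0, 0, 1, 0, 0, 0]]   # cutset K13

print(min_deletions(M3), min_deletions(M5))   # prints: 2 1
```

Since S3 and S5 are unmeasured under the analyzed sensor set, these counts are directly their degrees of estimability, E3 = 2 and E5 = 1, as found in Example 6.3.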


FIGURE 6.5. Simplified ammonia plant network.

Example 6.3

We now illustrate this procedure to check feasibility using the simplified ammonia network included in Figure 6.5. All cutsets for this network are extracted from Ali and Narasimhan (1993) and presented in Table 6.1. Consider that E3 ≥ 1 and E5 ≥ 1 are the constraints of Equation (6.3). Assume now that q = (1, 2, 4), that is, measurements on streams S1, S2 and S4, is under feasibility analysis. The estimable cutsets that contain S3 are K3 and K4. Thus, matrix M3 is

          S1  S2  S4  S5  S6  S7  S8
    M3 = [ 1   0   0   0   0   0   0 ]                           (6.6)
         [ 0   1   0   0   0   0   0 ]

Similarly, the estimable cutsets that contain S5 are K10 and K13. Thus, matrix M5 is

          S1  S2  S3  S4  S6  S7  S8
    M5 = [ 0   1   0   1   0   0   0 ]                           (6.7)
         [ 1   0   0   1   0   0   0 ]

TABLE 6.1. All Cutsets of the Ammonia Plant Network.

    K1:  S5 S6 S7       K7:  S1 S6 S8          K13: S1 S4 S5
    K2:  S1 S2          K8:  S1 S4 S6 S7       K14: S2 S4 S6 S7
    K3:  S1 S3          K9:  S1 S5 S7 S8       K15: S3 S4 S6 S7
    K4:  S2 S3          K10: S2 S4 S5          K16: S3 S4 S5
    K5:  S2 S6 S8       K11: S4 S7 S8          K17: S2 S5 S7 S8
    K6:  S3 S6 S8       K12: S4 S5 S6 S8       K18: S3 S5 S7 S8


It can be seen from matrix M3 that the deletion of only one measurement at a time does not make S3 unobservable (because the other cutset remains estimable) and, therefore, does not satisfy the constraint of Equation (6.5). However, the simultaneous deletion of measurements 1 and 2, that is, α13 = 1, α23 = 1, αu3 = 0 (u = 3, . . . , 7), satisfies the constraint of Equation (6.5), that is,

    Σp=1..b αp3 mp3 = [1]
                      [1]                                        (6.8)

so the degree of estimability of variable S3 is two for this set of instruments. For variable S5, the deletion of measurement S4 alone satisfies the constraint in Equation (6.5) and renders this variable unobservable, so the degree of estimability of variable S5 is one. Consequently, the set of instruments, represented by vector q, is feasible.

Although the procedure is combinatorial, the nature of the design problem avoids a significant increase in the number of combinations. Because the design follows a minimum cost criterion, the number of measurements b involved in the s estimable cutsets for a variable Sk is low.

Example of a Subminimal Network Design

Consider again the hydrodealkylation process (Figure 6.2) with the instrumentation costs provided earlier. Assume that a degree of estimability of one is first required for streams S1, S8 and S9. For this case the minimum cost solution is 590, which corresponds to the installation of sensors in streams [S4 S5 S6 S9], shown in Figure 6.6. Matrices M1, M8 and M9 are the following:

          S2 S3 S4 S5 S6 S7 S8 S9 S10 S11 S12 S13 S14
    M1 = [ 0  0  1  1  0  0  0  0   0   0   0   0   0 ]

          S1 S2 S3 S4 S5 S6 S7 S9 S10 S11 S12 S13 S14
    M8 = [ 0  0  0  1  0  1  0  1   0   0   0   0   0 ]          (6.9)

    M9 = ∅


FIGURE 6.6. Subminimal solution for the HDA process. ( ): measured flow rates.

It is easy to see that the constraint of Equation (6.5) is satisfied. In addition, the reader can verify that the measured streams are not a subset of the set of measured streams for the minimal network.

General Networks

When the required degrees of estimability of some variables in a network are greater than one, the networks are called general. Estimability constraints may be imposed on some or all variables. The general sensor network design problem is stated by Equation (6.3). The feasibility of constraint Ek(q) ≥ Ek* is checked by using the procedure described previously.

Example 6.4

If a larger degree of estimability is required for streams S1, S8 and S9 of the hydrodealkylation process, a higher instrumentation cost will result. For example, if the degree of estimability lower bounds are E1* = 2, E8* = 2 and E9* = 1, the feasible set of instruments corresponding to the minimum cost solution is xM = [S1 S5 S7 S8 S9 S11 S14], which has a cost of 1530 (Figure 6.7).


FIGURE 6.7. Minimum cost solution for the HDA process of Example 6.4. ( ): measured flow rates.

The corresponding matrices of estimable cutsets for this example are:

          S2 S3 S4 S5 S6 S7 S8 S9 S10 S11 S12 S13 S14
    M1 = [ 0  0  0  1  0  0  1  0   0   1   0   0   1 ]
         [ 0  0  0  1  0  1  0  1   0   1   0   0   1 ]

          S1 S2 S3 S4 S5 S6 S7 S9 S10 S11 S12 S13 S14
    M8 = [ 0  0  0  0  0  0  1  1   0   0   0   0   0 ]          (6.10)
         [ 1  0  0  0  1  0  0  0   0   1   0   0   1 ]

          S1 S2 S3 S4 S5 S6 S7 S8 S10 S11 S12 S13 S14
    M9 = [ 0  0  0  0  0  0  1  1   0   0   0   0   0 ]
         [ 1  0  0  0  1  0  1  0   0   1   0   0   1 ]

These matrices show that the constraints for variables S1 and S8 are satisfied as equalities. In contrast, variable S9 has a degree of estimability of E9 = 2, which is higher than its lower bound E9* = 1.


Other Approaches Luong et al. (1994) presented a sensor network design procedure for a restricted class of problems. They consider the case where observability of at least degree one is required for variables that are of interest for control purposes. In addition, they propose to add the requirement that a certain subset of these variables has degree of redundancy k. Their solution strategy is based on two steps. First, they propose to determine the subminimal network that will guarantee observability of order one. In a second step, they propose to choose supplementary variables to be measured to achieve the degree of redundancy of the desired subset of variables of interest. The method is based on the exhaustive investigation of cycles of the graph. However, cycles are directly related to cutsets, because a cutset that contains a certain variable also contains one variable from each cycle that includes this variable. Therefore, the procedure is closely related to the above-presented algorithm based on cutsets. However, an exhaustive enumeration of cycles is required. Meyer et al. (1994) proposed a slightly more general strategy where an exhaustive enumeration of cycles is also proposed to achieve estimability of order one. To reduce the search, they propose a branch and bound procedure that appears to be effective to solve the problem. The method is suitable to design subminimal networks and it can be generalized to other general estimable networks. Finally, Maquin et al. (1995) used a cost minimization problem where the concept of cycles is used to build proper constraints. This method is a particular case of Figure (6.3).

DESIGN FOR ESTIMABILITY EFFICIENCY If instrumentation costs are not available, a good goal for mass sensor network design is the selection of the minimum number of sensors that fulfill estimability constraints for key variables. This kind of sensor network design problem may be posed as follows:  Min Ns    s.t. ∗ ∀k ∈ M E  Ek ≥ Ek   qi ∈ {0,1} ∀i ∈ M1

(6.11)

where Ns is the number of sensors in the network. The tree-type enumeration strategy with stopping criterion proposed by Bagajewicz (1997) and described above is applied to solve the MINLP problem.

NONREDUNDANT AND REDUNDANT SENSOR NETWORKS

FIGURE 6.8. One estimable efficient solution. ( ): Measured flow rates.

Although the stopping criterion prevents the enumeration from being exhaustive, this procedure is still not efficient for large-scale systems. The implementation of numerically efficient strategies is an area of active research.

Example 6.5

Consider the sensor network design for the simplified process flowsheet of ammonia production (Figure 6.5). Assume that the estimability constraints are E2∗ = 2 and E5∗ = 3. These constraints are satisfied when a minimum of five instruments is installed. Three alternative sets of instruments fulfill the estimability requirements: [S1 S4 S5 S6 S7], [S2 S4 S5 S6 S7] and [S3 S4 S5 S6 S7]. The first one is shown in Figure 6.8.

Consider now the hydrodealkylation process and assume various estimability constraints (Table 6.2). For each case, the lower estimability bounds on streams, the minimum number of sensors and the solution set of instruments are presented. The results of cases 1 and 2 show that fewer sensors are required when the estimability bounds for the same set of streams are decreased. As expected, the minimum number of instruments increases when stricter estimability requirements are imposed.

TABLE 6.2. HDA Process—Results for the Minimum Number of Sensors Problem.

Case  Streams      Ei∗    Ns  Solution(s)
1     S1 S8 S9     2 3 1  7   S1 S5 S7 S8 S9 S11 S14
2     S1 S8 S9     2 2 2  5   S1 S5 S6 S8 S9
3     S7 S13 S11   2 3 2  9   S1 S3 S5 S6 S9 S10 S11 S12 S13;
                              S2 S3 S5 S6 S9 S10 S11 S12 S13

COMPULSORY MEASUREMENTS AND THE UPGRADE CASE

It is usually mandatory to install instruments on some streams to satisfy control, balance accounting or safety requirements. This situation is taken into account by setting a priori the corresponding binary variables qi to one. Thus, the design of minimum-cost sensor networks subject to estimability and location constraints is stated as follows:

    Min  Σi ci qi
    s.t. Ek(q) ≥ Ek∗     ∀k ∈ ME
         qj = 1          ∀j ∈ MU
         qt ∈ {0,1}      ∀t ∈ (M1 − MU)          (6.12)
where MU contains all variables that must be measured. A similar problem arises when the currently installed set of instruments does not fulfill the estimability requirements, so that additional instruments must be incorporated. The formulation of the optimization problem is the same; because the cost of the already existing instruments is zero, only the objective function value is lower.

Example 6.6

Consider, for example, the case where two flowmeters are installed on streams [S1 S4] of the ammonia process flowsheet (Figure 6.5). Assume again that the estimability constraints are E2∗ = 2 and E5∗ = 3. These requirements are not satisfied by the initial set of instruments, so the location of new instruments is obtained by solving the problem in Equation (6.12). Instrumentation costs are given by the vector c = [0 300 300 0 220 280 250 250], where the already located sensors have zero cost. The solution indicates that the constraints can be fulfilled by incorporating sensors on streams [S5 S6 S7]. The optimal cost is 750.
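A brute-force sketch of the upgrade formulation (6.12): sensors in MU are forced to q = 1 and carry zero cost, and the cheapest feasible set is kept. The feasibility test below is again the simplified spanning-forest condition rather than the full estimability constraints, and all names are illustrative.

```python
from itertools import combinations

def _acyclic(edges, n_nodes):
    # the unmeasured streams must contain no cycle for all flows to be deducible
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def upgrade(edges, cost, forced):
    """Minimum-cost sensor set containing `forced` (compulsory or already
    installed sensors, entered with zero cost as in Example 6.6)."""
    m = len(edges)
    n_nodes = 1 + max(max(u, v) for u, v in edges)
    free = [i for i in range(m) if i not in forced]
    best = None
    for k in range(len(free) + 1):
        for extra in combinations(free, k):
            q = set(forced) | set(extra)
            rest = [e for i, e in enumerate(edges) if i not in q]
            if _acyclic(rest, n_nodes):
                c = sum(cost[i] for i in q)
                if best is None or c < best[0]:
                    best = (c, sorted(q))
    return best
```

The exhaustive scan is only workable for small graphs; the tree search with stopping criteria described above is the practical alternative.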


TABLE 6.3. HDA Process—Results for the Upgrading Design Problem.

Case  Streams      Ei∗    Cost  MU (Streams with     Streams with
                                Installed Sensors)   New Sensors
1     S1 S8 S9     2 3 1  1080  S2 S11 S12           S3 S5 S6 S8 S9 S10
2     S1 S8 S9     2 3 1  1260  S2 S11               S3 S5 S7 S8 S9 S14
3     S7 S13 S11   2 3 2  1350  S2 S11               S3 S5 S6 S9 S10 S12 S13
Examples of upgrades of the hydrodealkylation process (Figure 6.2) are shown in Table 6.3. Cases 1 and 2 have the same estimability constraints but different numbers of already installed sensors: the flow rate of stream S12 is considered unmeasured in case 2. Even though this flowmeter has a cost of 270, the optimal solution of case 2 is only 180 more expensive than the optimum of case 1. The optimal solution of case 3 is the same as the one obtained for the minimum-number-of-sensors design in the previous section (case 3 in Table 6.2).

SENSOR NETWORKS FOR BILINEAR SYSTEMS

The difficulty in designing sensor networks for bilinear systems stems from the complicated procedures needed to classify these systems. Extensions of the concepts of observable and redundant variables are straightforward, and the degree of estimability can be handled in the same way as for linear systems. However, the extension of the connection between observability and spanning trees and cutsets is not straightforward. As shown in Chapter 5, concentrations or temperatures can sometimes be used to make some flow rates observable. Therefore, there is an interaction between these two types of variables that needs to be taken into account to determine estimability. Thus, problem (6.3) has to be solved by using the tree-searching procedure presented in this chapter, together with an algorithm for bilinear system variable classification to assess node feasibility. Unfortunately, no alternative method exists, and for large systems the tree-searching algorithm may be time consuming. We now concentrate on special types of networks, for which relatively simple design procedures exist.


Linearly Estimable Sensor Networks

Consider first the case where flow rates are estimable based on flow measurements only. We call these linearly estimable sensor networks. In particular, if the degree of estimability is one, the flowmeters are located in the chords of a spanning tree. We will now prove that any spanning tree solution for the concentration sensors will render the complete system observable.

Lemma 6.1: The minimum number of sensors needed to make all concentrations estimable in a bilinear sensor network that is at the same time a linearly estimable sensor network corresponds to flowmeters located on the chords of a spanning tree and concentration sensors located on the chords of another, not necessarily different, spanning tree.

Proof: Under the conditions of linear estimability of order one, the flowmeters are positioned in the chords of a spanning tree. Then, the component balance equations become linear. That is,

    M ( f ⊗ cj ) = (M ∗ f ) cj = Mf cj          (6.13)

Consider now the canonical form of matrix Mf:

    Gf = [ I   Gf∗ ]          (6.14)

If concentration sensors are located in the streams that correspond to Gf∗, all the rest of the concentrations are observable. In other words, locating the concentration sensors in the chords of any spanning tree renders all concentrations observable. Q.E.D.

Thus, if cost is to be considered, the task at hand is to select the pair of spanning trees that renders the system estimable at minimum cost. Because one spanning tree has no influence on the other, the choice is to select the most expensive spanning trees for flow rates and concentration sensors independently and locate the sensors in their chords.

Minimally Estimable Sensor Networks

We now consider the class of sensor networks for bilinear systems featuring a minimum number of sensors and a nonzero minimum degree of estimability for all variables. Linearly estimable sensor networks featuring a minimum number of sensors are a subset of these networks. Because the number of sensors remains the same, we can start from the design based on spanning trees for both flow rate and concentration sensors.
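The canonical-form argument of Equation (6.14) can be reproduced numerically: Gauss-Jordan elimination of the flow-weighted matrix Mf yields [I Gf∗], and the nonpivot columns (the chords) are the streams whose concentrations must be measured. A minimal sketch with hypothetical names; a production implementation would handle rank deficiency and scaling more carefully.

```python
def concentration_chords(Mf, tol=1e-9):
    """Columns of Mf that end up in Gf* after reduction to [I | Gf*];
    measuring these concentrations makes all the others observable."""
    A = [row[:] for row in Mf]          # work on a copy
    rows, cols = len(A), len(A[0])
    pivots = []
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        p = max(range(r, rows), key=lambda i: abs(A[i][c]))  # partial pivoting
        if abs(A[p][c]) < tol:
            continue                     # no pivot in this column: a chord
        A[r], A[p] = A[p], A[r]
        piv = A[r][c]
        A[r] = [v / piv for v in A[r]]
        for i in range(rows):
            if i != r:
                factor = A[i][c]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return [c for c in range(cols) if c not in pivots]
```

For a single flash unit with known flows (f1, f2, f3) = (2, 1, 1), Mf = [[2, −1, −1]] and the function returns [1, 2]: two of the three concentrations must be measured.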


Assume that one wants to exchange a flowmeter for a concentration sensor. Eliminating one flowmeter and adding one concentration sensor in an appropriate position will lead to an observable system: the unmeasured chord of the flow rate spanning tree will be estimated through the use of concentration measurements. The converse is not true, that is, the elimination of a concentration sensor and the addition of a flowmeter does not make the eliminated concentration observable.

Networks without Splitters or Heat Exchangers

In what follows we present a methodology to design bilinear systems without splitters or heat exchangers. Extensions to cover these cases are presented later. We now define the following:

Definition: A set of flowmeters and concentration sensors that makes all variables in a bilinear system estimable is k linearly estimable deficient if the addition of k flowmeters, but not fewer, makes the system linearly estimable.

Definition: Consider a set of measured flow rate variables x_M^f, a set of concentration sensors y_M^f and a stream Si with unobservable flow rate fi. A concentration sensor yj in stream Sj that makes the flow rate of Si observable is a flow rate observability enabler of fi with respect to x_M^f and y_M^f, which are called the reference sets. Vice versa, a flowmeter in stream Si is the concentration observability enabler of yj with respect to the reference sets x_M^f and y_M^f.

First note that an unobservable flow rate can have many enablers. In addition, two unobservable flow rates can have the same enabler; in other words, one concentration sensor makes both observable. Finally, an unmeasured flow rate that is observable by using only flow rate measurements does not have an enabler, because it is already observable.

Consider the example of a simple flash (Figure 6.9), where only the inlet flow rate and the compositions of the feed and vapor are measured.

FIGURE 6.9. A flash unit. ( ): Measured flow rates. ( ): Measured compositions.


The flow rates F2 and F3 are unobservable if only total material balances are used. However, if one measures the concentration y2, these two flow rates can be calculated by using the total material and component balance equations. Thus, y2 is a flow rate observability enabler of F2 and F3. Similarly, F2 and F3 are each concentration observability enablers of y2.

Definition: Given a stream Si with unobservable flow rate and its concentration enabler, the difference between the costs of the flowmeter and the concentration enabler is called the flowmeter enabling cost.

Note that the enabling cost can be negative. Indeed, the flowmeter enabling cost in the example of Figure 6.9 is given by the difference between the cost of the flowmeter for F3 and the cost of a concentration measurement for y2. This value can be positive or negative.

Definition: Given a stream Si with unobservable flow rate, the flowmeter exchange cost of this sensor is given by the smallest of all the enabling costs of this flow rate sensor. The concentration sensor exchange cost is just the opposite.

In the example in Figure 6.10 there are two enablers of F3, a measurement of y2 or y5. Therefore, the flowmeter exchange cost of F3 is given by the smaller of these two enabling costs. Consider now the following lemmas.

Lemma 6.2: A necessary condition of optimality of a sensor network that makes a bilinear system completely observable is that all unmeasured flow rates have positive exchange cost.

Proof: The property is fairly obvious. If the flowmeter exchange cost of a stream with unmeasured flow rate were negative, the flowmeter could be introduced and its enabler eliminated, lowering the overall cost, a condition that contradicts the assumption of optimality. Q.E.D.

FIGURE 6.10. A flash unit and a mixer. ( ): Measured flow rates. ( ): Measured compositions.
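The enabling and exchange costs just defined are simple to compute once the enablers of a flow rate are known. A minimal sketch, with hypothetical names and costs:

```python
def enabling_costs(flowmeter_cost, enabler_costs):
    # one enabling cost per candidate enabler: the flowmeter cost minus
    # the cost of that concentration sensor
    return [flowmeter_cost - c for c in enabler_costs]

def flowmeter_exchange_cost(flowmeter_cost, enabler_costs):
    # the exchange cost is the smallest of all the enabling costs
    return min(enabling_costs(flowmeter_cost, enabler_costs))
```

With a flowmeter of cost 250 and two candidate enablers of costs 10 and 30, the exchange cost is min(240, 220) = 220; a negative value signals that swapping the flowmeter for its enabler lowers the network cost, which is the condition exploited in Lemma 6.2.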


Lemma 6.3: Let T_le^y be a spanning tree corresponding to the concentration sensor network of a linearly estimable network. A necessary condition of optimality for a k linearly estimable deficient system is that the set of measured concentrations contains all chords of T_le^y.

Proof: The linearly estimable spanning tree T_le^y has the property that any exchange of a measurement from chords to branches to obtain a new spanning tree is such that the cost increases. Any other solution to the k-deficient estimable network corresponds to a spanning tree T^y where the concentrations of all chords are measured and the concentrations of some branches are measured. First, consider that the measured branches of T^y contain a chord of T_le^y. In such a case, one can exchange a chord of T^y that is not a chord of T_le^y with this measured branch of T^y without affecting observability of the flow rates (both are measured) or changing the cost. Thus, we can assume without loss of generality that the measured branches of T^y, the enablers, do not contain chords of T_le^y. In other words, the chords of T_le^y are either chords of T^y or its unmeasured branches. Therefore, to transform one solution T_le^y into another T^y, one has to exchange the measured chords of T_le^y with unmeasured branches of T^y, which implies an increase in cost. To complete the proof, we will now show that the unmeasured branches of T_le^y cannot be used to enable the unmeasured chords of T^f. Assume the measured and unmeasured flow rates allow the following decomposition:

    f_O = G1 f_M          (6.15)

    f_UO,1 = G2 f_UO,2 + G3 f_M          (6.16)

where f_UO,1 and f_UO,2 correspond to the unobservable flows represented by the chords and branches of T^f, respectively. We can also write the same component balance equations:

    f_O ⊗ c_O = G1 ( f_M ⊗ c_M )          (6.17)

    f_UO,1 ⊗ c_UO,1 = G2 ( f_UO,2 ⊗ c_UO,2 ) + G3 ( f_M ⊗ c_M )          (6.18)

where c_M, c_O, c_UO,1 and c_UO,2 correspond to the same streams as f_M, f_O, f_UO,1 and f_UO,2. Substitution of Equation (6.16) into (6.18) followed by some manipulations gives:

    [ G2 ∗ c_UO,1 − G2 ∗ c_UO,2 ] f_UO,2 = (G3 ∗ f_M) c_M − (G3 ∗ f_M) c_UO,1          (6.19)

Thus, all elements of c_UO,1 or c_UO,2 need to be measured (we assume that the columns of G2 and G3 are not zero). They are either chords or measured branches. A similar argument can be made for those elements of c_M for which G3 does not have a zero column. Thus, the exchange between chords and unmeasured branches of T_le^y needs to take place between elements of c_O and the elements of c_M that are not enablers. Thus, the enablers are not touched. Q.E.D.

Lemma 6.4: A necessary condition of optimality of a system that is k linearly estimable deficient is that the measured flow rates are chords of the linearly estimable solution T_le^f of minimum cost.

Proof: Suppose that f_M, f_O, f_UO,1 and f_UO,2 correspond to the linearly estimable solution. Consider any other solution with measured chords not being chords of T_le^f. Assume now that one exchanges a certain number of measured chords with the same number of unmeasured branches of T_le^f. Exchanging with elements of f_O will only increase cost, and no changes are needed in the concentration measurements. Consider now an exchange with elements of f_UO,1. This is possible but will again lead to an increased cost with no needed changes in the concentration measurements. From Equation (6.19), one can conclude that an exchange with elements of f_UO,2 does not require changes in concentration measurements either. Therefore, an exchange with elements of f_UO,2 will again lead to an increase in cost. Q.E.D.

Lemma 6.3 and the proof of Lemma 6.4 suggest the following constructive step-by-step procedure for an optimal estimable sensor network.

• Step 1: Locate flowmeters and concentration sensors on the chords of the most expensive flow and concentration spanning trees.
• Step 2: Determine the exchange cost of all flowmeters. If the cost is negative, then eliminate the flowmeter and place a concentration sensor in its enabler.

Example 6.7

We illustrate the algorithm by using the ammonia plant network (Figure 6.5). Assume the costs to be c^f = [100 300 300 200 220 280 250 250] and c^y = [70 70 90 80 10 30 5 3]. The linearly estimable solution has the following chords: x_M^f = [f1 f4 f7] and y_M = [y5 y7 y8]; it is shown in Figure 6.11(a). Its cost is 568 (550 from the flowmeters). The optimal solution is shown in Figure 6.11(b), and its cost is 388. The large reduction in cost comes from the elimination of the flowmeter for f7 and the introduction of an enabler in y1.
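Step 1 of the procedure can be sketched with Kruskal's algorithm: keep the most expensive streams unmeasured (the spanning tree) and place sensors on the remaining chords. Step 2 would then scan these chords using the enabler costs; only step 1 is shown here, and all names are illustrative.

```python
def chords_of_most_expensive_tree(edges, cost):
    """Kruskal on descending cost: the tree keeps the most expensive
    streams unmeasured, so the returned chords are the cheapest feasible
    sensor locations for degree-one estimability."""
    n_nodes = 1 + max(max(u, v) for u, v in edges)
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = set()
    for i in sorted(range(len(edges)), key=lambda i: -cost[i]):
        u, v = edges[i]
        ru, rv = find(u), find(v)
        if ru != rv:             # edge joins two components: add to the tree
            parent[ru] = rv
            tree.add(i)
    return [i for i in range(len(edges)) if i not in tree]
```

Applying the function separately to the flowmeter costs c^f and the concentration costs c^y gives the two chord sets of step 1.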


FIGURE 6.11. Bilinear ammonia plant sensor network. (a) Linearly estimable solution. (b) Minimal solution. ( ): Measured flow rates. ( ): Measured compositions.

Sensor Networks for Bilinear Systems with Splitters

Splitter units require special consideration in bilinear processes, because a component balance around a splitter has to be replaced by an equality of the compositions (temperatures) of all streams incident to the node. Thus, if the compositions of some stream incident on a splitter are observable, then the compositions of all other streams incident on the splitter are also observable. Obviously, if splitter nodes are present in a process network, then the minimum number of composition sensors required to observe compositions in all streams is less than nS − nU + 1. Instead, the minimum number of composition sensors is given by (Ali and Narasimhan, 1996):

    n_y = (nS − nU + 1) − Σ(i=1..n_sp) r_s,i + 2 n_sp          (6.20)

where rs,i is the number of edges that are incident on splitter Ui and n sp is the number of splitter nodes in a process. This equation can be obtained easily by observing that for splitter Ui , there are (rs,i − 1) composition equality constraints instead of one component flow balance. This gives rs,i − 2 additional equations relating the compositions, which may prompt a corresponding reduction in the number of composition sensors. The preceding equation is applicable to process networks that do not contain splitters in series. However, such configurations seldom, if at all, occur in practical processes. In addition, a single splitter can perform the same function as splitters in series, so the exception is a moot one.
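Equation (6.20) is straightforward to evaluate; a one-function sketch with hypothetical names:

```python
def min_composition_sensors(n_streams, n_units, splitter_degrees):
    """Equation (6.20): each splitter of degree r contributes r - 2 extra
    composition-equality equations, reducing the required sensor count.
    splitter_degrees lists r_s,i for every splitter in the network."""
    n_sp = len(splitter_degrees)
    return (n_streams - n_units + 1) - sum(splitter_degrees) + 2 * n_sp
```

For instance, a network with 8 streams and 6 units needs 3 composition sensors without splitters, but only 2 if one of the units is a splitter with three incident streams.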


FIGURE 6.12. Splitter with one branch of the spanning tree.

When splitters are present, the streams with unmeasured compositions do not form a spanning tree, because fewer than nS − nU + 1 composition sensors are required to observe all mass fractions. However, some observations, made by Ali and Narasimhan (1996), enable us to take advantage of the notion of a spanning tree and, hence, use the same concepts described above to obtain a sensor network design. We assume that only one splitter is present in the process; the extension to multiple splitters follows naturally from the arguments presented below. To maintain the minimum number of composition sensors when splitters are present, some of the chords of T^y should also be unmeasured. The following three cases are possible:

• Case (a). Only one of the streams incident on the splitter is a branch of T^y.
• Case (b). Exactly two of the streams incident on the splitter are branches of T^y.
• Case (c). More than two streams incident on the splitter are branches of T^y.

For case (a), the fundamental cutset corresponding to the splitter branch is identical to the cutset that separates the splitter unit from the rest of the process. This is shown in Figure 6.12. In these figures, the node marked S represents the splitter unit, the solid line is the branch of T^y, and the dotted lines are the chords. Ordinarily, for a unit that is not a splitter, all the chords would need to be measured. In the case of a splitter, however, only one chord is needed: a composition sensor must be placed on only one of the splitter streams in this fundamental cutset, while the compositions of the remaining streams can be unmeasured.

Case (b) is depicted in Figure 6.13. In this case both fundamental cutsets, K_S1 and K_S2, contain the same chords c1, c2, . . . , cT not connected to the splitter. For all practical purposes the pairs (S1, S3) and (S2, S4) can be replaced by just the branches S1 and S2, because knowledge of these two immediately gives the other two. Moreover, by the same argument, one can even eliminate the splitter and replace it by a single stream connecting U1 and U2. This process is shown in Figure 6.14.


FIGURE 6.13. Splitter connected to two branches of the spanning tree.

Thus, the measurement of the chords c1, c2, . . . , cT provides complete observability of all the branches and chords connected to the splitter. Incidentally, note that an exchange of any chord ci connecting U1 and U2 with one of the branches S1 or S2 leads to a spanning tree that falls into case (a).

We now show that the spanning tree corresponding to case (c) can be analyzed in the same way as case (b). First, the chords connected to the splitter can be eliminated, and later the splitter can be merged successively with U2 and U3. This is shown in Figure 6.15. The net result is the same situation as in case (b).

Therefore, the evaluation of the cost of a spanning tree containing splitters can be performed by distinguishing two cases. If only one branch of the spanning tree is connected to the splitter, then a concentration sensor should be located in the cheapest chord. Otherwise, if more than one branch connects to the splitter, no chord should be measured.

Example 6.8

We return to the example of the graph corresponding to the ammonia network. Consider the same costs, but now assume that unit U5 is a splitter. Then, the linearly estimable solution does not contain more than one concentration sensor around the splitter (Figure 6.16). Indeed, the concentration measurement in stream S7 can be eliminated [case (b)].

FIGURE 6.14. Splitter connected to two branches of the spanning tree.

FIGURE 6.15. Splitter relation with branches of the spanning tree.

Energy Sensor Networks

Two different graphs can be used to represent energy networks so that sensor network design can be performed. A mass flow graph is used to represent the flow balances of the process and is derived from the flowsheet by replacing each exchanger by two disjoint nodes representing the tube and shell sides of the exchanger and adding an environment node to which all inputs and outputs of the process are connected. The energy network remains the same as the flowsheet. As an example, a heat exchanger network is shown in Figure 6.17(a). All the algorithms developed for bilinear networks can now be applied to obtain the optimal placement of flow and temperature sensors. The streams with unmeasured flows will form a spanning tree of the flow rate graph, whereas the streams with unmeasured temperatures will form a spanning tree of the energy network.

FIGURE 6.16. Linearly estimable solution. Splitter case.


FIGURE 6.17. Energy networks decomposition. (a) Flowsheet and energy graph. (b) Hot streams. (c) Cold streams.

Upgrade of Bilinear Sensor Networks

If one wants to upgrade a system to make it linearly estimable, all one needs to do is determine the spanning trees whose chords contain all the existing measurements. Once this is done, one can locate instruments on the rest of the chords. Although this will render a minimum number of sensors, the new instrumentation added might be more expensive than other solutions that feature a larger number of added instruments at a smaller cost. For such an alternative, as well as for more general ones, the model given in problem (6.3) can be used to perform the upgrade.

We now concentrate on a series of rules developed by Ragot et al. (1992) for the upgrade of bilinear sensor networks. The method is based on the following observations on the overall balance and component balance equations around a node. For component j and unit Uk, these equations have the following form:

    Dk ( f ⊗ cj ) = 0          (6.21)

    Dk f = 0          (6.22)

where Dk is a row of the incidence matrix. For the time being we will exclude heat exchangers and any other units in which heat is transferred indirectly.


Consider the following situations where three variables are unobservable and one-sensor placement actions are proposed to make these variables observable.

• Situation A: a pair of equations with two unobservable flow rates and one unobservable composition.
  - Placement action 1: place a sensor on a flow rate. This leaves a system of two equations with two unknowns, a concentration and a flow rate.
  - Placement action 2: place a sensor on a concentration. This leaves a system of two equations with two unknowns, the two flow rates.
• Situation B: a pair of equations with two unobservable concentrations and all flow rates observable or measured.
  - Placement action: place a sensor on one of the concentrations. This leaves a system from which the remaining concentration can be calculated.
• Situation C: a pair of equations with three unobservable flow rates and all concentrations observable or measured.
  - Placement action: place a sensor on a flow rate. This leaves a system of two equations with two unknowns, the two flow rates.

Note that when two concentrations and one flow rate are unobservable, or when three concentrations are unobservable, no placement of one sensor will make all the variables observable. For situations with more than three unobservable variables, there is no one-sensor placement action that will render all variables in that pair of equations observable.

We now concentrate on the upgrade algorithm suggested by Ragot et al. (1992). The steps are:

• Step 1: Determine all observable variables. For this purpose, use the method presented in Chapter 5. If all variables are observable, stop.
• Step 2: Search through all pairs of node balance equations and determine which sensor location, if realized, makes the largest number of variables observable. If no sensor makes this happen, that is, when no pair of equations falls in situations A, B or C above, then place a sensor at random in the cheapest location, but avoid making a flow rate redundant. Go to step 1.

Ragot et al. (1992) used an observability analysis based on the construction of two canonical forms of the incidence matrix, one for the flow rates and the other for the concentrations. The former is the standard form presented in Chapter 5, whereas the latter is based on the same procedure but uses the measured concentrations instead of the measured flow rates. For step 2, they propose applying the above placement actions to all pairs simultaneously. Finally, because different options may exist, they suggest that the choice be driven by cost.
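The classification of a pair of node equations into situations A, B and C (the core of step 2) can be sketched as follows. All names are illustrative; sensors are identified by ('f', stream) or ('y', stream), and `measurable` is the set of candidate locations z^f ∪ z^y.

```python
def placement_action(unobs_flows, unobs_concs, measurable):
    """Return one sensor whose placement resolves this pair of equations
    (situations A, B, C of Ragot et al.), or None if no single sensor does."""
    def first(candidates):
        # first candidate that is actually allowed to be installed
        for sensor in candidates:
            if sensor in measurable:
                return sensor
        return None
    nf, nc = len(unobs_flows), len(unobs_concs)
    if nf == 2 and nc == 1:   # situation A: a flowmeter or a concentration works
        return first([('f', s) for s in unobs_flows] +
                     [('y', s) for s in unobs_concs])
    if nf == 0 and nc == 2:   # situation B: measure one of the two concentrations
        return first([('y', s) for s in unobs_concs])
    if nf == 3 and nc == 0:   # situation C: measure one of the three flow rates
        return first([('f', s) for s in unobs_flows])
    return None               # more unknowns than one sensor can resolve
```

An outer loop would call this function for every pair of node equations after each observability pass (step 1), choosing among the returned actions by cost.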


FIGURE 6.18. Steam metering network. ( ): Measured flow rates. ( ): Measured compositions.

Example 6.9

We now show the application of this algorithm [as presented by Ragot et al. (1992)] to the steam system of the methanol production plant (Serth and Heenan, 1986) depicted in Figure 6.18. Assume that one desires to determine the flow rates and the composition of one possible contaminant in the steam. The measured streams are:

    x_M^f = [ f4 f9 f12 f15 f18 f19 f21 f22 f24 f27 f28 ]          (6.23)

    x_M^y = [ y1 y5 y7 y10 y14 y15 y17 y18 y19 y20 y22 y24 y25 ]          (6.24)

Assume now that sensors can be positioned only on the following sets of candidate variables z^f and z^y:

    z^f = [ f1 f6 f7 f10 f14 f16 f17 f23 f26 ]          (6.25)

    z^y = [ y4 y6 y16 y23 y27 ]          (6.26)

The classification algorithm renders flow rates f23 and f26, as well as concentration y23, observable. The flow rates f23 and f26 would be observable if only flow-rate measurements were used in the observability analysis. We consider them as "measured" to construct new canonical matrices corresponding to flows and concentrations. The structure of these matrices is as follows:

    D = [ I   G_UO^U   G_UO^M   G_RO,1   G_NR,1 ]
        [ 0     0        0      G_RO,2   G_NR,2 ]          (6.27)

where the columns of G_UO^U and G_UO^M correspond to unmeasurable and measurable unobservable variables, respectively, and G_R collects the rows that relate only measured variables.

Because the observable variables have been lumped with the measured variables, these two matrices contain "unmeasured" variables that are unobservable. In addition, the unobservable columns are partitioned and further rearranged so that measurable variables and unmeasurable variables are separated. These correspond to the superscripts M and U, respectively. Finally, rows with unobservable variables are put first. The two resulting canonical matrices are shown next:

[D^f, Equation (6.28): canonical incidence matrix for the flows, with columns ordered S2 S3 S13 S8 S11 S5 S20 S25 S1 S16 S10 S6 S7 S14 S17 S4 S9 S12 S15 S18 S19 S21 S22 S24 S27 S28 S23 S26]

[D^y, Equation (6.29): canonical incidence matrix for the concentrations, with columns ordered S2 S3 S9 S28 S13 S8 S12 S21 S26 S11 S4 S16 S6 S27 S1 S5 S7 S10 S14 S15 S17 S18 S19 S20 S22 S24 S25 S23]


Measurable unobservable and unmeasurable unobservable flow rates are given by f_UO^M = { f2, f3, f13 } and f_UO^U = { f1, f5, f8, f11, f16, f20, f25 }, respectively. The measurable unobservable and unmeasurable unobservable concentrations, y_UO^M and y_UO^U, are obtained in the same way from D^y. The possibility of locating new instruments is explored next. The last two rows of D^f are candidates for situation B. Of these:

• The first row has variables f19, f27, f23 and f26. Of these, y19 and y23 are observable, y27 can be measured, but y26 cannot. Thus, the possible action is to place a concentration sensor on S27. This will make y26 observable.
• The second row has variables f15, f18, f22, f24 and f23. Of these, all concentrations are observable.

We now search for equations with two unknown flow rates. These are candidates for situation A. Such equations are given by lines 4 and 9 of D^f but, unfortunately, two concentrations are unobservable in each. Finally, we look for an equation with three unmeasured flow rates for which all concentrations are measured. This is given by line 11 of D^y. The measured variables are y1, y5 and y10. One can locate a flowmeter on either S1 or S10, but not on S5, because it is unmeasurable.

Thus, situations B and C were applied; after a flowmeter is placed on either S1 or S10 and a concentration sensor on S27, an observability analysis reveals that no further action is possible. The following variables remain unobservable: f2, f3, f13, y2, y3, y8, y9, y13, y28. Unfortunately, these are all unmeasurable, and the system cannot be upgraded any further.

NOMENCLATURE

c_i^c: concentration of component i in all streams
c_i: chord i
C_R: incidence matrix for redundant systems
C(.): cost function
c_i: cost of sensor i
D: incidence matrix for linear dynamic systems
D_k: row incidence matrix
E_i: estimability of variable i
E_k*: threshold of estimability
f: vector of flow rates of streams
M: incidence matrix in bilinear systems
M1: set of variables where sensors can be placed
M_E: set of key variables for estimability
M_K: matrix of all cutsets
m_pk: pth column of M_K
n_k: minimum number of measurements whose deletion makes stream S_k unobservable
N_s: minimum number of measurements in a network
n_min: minimum number of streams that need to be measured to obtain observability of all variables of interest
n_S: number of streams or edges in a graph
n_sp: number of splitter nodes
n_U: number of units or nodes in a graph
n_y: minimum number of composition sensors
q: binary vector indicating whether sensors are located (1) or not (0) in variables
r_s,i: number of edges incident on splitter U_i
S_i: stream i
T: spanning tree; temperature
T_f: spanning tree corresponding to flow rate measurements in its chords
T_y: spanning tree corresponding to concentration measurements in its chords
T_le^y: spanning tree of a concentration sensor network for a linearly estimable network
T_le^f: spanning tree corresponding to flow rate sensors of a linearly estimable network
U_i: unit i
x: vector of state variables
x_M^f: set of measured flow rates
y_M^f: set of streams with concentration measurement
z^f: candidate sets for new measured flow rates
z^y: candidate sets for new measured concentrations

Greek Letter

α_pk: binary variable indicating if sensor p is in the set that makes S_k unobservable

REFERENCES

Ali Y. and S. Narasimhan. Sensor Network Design for Maximizing Reliability of Linear Processes. AIChE J., 39, 5, pp. 2237–2249 (1993).
Ali Y. and S. Narasimhan. Sensor Network Design for Maximizing Reliability of Bilinear Processes. AIChE J., 42, 9, pp. 2563–2575 (1996).


Bagajewicz M. Design and Retrofit of Sensor Networks in Process Plants. AIChE J., 43, 9, pp. 2300–2306 (1997).
Bagajewicz M. and M. Sánchez. Design and Upgrade of Non-Redundant Linear Sensor Networks. AIChE J., 45, 9, pp. 1927–1939 (1999).
Chmielewski D., T. E. Palmer and V. Manousiouthakis. Cost Optimal Retrofit of Sensor Networks with Loss Estimation Accuracy. AIChE Annual Meeting, Dallas (1999).
Douglas J. Conceptual Design of Chemical Processes. McGraw-Hill (1988).
Even S. Graph Algorithms. Computer Science Press (1979).
Luong M., D. Maquin, C. T. Huynh and J. Ragot. Observability, Redundancy, Reliability and Integrated Design of Measurement Systems. 2nd IFAC Symposium on Intelligent Components and Instrument Control Applications, Budapest (1994).
Madron F. Process Plant Performance, Measurement Data Processing for Optimization and Retrofits. Ellis Horwood, West Sussex, England (1992).
Maquin D., M. Luong and J. Paris. Dependability and Analytical Redundancy. IFAC Symposium on On-Line Fault Detection in the Chemical Process Industries, Newcastle, UK (1995).
Meyer M., J. M. Le Lann, B. Koehret and M. Enjalbert. Optimal Selection of Sensor Location on a Complex Plant Using a Graph Oriented Approach. Comp. & Chem. Eng., 18, Suppl., pp. S535–S540 (1994).
Ragot J., D. Maquin and G. Bloch. Sensor Positioning for Processes Described by Bilinear Equations. Revue Diagnostic et Sûreté de Fonctionnement, 2, 2, pp. 115–132 (1992).
Serth R. and W. Heenan. Gross Error Detection and Data Reconciliation in Steam Metering Systems. AIChE J., 32, 733 (1986).

CHAPTER 7

Data Reconciliation

DATA RECONCILIATION

The purpose of this chapter is to present the basic concepts of data reconciliation. Attention is given to those methods that provide analytical background for future chapters. However, for the sake of completeness, other methods for performing data reconciliation are also briefly discussed. The reader is also referred to three good books in the field (Madron, 1992; Narasimhan and Jordache, 2000; Sánchez and Romagnoli, 2000).

BACKGROUND

Because measurements contain errors, they need to be adjusted. Data reconciliation is the term used to refer to the problem of correcting plant measurements (flow rates, temperatures, pressures, concentrations, etc.) so that they conform to a certain chosen model. Usually, this model is given by a mathematical relation, typically a set of differential algebraic equations (DAE), like the one shown in Chapter 5 [Equations (5.1) and (5.2)]:

dx_1/dt = g_1(x_1, x_2)    (7.1)

g_2(x_1, x_2) = 0    (7.2)

The general data reconciliation problem is stated as follows: Given a set of measurement values of a subset of the state variables, z_M = (z_{M,1}, z_{M,2}), it is desired


to obtain the best estimate x̃_M of these measured state variables and of as many of the unmeasured variables x̃_U as possible.
When a statistical approach is taken, the problem of reconciliation consists of minimizing the weighted square of the difference between the measurements z_M and the estimates x̃_M at the N instances of time at which the measurements were made, using as weight the variance of the measurements Q. That is, it consists of solving the following optimization problem:

Min Σ_{k=0}^{N} [x̃_M(t_k) − z_{M,k}]^T Q^{-1} [x̃_M(t_k) − z_{M,k}]
s.t.
    dx̃_1/dt = g_1(x̃_1, x̃_2)    (7.3)
    g_2(x̃_1, x̃_2) = 0

This least-squares problem can be derived from Bayesian theory by using the assumption that the distribution of errors is normal. The reader can refer to the work done by Johnston and Kramer (1995) for a maximum likelihood derivation of the steady-state linear reconciliation model. In addition, Crowe (1996) showed that the same result can be derived by using information theory.

For the case where steady state is assumed, only one measurement is used, usually an average of several measurements. In addition, in some cases, accumulation terms, especially holdup changes in tanks, are still included. The model represented by Equations (7.1) and (7.2) is based on first principles. However, some dangerous exceptions have permeated into practice. Notoriously, the petroleum-refining industry is accustomed to measuring production in volumetric units (barrels). This has prompted practitioners to believe that a "volumetric balance" is the proper model. The pressure has been so intense that some data reconciliation software vendors, like DATACON™ (from Simulation Sciences, Brea, CA), agreed to introduce such models.

Hardware Redundancy

We consider now the case where more than one set of measurements is performed for each measured variable, that is, more than one instrument is used. In such case, Equations (7.1) and (7.2) are not altered, and the objective function includes the additional terms. Without loss of generality, assume that each variable has m measurements. Let Q_s be the variance of each set of


measurements s. Then, Equation (7.3) becomes:

Min Σ_{k=0}^{N} Σ_{s=1}^{m} [x̃_M(t_k) − z_{M,s,k}]^T Q_s^{-1} [x̃_M(t_k) − z_{M,s,k}]
s.t.
    dx̃_1/dt = g_1(x̃_1, x̃_2)    (7.4)
    g_2(x̃_1, x̃_2) = 0

It will be shown next that this can also be accomplished by assuming that one measurement per variable is made. The new "measurement" that needs to be used is

z_{M,k} = Q Σ_{s=1}^{m} Q_s^{-1} z_{M,s,k}    (7.5)

and its new variance should be

Q = [ Σ_{s=1}^{m} Q_s^{-1} ]^{-1}    (7.6)

Indeed,

Σ_{s=1}^{m} [x̃_M(t_k) − z_{M,s,k}]^T Q_s^{-1} [x̃_M(t_k) − z_{M,s,k}]
    = [x̃_M(t_k)]^T {Σ_{s=1}^{m} Q_s^{-1}} x̃_M(t_k)
      − 2 {Σ_{s=1}^{m} [z_{M,s,k}]^T Q_s^{-1}} x̃_M(t_k)
      + Σ_{s=1}^{m} [z_{M,s,k}]^T Q_s^{-1} z_{M,s,k}    (7.7)

In turn, the proposed alternative objective function

Z = [x̃_M(t_k) − z_{M,k}]^T Q^{-1} [x̃_M(t_k) − z_{M,k}]    (7.8)

can be rewritten, in terms of the definitions of Q and z_{M,k}, as follows:

Z = [x̃_M(t_k)]^T Q^{-1} x̃_M(t_k) − 2 [x̃_M(t_k)]^T Q^{-1} z_{M,k} + [z_{M,k}]^T Q^{-1} z_{M,k}    (7.9)


FIGURE 7.1. Ghost units.

Thus, because the first two terms of Equations (7.7) and (7.9) are the same and the third terms are constants, both objective functions differ only by a constant and, therefore, the two problems can be considered equivalent. Alternatively, one might introduce a ghost unit (Figure 7.1) and work within the framework of Equation (7.4), adding a new equation representing just the equality of the properties of both streams.
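The equivalence can be verified numerically. The sketch below uses hypothetical numbers and takes the merged "measurement" as the inverse-variance weighted combination of the individual readings, in the spirit of Equations (7.5) and (7.6); the multi-sensor and merged objectives then differ by the same constant at any trial point:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_sensors = 4, 3
Qs = [np.diag(rng.uniform(0.5, 2.0, n_vars)) for _ in range(n_sensors)]  # per-sensor variances
zs = [rng.normal(100.0, 1.0, n_vars) for _ in range(n_sensors)]          # per-sensor readings

# Merged measurement and its variance, in the spirit of Equations (7.5)-(7.6).
Q = np.linalg.inv(sum(np.linalg.inv(Qi) for Qi in Qs))
z_merged = Q @ sum(np.linalg.inv(Qi) @ zi for Qi, zi in zip(Qs, zs))

def multi_obj(x):
    """Sum of weighted squared deviations over all sensor sets."""
    return sum((x - zi) @ np.linalg.inv(Qi) @ (x - zi) for Qi, zi in zip(Qs, zs))

def merged_obj(x):
    """Single-measurement objective built from the merged data."""
    return (x - z_merged) @ np.linalg.inv(Q) @ (x - z_merged)

# The difference is independent of x, so both objectives share the same minimizer.
x1 = rng.normal(100.0, 2.0, n_vars)
x2 = rng.normal(100.0, 2.0, n_vars)
diff1 = multi_obj(x1) - merged_obj(x1)
diff2 = multi_obj(x2) - merged_obj(x2)
print(abs(diff1 - diff2))
```

Since the two objectives differ by a constant, either form can be minimized; the merged form is simply cheaper.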

LINEAR DATA RECONCILIATION

In commercial applications, data reconciliation is mostly performed involving flow rates and tank inventories, that is, using material balances. The dynamic model of a material balance in a process plant can be represented by the following differential algebraic system of equations (DAE):

dw/dt = A f    (7.10)

C f = 0    (7.11)

where f are the flows of the different streams connecting the units and w are the holdups of the respective units. We saw in Chapter 5 that observable unmeasured variables can only be calculated by using redundant and nonredundant measured variables through Equation (5.20). Thus, the only measurements that are in conflict are the redundant measurements, and, therefore, data reconciliation only makes sense for redundant variables, which should satisfy (see Chapter 5):

G_R x_R = 0    (7.12)

When there is redundant measured holdup, matrix G_R has the following structure:

G_R = [ A_R   −B_R ]
      [ C_R     0  ]    (7.13)


Therefore, Equations (7.10) and (7.11) are rewritten as follows:

B_R dw_R/dt = A_R f_R    (7.14)

C_R f_R = 0    (7.15)

In addition, vector x_M is now partitioned into redundant and nonredundant components, that is, x_M = [x_R  x_NR]. As for the variance matrix, we have:

Q = [ Q_R    0   ]
    [ 0    Q_NR  ]    (7.16)

Finally, we partition Q_R and Q_NR into their flow and holdup parts, that is:

Q_R  = [ Q_R,F    0    ]        Q_NR = [ Q_NR,F    0     ]
       [ 0      Q_R,W  ]               [ 0       Q_NR,W  ]    (7.17)

It is a common practical (but not theoretical) assumption in steady-state data reconciliation that measurements are independent, that is, there is no covariance between different variables. In those cases, nonredundant variables have to be accepted at measured face value. When covariances are not zero, adjustments of nonredundant variables can be made. In practice, however, there is no good information about the values of these variances and covariances, so the usual assumption is that these matrices are diagonal. In the dynamic case, because several measurements are available, the measurements are said to present temporal redundancy. This redundancy allows certain fitting or filtering to be performed. These variables are called self-redundant.

STEADY-STATE LINEAR DATA RECONCILIATION

We now analyze the solution of the linear data reconciliation problem. Consider the case where no holdup change takes place or negligible holdup is assumed. In such case, only flow rates are estimated. Therefore, we write the problem as follows:

Min [f̃_M − f_M^+]^T Q^{-1} [f̃_M − f_M^+]
s.t.
    D f̃ = 0    (7.18)


where f̃ = [f̃_M  f̃_U]^T. Once D has been put into its canonical form, one can write:

Min [f̃_R − f_R^+]^T Q_{R,F}^{-1} [f̃_R − f_R^+]
s.t.
    C_R f̃_R = 0    (7.19)

This problem and its solution were the object of the seminal article of this field, written by Kuehn and Davidson (1961). It is a quadratic programming problem with linear equality constraints. The solution can be obtained by determining the stationary point of the Lagrangian function:

L = [f̃_R − f_R^+]^T Q_{R,F}^{-1} [f̃_R − f_R^+] + λ^T C_R f̃_R    (7.20)

This stationary point is:

f̃_R = [I − Q_{R,F} C_R^T (C_R Q_{R,F} C_R^T)^{-1} C_R] f_R^+    (7.21)

After reconciliation is performed, the following estimates, similar to Equation (5.20), are obtained:

f̃_NR = f_NR^+    (7.22)

f̃_O = C_RO f̃_R + C_NRO f̃_NR    (7.23)
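The projection of Equation (7.21) is a one-line computation once C_R and Q_{R,F} are in hand. A minimal sketch with made-up numbers, for a single unit with two inlets and one outlet:

```python
import numpy as np

# Balance f1 + f2 - f3 = 0 for one unit; all three flows measured (redundant).
C = np.array([[1.0, 1.0, -1.0]])
f_plus = np.array([50.4, 49.1, 101.2])   # raw measurements
Q = np.diag([1.0, 1.0, 2.0]) ** 2        # measurement variances

# Equation (7.21): project the measurements onto the space satisfying C f = 0.
QCt = Q @ C.T
f_hat = f_plus - QCt @ np.linalg.solve(C @ QCt, C @ f_plus)

print(f_hat, C @ f_hat)   # the reconciled flows close the balance exactly
```

Note how the least precise measurement (the outlet, with the largest variance) absorbs the largest share of the adjustment.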

The steady-state model has posed some practical challenges:

(1) Plants are never truly at a steady state. Because practitioners resort to averaging several measurements to obtain one single number per stream to use in the objective function of Equation (7.19), they average process variations as well. Therefore, these variations are treated as random errors.
(2) Tank holdup measurements cannot be used in steady-state data reconciliation models. To add redundancy, holdup changes are modeled as pseudostreams. This will be discussed later in this chapter.
(3) It has always been a problem to pick the values of the variance matrix Q. Typically, covariances are ignored and a diagonal form of Q is used. In the absence of hard data, these variances are chosen by using vendor information or, in some cases, the standard deviation of the signal.
(4) Because process variations are included in the measurements, the variance Q should also be adjusted to take them into account. This issue is also discussed later in this chapter.


Precision of the Estimates

Once the steady-state data reconciliation problem is solved, it is desired to know the precision of the estimates obtained. The covariance of linear combinations of variables, z_i = Σ_k γ_ik x_k, is defined as follows:

σ̃_ij = ∫...∫ (Σ_k γ_ik x_k − z̄_i)(Σ_s γ_js x_s − z̄_j) p(x) dx_1 dx_2 ... dx_n    (7.24)

where z̄_i = Σ_k γ_ik x̄_k and x̄_k is the mean value of the measurements of variable x_k. After some bookkeeping one can write

σ̃_ij = Σ_k Σ_s γ_ik γ_js ∫∫ (x_k − x̄_k)(x_s − x̄_s) p(x) dx_k dx_s = Σ_k Σ_s γ_ik γ_js σ_ks    (7.25)

Thus, in general, if z = Γx, then the variance of z is given by:

Q̃ = Γ Q Γ^T    (7.26)

and consequently, the variances of the flow rate estimates obtained from reconciliation are given by:

Q̃_R,F = Q_R,F − Q_R,F C_R^T (C_R Q_R,F C_R^T)^{-1} C_R Q_R,F    (7.27)

Note that:

• C_R Q̃_R C_R^T = 0, that is, Q̃_R is orthogonal to the constraints.
• Q̃_R is singular.
• diag Q̃_R < diag Q_R, that is, diag [Q_R C_R^T (C_R Q_R C_R^T)^{-1} C_R Q_R] > 0. In other words, the standard deviations of the estimates are always smaller than those of the measurements. This confirms the somewhat intuitive knowledge that the more measurements one uses to estimate a value, the more precise the estimate becomes.
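These properties can be checked directly on a small example (hypothetical variances, single redundant balance f1 + f2 − f3 = 0):

```python
import numpy as np

C = np.array([[1.0, 1.0, -1.0]])   # redundant balance f1 + f2 - f3 = 0
Q = np.diag([1.0, 1.5, 2.0])       # measurement variances

QCt = Q @ C.T
Q_tilde = Q - QCt @ np.linalg.solve(C @ QCt, QCt.T)   # Equation (7.27)

print(np.allclose(C @ Q_tilde @ C.T, 0.0))    # orthogonality to the constraints
print(np.linalg.matrix_rank(Q_tilde))          # 2 < 3: the matrix is singular
print(np.all(np.diag(Q_tilde) < np.diag(Q)))   # every estimate is more precise
```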

Variance of Observable Quantities

Using Equation (7.26) one obtains:

Q̃_O = [C_RO  C_SRO] [ Q̃_R   0   ] [C_RO  C_SRO]^T
                     [ 0    Q_NR ]    (7.28)


FIGURE 7.2. Flowsheet for Example 7.1.

Example 7.1

Consider first the system of Figure 7.2 and the corresponding set of measured data given in Table 7.1. The system matrix for this case is:

        S1  S2  S3  S4  S5  S6  S7
      [  1  -1  -1                 ]
      [      1      -1             ]
C =   [          1      -1         ]    (7.29)
      [              1   1  -1     ]
      [                      1  -1 ]

The reader can verify that the canonical form of this matrix is:

        S7  S2  S3  S4  S5  S1  S6
      [  1                      -1 ]
      [      1   1          -1     ]
C =   [          1      -1         ]    (7.30)
      [              1   1      -1 ]
      [                      1  -1 ]

From this canonical form one can conclude that all measured streams are redundant, that nonredundant variables are not present, and that S2, S3, S4 and S5 are unobservable variables, whereas S7 is observable. Once data reconciliation is performed, the following results are obtained (Table 7.2):

TABLE 7.1. Measurements for Figure 7.2.

Stream    Measurement    Standard Deviation
S1        101.3          2.1
S6        102.7          1.9


TABLE 7.2. Data Reconciliation Results for Figure 7.2.

Stream    Measurement    Standard Deviation    Reconciled/Estimated Value    Standard Deviation of Estimate
S1        101.3          2.1                   100.61                        1.985
S6        102.7          1.9                   100.61                        1.985
S7        —              —                     100.61                        1.985
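The redundancy classification itself can be automated. The sketch below assumes a plausible reading of the Figure 7.2 flowsheet (S1 splits into S2 and S3; S2 feeds S4 and S3 feeds S5; S4 and S5 remix into S6, which leaves as S7); the redundant relation among the measured streams then falls out of the left null space of the unmeasured columns of C:

```python
import numpy as np

# Assumed incidence matrix for Figure 7.2 (columns S1..S7).
C = np.array([
    [1, -1, -1,  0,  0,  0,  0],   # S1 = S2 + S3
    [0,  1,  0, -1,  0,  0,  0],   # S2 = S4
    [0,  0,  1,  0, -1,  0,  0],   # S3 = S5
    [0,  0,  0,  1,  1, -1,  0],   # S4 + S5 = S6
    [0,  0,  0,  0,  0,  1, -1],   # S6 = S7
], dtype=float)

measured = [0, 5]                         # S1 and S6 carry flowmeters
unmeasured = [j for j in range(7) if j not in measured]

# Row combinations y with y^T C_U = 0 yield relations among measured streams only.
_, s, vt = np.linalg.svd(C[:, unmeasured].T)
y = vt[-1]                                # left null space basis (1-dimensional here)
relation = y @ C
relation = relation / relation[0]         # normalize on the S1 coefficient

print(np.round(relation, 10))             # -> S1 - S6 = 0 is the redundant equation
```

With only this one redundant equation, reconciliation reduces to the weighted averaging of the S1 and S6 measurements; every other stream is either computed from the estimates or unobservable.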

Assume now that a measurement is added in stream S5. Then the new canonical form of the system matrix is:

        S7  S2  S3  S4  S1  S6  S5
      [  1                  -1     ]
      [      1      -1           1 ]
C =   [          1              -1 ]    (7.31)
      [              1      -1   1 ]
      [                  1  -1     ]

As a result of the addition of this single measurement, which is nonredundant, all the unmeasured variables become observable. Reconciled values are the same, because the redundant system has not changed. Finally, the previously observable variable S7 is not calculated by using the new nonredundant measurement. Thus, its variance does not change.

Presence of Tanks

Tanks are very often an important part of a chemical plant. It has, therefore, been of great interest to practitioners to be able to include all the transfers of raw material between tanks, and from tanks to processes or vice versa (called transactions, or custody transfers when done at battery limits), as part of data reconciliation. However, tank holdup changes do not fit the description of a steady state. Thus, when steady state is assumed over a period of time, the change in holdup is usually divided by the time elapsed and considered as a pseudostream leaving (or entering) the system (Figure 7.3). If the level is measured, then the pseudostream is considered a measured stream. Real streams entering or leaving the tank are considered separately. Because transactions between tanks are reported, this conversion has been very useful in adding a substantial number of redundant streams in refinery installations and has proven valuable for refinery-wide oil accounting and oil loss assessment.


FIGURE 7.3. Use of pseudostreams in tanks.
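The conversion to a pseudostream is simple arithmetic, sketched below with hypothetical tank data:

```python
# Hypothetical tank: holdup inferred from level, converted to a pseudostream.
area = 12.0                            # m^2, tank cross-sectional area
level_start, level_end = 3.40, 3.10    # m, measured level at start/end of the period
dt = 1.0                               # h, length of the accounting period

# Holdup change divided by elapsed time: a pseudostream "leaving" the tank
# (positive here because the level dropped).
pseudo = area * (level_start - level_end) / dt   # m^3/h

# The tank node then balances like any other unit: in - out + pseudo = 0.
f_in, f_out = 10.1, 13.5               # measured real streams, m^3/h
residual = f_in - f_out + pseudo       # imbalance to be removed by reconciliation
print(pseudo, residual)
```

Because the level is measured, the pseudostream enters the reconciliation as one more measured (and hence potentially redundant) stream.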

Variance Estimation

If measurements are independent and are not correlated, the variance-covariance matrix Q is diagonal. In such case, the elements of the diagonal are the variances of the individual measurements. In the case of steady-state data reconciliation, estimates of these values can be obtained by calculating the variance of the distribution of data around the mean value, that is:

x̄_i = (1/n) Σ_{k=1}^{n} x_{i,k}    (7.32)

cov(x_i, x_j) = [1/(n−1)] Σ_{k=1}^{n} (x_{i,k} − x̄_i)(x_{j,k} − x̄_j)    (7.33)
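The direct method of Equations (7.32) and (7.33) is just the sample mean and sample covariance of the raw data. A sketch on simulated steady-state data (hypothetical true standard deviations of 2.0 and 0.5):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_sigma = np.array([2.0, 0.5])
data = 10.0 + rng.normal(0.0, true_sigma, size=(n, 2))   # steady state + random error

x_bar = data.mean(axis=0)                 # Equation (7.32)
dev = data - x_bar
cov = dev.T @ dev / (n - 1)               # Equation (7.33)

print(np.sqrt(np.diag(cov)))              # close to the true [2.0, 0.5]
```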

This is called the direct method. It is a correct procedure if the two aforementioned assumptions, independence and steady state, hold. In addition, outliers must not be present. Because the system is never at a true steady state, the above formulas of the direct method incorporate the process variations, that is, the variance of the natural process oscillations or changes, as part of the measurement variance. In a simple case like a ramp function, for example, the variance will be a composite of one half the change in true value of the measured variable during the sampling interval and the true variance. To ameliorate this problem, and to assess the existence of variable interdependence (a nondiagonal variance matrix), an indirect method was proposed. This method was originally proposed by Almasy and Mah (1984), who used the covariance matrix of the constraint residuals (r = C f^+) and minimized the sum of the squares of its off-diagonal elements. The method was later slightly modified by Darouach et al. (1989), who proposed an iterative procedure based on the solution of a nonlinear optimization resulting from using a maximum likelihood estimator. Finally, Keller et al. (1992) extended this work to nondiagonal covariance matrices. All these approaches still suffer from the problem that they do not consider the possible presence of outliers.


Chen et al. (1997) proposed the application of an M-estimator that applies a weight to each data point based on its distance to the mean. They called this the robust indirect method. The discussion of estimation using dynamic data is omitted. Although all these methods for variance estimation have been tested using computer simulations and have shown their power in these controlled experiments, there is no assessment of how they behave in practice. In particular, data reconciliation software notoriously lacks any module to perform such estimation, and there are no published results about the efficiency of these methods in practice.

NONLINEAR STEADY-STATE DATA RECONCILIATION

Nonlinear data reconciliation refers to the solution of the problem outlined by Equations (7.3) through (7.7). Its steady-state version is:

Min [x̃_M − z_M]^T Q^{-1} [x̃_M − z_M]
s.t.
    g(x̃) = 0    (7.34)

For process plants, z_M includes the typical state variables (flow rates, concentrations, temperatures and pressures), and the model g(x̃) can include any type of unit operations and equipment. In addition, x̃ usually contains parameters that are not measured directly. Several methods have been proposed to solve this problem, especially when g(·) is bilinear. Because the solution is supposed to be close to the measurements (unless gross errors are present), one can linearize g(x̃), perform a classification of variables and extract the redundant system of equations. Once this is done, one can solve successively, updating the Jacobian in each iteration until convergence is achieved. Other approaches using nonlinear programming, such as the popular sequential quadratic programming (SQP) codes, can also be used.

Commercially available software performs nonlinear steady-state data reconciliation to a good extent. For example, DATACON (version 3.0), offered by Simulation Sciences, Brea, CA, is capable of performing material, energy and component balances, all of them simultaneously, and it reconciles temperature, pressure, concentration and flow rates. In addition, it performs certain parameter estimations, such as heat transfer coefficients of heat exchangers. In the absence of systematic errors and leaks in the system, there is no reason why these models cannot be made as sophisticated and complex as the optimization techniques used to solve them permit. However, software vendors are reluctant to introduce such models. One of the reasons is a certain conviction that undetected gross errors may still be largely amplified by the reconciliation, especially


when nonlinearities correspond to very nonideal systems. Bagajewicz and Mullick (1995) discussed this issue in more detail.

DYNAMIC DATA RECONCILIATION

Early work in dynamic data reconciliation is rooted in the problem of process state estimation using the concept of filtering. Lately, the problem has been solved by using the concept of model-based data smoothing. Consider the three types of state estimation problems illustrated in Figure 7.4. Assume an estimate of the state of the system is desired at time t. When only measurement values prior to the time of prediction t are used, including the measurement at time t, the estimation is called filtering. When time t is not included, the estimation is called prediction, and, finally, when data for times larger than t are used, the estimation process is called smoothing. When discrete measurements are used, the estimators are called discrete estimators.

The design of sensor networks for the specific purpose of using dynamic data reconciliation is an area of incipient research, and there is very little work performed, if any. Thus, we will limit the scope of this section to citing the different work performed on dynamic data reconciliation. For the case of linear systems, one-step integration-reconciliation procedures, many of them rooted in Kalman filtering (Kalman, 1960), have been proposed. Stanley and Mah (1977) showed how Kalman filtering could be adapted to take advantage of spatial and temporal redundancy in a quasi-steady-state condition. Darouach and Zasadzinski (1991) proposed a backward difference approximation and a recursive technique to solve the constrained least-squares

FIGURE 7.4. State estimation problems. Adapted from Gelb, 1974.


optimization problem. Rollins and Devanathan (1993) improved on the estimation accuracy by using a maximum likelihood function and proposing two estimators that are later averaged.

The above-summarized line of work on local estimators is useful for control and monitoring purposes, and in the absence of gross errors (biases and leaks), they all perform rather well. However, the detection of gross errors shares some of the difficulties of tests based on steady-state models, which are reviewed in Chapter 15.

Other methods have been presented for dynamic data reconciliation. Ramamurthi et al. (1993) proposed a technique based on a successively linearized horizon, Liebman et al. (1992) used orthogonal collocation, and Albuquerque and Biegler (1995) relied on a discretization of the system of DAE using Runge-Kutta methods. Work has also been done by using neural networks. Terry and Himmelblau (1993) proposed their use in data rectification, a concept they claimed to be more general than data reconciliation. A comparison with traditional methods was performed later by Karjala and Himmelblau (1994). Himmelblau (1994) and Karjala and Himmelblau (1996) showed more examples of how data can be rectified without the need of models. Later, Du et al. (1997) proposed the use of an auto-associative neural network for steady-state systems. Finally, Reddy and Mavrovouniotis (1998) proposed the use of an input-training neural network for gross error detection and size estimation.

Setting aside the computational volume challenges, these methods, as any one-step procedures, will at best lessen the fluctuations of measurement data, but rarely produce a smooth profile. For plant day-to-day management and economics, such fluctuating patterns are not satisfactory. Indeed, a fluctuating, rather than a smooth, reconciled pattern does not provide a simple description of the data, so that, for example, daily balances cannot be performed without further processing. Smooth stream profiles, based on a few parameters, are amenable to integration and thus easy to use and inexpensive to store. To ameliorate this deficiency of the aforementioned methods, Bagajewicz and Jiang (1997) proposed a method based on the formal integration of the system [Equation (7.10)], a polynomial representation and a reconciliation using the polynomial parameters as unknowns.

NOMENCLATURE

A: incidence matrix
A_R: incidence matrix for redundant systems
B: incidence matrix
B_R: incidence matrix for redundant systems
C: incidence matrix
C_R: incidence matrix for redundant systems
f: vector of flow rates of streams
f̃: estimates of flow rates
f_M^+: measured values of redundant flow rates
f̃_O: estimates of observable unmeasured flow rates
f̃_R: estimates of redundant flow rates
f̃_NR: estimates of nonredundant flow rates
Q: variance matrix of measurements
Q_s: variance matrix of measurements of set s
Q_R: variance matrix of redundant measurements
Q̃_R: variance matrix of estimates
Q_R,F: variance matrix of redundant flow rate measurements
Q_R,W: variance matrix of redundant holdup measurements
S_i: stream i
U_i: unit i
w: vector of holdups of units
w_R: vector of redundant holdups of units
x: vector of state variables
x̃: estimates of state variables x
z_M,k: set of measurements at time k

Greek Letters

σ̂: precision of estimates
σ_ij: covariance of measurements of variables x_i and x_j

REFERENCES

Albuquerque J. S. and L. T. Biegler. Decomposition Algorithms for On-line Estimation with Nonlinear DAE Models. Comp. & Chem. Eng., 19, pp. 1031 (1995).
Almasy G. A. and R. S. H. Mah. Estimation of Measurement Error Variances from Process Data. Ind. Eng. Chem. Process Des. Dev., 23, pp. 779 (1984).
Bagajewicz M. and Q. Jiang. An Integral Approach to Dynamic Data Reconciliation. AIChE J., 43, pp. 2546 (1997).
Bagajewicz M. and S. Mullick. Reconciliation of Plant Data. Applications and Future Trends. AIChE Spring Meeting and First International Plant Operations and Design Conference, Houston, Texas, March (1995).
Chen J., A. Bandoni and J. A. Romagnoli. Robust Estimation of Measurement Error Variance/Covariance from Process Sampling Data. Comp. & Chem. Eng., 21, 6, pp. 593–600 (1997).
Crowe C. M. Formulation of Linear Data Reconciliation using Information Theory. Chem. Eng. Sci., 51, 12, pp. 3359–3366 (1996).


Darouach M., R. Ragot, M. Zasadzinski and G. Krzakala. Maximum Likelihood Estimator of Measurement Error Variances in Data Reconciliation. IFAC AIPAC Symp., 2, pp. 135–139 (1989).
Darouach M. and M. Zasadzinski. Data Reconciliation in Generalized Linear Dynamic Systems. AIChE J., 37, 2, pp. 193 (1991).
Du Y., D. Hodouin and J. Thibault. Use of a Novel Auto-Associative Neural Network for Nonlinear Steady-State Data Reconciliation. AIChE J., 43, 7, pp. 1785 (1997).
Gelb A., editor. Applied Optimal Estimation. The M.I.T. Press, Cambridge, Massachusetts (1974).
Himmelblau D. M. Rectification of Data in a Dynamic Process Using Artificial Neural Networks. Proceedings of the Process Systems Engineering (PSE) International Symposium, Kyongju, Korea (1994).
Johnston L. P. M. and M. A. Kramer. Maximum Likelihood Data Rectification. Steady-State Systems. AIChE J., 41, 11 (1995).
Kalman R. E. New Approach to Linear Filtering and Prediction Problems. J. Basic Eng., ASME, 82D, 35 (1960).
Karjala T. W. and D. M. Himmelblau. Dynamic Data Rectification by Recurrent Neural Networks vs. Traditional Methods. AIChE J., 40, 11, pp. 1865 (1994).
Karjala T. W. and D. M. Himmelblau. Dynamic Rectification of Data via Recurrent Neural Nets and the Extended Kalman Filter. AIChE J., 42, pp. 2225 (1996).
Keller J. Y., M. Zasadzinski and M. Darouach. Analytical Estimator of Measurement Error Variances in Data Reconciliation. Comp. & Chem. Eng., 16, pp. 185 (1992).
Kuehn D. R. and H. Davidson. Computer Control. II. Mathematics of Control. Chem. Eng. Prog., 57, 44 (1961).
Liebman M. J., T. F. Edgar and L. S. Lasdon. Efficient Data Reconciliation and Estimation for Dynamic Processes Using Nonlinear Programming Techniques. Comp. & Chem. Eng., 16, 10/11, pp. 963 (1992).
Madron F. Process Plant Performance, Measurement Data Processing for Optimization and Retrofits. Ellis Horwood, West Sussex, England (1992).
Narasimhan S. and C. Jordache. Data Reconciliation & Gross Error Detection. Gulf Publishing Company, Houston, Texas (2000).
Ramamurthi Y., P. B. Situ and B. W. Bequette. Control-Relevant Dynamic Data Reconciliation and Parameter Estimation. Comp. & Chem. Eng., 17, 1, pp. 41–59 (1993).
Reddy V. N. and M. L. Mavrovouniotis. An Input-Training Neural Network Approach for Gross Error Detection and Sensor Replacement. Trans IChemE, 76, Part A, May (1998).
Rollins D. K. and S. Devanathan. Unbiased Estimation in Dynamic Data Reconciliation. AIChE J., 39, 8, pp. 1330 (1993).
Sánchez M. and J. Romagnoli. Data Processing and Reconciliation for Chemical Process Operations. Academic Press, San Diego, California (2000).
Stanley G. M. and R. H. S. Mah. Estimation of Flows and Temperatures in Process Networks. AIChE J., 23, 5, pp. 642 (1977).
Terry P. A. and D. M. Himmelblau. Data Rectification and Gross Error Detection in a Steady-State Process via Artificial Neural Networks. Ind. Eng. Chem. Res., 32, pp. 3020–3028 (1993).

CHAPTER 8

Design of Precise Sensor Networks

INTRODUCTION

In Chapter 6, the design of sensor networks to guarantee estimability was covered. In this chapter, a methodology for the grassroots design of a sensor network in a process plant with the goal of achieving a certain degree of precision in key variables is presented.

COST-OPTIMAL DESIGN

The cost-optimal design of a precise sensor network is obtained by solving the following problem (Bagajewicz, 1997):

Min Σ_{i∈M1} c_i q_i
s.t.
    σ_j(q) ≤ σ_j*    ∀ j ∈ M_P    (8.1)
    q_i ∈ {0, 1}     ∀ i ∈ M1

In the formulation of the objective function, it is assumed that there is only one potential measuring device, with associated cost c_i, for each variable. Furthermore, σ_j(q) represents the variance of the estimated value of variable x_j obtained after data reconciliation using Equation (7.27) or (7.28). This should be kept lower than the threshold value σ_j* for each variable or parameter in the set M_P.

In Chapter 5, the degree of observability of unmeasured variables and the degree of redundancy of measured variables were introduced. These two concepts were merged into the concept of degree of estimability. In Chapter 6, these


properties were used as goals to design and upgrade sensor networks. Estimability can, of course, be added as a separate constraint, as in the model below:

Min Σ_{i∈M1} c_i q_i
s.t.
    σ_j(q) ≤ σ_j*    ∀ j ∈ M_P    (8.2)
    E_j(q) ≥ E_j*    ∀ j ∈ M_E
    q_i ∈ {0, 1}     ∀ i ∈ M1

To obtain a smaller value of σ_j*, it may be required to increase the estimability of a variable. Therefore, when both constraints, precision and estimability, are included, very rarely will both be binding, that is, at their limiting values simultaneously. However, estimability constraints can be a way of requesting a level of reliability that cannot be quantified otherwise. These models can be solved by using the tree enumeration procedure described in Chapter 6. Recall that the constraint on precision (σ_j(q) ≤ σ_j*) cannot be explicitly written in an analytical form because the matrices involved change in dimension with the choices for q. Thus, this tree enumeration procedure requires the evaluation of the constraints at each node. In a recent work, Chmielewski et al. (1999) showed that the problem can in principle be reduced to a traditional MINLP formulation. Because details of this work are unpublished at this time, they are omitted.

MULTIPLE INSTRUMENTS AND HARDWARE REDUNDANCY

If more than one device is being considered as a potential candidate to be used in each variable measurement, the objective function requires the use of additional binary variables and additional constraints. Indeed, let n_i^m be the number of different alternative candidate measurement devices and let the cost of each of these candidates be given by c_{i,k} (k = 1, ..., n_i^m). Finally, for each variable x_i, introduce binary variables p_{i,k} (k = 1, ..., n_i^m) to determine which candidate will be used, that is,

p_{i,k} = { 1   if device k is used to measure variable i
          { 0   otherwise    (8.3)

Then, the total cost is now a function of p and is given by:

$$
C(p) = \sum_{\forall i} \sum_{k=1}^{n_i^m} c_{i,k}\, p_{i,k}
\tag{8.4}
$$


Therefore, the optimization problem is written as:

$$
\begin{aligned}
\min\ & \sum_{\forall i} \sum_{k=1}^{n_i^m} c_{i,k}\, p_{i,k} \\
\text{s.t.}\ & \sigma_j(p) \le \sigma_j^* \qquad \forall j \in M_P \\
& \sum_{k=1}^{n_j^m} p_{j,k} \le m_j \qquad \forall j \in M_1 \\
& p_{i,k} \in \{0, 1\} \qquad \forall i \in M_1,\ \forall k = 1, \dots, n_i^m
\end{aligned}
\tag{8.5}
$$

where the constraint

$$
\sum_{k=1}^{n_j^m} p_{j,k} \le m_j
\tag{8.6}
$$

guarantees that at most m_j devices are assigned to each variable. When m_j = 1, constraint (8.6) restricts the model by not allowing more than one measurement per variable; that is, only systems with spatial redundancy and no hardware redundancy are considered. The standard deviation σ_j(p) can be easily obtained for m_j > 1 through slight modifications of the variance estimation Equation (7.27). In Chapter 7, two ways of performing data reconciliation with multiple measurements of the same variable were presented: appropriately lumping the measurements into one and reformulating its variance, or adding a ghost stream. Variance estimation formulas were also included.

Example 8.1

Consider the process flow diagram of Figure 8.1. Flow rates are given by x = (150.1, 52.3, 97.8, 97.8). Assume that, for each stream, flowmeters of precision 3%, 2% and 1% are available at costs 800, 1500 and 2500, respectively, regardless of size. Precision is only required for variables f_1 and f_4, that is, M_P = {S_1, S_4}, with σ_1* = 1.5% and σ_4* = 2.0%.

FIGURE 8.1. Example 8.1.
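The enumeration behind models (8.1) and (8.5) can be sketched in a few lines of Python for a two-node network with the balances implied by the flow rates above (f1 = f2 + f3 and f3 = f4). The costs, absolute meter standard deviations and thresholds below are hypothetical (they are not the percentage data of Example 8.1), and, for simplicity, the sketch requires every flow to be estimable and enumerates exhaustively instead of using the tree search of Chapter 6:

```python
import itertools
import numpy as np

# Balances of the two-node network: f1 = f2 + f3 and f3 = f4.
A = np.array([[1.0, -1.0, -1.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])

def estimate_cov(measured, sigma):
    """Covariance of the reconciled estimates of all flows, or None when
    the measured subset does not make every flow estimable."""
    _, _, Vt = np.linalg.svd(A)
    N = Vt[A.shape[0]:].T                     # null-space basis: x = N t
    C = np.eye(A.shape[1])[list(measured)]    # picks the measured rows
    W = np.diag([1.0 / sigma[i] ** 2 for i in measured])
    M = N.T @ C.T @ W @ C @ N                 # Fisher information of t
    if np.linalg.matrix_rank(M) < N.shape[1]:
        return None
    return N @ np.linalg.inv(M) @ N.T         # cov of x_hat = N t_hat

def min_cost_design(cost, sigma, targets):
    """Exhaustive version of (8.1): cheapest q with sigma_j(q) <= sigma_j*."""
    n, best = A.shape[1], None
    for r in range(1, n + 1):
        for measured in itertools.combinations(range(n), r):
            S = estimate_cov(measured, sigma)
            if S is None:
                continue
            if all(np.sqrt(S[j, j]) <= s_max for j, s_max in targets.items()):
                c = sum(cost[i] for i in measured)
                if best is None or c < best[0]:
                    best = (c, measured)
    return best

cost = [250, 250, 250, 300]       # hypothetical meter costs
sigma = [2.0, 1.5, 1.5, 1.5]      # hypothetical meter standard deviations
best = min_cost_design(cost, sigma, {0: 1.5, 3: 1.2})
print(best)
```

With these numbers, no subset of three or fewer meters meets both thresholds, so the search returns the fully measured (and hence redundant) network.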


TABLE 8.1. Solutions of the Precision Constrained Problem.

  Solution   S1   S2   S3   S4
  A          —    2%   2%   —
  B          —    2%   —    2%

Two solutions are obtained, featuring a cost of C = 3000. The corresponding meters are shown in Table 8.1.

Assume now that the cost of the 3% sensors drops to 700. Then the optimal solution is no longer the one shown in Table 8.1. In this case, two solutions of equal cost (C = 2900) are obtained, shown in Table 8.2. These solutions are redundant.

Although the solutions obtained with this method are low in cost, some feature nonredundant sensor networks. Though precision is achieved, gross errors are impossible to detect. Therefore, the feasible region should contain redundant networks. If at least one degree of redundancy is requested, that is, if the threshold for the estimability of the key variables is two, then there are two solutions with a cost of C = 3100 (Table 8.3).

MAXIMUM PRECISION MODELS

Maximum precision models include all those models developed for sensor network design that contain a measure of the estimation quality of parameters or state variables in the objective function. In most models, the variance of the estimates is minimized, sometimes taking the cost into account as a constraint. Madron and Veverka (1992) proposed to design sensor networks to minimize the mean square error of the required quantities by solving the following problem:

$$
\min\ \frac{1}{n_M} \sum_{i=1}^{n_M} \sigma_i^2
\tag{8.7}
$$

TABLE 8.2. Solutions of the Precision Constrained Problem (New Cost of Instruments).

  Solution   S1   S2   S3   S4
  C          3%   3%   2%   —
  D          3%   3%   —    2%


TABLE 8.3. Solutions of the Precision Constrained Problem (Redundancy Required).

  Solution   S1   S2   S3   S4
  E          3%   3%   2%   —
  F          3%   3%   —    2%

where σ_i is the standard deviation of the ith required quantity and n_M is the total number of required measurements. This model was later efficiently solved by Madron (1992), who used the concept of minimum spanning tree. In essence, the method is the same as the one presented in Chapter 6 for minimal sensor networks. Indeed, all that is needed is to use σ_i²/n_M as the cost of each stream. If all variables need to be observable, then the same method applies. The algorithm stops when all the spanning trees of distance one are more expensive than the current solution. However, the costs are not fixed, and, therefore, optimality is not guaranteed.

Alhéritière et al. (1997) proposed the following nonlinear optimization problem to design a system that will maximize the precision of one variable:

$$
\begin{aligned}
\min\ & \sigma_\theta \\
\text{s.t.}\ & \sum_{j \in M_1} c_j(\sigma) = c_T \\
& c_j \le c_j^U \qquad \forall j \in M_1
\end{aligned}
\tag{8.8}
$$

where c_T is the total resource allocated to all sensors and σ is the vector of measurement standard deviations. The standard deviation of the estimated parameter, σ_θ, can be expressed in terms of σ through:

$$
\sigma_\theta^2 \approx \sum_i s_i \sigma_i^2
\tag{8.9}
$$

where the coefficients s_i take into account the redundancy of the system and the instrument location is assumed a priori. This type of nonlinear programming formulation has some deficiencies:

(1) The continuous representation of variables leads to nondiscrete values for the number of sensors.
(2) If the set of measurements leading to the estimation of the parameter is redundant, a smaller variance for each variable can be obtained, and, therefore, fewer measurements can accomplish the same parameter variance.


(3) Different sets of measurements can lead to the estimation of the same parameter. The procedure chooses one a priori. In other words, the coefficients s_i in Equation (8.9) are not fixed, but rather vary with the choice of instrumentation.

GENERALIZED MAXIMUM PRECISION MODEL

In this section, a generalized model for maximum precision is presented. The model considers the minimization of a weighted sum of the variances of the parameters:

$$
\begin{aligned}
\min\ & \sum_{j \in M_P} a_j \sigma_j^2(q) \\
\text{s.t.}\ & \sum_{i \in M_1} c_i q_i \le c_T \\
& q_i \in \{0, 1\} \qquad \forall i \in M_1
\end{aligned}
\tag{8.10}
$$

When hardware redundancy is used, an upper bound on the cost for each instrument (or on the number of instruments, if the costs are all equal) can be imposed. Lower bounds on cost are not needed. The generalized model:

- Can provide a design for multiple parameter estimation, which Equation (8.8) cannot.
- Obtains more realistic results, because the binary variables respect the discrete nature of sensors.
- Takes into account redundancy and all possible forms of obtaining the parameters.
- Contains the minimum overall variance model (Madron and Veverka, 1992; Madron, 1992) as a particular case in which c_T is a large value, that is, the constraint on total cost is dropped.

Equation (8.8) is also a particular case of Equation (8.10). Indeed, the objective function in Equation (8.8) can be obtained by assuming one parameter in Equation (8.10). In addition, when q is selected a priori, as in Equation (8.8), the problem becomes an NLP; the only difference between Equation (8.10) and Equation (8.8) is then that the cost constraint is an equality in Equation (8.8). However, it can easily be proved that both problems have the same solution. Indeed, assume that a solution σ̃ of Equation (8.10) is such that the constraint on cost is not binding, that is,

$$
\sum_{i \in M_1} c_i(\tilde{\sigma}) < c_T
\tag{8.11}
$$


Because the functions c_j(σ) are continuous functions of the variance σ, there exists a solution σ̂ in the neighborhood of σ̃ such that

$$
c_T > \sum_{i \in M_1} c_i(\hat{\sigma}) > \sum_{i \in M_1} c_i(\tilde{\sigma})
\tag{8.12}
$$

Because the cost is monotonically decreasing with σ_j, the standard deviations can be lowered until the cost constraint is binding, producing a reduction in the objective function. Q.E.D.
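A small numerical sketch of Equation (8.10), again with hypothetical data: for a single balance f1 = f2 + f3, the code below enumerates the measurement sets affordable within a budget c_T and keeps the one minimizing the weighted sum of reconciled variances (the weights a_j single out f1 and f3 as the key variables; the enumeration stands in for a proper MINLP search):

```python
import itertools
import numpy as np

A = np.array([[1.0, -1.0, -1.0]])   # single balance: f1 = f2 + f3
sigma = [2.0, 1.2, 1.5]             # hypothetical meter standard deviations
cost = [400, 300, 350]              # hypothetical meter costs
a = [1.0, 0.0, 1.0]                 # weights a_j: f1 and f3 are the key variables

def cov_of_estimates(measured):
    """Covariance of the reconciled flows for a measured subset (or None)."""
    _, _, Vt = np.linalg.svd(A)
    N = Vt[1:].T                                 # null-space basis, 3 x 2
    C = np.eye(3)[list(measured)]
    W = np.diag([1.0 / sigma[i] ** 2 for i in measured])
    M = N.T @ C.T @ W @ C @ N
    if np.linalg.matrix_rank(M) < 2:
        return None                              # flows not all estimable
    return N @ np.linalg.inv(M) @ N.T

def max_precision(budget):
    """Problem (8.10): min sum_j a_j sigma_j^2(q) s.t. total cost <= budget."""
    best = None
    for r in range(1, 4):
        for m in itertools.combinations(range(3), r):
            if sum(cost[i] for i in m) > budget:
                continue
            S = cov_of_estimates(m)
            if S is None:
                continue
            obj = sum(a[j] * S[j, j] for j in range(3))
            if best is None or obj < best[0]:
                best = (obj, m)
    return best

print(max_precision(1050))   # generous budget: all three meters fit
print(max_precision(700))    # tight budget forces a cheaper, less precise set
```

Tightening the budget drops a meter and raises the weighted variance, which is exactly the cost–precision trade-off the duality result below formalizes.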

RELATION BETWEEN SENSOR NETWORK MODELS

In this section, a mathematical connection between the maximum precision model, given by Equation (8.10), and the minimum cost model, given by Equation (8.5), is presented. We first modify the generalized maximum precision model by adding upper bounds on precision:

$$
\begin{aligned}
\min\ & \sum_{j \in M_P} a_j \sigma_j^2(q) \\
\text{s.t.}\ & \sum_{i \in M_1} c_i q_i \le c_T \\
& \sigma_j(q) \le \sigma_j^* \qquad \forall j \in M_P \\
& q_i \in \{0, 1\} \qquad \forall i \in M_1
\end{aligned}
\tag{8.13}
$$

The added constraints are trivial if these upper bounds are properly selected. Similarly, we modify the minimum cost model by adding a trivial constraint consisting of the weighted sum of the existing precision constraints (σ_j(q) ≤ σ_j*):

$$
\begin{aligned}
\min\ & \sum_{i \in M_1} c_i q_i \\
\text{s.t.}\ & \sum_{j \in M_P} a_j \sigma_j^2 \le \sum_{j \in M_P} a_j (\sigma_j^*)^2 \\
& \sigma_j(q) \le \sigma_j^* \qquad \forall j \in M_P \\
& q_i \in \{0, 1\} \qquad \forall i \in M_1
\end{aligned}
\tag{8.14}
$$

We are now in a position to show that the minimum cost model is the dual of the maximum precision model in the Tuy sense. This duality was established in general by Tuy (1987) and a short version of it is reproduced next.


Tuy Duality Theorem

Consider the problems

$$
P_\beta:\ \inf\,\{\, f(x):\ x \in V,\ g(x) \ge \beta \,\}
\qquad
Q_\alpha:\ \sup\,\{\, g(x):\ x \in V,\ f(x) \le \alpha \,\}
$$

where V is an arbitrary set in R^n, f: R^n → R and g: R^n → R are two arbitrary functions, and α and β are two real numbers. Then, if the solutions of both Q_α and P_β are bounded,

$$
\alpha \le \min P_\beta \iff \beta \ge \max Q_\alpha
\tag{8.15}
$$

Duality in Sensor Network Design

Applying the Tuy Duality Theorem, we obtain (Bagajewicz and Sánchez, 1999):

$$
c_T = \alpha \le \min
\left\{
\sum_{i \in M_1} c_i q_i :\;
-\sum_{j \in M_P} a_j \sigma_j^2 \ge -\sum_{j \in M_P} a_j (\sigma_j^*)^2 = \beta,\;
\sigma_j(q) \le \sigma_j^*\ \forall j \in M_P,\;
q_i \in \{0, 1\}\ \forall i \in M_1
\right\}
$$

$$
\iff\quad
\beta = -\sum_{j \in M_P} a_j (\sigma_j^*)^2 \ge \max
\left\{
-\sum_{j \in M_P} a_j \sigma_j^2 :\;
\sum_{i \in M_1} c_i q_i \le c_T = \alpha,\;
\sigma_j(q) \le \sigma_j^*\ \forall j \in M_P,\;
q_i \in \{0, 1\}\ \forall i \in M_1
\right\}
\tag{8.16}
$$

This implies, in simple terms, that:

- If the bound on cost (c_T) in the maximum precision model is smaller than the optimum cost obtained from the minimum cost model, then the weighted sum of variances in the maximum precision model does not reach its maximum possible value, given by the weighted sum of bounds.
- If the optimum value of the weighted sum of variances obtained in the maximum precision model is smaller than the weighted sum of bounds, then the solution of the minimum cost model is larger than the maximum cost used in the maximum precision model.
- When the minimum cost obtained from the minimum cost model is used as an upper bound on cost in the maximum precision model, then the weighted sum of variances is equal to the weighted sum of bounds. This also implies


that the weighted-sum constraint is binding in the minimum cost model. In other words, the solution of one problem is a solution of the other and vice versa.

SOLUTION PROCEDURES FOR LINEAR SYSTEMS

Consider now the tree-searching algorithm outlined in Chapter 6 (Figure 6.4). It was stated that, once a node becomes feasible, the enumeration of the rest of the branches of that node is not necessary, because the addition of new instrumentation will only increase cost. However, cost increases from one level to the next only if a single type of instrument is used (n_i^m = 1). In that case, the first feasible node in each branch is the one with the lowest cost in the tree it spans. If more than one type of instrument is used (n_i^m > 1), then the cost of each node has a lower bound of

$$
\underline{C}(q) = \sum_{\forall i} \underline{c}_i q_i
\tag{8.17}
$$

where $\underline{c}_i = \min_{\forall k} \{c_{i,k}\}$ and $q_i = \max_{\forall k} \{p_{i,k}\}$. This property is used below in the branching stopping criterion.

Stopping Criteria

In the case where more than one instrument is available for each stream (n_i^m > 1), it is possible that some nodes belonging to the tree spanned by a feasible node are cheaper. If the solution corresponding to the feasible node has a cost larger than its lower bound, a cheaper solution can only exist within a few levels below this node. This number of levels is given by the nearest lower integer of (C_b − C̲(q))/c̃, where c̃ is the cost of the cheapest instrument and C_b is the cost of the current node. Thus, after a feasible node is obtained, the maximum number of additional levels to explore is determined. The stopping criterion is, therefore, to stop after the maximum level of each feasible branch is reached and the node is evaluated.

In the case where n_i^m > 1, the enumeration can also be based on developing a tree for p, rather than for q. In that case, the simple criterion of stopping at the first feasible node can be used. However, the tree will contain more nodes at each level, and, therefore, it will be larger.

This stopping scheme prevents the enumeration procedure from being exhaustive and impractical. However, it is still not an efficient procedure for large systems. Because the focus of early articles on cost models was the conceptual development of the problem, numerically efficient schemes were not investigated, and this aspect is still a matter of research.
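The level count in this stopping criterion is a one-line computation. With hypothetical numbers (a feasible node of cost 3100, lower bound 2600 and cheapest instrument 300), only one more level needs to be explored:

```python
import math

def extra_levels(node_cost, lower_bound, cheapest):
    """Nearest lower integer of (C_b - C_lower(q)) / c_tilde: how many
    additional tree levels below a feasible node may hide a cheaper design."""
    return math.floor((node_cost - lower_bound) / cheapest)

print(extra_levels(3100, 2600, 300))
```

When the node cost already equals its lower bound, the count is zero and the branch can be closed immediately.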


PARAMETER ESTIMATION IN NONLINEAR SYSTEMS

Considerable attention is given today to the issue of parameter estimation, especially in the context of the increasing popularity of on-line optimization, which requires these parameter values to tune a model. Several authors have addressed the problem of parameter estimation. One of the early articles (Britt and Luecke, 1973) proposed the use of the maximum likelihood principle to obtain parameters in implicit models. Reilly and Patino-Leal (1981) initiated a line of work that bases parameter estimation on linearization, and Kim et al. (1997) proposed the use of nonlinear programming. The effect of data reconciliation and gross error detection on parameter estimation was analyzed by MacDonald and Howat (1988), Serth et al. (1993) and Pages et al. (1994). The optimization of the existing resources allocated to the sensors for improving the accuracy of a parameter was attempted by Alhéritière et al. (1997). It was reported that this method was successfully applied to a crude distillation unit (Alhéritière et al., 1998a,b).

Among the approaches based on linear algebra, Kretsovalis and Mah (1987) proposed a combinatorial search based on the effect of the variance of measurements on the precision of reconciled values. Tjoa and Biegler (1991) explored methods for the estimation of parameters in differential-algebraic equation systems. However, no method to select measurements for such systems has been proposed. Finally, Krishnan et al. (1992a,b) presented a strategy of sensor network design to satisfy the needs of on-line optimization, and Loeblein and Perkins (1998) discussed the economics of this issue in light of the needs of on-line optimization systems.

The method proposed by Krishnan et al. (1992a) relies on a screening procedure that involves three steps:

(1) A first step performs a structural analysis (singular value decomposition) that disregards measurements with little or no effect on the parameters.
(2) A second step disregards measurements that have an insignificant effect on the axis lengths of the confidence region of the parameter estimates.
(3) The last step determines the interaction between the parameter estimates by calculating a covariance matrix (Beck and Arnold, 1977). If the off-diagonal elements are too large, then the parameters are highly interactive, and, therefore, any problem with the set of measurements that affects one parameter will also affect the other.

The "best" set of measurements will have a small confidence region and lead to low interaction between the parameters. Unfortunately, this method does not take cost into account and does not offer a systematic procedure to make a final selection of the "best" set. Notwithstanding its importance, the method is not discussed further in this book. Instead, attention is turned towards the minimum cost model.
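The first and third screening steps can be sketched as follows. The sensitivity matrix G of the measurements with respect to two parameters is hypothetical, the row-norm ranking is a simplified stand-in for the SVD-based structural analysis, and the middle (confidence-region) step is omitted:

```python
import numpy as np

# Hypothetical sensitivities of 6 measurements to 2 parameters.
G = np.array([[0.9, 0.1],
              [0.8, 0.0],
              [0.0, 0.7],
              [1e-4, 1e-3],   # nearly insensitive measurement
              [0.5, 0.5],
              [1e-5, 1e-5]])  # nearly insensitive measurement

# Step 1 (simplified): discard measurements with negligible effect
# on the parameters.
scores = np.linalg.norm(G, axis=1)
keep = [i for i, s in enumerate(scores) if s > 1e-2 * scores.max()]

# Step 3: parameter covariance from the retained measurements (unit
# measurement variances assumed) and the resulting interaction measure.
Gk = G[keep]
P = np.linalg.inv(Gk.T @ Gk)
interaction = abs(P[0, 1]) / np.sqrt(P[0, 0] * P[1, 1])
print(keep, round(interaction, 3))
```

A large value of `interaction` (close to one) would indicate highly interacting parameter estimates, the situation the screening tries to avoid.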


Consider a given process whose steady-state operation is described by the nonlinear algebraic system of equations f(z) = 0, where z contains the vector x of state variables and the vector θ of process parameters:

$$
z = \begin{bmatrix} x \\ \theta \end{bmatrix}
\tag{8.18}
$$

In the design stage of a given process, nominal values x_0 are available such that they are consistent with mass and energy balances. As was pointed out in Chapter 5, the model can be linearized around its expected operation point so that the following linear system is obtained:

$$
J z \cong d
\tag{8.19}
$$

where matrix J represents the Jacobian of f(z) around z_0 and d is the corresponding constant vector. Matrix J may be partitioned into submatrices related to the vectors of measured and unmeasured variables.

Precision of Estimates

The observability of unmeasured variables is determined through a variable classification procedure that categorizes them into observable and unobservable variables. Several strategies for this classification were outlined in Chapter 4. If the observability analysis indicates that all parameters in M_P are observable, then the set of proposed instruments represented by q is feasible, and the following step, the estimation of the accuracy of the parameters, can be undertaken. Expressions that allow the estimation of unmeasured state variables and parameters in terms of reconciled measurements are obtained from the observability analysis, as described in Chapter 5.

If data reconciliation procedures are not applied, then, in the presence of redundant equations, there are several different sets of nonredundant measurements that can be used to obtain the parameter. In the absence of information about how good each of these sets is, one can only resort to using the set that provides the best precision, or resort to averaging. No option is, however, better than straight data reconciliation, especially because data reconciliation is always accompanied by gross error detection.

Example 8.2

We now return to the example of a single flash unit introduced in Chapter 5 through Equations (5.108) through (5.111) and Table 5.3, for which it is desired to estimate the vaporization efficiency η_i. For simplicity, we will assume that all


components have the same vaporization efficiency η, that is, Equation (5.111) is rewritten as follows:

$$
y_{i3} = \eta\, y_{i2}\, P_i^{(sat)} / P
\tag{8.20}
$$

Assume the vector z is given by:

$$
z^T = [\,F_1\ y_{11}\ y_{21}\ y_{31}\ F_2\ y_{12}\ y_{22}\ y_{32}\ F_3\ y_{13}\ y_{23}\ y_{33}\ P\ \eta\,]
\tag{8.21}
$$

Then, the corresponding matrices (J and d) of the model linearized around the nominal operation point are the following. The columns of J correspond to the variables in the order of Equation (8.21); blank entries are zeros, the first three rows are the component balances, the next three are the normalization equations and the last three are the equilibrium relations:

$$
J =
\begin{bmatrix}
0.2 & 100 & & & -0.167 & -50.5 & & & -0.233 & -49.5 & & & & \\
0.5 & & 100 & & -0.563 & & -50.5 & & -0.436 & & -49.5 & & & \\
0.3 & & & 100 & -0.270 & & & -50.5 & -0.331 & & & -49.5 & & \\
 & 1 & 1 & 1 & & & & & & & & & & \\
 & & & & & 1 & 1 & 1 & & & & & & \\
 & & & & & & & & & 1 & 1 & 1 & & \\
 & & & & & -1.395 & & & & 1 & & & 1\times10^{-4} & -0.245 \\
 & & & & & & -0.77 & & & & 1 & & 1\times10^{-4} & -0.458 \\
 & & & & & & & -1.227 & & & & 1 & 1\times10^{-4} & -0.348
\end{bmatrix}
\tag{8.22}
$$

$$
d^T = [\,0.033\ \ -0.0135\ \ -0.0195\ \ 1.\ \ 1.\ \ 1.\ \ 0.\ \ 0.\ \ 0.\,]
\tag{8.23}
$$
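A linearization of the form (8.19) can be produced numerically by finite differences. The sketch below does this for a toy two-equation model (a balance and a bilinear parameter relation), not for the flash model above; J and d are formed as in Equations (8.22)–(8.23), with d = J z0 − f(z0):

```python
import numpy as np

def linearize(f, z0, eps=1e-6):
    """Finite-difference Jacobian J of f around z0, and d = J z0 - f(z0),
    so that f(z) = 0 becomes the linear system J z ~ d near z0."""
    f0 = np.asarray(f(z0), dtype=float)
    J = np.zeros((f0.size, z0.size))
    for k in range(z0.size):
        z = z0.copy()
        z[k] += eps
        J[:, k] = (np.asarray(f(z), dtype=float) - f0) / eps
    return J, J @ z0 - f0

# Toy model: one balance f1 - f2 - f3 = 0 and one bilinear
# parameter relation f3 - theta * f2 = 0.
def model(z):
    f1, f2, f3, theta = z
    return [f1 - f2 - f3, f3 - theta * f2]

J, d = linearize(model, np.array([100.0, 40.0, 60.0, 1.5]))
print(J)
print(d)
```

Note how the parameter θ appears as just another column of J, which is what allows the precision of its estimate to be computed with the same reconciliation machinery used for the state variables.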

Assuming all flow rates, all compositions and the pressure are measured, the best precision is obtained. Using Q-R orthogonal factorizations for variable classification (Chapter 5), the precision of the estimate of parameter η is obtained as a function of the measured variables. Table 8.4 shows the set of available instruments, together with their costs and standard deviations. Table 8.5 presents the results for the minimum cost model when it is solved using different bounds for the standard deviation of the parameter, and Table 8.6 presents results for the maximum precision model for different bounds on the standard deviation of the parameter and on total cost. As the results indicate, instrumentation cost increases with precision requirements. Sometimes, no feasible set is available when cost constraints are imposed. Two examples (cases 1 and 2) are included where the minimum cost from the minimum cost model is used as a bound for the maximum precision model; both models were run with the same bound for the parameter standard deviation, and they produce the same results.

We now illustrate the case of multiple parameter estimation. Consider the same flash example, with a different equilibrium correction parameter for each component, that is, y_i3 = η_i y_i2 P_i^(sat)/P. The minimum cost model was run using the maximum variances indicated in Table 8.7, where the optimal solutions are presented. The optimal cost of instrumentation increases with the precision

TABLE 8.4. Set of Possible Instruments.

  Measured Variable   Instrument Cost   Instrument Standard Deviation
  F1                  250               3.0
  y11                 700               0.015
  y21                 700               0.015
  y31                 700               0.015
  F2                  250               1.515
  y12                 700               0.01
  y22                 700               0.01
  y32                 700               0.01
  F3                  300               1.418
  y13                 800               0.01
  y23                 800               0.01
  y33                 800               0.01
  P                   100               14.0

TABLE 8.5. Results for the Minimum Cost Model.

  Case No.   σ*       σ          Cost   Optimal Set
  1          0.05     0.00865    1600   y12, y33, P
  2          0.007    0.005896   1750   F1, y12, y22, P
  3          0.006    0.005896   1750   F1, y12, y22, P
  4          0.0058   0.00551    2300   y22, y32, y13, P
  5          0.005    0.004982   2900   F1, F2, y22, y13, y23, P

TABLE 8.6. Results for the Maximum Precision Model.

  Case No.   σ*       cT*    σ          Cost   Set
  1          0.05     1600   0.00865    1600   y12, y33, P
  2          0.007    1750   0.005896   1750   F1, y12, y22, P
  3          0.0058   3600   0.0048     3600   F1, y21, F2, y22, y13, y23, P
  4          0.0055   1750   —          —      —
  5          0.005    2500   —          —      —
  6          0.005    3000   0.004982   2950   F2, y22, F3, y13, y23, P

TABLE 8.7. Results for the Minimum Cost Model.

  Case No.   ση1*   ση2*   ση3*   ση1      ση2      ση3      Cost   Optimal Set
  1          0.1    0.2    0.09   0.0990   0.0265   0.0520   3100   y22, y32, y23, y33, P
  2          0.07   0.09   0.09   0.0660   0.0288   0.0486   3900   y12, y32, y13, y23, y33, P
  3          0.06   0.06   0.06   0.0572   0.0218   0.0425   4600   y12, y22, y32, y13, y23, y33, P
  4          0.04   0.05   0.05   —        —        —        —      —


TABLE 8.8. Results for the Maximum Precision Model.

  Case No.   ση1*   ση2*   ση3*   cT     ση1       ση2        ση3
  1          —      —      —      3100   0.0990    0.0265     0.05209
  2          0.07   0.09   0.09   3900   0.0660    0.0288     0.0486
  3          0.06   0.06   0.06   4600   0.0573    0.02180    0.04258
  4          0.06   0.06   0.06   7000   0.0571    0.02179    0.04254
  5          0.5    0.5    0.5    4000   0.07008   0.026572   0.073572
  6          0.5    0.5    0.5    3100   0.0990    0.0265     0.05209
  7          0.06   0.06   0.06   4600   0.0573    0.02180    0.04258
  8          0.06   0.03   0.05   6500   0.05713   0.02178    0.04256
  9          0.05   0.03   0.05   9000   —         —          —

of the estimates of the parameters but, for the last case, no set of available instruments can fulfill the precision requirements.

Optimal designs obtained by applying maximum precision models are presented in Tables 8.8 and 8.9. For cases 1–4, all weights in the objective function are equal to one. In case 1, the bounds on the standard deviations are removed. In cases 2–4, higher costs and higher precision of the parameters are allowed; this leads to a lower weighted sum of parameter variances. For cases 5–8, the weights in the objective function are [5 1000 1]. The same optimal set is obtained for cases 1 and 6 and for cases 3 and 7 of Table 8.8; they have the same feasible region and differ only in the weight values. Higher precision is required in case 8, and an infeasible situation is presented in case 9. Case 2 and cases 3 and 7 of Tables 8.7 and 8.8 are examples of duality.

From the results of the above examples, some conclusions and recommendations can be made. If bounds on the standard deviations of the parameters are available, then minimum cost models are a better alternative, because the selection of weights for the objective function can be avoided. If, in turn, bounds

TABLE 8.9. Results for the Maximum Precision Model.

  Case No.   Σi ai σηi²   c      Optimal Set
  1          0.01326      3100   y22, y32, y23, y33, P
  2          0.00750      3900   y12, y32, y13, y23, y33, P
  3          0.00557      4600   y12, y22, y32, y13, y23, y33, P
  4          0.00555      6800   F1, y11, y31, F2, y12, y22, y32, F3, y13, y23, y33, P
  5          0.736044     3600   F1, F2, y12, y22, y13, y23, P
  6          0.757837     3100   y22, y32, y23, y33, P
  7          0.493668     4600   y12, y22, y32, y13, y23, y33, P
  8          0.492814     6500   F1, y11, y21, F2, y12, y22, y32, y13, y23, y33, P
  9          —            —      —


are not available, maximum precision models constrained only by cost can be used, using for example all weights equal to one. Different weights may be selected to reflect the relative importance of the precision of the parameters. However, instead of this, after the maximum precision model is run with proposed weights, one can use the minimum cost model and perform a more meaningful sensitivity analysis in terms of cost, by using values suggested by the result of the maximum precision model as bounds on precision.

NOMENCLATURE

C(.):     cost function
C̲(.):     lower bound on cost
c̃:        cost of the cheapest instrument
c_i:      cost of sensor i
c_T:      bound on total cost
E_i:      estimability of variable i
E_k*:     threshold of estimability
m_j:      maximum number of sensors to measure variable j
M_1:      set of variables where sensors can be placed
M_P:      set of key variables for precision
n_M:      number of measurements
n_i^m:    number of instruments available to measure variable i
P:        pressure
p_{i,k}:  binary variable indicating whether sensor k is located (1) or not (0) in variable i
q:        binary vector indicating whether sensors are located (1) or not (0) in variables
S_i:      stream i
U_i:      unit i
x:        vector of state variables
x̃:        estimates of state variables x
y:        molar fractions
z:        vector of state variables and parameters

Greek Letters

η:    vaporization efficiency
σ:    precision (standard deviation)
σ*:   precision threshold
σ̃:    precision of estimates
θ:    vector of parameters

190

DESIGN OF PRECISE SENSOR NETWORKS

REFERENCES

Alhéritière C., N. Thornhill, S. Fraser and M. Knight. Evaluation of the Contribution of Refinery Process Data to Performance Measures. AIChE Annual Meeting, Los Angeles (1997).
Alhéritière C., N. Thornhill, S. Fraser and M. Knight. Cost Benefit Analysis of Refinery Process Data: Case Study. Comp. & Chem. Eng., 22, Suppl., pp. S1031–S1034 (1998a).
Alhéritière C., N. Thornhill, S. Fraser and M. Knight. Cost Benefit Analysis of Process Data in Plant Performance Analysis. AIChE Annual Meeting, Miami (1998b).
Bagajewicz M. Design and Retrofit of Sensor Networks in Process Plants. AIChE J., 43, 9, pp. 2300–2306 (1997).
Bagajewicz M. and M. Sánchez. Duality of Sensor Network Design Models for Parameter Estimation. AIChE J., 45, 3, pp. 661–664 (1999).
Beck J. V. and K. J. Arnold. Parameter Estimation for Engineering and Science. Wiley, New York (1977).
Britt H. I. and R. H. Luecke. The Estimation of Parameters in Nonlinear Implicit Models. Technometrics, 15, 2, pp. 233–247 (1973).
Chmielewski D., T. E. Palmer and V. Manousiouthakis. Cost Optimal Retrofit of Sensor Networks with Loss Estimation Accuracy. AIChE Annual Meeting, Dallas (1999).
Kim I., M. S. Kang, S. Park and T. F. Edgar. Robust Data Reconciliation and Gross Error Detection: The Modified MIMT Using NLP. Comp. & Chem. Eng., 21, 7, pp. 775–782 (1997).
Kretsovalis A. and R. S. H. Mah. Observability and Redundancy Classification in Multicomponent Process Networks. AIChE J., 33, pp. 70–82 (1987).
Krishnan S., G. Barton and J. Perkins. Robust Parameter Estimation in On-line Optimization. Part I. Methodology and Simulated Case Study. Comp. & Chem. Eng., 16, pp. 545–562 (1992a).
Krishnan S., G. Barton and J. Perkins. Robust Parameter Estimation in On-line Optimization. Part II. Application to an Industrial Process. Comp. & Chem. Eng., 17, pp. 663–669 (1992b).
Loeblein C. and J. D. Perkins. Economic Analysis of Different Structures of On-Line Process Optimization Systems. Comp. & Chem. Eng., 22, 9, pp. 1257–1269 (1998).
MacDonald R. and C. Howat. Data Reconciliation and Parameter Estimation in Plant Performance Analysis. AIChE J., 34, 1 (1988).
Madron F. Process Plant Performance, Measurement Data Processing for Optimization and Retrofits. Ellis Horwood, West Sussex, England (1992).
Madron F. and V. Veverka. Optimal Selection of Measuring Points in Complex Plants by Linear Models. AIChE J., 38, 2, pp. 227 (1992).
Pages A., H. Pingaud, M. Meyer and X. Joulia. A Strategy for Simultaneous Data Reconciliation and Parameter Estimation on Process Flowsheets. Comp. & Chem. Eng., 18, Suppl., pp. S223–S227 (1994).
Reilly P. M. and H. Patino-Leal. A Bayesian Study of the Error-in-Variables Model. Technometrics, 23, 3, pp. 221 (1981).
Serth R., B. Srinkanth and S. Maronga. Gross Error Detection and Stage Efficiency Estimation in a Separation Process. AIChE J., 39, pp. 1726 (1993).
Tjoa I. B. and L. T. Biegler. Simultaneous Solution and Optimization Strategies for Parameter Estimation of Differential-Algebraic Equation Systems. Ind. Eng. Chem. Res., 30, pp. 376–385 (1991).
Tuy H. Convex Programs with an Additional Reverse Convex Constraint. JOTA, 52, pp. 463–486 (1987).

CHAPTER 9

Precision Upgrade of Sensor Networks

INTRODUCTION

In this chapter, the issues related to upgrading instrumentation to improve the accuracy of the estimation of key variables are discussed, and specific models are presented. As pressure mounts to obtain more reliable and accurate estimates of variables and parameters, the use of data reconciliation techniques is the first level of response. In many cases, however, the existing redundancy is not enough to guarantee the required level of accuracy. Thus, one is faced with an existing set of sensors to which new sensors must be added to provide the desired accuracy.

UPGRADE OPTIONS

There are three possible ways of performing the upgrade of a sensor network:

(1) addition of new instruments
(2) substitution of existing instruments by new ones
(3) relocation of existing instruments

Typically, the addition of new instruments has been the first response considered. However, substitution and/or relocation are options that are sometimes substantially cheaper. For example, the substitution of thermocouples by thermoresistances, or their relocation, is comparatively inexpensive. It is in the case of laboratory analysis, however, where the options of substitution and/or relocation should be most strongly considered.


COST BENEFIT ANALYSIS

Because upgrading requires capital expenditure, it must be done on the basis of a cost benefit analysis. The costs of the instrumentation are straightforward to obtain; the benefits, however, need to be quantified somehow. In the case of data accuracy needed for accounting purposes, the benefit can be quantified as the decrease in revenue lost due to imprecise data. In simple terms, the larger the uncertainty in the assessment of the amount of raw materials purchased and/or products sold, the larger the probability of lost revenue. Thus, every percent of accuracy in these flows can be assigned a monetary value.

In the case of monitoring and parameter estimation for on-line optimization, an economic measure can also be developed. In the case of monitoring, one can associate a revenue loss with product that is not in specification. In the case of on-line optimization, Loeblein and Perkins (1998) discuss measures of the loss of economic performance due to offset from the correct optimum caused by plant-model mismatches. In other cases, industry looks less at the benefit in monetary terms and simply plots the increased precision as a function of investment (Alhéritière et al., 1998b). Although this approach is intuitive, its value lies in the possibility of visualizing the effect of instrumentation cost.

UPGRADE MODELS BASED ON ADDITION OF SENSORS

The upgrading of a sensor network by the simple addition of instrumentation is considered first. This has to be done at minimum cost while reaching the precision goals for the key variables. This task can be accomplished by using the following minimum cost model (Bagajewicz and Sánchez, 2000):

$$
\begin{aligned}
\min\ & \sum_{i \in M_1} \sum_{k \in K_i} c_{ik}\, q_{ik}^N \\
\text{s.t.}\ & \sigma_j(q) \le \sigma_j^* \qquad \forall j \in M_P \\
& \sum_{k \in K_i} q_{ik}^N + N_i \le N_i^* \qquad \forall i \in M_1 \\
& q_{ik}^N \in \{0, 1\} \qquad \forall i \in M_1,\ \forall k \in K_i
\end{aligned}
\tag{9.1}
$$

where

M_P = set of all variables for which precision constraints are imposed.
M_1 = set of all variables that can be measured.
K_i = set of new sensors available to measure variable i.
N_i = number of existing sensors measuring variable i.
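As a minimal sketch of the trade-off captured by this model, consider a single key variable whose precision is improved purely through hardware redundancy, so that the lumped standard deviation follows the inverse-variance combination of Chapter 7. The full model couples all variables through reconciliation; the costs, precisions and target below are hypothetical:

```python
import itertools

def upgraded_sigma(existing, new):
    """Std dev of the lumped measurement when several sensors measure the
    same variable (inverse-variance combination, cf. Chapter 7)."""
    return sum(1.0 / s ** 2 for s in existing + list(new)) ** -0.5

def cheapest_addition(existing, candidates, target, n_max):
    """Smallest-cost subset of candidate sensors (cost, sigma) meeting the
    precision target without exceeding n_max sensors on the variable."""
    best = None
    for r in range(len(candidates) + 1):
        for subset in itertools.combinations(candidates, r):
            if len(existing) + r > n_max:
                continue
            if upgraded_sigma(existing, [s for _, s in subset]) > target:
                continue
            c = sum(c for c, _ in subset)
            if best is None or c < best[0]:
                best = (c, subset)
    return best

# One existing 3% meter (sigma = 3.0); candidates: another 3% at 800,
# a 2% at 1500 and a 1% at 2500 (hypothetical costs).
best = cheapest_addition([3.0], [(800, 3.0), (1500, 2.0), (2500, 1.0)],
                         target=1.6, n_max=3)
print(best)
```

Here adding the single best instrument is not the cheapest option: two cheaper meters combine to meet the target at lower total cost, which is the kind of solution the binary formulation (9.1) finds automatically.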


cik = cost of the k-new sensor for measuring variable i. Ni∗ = maximum number of sensors that are allowed to be used to measure variable i. q N = vector of binary variables for new sensors. The first constraint establishes a bound on precision, which is a function of the set of fixed existing sensors and new instrumentation. The second constraint establishes an upper bound on the number of sensors used to measure each variable. This number is usually one for the case of flowrates, but it can be larger in the case of laboratory measurements of concentrations. In this case, these constraints can be lumped in one constraint for all concentration measurements to express the overall limitation of the laboratory on a daily (or fraction thereof ) basis. When maximum precision is requested and cost is a constraint, then a maximum precision model of the following type can be used: Min

X

            

a j σ j2 (q N )

j

s.t. XX

cik qikN ≤ cT

i∈M1 k∈K i

X

qikN

+ Ni ≤

Ni∗

∀i ∈ M1

k∈K i

qikN ∈ {0, 1}

∀i ∈ M1 , ∀k ∈ K i

           

(9.2)
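To make the structure of these models concrete, the following sketch enumerates all candidate binary vectors q^N for a tiny instance and keeps the cheapest one meeting the precision bound. The candidate list borrows costs and standard deviations from Table 9.1, but the precision function is an illustrative stand-in (a simple parallel-variance rule), not the data reconciliation computation of Equation (7.28) used in the book.

```python
from itertools import product

# Candidate new sensors: (variable, cost, standard deviation).
# Costs/deviations loosely follow Table 9.1; the precision model below is a toy.
candidates = [("F1", 250, 3.0), ("y22", 700, 0.01),
              ("y13", 800, 0.01), ("y23", 800, 0.01)]

def parameter_sigma(selected):
    # Stand-in for sigma_j(q): combine the existing-network variance
    # (0.00866, Table 9.2 case 1) with each new sensor's variance in parallel.
    inv = 1.0 / 0.00866 ** 2
    for _, _, s in selected:
        inv += 1.0 / s ** 2
    return (1.0 / inv) ** 0.5

def min_cost_upgrade(sigma_max):
    """Minimum cost model (9.1) by brute force: cheapest q^N with sigma <= sigma*."""
    best = None
    for mask in product([0, 1], repeat=len(candidates)):   # all q^N vectors
        chosen = [c for c, q in zip(candidates, mask) if q]
        if parameter_sigma(chosen) <= sigma_max:           # precision constraint
            cost = sum(c for _, c, _ in chosen)
            if best is None or cost < best[0]:
                best = (cost, [name for name, _, _ in chosen])
    return best

print(min_cost_upgrade(0.005))  # (2300, ['y22', 'y13', 'y23'])
```

Exhaustive enumeration is only workable for a handful of candidates; for realistic networks the MINLP formulation above is solved directly.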

where c_T is the bound on the capital expenditure. The two models are equivalent in the Tuy sense (Chapter 8).

Alhéritière et al. (1997, 1998a) presented a model without binary variables that can provide a good first approximation to the upgrade. However, this model has some limitations, because it can render fractional instruments and relates cost to precision in a continuous manner. These limitations were discussed in Chapter 8. Later, Alhéritière et al. (1998b) addressed these limitations and presented an MINLP model.

Example 9.1

In Chapter 5 a simplified flash tank model and nominal variable values from Van Winkle (1967) were presented. The model was also used to illustrate the design of accurate sensor networks in Chapter 8. In this case, a set of three initial instruments is installed on the unit. The measured variables are [y12, y33, P]. New instruments are necessary to satisfy precision requirements on the vaporization efficiency coefficient η. Measurements

PRECISION UPGRADE OF SENSOR NETWORKS

TABLE 9.1. Set of Possible Instruments for the Flash.

Measured Variable   Instrument Cost   Instrument Standard Deviation
F1                  250               3.0
y11                 700               0.015
y21                 700               0.015
y31                 700               0.015
F2                  250               1.515
y12                 700               0.01
y22                 700               0.01
y32                 700               0.01
F3                  300               1.418
y13                 800               0.01
y23                 800               0.01
y33                 800               0.01
P                   100               14.0

are obtained by installing instruments chosen from the set presented in Table 9.1. In this case, hardware redundancy is not considered, and only one type of instrument is available to measure each variable. The optimal set of additional instruments is determined by solving both the minimum cost model and the maximum precision model. Results from the models are provided in Tables 9.2 and 9.3, respectively. The standard deviation of the parameter is calculated by using Equation (7.28). In some cases, higher precision requirements are satisfied by incorporating a larger set of instruments at increased cost; in other cases, there is no feasible set of instruments that fulfills the precision constraints.

TABLE 9.2. Minimum Cost Optimal Solutions for Flash Sensor Upgrading.

Case   σ*       σ         Cost   Additional Optimal Set
1      0.2      0.00866   0      —
2      0.006    0.00574   1200   F1, F2, y22
3      0.005    0.00480   2550   F1, y22, y13, y23
4      0.0046   0.00459   4900   F1, y11, y21, F2, y22, y32, y13, y23
5      0.004    —         —      —

TABLE 9.3. Maximum Precision Optimal Solutions for Flash Sensor Upgrading.

Case   cT     σ*       σ         Cost   Additional Optimal Set
1      1700   ∞        0.0087    0      —
2      1200   0.006    0.00574   1200   F1, F2, y22
3      2550   0.005    0.00480   2550   F1, y22, y13, y23
4      3000   0.005    0.00474   3000   y22, y32, y13, y23
5      5500   0.0046   0.00459   5200   F1, y11, y21, F2, y22, y32, F3, y13, y23
6      7000   0.004    —         —      —

Example 9.2

Consider the industrial heat exchanger network of Figure 9.1. It corresponds to a set of heat exchangers where crude is heated up by using hot gas-oil coming from a column. In this case, the heat transfer coefficients for the heat exchangers are estimated using temperature and flow rate measurements. The existing instrumentation is given in Table 9.4, and data for the streams and the units are given in Tables 9.5 and 9.6. The standard deviations of the heat transfer coefficients calculated by using the installed set of instruments are [12.27, 2.96, 3.06]. To obtain these values, all redundant measurements have been used.

To enhance the precision of the parameters, new instruments should be added. In this example, hardware redundancy is considered. Furthermore, different types of new instruments are available to measure some temperatures. Data for the new instrumentation are presented in Table 9.7, where costs and standard deviations are shown. The maximum number of instruments allowed for measuring each variable is given in Table 9.8; a zero value indicates that the corresponding variable cannot be measured. Tables 9.9 and 9.10 present results for the upgrade problem using both types of models. When two instruments are available to measure a variable, the type of instrument is indicated in parentheses in the optimal solution set. The weights for the maximum precision model are assumed equal to one. Case 1 from Table 9.9 and case 3 from Table 9.10 illustrate the equivalence of the models.
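The parameter being estimated in this example can be made concrete: a heat transfer coefficient is backed out of flow and temperature measurements through the exchanger duty and the log-mean temperature difference. The sketch below uses hypothetical numbers; it does not reproduce the stream pairings of Figure 9.1.

```python
import math

# U is inferred from measured flows and temperatures:
# duty from the hot side, then U = Q / (A * FT * LMTD).
# All numbers here are hypothetical (NOT the streams of Figure 9.1).
def heat_transfer_coefficient(m_hot, cp_hot, th_in, th_out,
                              tc_in, tc_out, area, ft):
    q = m_hot * cp_hot * (th_in - th_out)        # hot-side duty, BTU/hr
    dt1, dt2 = th_in - tc_out, th_out - tc_in    # counter-current end ΔT's
    lmtd = (dt1 - dt2) / math.log(dt1 / dt2)     # log-mean temperature difference
    return q / (area * ft * lmtd)                # BTU/(hr·ft²·°F)

U = heat_transfer_coefficient(200_000, 0.65, 540.0, 450.0,
                              260.0, 400.0, 500.0, 0.997)
print(round(U, 1))  # 143.3
```

Because U is a nonlinear function of several measurements, its standard deviation depends on the precision of every flow and temperature sensor involved, which is why adding or upgrading sensors changes σ_U.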

FIGURE 9.1. Industrial heat exchanger network.

TABLE 9.4. Existing Instrumentation for the Heat Exchanger Network.

Stream   No. of Flowmeters   Std. Dev. of Flowmeters   No. of Thermocouples   Std. Dev. of Thermocouples
S1       1                   3%                        1                      2°F
S3       —                   —                         1                      2°F
S4       —                   —                         1                      2°F
S5       1                   3%                        1                      2°F
S7       1                   3%                        1                      2°F
S9       —                   —                         1                      2°F

TABLE 9.5. Flow Rate Information for Figure 9.1.

Stream   Flowrate (lb/hr)   Temperature (°F)
S1       224,677            542.8
S2       224,677            516.3
S3       224,677            448.3
S4       224,677            402.2
S5       217,019            307.6
S6       217,019            339.8
S7       398,008            191.2
S8       398,008            221.6
S9       398,008            266.9

TABLE 9.6. Unit Information for Figure 9.1.

Heat Exchanger   Area (ft²)   FT      Cph (BTU/lb·°F)   Cpc (BTU/lb·°F)
U1               500          0.997   0.6656            0.5689
U2               1100         0.991   0.6380            0.5415
U3               700          0.995   0.6095            0.52

TABLE 9.7. Availability of New Instrumentation.

         Flowmeters                    Temperature Sensors
Stream   Std. Deviation   Cost        Std. Deviation   Cost
1        3%               2250        2°F / 0.2°F      500 / 1500
2        3%               2250        2°F              500
3        3%               2250        2°F              500
4        3%               2250        2°F / 0.2°F      500 / 1500
5        3%               2250        2°F              500
6        3%               2250        2°F              500
7        3%               2250        2°F              500
8        3%               2250        2°F              500
9        3%               2250        2°F / 0.2°F      500 / 1500

TABLE 9.8. Maximum Number of Instruments for the Heat Exchanger Network.

Variable   N_i*      Variable   N_i*
F1         1         T1         2
F2         1         T2         1
F3         1         T3         1
F4         1         T4         2
F5         1         T5         1
F6         0         T6         1
F7         1         T7         1
F8         0         T8         0
F9         1         T9         2

Combinatorial Analysis

Even though the model presented above is general enough to cover all possibilities, we briefly review some early attempts to solve the problem for steady-state systems with only random measurement noise. Kretsovalis and Mah (1987) developed a combinatorial strategy to incorporate measurements into an observable system. Each time a new instrument i is added, the following objective function is minimized:

$$
R_i = a\,(c_0 + c_{ik}) + (H_0 + \Delta H_{ik}) \tag{9.3}
$$

In this function, a is a weighting factor, c_0 the cost of the existing instrumentation, c_ik the cost associated with instrument i placed at position k, H_0 a measure of the estimation error corresponding to the original structure, and ΔH_ik the change in the estimation error when instrument i is placed at position k. The estimation error H_0 is calculated as the trace of the variance matrix, that is, H_0 = tr(Q̃).

This model is equivalent to solving a slightly modified version of model (9.2), using a penalty function approach and in a sequential manner, that is, for the

TABLE 9.9. Results for the Minimum Cost Model.

Case   σ*U1   σ*U2   σ*U3   σU1      σU2      σU3      Cost   Optimal Set
1      4.0    4.0    4.0    3.6160   1.9681   2.7112   500    T6(1)
2      3.5    2.0    2.5    2.7746   1.6892   2.3833   1500   T2(1), T4(1), T6(1)
3      3.0    1.5    2.5    2.7230   1.4972   2.2844   6500   F2, F3, T2(1), T4(1), T6(1), T9(1);
                                                              or F2, F4, T2(1), T4(1), T6(1), T9(1);
                                                              or F3, F4, T2(1), T4(1), T6(1), T9(1)
4      3.5    2.0    2.0    —        —        —        —      —


TABLE 9.10. Results for the Maximum Precision Model.

Case   σ*U1   σ*U2   σ*U3   Cost   σU1      σU2      σU3      Σi ση_i²   Optimal Set(s)
1      —      —      —      6250   2.5379   1.5962   2.1341   13.4578    F2, T1(2), T2(1), T4(2), T6(1)
2      —      —      —      3000   2.6635   1.6854   2.2068   14.8048    T1(1), T2(1), T4(2), T6(1)
3      4.0    4.0    4.0    500    3.6160   1.9681   2.7112   24.2999    T6(1)
4      3.0    1.5    2.5    6500   2.7230   1.4972   2.2844   14.8749    F2, F3, T2(1), T4(1), T6(1), T9(1);
                                                                         or F2, F4, T2(1), T4(1), T6(1), T9(1);
                                                                         or F3, F4, T2(1), T4(1), T6(1), T9(1)
5      3.0    1.5    2.5    4500   —        —        —        —          —

addition of one instrument at a time. Indeed, consider adding to model (9.2) a constraint stating that the total number of instruments added is only one, with no upper bound on cost, that is, c_T → ∞. Then the second term of model (9.3) is the objective function of this modified version of model (9.2) with all weights equal to one. In turn, the first term is the penalty function consisting of the cost. We will not elaborate on this method further, because it obviously cannot guarantee optimality: at any given step, it may lead to the addition of a sensor that is not part of the optimal solution.

An improved combinatorial method for linear systems is to first consider the addition of one sensor. The result of this procedure is a sensor to be added and a cost c1. The minimum cost model, model (9.1), is recommended, but if bounds on precision are not known, model (9.2) can be used. In the next step, one can consider the addition of two instruments, resulting in a pair of new instruments (which does not necessarily include the one found when only one instrument was added) and a cost c2. Continuing in this manner with an increasing number of sensors, one can construct a cost-benefit table and choose how many instruments to add. This procedure is clearly less efficient than solving models (9.1) or (9.2) directly with MINLP techniques. However, one can readily take advantage of it when a quick evaluation is needed and solvers for models (9.1) or (9.2) are not available.

MODEL FOR RESOURCE REALLOCATION

In many cases, measurements can be easily transferred at no cost from one stream to another. This is the case for concentration measurements performed in the laboratory. Pressure gauges and thermocouples can also be transferred from one place to another at a relatively small cost. Flowmeters, however, are probably an exception.
Because one can consider that resource reallocation involves no cost, or choose to neglect it, the minimum cost model would reduce to a set of MINLP algebraic inequalities, that is, a feasibility problem. However,


even in the case where cost is not a matter of consideration, one would like to minimize the number of changes. A model presented by Bagajewicz and Sánchez (2000) is now derived. Let us introduce the binary variable u_{t,k,r}, which indicates that sensor k from variable t is relocated to variable r. This sensor is the kth element of the set K_t of sensors originally allocated to measure variable t. A model that minimizes the number of reallocations follows:

$$
\begin{aligned}
\min \quad & \sum_{t \in M_T} \sum_{k \in K_t} \sum_{r \in M_R} u_{t,k,r} \\
\text{s.t.} \quad & \sigma_j(u) \le \sigma_j^* \qquad \forall j \in M_P \\
& \sum_{r \in M_R} \sum_{k \in K_t} u_{t,k,r} \le u_t^T \qquad \forall t \in M_T \\
& \sum_{t \in M_T} \sum_{k \in K_t} u_{t,k,r} \le u_r^R \qquad \forall r \in M_R \\
& \sum_{t \in M_T} \sum_{k \in K_t} u_{t,k,i} + \left( N_i - \sum_{r \in M_R} \sum_{k \in K_i} u_{i,k,r} \right) \le N_i^* \qquad \forall i \in M_1 \\
& u_{t,k,r} \in \{0,1\} \qquad \forall t \in M_T,\ \forall r \in M_R,\ \forall k \in K_t
\end{aligned} \tag{9.4}
$$

where

N_i = number of sensors originally allocated to measure variable i, that is, the cardinality of K_i.
M_R = set of streams to which instruments from other places can be reallocated.
M_T = set of streams whose instruments can be reallocated.

In this model, bounds are imposed on:
(1) the maximum number of instruments that can be reallocated from variable t to other variables (u_t^T);
(2) the maximum number of instruments that can be reallocated from other variables to variable r (u_r^R);
(3) the total number of instruments allocated to each variable (N_i*). This last bound is needed because one may allow a large number of instruments to be reallocated to a certain variable r; such a change then has to be accompanied by the reallocation of some of the instruments from K_r to other variables, so that the total number of sensors remains limited.

The parameter standard deviation σ_j(u) is obtained by using Equation (7.28). As in the case of instrumentation upgrading, a maximum precision problem that is the dual (in the Tuy sense) of the minimum cost model can be defined


as follows:

$$
\begin{aligned}
\min \quad & \sum_{j \in M_P} a_j\, \sigma_j^2(u) \\
\text{s.t.} \quad & \sum_{t \in M_T} \sum_{k \in K_t} \sum_{r \in M_R} u_{t,k,r} \le N_r^* \\
& \sigma_j(u) \le \sigma_j^* \qquad \forall j \in M_P \\
& \sum_{r \in M_R} \sum_{k \in K_t} u_{t,k,r} \le u_t^T \qquad \forall t \in M_T \\
& \sum_{t \in M_T} \sum_{k \in K_t} u_{t,k,r} \le u_r^R \qquad \forall r \in M_R \\
& \sum_{t \in M_T} \sum_{k \in K_t} u_{t,k,i} + \left( N_i - \sum_{r \in M_R} \sum_{k \in K_i} u_{i,k,r} \right) \le N_i^* \qquad \forall i \in M_1 \\
& u_{t,k,r} \in \{0,1\} \qquad \forall t \in M_T,\ \forall r \in M_R,\ \forall k \in K_t
\end{aligned} \tag{9.5}
$$

This model:
• includes binary variables to handle hardware redundancy; therefore, it does not reallocate fractions of sensors;
• considers software redundancy through data reconciliation;
• does not commit to a specific set of sensors;
• does not include cost as a bound.

The last issue, cost, deserves more detailed discussion. It makes sense that the constraint on cost is superfluous, because the problem is the reallocation of existing resources, which already have a fixed cost. When the bounds are too tight, the set of equations becomes infeasible and the reallocation is not possible. When precision bounds are not known, model (9.5) can be used by setting the precision bounds to a large value. However, proper weights need to be used, and a satisfactory criterion to choose these weights is not yet available.

Example 9.3

Let us consider the reallocation of duplicate existing thermocouples as an attempt to fulfill precision requirements for the heat exchanger network presented in Example 9.2. The existing instrumentation is shown in Table 9.11; the standard deviations of the heat transfer coefficients with these sensors are [11.65, 2.87, 2.46]. The corresponding bounds are shown in Table 9.12 (flowmeters are not reallocated).
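A minimal sketch of how model (9.4) can be attacked by enumeration on a small instance: try move sets of increasing size and return the smallest size for which some set is feasible, together with all tied optima. The precision test below is a placeholder for σ_j(u) ≤ σ_j* computed via data reconciliation (Equation (7.28)), and the move list is illustrative.

```python
from itertools import combinations

# Allowed moves (t, k, r): sensor k of variable t relocated to variable r.
allowed_moves = [("T1", 2, "T2"), ("T4", 2, "T6"), ("T5", 2, "T8")]

def precision_ok(moves):
    # Placeholder rule standing in for sigma_j(u) <= sigma_j*:
    # here we simply pretend any two moves meet the bounds.
    return len(moves) >= 2

def min_reallocations(moves=allowed_moves):
    """Minimum-number-of-reallocations model (9.4) by exhaustive search."""
    for n in range(len(moves) + 1):                  # fewest moves first
        feasible = [c for c in combinations(moves, n) if precision_ok(c)]
        if feasible:
            return n, feasible                       # count plus all tied optima
    return None

n_moves, optima = min_reallocations()
print(n_moves, len(optima))  # 2 3
```

Returning all tied optima mirrors the tables of this example, where several alternative reallocation sets achieve the same minimum number of moves.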


TABLE 9.11. Existing Instrumentation for the Heat Exchanger Network.

Stream   No. of Flowmeters   Std. Dev. of Flowmeters   No. of Thermocouples   Std. Dev. of Thermocouples
S1       1                   3%                        2                      2°F / 0.2°F
S3       —                   —                         1                      2°F
S4       —                   —                         2                      2°F / 0.2°F
S5       1                   3%                        2                      2°F / 2°F
S7       1                   3%                        1                      2°F
S9       —                   —                         1                      2°F

Results are given in Tables 9.13, 9.14a and 9.14b. Note first that several alternative solutions exist for each case. The example shows that higher precision requirements may be fulfilled by increasing the number of reallocations of existing thermocouples. Furthermore, three cases are included in which the minimum number of reallocations from the minimum-number-of-reallocations model is used as a bound for the maximum precision model, with both models run with the same bounds on the standard deviations of the parameters. The same solution is obtained from both models in terms of the set of reallocations and the standard deviations of the parameters.

GENERALIZED MODEL FOR RESOURCE REALLOCATION AND UPGRADE

In this section a resource reallocation model based on cost minimization is presented. Switching from one sample to another in the laboratory may prompt changes of reagents, changes in sampling costs, etc. Changing the positions of thermocouples and pressure gauges also entails rewiring and recalibration costs. Finally, one may want to seriously consider reallocating flowmeters, although this may not be a likely outcome because of their relatively higher cost. In addition, these reallocation costs may exceed the cost of simply adding new instrumentation. Therefore, any reallocation and upgrade program should consider the trade-off between all these decisions. This trade-off

TABLE 9.12. Bounds for the Reallocation Problem.

         T1   T2   T3   T4   T5   T6   T7   T8   T9
u_t^T    1    0    0    1    1    0    0    0    0
u_r^R    0    2    1    0    0    1    1    1    1
N*       2    2    2    2    2    1    2    1    2


TABLE 9.13. Reallocation Model—Minimum Number of Reallocations Results for the Heat Exchanger Network.

Case   σ*U1   σ*U2   σ*U3
1      4.00   1.40   2.00
2      1.35   1.40   2.00
3      2.00   1.53   2.70
4      1.80   1.80   2.40

Optimal solutions — (σU1, σU2, σU3); No. of reallocations; optimal reallocations (t, k, r):
(3.1683, 1.5458, 2.6623); 1; (T1, 2, T2)
(1.9019, 1.5458, 2.2124); 1; (T5, 2, T2)
(1.3487, 1.5236, 2.2124); 2; (T1, 2, T6) and (T4, 2, T2), or (T1, 2, T2) and (T4, 2, T6)
(1.3455, 1.5188, 2.6610); 2; (T4, 2, T6) and (T5, 2, T2), or (T4, 2, T2) and (T5, 2, T6)
(1.9057, 1.3904, 1.9411); 2; (T4, 2, T2) and (T5, 2, T8), or (T4, 2, T8) and (T5, 2, T2)
(1.7901, 1.3730, 1.9297); 3; (T1, 2, T2), (T4, 2, T6) and (T5, 2, T8); or (T1, 2, T2), (T4, 2, T8) and (T5, 2, T6); or (T1, 2, T6), (T4, 2, T2) and (T5, 2, T8); or (T1, 2, T6), (T4, 2, T8) and (T5, 2, T2); or (T1, 2, T8), (T4, 2, T2) and (T5, 2, T6); or (T1, 2, T8), (T4, 2, T6) and (T5, 2, T8)


TABLE 9.14a. Reallocation Model—Maximum Precision Model Results for the Heat Exchanger Network.

Case   σ*U1   σ*U2   σ*U3   Nr*   σU1      σU2      σU3
1      1.50   1.50   2.00   1     —        —        —
2      1.80   1.40   2.00   3     1.7901   1.3730   1.9297
3      4.00   1.80   2.40   1     1.9019   1.5458   2.2124
4      4.00   4.00   4.00   2     1.6973   1.5694   1.9573
5      4.00   4.00   4.00   3     1.7901   1.3730   1.9297
6      2.00   1.40   2.00   2     1.9057   1.3904   1.9411

is taken into account if the following upgrade cost function is used (Bagajewicz and Sánchez, 2000):

$$
\sum_{i \in M_1} \sum_{k \in K_i} c_{ik}\, q_{ik}^N \;+\; \sum_{t \in M_T} \sum_{k \in K_t} \sum_{r \in M_R} h_{t,k,r}\, u_{t,k,r} \tag{9.6}
$$

where h_{t,k,r} represents the cost of reallocating the type-k instrument from variable t to variable r. The complete generalized reallocation and upgrading

TABLE 9.14b. Reallocation Model—Maximum Precision Model Results for the Heat Exchanger Network.

Case   Nr   Σi ση_i²   Optimal Reallocations (t, k, r)
1      —    —          —
2      3    8.8130     (T1, 2, T2), (T4, 2, T6) and (T5, 2, T8); or (T1, 2, T2), (T4, 2, T8) and (T5, 2, T6);
                       or (T1, 2, T6), (T4, 2, T2) and (T5, 2, T8); or (T1, 2, T6), (T4, 2, T8) and (T5, 2, T2);
                       or (T1, 2, T8), (T4, 2, T2) and (T5, 2, T6); or (T1, 2, T8), (T4, 2, T6) and (T5, 2, T8)
3      1    10.9016    (T5, 2, T2)
4      1    9.1751     (T1, 2, T6) and (T4, 2, T8), or (T1, 2, T8) and (T4, 2, T2)
5      3    8.8130     Same as in case 2
6      2    9.3325     (T4, 2, T2) and (T5, 2, T8), or (T4, 2, T8) and (T5, 2, T2)

model, which is a generalization of model (9.4), is the following (Bagajewicz and Sánchez, 2000):

$$
\begin{aligned}
\min \quad & \sum_{i \in M_1} \sum_{k \in K_i} c_{ik}\, q_{ik}^N + \sum_{t \in M_T} \sum_{k \in K_t} \sum_{r \in M_R} h_{t,k,r}\, u_{t,k,r} \\
\text{s.t.} \quad & \sigma_j(q^N, u) \le \sigma_j^* \qquad \forall j \in M_P \\
& \sum_{r \in M_R} \sum_{k \in K_t} u_{t,k,r} \le u_t^T \qquad \forall t \in M_T \\
& \sum_{t \in M_T} \sum_{k \in K_t} u_{t,k,r} \le u_r^R \qquad \forall r \in M_R \\
& \sum_{k \in K_i} q_{ik}^N + \sum_{t \in M_T} \sum_{k \in K_t} u_{t,k,i} + \left( N_i - \sum_{r \in M_R} \sum_{k \in K_i} u_{i,k,r} \right) \le N_i^* \qquad \forall i \in M_1 \\
& u_{t,k,r} \in \{0,1\} \qquad \forall t \in M_T,\ \forall r \in M_R,\ \forall k \in K_t \\
& q_{ik}^N \in \{0,1\} \qquad \forall i \in M_1,\ \forall k \in K_i
\end{aligned} \tag{9.7}
$$

An equivalent maximum precision model can be obtained by finding the dual in the Tuy sense:

$$
\begin{aligned}
\min \quad & \sum_{j \in M_P} a_j\, \sigma_j^2(q^N, u) \\
\text{s.t.} \quad & \sum_{i \in M_1} \sum_{k \in K_i} c_{ik}\, q_{ik}^N + \sum_{t \in M_T} \sum_{k \in K_t} \sum_{r \in M_R} h_{t,k,r}\, u_{t,k,r} \le c_T \\
& \sigma_j(q^N, u) \le \sigma_j^* \qquad \forall j \in M_P \\
& \sum_{r \in M_R} \sum_{k \in K_t} u_{t,k,r} \le u_t^T \qquad \forall t \in M_T \\
& \sum_{t \in M_T} \sum_{k \in K_t} u_{t,k,r} \le u_r^R \qquad \forall r \in M_R \\
& \sum_{k \in K_i} q_{ik}^N + \sum_{t \in M_T} \sum_{k \in K_t} u_{t,k,i} + \left( N_i - \sum_{r \in M_R} \sum_{k \in K_i} u_{i,k,r} \right) \le N_i^* \qquad \forall i \in M_1 \\
& u_{t,k,r} \in \{0,1\} \qquad \forall t \in M_T,\ \forall r \in M_R,\ \forall k \in K_t \\
& q_{ik}^N \in \{0,1\} \qquad \forall i \in M_1,\ \forall k \in K_i
\end{aligned} \tag{9.8}
$$
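The upgrade cost (9.6) that both generalized models share is just a sum over the chosen binaries. The sketch below evaluates it with figures taken from the flash example of this chapter: a pressure sensor costs 100 and a laboratory analysis 2700 (Table 9.15), relocating the feed lab analysis y1 to y2 is free, and moving a flowmeter costs 80 (Table 9.17).

```python
# Evaluating cost function (9.6): new-sensor costs c_ik plus reallocation costs h_tkr.
new_sensor_cost = {"P": 100, "y3": 2700}                   # c_ik (one type each)
realloc_cost = {("y1", 2, "y2"): 0, ("F2", 1, "F1"): 80}   # h_tkr

def upgrade_cost(new_sensors, moves):
    """Expression (9.6) for a given choice of q^N (new_sensors) and u (moves)."""
    return (sum(new_sensor_cost[i] for i in new_sensors)
            + sum(realloc_cost[m] for m in moves))

# Relocate the lab analysis y1 -> y2 (free) and add a pressure sensor:
print(upgrade_cost({"P"}, {("y1", 2, "y2")}))        # 100
# Same relocation plus a lab analysis on y3 and a pressure sensor:
print(upgrade_cost({"P", "y3"}, {("y1", 2, "y2")}))  # 2800
```

These two totals correspond to the 100 and 2800 cost levels that appear in the flash-drum results (Tables 9.18 and 9.19); the optimization models search over all such (q^N, u) combinations subject to the precision and count constraints.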


TABLE 9.15. Installed and New Instrumentation Data.

                                  Installed σi,k           New Instruments
Variable Index   Variable   Ni    (type 1)   (type 2)      ci,k    σi,k
1                F1         2     2.5        2.5           350     2.
2                y1         2     0.015      0.01          2700    0.01
3                F2         1     1.515      —             350     1.48
4                y2         1     0.01       —             2700    0.01
5                F3         1     1.418      —             400     1.38
6                y3         1     0.01       —             2700    0.01
7                P          1     14.        —             100     14.

Example 9.4

Consider the flash tank of Example 9.1. The number of existing instruments measuring each variable (Ni) and the corresponding standard deviations (σi,k) for the different types of installed instruments are included in Table 9.15. In this example the mass fractions of all the components of a stream are measured on-line; a laboratory analysis may be performed as a second alternative to determine their values. The standard deviation of the vaporization efficiency coefficient estimated with the existing instrumentation is 0.00438. Because this value is not satisfactory, a reallocation and possibly the incorporation of new flowmeters and laboratory composition analyses are proposed. The constraint bounds are presented in Table 9.16. The sets of transferred and received measurements are MT = {1, 2, 3} and MR = {1, 3, 4, 6}, respectively. The costs of feasible reallocations between the sets MT and MR are given in Table 9.17. For example, the cost of relocating a sensor from F1 to F2 is 80. Infinite costs are used for forbidden reallocations. Typically, this relocation pattern is constructed using engineering judgment.

Table 9.18 shows the results for the minimum cost model. The first row represents the case of the existing instrumentation. It is interesting to notice that a reduction of the standard deviation from 0.00438 to 0.00347 results if

TABLE 9.16. Flash Drum: Constraint Bounds for Sensor Reallocation and Upgrade.

Variable   F1   y1   F2   y2   F3   y3   P
N*         2    2    2    2    2    2    2
u_i^T      2    1    1    0    0    0    0
u_i^R      1    0    2    1    0    1    0

TABLE 9.17. Flash Drum: Costs of Relocation.

t \ r   F1     F2     y2    y3
F1      ∞      80     ∞     ∞
y1      ∞      ∞      0     50
F2      80     ∞      ∞     ∞

TABLE 9.18. Flash Drum: Minimum Cost Model Results for Sensor Reallocation and Upgrade.

Case   σ*       σ         Cost   Reallocations (t, k, r)   New Instruments
1      —        0.00438   —      —                         —
2      0.0038   0.00352   100    —                         P
3      0.0033   0.00347   100    (2, 2, 4)                 P
4      0.0031   0.00329   2800   (2, 2, 4)                 y3, P

TABLE 9.19. Maximum Precision Model Results for Sensor Reallocation and Upgrade.

Case   σ*       cT     σ²          σ         Cost   Reallocations   New Instruments
1      —        —      1.918e-5    0.00438   —      —               —
2      0.0038   100    1.202e-5    0.00347   100    (y1, 2, y2)     P
3      0.0033   2050   —           —         —      —               —
4      0.0033   2800   1.080e-5    0.00329   2800   (y1, 2, y2)     y3, P
5      0.0033   3550   1.0797e-5   0.00329   3550   (y1, 2, y2)     F2, F3, y3, P

TABLE 9.20. Heat Exchanger Network: Bounds for Reallocation and Upgrade Problems.

Variable   Ni   u_t^T   u_r^R      Variable   Ni   u_t^T   u_r^R
F1         1    1       0          T1         2    1       0
F2         1    0       0          T2         2    0       1
F3         1    0       1          T3         2    0       0
F4         1    0       0          T4         2    1       0
F5         1    0       0          T5         2    1       0
F6         0    0       0          T6         2    0       1
F7         1    1       0          T7         2    0       0
F8         0    0       0          T8         0    0       0
F9         1    0       1          T9         2    0       0


TABLE 9.21. Heat Exchanger Network: Relocation Costs.

MT \ MR   F3     F9     T2    T6
F1        1000   ∞      ∞     ∞
F7        ∞      1000   ∞     ∞
T1        ∞      ∞      100   100
T4        ∞      ∞      100   100
T5        ∞      ∞      100   100

the laboratory analysis for the feed stream is relocated to the liquid stream and a pressure sensor is added. The cost of this case is 100. Higher precision is obtained by means of both reallocation and the addition of instruments. Table 9.19 presents the results from the application of the maximum precision model. Two cases are included to show the duality between the minimum cost model for reallocation and upgrade and the maximum precision model.

Example 9.5

The reallocation and upgrading of instrumentation is illustrated using the heat exchanger network presented in Example 9.2. The existing process instrumentation was provided in Table 9.11. The relocation bounds and the maximum number of instruments per stream are included in Table 9.20. In this example, two flowmeters and three thermocouples may be reallocated. The allowable new positions for these instruments and the reallocation costs are indicated in Table 9.21. The costs and standard deviations of the new instrumentation are given in Table 9.22.

Results are given in Tables 9.23, 9.24 and 9.25. In some cases, reallocation alone is sufficient to fulfill the precision requirements; for higher precision requirements the solution contains only new instruments. A mixed alternative of intermediate cost (reallocation plus new instrumentation) is obtained in other cases. Two examples are provided to show the duality between both models.

TABLE 9.22. Heat Exchanger Network: Available New Instrumentation.

M1   Standard Deviation   Cost
F4   2.5%                 2250
F9   2.5%                 2250
T2   0.2                  1500
T6   0.2                  1500


TABLE 9.23. Heat Exchanger Network: Minimum Cost Model Results for the Reallocation and Upgrade Problem.

Case   σ*U1   σ*U2   σ*U3   σU1      σU2      σU3      Cost   Reallocations   New Instruments
1      4.00   4.00   4.00   3.2826   1.9254   2.2168   100    (T1, 2, T6)     —
                            2.3947   1.5482   2.6690   100    (T4, 2, T2)     —
                            2.0620   1.8277   2.6698   100    (T4, 2, T6)     —
                            3.6035   1.9122   2.2140   100    (T5, 2, T6)     —
2      2.00   2.00   2.00   —        —        —        —      —               —
3      2.00   2.00   2.20   1.3891   1.5148   2.1935   3000   —               T2, T6
4      1.50   1.50   2.20   1.3492   1.3664   2.1125   5250   —               F4, T2, T6
                            1.3839   1.4995   2.1606   5250   —               F9, T2, T6
5      2.40   2.30   2.20   2.0587   1.8174   2.1938   1500   —               T6
6      2.20   1.80   2.40   1.7890   1.6827   2.2014   1600   (T1, 2, T2)     T6
                            2.1503   1.5306   2.2031   1600   (T1, 2, T6)     T2
                            2.0980   1.6816   2.1993   1600   (T5, 2, T2)     T6
                            1.8644   1.5295   2.2025   1600   (T5, 2, T6)     T2

TABLE 9.24. Heat Exchanger Network: Maximum Precision Model Results for the Reallocation and Upgrade Problem.

Case No.   σ*U1   σ*U2   σ*U3   c*     σU1      σU2      σU3
6          2.20   1.80   2.40   1600   1.8644   1.5295   2.2025
7          2.80   1.80   2.30   3000   1.3891   1.5148   2.1935
4          1.50   1.50   2.20   5000   —        —        —
4          1.50   1.50   2.20   5250   1.3492   1.3664   2.1125

TABLE 9.25. Heat Exchanger Network: Maximum Precision Model Results for the Reallocation and Upgrade Problem (Continued).

Case   Cost   Σi ση_i²   Reallocations   New Instruments
6      1600   10.6663    (T5, 2, T6)     T2
7      3000   9.0354     —               T2, T6
4      —      —          —               —
4      5250   8.1503     —               F4, T2, T6


NOMENCLATURE

c_{i,k}: cost of the kth sensor for measuring variable i
c_T: bound on total cost
Cp: heat capacity
F_i: flow rate of stream i
F_T: configuration correction factor for heat exchangers
H_0: measure of the estimation error
K_i: set of new sensors available for variable i
M_1: set of variables where sensors can be placed
M_P: set of key variables for precision
M_R: set of variables where sensors from other places can be reallocated
M_T: set of variables whose sensors can be reallocated
M_U: set of key variables compulsorily measured
N_i: number of existing sensors for variable i
N_i*: maximum number of sensors for variable i
Q̃: covariance matrix of estimates
q: binary vector indicating whether sensors are located (1) or not (0) in variables
q^N: binary vector indicating whether new sensors are located (1) or not (0) in variables
S_i: stream i
T: temperature
U_i: unit i
u_{t,k,r}: binary variable indicating if sensor k from variable t is relocated to measure variable r

Greek Letters

σ: precision
σ*: precision threshold

REFERENCES

Alhéritière C., N. Thornhill, S. Fraser and M. Knight. Evaluation of the Contribution of Refinery Process Data to Performance Measures. AIChE Annual Meeting, Los Angeles (1997).
Alhéritière C., N. Thornhill, S. Fraser and M. Knight. Cost Benefit Analysis of Refinery Process Data: Case Study. Comp. & Chem. Eng., 22, Suppl., pp. S1031–S1034 (1998a).
Alhéritière C., N. Thornhill, S. Fraser and M. Knight. Cost Benefit Analysis of Process Data in Plant Performance Analysis. AIChE Annual Meeting, Miami (1998b).
Bagajewicz M. and M. Sánchez. Reallocation and Upgrade of Instrumentation in Process Plants. Comp. & Chem. Eng. (2000).
Kretsovalis A. and R. S. H. Mah. Observability and Redundancy Classification in Multicomponent Process Networks. AIChE J., 33, pp. 70–82 (1987).
Loeblein C. and J. D. Perkins. Economic Analysis of Different Structures of On-Line Process Optimization Systems. Comp. & Chem. Eng., 22, 9, pp. 1257–1269 (1998).
Van Winkle M. Distillation. McGraw-Hill, New York (1967).

CHAPTER 10

Reliability of Nonrepairable Sensor Networks

INTRODUCTION

In this chapter, the concepts of availability and reliability of sensor networks are explored. A review of the concepts of service availability and reliability of individual sensors is presented first. Sensors fail, and they do so with a certain frequency. Because sensors are used to estimate other variables through data reconciliation, when they fail the observability of some important key variables may be compromised. When the degree of observability of these key variables is larger than one, the failure of one sensor may not compromise observability, but the accuracy of the estimates always changes. This change may make the new standard deviation larger than the original threshold (σ*). In this context, availability and reliability are concepts that relate to the probability of a variable being estimable, and they are independent of the loss of accuracy that takes place when some sensors fail.

SENSOR SERVICE AVAILABILITY

The probability of failure of a sensor is related to many factors. These factors are inherent to the sensor and are in principle independent (or mildly independent) of external factors, such as weather, accidents, etc. They manifest over time: a sensor can start in good condition, and the probability of its failure must be connected to the amount of time the sensor has been in service. These concepts are formalized through the following definitions:

Definition: Sensor Service Availability A_i^s(t) is the probability that at time t the sensor i is normal, given that it is as good as new at time t = 0.


Definition: Sensor Service Unavailability U_i^s(t) is the probability that the sensor i is in the failed state at time t, given that it jumped into the normal state at time t = 0.

From the definitions, it is obvious that:

$$
A_i^s(t) + U_i^s(t) = 1 \tag{10.1}
$$

SENSOR SERVICE RELIABILITY

Sensor service availability gives an indication of the state of a sensor at a given point in time t. One may, however, be interested in determining the probability of a sensor staying in the normal state for a given period of time. This is called reliability and is defined as follows:

Definition: Sensor Service Reliability R_i^s(t) is the probability that the sensor i is in the normal state throughout the interval [0, t], given that it was in the normal state at time t = 0.

Note first that service availability is a probability expressed in terms of the state of the sensor at time t, regardless of whether the sensor has failed at any time prior to that. Thus,

$$
A_i^s(t) \ge R_i^s(t) \tag{10.2}
$$

This difference will become apparent when corrective and preventive maintenance is analyzed. However, when no maintenance is performed, availability and reliability are equal. Indeed, if a sensor has failed, it will stay in its failed state, because in the absence of maintenance no corrective action is taken. Then, the probability of a sensor being in its normal state at time t is equal to the probability of the sensor being in the normal state during the whole interval [0, t]. Thus, the rest of this chapter concentrates on the concept of reliability only.

A typical reliability curve R_i^s(t) is shown in Figure 10.1. It is also called the survival distribution. It is characterized by a relatively sharp decline at the beginning, a stabilization period and a declining last period. These curves are constructed by simply starting with a large population of devices and monitoring the number of devices surviving at each point in time.
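This population construction can be sketched directly: given a list of failure times (hypothetical numbers below), the empirical survival curve is the fraction of devices still working at each time.

```python
# Empirical reliability (survival) curve from a population of identical
# sensors put in service at t = 0.  The failure times are illustrative.
failure_times = [2.0, 3.5, 3.5, 5.0, 7.5, 8.0, 8.0, 9.5, 12.0, 15.0]

def survival(t, failures=failure_times):
    """R(t): fraction of the population still in the normal state at time t."""
    alive = sum(1 for tf in failures if tf > t)
    return alive / len(failures)

print(survival(0.0))   # 1.0  (all sensors good as new)
print(survival(4.0))   # 0.7  (three of ten have failed by t = 4)
print(survival(20.0))  # 0.0
```

With no repairs, this curve only decreases, which is exactly the behavior sketched in Figure 10.1.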

FAILURE DENSITY AND FAILURE RATE

Assume that a certain large number of identical sensors is put in operation at time t = 0. The number of sensors failing in the interval [t, t + dt) divided by the number of sensors in the normal state is a measure of an intrinsic rate of


FIGURE 10.1. Reliability or survival distribution.

failure of the system at that given point in time. This thought experiment can now be translated to the realistic condition in which one analyzes one instrument and not many. Thus:

Definition: Failure density f_i(t) at time t is the negative of the derivative of the survival distribution:

$$
f_i(t) = -\frac{dR_i^s(t)}{dt} \tag{10.3}
$$

The failure density is, therefore, related to reliability. As shown in Figure 10.2, there is a period of early failures, followed by a fairly constant failure rate period and finally by a wear-out failure period. The maximum at the end is attributed to the depletion of the sample. One can, however, relate this absolute rate to the sensors still in the normal state. Thus, the failure rate is defined as follows:

Definition: The failure rate r_i(t) at time t is the probability that the sensor fails per unit time at time t, given that it has survived until time t.

FIGURE 10.2. Failure density function.


RELIABILITY OF NONREPAIRABLE SENSOR NETWORKS

FIGURE 10.3. Failure rate.

This rate is expressed in conditional terms by requesting that the sensor be in the normal state in the whole interval [0, t). In other words, when looked at in terms of the thought experiment of a population of sensors in the normal state at time zero, the failure rate is none other than the portion of sensors that fail, calculated as a fraction of those that have survived. Indeed,

r_i(t) = f_i(t) / R_i^s(t)   (10.4)

A typical form of this curve as a function of time is shown in Figure 10.3 and is known as a bathtub curve. This curve shows a first period of early failures (burn-in period), followed by a constant failure rate period, called the prime of life, in which failures are random, and then a final wear-out or burnout phase in which the component is assumed to have reached its usability limit, or simply its lifetime. It is in the constant failure rate phase that one wants the systems to operate. Thus, all desired properties established as goals of a design/upgrade procedure are typically based on a constant failure rate.
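The population thought experiment above is easy to simulate. The sketch below assumes a constant failure rate and illustrative numbers (both are assumptions, not data from the text): it draws exponential lifetimes for a large population and recovers the reliability R(t), the failure density f(t), and the failure rate r(t) directly from survival counts.

```python
import math
import random

random.seed(0)

# Simulate a population of identical devices with a constant failure
# rate (exponential lifetimes), then recover from survival counts:
#   R(t) - fraction still alive at t          (reliability)
#   f(t) - failures per unit time over N0     (failure density)
#   r(t) - failures per unit time over alive  (failure rate, f/R)
r_true = 2.0      # failures per year (assumed value)
n0 = 200_000      # initial population
dt = 0.05         # width of the counting interval [t, t + dt)

lifetimes = [random.expovariate(r_true) for _ in range(n0)]

t = 0.25
alive_at_t = sum(1 for tau in lifetimes if tau > t)
failed_in_dt = sum(1 for tau in lifetimes if t < tau <= t + dt)

R_hat = alive_at_t / n0                      # estimate of R(t)
f_hat = failed_in_dt / (n0 * dt)             # estimate of f(t)
rate_hat = failed_in_dt / (alive_at_t * dt)  # estimate of r(t)

print(R_hat, math.exp(-r_true * t))  # both near exp(-0.5)
print(f_hat)
print(rate_hat)                      # near r_true, up to discretization
```

The estimated rate is slightly below r_true because a finite interval dt is used; shrinking dt (with a larger population) removes the bias.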

MARKOVIAN MODEL

In this section, a Markovian model of a system that has no repairs is presented. Assume now the model shown in Figure 10.4. The basis for Markovian models is that the derivative of the probability P_k(t) that the system is in state k at time t is given by:

dP_k(t)/dt = inflow to state k − outflow from state k   (10.5)


FIGURE 10.4. State transition.

In turn, the inflow to state k can be written as the sum over all other states of the rate of transition from these other states to state k, X_{i,k}, multiplied by the probability of the system being in these other states:

inflow to state k = Σ_i X_{i,k}(t) P_i(t)   (10.6)

Similarly, the outflow from state k can be written as the sum over all other states of the rate of transition from state k to these other states, X_{k,i}, multiplied by the probability of the system being in state k:

outflow from state k = Σ_i X_{k,i}(t) P_k(t)   (10.7)

In our case, there are only two states, normal and failed. In addition, the flow from the failed state to the normal state is zero because there are no repairs, whereas the outflow from the normal state to the failed state is related to the failure rate. Finally, the probability of the sensor being in the normal state is its service availability, and the rate of transition from the normal to the failed state is related to the failure rate. Then the following differential equation follows:

dR_i^s(t)/dt = −r_i(t) R_i^s(t)   (10.8)

which, taking into account that R_i^s(0) = 1, has the following solution:

R_i^s(t) = exp(−∫_0^t r_i(t′) dt′)   (10.9)

In the case of constant failure rate, the reliability and the availability of a system without repairs become:

R_i^s(t) = A_i^s(t) = e^(−r_i t)   (10.10)

When the failure rate is constant, the assumption is that the number of failures per unit time is proportional to the number of sensors in the normal state. It is, however, an optimistic assumption: it amounts to assuming that there is no deterioration of the sensors' parts through time. This is by no means true, but it has been considered so far a sufficiently appropriate assumption for the design of sensor networks. In addition, Equation (10.10) does not take into account the burn-in period. Other models to calculate reliability exist. The reader is referred, for example, to Henley and Kumamoto (1981) or Dhillon (1983) for further details. However, all these other models are far more sophisticated than what has been used for sensor network design. Thus, they are outside the scope of this book.
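The Markovian balance can also be checked numerically. The sketch below integrates Equation (10.8) with a simple Euler scheme and compares the result with the closed form of Equation (10.9); the time-varying failure rate used here is an arbitrary illustrative choice, not a value from the text.

```python
import math

# Integrate dR/dt = -r(t) R(t) with R(0) = 1 and compare with the
# closed form R(t) = exp(-integral of r), Equation (10.9).
def failure_rate(t):
    return 0.5 + 0.2 * t   # failures per year, grows with wear (assumed)

def reliability_euler(t_end, steps=200_000):
    dt = t_end / steps
    R = 1.0                # R(0) = 1: the sensor starts in the normal state
    for k in range(steps):
        R -= failure_rate(k * dt) * R * dt
    return R

t = 2.0
closed_form = math.exp(-(0.5 * t + 0.1 * t * t))  # exp(-∫(0.5 + 0.2τ)dτ)
print(reliability_euler(t), closed_form)          # both near exp(-1.4)
```

With a constant rate the integral reduces to r·t and the closed form collapses to Equation (10.10).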

MEAN TIME TO FAILURE

Because reliability is a function of time, a number representative of the reliability that is independent of time is the mean time to failure. This is defined as follows:

τ_i = ∫_0^∞ R_i^s(t) dt   (10.11)

In the case of constant failure intensity this is given by:

τ_i = 1/r_i   (10.12)

For example, for a sensor with a failure intensity of two failures per year, the mean time to failure is half a year. Table 10.1 provides some reliability data for instruments. Ranges extracted from the chemical and petroleum industries, as well as from the nuclear industry, are shown.

TABLE 10.1. Reliability Values for Instruments.

Instrument     Failure Rate (Failures per 10^6 Hours)
Flow           0.1–10
Temperature    0.03–7
Pressure       1–8
Level          2–80

Source: Bloch and Geltner (1999).
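A quick numerical check of Equations (10.11) and (10.12), using the upper end of the level-instrument range from Table 10.1 (80 failures per 10^6 hours); the step size and horizon are arbitrary choices for the sketch.

```python
import math

# Verify MTTF = 1/r by integrating R(t) = exp(-r t) with the midpoint rule.
r = 80e-6                 # failures per hour (Table 10.1, level, upper end)
dt = 100.0                # integration step, hours
horizon = 2_000_000.0     # hours; exp(-r t) is negligible beyond this

mttf_numeric = sum(math.exp(-r * (k + 0.5) * dt) * dt
                   for k in range(int(horizon / dt)))
mttf_exact = 1.0 / r      # 12,500 hours, roughly 1.4 years

print(mttf_numeric, mttf_exact)
```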


ESTIMATION AVAILABILITY AND RELIABILITY OF VARIABLES

The degree of estimability of a variable was defined in Chapter 5, and methods to design sensor networks featuring this property were outlined in Chapter 6. The degree of estimability is in many ways connected to reliability, because it is directly defined as the minimum number of sensors that have to fail to make a variable unobservable. Thus, the more sensors that have to fail to make a variable unobservable, the larger the probability that this will not happen. These concepts are now extended to include probabilities, making the connection with the reliability of individual sensors.

Definition: Variable estimation availability A_i^v(t) is the probability that variable i can be estimated at time t using its own measurement (if any) or other measurements, through balance equations, given that this estimation is possible at time t = 0.

Definition: Variable estimation reliability R_i^v(t) is the probability that variable i can be estimated throughout the interval [0, t] using its own measurement (if any) or other measurements, through balance equations, given that this estimation is possible at time t = 0.

Example 10.1

Consider as an example a simple flowsheet (Figure 10.5). Consider first the case where x_M = {S1, S2, S3}. Thus, the only redundant equation of this system is:

F1 = F2 + F3   (10.13)

The probability of being able to estimate F1 at time t is the estimation availability of F1 at time t, and the probability of being able to estimate F1 throughout the interval [0, t], assuming all sensors are working at time zero, is called its estimation reliability. Using probability theory, one can write: A_1^v(t) = Pr{S1 is in normal state OR S2 and S3 are in their normal state simultaneously}, which reflects that there are two ways of estimating the flow rate of stream S1: by measuring it

FIGURE 10.5. Example 10.1.


directly, or by using the balance Equation (10.13) and the measurements of S2 and S3. Using sets and probability theory, the estimation availability can then be expressed as follows:

A_1^v(t) = Pr{S1 ∪ (S2 ∩ S3)}   (10.14)

where the union of both sets that can estimate F1 represents the event that either one set or the other, or both, can be in the normal state, whereas the intersection represents the fact that both sets are in the normal state. Because the events are not mutually exclusive, one can use a well-known result from elementary probability theory to obtain:

A_1^v(t) = Pr{S1} + Pr{S2 ∩ S3} − Pr{S1 ∩ (S2 ∩ S3)}   (10.15)

where the intersection of both sets that can estimate F1 now represents the event that both sets are in their normal state. In turn, the probability of sensors S2 and S3 being in their normal state is related to the sensors' service availability as follows:

Pr{S2 ∩ S3} = A_2^s(t) A_3^s(t)   (10.16)

because both sensors should be active and they are independent. Therefore:

A_1^v(t) = A_1^s(t) + A_2^s(t)A_3^s(t) − A_1^s(t)A_2^s(t)A_3^s(t)   (10.17)
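Equation (10.17) is easy to check numerically. The sketch below evaluates it for assumed sensor availabilities (illustrative values, not from the text) and cross-checks the result with a Monte Carlo simulation of independent sensor failures.

```python
import random

random.seed(1)

a1, a2, a3 = 0.95, 0.90, 0.85   # assumed service availabilities of S1, S2, S3

# Closed form, Equation (10.17): A1v = A1 + A2*A3 - A1*A2*A3
a1v = a1 + a2 * a3 - a1 * a2 * a3

# Monte Carlo: F1 is estimable if S1 works, or S2 and S3 both work.
n = 400_000
hits = 0
for _ in range(n):
    s1 = random.random() < a1
    s2 = random.random() < a2
    s3 = random.random() < a3
    if s1 or (s2 and s3):
        hits += 1

print(a1v, hits / n)   # closed form 0.98825, simulation close to it
```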

DETERMINATION OF ESTIMATION RELIABILITY

One can generalize the procedure shown in the previous motivating example and construct a framework to determine the estimation availability of a variable in any network. First, the expression in Equation (10.14) includes the measurement of the variable of interest and the rest of the measurements of the cutsets that include the variable. To generalize Equation (10.17), the jth reduced cutset Z_j(i) = K_j(i) − S_i is defined, that is, the set of all elements of K_j(i), the jth cutset containing variable i, except variable i itself. Thus,

R_i^v(t) = Pr{S_i ∪ Z_1(i) ∪ Z_2(i) ∪ ⋯ ∪ Z_{n_K(i)}(i)}   (10.18)

where n_K(i) is the number of cutsets that contain variable i. In other words, the estimation availability/reliability of variable i is related to the service availability/reliability of a sensor measuring this variable and of all the sensors through which independent balances can be made. The reliability of variable i when it is unmeasured is now introduced:

R_i^u(t) = Pr{Z_1(i) ∪ Z_2(i) ∪ ⋯ ∪ Z_{n_K(i)}(i)}   (10.19)


Applying elementary probability theory to the union of two sets, as in Equations (10.15) through (10.17), one obtains:

R_i^v(t) = R_i^s(t) + R_i^u(t)[1 − R_i^s(t)]   (10.20)

In turn, the reliability of variable i when unmeasured, R_i^u(t), can be obtained by expanding Equation (10.19):

R_i^u(t) = Σ_{s=1}^{n_K(i)} Pr{Z_s(i)} − Σ_{s=1}^{n_K(i)} Σ_{j>s} Pr{Z_s(i) ∩ Z_j(i)}
         + Σ_{s=1}^{n_K(i)} Σ_{j>s} Σ_{t>j} Pr{Z_s(i) ∩ Z_j(i) ∩ Z_t(i)}
         − ⋯ + (−1)^{n_K(i)−1} Pr{Z_1(i) ∩ ⋯ ∩ Z_{n_K(i)}(i)}   (10.21)

which can now be further expanded by using the expression for each cutset:

Pr{K_j(i)} = R_{m_1}^s(t) R_{m_2}^s(t) ⋯ R_{m_m}^s(t)   (10.22)

where R_{m_s}^s(t) is the service reliability of sensor m_s, one of the sensors that define cutset K_j(i). There are yet other ways to calculate R_i^v(t). Several algorithms have been published to evaluate Equation (10.18) (Veeraraghavan and Trivedi, 1991; Fong and Buzacott, 1987; Tewari and Verma, 1980). We review the method based on the so-called sum of disjoint products (Ali, 1993). We start with the following equality:

Z_1(i) ∪ Z_2(i) ∪ ⋯ ∪ Z_{n_K(i)}(i) = Z_1(i) ∪ Z*_2(i) ∪ ⋯ ∪ Z*_{n_K(i)}(i)   (10.23)

where, denoting by Z̄_j(i) the complement of Z_j(i),

Z*_j(i) = Z̄_1(i) ∩ Z̄_2(i) ∩ ⋯ ∩ Z̄_{j−1}(i) ∩ Z_j(i)   (10.24)

This equality is illustrated with three sets (Figure 10.6); the subset in gray is indicated immediately below each diagram. Note that the sets Z*_j(i) are disjoint. This is trivial, because the set Z_j(i) appears uncomplemented in Z*_j(i) but complemented in Z*_{j+1}(i). Therefore, the following holds:

Pr{Z_1(i) ∪ Z_2(i) ∪ ⋯ ∪ Z_{n_K(i)}(i)} = Pr{Z_1(i)} + Pr{Z*_2(i)} + ⋯ + Pr{Z*_{n_K(i)}(i)}   (10.25)


FIGURE 10.6. Illustration of identity 10.24.

In turn,

Pr{Z*_j(i)} = Pr{Z̄_1(i)} Pr{Z̄_2(i)} ⋯ Pr{Z̄_{j−1}(i)} Pr{Z_j(i)}   (10.26)

Finally:

Pr{Z̄_j(i)} = 1 − R_{m_1}^s(t) R_{m_2}^s(t) ⋯ R_{m_m}^s(t)   (10.27)

Ali and Narasimhan (1995) compared the speed of this algorithm to others and found it competitive.

Example 10.2

Consider the system of Figure 10.5 and assume x_M = {S1, S2, S3, S4}. Note first that there are two cutsets containing S1 and only measured variables, namely:

K_1(1) = {S1, S2, S3}   (10.28)

K_2(1) = {S1, S3, S4}   (10.29)

Equation (10.21) renders:

R_1^u(t) = Pr{Z_1(1)} + Pr{Z_2(1)} − Pr{Z_1(1)} Pr{Z_2(1)}
         = R_2^s(t)R_3^s(t) + R_3^s(t)R_4^s(t) − R_2^s(t)R_3^s(t)R_3^s(t)R_4^s(t)   (10.30)

The same equation can be written by using the sum of disjoint products:

R_1^u(t) = Pr{Z_1(1)} + Pr{Z*_2(1)} = Pr{Z_1(1)} + Pr{Z̄_1(1)} Pr{Z_2(1)}
         = R_2^s(t)R_3^s(t) + [1 − R_2^s(t)R_3^s(t)] R_3^s(t)R_4^s(t)   (10.31)

which contains three multiplications and two additions/subtractions, contrasting with Equation (10.30), which contains one multiplication more. Let us now add
one more measurement, in S6. There is one new cutset containing S1 and only measured variables:

K_3(1) = {S1, S6}   (10.32)

Thus, Equation (10.21) renders:

R_1^u(t) = Pr{Z_1(1)} + Pr{Z_2(1)} + Pr{Z_3(1)} − Pr{Z_1(1)} Pr{Z_2(1)}
         − Pr{Z_1(1)} Pr{Z_3(1)} − Pr{Z_2(1)} Pr{Z_3(1)}
         + Pr{Z_1(1)} Pr{Z_2(1)} Pr{Z_3(1)}
         = R_2^s(t)R_3^s(t) + R_3^s(t)R_4^s(t) + R_6^s(t) − R_2^s(t)R_3^s(t)R_3^s(t)R_4^s(t)
         − R_2^s(t)R_3^s(t)R_6^s(t) − R_3^s(t)R_4^s(t)R_6^s(t)
         + R_2^s(t)[R_3^s(t)]^2 R_4^s(t)R_6^s(t)   (10.33)

whereas the sum of disjoint products gives:

R_1^u(t) = Pr{Z_1(1)} + Pr{Z*_2(1)} + Pr{Z*_3(1)}
         = Pr{Z_1(1)} + Pr{Z̄_1(1)} Pr{Z_2(1)} + Pr{Z̄_1(1)} Pr{Z̄_2(1)} Pr{Z_3(1)}
         = R_2^s(t)R_3^s(t) + [1 − R_2^s(t)R_3^s(t)] R_3^s(t)R_4^s(t)
         + [1 − R_2^s(t)R_3^s(t)][1 − R_3^s(t)R_4^s(t)] R_6^s(t)   (10.34)

Equation (10.33) contains 6 additions/subtractions and 13 multiplications, whereas Equation (10.34) contains only 5 additions/subtractions and 8 multiplications. It is clear that the sum of disjoint products renders fewer multiplications, which for systems with many cutsets can become a factor in computation time.
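The two expansions can be verified to agree numerically. The sketch below evaluates Equations (10.33) and (10.34) for the three reduced cutsets of Example 10.2, treating the cutset events as independent, as the text does; the sensor reliabilities are assumed values.

```python
# Reduced cutsets of Example 10.2: Z1 = {S2,S3}, Z2 = {S3,S4}, Z3 = {S6}
R = {2: 0.95, 3: 0.90, 4: 0.92, 6: 0.88}   # assumed sensor reliabilities

p1 = R[2] * R[3]    # Pr{Z1}
p2 = R[3] * R[4]    # Pr{Z2}
p3 = R[6]           # Pr{Z3}

# Inclusion-exclusion, Equation (10.33): more multiplications
incl_excl = (p1 + p2 + p3
             - p1 * p2 - p1 * p3 - p2 * p3
             + p1 * p2 * p3)

# Sum of disjoint products, Equation (10.34): fewer multiplications
sdp = p1 + (1 - p1) * p2 + (1 - p1) * (1 - p2) * p3

print(incl_excl, sdp)   # identical up to round-off
```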

ESTIMATION RELIABILITY IN NONREDUNDANT SYSTEMS

In nonredundant systems, the number of cutsets determining an unmeasured variable is reduced to one. In other words,

R_i^v(t) = R_i^s(t)   (10.35)

for measured variables and

R_i^v(t) = R_{m_1}^s(t) R_{m_2}^s(t) ⋯ R_{m_m}^s(t)   (10.36)

for unmeasured variables. The service reliabilities in Equation (10.36) correspond to the sensors located in the chords of the spanning tree that are part of the only cutset that contains variable i.


AVAILABILITY, RELIABILITY AND DEGREE OF ESTIMABILITY

The availability of estimating a variable encompasses the concepts of observability and redundancy, as given by the following properties:

(1) A variable is observable if and only if its estimation availability/reliability is greater than zero.
(2) A measured variable is redundant if and only if its estimation availability/reliability is greater than the service availability/reliability of the sensor measuring it.

Ali and Narasimhan (1993) chose to present the above connection to observability in terms of the reliability of variables. Because estimation availability and reliability are the same for nonrepairable systems, the statements are equivalent. However, for repairable systems, availability is still nonzero after a failure is followed by a repair, whereas reliability remains zero. This difference changes the goals and procedures for the design of repairable systems, as covered in Chapter 14. The difficulty in establishing numerical analogies between the estimation availability of variables and the degree of estimability stems from the fact that the estimation availability is always smaller than one; therefore, even though it increases monotonically with the number of contributing cutsets, the growth is far from linear.

SYSTEM AVAILABILITY AND RELIABILITY

Hitherto, the service and estimation availability and reliability of variables have been defined. There is, however, a need for a unique measure that quantifies in one single number the reliability of the different variables. Using the notion that a chain is no stronger than its weakest link, Ali and Narasimhan (1993) defined the system estimation reliability as the minimum estimation reliability among all variables. Their definition is slightly modified here by restricting the minimization to a set of variables M_R where reliability is of interest. Thus, the reliability of the system is given by the smallest reliability of its components:

R^v(t) = Min_{i∈M_R} R_i^v(t)   (10.37)
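Equation (10.37) amounts to a one-line computation. A minimal sketch with assumed variable reliabilities follows; the series-system product of Equation (10.38), given below, is included for comparison.

```python
# Weakest-link system reliability, Equation (10.37), over a set M_R of
# variables of interest; reliabilities are assumed illustrative values.
variable_reliability = {"F1": 0.988, "F2": 0.95, "F3": 0.91, "F4": 0.97}
of_interest = ["F1", "F3", "F4"]    # the set M_R

r_system_min = min(variable_reliability[v] for v in of_interest)

# Series-system alternative, Equation (10.38): product over M_R
r_system_series = 1.0
for v in of_interest:
    r_system_series *= variable_reliability[v]

print(r_system_min)      # the weakest variable dictates the system
print(r_system_series)   # the product is more pessimistic than the minimum
```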

In other words, the reliability of the system is defined by its weakest element. This measure can be repeated in exactly the same way to define the system estimation availability. Ali and Narasimhan (1993, 1995)
use this measure as the basis for their sensor network design procedures (Chapter 11). Other definitions are possible. For example, one can use averages of reliabilities or even weighted averages of some sort. Instead of averages, the product of all the reliabilities of the components has been used for systems in series (Dhillon, 1983):

R^v(t) = Π_{i∈M_R} R_i^v(t)   (10.38)

This expression reflects the fact that it is enough for one unit to fail for the system to fail. Systems in parallel follow other reliability expressions, reflecting the fact that all elements in parallel need to fail for the system to fail. The reader is referred to Henley and Kumamoto (1981) or Dhillon (1983) for various forms of defining the system reliability. Because they have not been applied to the field of sensor design yet, their coverage is omitted. Measures based on mean-time calculations of some sort have also been suggested for specific sensor design case studies (Maquin et al., 1994), but no generalizations for generic systems have been made.

NOMENCLATURE

A_i^s(t): sensor service availability
A_i^v(t): variable estimation availability
f_i(t): failure density
F_i: flow rate of stream i
K_j(i): jth cutset containing variable i
P_k(t): probability that the system is in state k at time t
r_i(t): failure rate
R_i^v(t): variable estimation reliability
R_i^s(t): sensor service reliability
R^v(t): system reliability
S_i: stream i
U_i: unit i
U_i^s(t): sensor service unavailability
x_M: set of measured variables
X_{i,k}(t): rate of transition from state i to state k
Z_j(i): reduced cutset

Greek Letters

σ*: precision threshold
τ_i: mean time to failure of sensor i


REFERENCES

Ali Y. Sensor Network Design for Maximizing Reliability of Processes. Ph.D. Thesis, Indian Institute of Technology, Kanpur, India (1993).
Ali Y. and S. Narasimhan. Sensor Network Design for Maximizing Reliability of Linear Processes. AIChE J., 39, 5, pp. 2237–2249 (1993).
Ali Y. and S. Narasimhan. Redundant Sensor Network Design for Linear Processes. AIChE J., 41, 10 (1995).
Bloch H. P. and F. K. Geltner. Use Equipment Failure Statistics Properly. Hydrocarbon Processing, January (1999).
Dhillon B. S. Reliability Engineering in Systems Design and Operation. Van Nostrand Reinhold Company, New York (1983).
Fong C. and J. Buzacott. An Algorithm for Symbolic Reliability Computation with Pathsets and Cutsets. IEEE Trans. Rel., R-36, p. 34 (1987).
Henley E. J. and H. Kumamoto. Reliability Engineering and Risk Assessment. Prentice Hall, New Jersey (1981).
Maquin D., M. Luong and J. Ragot. Observability Analysis and Sensor Placement. SAFEPROCESS '94 IFAC/IMACS Symposium on Fault Detection, Supervision and Safety for Technical Processes, June 13–15, Espoo, Finland (1994).
Tewari R. K. and M. Verma. An Algebraic Technique for Reliability Evaluation. IEEE Trans. Rel., R-29, p. 311 (1980).
Veeraraghavan M. and K. S. Trivedi. An Improved Algorithm for the Symbolic Reliability Analysis of Networks. IEEE Trans. Rel., R-40, p. 347 (1991).

CHAPTER 11

Design of Reliable Linear Nonrepairable Sensor Networks

INTRODUCTION

This chapter is devoted to presenting methodologies for the design of reliable sensor networks without considering requirements of accuracy. Estimability goals are also omitted, but they are closely connected to reliability. Indeed, increasing the degree of estimability of a variable is equivalent to increasing the number of ways in which the variable can be calculated. The estimation reliability of a variable has the same monotonic behavior, but, unfortunately, there is no one-to-one relation between the two, so they cannot be considered equivalent. Therefore, although estimability goals can be useful when the service availability of sensors is not known, a design using estimability goals can be more costly than one based on hard instrument reliability data.

NONREDUNDANT NETWORKS FEATURING MAXIMUM RELIABILITY

Ali and Narasimhan (1993) proposed to use the system reliability given by Equation (10.37) as the objective function for the sensor network design. They extended both the set of variables for which the reliability is of interest (M_R) and the set of streams where sensors can be placed (M_1) to the whole system. Thus, unless some other constraints are imposed, maximizing the reliability of the system may result in a solution where all the variables are measured. This will occur because the reliability of a variable increases monotonically as measured cutsets containing it are added. Therefore, some restriction on the number of measurements is needed to control the cost.

To address the issue of cost, Ali and Narasimhan (1993) proposed to limit the number of sensors to the minimum that still guarantees observability. As shown in Chapter 5, systems in which all nonmeasured variables are observable (degree of estimability one) with a minimum number of sensors correspond to measurements located in the chords of a spanning tree of the system. Based on this fact, all that is needed is to obtain a spanning tree conducive to maximized reliability.

Motivating Example 11.1

Consider the simplified ammonia network (Kretsovalis and Mah, 1988a) that consists of six nodes and eight edges, with node U6 representing the environmental node, as shown in Figure 11.1(a) and (b). The minimum number of sensors required for this process is three, which is the number of chords of any spanning tree of the graph in Figure 11.1(b). Assume that sensors can be placed on any stream and that each of them has a failure probability of 0.1. Let us examine the following two cases:

a. Case 1. The mass flows of streams S1, S4 and S7 are measured. The mass flow of stream S6 can be estimated using the fundamental cutset {S1, S4, S6, S7}, which gives a reliability of 0.729.
b. Case 2. The mass flows of streams S4, S5 and S7 are measured. The mass flow of stream S6 is now estimated through the fundamental cutset {S5, S6, S7}, giving a reliability of 0.81, which is higher than that for Case 1.

The above example illustrates the type of relation that exists between sensor placement and reliability.

FIGURE 11.1. Simplified ammonia plant network.
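The two cases of the motivating example reduce to a few lines: with a failure probability of 0.1 per sensor, an unmeasured flow is estimable only if every measured sensor in its fundamental cutset works.

```python
# Reproduce Motivating Example 11.1. Sensor reliability = 1 - 0.1 = 0.9.
p_work = 0.9

# Case 1: S6 estimated via cutset {S1, S4, S6, S7} -> three sensors needed
case1 = p_work ** 3
# Case 2: S6 estimated via cutset {S5, S6, S7} -> two sensors needed
case2 = p_work ** 2

print(round(case1, 3), round(case2, 3))   # 0.729 0.81
```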


Foundations of the Design Strategy

To solve the problem while avoiding enumeration, Ali and Narasimhan (1993) presented a strategy in which a spanning tree is chosen first and, in a second step, one sensor at a time is removed and replaced by another sensor, such that:

a. A new spanning tree solution is obtained with the new sensor.
b. The reliability of the new spanning tree solution is higher.

To switch from one spanning tree to another, a measurement placed in a chord of the spanning tree that is being removed should be replaced by a measurement in a branch such that both belong to the same fundamental cutset. Before the actual algorithm is presented, a few definitions and lemmas extracted from Ali and Narasimhan (1993) are given.

Definition: The ring sum of two cutsets K_1 and K_2 is the set of all edges (streams) that belong to either cutset K_1 or K_2 but not to both.

For example, consider the two cutsets defined in the previous motivating example, K_1 = {S1, S4, S6, S7} and K_2 = {S5, S6, S7}. The ring sum of these cutsets is K_1 ⊕ K_2 = {S1, S4, S5}. We now present the following important lemma (Ali and Narasimhan, 1993):

Lemma 11.1: The ring sum of two fundamental cutsets that have at least one common chord of the associated spanning tree gives a cutset of the graph.

Before proving the lemma, we illustrate it using the above ammonia network. Assume the spanning tree with chords {S2, S5, S6}. Consider now the following two cutsets: K_1(S8) = {S2, S6, S8} and K_1(S4) = {S2, S5, S4}. Figure 11.2 shows the ring sum of these two cutsets. The dashed lines correspond to the branches of the spanning tree, and the curved lines indicate how the cutsets separate the graph into two disjoint graphs. The lemma is now formally proved.

FIGURE 11.2. Ring sum of two cutsets with one chord in common.
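The ring sum is simply the symmetric difference of the two edge sets, which Python's built-in set type provides via the ^ operator. Using the cutsets from the motivating example:

```python
# Ring sum of two cutsets = symmetric difference of their edge sets.
K1 = {"S1", "S4", "S6", "S7"}
K2 = {"S5", "S6", "S7"}

ring_sum = K1 ^ K2          # edges in exactly one of the two cutsets
print(sorted(ring_sum))     # ['S1', 'S4', 'S5']
```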


Proof: Consider a spanning tree T with branches given by the set {b_1, …, b_{n−1}} and chords given by the set {c_1, …, c_{e−n+1}}. Without loss of generality, let the fundamental cutsets K_1(b_x) and K_1(b_y) be defined as:

K_1(b_x) = {b_x, c_1, …, c_r, c_{r+1}, …, c_s}   (11.1)

K_1(b_y) = {b_y, c_1, …, c_r, c_{s+1}, …, c_t}   (11.2)

where chords {c_1, …, c_r} are common to both K_1(b_x) and K_1(b_y). The ring sum of K_1(b_x) and K_1(b_y) is the set of all edges in K_1(b_x) and K_1(b_y) excluding the common ones. Thus,

K_1(b_x) ⊕ K_1(b_y) = {b_x, c_{r+1}, …, c_s, b_y, c_{s+1}, …, c_t}   (11.3)

To prove that the ring sum is a cutset, it is only required to show that deleting the whole set disconnects the graph, while deleting any proper subset of it does not. Deletion of the set of chords {c_{r+1}, …, c_s, c_{s+1}, …, c_t}, or any subset of it, will not disconnect the graph, because the branches of T still exist and they maintain connectivity. Similarly, deletion of the set {b_x, c_{r+1}, …, c_t} will not disconnect the graph, because the common edges {c_1, …, c_r} are subsets of K_1(b_x) and K_1(b_y); therefore, they preserve connectivity between the nodes connected by the deleted set. To better visualize this, consider the nodes of the graph divided into three subgraphs, G_1, G_2 and G_3, such that

• G_1 and G_2 are connected by the set {b_x, c_{r+1}, …, c_s}
• G_2 and G_3 are connected by the set {b_y, c_{s+1}, …, c_t}
• G_1 and G_3 are connected by the set {c_1, …, c_r}

This is shown in Figure 11.3. We illustrate this by using the ammonia network and the cutsets used previously, that is, K_1(S8) = {S2, S6, S8} and K_1(S4) = {S2, S5, S4}. The common set of chords is {c_1, …, c_r} = {S2}, b_x = S8, b_y = S4, and the chords not in common

FIGURE 11.3. Illustration of the proof of Lemma 11.1.


FIGURE 11.4. Illustration of the proof of Lemma 11.1.

are given by {c_{r+1}, …, c_s} = {S6} and {c_{s+1}, …, c_t} = {S5}, respectively. As illustrated in Figure 11.4, the system preserves connectivity through stream {S2} when the set {b_x, c_{r+1}, …, c_t} = {S8, S5, S6} is deleted. A similar argument can be made for the deletion of {b_y, c_{s+1}, …, c_t}. Thus, it is only when b_x, b_y and all the chords are deleted that the graph is disconnected. Hence, the ring sum forms another cutset and not a union of edge-disjoint cutsets. Q.E.D.

Remark: A ring sum of two fundamental cutsets does not always give a cutset. When they do not have a chord in common, their ring sum renders a union of edge-disjoint cutsets.

Remark: The property given by Lemma 11.1 is useful in generating the fundamental cutsets of a new spanning tree obtained through an elementary tree transformation.

We now present two other important lemmas, proposed and proved by Ali and Narasimhan (1993).

Lemma 11.2: For any sensor network that has measured variables in the chords of a spanning tree, the variable that has the lowest availability/reliability is always an unmeasured variable.

Proof: Consider the measurement with the lowest sensor service availability, A_m^s(t). This will be a chord of the spanning tree corresponding to the sensor network design. Because every chord appears in some fundamental cutset, let chord S_m be a member of fundamental cutset K_1(S_i), which is a cutset that includes branch S_i of the spanning tree and one or more additional chords. Thus,

A_i^v(t) = Π_{j∈K_1(i)} A_j^s(t) ≤ A_m^s(t)   (11.4)

In other words, either Si has the lowest reliability or some other branch does. In any case, the least reliability is attained for an unmeasured variable. Q.E.D.


Remark: This lemma shows that to improve the network reliability of a given sensor network, the reliability of some unmeasured variable needs to be improved.

Lemma 11.3: Let T be a spanning tree with branch b_x having the minimum reliability. If the failure probabilities of all sensors are equal, then the network reliability can be improved by placing a sensor on branch b_q (leaving variable) and removing the sensor from chord c_p (entering variable), provided the following conditions hold:

(1) The chord c_p is part of fundamental cutsets containing b_x and b_q. That is, c_p ∈ K_1(b_q) and c_p ∈ K_1(b_x).
(2) The cardinality of the ring sum of both cutsets is smaller than the cardinality of K_1(b_x), that is, |K_1(b_x) ⊕ K_1(b_q)| < |K_1(b_x)|.
(3) If c_p ∈ K_1(b_s), where K_1(b_s) is any other fundamental cutset, then |K_1(b_s) ⊕ K_1(b_q)| < |K_1(b_x)|.

Proof: By placing a sensor on b_q and removing the sensor from chord c_p, another spanning tree solution is obtained. Let the new spanning tree be T̄ and let K̄ be the set of fundamental cutsets corresponding to T̄. The objective is to prove that the reliabilities of all variables in the new solution are larger than R^v(b_x). Thus, if

K_1(b_x) = {b_x, c_p, c_1, …, c_r, c_{r+1}, …, c_s}   (11.5)

K_1(b_q) = {b_q, c_p, c_1, …, c_r, c_{s+1}, …, c_t}   (11.6)

then

K_1(b_x) ⊕ K_1(b_q) = {b_x, c_{r+1}, …, c_s, b_q, c_{s+1}, …, c_t}   (11.7)

The ring sum K_1(b_x) ⊕ K_1(b_q) is a cutset of the graph (Lemma 11.1). Furthermore, in the new spanning tree solution, the cutset K_1(b_x) ⊕ K_1(b_q) contains only one unmeasured variable, which is b_x (see Figure 11.5). Condition (1) of the lemma ensures that the new unmeasured variable c_p is not a member of K_1(b_x) ⊕ K_1(b_q). Based on these observations, K_1(b_x) ⊕ K_1(b_q) is a fundamental cutset with respect to spanning tree T̄ containing branch b_x. Thus,

K̄_1(b_x) = K_1(b_x) ⊕ K_1(b_q)   (11.8)


FIGURE 11.5. Illustration of the proof of Lemma 11.3.

From condition 2 and the assumption that all sensors have the same failure probability, it follows that:

R̄^v(b_x) > R^v(b_x)   (11.9)

where R̄^v(b_x) represents the reliability of b_x with respect to the spanning tree T̄. The new unmeasured variable c_p belongs to the same cutset as b_q, that is, K_1(b_q). Then,

K̄_1(c_p) = K_1(b_q) ⇒ R̄^v(c_p) = R^v(b_q) > R^v(b_x)   (11.10)

The last inequality follows from the original assumption that b_x is the variable of minimum reliability. Finally, if c_p is a member of some other fundamental cutset K_1(b_s), then

K̄_1(b_s) = K_1(b_s) ⊕ K_1(b_q)   (11.11)

From condition 3, it follows that:

R̄^v(b_s) > R^v(b_x)   (11.12)

On the other hand, if c_p is not a member of cutset K_1(b_s), then the reliability of b_s remains unchanged, because K̄_1(b_s) = K_1(b_s). Thus, the reliability of every variable corresponding to T̄ is strictly larger than the minimum reliability corresponding to T. Therefore,

Min_i R̄^v(b_i) > R^v(b_x)   (11.13)

proving that the network reliability has increased. Q.E.D.


Remarks: The following observations with respect to Lemma 11.3 can be made:

(1) The second condition ensures that the reliability of variable b_x increases, whereas the third condition ensures that the reliabilities of all other unmeasured variables remain greater than the current network reliability.
(2) The network reliability cannot be improved by placing a sensor on variable b_x. Indeed, to maintain a spanning tree solution, the sensor of some chord of K_1(b_x) should be removed. The reliability of this variable in the new solution will then be equal to R^v(b_x), and thus the network reliability would not change.
(3) If sensor failure probabilities are not equal, then the entering and leaving variables can be chosen in a similar manner, except that instead of checking the cardinality conditions 2 and 3, one must explicitly evaluate the reliabilities.
(4) The converse of Lemma 11.3 is not true; that is, if a branch b_q and chord c_p satisfying the three conditions are not found, it does not imply that the global optimum solution has been obtained. However, it can be viewed as a local optimum. Moreover, if conditions 2 and 3 do not hold as strict inequalities, or if there are two or more variables with minimum reliability, then a degenerate solution is obtained, that is, the network reliability remains the same, and this may be true for other solutions as well. Both these problems are handled using heuristic strategies.

The algorithm for the sensor network design procedure is described next.

Algorithm

The algorithm developed by Ali and Narasimhan (1993) is called SENNET. A flowchart is given in Figure 11.6.

• Step 1. Generate a spanning tree of the process graph.
• Step 2. Generate all the fundamental cutsets corresponding to the spanning tree.
• Step 3. Obtain K_max, the set of fundamental cutsets that have the maximum cardinality. The branches corresponding to these fundamental cutsets are the variables with minimum reliability.
• Step 4. Choose one element of K_max, for example K_i(b_x), that has not been examined before. Mark K_i(b_x) as examined and go to step 5. If no unmarked fundamental cutset exists, go to step 10.
• Step 5. Choose another fundamental cutset K_j(b_q) that has not been examined before. Obtain the ring sum K_i(b_x) ⊕ K_j(b_q). Mark K_j(b_q) as examined and go to step 6. If no such K_j(b_q) exists, go to step 4.

FIGURE 11.6. Flowchart of algorithm SENNET.


• Step 6. If |K_i(b_x) ⊕ K_j(b_q)| ≤ |K_i(b_x)|, go to step 7. Otherwise, go to step 5.

• Step 7. Choose a chord c_p. To do this, compute the set of all elements of K_i(b_x) not present in K_i(b_x) ⊕ K_j(b_q). This set is denoted K_i(b_x) − {K_i(b_x) ⊕ K_j(b_q)}. Mark one element (chord) c_p as examined and go to step 8. If no such chord exists, go to step 5.
• Step 8. If all fundamental cutsets K_m(b_s) containing chord c_p satisfy |K_m(b_s) ⊕ K_j(b_q)| ≤ |K_i(b_x)|, then go to step 9. Otherwise, go to step 7.
• Step 9. Branch b_q is selected as the new measured variable, and chord c_p is chosen to enter the spanning tree, that is, to become unmeasured. Update all fundamental cutsets to obtain the fundamental cutsets of the new tree, and go to step 3.
• Step 10. Stop if this step, called "hill climbing," has already been executed 10 times. Otherwise, store the current solution. Let branch b_q correspond to the most recently examined cutset K_j(b_q) and c_p be any arbitrary element of the set K_i(b_x) − {K_i(b_x) ⊕ K_j(b_q)}. Select b_q as the leaving variable. Go to step 3.

Handling Degeneracy

Note that in the algorithm the inequalities are weak and may lead to successive solutions that do not improve the network reliability (degeneracy). In fact, it is theoretically possible to be caught in an infinite loop. Degeneracy is, therefore, broken by random selection of a set K_i(b_x) from set K_max in step 4 and random selection of chords c_p from the set K_i(b_x) − {K_i(b_x) ⊕ K_j(b_q)} in step 7. Ali and Narasimhan (1993) claim that this heuristic method has worked well with the problems they tested.

Hill Climbing Procedure

Step 10 in the above algorithm is implemented as an attempt to escape a local optimum and reach the global optimum solution. If a situation is reached in which any choice of entering/leaving variable worsens the network reliability, then the current best solution is perturbed, allowing the network reliability to decrease, and the algorithm is repeated. This is similar to "hill climbing" strategies used in optimization.
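The ring sums in steps 5-8 are ordinary symmetric differences of edge sets. A minimal sketch of the exchange test, using the fundamental cutsets that appear in Example 11.2 below (the function and variable names are mine, not from the reference):

```python
def ring_sum(cutset_a, cutset_b):
    """Ring sum of two cutsets: the edges in exactly one of them."""
    return cutset_a ^ cutset_b  # symmetric difference of Python sets

# Fundamental cutsets for the spanning tree {S2, S3, S5, S6, S8}
K_S8 = {"S4", "S7", "S8"}
K_S6 = {"S1", "S4", "S6", "S7"}

# Step 5: candidate ring sum between K1(S6) and K1(S8)
new_cutset = ring_sum(K_S6, K_S8)
print(sorted(new_cutset))        # ['S1', 'S6', 'S8']

# Step 6: accept only if the cardinality does not increase
assert len(new_cutset) <= len(K_S6)

# Step 7: chord candidates of K1(S6) eliminated by the exchange
chord_candidates = K_S6 - new_cutset
print(sorted(chord_candidates))  # ['S4', 'S7']
```

This reproduces the exchange that Example 11.2 performs in its first iteration.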
Updating the Fundamental Cutsets

In step 9, the fundamental cutsets of the new spanning tree can simply be obtained by updating the fundamental cutsets of the previous spanning tree as follows. The fundamental cutset of the entering variable c_p is given by K_j(b_q). If a fundamental cutset K_m(b_s) contains c_p as a member, then the updated fundamental cutset is given by the ring sum of K_m(b_s) and K_j(b_q). All other fundamental cutsets remain unaltered.

Example 11.2

The algorithm is illustrated by designing a sensor network for the simplified ammonia plant shown in Figure 11.1. As established, the minimum number of sensors required for this plant is three. Let the sensor failure probabilities for all edges be 0.1.

• Step 1. Let the initial spanning tree solution consist of the unmeasured edges (branches) S2, S3, S5, S6 and S8.
• Step 2. The fundamental cutsets are:

K_1(S8) = {S4, S7, S8}
K_1(S6) = {S1, S4, S6, S7}
K_1(S5) = {S1, S4, S5}
K_1(S2) = {S1, S2}
K_1(S3) = {S1, S3}

• Steps 3 and 4. The set K_max contains only one element, the maximum-cardinality cutset K_1(S6), in which variable S6 is determined by using three measurements. This is the variable that has the minimum reliability, and we attempt to improve its reliability. Thus, b_x = S6 and K_1(b_x) = {S1, S4, S6, S7}.
• Steps 5 and 6. The ring sum of K_1(b_x) with the other fundamental cutsets gives the following cutsets:

K_1(S8) ⊕ K_1(b_x) = {S1, S6, S8}         Unmeasured: {S6, S8}
K_1(S5) ⊕ K_1(b_x) = {S5, S6, S7}         Unmeasured: {S5, S6}
K_1(S2) ⊕ K_1(b_x) = {S2, S4, S6, S7}     Unmeasured: {S2, S6}
K_1(S3) ⊕ K_1(b_x) = {S3, S4, S6, S7}     Unmeasured: {S3, S6}

Among the above cutsets, only the first two have cardinality smaller than that of K_1(b_x). The first is arbitrarily chosen. Thus, b_q = S8 and K_1(b_q) = {S4, S7, S8}. Therefore, K_1(b_x) ⊕ K_1(b_q) = {S1, S6, S8}.
• Steps 7 and 8. We now compute the set of all elements of K_1(b_x) not present in K_1(b_x) ⊕ K_1(b_q), that is, K_1(b_x) − {K_1(b_x) ⊕ K_1(b_q)} = {S4, S7}. Any of the chords in this set may be chosen. Thus, chord S4 is selected, that is, c_p = S4.
• Step 9. The branch b_q = S8 leaves the tree and becomes measured, whereas chord S4 enters the tree and becomes unmeasured. The new spanning tree obtained through the elementary tree transformation is {S2, S3, S4, S5, S6}.

236

DESIGN OF RELIABLE LINEAR NONREPAIRABLE SENSOR NETWORKS

The fundamental cutsets for this new spanning tree are:

K_1(S4) = {S4, S7, S8}
K_2(S6) = {S1, S6, S8}
K_2(S5) = {S1, S5, S7, S8}
K_2(S2) = {S1, S2}
K_2(S3) = {S1, S3}

Note that the reliability of variable S6 has improved. However, the network reliability remains the same, because cutset K_2(S5) contains three chords (S1, S7, S8). This is due to the fact that for this cutset, condition 3 of Lemma 11.3 holds only as a weak inequality. One can proceed with the next iteration, in which S6 is the leaving variable and S1 is the entering variable, to give the spanning tree solution {S1, S2, S3, S4, S5}. In the subsequent iteration, there is no choice of entering variables, and a local optimum solution is obtained. The hill climbing technique is applied next. The network reliability does not improve in 10 successive attempts, and the solution {S1, S2, S3, S4, S5} is adopted as optimal. This solution corresponds to x_M = {S6, S7, S8}.

To compare this solution with the global optimum, Ali and Narasimhan implemented an algorithm for explicit enumeration of all spanning trees and obtained the best solutions. The algorithm used was taken from Nijenhuis and Wilf (1978). The process graph has 32 spanning trees, of which only 8 are globally optimal, giving a network reliability of 0.81. The optimal solutions generated by the algorithm for five different initial starting solutions are shown in Table 11.1. All the solutions obtained by the algorithm are also globally optimal, indicating that the algorithm is robust, converging for each choice of the initial solution. A larger example, based on the steam-metering network of a methanol plant (Serth and Heenan, 1986), was also presented by Ali and Narasimhan (1993).

TABLE 11.1. Initial Data and Results of Ammonia Plant Design.

Initial Solution            Optimal Solution
{S2, S3, S5, S7, S8}        {S1, S2, S3, S4, S5}
{S2, S3, S5, S6, S7}        {S2, S3, S4, S7, S8}
{S2, S3, S4, S6, S7}        {S1, S2, S4, S7, S8}
{S1, S3, S4, S5, S6}        {S1, S2, S4, S7, S8}
{S1, S2, S3, S5, S8}        {S1, S2, S3, S6, S8}


This example consists of 12 nodes (units) and 28 edges (streams) and will be used later in this chapter to illustrate other methods (Figure 11.9). In this case, there are over 1.2 million spanning trees, of which only 0.01% are globally optimal. Ali and Narasimhan (1993) also attempted to solve the problem for unequal sensor service reliabilities, reporting the need for hill climbing.
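In a nonredundant network, each unmeasured branch is estimated through the chords of its fundamental cutset, so its reliability is the product of those sensors' reliabilities, and the network reliability is the minimum over all branches. A minimal sketch for the optimal ammonia-plant solution x_M = {S6, S7, S8}, with the fundamental cutsets read off the full cutset list (Table 11.2); the function names are mine:

```python
from math import prod

def network_reliability(fundamental_cutsets, measured, sensor_reliability):
    """Minimum over branches of the product of chord sensor reliabilities."""
    branch_rel = {}
    for branch, cutset in fundamental_cutsets.items():
        chords = cutset - {branch}
        assert chords <= measured, "every chord must carry a sensor"
        branch_rel[branch] = prod(sensor_reliability[c] for c in chords)
    return min(branch_rel.values())

# Fundamental cutsets of the spanning tree {S1, ..., S5}, chords {S6, S7, S8}
cutsets = {
    "S1": {"S1", "S6", "S8"},
    "S2": {"S2", "S6", "S8"},
    "S3": {"S3", "S6", "S8"},
    "S4": {"S4", "S7", "S8"},
    "S5": {"S5", "S6", "S7"},
}
rel = {s: 0.9 for s in ("S6", "S7", "S8")}  # failure probability 0.1
print(round(network_reliability(cutsets, {"S6", "S7", "S8"}, rel), 2))  # 0.81
```

Every branch here is estimated from exactly two sensors, which reproduces the globally optimal network reliability of 0.81 quoted above.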

REDUNDANT NETWORKS FEATURING MAXIMUM RELIABILITY

The procedure presented in the previous section provides solutions for nonredundant networks with a degree of estimability of one for all variables, in other words, for minimal networks (Chapter 5). However, redundant systems have several desired properties. First, one can perform data reconciliation, and, therefore, the accuracy of the estimates obtained is improved. Second, the estimation reliability of any redundant variable is higher, because more cutsets are instrumented. Finally, redundant systems are the only ones capable of detecting gross errors (biased instruments and/or leaks).

To address redundant networks, Ali and Narasimhan (1995) presented a model where a fixed number of sensors, larger than the minimum, is used. The model relies on:

a. First choosing a spanning tree with the measured variables located in its chords. The remaining measured variables are added to some of the branches of this spanning tree.
b. An evolutionary strategy, based on choosing entering and leaving variables, as in the case of nonredundant networks.

As in the case of nonredundant networks, the leaving variables are unmeasured variables that become measured; conversely, the entering variables are measured variables that become unmeasured. We recall that in SENNET, the entering and leaving variables are chosen in such a way that a spanning tree structure is maintained. For a redundant network, however, the only condition that needs to be met is that the estimability of all variables is not zero. In other words, unmeasured variables should always be observable.

In addition, in SENNET the reliability is computed using the fundamental cutsets. It was shown in Chapter 5 that all the estimable cutsets containing variable S_i correspond to all the indirect ways in which the variable can be estimated. Thus, in redundant networks, all the estimable cutsets containing the variable of interest should be used to compute the reliability.
Finally, in contrast with nonredundant networks, where the lowest reliability corresponds to an unmeasured variable, the lowest reliability could correspond either to a measured variable or to an unmeasured variable.
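One simplified reading of "using all estimable cutsets" is to take a variable's reliability as the probability that at least one of its estimable cutsets has all of its sensors working. The sketch below evaluates that probability by brute-force enumeration of sensor states; it ignores cascaded estimation through other unmeasured variables, and all names and the small example are mine, not the Chapter 10 formulation:

```python
from itertools import product

def estimation_reliability(cutset_sensor_sets, work_prob):
    """P(at least one cutset has all of its sensors working),
    by enumerating all up/down states of the involved sensors."""
    sensors = sorted({s for cs in cutset_sensor_sets for s in cs})
    total = 0.0
    for state in product([True, False], repeat=len(sensors)):
        up = {s for s, ok in zip(sensors, state) if ok}
        if any(cs <= up for cs in cutset_sensor_sets):
            p = 1.0
            for s, ok in zip(sensors, state):
                p *= work_prob[s] if ok else 1.0 - work_prob[s]
            total += p
    return total

# Hypothetical variable estimable through sensor sets {a, b} or {a, c}
p_work = {s: 0.9 for s in "abc"}
r = estimation_reliability([{"a", "b"}, {"a", "c"}], p_work)
print(round(r, 3))  # 0.891
```

Here the two cutsets share sensor a, so the result 0.9 × (1 − 0.1 × 0.1) = 0.891 is higher than either single cutset alone (0.81) but lower than if the cutsets were independent.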


Determination of the Leaving Variable

Leaving variables are, as defined earlier, unmeasured variables that become measured. Let S_i be the variable having the least reliability in some iteration. We consider two cases:

(1) S_i is unmeasured. In this case, the estimable cutsets, which are the only cutsets contributing to the reliability of S_i, contain all measured variables. Thus there are two ways of improving the reliability of S_i: making S_i the leaving variable, or finding a cutset that contains S_i and only one other unmeasured variable S_t, so that S_t is chosen as the leaving variable. The latter is not possible for the type of redundant networks analyzed here, because all variables are observable by construction and, therefore, all cutsets are estimable.

(2) S_i is measured. Then the leaving variable should be such that, by making it measured, a new estimable cutset containing S_i is formed. This is accomplished by choosing the leaving variable from a cutset that does not contain any other unmeasured edge besides the leaving variable; otherwise, the introduction of a measurement on the leaving variable would not have any effect.

These observations prompt the introduction of some notation and suggest a procedure for selecting the leaving variable. We first introduce the notation.

Definition: An almost redundant cutset of order one containing variable S_i, denoted K_s^(1)(S_t, S_i), is an estimable cutset containing S_i and only one unmeasured variable S_t.

Remark: S_i can be unmeasured, in which case the cutset is denoted K_s^(1)(S_i, S_i).

Definition: The set of all almost redundant cutsets of order one containing S_i is denoted by Φ^(1)(S_i).

Thus, if S_i is measured, a variable S_t belonging to K_j^(1)(S_t, S_i) can be chosen as a leaving variable candidate. The number of ways of indirectly estimating S_i may increase by more than one if the cardinality of Φ^(1)(S_i) is larger than one. If there are many candidates for the leaving variable, the one that gives the maximum number of additional ways of indirectly estimating S_i should be selected first. If S_i is an unmeasured variable, then it can itself be a leaving variable candidate. In the algorithm implementation presented later, if S_i is unmeasured, it is selected as the first choice of leaving variable. Thus, the set of leaving variable candidates, Lc, is defined by

Lc(S_i) = { S_t | S_t ∈ K_j^(1)(S_t, S_i) }        (11.14)
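Under this definition, Lc(S_i) can be read directly off the cutset list: from every cutset that contains S_i with exactly one unmeasured member, collect that unmeasured member. A sketch using the ammonia-plant cutsets of Table 11.2 with x_M = {S1, S4, S5, S6}, as in Example 11.3 later in this section (the function name is mine):

```python
def leaving_candidates(all_cutsets, measured, target):
    """Lc(Si): the single unmeasured member of each almost redundant
    cutset of order one that contains the target variable."""
    candidates = set()
    for cutset in all_cutsets:
        if target not in cutset:
            continue
        unmeasured = cutset - measured
        if len(unmeasured) == 1:  # almost redundant, order one
            candidates |= unmeasured
    return candidates

# Cutsets of the ammonia network that contain S6 (from Table 11.2)
cutsets = [
    {"S5", "S6", "S7"}, {"S2", "S6", "S8"}, {"S3", "S6", "S8"},
    {"S1", "S6", "S8"}, {"S1", "S4", "S6", "S7"},
    {"S4", "S5", "S6", "S8"}, {"S2", "S4", "S6", "S7"},
    {"S3", "S4", "S6", "S7"},
]
x_M = {"S1", "S4", "S5", "S6"}
print(sorted(leaving_candidates(cutsets, x_M, "S6")))  # ['S7', 'S8']
```

This reproduces the candidate set Lc = {S7, S8} obtained in Example 11.3.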


Determination of the Entering Variable

To keep the number of sensors fixed, once a leaving variable has been chosen, a sensor has to be removed from a measured variable. The entering variable should, therefore, be chosen so that the network reliability does not decrease. Because there are multiple ways of estimating a variable, it is not possible to predict which sensor removal will meet this condition. Explicit enumeration, where each sensor is removed one at a time and the reliabilities are computed, is, therefore, the first option. The following observations are used to reduce the search space and, hence, decrease the computational burden.

(1) The objective is to improve the reliability of the least reliable variable S_i. Because the leaving variable is chosen with the objective of providing at least one additional way of estimating S_i, an entering variable that is a member of every cutset belonging to Φ^(1)(S_i) would nullify this gain. Excluding such variables ensures that there will be at least one additional new way of indirectly estimating variable S_i in the new sensor network. The set of entering variable candidates is, therefore, given by

Le = x_M − ⋂_{s ∈ Φ^(1)(S_i)} K_s^(1)(S_t, S_i)        (11.15)

(2) As pointed out earlier, if the number of ways of estimating a variable increases, then the reliability of the variable also increases. Furthermore, if the cardinality of a cutset containing variable S_i is high, then its contribution to the reliability of S_i is small. In other words, cutsets with large cardinality should be avoided, and variables belonging to such cutsets should be chosen as entering variables, that is, to become unmeasured. However, variables belong to different cutsets at the same time, a fact that makes the selection difficult.

The above observations prompt a few definitions and the development of a measure to rank the entering candidates. We present the definitions first:

Definition: An almost redundant cutset of order two containing variable S_i and measured variable S_j, denoted K_s^(2)(S_t, S_i, S_j), is an estimable cutset containing S_i, S_j and only one unmeasured variable S_t.

Remark: Variable S_i could be unmeasured, in which case S_t = S_i.

Definition: The set of all almost redundant cutsets of order two containing S_i and S_j is denoted by Φ^(2)(S_i, S_j).

We are trying to define a quantity Ω(S_i, S_j) representing the potential of measured variable S_j to increase the reliability of S_i. To introduce such a measure intuitively, suppose two candidate variables, each participating in only one cutset with S_i. Suppose further that these cutsets have a different number of elements. Removal of any such candidate lowers the reliability of S_i, an effect that should be countered by the leaving variable. Thus, because low cardinality implies high reliability, we should choose as entering variable the one with lower cardinality. Thus, a high value of Ω(S_i, S_j) will mean that the removal of S_j, that is, making it unmeasured, will have a small effect on the reliability of S_i. However, variables can be part of more than one cutset. Thus, Ω(S_i, S_j) is defined as the ratio of the number of all the variables participating in all almost redundant cutsets K_s^(2)(S_t, S_i, S_j) to the number of cutsets involved. In other words, Ω(S_i, S_j) is given by the sum of all the cardinalities of K_s^(2)(S_t, S_i, S_j) divided by the cardinality of Φ^(2)(S_i, S_j) (Ali and Narasimhan, 1995):

Ω(S_i, S_j) = [ Σ_{∀K_s^(2) ∈ Φ^(2)(S_i, S_j)} |K_s^(2)(S_t, S_i, S_j)| ] / |Φ^(2)(S_i, S_j)|        (11.16)
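Equation (11.16) is simply the average cardinality of the order-two almost redundant cutsets containing S_i and S_j. A sketch, checked against the values computed later in Example 11.3, Ω(S6, S1) = 3.5, Ω(S6, S4) = 4 and Ω(S6, S5) = 3.5 (function names are mine):

```python
def omega(all_cutsets, measured, s_i, s_j, absent_value=1e9):
    """Eq. (11.16): average cardinality of the almost redundant
    order-two cutsets containing both s_i and s_j."""
    phi2 = [cs for cs in all_cutsets
            if s_i in cs and s_j in cs and len(cs - measured) == 1]
    if not phi2:                 # s_j absent from every such cutset:
        return absent_value      # its removal cannot affect s_i
    return sum(len(cs) for cs in phi2) / len(phi2)

# Cutsets of the ammonia network containing S6 (Table 11.2),
# with x_M = {S1, S4, S5, S6} as in Example 11.3
cutsets = [
    {"S5", "S6", "S7"}, {"S2", "S6", "S8"}, {"S3", "S6", "S8"},
    {"S1", "S6", "S8"}, {"S1", "S4", "S6", "S7"},
    {"S4", "S5", "S6", "S8"}, {"S2", "S4", "S6", "S7"},
    {"S3", "S4", "S6", "S7"},
]
x_M = {"S1", "S4", "S5", "S6"}
for s_j in ("S1", "S4", "S5"):
    print(s_j, omega(cutsets, x_M, "S6", s_j))  # S1 3.5, S4 4.0, S5 3.5
```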

Finally, to cover the case in which a variable is not present in any of the cutsets of Φ^(2)(S_i, S_j), its measure Ω(S_i, S_j) is set to some high value; that is, its effect on the reliability of S_i is zero. A high value of Ω(S_i, S_j) implies that S_j occurs in fewer cutsets through which S_i can be estimated in the new sensor network or, conversely, that the cardinality of such cutsets is large. Thus, the variables that have the smallest measure should be chosen to be removed.

The measure developed is nonetheless only a guideline. Choosing the variable with the lowest entering score does not guarantee an increase of the network reliability. Therefore, the reliabilities of all variables are computed, and if the minimum reliability over all variables is greater than the current network reliability, then the choice holds. Otherwise, the next candidate is considered.

Algorithm

Based on the preceding discussion, the steps of the iterative improvement algorithm are given below. Ali and Narasimhan (1995) called this algorithm GSENNET. A flowchart describing the algorithm is presented in Figure 11.7.

• Step 1. Obtain a spanning tree and generate an initial solution by placing sensors on all the chords and the remaining sensors on some of the branches of the spanning tree. Compute the network reliability.
• Step 2. Obtain all cutsets of the process graph.
• Step 3. Find the set of edges R_min that have the least reliability.
• Step 4. Select an unmarked element S_i ∈ R_min and mark it. If no unmarked edge exists, then stop.

FIGURE 11.7. Flowchart of algorithm GSENNET.


• Step 5. Obtain the set of leaving variable candidates Lc using Equation (11.14). Arrange the variables in set Lc in decreasing order based on the number of cutsets of Φ^(1)(S_i) in which they occur.
• Step 6. Pick the next unmarked edge S_q ∈ Lc and mark it. If an unmarked element does not exist, go to step 4. Otherwise, select S_q as a leaving variable and go to step 7.
• Step 7. Find the set Le of entering variable candidates as shown in Equation (11.15). Compute their measures using Equation (11.16) and arrange them in order of decreasing measures.
• Step 8. If an unmarked element in Le does not exist, go to step 6. Otherwise, choose the next unmarked element S_p ∈ Le and mark it.
• Step 9. Evaluate the network reliability. If the network reliability improves, accept S_p as an entering variable and go to step 3. If the network reliability does not improve, go to step 8.

The GSENNET algorithm does not guarantee that a globally optimal solution will be reached, because in any iteration it only searches for a better neighboring solution that differs from the current solution in the placement of one sensor. If a better neighboring solution cannot be obtained, then this is similar to obtaining a locally optimal solution.

Example 11.3

We illustrate the algorithm for the sensor network design of a simplified ammonia plant (Figure 11.1). For simplicity, the sensor failure probabilities for all edges are 0.1.

Step 1. All the cutsets are given in Table 11.2.

Step 2. The initial solution is the spanning tree {S1, S2, S3, S7, S8}, in which the measured edges are S4, S5 and S6. Because one more sensor needs to be installed, edge S1 from this spanning tree is selected to be measured as well. Thus, the initial solution is x_M = {S1, S4, S5, S6}. Computing the reliabilities (Chapter 10) gives the values shown in Table 11.3.

TABLE 11.2. All Cutsets of Ammonia Plant Network.

No.   Streams        No.   Streams          No.   Streams
1     S5 S6 S7        7    S1 S6 S8         13    S1 S4 S5
2     S1 S2           8    S1 S4 S6 S7      14    S2 S4 S6 S7
3     S1 S3           9    S1 S5 S7 S8      15    S3 S4 S6 S7
4     S2 S3          10    S2 S4 S5         16    S3 S4 S5
5     S2 S6 S8       11    S4 S7 S8         17    S2 S5 S7 S8
6     S3 S6 S8       12    S4 S5 S6 S8      18    S3 S5 S7 S8


TABLE 11.3. Reliabilities of the Initial Solution.

Stream   R_i^v(t)     Stream   R_i^v(t)
S1       0.981        S5       0.981
S2       0.981        S6       0.900
S3       0.981        S7       0.972
S4       0.981        S8       0.972

Step 3. Because edge S6 has the smallest reliability, R_min = {S6}. Note that although edge S6 is measured, it has the smallest reliability among all variables.

Steps 4 and 5. R_min contains only one element. We mark it and go to step 6.

Steps 6 and 7. Using Table 11.2, we obtain the set Φ^(1)(S6), which contains the following four cutsets:

K_1^(1)(S7, S6) = {S5, S6, S7}         Unmeasured: {S7}
K_2^(1)(S8, S6) = {S1, S6, S8}         Unmeasured: {S8}
K_3^(1)(S7, S6) = {S1, S4, S6, S7}     Unmeasured: {S7}
K_4^(1)(S8, S6) = {S4, S5, S6, S8}     Unmeasured: {S8}

The set of leaving variable candidates is given by the set of unmeasured streams belonging to the above cutsets of Φ^(1)(S6), that is, Lc = {S7, S8}. Note that each of these variables occurs in only two cutsets of Φ^(1)(S6). We choose edge S7 as the leaving variable. This gives two more cutsets through which the mass flow in edge S6 can be estimated.

Step 8. Because the intersection of all sets of Φ^(1)(S6) is just S6, the set of entering variable candidates obtained by using Equation (11.15) is Le = {S1, S4, S5}. The elements of Φ^(2)(S6, S1) are the two following cutsets: K_1^(2)(S8, S6, S1) = {S1, S6, S8} and K_2^(2)(S7, S6, S1) = {S1, S4, S6, S7}. Therefore, |Φ^(2)(S6, S1)| = 2. In addition, the total number of variables in these two cutsets is 7. Thus, Ω(S6, S1) = 3.5. In turn, the set {S6, S4} is present in two cutsets, which have a total of eight variables, and the set {S6, S5} is present in two cutsets totaling seven variables. Thus, the measures corresponding to the other two entering candidates are Ω(S6, S4) = 4 and Ω(S6, S5) = 3.5.

If we select edge S1 as the entering variable, the new set of measured edges is x_M = {S4, S5, S6, S7}, and the network reliability actually decreases from 0.900 to 0.882, despite the low value of its entering measure. The selection of edge S4 as the entering variable does not improve the network reliability either; when edge S5 is chosen as the entering variable, the network reliability improves from 0.900 to 0.964. Therefore, the choice of edge S5 as the entering variable holds. On further iterations, the network reliability does


FIGURE 11.8. Simplified HDA process network.

not improve. This shows that a locally optimal solution has been achieved by the algorithm. Comparing this solution to the one obtained by explicit enumeration, Ali and Narasimhan (1995) found that it is also the global solution.

Ali and Narasimhan (1995) showed two more examples of the application of this algorithm for the simplified toluene hydrodealkylation (HDA) process (Douglas, 1988). This process is shown in Figure 11.8. It has 8 nodes and 14 edges, and the minimum number of sensors is 7. Results of an exhaustive search are shown in Table 11.4, with the results of the algorithm included in the last two columns. Less than 1% of the feasible solutions are optimal. GSENNET is capable of finding these solutions efficiently. In cases where the global optimum

TABLE 11.4. Optimal Solutions for the HDA Process (All Sensors with Sensor Failure Probability = 0.1).

No. of    No. of Feasible   No. of Optimal   Global        Optimal Solution            Optimal Reliability
Sensors   Solutions         Solutions        Reliability   (GSENNET)                   (GSENNET)
7         992               55               0.729         S1 S5 S7 S9 S12 S13 S14     0.729
8         1,779             1                0.889         S2 S5 S6 S8 S10 S14         0.881
9         1,614             1                0.955         S2 S5 S7 S8 S10             0.953
10        929               1                0.975         S1 S2 S7 S8                 0.975
11        358               1                0.986         S1 S6 S10                   0.986
12        91                1                0.989         S1 S6                       0.989
13        13                1                0.997         S4                          0.997
14        1                 1                0.998         —                           0.998


TABLE 11.5. Failure Probabilities for the HDA Process.

Sensor   Failure Probability   Sensor   Failure Probability   Sensor   Failure Probability
S1       0.32                  S6       0.23                  S11      0.34
S2       0.16                  S7       0.17                  S12      0.24
S3       0.25                  S8       0.10                  S13      0.28
S4       0.37                  S9       0.14                  S14      0.12
S5       0.09                  S10      0.27

is not obtained by GSENNET, the results differ from the optimum by less than 0.5%. An example with unequal sensor failure probabilities has also been solved. Table 11.5 shows the values of these failure probabilities, and Table 11.6 shows the results.

TABLE 11.6. Optimal Solutions for the HDA Process (Unequal Sensor Failure Probabilities).

No. of    No. of Feasible   No. of Optimal   Global        Optimal Solution            Optimal Reliability
Sensors   Solutions         Solutions        Reliability   (GSENNET)                   (GSENNET)
7         992               2                0.443         S3 S4 S5 S6 S12 S13 S14     0.433
8         1,779             1                0.693         S1 S3 S4 S6 S10 S11         0.693
9         1,614             1                0.837         S1 S4 S6 S11 S13            0.833
10        929               1                0.897         S1 S4 S6 S11                0.881
11        358               1                0.945         S1 S4 S6                    0.945
12        91                1                0.960         S4 S6                       0.960
13        13                1                0.967         S4                          0.967
14        1                 1                0.967         —                           0.967

REDUNDANT NETWORKS FEATURING MAXIMUM RELIABILITY AND HARDWARE REDUNDANCY

The GSENNET algorithm was extended by Ali and Narasimhan (1995) to include hardware redundancy. For simplicity, it is assumed that at most two sensors are used to measure a variable, although the method can easily be generalized to a higher degree of hardware redundancy. The leaving variable candidate set given by Equation (11.14) includes only unmeasured variables. To allow for hardware redundancy, all measured variables that occur in cutsets of Φ^(1)(S_i) and are currently measured by only one


sensor can also be leaving variable candidates, because these can also increase the reliability of variable S_i. In the case of entering variables, the candidate set given by Equation (11.15) also includes all measured variables that are currently measured by more than one sensor. Both of these modifications increase the search space, leading to a corresponding increase in computational requirements. Because it has been observed that hardware redundancy does not increase the reliability as much as spatial redundancy, the first choice of a leaving variable is an unmeasured variable. Similarly, the first choice of an entering variable is one that is measured by more than one sensor.

In addition to the preceding modifications of the leaving and entering variable candidate sets, the computation of reliability has to be modified slightly to account for variables that are measured by two sensors. If p1 and p2 are the failure probabilities of two sensors measuring a variable, then the probability that both of them fail is p1·p2, assuming that the sensors fail independently. Thus, for computing the reliabilities, the two sensors on a particular variable can be considered equivalent to one sensor with failure probability p = p1·p2.
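That equivalence is straightforward to encode: under independent failures, a variable carrying several sensors behaves like a single sensor whose failure probability is the product of the individual ones (a small sketch; the function name is mine):

```python
from math import prod

def equivalent_failure_probability(failure_probs):
    """Independent sensors on one variable all fail together with
    probability equal to the product of their failure probabilities."""
    return prod(failure_probs)

# Two sensors with failure probability 0.1 each, as in the examples above
p_eq = equivalent_failure_probability([0.1, 0.1])
print(round(p_eq, 6))  # 0.01
```

A duplicated 0.1-failure sensor thus behaves like a single sensor of reliability 0.99, which can then be fed unchanged into the reliability computations of GSENNET.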

REDUNDANT AND RESTRICTED NETWORKS

As pointed out in Chapter 5, some variables may not be important and, therefore, may not need to be observable. Similarly, some variables cannot be measured because of technical and/or economic reasons. In other words, M_R, the set of variables of interest, and M_1, the set of streams where sensors can be placed, are not equal to the whole set of streams M. These networks are called restricted. Vaclavek and Loucka (1976) and later Madron and Veverka (1992) treated such specifications in their measurement placement strategies. In this section, modifications to the GSENNET algorithm introduced by Ali and Narasimhan (1995) are described so that such specifications on variables can be handled. In addition to the sets of measured (x_M) and unmeasured (x_U) variables, the following sets of variables based on the specifications are defined:

• x_U^K ⊆ x_U: set of key but unmeasurable variables
• x_U^NK ⊆ x_U: set of nonkey and unmeasurable variables
• x_M^K ⊆ x_M: set of key and measurable variables
• x_M^NK ⊆ x_M: set of nonkey but measurable variables

In the design of a sensor network, only the reliabilities of key variables (whether measurable or not) are of concern. As before, we attempt to maximize the minimum reliability among all key variables.


Well-Posedness

We first establish the conditions that these sets have to fulfill so that the problem is not ill-posed. These conditions are:

(1) The sets x_U^K, x_U^NK, x_M^K and x_M^NK are disjoint.
(2) All variables should be classified, that is, x = x_M ∪ x_U, x_U = x_U^K ∪ x_U^NK and x_M = x_M^K ∪ x_M^NK.
(3) The subgraph formed by the variables (streams) in x_U should not contain any cycle containing one or more variables from x_U^K, because in such a case these would be unobservable variables.
(4) The number of sensors selected should be such that observability of the required or important variables is feasible.

Number of Sensors for Restricted Networks

We now focus on a procedure for determining the minimum number of sensors that can ensure observability. If one wants to estimate all variables in a process, the minimum number of sensors n_min is equal to n_S − n_U + 1. If one is able to determine that certain streams can be left unobservable, then they can be eliminated by the merging procedure that was presented in Chapter 5 to determine the connectivity of systems and to obtain the redundant network. In this section, the conditions under which different edges (streams) can be eliminated are determined, and a new expression for the minimum number of sensors is presented.

— All unmeasurable and nonkey variables x_U^NK will not be measured and are not of interest. They can be eliminated.

— All measurable and key variables x_M^K can belong to two types of cutsets:
  • Cutsets containing some variables from set x_U^NK. These cutsets are useless, because they contain a variable that cannot be measured. Measuring variables from x_M^K will at most contribute to the reliability of an unmeasured variable that is of no interest, and they can contribute to their own reliability. Because they will not contribute to the reliability of any other variable, they can be eliminated from the graph. We denote the number of these variables n_S,1.
  • Cutsets containing only key variables. Measuring such variables may contribute to the reliability of other variables of interest.

— All measurable and nonkey variables x_M^NK can belong to two types of cutsets:
  • Cutsets containing some variables from set x_U^NK. These cutsets are useless, because they contain a variable that cannot be measured. Measuring variables from x_M^NK will at most contribute to the reliability of unmeasured


variables that are of interest. Thus, these variables can be eliminated from consideration, because they cannot possibly contribute to the reliability of any key variable. To identify such variables, a spanning tree of the reduced network (obtained after eliminating nonkey unmeasurable and key measurable variables in the preceding steps) is constructed, choosing all edges from set x_U^K first. This is possible because the edges of set x_U^K do not form cycles. Then, one edge at a time from set x_M^NK is tested. If it forms a cycle containing only edges of set x_M^NK that are in the tree, then this edge is marked. It will be a chord of the spanning tree, but it does not need to be measured. This can be explained by the fact that cutsets containing the variable under analysis are formed by picking at least one other element from each cycle to which the variable belongs. If one such cycle contains only variables of no interest, then all cutsets will contain at least one variable of no interest, and therefore measuring the variable under analysis is futile. After all edges from set x_M^NK have been considered, the spanning tree can be completed by using edges from set x_M^K. We denote by n_S,2 the number of edges marked.
  • Cutsets where all the other variables are key variables. Measuring such variables may contribute to the reliability of other variables of interest.

Then, the number of sensors required to observe all key variables is given by

n_min = n_S^r − n_U^r + 1 + n_S,1 − n_S,2        (11.17)

where n_S^r and n_U^r are the number of edges and nodes of the reduced graph, respectively. This reduction method also provides the opportunity to reduce the computational requirements for the design of networks with a number of sensors larger than the minimum established above. The algorithm for redundant networks (GSENNET) can be applied to the reduced process graph using n_min − n_s,1 sensors to obtain their optimal locations. The remaining n_s,1 sensors are placed on the edges of the set x_M^K that were eliminated from the original graph.

Modifications to GSENNET

The changes needed in GSENNET to address the existence of unmeasurable and nonkey variables are:

• Only the reliabilities of important variables are computed at each iteration.
• Variables that belong to the set x_U^K are not included in the leaving variable candidate set Lc. This guarantees that in the final solution, no sensors are placed on such variables.
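Equation (11.17) is a simple count, and it can be checked mechanically. The sketch below is only an illustration (the function name is made up); the numbers are those of the steam-metering example discussed next, whose reduced graph has 22 edges and 8 nodes with n_s,1 = n_s,2 = 0:

```python
def min_sensors_restricted(n_s_red, n_u_red, n_s1, n_s2):
    """Equation (11.17): n_min = n_S^r - n_U^r + 1 + n_s,1 - n_s,2."""
    return n_s_red - n_u_red + 1 + n_s1 - n_s2

# Reduced steam-metering graph: 22 edges, 8 nodes, no eliminated
# key-measurable edges (n_s,1 = 0), no marked nonkey chords (n_s,2 = 0).
print(min_sensors_restricted(22, 8, 0, 0))  # -> 15
```

With n_s,1 = n_s,2 = 0 the expression collapses to the unrestricted count n_S − n_U + 1 applied to the reduced graph.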

TABLE 11.7. Data for the Constrained Steam Metering System.

x_M (measurable):
  x_M^K (key):      S1 S4 S5 S6 S7 S8 S9 S10 S14 S15 S20 S21 S22 S24 S25 S26 S27 S28
  x_M^NK (nonkey):  S2 S12 S18

x_U (unmeasurable):
  x_U^K (key):      S3 S13 S17
  x_U^NK (nonkey):  S11 S16 S19 S23

FIGURE 11.9. Steam metering network.

FIGURE 11.10. Reduced steam-metering network.


TABLE 11.8. Solution to the Steam-Metering Network.

No. of Sensors | Optimal Solution (x_U) (GSENNET)                   | Global Reliability | Variable with Least Reliability
15             | S3 S5 S10 S11 S12 S13 S14 S16 S17 S18 S19 S20 S23 | 0.430              | S17
16             | S3 S10 S11 S12 S13 S14 S16 S17 S18 S19 S20 S23    | 0.504              | S17
17             | S3 S11 S12 S13 S14 S16 S17 S18 S19 S20 S23        | 0.511              | S17
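A quick consistency check on Table 11.8 (a throwaway sketch, not part of GSENNET): the steam-metering system has 28 streams, so each added sensor must remove exactly one stream from the unmeasured set x_U, and the unmeasurable key streams S3, S13 and S17 must stay unmeasured in every design:

```python
# Unmeasured sets x_U from Table 11.8, keyed by number of sensors.
solutions = {
    15: "S3 S5 S10 S11 S12 S13 S14 S16 S17 S18 S19 S20 S23".split(),
    16: "S3 S10 S11 S12 S13 S14 S16 S17 S18 S19 S20 S23".split(),
    17: "S3 S11 S12 S13 S14 S16 S17 S18 S19 S20 S23".split(),
}

for n_sensors, unmeasured in solutions.items():
    assert len(unmeasured) == 28 - n_sensors        # 28 streams in total
    assert {"S3", "S13", "S17"} <= set(unmeasured)  # x_U^K never measured
print("Table 11.8 is internally consistent")
```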

Illustrative Example

To illustrate the performance of GSENNET in the case where constraints are imposed, the steam-metering network (Serth and Heenan, 1986) is used. This network is shown in Figure 11.9, and the constraints on the variables are shown in Table 11.7. Nodal aggregation to eliminate all variables that are unmeasurable and nonkey results in the reduced graph shown in Figure 11.10. Basically, nodes U4 and U9 are merged with node U5. In addition, nodes U8 and U10 are merged with node U7. As a result, streams S12 and S18 from x_M^NK become part of a loop that cannot contribute to any other variable through balances. Thus, they can be eliminated. None of the edges of x_M^K are eliminated (n_s,1 = 0). The reduced graph contains 8 nodes and 22 edges. The only edge from x_M^NK left is S2, and, therefore, there is no loop. Thus, S2 is not eliminated (n_s,2 = 0). The minimum number of sensors is then 22 − 8 + 1 + 0 − 0 = 15. Results of the design using GSENNET are shown in Table 11.8.

NOMENCLATURE

A_i^s(t): sensor service availability
A_i^v(t): variable estimation availability
b_x: branch x of a spanning tree
c_p: chord p of a spanning tree
K_j: jth cutset of a graph
K_j(i): jth cutset containing variable i
K_s^(1)(S_t, S_i): almost redundant cutset containing S_i and unmeasured variable S_t
K_s^(2)(S_t, S_i, S_j): almost redundant cutset containing S_i, measured variable S_j and unmeasured variable S_t
K_max: set of fundamental cutsets that have the maximum cardinality
Lc: list of candidate leaving variables
Le: list of candidate entering variables
M_1: set of variables where sensors can be placed
M_R: set of key variables for reliability
n_min: minimum number of measured streams to obtain observability of all variables of interest
n_S: number of streams or edges in a graph
n_U: number of units or nodes in a graph
n_S^r: number of streams or edges of a reduced graph
n_U^r: number of units or nodes of a reduced graph
p_i: failure probability of the sensor of variable i
R_i^v(t): variable estimation reliability
R_i^s(t): sensor service reliability
R^v(t): system reliability
R_min: set of streams having the minimum reliability
S_i: stream i
T: spanning tree
U_i: unit i
x: vector of state variables
x_M: set of measured variables
x_U: set of unmeasured variables
x_U^K: set of key and unmeasurable variables
x_U^NK: set of nonkey and unmeasurable variables
x_M^K: set of key and measurable variables
x_M^NK: set of nonkey but measurable variables

Greek Letters

Γ{ }: cardinality function
Φ^(1)(S_i): set of all cutsets containing S_i that have only one unmeasured variable S_t ≠ S_i
Φ^(2)(S_i, S_j): set of cutsets containing edges S_i and S_j containing some unmeasured variable S_t ≠ S_i
Δ(S_i, S_j): potential of measured variable S_j to increase the reliability of the network

REFERENCES

Ali Y. and S. Narasimhan. Sensor Network Design for Maximizing Reliability of Linear Processes. AIChE J., 39, 5, pp. 2237–2249 (1993).
Ali Y. and S. Narasimhan. Redundant Sensor Network Design for Linear Processes. AIChE J., 41, 10 (1995).
Douglas J. Conceptual Design of Chemical Processes. McGraw-Hill, New York (1988).
Kretsovalis A. and R. S. H. Mah. Observability and Redundancy Classification in Generalized Process Networks. I. Theorems. Comp. & Chem. Eng., 12, 7, pp. 671–687 (1988a).


Madron F. and V. Veverka. Optimal Selection of Measuring Points in Complex Plants by Linear Models. AIChE J., 38, 2, pp. 227 (1992).
Nijenhuis A. and H. S. Wilf. Combinatorial Algorithms for Computers and Calculators. Academic Press, New York (1978).
Serth R. and W. Heenan. Gross Error Detection and Data Reconciliation in Steam Metering Systems. AIChE J., 32, pp. 733 (1986).
Vaclavek V. and M. Loucka. Selection of Measurements Necessary to Achieve Multicomponent Mass Balances in Chemical Plants. Chem. Eng. Sci., 31, pp. 1199–1205 (1976).

CHAPTER 12

Design of Reliable Bilinear Nonrepairable Sensor Networks

INTRODUCTION

This chapter is devoted to extending the concepts presented in Chapter 11 to the design of reliable bilinear sensor networks. Requirements of accuracy are not considered.

BILINEAR MULTICOMPONENT SYSTEMS

We now present the method developed by Ali and Narasimhan (1996). The following assumptions are made:

(1) Only minimal (nonredundant) sensor networks are considered, that is, networks in which only one way of estimating each variable exists. This is equivalent to requesting estimability of order one for nonlinear systems.
(2) All concentrations/temperatures of all components in all streams are nonzero. This implies, for example, that no component is completely separated from any stream by a separator. The assumption is trivial for bilinear systems corresponding to energy balances.
(3) Either all or none of the compositions of a stream are measured, that is, partial stream composition measurement is not considered. We call these measure-all composition sensors.
(4) A single sensor is used to measure all compositions of a stream.
(5) Sufficient flow rates are measured so that it is possible to estimate all unmeasured flow rates by using flowmeters only (linearly estimable networks).


Assumption 1 limits the scope of the method to the development of nonredundant sensor network design algorithms. However, these can serve as a starting point for the optimal placement of redundant sensors. Assumptions 2, 3 and 4 are relevant only for multicomponent processes, and they make possible the quantification of the minimum number of sensors required to estimate all variables. The last assumption, which decomposes the bilinear problem into two coupled linear problems, disallows certain feasible sensor network designs where flow rates are estimated by using both flow rate and composition (or temperature) sensors. Ali and Narasimhan (1996) claim that this assumption is not so restrictive because, in general, the estimation of flow rates using both flow rate and composition sensors requires information from several sensors and leads to lower accuracy and reliability. Although the basic design algorithms are developed under these assumptions, extensions to treat the general case are described later.

Processes without Splitters

We observed in Chapter 5 that when a multicomponent process consists of mixers and separators only, and no splitters, then a complete set of nonredundant equations that relate the variables is obtained by writing all the component balances for every unit and a normalization equation for every stream. It was also assumed that the compositions of a stream are not partially measured. Moreover, if the composition sensor of a stream fails, then all compositions of that stream become unmeasured, and none of them can be indirectly estimated by using the normalization equation. Thus, only the overall flow and component balances around each unit are useful in indirectly estimating variables and, hence, are useful in computing reliabilities. A similar situation arises in multicomponent processes where one of the components in all streams is unmeasurable. For example, in mineral beneficiation networks, the gangue component in all streams is unmeasured.
In these networks, normalization equations are not useful in the indirect estimation of the mineral concentrations of streams. It was also shown in Lemma 6.1 that under the above conditions the minimum number of sensors needed is 2(n_S − n_U + 1). Moreover, it was shown that flowmeters have to be placed on the chords of a spanning tree, and concentration sensors should be located on the chords of another, not necessarily different, spanning tree.

Evaluation of Reliability

Because we are considering minimal sensor networks, measured variables are nonredundant, and, therefore, the reliabilities of measured compositions


in a stream are all equal to the nonfailure probability of the set of sensors used. In the case of unmeasured compositions, they are observable by construction. Their reliability should be computed by taking into account in how many cutsets they are included. Moreover, all unmeasured compositions of a stream are estimated indirectly by using the same set of cutsets. Therefore, all unmeasured compositions in a stream have the same reliabilities. We now show how to compute these reliabilities. Let T x be the spanning tree formed by streams with unmeasured compositions. Let K sx (S j ) be the fundamental cutset with respect to this spanning tree containing branch S j . To estimate the unmeasured mass fraction of component t in stream j (x j,t ), the compositions of all the rest of the streams in K sx (S j ) should be measured, and flow rates of all streams in K sx (S j ) should be observable. Moreover, the cutset K sx (S j ) contains one or more streams with unmeasured flow rates. This follows from the fact that if flow rates of all streams in K sx (S j ) are measured, then it implies that all streams of this cutset are chords of the spanning tree. This is impossible because a cutset cannot solely consist of chords of some spanning tree. To illustrate this, consider the system of Figure 12.1. This is similar to the system in Figure 5.9 with the difference that a flash unit substitutes the splitter. Assume that we are trying to evaluate the reliability of the unmeasured composition of stream S4 . The cutsets containing stream S4 are also shown in the figure. The measured flow rates are in S6 and S2 and the measured concentrations are in streams S1 and S5 . The corresponding spanning trees are:

FIGURE 12.1. Cutsets of the system in Figure 5.9.


T^x = {S2, S3, S4, S6}, T^m = {S1, S3, S4, S5}. It is easy to verify that some cutsets contain more than one unmeasured flow. Let T^m be the spanning tree consisting of unmeasured flows. Consider the set of streams with unmeasured flow rates that belong to the union of all cutsets containing S_j. We denote this by {b_1, b_2, ..., b_n}. For S4 this set is {S1, S3, S5}. Consider now all the estimable cutsets containing these streams: K_1, K_2, ..., K_n. All these cutsets contain measured flow rates that contribute to determine the flow rates of S1, S3 and S5. Because by assumption flow rates have degree of estimability one, there will be one such cutset per unmeasured flow rate. Then the set Z_j of flowmeters required to estimate the flow rates of all streams of {b_1, b_2, ..., b_n} is given by:

Z_j = ∪_s K_s^x ∪ K_1 ∪ K_2 ∪ ... ∪ K_n − {b_1, b_2, ..., b_n}    (12.1)

In our example the estimable cutsets containing S1, S3, and S5 are K_1 = {S1, S6}, K_2 = {S2, S5, S6} and K_3 = {S2, S3, S6}. Therefore Z_j = {S2, S6}. Thus, to estimate composition x_j,t, the composition sensors of all the other streams in K_s^x(S_j) and the flowmeters of all streams in Z_j should be active. The reliability of x_j,t is given by:

R_{x_j,t}^v(t) = ∏_{i ∈ K_j^x} (1 − p_i^x) × ∏_{i ∈ Z_j} (1 − p_i)    (12.2)

where p_i and p_i^x are the failure probabilities of the flow rate and composition sensors of stream S_i, respectively.

Properties of Optimal Bilinear Sensor Networks

As in Chapter 11, a sensor network for multicomponent processes is designed to maximize the minimum reliability over all flow rate and composition variables. In view of the assumptions made, the problem is then to determine which choice of spanning trees T^m and T^x renders the maximum network reliability. First, we prove that the least reliability is always attained by an unmeasured composition.

Lemma 12.1: The minimum reliability is attained by an unmeasured composition in some stream and not by an unmeasured flow rate.

Proof: Let the least reliability among flow rates correspond to an unmeasured flow rate in stream S_k. This stream may be a chord or a branch of the spanning tree T^x, depending on whether the compositions of stream S_k are measured or not. In either case, stream S_k must be present in some fundamental cutset of T^x, because every branch and every chord of T^x must be present in some fundamental cutset. Let stream Sk be present in a fundamental cutset


K_s^x(S_j). Then, to estimate the compositions of stream S_j, all flowmeters necessary for estimating the flow rate of stream S_k must be active together with other flowmeters and composition sensors. Therefore, the reliability of x_j,t is smaller than the reliability of the flow rate in stream S_k. Either x_j,t has the least reliability or some other unmeasured composition has lower reliability. In either case, the least reliability is attained by an unmeasured composition. Q.E.D.

From this result, it follows that while evaluating the network reliability, we need to evaluate the reliabilities of unmeasured compositions only. In the case of equal failure probability for all sensors, consider the following lemma:

Lemma 12.2: In a bilinear sensor network with equal failure probabilities p of all flow sensors, the optimal flow rate and composition spanning trees (T^m and T^x) are the same.

Proof: Consider first any given (not necessarily optimal) composition spanning tree T^x. Consider a fundamental cutset K_s^x(S_j) of the process graph for the given T^x. The reliability of any composition x_j,t in stream j is computed by using Equation (12.2). From this equation, we can observe that for a given T^x, the only way of increasing the reliability of x_j,t is to choose the spanning tree T^m so that the product ∏_{i∈Z_j} (1 − p_i) increases. For equal flowmeter failure probabilities, this is achieved by minimizing the cardinality of set Z_j. Because every indirect estimation of a flow rate (using flow rate measurements only) requires at least one other flowmeter, the set Z_j has least cardinality when the flow rates of all streams in K_s^x(S_j) (except stream S_j) are measured. In other words, K_s^x(S_j) should also be a fundamental cutset corresponding to T^m. Because this is valid for all fundamental cutsets of T^x, T^m and T^x should be identical for the optimal choice of T^x.
Naturally, the largest reliability will be obtained for the best spanning tree solution corresponding to the case when only flow measurements are considered (T*). Q.E.D.

Corollary: In a bilinear sensor network with equal failure probabilities p of all flow sensors and equal composition sensor failure probability p^x, the optimal flowmeter and composition spanning trees (T^x and T^m) are the same as the one corresponding to the optimal solution for pure flows only, T*.

Proof: Because T^x = T^m, the reliability of the sensor network corresponds to the spanning tree that has a cutset with the largest cardinality. Q.E.D.

Consider now the case where the sensor failure probabilities for composition sensors are not equal, but they are still equal for the flow sensors. To visualize the effect, we present the example in Figure 12.2, where the simplified ammonia process is presented. In this process the failure probabilities of flowmeters are all 0.1, and the composition sensor failure probabilities are shown in Figure 12.2.
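Equations (12.1) and (12.2) reduce to set unions and probability products. The sketch below reruns the Figure 12.1 example, where the estimable cutsets K_1–K_3 and the unmeasured-flow set {S1, S3, S5} give Z_j = {S2, S6}; the failure probabilities in the reliability product are hypothetical placeholders, not values from the book:

```python
# Figure 12.1 example: estimable cutsets containing S1, S3 and S5.
K1, K2, K3 = {"S1", "S6"}, {"S2", "S5", "S6"}, {"S2", "S3", "S6"}
unmeasured_flows = {"S1", "S3", "S5"}  # {b1, b2, b3}

# Equation (12.1): flowmeters needed = union of the cutsets minus the
# streams whose flow rates are unmeasured.
Zj = (K1 | K2 | K3) - unmeasured_flows
print(sorted(Zj))  # -> ['S2', 'S6']

# Equation (12.2): product over the composition sensors in the x-cutset
# times the product over the flowmeters in Zj. Probabilities are assumed.
p_flow = {"S2": 0.1, "S6": 0.1}    # flowmeter failure probabilities
p_comp = {"S1": 0.2, "S5": 0.2}    # composition sensor failure probabilities

R = 1.0
for s in p_comp:                   # measured concentrations (S1, S5)
    R *= 1 - p_comp[s]
for s in Zj:                       # required flowmeters (S2, S6)
    R *= 1 - p_flow[s]
print(round(R, 4))  # -> 0.5184
```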


FIGURE 12.2. Ammonia plant network and composition sensor failure probabilities.

If all composition sensors have the same failure probability, then the best solution is to locate flowmeters and composition sensors on streams S3 , S5 and S6 . The corresponding spanning tree for maximum reliability of flows only, T ∗ , is shown in Figure 12.3a. However, if we adopt this pattern of flow measurement for the present problem, the minimum reliability is 0.486, obtained for the compositions of stream S4 . The optimal spanning tree solution for this problem is shown in Figure 12.3b. By locating flowmeters and composition sensors on streams S6 , S7 and S8 , a network with reliability of 0.605, corresponding to the reliability of compositions in streams S1 , S2 and S3 , is obtained. This shows that the solution T x = T m = T ∗ is not optimal when composition sensor failure probabilities are unequal. However, as pointed out in Lemma 12.2, T x = T m still holds. The following trivial example shows that when the flow rate sensors have unequal failure probabilities, the optimal solution does not feature the same spanning trees, that is T x = T m is not a necessary condition of optimum. Consider a simple process unit with one input stream and two output streams shown in Figure 12.4. The numbers within parentheses for each stream correspond to failure probabilities of flowmeters and composition sensors.

FIGURE 12.3. Optimal ammonia plant solutions.


FIGURE 12.4. System with unequal sensor failure probability.
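The comparison behind Figure 12.4 is the same test that step 5 of the phase 1 algorithm later formalizes in Equation (12.3): indirect estimation through a cutset wins when its product of nonfailure probabilities is at least the direct sensor's. In this sketch the direct reliability 0.50 and the indirect product 0.56 come from the text; splitting 0.56 into the individual values 0.2 and 0.3 is an assumption made for illustration:

```python
def prefer_indirect(p_direct, p_others):
    """Return (True, r) when (1 - p_direct) <= prod(1 - p_i) over the
    other cutset members, i.e., when indirect estimation is at least as
    reliable as a direct measurement (cf. Equation (12.3))."""
    r = 1.0
    for p in p_others:
        r *= 1 - p
    return (1 - p_direct) <= r, r

# Figure 12.4: flowmeters on S1 and S2 estimate S3 with reliability 0.56,
# beating a direct S3 flowmeter of reliability 0.50.
better, r_indirect = prefer_indirect(0.5, [0.2, 0.3])  # 0.2/0.3 assumed
print(better, round(r_indirect, 2))  # -> True 0.56
```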

The optimal solution for this process is to measure the flow rates of streams S1 and S2 and the compositions of streams S2 and S3, giving T^x = {S1} and T^m = {S3}, which are not identical. Such a possibility arises whenever the variable reliability of a flow rate is higher if it is indirectly estimated than if it is measured. For example, if we measure the flow rates of streams S1 and S2, then the reliability of estimating the flow rate of stream S3 is 0.56, which is higher than directly measuring it with a reliability of 0.50.

Design Procedure of Bilinear Multicomponent Systems

In the previous section, three different cases of increasing complexity have been discussed. We now present algorithms for the three cases.

• All sensor failure probabilities are equal: SENNET provides the optimal solution.

• Equal sensor failure probabilities for flows only: Given the above finding that both spanning trees coincide, the solution is obtained with a slight modification of SENNET. The entering and leaving variable candidate sets are constructed in the same way as in the linear nonredundant case, that is, by exchanging a chord for a branch, so that a spanning tree structure is always maintained. The modification consists in a different way of choosing the entering and the corresponding leaving variable, which cannot be decided by comparing the cardinalities of the cutsets involved, because these are not correlated with reliabilities. The choice is now arbitrary.
• Unequal sensor failure probabilities: For this case, the algorithm proposed by Ali and Narasimhan (1996) is presented in the next section.

Algorithm for Unequal Sensor Failure Probabilities

Start with an initial spanning tree for T^x and, keeping this fixed, attempt to obtain a spanning tree T^m that gives maximum network reliability. Phase 1


provides the spanning tree T^m and Phase 2 improves on T^x.

Phase 1:
—Step 1. Set T^x = T^m.
—Step 2. Find the unmeasured composition x_j,t with the least reliability.
—Step 3. Obtain the set of streams Z_j whose flow rate measurements are required to estimate x_j,t.
—Step 4. For each stream c_p of set Z_j, identify all the fundamental cutsets of T^m that contain c_p as a member. Let {K_1, K_2, ..., K_r} be the set of such fundamental cutsets, corresponding to the branches b_1, b_2, ..., b_r of T^m.
—Step 5. If for any of the fundamental cutsets K_q obtained in step 4 the following is satisfied:

(1 − p_{c_p}) ≤ ∏_{i ∈ K_q, i ≠ c_p} (1 − p_i)    (12.3)

then c_p should enter and b_q should leave T^m.
—Step 6. If no such cutset exists, stop; otherwise, go to step 2.

Step 3 identifies all the flowmeters (chords of T^m) that are required to estimate the least reliable composition variable. In step 4, an attempt is made to improve the network reliability by interchanging one of these chords with a branch of a fundamental cutset of which it is a member (an elementary tree transformation of T^m). The network reliability improves if the indirect estimation of the flow rate of the chord is better than its direct measurement, which is checked in step 5. Phase 1 of the algorithm thus gives the spanning tree T^m corresponding to the given T^x that results in the maximum network reliability.

Phase 2: In this phase, an elementary tree transformation of T^x is performed in an attempt to find a better solution (T^x, T^m), where T^m is optimal for the chosen T^x. The entering and leaving variables for T^x are obtained in the same way as described in the previous section. For each choice of entering and leaving variable candidates, a spanning tree T^x is obtained, and its corresponding best possible flow rate spanning tree can be obtained by using phase 1. If this solution renders an improved network reliability, we accept it and proceed with the next iteration of the algorithm; otherwise, we try the next choice of entering and leaving variables for T^x. If none of these choices leads to an improvement in network reliability, the iterative process is terminated and the final solution is accepted.

Example 12.1

The simplified ammonia network shown in Figure 12.2 is used to illustrate the proposed algorithm. Flowmeters are assumed to have a failure probability


equal to 0.1 and the failure probabilities of composition sensors are as shown in the figure. We now apply SENNET.

—Construction of the spanning tree: Start with the spanning tree shown in Figure 12.3a. This tree is T^m = {S1, S2, S4, S7, S8}.
—Identification of fundamental cutsets: The fundamental cutsets with respect to this tree are:
• K_1(S1) = {S1, S3}
• K_1(S2) = {S2, S3}
• K_1(S4) = {S3, S4, S5}
• K_1(S7) = {S5, S6, S7}
• K_1(S8) = {S3, S6, S8}
—Calculation of minimum network reliability: The minimum network reliability is 0.486, corresponding to the concentration in stream S4. Therefore b_x = S4.
—Determination of leaving variables: The leaving variable candidates are those branches whose fundamental cutsets have at least one chord in common with the fundamental cutset of stream S4. Moreover, stream S4 may itself be a leaving variable candidate. The fundamental cutset containing stream S4 is K_1(S4) = {S3, S4, S5}. One chord of this cutset is S3, and the fundamental cutsets containing this chord are K_1(S1), K_1(S2) and K_1(S8). Similarly, the other chord is S5, and the fundamental cutsets containing this chord are K_1(S7) and K_1(S8). Thus, the leaving variable candidate set is Lc = {S1, S2, S4, S7, S8}. We arbitrarily select S7 as the candidate for leaving variable.
—Determination of entering variables: The corresponding set of entering variables is Le = {S5}, so we choose S5 as the entering variable. The new tree becomes {S1, S2, S4, S5, S8}, giving a network reliability of 0.498. We observe that the network reliability improves; therefore, the choice of leaving and entering variables holds. In the next iteration, the leaving and entering variable candidates are S8 and S3, respectively, giving a new tree T^x = T^m = {S1, S2, S3, S4, S5} with a network reliability of 0.605.
Because the network reliability does not improve any further for any choice of entering and leaving variables, we accept this as the final solution. Explicit enumeration of all possible spanning tree solutions shows that this is also the globally optimal solution. Example 12.2 Consider the ammonia process shown in Figure 12.5. The failure probabilities of flowmeters and composition sensors for each stream are, respectively, shown within parentheses in the figure.


FIGURE 12.5. Ammonia plant network with unequal composition sensor failure probabilities.

Choose an initial spanning tree T^x = {S1, S2, S4, S7, S8} as shown in Figure 12.3a.

Phase 1: The optimal T^m corresponding to this T^x is obtained by using the algorithm as follows:
—Steps 1–2. Initialize T^x = T^m = {S1, S2, S4, S7, S8}. The network reliability for this choice is 0.274, obtained for compositions in stream S8.
—Step 3. The set Z_8 is equal to {S3, S6}.
—Step 4. The fundamental cutsets of T^m containing chord S3 are:
(1) K_1(S1) = {S1, S3}
(2) K_1(S2) = {S2, S3}
(3) K_1(S4) = {S4, S3, S5}
(4) K_1(S8) = {S8, S3, S6}
The fundamental cutsets containing chord S6 are:
(5) K_1(S7) = {S7, S5, S6}
(6) K_1(S8) = {S8, S3, S6}
—Step 5. The fundamental cutsets containing S3 do not give an indirect way of estimating the flow rate of chord S3 with better reliability than its direct measurement. However, cutset K_1(S7) provides an indirect way of estimating the flow rate of S6 that is better than its direct measurement. So we exchange branch S7 with chord S6 and get a new T^m = {S1, S2, S4, S6, S8}, giving an improved network reliability of 0.323, corresponding to compositions in stream S8. Further iteration does not lead to any improvement in the network reliability. Thus, the best T^m obtained for the initial T^x is T^m = {S1, S2, S4, S6, S8}.

Phase 2: We now attempt to find an improved solution by elementary tree transformations of T^x. The leaving variable candidates can either be stream


S8 or branches of T^x whose fundamental cutset has a common chord with K_1^x(S8). Thus, the leaving variable candidate set is {S8, S1, S2, S4, S7}. We arbitrarily select edge S1 as the leaving variable and obtain the corresponding set of entering variables:

Le = {S3, S6, S8} − ({S3, S6, S8} ⊕ {S1, S3}) = {S3}

Because edge S3 is the only element of Le, we choose it as the entering variable. The new T^x is {S2, S3, S4, S7, S8}. Corresponding to this T^x, we find the optimal T^m, which happens to be identical. The network reliability for this trial solution is equal to 0.303. We observe that the network reliability does not improve; therefore, the choices of entering and leaving variable candidates are rejected. We continue until all candidates from set Lc have been verified and find that the network reliability does not improve. Thus, we accept T^x = {S1, S2, S4, S7, S8} and T^m = {S1, S2, S4, S6, S8} as the solution.

Treatment of Splitter Units

In Chapter 6, the way splitter units account for the role of branches and chords of spanning trees was discussed. The net result of that analysis is that one cannot consider spanning trees for T^x containing more than two splitter branches. If T^x contains only one splitter branch, one of the splitter stream compositions must be measured. In this case, among all the streams associated with a splitter, the one whose composition sensor has the least failure probability is measured. If T^x contains two splitter streams as branches, then none of the splitter streams have measured compositions. It is now possible to use the sensor network design algorithm for the three different cases described earlier with the following modifications.

(1) When splitters are present, it can be shown that in the optimal sensor network design, T^x and T^m are identical for equal failure probabilities of flowmeters only if T^x contains one splitter stream.
Because, in general, it cannot be determined whether T x and T m should be identical in the optimal solutions, the algorithm for unequal sensor failure probabilities are always used for any given failure probabilities of flowmeters and composition sensors. Moreover, the initial spanning tree and entering variable set for T x is chosen so that among streams incident on any splitter, not more than two are branches of T x . This can be easily incorporated in the spanning tree generation algorithm. At each iteration, we also chose the entering variable so that not more than two streams incident on the same splitter became branches of T x . (2) To compute the reliability of a composition variable, one needs to identify all flowmeters and all composition sensors that are useful for its estimation. These are obtained as follows:


• If T^x contains only one stream of a splitter, then this implies that the compositions of the splitter stream whose sensor has the least failure probability, say p_s^x, are measured. Thus, the composition reliabilities of all streams incident on that splitter are equal to (1 − p_s^x). If the fundamental cutset of any other stream K_j^x(S_i) contains one or more streams of a splitter, then in the reliability computation of composition x_j, the compositions of all these splitter streams are assumed to be measured by using one composition sensor with failure probability p_s^x.
• If T^x contains two streams of a splitter, say S1 and S2, then none of the splitter stream compositions are measured; the compositions of a splitter stream are indirectly estimated by using the fundamental cutset K_j^x(S1) or K_j^x(S2). Let K_j^x(S1) be the cutset through which the compositions of a splitter stream are estimated. Let c_1, c_2, ..., c_r be the chords of K_j^x(S1) that do not correspond to a splitter. The set of flowmeters required to estimate the compositions of the streams associated with a splitter is obtained as before by using Equation (12.1). However, only the composition sensors of chords c_1, c_2, ..., c_r are required for estimating the splitter stream compositions. Thus, the reliabilities of all splitter stream compositions are given by:

R(x_{j,i}) = ∏_{i ∈ K_j^x} (1 − p_i^x) × ∏_{i ∈ Z_j} (1 − p_i)    (12.4)

If the fundamental cutset of any other stream K_j^x(S_1) contains one or more streams of a splitter, then the reliability of x_1 also depends on the flowmeters and composition sensors required to estimate the compositions of the splitter streams. It was proved in Lemma 12.1 that the least reliability is attained by an unmeasured composition. However, if splitters are present, there are rare instances when an unmeasured flow rate of some stream associated with a splitter may have lower reliability than all unmeasured compositions. This may occur, for instance, if a splitter stream S_j is a branch both in T^x and T^m, and if one of the splitter streams has a measured composition. In this case, the unmeasured flow rate of stream S_j will not be useful for estimating any of the unmeasured compositions, and Lemma 12.1 cannot be used. For the flow rate of stream S_j to have the least reliability, the flowmeter failure probabilities of the splitter streams must be substantially higher than the other flowmeter and composition sensor failure probabilities. Therefore, this case was ignored in the development.

Example 12.3

In this example, we illustrate the design of a sensor network when splitters are present in the process. Consider the ammonia network given in Figure 12.2.
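The example uses equal failure probabilities (0.10 for flowmeters, 0.20 for composition sensors), so every reliability it reports is a plain product of nonfailure probabilities. A quick numeric check (the sensor counts restate the example's reasoning; nothing else is assumed):

```python
p_flow, p_comp = 0.10, 0.20  # flowmeter / composition sensor failure prob.

def reliability(n_flowmeters, n_comp_sensors):
    """Product of nonfailure probabilities for the sensors involved."""
    return (1 - p_flow) ** n_flowmeters * (1 - p_comp) ** n_comp_sensors

# Splitter stream S8: flow and composition measurements of S1 and S6.
print(round(reliability(2, 2), 3))  # -> 0.518

# Stream S5: flow rates of S1, S6 and S7 plus compositions of S1 and S6.
print(round(reliability(3, 2), 3))  # -> 0.467
```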


For reasons of simplicity, we assume that all flowmeters have an equal failure probability of 0.10, and all composition sensors also have an equal failure probability of 0.20. We also assume that only a single splitter is present in the process, represented by node U5. To observe flowrates and compositions of all streams, the minimum number of flowmeters is three, whereas that of composition sensors is two, which is given by Equation (6.20). In this equation n_S = 8, n_U = 6, n_sp = 1 (one splitter) and m_s,1 = 3 (three nodes incident to the splitter). We start with an initial tree T^x = T^m = {S2, S3, S4, S5, S8}. Note that we cannot start with the spanning tree given in Figure 12.3b, because it contains three splitter branches. The fundamental cutsets of this spanning tree are:

• K_1(S2) = {S1, S2}
• K_1(S3) = {S1, S3}
• K_1(S5) = {S5, S6, S7}
• K_1(S4) = {S1, S4, S6, S7}
• K_1(S8) = {S1, S6, S8}

Since T^x contains two splitter branches, all splitter stream compositions are unmeasured. Thus, the compositions of stream S7 (a splitter edge) are unmeasured. However, the flow rate of this stream is measured. The composition reliabilities of all streams, except for S4, S5 and S7, can be computed by using Equation (12.2), because the fundamental cutsets of these streams in T^x do not contain any splitter stream. The compositions of splitter stream S8 are observable by using the flow rate and composition measurements of streams S1 and S6, that is, by using fundamental cutset K_1(S8), with a reliability of 0.518. Therefore, the compositions of splitter streams S4 and S7 are also observable with the same reliability. The compositions of stream S5 can be estimated by using the flow rate and composition measurements of streams S6 and S7 (fundamental cutset 3). However, the compositions of stream S7 are indirectly estimated by using the flow rate and composition sensors of streams S1 and S6. Therefore, the compositions of stream S5 are observable through the flow rate measurements of streams S1, S6 and S7 and the composition measurements of streams S1 and S6, with a reliability of 0.467. Corresponding to this T^x we obtain the optimal T^m using phase 1 of the algorithm for unequal sensor failure probabilities. We observe that T^x and T^m are identical. The network reliability is 0.467, which is attained by the compositions of stream S5. As in SENNET (Ali and Narasimhan, 1993), we find that the set of leaving variables is L_c = {S2, S3, S4, S8}. If edge S4 is chosen as the leaving variable, the corresponding set of entering variables is L_e = {S1, S6, S7}. If we choose either S6 or S7 as the entering variable, the network reliability does not improve. However, the network reliability improves if edges S1 or S8 are


chosen as leaving and entering variables. This gives a new spanning tree solution {S1, S2, S3, S4, S5}. Further iterations do not improve the network reliability. Explicit enumeration of all solutions shows that this solution is also the global optimum.

ENERGY NETWORKS

For this case, the same approach as the one shown in Chapter 6 is used. All the algorithms developed for multicomponent networks can now be applied to obtain the optimal placement of flow and temperature sensors. The streams with unmeasured flows will form a spanning tree of G^m, whereas the streams with unmeasured temperatures will form a spanning tree of G^e. As an example, Ali and Narasimhan (1996) solved a flowsheet of a crude preheat train of a refinery by using the same technique. The example is omitted because it does not add new conceptual material.

NOMENCLATURE

K_j: jth cutset of a graph
K_j(i): jth cutset containing variable i
K_j^x: jth cutset of the concentration graph
L_c: list of candidate leaving variables
L_e: list of candidate entering variables
n_S: number of streams or edges in a graph
n_U: number of units or nodes in a graph
p_i: failure probability of flow rate sensor of stream S_i
p_i^x: failure probability of composition sensor of stream S_i
R_i^v(t): variable estimation reliability
S_i: stream i
T: spanning tree
T^x: spanning tree corresponding to concentration measurements in its chords
T^m: spanning tree consisting of unmeasured flows
T^*: optimal solution corresponding to pure flows only
U_i: unit i
x: vector of state variables
x_M: set of measured variables
x_U: set of unmeasured variables
Z_j: set of flowmeters required to estimate all flowrates of K_j^x


REFERENCES

Ali Y. and S. Narasimhan. Sensor Network Design for Maximizing Reliability of Linear Processes. AIChE J., 39, 5, pp. 2237–2249 (1993).
Ali Y. and S. Narasimhan. Sensor Network Design for Maximizing Reliability of Bilinear Processes. AIChE J., 42, 9, pp. 2563–2575 (1996).

CHAPTER 13

Design of Reliable and Cost-Efficient Nonrepairable Sensor Networks

INTRODUCTION

Chapters 11 and 12 presented methodologies to design reliable systems based on the concept of maximizing the system reliability, where cost is not taken into consideration. Most of the material for this chapter, which adds cost considerations to the problem, is taken from Bagajewicz and Sánchez (2000).

MINIMUM COST MODEL

The design of the sensor network subject to reliability constraints can be written as follows:

$$
\begin{aligned}
\min \; & \sum_{\forall i \in M_1} c_i q_i \\
\text{s.t. } & R_k^v(q) \ge R_k^* \quad \forall k \in M_R \\
& q_i \in \{0,1\} \quad \forall i \in M_1
\end{aligned}
\qquad (13.1)
$$

where M_1 is the set of streams where sensors can be placed, M_R is the set of variables whose reliability is to be constrained, q_i are the binary variables determining whether a sensor is located in stream S_i (q_i = 1) or not (q_i = 0), and c_i are the corresponding costs. The reliability of each variable is calculated using the failure probabilities of all the sensors participating in the cutsets, as discussed in Chapter 10.
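For small networks, model (13.1) can be solved by exhaustive enumeration of the binary vector q. The sketch below does exactly that, but with a deliberately simplified stand-in reliability model (a constrained variable is served only by its own sensor, with reliability 1 − p_i); in the book, R_k^v(q) comes from the cutset-based computation of Chapter 10. All costs and failure probabilities are hypothetical.

```python
# Brute-force solution of a tiny instance of model (13.1). The reliability
# evaluator is a simplified stand-in, not the cutset-based one of Chapter 10.
from itertools import product

costs = {"S1": 1500.0, "S2": 2000.0, "S3": 1700.0}   # hypothetical costs c_i
p     = {"S1": 0.10, "S2": 0.10, "S3": 0.20}         # sensor failure probs
r_min = {"S1": 0.85, "S3": 0.75}                      # thresholds R_k^*

streams = sorted(costs)

def reliability(k, q):
    # Stand-in for R_k^v(q): variable k is estimated only by its own sensor
    return (1.0 - p[k]) if q[k] else 0.0

best = None
for bits in product([0, 1], repeat=len(streams)):
    q = dict(zip(streams, bits))
    if all(reliability(k, q) >= r for k, r in r_min.items()):
        c = sum(costs[s] for s in streams if q[s])
        if best is None or c < best[0]:
            best = (c, q)

print(best)  # cheapest feasible placement: sensors on S1 and S3, cost 3200.0
```

With real cutset-based reliabilities the enumeration is identical; only `reliability` changes, which is why the text stresses the need for more efficient algorithms for large problems.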


MODEL FOR MINIMAL NETWORKS

Assume first that all sensors have the same cost c and that N is the number of sensors. Then,

$$
\sum_{\forall i} c_i q_i = cN \qquad (13.2)
$$

Therefore, one can rewrite model (13.1) as a minimum number of sensors model:

$$
\begin{aligned}
\min \; & \sum_{\forall i} q_i \\
\text{s.t. } & R_k^v(q) \ge R_k^* \quad \forall k \in M_R \\
& q_i \in \{0,1\} \quad \forall i \in M_1
\end{aligned}
\qquad (13.3)
$$

Solution Procedure

Models (13.1) and (13.3) are mixed integer programming problems. Moreover, the constraints regarding reliability are in principle constructive constraints, in the sense that they involve an algorithm for the evaluation of reliability. However, an explicit mathematical expression to calculate R_k^v(q) can also be used. One can also use the tree search methodology developed by Bagajewicz (1997) for this type of problem, which was presented in Chapter 6. This methodology is based on an enumeration of every branch of the tree of alternatives, aided by a stopping criterion. In essence, the enumeration proceeds until a feasible node of the tree is found. This node is evaluated and compared with the existing best solution, and the development of that branch is stopped, because any addition of sensors leads to an increase in cost. Although the computational effort of this technique is reasonable for small problems, the technique is not very efficient for large problems. There is, therefore, a need to develop algorithms that are more efficient for these problems. Such methods have not yet been developed.

Relation to Other Models

The model presented by Ali and Narasimhan (1993), which was discussed in detail in Chapter 11, can be put in the following form:

$$
\begin{aligned}
\max \; & \min_{\forall k \in M_R} R_k^v(q) \\
\text{s.t. } & E_i(q) \ge E_i^* \quad \forall i \in M_R \\
& \sum_{\forall i} q_i = N^* \\
& q_i \in \{0,1\} \quad \forall i \in M_1
\end{aligned}
\qquad (13.4)
$$


The first constraint puts a lower bound on the estimability of the variables of interest. The second constraint fixes the number of sensors. In the case of minimal networks, N^* should be chosen to be the number of chords of the spanning tree, and the threshold estimability is set to one. In this case, these two can be substituted by a unique constraint requiring estimability of order one in all variables. We saw in Chapter 6 that this is a property of a minimal network. For the case of redundant networks, the number of sensors can be increased, and the estimability requirements can still be set to one. Such use of model (13.4) is equivalent to the use of the methodology presented in Chapters 11 and 12.

Limitations of Previous Models

Model (13.4) is successful in identifying reliable sensor networks, as described in Chapters 11 and 12. However, it has the following limitations.

(1) Cost is not explicitly considered, so the solutions may not be cost-effective.
(2) It does not allow limiting the reliability of specific variables, because it relies on maximizing only the smallest, which does not guarantee desired reliability levels in others.
(3) It cannot be used in conjunction with other goals, such as accuracy, error detectability, etc.
(4) In the case of the design of redundant sensor networks, there is no control over which variable is redundant. Moreover, there is no control over the degree of redundancy of the system or the different ways a particular variable can be estimated.
(5) The set of variables for which reliability is requested, M_R, is equal to the whole set of variables of the system.
(6) The set of variables where sensors can be placed (M_1) is also the whole set of variables. Although some modifications can be made to address this problem in the context of the methodology based on the choice of leaving and entering variables, the effect of such restrictions on the ability to find optimal solutions has not been investigated.

To address these limitations, a more general maximum reliability model is needed. In the next sections, such a model is presented, and its connections with the minimum cost model given by model (13.1) are established.

GENERALIZED MAXIMUM RELIABILITY MODEL

Assume now that a new constraint is added to model (13.1) in which the minimum reliability of all the variables of interest is used, as follows:


X  Min ci qi    s.t.  ∀i   v ∗ Min Rk (q) ≥ Min Rk ∀k3M R ∀k3M R    Rkv (q) ≥ Rk∗ ∀k ∈ M R    qi ∈ {0,1} ∀i ∈ M1

(13.5)

Note that the addition of the constraint is trivial, because it is guaranteed by the other constraints of reliability of individual streams. We now use the concept of duality according to Tuy (introduced in Chapter 8) to obtain the following result: X   ci qi  cT = α ≤ Min f (x) = Min        i∈M1          s.t. v ∗ g(x) = Min Rk ≥ Min Rk = β   ∀k∈M R ∀k∈M R         Rv ≥ R∗   ∀k ∈ M R   k k     qi ∈ {0,1} ∀i ∈ M1   β = Min Rk∗ ≥ Max g(x) = Max Min Rkv     ∀k∈M ∀k∈M R R         s.t.     X   c q ≤ c = α f (x) = i i T ⇔   i∈M1         ∗   ∀k ∈ M R Rk ≥ Rk       qi ∈ {0,1} ∀i ∈ M1

(13.6)

Thus, the maximum reliability model constrained by cost follows:

$$
\begin{aligned}
\max \; & \min_{\forall k \in M_R} R_k^v \\
\text{s.t. } & \sum_{i \in M_1} c_i q_i \le c_T \\
& R_k^v \ge R_k^* \quad \forall k \in M_R \\
& q_i \in \{0,1\} \quad \forall i \in M_1
\end{aligned}
\qquad (13.7)
$$
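The Tuy-type duality between the minimum-cost and cost-constrained maximum-reliability problems can be illustrated numerically on a toy instance (all data hypothetical, with a stand-in redundancy model in which a key variable is estimable through any working sensor): the cost α of the cheapest network reaching reliability β is exactly the budget at which the cost-constrained problem attains β.

```python
# Toy illustration of the duality between the minimum-cost problem and the
# cost-constrained maximum-reliability problem. All data are hypothetical;
# the reliability model (parallel estimation paths) is a stand-in.
from itertools import product

costs = [3.0, 2.0, 4.0]      # sensor costs
avail = [0.90, 0.80, 0.95]   # probability each sensor works

def network_rel(q):
    # Key variable is estimable if at least one selected sensor works
    if not any(q):
        return 0.0
    fail = 1.0
    for qi, ai in zip(q, avail):
        if qi:
            fail *= (1.0 - ai)
    return 1.0 - fail

def cost(q):
    return sum(c for qi, c in zip(q, costs) if qi)

beta = 0.98
# Minimum cost subject to reliability >= beta (minimum cost model)
alpha = min(cost(q) for q in product([0, 1], repeat=3)
            if network_rel(q) >= beta)
# Maximum reliability subject to cost <= alpha (model 13.7, unconstrained R*)
best = max(network_rel(q) for q in product([0, 1], repeat=3)
           if cost(q) <= alpha)

print(alpha, round(best, 6))  # 5.0 0.98
```

The budgeted optimum attains exactly the reliability threshold used in the cost-minimization, as the duality in (13.6) states.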

The objective function of this problem is the same as the one proposed by Ali and Narasimhan (1993). A few simplifying assumptions will produce a


problem constrained by a fixed and minimum number of sensors, which is the model they presented. Consider all the sensors having the same cost, so that the cost constraint becomes a constraint on the total number of sensors:

$$
\sum_{i \in M_1} c_i q_i \le c_T \;\Rightarrow\; \sum_{i \in M_1} q_i \le N^* \qquad (13.8)
$$

Thus, when the total number of sensors is set to a minimum and the constraints on the reliability of individual variables are dropped, the problem becomes:

$$
\begin{aligned}
\max \; & \min_{\forall k \in M_R} R_k^v \\
\text{s.t. } & \sum_{i \in M_1} q_i \le N^* \\
& q_i \in \{0,1\} \quad \forall i \in M_1
\end{aligned}
\qquad (13.9)
$$

Consider now the following problem, where the constraint is made an equality:

$$
\begin{aligned}
\max \; & \min_{\forall k \in M_R} R_k^v \\
\text{s.t. } & \sum_{i \in M_1} q_i = N^* \\
& q_i \in \{0,1\} \quad \forall i \in M_1
\end{aligned}
\qquad (13.10)
$$

Lemma 13.1: The solution to model (13.9) has the same objective function value as the solution of model (13.10).

Proof: If the constraint is not binding, then, if $\tilde{q}$ is the vector corresponding to the optimal solution,

$$
\sum_{i \in M_1} \tilde{q}_i < N^* \qquad (13.11)
$$

Therefore, it is possible to add a sensor to the system. Such an addition can leave the reliability of the system unaltered or increase it. Assume, without loss of generality, that the optimum of the problem corresponds to the reliability of

FIGURE 13.1. Example 13.1.

estimation of the flowrate of stream S1. That is,

$$
\min_{\forall k \in M_R} \tilde{R}_k^v = \tilde{R}_1^v \qquad (13.12)
$$

A new sensor can be located in a stream where a new balance equation involving the flowrate of S1 cannot be written. In such a case, the reliability of S1 is not altered. On the contrary, if a new balance equation can be written, then the reliability of S1 will increase, because reliability is a monotone function of the number of sensors involved. Q.E.D.

Example 13.1

We now illustrate the above lemma in the example of Figure 13.1. Assume that in this example the set M_R = {S1, S2, S3}. Assume that a feasible solution of model (13.9) is given by only one measurement, in stream S3, that is, q = (0,0,1,0,0), as indicated in Figure 13.1. It is easy to see that the addition of a measurement in stream S2 will increase the reliability of S1, whereas the addition of a measurement in S4 or S5 will not. Indeed, the addition of a measurement in S4 cannot be used in conjunction with S3 to provide a new estimate of S1. Therefore, if the original constraint sets a maximum of two sensors (N^* = 2), then the constraint on the number of sensors will be binding if sensors in S1 and S2 can be placed. Otherwise, the one-sensor solution q = (0,0,1,0,0) will have the same reliability as the two-sensor solutions q = (0,0,1,1,0) or q = (0,0,1,0,1).

The above model addresses most of the concerns raised when analyzing the model presented by Ali and Narasimhan (1993). It explicitly considers cost, it can impose lower bounds on the reliability of specific variables, and it can restrict the set where sensors can be located. Other goals, such as accuracy goals, can easily be added to the minimum cost model as constraints. These constraints restrict the feasible region and remain as such in the Tuy-type dual problem. Finally, this model cannot explicitly control the degree of redundancy. This is considered in the next section.
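The monotonicity argument behind Lemma 13.1 can be checked on a toy model: with a redundancy-type reliability (a variable is estimated through any working estimation path), adding a sensor either leaves a variable's reliability unchanged or increases it. The path structure below is hypothetical, not the actual network of Figure 13.1.

```python
# Toy check of the monotonicity used in Lemma 13.1. Hypothetical structure:
# S1 is estimable directly (if measured) or through a balance requiring
# sensors on both S2 and S3; it is reliable if at least one path works.

def reliability_S1(q, p):
    paths = []
    if q[0]:
        paths.append(1 - p[0])                    # direct measurement of S1
    if q[1] and q[2]:
        paths.append((1 - p[1]) * (1 - p[2]))     # balance via S2 and S3
    fail = 1.0
    for r in paths:
        fail *= (1 - r)                           # all paths must fail
    return 1 - fail

p = [0.1, 0.1, 0.2]
base = reliability_S1([0, 0, 1], p)   # only S3 measured: no path to S1
more = reliability_S1([0, 1, 1], p)   # adding S2 creates the balance path
print(base, round(more, 2))           # 0.0 0.72
```

Adding the S2 sensor creates a new balance equation involving S1 and strictly increases its reliability, exactly the mechanism invoked in the proof.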


Reliable Sensor Networks with Estimability Constraints

We now proceed to generalize the maximum reliability model to include estimability constraints. Consider the addition of estimability constraints to the minimum cost model (13.1):

$$
\begin{aligned}
\min \; & \sum_{\forall i \in M_1} c_i q_i \\
\text{s.t. } & R_k^v(q) \ge R_k^* \quad \forall k \in M_R \\
& E_k(q) \ge E_k^* \quad \forall k \in M_E \\
& q_i \in \{0,1\} \quad \forall i \in M_1
\end{aligned}
\qquad (13.13)
$$

Using the duality according to Tuy, one can convert this model into maximum reliability models.

Reliable Sensor Networks with Accuracy Constraints

We now proceed to generalize the maximum reliability model to include accuracy constraints. The resulting model is:

$$
\begin{aligned}
\min \; & \sum_{\forall i \in M_1} c_i q_i \\
\text{s.t. } & R_k^v(q) \ge R_k^* \quad \forall k \in M_R \\
& \sigma_k(q) \le \sigma_k^* \quad \forall k \in M_P \\
& q_i \in \{0,1\} \quad \forall i \in M_1
\end{aligned}
\qquad (13.14)
$$

Using the duality according to Tuy, one can convert this model into maximum reliability models or into a maximum precision one. Note that estimability constraints can still be added.

Other Illustrative Examples

Consider the same ammonia network example presented in Chapter 11 (Figure 11.1). The cutsets corresponding to this network are given in Table 11.2. We now present different cases illustrating the above concepts. Data for these cases are given in Table 13.1, and solutions are given in Table 13.2.

TABLE 13.1. Data for Illustrative Examples.

Case | Instrumentation Cost | Estimability Lower Bounds E* | Reliability Lower Bounds R*
1 | 2000 * [1 1 1 1 1 1 1 1] | — | 0.81 * [1 1 1 1 1 1 1 1]
2 | 1000 * [1.5 2.0 2.3 2.8 1.7 2.0 1.5 2.8] | — | 0.81 * [1 1 1 1 1 1 1 1]
3 | 2000 * [1 1 1 1 1 1 1 1] | — | 0.9 * [1 1 1 1 1 1 1 1]
4 | 1000 * [1.5 2.0 2.3 2.8 1.7 2.0 1.5 2.8] | — | [0.9 0 0.81 0 0 0 0.729 0]
5 | 1000 * [1.5 2.0 2.3 2.8 1.7 2.0 1.5 2.8] | — | [0.95 0 0.95 0 0 0 0.95 0]
6 | 2000 * [1 1 1 1 1 1 1 1] | [2 0 2 0 0 0 2 0] | 0.81 * [1 1 1 1 1 1 1 1]
7 | 1000 * [1.5 2.0 2.3 2.8 1.7 2.0 1.5 2.8] | [2 0 2 0 0 0 2 0] | 0.81 * [1 0 1 0 0 0 1 0]
8 | 1000 * [1.5 2.0 2.3 2.8 1.7 2.0 1.5 2.8] | [2 0 3 0 0 0 2 0] | 0.81 * [1 0 1 0 0 0 1 0]
9 | 1000 * [1.5 2.0 2.3 2.8 1.7 2.0 1.5 2.8] | [0 3 0 0 0 3 0 3] | 0.9 * [1 0 0 1 0 0 1 0]

TABLE 13.2. Solutions of Illustrative Examples.

Case 1 (cost 6000). Solutions and stream reliabilities (one row per alternative solution):
S1 S4 S8: 0.90 0.90 0.90 0.90 0.81 0.81 0.81 0.90
S1 S5 S6: 0.90 0.90 0.90 0.81 0.90 0.90 0.81 0.81
S2 S4 S8: 0.90 0.90 0.90 0.90 0.81 0.81 0.81 0.90
S2 S5 S6: 0.90 0.90 0.90 0.81 0.90 0.90 0.81 0.81
S3 S4 S8: 0.90 0.90 0.90 0.90 0.81 0.81 0.81 0.90
S3 S5 S6: 0.90 0.90 0.90 0.81 0.90 0.90 0.81 0.81
S4 S5 S7: 0.81 0.81 0.81 0.90 0.90 0.81 0.90 0.81
S6 S7 S8: 0.81 0.81 0.81 0.81 0.81 0.90 0.90 0.90

Case 2 (cost 5200):
S1 S5 S6: 0.90 0.90 0.90 0.81 0.90 0.90 0.81 0.81

Case 3 (cost 8000):
S1 S4 S6 S7: 0.973 0.973 0.973 0.973 0.964 0.973 0.973 0.964
S1 S5 S7 S8: 0.973 0.973 0.973 0.964 0.973 0.964 0.973 0.973
S2 S4 S6 S7: 0.973 0.973 0.973 0.973 0.964 0.973 0.973 0.964
S2 S5 S7 S8: 0.973 0.973 0.973 0.964 0.973 0.964 0.973 0.973
S3 S4 S6 S7: 0.973 0.973 0.973 0.973 0.964 0.973 0.973 0.964
S3 S5 S7 S8: 0.973 0.973 0.973 0.964 0.973 0.964 0.973 0.973
S4 S5 S7 S8: 0.964 0.964 0.964 0.973 0.973 0.973 0.964 0.973

Case 4 (cost 3000):
S1 S7: R1 = 0.9, R3 = 0.9, R7 = 0.9

Case 5 (cost 7500):
S1 S5 S7 S8: R1 = 0.973, R3 = 0.973, R7 = 0.973

Case 6 (cost 8000): same solutions as in Case 3; all estimability constraints binding.

Case 7 (cost 7500): same solution as in Case 5; all estimability constraints binding.

Case 8 (cost 9500):
S1 S2 S5 S7 S8: R1 = 0.9973, R3 = 0.9973, R7 = 0.9802; E1 = 3, E3 = 3, E7 = 2

Case 9 (cost 11500):
S1 S2 S5 S6 S7 S8: R1 = 0.9988, R3 = 0.988, R7 = 0.989; E2 = 3, E6 = 3, E8 = 3


TABLE 13.3.

Instrumentation Cost: 1000 * [1.5 2.0 2.3 2.8 1.7 2.0 1.5 2.8]
Reliability Lower Bounds: R* = 0.9 * [1 0 0 1 0 0 1 0]
Precision Upper Bounds: σ* = [0 3 0 0 0 2 0 2]

Case 1: This first case corresponds to the same problem presented by Ali and Narasimhan (1993), this time solved by using the minimum cost model. The constraints on reliability were chosen equal to the solution obtained by Ali and Narasimhan (1993). Different alternative solutions having the same cost of 6000 are obtained.

Case 2: This case explores variants of the above solution by altering the costs of the different sensors. There is now a unique optimal solution.

Case 3: In this case uniform costs are used again, but higher reliability thresholds are introduced.

Case 4: Different costs and different reliability constraints are used in this case. However, only three streams are restricted. These types of solutions cannot be obtained using the models presented in Chapter 11.

Case 5: This case is the same as Case 4, with an increase in the lower bound on reliability.

Case 6: This case explores the effect of adding estimability constraints. All estimability constraints are binding, with no variable having an estimability larger than two. Observing that the solutions are the same as in Case 3, this example illustrates how estimability constraints can play a role similar to reliability constraints. However, as discussed before, they are not completely equivalent.

Case 7: Cost is changed in this case, with only some reliability and estimability constraints. All the estimability constraints are binding.

Case 8: This case explores the increase of the estimability of variable S3 in Case 7 to three (E3* = 3), while the rest of the data remains the same. The estimability constraints are not all binding.

Case 9: Tight reliability constraints in variables and different estimability constraints are explored in this case.

Case 10: To show how reliability and accuracy constraints interact, the example of Table 13.3 was prepared. Results of model (13.14), with a resulting cost of C = 8000, are shown in Table 13.4.

All the examples presented above show the richness of the sensor location problem when several constraints of different nature are imposed. The challenge is now to be able to solve large systems. Techniques capable of addressing this issue will be presented in Chapter 16.

TABLE 13.4. Results for Case 10.

Solution: S2 S5 S6 S7 S8
Reliabilities: R1 = 0.9729, R3 = 0.9639, R7 = 0.9729
Precision: σ2 = 1.3297, σ6 = 1.9911, σ8 = 1.9249


NOMENCLATURE

c_i: cost of sensor i
c_T: bound on total cost
E_i: estimability of variable i
E_k^*: threshold of estimability
M_1: set of variables where sensors can be placed
M_E: set of key variables for estimability
M_P: set of key variables for precision
M_R: set of key variables for reliability
N: number of sensors
N^*: number of sensors fixed for design
q: binary vector indicating whether sensors are located (1) or not (0) in variables
R_i^v(q): variable estimation reliability as a function of q
R_i^*: variable estimation reliability threshold
S_i: stream i
U_i: unit i
x: vector of state variables
x_M: set of measured variables

Greek Letters

σ: precision
σ^*: precision threshold

REFERENCES

Ali Y. and S. Narasimhan. Sensor Network Design for Maximizing Reliability of Linear Processes. AIChE J., 39, 5, pp. 2237–2249 (1993).
Bagajewicz M. Design and Retrofit of Sensor Networks in Process Plants. AIChE J., 43, 9, pp. 2300–2306 (1997).
Bagajewicz M. and M. Sánchez. Cost-Optimal Design of Reliable Sensor Networks. Comp. & Chem. Eng., 23, 11/12, pp. 1757–1762 (2000).

CHAPTER 14

Design of Repairable Sensor Networks

INTRODUCTION

In this chapter, the service and estimation availability and reliability of sensor networks that are subject to corrective maintenance are studied. Finally, a minimum cost model is presented.

As discussed in Chapter 10, when repairs are not present, the service availability of a sensor is equal to its service reliability. In addition, the failure rate has been considered constant. However, in the presence of repairs, failure is no longer an event that depends only on how many hours the sensor has survived since the time it was put in service. It is also conditioned by the fact that maintenance has been performed and/or the sensor has been repaired at a certain time after being put in service. These events condition the failure rate. Thus, we distinguish unconditional from conditional events in failure and repair. These concepts are first formally introduced and later used to determine sensor maintenance cost, which accounts for nearly 20% of all maintenance cost (Masterson, 1999). Its reduction or containment is essential.

FAILURE INTENSITY

Definition: The unconditional failure intensity w_i(t_0, t) at time t is the probability that the sensor fails per unit time at time t, given that it was normal at time t_0.

Definition: The conditional failure intensity λ_i(t) of sensor i at time t is the probability that the sensor fails per unit time at time t, given that it is in the normal state at time t.


To help understand the difference between these two concepts, assume a system of many alike sensors with a constant failure rate. As time goes by, sensors fail at a constant rate; that is, the number of sensors failing is proportional to the number of normally functioning sensors. This is none other than the conditional failure intensity. Thus, for a system with constant failure rate, the conditional failure intensity is equal to the failure rate (λ_i(t) = r). However, the unconditional failure intensity is the proportion of the original sensors that fail. For the system in question, at a constant failure rate, the number of sensors failing gets lower as time goes by; therefore, the unconditional failure intensity drops. In the general case, when the rate of failure r is not constant, the conditional failure intensity is not equal to the failure rate.
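This distinction can be made concrete for a single nonrepairable sensor with constant failure rate r (the rate value is assumed): the conditional failure intensity is r itself, while the unconditional failure intensity w(0, t) = r·exp(−r·t) decays, because fewer of the original sensors are still working.

```python
# Sketch of conditional vs. unconditional failure intensity for a
# nonrepairable sensor with a constant, assumed failure rate r.
import math

r = 0.05            # assumed failure rate, 1/h

def conditional_intensity(t):
    # Constant failure rate => lambda(t) = r at any time the sensor is normal
    return r

def unconditional_intensity(t):
    # Probability per unit time that a sensor put in service at t0 = 0
    # fails at time t: survival probability times the failure rate
    return r * math.exp(-r * t)

for t in (0.0, 10.0, 50.0):
    print(t, conditional_intensity(t), round(unconditional_intensity(t), 4))
    # 0.0  -> 0.05 0.05
    # 10.0 -> 0.05 0.0303
    # 50.0 -> 0.05 0.0041
```

The two intensities coincide only at t = 0, when all of the original sensors are still in service.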

REPAIR INTENSITY

Once one is able to calculate the probability of a given number of sensors failing, we assume that they will be repaired. Thus, much as in the case of failures, the following are defined:

Definition: The unconditional repair intensity v_i(t_0, t) at time t is the probability that the component is repaired per unit time at time t, given that it jumped into the normal state at time t_0.

Definition: The conditional repair intensity μ_i(t) is the probability that a component is repaired per unit time at time t, given that it is failed at time t.

Assume again a system of many alike sensors. The unconditional repair intensity counts the number of sensors being repaired as a fraction of the original number of sensors. On the other hand, the conditional repair intensity is the number of repairs as a fraction of the number of failed sensors at time t. Thus, if the repair rate m is constant, the conditional repair intensity is equal to the repair rate (μ_i(t) = m) (Henley and Kumamoto, 1981).

EXPECTED NUMBER OF REPAIRS

The real number of repairs in the interval (t, t + dt) is directly related to the unconditional repair intensity by v_i(t_0, t) dt. The conditional repair intensity is not used, because the unconditional repair intensity counts the fraction of sensors being repaired with respect to the total original number of sensors, not with respect to those that had failed at time t. Thus, the expected number of repairs Λ_i in the interval [t_0, t) is

$$
\Lambda_i(t_0, t) = \int_{t_0}^{t} v_i(t_0, t)\, dt \qquad (14.1)
$$


To calculate the unconditional repair intensity v_i, we will assume the following:

—The conditional failure intensity (λ_i) is known and is constant.
—The sensor does not deteriorate in time, that is, the probability of failing before and after a repair does not change. This is known as AGAN (as-good-as-new) maintenance.
—If the sensor has been repaired, its failure intensity does not change.
—The conditional repair intensity (μ_i) is known and is constant. This is given by the rate at which the instruments can be repaired or, in more practical terms, the number of these sensors per unit time that can be repaired.

Systems with a failure rate that increases with time are intentionally left out from consideration, because no design procedures for sensor networks featuring nonconstant failure rates have been developed yet. A similar consideration applies to repair rates. Finally, sensors can have different failure modes, because different components can have different failure rates. However, for the same reasons outlined above, only one failure mode is considered.

Number of Repairs under Corrective Maintenance

We now aim at obtaining an expression for the number of repairs when the instruments are repaired immediately after they fail. This last assumption is crucial for the development of an expression for the unconditional repair intensity. Assume the following events:

• B_i = the component is failed at time t.
• C_i = the component is repaired during (t, t + dt).
• W_i = the component was in the normal (operating) state at time t_0.

Then we can write an expression for the unconditional repair intensity:

$$
v_i(t_0, t) = \Pr(C_i \mid W_i) \qquad (14.2)
$$

However, events B_i and C_i are by assumption simultaneous; that is, C_i is included in (implied by) B_i, so C_i = C_i ∩ B_i. Thus, using the laws of probability:

$$
v_i(t_0, t) = \Pr(C_i \mid W_i) = \Pr(C_i \cap B_i \mid W_i) = \Pr(C_i \mid B_i \cap W_i)\,\Pr(B_i \mid W_i) \qquad (14.3)
$$

The first term is the probability that the system is being repaired at time t, given that it has failed at time t and was in the normal state at time t_0. This is none other than the conditional repair intensity, that is,

$$
\mu_i(t) = \Pr(C_i \mid B_i \cap W_i) \qquad (14.4)
$$

The second term in Equation (14.3) is the probability that the system has failed at time t, given that it was in the normal state at time t_0. This is none other than the service unavailability, that is,

$$
U_i^s(t) = \Pr(B_i \mid W_i) \qquad (14.5)
$$

Therefore,

$$
v_i(t_0, t) = \mu_i(t)\, U_i^s(t) \qquad (14.6)
$$

and thus, for a constant conditional repair intensity (μ_i = m_i), the expected number of repairs is:

$$
\Lambda_i(t_0, t) = \mu_i \int_{t_0}^{t} U_i^s(t)\, dt \qquad (14.7)
$$

We now proceed to develop a Markov model that will help us determine the unavailability of sensor i. We assume that transitions from the failed state to the normal state (and vice versa) occur one at a time, that is, they cannot take place simultaneously. In other words, the possibility of two or more simultaneous transitions is negligible. This is depicted in Figure 14.1. The basis for these Markovian models was introduced in Chapter 10. Using Equations (10.6) and (10.7) for our case, we obtain:

$$
\frac{d A_i^s(t_0, t)}{dt} = \mu_i U_i^s(t_0, t) - \lambda_i A_i^s(t_0, t) \qquad (14.8)
$$

Equation (14.8) is explained as follows: The rate at which the sensor changes its availability (left side) is equal to the rate at which unavailable sensors are

FIGURE 14.1. Markov representation of corrective maintenance.


repaired minus the rate at which they fail. Using Equation (10.1) and integrating:

$$
U_i^s(t_0, t) = \frac{\lambda_i}{\lambda_i + \mu_i} + \left( U_i^s(t_0, t_0) - \frac{\lambda_i}{\lambda_i + \mu_i} \right) e^{-(\lambda_i + \mu_i)(t - t_0)} \qquad (14.9)
$$

$$
A_i^s(t_0, t) = 1 - U_i^s(t_0, t) \qquad (14.10)
$$

Thus, from Equations (14.6) and (14.9) we conclude that after a long period of time the unconditional repair intensity is given by:

$$
\lim_{t \to \infty} v_i(t_0, t) = \frac{\mu_i \lambda_i}{\lambda_i + \mu_i} \qquad (14.11)
$$

In other words, when the repair rate is much larger than the failure rate (μ_i ≫ λ_i), the unconditional repair intensity is given by the failure rate (v_i(t_0, t) → λ_i). Conversely, if the failure rate is much larger than the repair rate, the unconditional repair intensity approaches the repair rate (v_i(t_0, t) → μ_i). The expected number of repairs is obtained by using Equation (14.7), that is:

$$
\Lambda_i(t_0, t) = \mu_i \left[ \frac{\lambda_i (t - t_0)}{\lambda_i + \mu_i} - \left( \frac{U_i^s(t_0, t_0)}{\lambda_i + \mu_i} - \frac{\lambda_i}{(\lambda_i + \mu_i)^2} \right) \left( e^{-(\lambda_i + \mu_i)(t - t_0)} - 1 \right) \right] \qquad (14.12)
$$

Figure 14.2 shows the evolution of service unavailability, service availability, and the expected number of repairs as a function of time. As expected, the

FIGURE 14.2. Availability, unavailability and expected number of repairs.


availability of a repairable system does not go to zero, but rather levels off at a certain time and stays constant.
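The leveling-off behavior follows directly from Equations (14.9)–(14.11) and can be sketched numerically for a sensor that starts in the normal state (U_i^s(t_0, t_0) = 0); the failure and repair rates below are assumed values.

```python
# Sketch of Equations (14.9)-(14.11) for a repairable sensor starting in the
# normal state: unavailability rises to lambda/(lambda+mu), availability
# levels off at mu/(lambda+mu), and the unconditional repair intensity
# approaches mu*lambda/(lambda+mu). Rates are assumed.
import math

lam, mu = 0.02, 0.5          # assumed failure and repair rates, 1/h

def unavailability(t, t0=0.0, u0=0.0):
    s = lam / (lam + mu)                                  # steady-state value
    return s + (u0 - s) * math.exp(-(lam + mu) * (t - t0))  # Eq. (14.9)

def availability(t):
    return 1.0 - unavailability(t)                        # Eq. (14.10)

u_inf = lam / (lam + mu)          # long-run unavailability
v_inf = mu * lam / (lam + mu)     # limiting repair intensity, Eq. (14.11)
print(round(unavailability(100.0), 6), round(u_inf, 6), round(v_inf, 6))
# 0.038462 0.038462 0.019231
```

By t = 100 h the transient exponential has died out, and the unavailability has already settled at its constant long-run value, which is what Figure 14.2 depicts.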

MAINTENANCE AND TOTAL COST

The maintenance cost is, for simplicity, related to the expected number of repairs, that is,

$$
c_M(t_1, t_2) = \sum_{\forall i} \Lambda_i(t_1, t_2)\, c_{R,i} \qquad (14.13)
$$

where c_{R,i} is the cost of repairing sensor i, which can be broken down into spare parts cost and manpower cost. If the life cycle is n years, the present value of the life cycle maintenance cost can be calculated as follows:

$$
c_{M,0} = \sum_{p=1}^{n} \frac{c_M(p-1, p)}{(1 + I)^p} \qquad (14.14)
$$

where I is the interest rate and p is the operating time of the instrument in years. Therefore, the total cost of instrumentation can be stated in terms of its purchase cost and the present value of the maintenance cost:

$$
c_T = c_P + c_{M,0} \qquad (14.15)
$$

Other formulations, such as perpetuity formulas or average yearly cost that include depreciation, can be developed. We refer the reader to textbooks such as Peters and Timmerhaus (1991).
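Equations (14.13)–(14.15) amount to a short present-value calculation; the sketch below works through them with assumed data (repair costs, expected repairs per year, purchase cost, and interest rate are all hypothetical).

```python
# Sketch of Equations (14.13)-(14.15): yearly maintenance cost from expected
# repairs, discounted to present value, plus purchase cost. All data assumed.

def maintenance_cost(repairs_per_sensor, repair_costs):
    # Equation (14.13): sum over sensors of expected repairs times repair cost
    return sum(n * c for n, c in zip(repairs_per_sensor, repair_costs))

def present_value_maintenance(yearly_costs, interest):
    # Equation (14.14): discount the cost incurred in each year p at rate I
    return sum(c / (1 + interest) ** p
               for p, c in enumerate(yearly_costs, start=1))

repair_costs = [200.0, 350.0]      # hypothetical repair cost per sensor
repairs_per_year = [1.2, 0.8]      # hypothetical expected repairs/sensor/yr
c_m = maintenance_cost(repairs_per_year, repair_costs)     # 520.0 per year
c_m0 = present_value_maintenance([c_m] * 5, interest=0.08)  # 5-year life cycle
c_total = 3000.0 + c_m0            # Equation (14.15), purchase cost assumed
print(round(c_m, 2), round(c_m0, 2), round(c_total, 2))
# 520.0 2076.21 5076.21
```

For a constant yearly cost this reduces to the standard annuity present-value factor; the perpetuity and depreciation variants mentioned above only change the discounting step.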

RESIDUAL PRECISION

In attempting the design of sensor networks, one may ask each key variable to have a certain degree of estimability (Chapter 6) or a certain estimation availability, to avoid solutions consisting of nonredundant networks. This can also be achieved easily by requesting the variance of the estimates of measured variables to be lower than the variance of the measurements. In many cases, measurements contain biases and, therefore, can no longer be used. Thus, the corresponding measurements are eliminated from further consideration. Once a redundant measurement is found to have a bias and is eliminated, the overall degree of redundancy is lower, and the precision


of all variables decreases. It is desired that the precision of certain key variables remain above the specified threshold on any deletion of gross errors. This motivates the following definition (Bagajewicz, 1997).

Definition: Residual precision is the ability of the network to guarantee a certain level of precision in selected key variables when gross errors are detected and the corresponding measurements are eliminated.

Because several gross errors can be found, and therefore several measurements eliminated, the residual precision order is defined next.

Definition: A variable has residual precision of order k, ψ_i(q, k), when a certain level of precision ψ_i*(k) is guaranteed after k gross errors are detected and the measurements are eliminated, regardless of their position in the network.

Note that these concepts were introduced originally by Bagajewicz (1997) under the name "availability." They are renamed in this book so that the term "availability" is reserved for the concept introduced in Chapter 10.

The residual precision can be obtained as follows. Consider a fixed set of measurements given by the binary vector q. Let t_k be another binary vector of the same dimension as q, defined as follows:

t_k(i) = 1 if variable i is eliminated; 0 otherwise   (14.16)

The vector t_k has the following properties:

Σ_{∀i} t_k(i) = k   (14.17)

q(i) − t_k(i) ≥ 0   (14.18)

The first condition indicates that only k elements of t_k are nonzero. The second indicates that only measured variables can be signaled in t_k as nonzero. We now provide a way of calculating ψ_i(q, k). As k measurements are deleted, the new measurement vector is q − t_k, and therefore a new precision σ_i(q − t_k) is obtained for each variable. Because this precision varies with the choice of the measurements deleted, the worst (largest) value should be picked. Thus,

ψ_i(q, k) = Max_{∀t_k} σ_i(q − t_k)   (14.19)

Remark 1: If the residual precision requested is equal to the bound requested on deletion, that is, ψi∗ (k) = σi∗ , the constraint on precision can be dropped, as the former contains the latter.
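For small networks, Equation (14.19) can be evaluated by enumerating all deletion vectors t_k. The sketch below does this for a hypothetical single-balance network x1 + x2 = x3 with assumed unit standard deviations; the variance formulas are the standard linear data reconciliation results for one balance, not data from the book:

```python
import itertools
import math

# Hypothetical single-node flow network: x1 + x2 = x3, balance row C = [1, 1, -1]
C = [1, 1, -1]
SIG = [1.0, 1.0, 1.0]  # assumed measurement standard deviations

def est_sigma(i, q):
    """Standard deviation of the estimate of variable i under measurement vector q."""
    if q[i]:
        if all(q):
            # All streams measured: reconciliation around the single balance
            # reduces each variance by (C_i * sig_i^2)^2 / (C Q C^T)
            s_tot = sum(c * c * s * s for c, s in zip(C, SIG))
            return math.sqrt(SIG[i] ** 2 - (C[i] * SIG[i] ** 2) ** 2 / s_tot)
        return SIG[i]  # measured but nonredundant
    if all(q[j] for j in range(len(q)) if j != i):
        # Unmeasured but deducible from the balance: variances add
        return math.sqrt(sum(SIG[j] ** 2 for j in range(len(q)) if j != i))
    return math.inf  # unobservable

def residual_precision(i, q, k):
    """psi_i(q, k) = max over all deletions t_k of sigma_i(q - t_k), Eq. (14.19)."""
    measured = [j for j, qj in enumerate(q) if qj]
    worst = 0.0
    for dropped in itertools.combinations(measured, k):
        q_new = [0 if j in dropped else qj for j, qj in enumerate(q)]
        worst = max(worst, est_sigma(i, q_new))
    return worst

q = [1, 1, 1]
print(est_sigma(0, q))               # reconciled precision of x1
print(residual_precision(0, q, 1))   # worst precision of x1 after any one deletion
```

With all three streams measured, deleting the sensor on x1 itself is the worst case: the variable must then be deduced from the balance and its variance is the sum of the other two.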



TABLE 14.1. Solutions of the Residual Precision Constrained Problem.

x1    x2    x3    x4
1%    2%    2%    —
1%    2%    —     2%

Example 14.1

Consider Example 8.1, with the same costs. Two nonredundant solutions of cost c_T = 3000 are shown in Table 8.1. Consider now that residual precision of order k = 1 is added to variables S1 and S4 as follows: ψ_1*(1) = 1.5% and ψ_4*(1) = 3%. The solution is x_M = (2%, 3%, 3%, 3%), with cost c_T = 3900. Assume now that residual precision is requested at the same level as precision. Then two alternative solutions with cost c_T = 5500 are obtained (Table 14.1). Not only is the cost higher, but there is also one more degree of redundancy. For larger problems, the number of alternatives will increase, requiring new criteria to further screen them.

MINIMUM COST MODEL

We now generalize the minimum cost model introduced in Chapter 6. The model minimizes total overall cost, including maintenance cost, and includes all the desired constraints on estimability and precision. In addition, it also features constraints on availability.

Min Σ_{i∈M_1} (c_{P,i} + c_{M,i}) q_i

s.t.
  E_j(q) ≤ E_j*              ∀ j ∈ M_E
  σ_j(q) ≤ σ_j*              ∀ j ∈ M_p
  ψ_j(q, k_j) ≤ ψ_j*(k_j)    ∀ j ∈ M_p
  A_{v,j}(q, t) ≥ A_j*       ∀ j ∈ M_A
  q_i ∈ {0, 1}               ∀ i ∈ M_1        (14.20)

where M_A is the set of variables for which availability constraints are imposed. A simplified version of this model has been considered by Sánchez and Bagajewicz (1998). In previous chapters, instruments were considered nonrepairable systems, and therefore instrument reliability was used to evaluate variable reliability and build constraints. In the above minimum cost model,


reliability constraints are purposely omitted. When instruments are considered repairable systems, the concept of availability of a variable is more adequate. Availability constraints present a problem because availability is a function of time. Thus, the constraint in model (14.20) should be considered as applied to the limiting value for large time. Finally, a model for multiple instruments, similar to model (8.5), is presented next:

Min Σ_{∀i} Σ_{k=1}^{n_i} (c_{P,i,k} + c_{M,i,k}) p_{i,k}

s.t.
  E_j(p) ≤ E_j*              ∀ j ∈ I_s
  σ_j(p) ≤ σ_j*              ∀ j ∈ M_p
  ψ_j(p, k_j) ≤ ψ_j*(k_j)    ∀ j ∈ M_p
  A_{v,j}(p, t) ≥ A_j*       ∀ j ∈ M_A
  Σ_{k=1}^{n_i} p_{i,k} ≤ 1  ∀ i ∈ M_1
  p_{i,k} ∈ {0, 1}           ∀ i ∈ M_1, ∀ k = 1, …, n_i        (14.21)
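For small networks, models of the type (14.20)–(14.21) can be solved by exhaustive enumeration of the binary vectors before resorting to MILP solvers. The sketch below is a toy instance of my own construction (a single balance x1 + x2 = x3 + x4 with assumed sensor costs), not the book's Example 8.1; it minimizes purchase cost subject to estimability of a key set:

```python
from itertools import product

# Hypothetical data: one balance x1 + x2 = x3 + x4, assumed sensor costs
COST = [19.0, 17.0, 13.0, 12.0]
KEY = [0, 2]  # estimability required for x1 and x3

def estimable(i, q):
    # A variable is estimable if measured, or deducible because the
    # other three variables in the single balance are all measured.
    return bool(q[i]) or all(q[j] for j in range(4) if j != i)

best_q, best_cost = None, float("inf")
for q in product((0, 1), repeat=4):  # enumerate all 2^4 sensor placements
    if all(estimable(j, q) for j in KEY):
        cost = sum(c * qi for c, qi in zip(COST, q))
        if cost < best_cost:
            best_q, best_cost = q, cost

print(best_q, best_cost)
```

The same loop structure extends to instrument catalogs (the p_{i,k} of model 14.21) by enumerating a choice in {0, 1, …, n_i} per stream instead of a single binary, although the combinatorial growth quickly motivates the tree-search and MILP methods discussed elsewhere in the book.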

Example 14.2

We now show a design of a sensor network for the simplified ammonia network (Figure 6.5). All cutsets are shown in Table 6.1. In this example, sensors for each stream may be selected from a set of three instruments with different precision, purchase cost and failure rate. These data are included in Table 14.2. Corrective maintenance costs are evaluated considering a spare part cost Spc = 10 and a manpower cost Mpc = 40 for all instruments, a life cycle of 5 years, and an annual interest rate I = 6%. Constraints of precision, residual precision and availability are included for only two flow rates. The limits on these variables are presented in Table 14.3. The repair rate of instruments, a parameter that is a characteristic of the plant in consideration, has been varied between 1 and 20.

TABLE 14.2. Instrumentation Data.

                              No. 1    No. 2    No. 3
Purchase cost                  350      250      200
Precision                      1.5%     2.5%     3%
Failure rate (failures/yr)     0.3      0.6      0.7

The results of the


TABLE 14.3. Constraints of the Optimization Problem.

Stream   Precision Requirement   Residual Precision Requirement   Availability Requirement
S1       —                       —                                0.9
S2       1.5%                    2%                               —
S5       2.5%                    3%                               —
S7       —                       —                                0.9

optimization problem are presented for each case in Table 14.4. In the first case, the repair rate is comparatively low. Consequently, the availability of instruments over the life cycle is also relatively low. To satisfy the availability of key variables, the optimal solution includes a set of six instruments. Three of these instruments are of type 1, which are sensors of low failure rate, high precision and high cost. For this reason, precision and residual precision constraints are not binding. When the repair rate is assumed to be 2, an optimal solution exists that consists of five instruments. Two of these instruments are of type 1 and the rest are of type 3. Consequently, the total instrumentation cost decreases. Furthermore, the residual precision for variable S2 and the availability for variable S7 are binding constraints. A lower instrumentation cost is obtained for a repair rate equal to 4. Even though sensors are located on the same streams as in the previous case, one sensor of higher failure rate is installed on stream S8. This occurs because the repair rate is now higher, making the constraint on availability of variable S7 not binding.

TABLE 14.4. Optimization Results for the Simplified Ammonia Process Flowsheet.

Repair  Measured            Instrument    Cost     Precision (%)   Residual Prec. (%)   Availability
Rate    Variables           Type                   S2 / S5         S2 / S5              S1 / S7
1       S1 S4 S5 S6 S7 S8   3 1 1 1 3 2   2040.2   0.8067/1.2893   0.9841/1.2937        0.9021/0.9021
2       S4 S5 S6 S7 S8      3 3 1 3 1     1699.8   0.9283/1.9928   1.9712/2.0086        0.9222/0.9062
4       S4 S5 S6 S7 S8      3 3 1 3 3     1683.7   1.2313/1.9963   1.9712/2.0086        0.9636/0.9511
20      S4 S5 S6 S7 S8      3 3 1 3 3     1775.2   1.2313/1.9963   1.9712/2.0086        0.9983/0.9969

The results of the last case show that the influence of availability constraints


decreases for high repair rates. The cost increases because of the effect of increasing the repair rate μ (from 4 to 20) in the maintenance cost model. In this case only the residual precision of variable S2 is binding. It can be concluded that the repair rate has a direct influence on the availability of a variable. If the repair rate is high, the design follows the requirements of the precision and residual precision constraints. For lower repair rates, the availability of a variable may become a binding constraint; in this situation, cost may increase because it is necessary to incorporate more instruments so that the variable can be calculated in alternative ways.

NOMENCLATURE

A_i^s(t): sensor service availability
A_i^v(t): variable estimation availability
A_i*(t): variable estimation threshold
C_M(t1, t2): maintenance cost for the interval (t1, t2)
C_{M,0}: present value of maintenance cost
c_P: purchase cost
C_{R,i}: cost of repairing sensor i
c_T: total cost
I: interest rate
K_j: constants
m_i(t): instrument repair rate
M_A: set of key variables for availability
M_1: set of variables where sensors can be placed
M_E: set of key variables for estimability
M_P: set of key variables for precision
M_R: set of key variables for reliability
M_U: set of key variables compulsorily measured
Mpc: manpower cost
p: operating time
Pr{.}: probability
q: binary vector indicating whether sensors are located (1) or not (0) in variables
r_i(t): rate of repair of instrument
r_max: maximum rate of repairs
S_i: stream i
Spc: spare part cost
t_k(i): auxiliary binary vector, nonzero when variable i is eliminated
U_i: unit i
U_i^s(t): sensor service unavailability
U_i^{s,c}(t): corrective maintenance sensor service unavailability
U_i^{s,p}(t): preventive maintenance sensor service unavailability
v_i(t0, t): instrument unconditional repair intensity
w_i(t0, t): instrument unconditional failure intensity
x: vector of state variables
x_M: set of measured variables

Greek Letters

λ_i(t): instrument conditional failure intensity
Λ_i(t0, t): instrument expected number of repairs
μ_i(t): instrument conditional repair intensity
σ: precision
σ*: precision threshold
ψ_i(q, k): residual precision of order k of variable i as a function of q
ψ_i*(k): residual precision threshold

REFERENCES

Bagajewicz M. Design and Retrofit of Sensor Networks in Process Plants. AIChE J., 43, 9, pp. 2300–2306 (1997).
Henley E. J. and H. Kumamoto. Reliability Engineering and Risk Assessment. Prentice Hall, New Jersey (1981).
Masterson J. S. Reduce Maintenance Costs with Smart Field Devices. Hydrocarbon Processing, Jan. (1999).
Peters M. S. and K. D. Timmerhaus. Plant Design and Economics for Chemical Engineers. Fourth Edition. McGraw-Hill, New York (1991).
Sánchez M. C. and M. Bagajewicz. On the Design of Robust and Reliable Sensor Networks. AIChE National Meeting, Paper 243f, Miami (1998).

CHAPTER 15

Design of Robust Sensor Networks

INTRODUCTION

Instruments fail in many ways. One important noncatastrophic failure is bias, also referred to as systematic error. When data reconciliation is performed, systematic errors produce a smearing effect on all the data, so that the whole data set has to be discarded. Without data reconciliation, no smearing exists, but there is no way other than direct inspection to detect this situation. In previous chapters, different properties were introduced so that redundancy is guaranteed. Estimability constraints can be used as a means to directly guarantee more than one way of calculating a variable. Similarly, precision and residual precision constraints can be used to indirectly guarantee a certain level of redundancy. However, redundancy requirements are not a direct way of requesting that a sensor network behave properly in the presence of gross errors. In this chapter, two new sensor network properties dealing directly with the presence of gross errors are introduced, and a general model for the design of robust networks is presented.

ORIGIN OF GROSS ERRORS

Steady-state data reconciliation is implemented in practice by using averages of single instrument measurements. As outlined in Chapter 4, gross errors from different sources are in this way lumped into a single value. These sources of gross errors are:

—Instrument biases: These are consistent, fixed-value departures from the averaged values of the signals. Drifting is also a bias.


FIGURE 15.1. Ammonia oxidation reactor feed. Reprinted from Process Plant Performance: Measurement and Data Processing for Optimization and Retrofits, Frantisek Madron, D. Sharp. 1992, Ellis Horwood Series in Chemical Engineering.

—Leaks: Leaks can be classified as predictable (tank evaporation, for example) or unpredictable. Both are typically not considered in plant models, although the predictable ones can be incorporated.

—True outliers: These are occasional measurements that depart significantly from all other measurements. If the number and size of these outliers are large and one-sided, the averaged values may become distorted and confused with biases (Figure 4.4b).

—Departure from steady state: When plants present small drifts and oscillations, averaged values treated as steady-state values do not reflect real plant behavior. This phenomenon has multiple effects, not only on gross error detection but also on reconciliation itself, as well as on variance (tolerance) estimation. Consider, for example, the signal in Figure 15.1. This signal corresponds to the flow rate of ammonia into a reactor for ammonia oxidation by air. The fluctuations are a result of the control scheme. Again, the average of a certain number of measurements will approach the real average only as the number of measurements becomes impracticably large. When a small number of values are averaged, the large peak at t = 20 can introduce a departure from the real average. In addition, when signals like these are averaged, the variance used can no longer be the instrument variance, and process variations should be allowed for if the true average flow rate is to be recovered. This question goes to the heart of the problem, which is the assumption of steady state. Although methods to perform dynamic data reconciliation exist, procedures to develop robust dynamic sensor networks have not yet been developed. Therefore, we omit discussing this issue further and concentrate on methods driven by the needs of steady-state data reconciliation.
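The distortion that a single one-sided outlier introduces into a small-sample average can be seen in a few lines. The numbers below are illustrative, not taken from Figure 15.1; the median is shown only as a familiar robust statistic for contrast:

```python
# A short window of an otherwise steady signal with one large one-sided peak
window = [100.2, 99.8, 100.1, 99.9, 100.0, 115.0]  # illustrative values

mean = sum(window) / len(window)

# Even-length window: the median averages the two middle values
middle = sorted(window)[len(window) // 2 - 1:len(window) // 2 + 1]
median = sum(middle) / 2

print(f"mean   = {mean:.2f}")    # pulled upward by the single peak
print(f"median = {median:.2f}")  # barely affected by the outlier
```

One peak in six samples shifts the mean by several standard deviations of the underlying noise, which is exactly how an outlier can end up being confused with a bias when short averaging windows are used.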


GROSS ERROR HANDLING

The challenging tasks in data reconciliation are to

—Identify the existence of gross errors.
—Identify the gross error locations.
—Identify the gross error type.
—Determine the size of the gross error.

At least one method exists that allows the detection of the existence of gross errors. Some of the methods for gross error identification are to a certain extent capable of discerning the location and type. Very little work has been performed to address departures from steady state. After the gross errors are identified, two responses are possible and/or desired:

—Eliminate the measurement containing the gross error.
—Correct the measurements or the model and run the reconciliation again.

The first alternative is the one implemented in commercial software, which in general only considers biases. This leaves the system with a smaller degree of redundancy and, as we saw, the precision of the reconciled variables deteriorates. In the next few sections, some elements of gross error detection as well as size estimation are introduced. A more thorough treatment of the subject can be found in the following books: Madron (1992), Narasimhan and Jordache (2000) and Sánchez and Romagnoli (2000).

TEST FOR GROSS ERROR PRESENCE

Testing for gross errors is based on the statistical principles of hypothesis testing. We present the three most popular tests.

Global Test

The null hypothesis H0 is that there is no gross error. Let r be the vector of residuals of the material balances, that is, r = C_R z, where z is the vector of flow rate measurements. Then, the expected value of r is

E(r) = 0   (15.1)


and the covariance matrix of r is

Cov(r) = C_R Q_R C_R^T   (15.2)

where Q_R is the covariance matrix of the random measurement errors. In the absence of gross errors, the following variable

χ_m² = r^T (C_R Q_R C_R^T)^{−1} r   (15.3)

follows a chi-square distribution with m degrees of freedom, where m is the number of rows of C_R. This number can be obtained before even performing data reconciliation. If it falls within the confidence interval, that is, if it is lower than a certain critical value, then the null hypothesis is accepted. On the other hand, if it is larger than the critical value, it is said that a gross error has been detected. The global test is performed as follows:

a. Compute χ_m².
b. Determine the threshold value χ_{m,α}² of the chi-square distribution corresponding to a desired level of confidence α (usually 90–95%).
c. If χ_m² > χ_{m,α}², then there is a gross error in the data set.

Remark: The global test cannot determine where the gross errors are or how many there are.

Measurement Test

The measurement test is based on the vector of measurement adjustments (or corrections)

a = F_R^+ − F̃_R   (15.4)

where a is the vector of measurement adjustments. The test is based on the assumption that the random errors of the measurements are independently and normally distributed with zero mean. Under the null hypothesis H0, the expected value of a is

E(a) = 0   (15.5)

and the covariance matrix of a is

Cov(a) = Q̃_R   (15.6)


Thus, the following variable is expected to follow a normal distribution N(0,1):

η_i = a_i / [(Q̃_R)_ii]^{1/2}   (15.7)

If no gross error is present, this value should be lower than a critical value. If α is the confidence level, then this critical value x_{α/2} is obtained directly from the normal distribution tables. The measurement test can then be summarized as follows: the ith measurement contains a gross error, with an error probability of α, if η_i > x_{α/2}. Several studies, modifications and improvements have been proposed for this test (Mah and Tamhane, 1982, 1987; Crowe et al., 1983).

Nodal Test

In the absence of gross errors, the constraint residuals r follow an m-variate normal distribution (m is the rank of C_R). Therefore,

ξ_i = r_i / [(C_R Q_R C_R^T)_ii]^{1/2}   (15.8)

follows a standard normal distribution N(0,1) under H0. If ξ_i is larger than the critical value based on a confidence level α, then one concludes that there is at least one gross error in the set of measurements that participate in the corresponding node balance.

Other tests used specifically for gross errors exist. Among the most popular are the generalized likelihood ratio (Narasimhan and Mah, 1987, 1988), principal component tests (Tong and Crowe, 1995) and the Bonferroni tests (Rollins and Davis, 1992).

GROSS ERROR DETECTION IN DYNAMIC DATA RECONCILIATION

The detection of gross errors in the context of dynamic data reconciliation shares some of the difficulties of tests based on steady-state models. Narasimhan and Mah (1988) proposed to apply the generalized likelihood ratio (GLR) to dynamic situations with small departures from steady-state values. Kao et al. (1992) studied the effect of serially correlated data on gross error detection. They proposed composite test procedures based on window averages, pre-whitening procedures and the generalized likelihood ratio. Rollins and Devanathan (1993) suggested the use of unbiased estimation techniques, and finally Albuquerque


and Biegler (1996) also proposed gross error detection techniques as an extension of their discretization approach. There are many other contributions that would be worth mentioning. However, because the design of robust sensors capable of using techniques for dynamic data reconciliation has not been addressed, we will not explore this field in more detail here.
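The three steady-state tests described above reduce to a few matrix operations. The sketch below applies them to an assumed two-node series network (S1 → S2 → S3, unit variances, a 4σ bias placed on S2); the chi-square critical value 5.991 is the standard 95% value for 2 degrees of freedom:

```python
import numpy as np

C = np.array([[1.0, -1.0, 0.0],      # node 1: S1 - S2 = 0
              [0.0, 1.0, -1.0]])     # node 2: S2 - S3 = 0
Q = np.eye(3)                        # measurement covariance (unit variances)
z = np.array([100.0, 104.0, 100.0])  # 4-sigma bias placed on S2

S = C @ Q @ C.T                      # Cov(r), Eq. (15.2)
r = C @ z                            # balance residuals
Sinv = np.linalg.inv(S)

# Global test, Eq. (15.3)
chi2_stat = r @ Sinv @ r
print("global test fails:", chi2_stat > 5.991)  # chi-square critical value, m = 2, 95%

# Measurement test, Eq. (15.7): adjustments a and their covariance
a = Q @ C.T @ Sinv @ r
cov_a = Q @ C.T @ Sinv @ C @ Q
eta = np.abs(a) / np.sqrt(np.diag(cov_a))
print("suspect measurement:", int(np.argmax(eta)))  # largest statistic

# Nodal test, Eq. (15.8)
xi = np.abs(r) / np.sqrt(np.diag(S))
print("nodes failing:", [i for i, x in enumerate(xi) if x > 1.96])
```

Note how the three tests complement each other: the global test only signals that something is wrong, both nodes fail the nodal test because the biased stream participates in both balances, and the measurement test singles out S2.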

INACCURACY IN GROSS ERROR DETECTION

The following are the types of errors that one can incur in the hypothesis testing proposed above.

—Type I Error: A gross error is identified, while in reality there is none.
—Type II Error: A gross error exists but is not identified.

Of these two, the first is relatively harmless, as the elimination of a variable that does not contain a gross error results only in a loss of precision. However, when a gross error is not detected, smearing persists and all the results are corrupted.

MULTIPLE GROSS ERROR IDENTIFICATION

The tests described above are suitable for the detection of one gross error. However, when more gross errors exist, strategies are needed to identify them. This area of multiple gross error detection, identification and estimation has been the object of several articles in the last 20 years. One of the first strategies proposed is serial elimination (Ripps, 1965), which consists of coupling a certain test with an elimination strategy. If the test fails, a strategy is used to identify one or more variables that are the "most suspected ones." The measurements of these variables are eliminated and the test is run again. Commercial implementations of this procedure (DATACON, SIGMAFINE) eliminate one measurement at a time and use the measurement test or similar. The procedure has been the object of several variations of the original scheme (Nogita, 1972; Romagnoli and Stephanopoulos, 1980; Iordache et al., 1985; Rosenberg et al., 1987; Bagajewicz et al., 2000). Several other procedures exist. For a review, the reader is referred to the books by Madron (1992), Narasimhan and Jordache (2000) and Sánchez and Romagnoli (2000). In addition, more recent work not covered in these books includes articles by Bagajewicz et al. (2000), Bagajewicz and Jiang (1998, 2000), Sánchez et al. (1999), Jiang and Bagajewicz (1999), Jiang et al. (1999) and Renganathan and Narasimhan (1999).
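A minimal sketch of serial elimination follows, using the same assumed three-stream series network as before; it is an illustration, not the DATACON or SIGMAFINE implementation. When a measurement is removed, the now-unmeasured variable is projected out of the balances (matrix projection in the spirit of Crowe et al., 1983), and the global and measurement tests are repeated:

```python
import numpy as np

CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815}  # chi-square critical values, 95%

def null_space(A, tol=1e-10):
    _, s, vh = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vh[rank:].T

def run_tests(C, Q, z):
    S = C @ Q @ C.T
    r = C @ z
    Sinv = np.linalg.inv(S)
    stat = r @ Sinv @ r                                        # global test
    a = Q @ C.T @ Sinv @ r                                     # adjustments
    eta = np.abs(a) / np.sqrt(np.diag(Q @ C.T @ Sinv @ C @ Q))  # measurement test
    return stat, eta

def serial_elimination(C, Q, z):
    active = list(range(C.shape[1]))
    suspects = []
    while True:
        removed = [j for j in range(C.shape[1]) if j not in active]
        if removed:
            G = null_space(C[:, removed].T).T  # project out unmeasured variables
            Cred = G @ C[:, active]
        else:
            Cred = C[:, active]
        if Cred.shape[0] == 0:
            break  # no redundancy left
        stat, eta = run_tests(Cred, Q[np.ix_(active, active)], z[active])
        if stat <= CHI2_95[Cred.shape[0]]:
            break  # global test passes: stop eliminating
        worst = active[int(np.argmax(eta))]  # most suspected measurement
        suspects.append(worst)
        active.remove(worst)
    return suspects

C = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
z = np.array([100.0, 105.0, 100.0])  # 5-sigma bias on S2
print(serial_elimination(C, np.eye(3), z))
```

On the first pass the global test fails and S2 carries the largest measurement-test statistic; after projecting S2 out, the single remaining redundant balance passes the global test and the loop stops with S2 as the only suspect.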


GROSS ERROR SIZE ESTIMATION

Once the gross errors have been identified, it is desired to determine their size. Several methods have been developed in recent years to perform this. When one gross error is present, Madron proposed an expression based on the statistical properties of r^T(C_R Q_R C_R^T)^{−1} r. Basically, when the measurements follow a normal distribution with no gross errors, r^T(C_R Q_R C_R^T)^{−1} r follows a central chi-square distribution with m degrees of freedom. However, in the presence of gross errors, it follows a noncentral chi-square distribution χ_m²(ρ). When one gross error is present, the noncentrality parameter ρ is related to the gross error δ_i as follows:

ρ(m, α, β) = δ_i (σ_i² − σ̃_i²)^{1/2} / σ_i²   (15.9)

where α is the level of confidence, σ_i² is the variance of the measurement and σ̃_i² the corresponding variance of the estimate. The parameter β is the power of the test and is related to the probability of making a type II error (γ) through β = 1 − γ. We are interested in certain power characteristics that are directly related to the probability of Type II error. Thus, one can look for threshold values of the noncentrality parameter of the chi-square distribution such that a certain power of the test β is obtained. The threshold value of the gross error can then be obtained by inverting Equation (15.9):

δ_i*(m, α, β) = ρ(m, α, β) σ_i² / (σ_i² − σ̃_i²)^{1/2}   (15.10)

Thus, δ_i*(m, α, β) is the smallest size of gross error that can be detected with probability β. Typical values of β are 50% and 90%. Madron (1992) presented tables for ρ(m, α, β) and an empirical expression for large m. In the presence of multiple gross errors, the above formulas do not apply. Most of the research in the last 20 years has been devoted to the task of identifying and assessing the size of many gross errors. Several methods, some of which are efficient, have been developed. Because there are no methods developed to design sensor networks featuring capabilities of detecting more than one gross error, such techniques are not covered here.

SENSOR NETWORK ERROR DETECTABILITY

We now introduce a property that is related to the response of the sensor network in the presence of gross errors (Bagajewicz, 1997).


Definition: The ability of the network to detect k gross errors of a certain adimensional size κ_D or larger is called error detectability of order k.

The adimensional gross error is given by κ_D = δ_i/σ_i. We now concentrate on the error detectability of order k = 1. As explained above, in the presence of a gross error of size δ_i in variable x_i, the objective function Ω = (x − z)^T Q_R^{−1} (x − z) has a noncentral χ_m²(ρ) distribution with noncentrality parameter ρ_i; that is, the mean of Ω is ρ_i + m. Having discussed this effect, we are in a position to work this concept backward. One can choose a threshold size of gross error for each variable δ_i* and request that errors larger than this threshold be detected by the network with a certain confidence. If one chooses a common adimensional value κ_D for all variables, the error detectability criterion for variable i becomes

ρ(m, α, β) σ_i / (σ_i² − σ̃_i²)^{1/2} ≤ κ_D   (15.21)

Mathematical expressions capable of assessing error detectability of larger order have not been developed yet.
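The values ρ(m, α, β) that Madron tabulated can be computed numerically. The sketch below handles only m = 1, where the noncentral chi-square tail has a closed form through the normal CDF: treating ρ as the offset θ in (Z + θ)² (i.e., the square root of the usual noncentrality parameter, consistent with ρ being linear in δ_i in Equation 15.9), it solves P((Z + θ)² > χ²_{1,α}) = β by bisection. The final criterion check uses assumed values σ_i² = 1, σ̃_i² = 0.5:

```python
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power(theta, crit):
    # P((Z + theta)^2 > crit): tail of a noncentral chi-square, 1 d.o.f.
    c = sqrt(crit)
    return (1.0 - norm_cdf(c - theta)) + norm_cdf(-c - theta)

def solve(f, target, lo, hi, tol=1e-10):
    # simple bisection for an increasing function f
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha, beta = 0.95, 0.50
z_crit = solve(norm_cdf, (1.0 + alpha) / 2.0, 0.0, 10.0)  # two-sided normal critical value
crit = z_crit ** 2                                        # chi-square critical value, 1 d.o.f.
rho = solve(lambda t: power(t, crit), beta, 0.0, 10.0)    # rho(1, 0.95, 0.5)

# Detectability criterion (15.21) for assumed sigma_i^2 = 1, sigma_tilde_i^2 = 0.5
sigma2, sigma2_tilde, kappa_D = 1.0, 0.5, 3.9
lhs = rho * sqrt(sigma2) / sqrt(sigma2 - sigma2_tilde)
print(f"rho = {rho:.3f}, criterion lhs = {lhs:.3f}, satisfied: {lhs <= kappa_D}")
```

For β = 50% the solution essentially reproduces the familiar normal critical value, since a gross error shifted exactly to the critical point is detected half the time.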

SENSOR NETWORK GROSS ERROR RESILIENCE

A sensor network should be capable of preventing undetected gross errors from corrupting the reconciled values. This motivates the following definition:

Definition: The ability of the network to limit the smearing effect of k undetected gross errors of a certain adimensional size κ_R or lower is called gross error resiliency of order k.

In the event that a gross error of a certain magnitude occurs in any variable and is not detected, a certain corruption of data will take place after the reconciliation is performed. Let Δx_i = x_i − z_i be the change of the reconciled value of variable x_i when a set of gross errors of adimensional size κ_R is present in the network. Let M_δ be a set of k gross error locations. Then δ_R, the vector of gross errors, is defined as follows:

δ_R = κ_R Σ_{i∈M_δ} σ_i e_i   (15.22)

Thus, replacing δ_R in Equations (7.21) and (7.23), one obtains:

Δx = −Q_R C_R^T (C_R Q_R C_R^T)^{−1} C_R δ_R   if x_i is measured
Δx = C_R δ_R                                    otherwise        (15.23)


Then, a desired level of resiliency is fixed for variable x_i, requesting that Δx_i be lower than a certain threshold r_i*. Sometimes this threshold can be quantified in cost terms (e.g., loss of product). Thus, the resilience criterion becomes:

τ_i(k, κ_R) ≤ r_i*   (15.24)

where

τ_i(k, κ_R) = Max_{∀M_δ} | e_i^T Q_R C_R^T (C_R Q_R C_R^T)^{−1} C_R ( Σ_{j∈M_δ} σ_j e_j ) | κ_R   (15.25)

if variable x_i is measured, and

τ_i(k, κ_R) = Max_{∀M_δ} | e_i^T C_R ( Σ_{j∈M_δ} σ_j e_j ) | κ_R   (15.26)

otherwise.

ROBUST SENSOR NETWORKS

One might be tempted here to give a definition of what a robust sensor network is. In principle, such a network is one that features meaningful values of precision, precision availability, variable availability, error detectability and resilience. These five properties encompass the most desired features of a network. Indeed, precision and precision availability guarantee that data are always of the desired quality. Variable availability controls redundancy and reliability and allows the data to be at hand most of the time. Finally, error detectability and resilience make sure that the probability of the data being free of gross errors is large. Although the first two properties are deterministic and depend directly on the quality of the instruments selected, the other three are statistical in nature. A formal definition of sensor network robustness is intentionally not given, because many other features that a sensor network must have could be added. For example, by the time this book is being edited, neural networks, wavelet analysis, principal component analysis (PCA), partial least squares (PLS) and other techniques for process monitoring have been developed. For all these methods, very little analysis has been done of the goals that a sensor network design or upgrade procedure should pursue. It is expected that in the next few years, such methods will emerge. Thus, to keep the term alive, and just because it is ambitious, we simply define a robust sensor network as one in which all the existing network performance measures are below desired threshold values.
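Returning to the resilience property, the smearing computation in Equations (15.22), (15.23) and (15.25) reduces to a few matrix operations. The sketch below evaluates τ_i(1, κ_R) for the measured variables of an assumed two-node series network (unit variances, κ_R = 3), enumerating all gross error location sets M_δ:

```python
import numpy as np
from itertools import combinations

C = np.array([[1.0, -1.0, 0.0],   # two-node series network (assumed example)
              [0.0, 1.0, -1.0]])
Q = np.eye(3)                     # measurement covariance (unit variances)
sigma = np.sqrt(np.diag(Q))
kappa_R = 3.0
k = 1  # order of resilience

# Smearing matrix of Eq. (15.23) for measured variables:
# delta_x = -Q C^T (C Q C^T)^{-1} C delta_R
W = -Q @ C.T @ np.linalg.inv(C @ Q @ C.T) @ C

n = C.shape[1]
tau = np.zeros(n)
for locs in combinations(range(n), k):   # all gross error location sets M_delta
    delta_R = kappa_R * sum(sigma[j] * np.eye(n)[j] for j in locs)  # Eq. (15.22)
    tau = np.maximum(tau, np.abs(W @ delta_R))  # worst case over M_delta, Eq. (15.25)

print(tau)  # worst-case shift of each reconciled value
```

For this symmetric network, an undetected 3σ bias can shift any reconciled value by at most 2 flow units, which is the quantity the resilience constraint (15.24) would bound by r_i*.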


MINIMUM COST MODEL FOR ROBUST NETWORKS

We now generalize the minimum cost model given by Equation (14.21), adding resilience and error detectability.

Min Σ_{∀i} Σ_{k=1}^{n_i} (c_{P,i,k} + c_{M,i,k}) p_{i,k}

s.t.
  E_j(p) ≤ E_j*                                     ∀ j ∈ I_s
  σ_j(p) ≤ σ_j*                                     ∀ j ∈ M_p
  ψ_j(p, k_j) ≤ ψ_j*(k_j)                           ∀ j ∈ M_p
  A_{v,j}(p, t) ≥ A_j*                              ∀ j ∈ M_A
  ρ(m, α, β) σ_j / (σ_j² − σ̃_j²)^{1/2} ≤ κ_D       ∀ j ∈ M_D
  τ_j(k, κ_R) ≤ r_j*                                ∀ j ∈ M_R
  Σ_{k=1}^{n_i} p_{i,k} ≤ 1                         ∀ i
  p_{i,k} ∈ {0, 1}                                  ∀ i, ∀ k = 1, …, n_i        (15.27)

Example 15.1

We now concentrate on adding error detectability to Example 14.1. We will show that error detectability increases the precision of the sensor network and can even make the design infeasible. Consider adding an error detectability of κ_D = 3.9 (with γ = 50%) to the availability constrained problem of Example 14.1. Two solutions from a set of only four feasible solutions are found with cost c_T = 4800 (Table 15.1). If an error detectability of κ_D = 3.4 is requested for all measurements, the problem has only one solution, namely x_M = (1%, 3%, 1%, 1%), with cost c_T = 8300.

TABLE 15.1. Effect of Error Detectability Constraints (κ_D = 3.9).

x1    x2    x3    x4
1%    3%    —     2%
1%    3%    2%    —

Consider now the addition of resilience. If error detectability is requested at a level of 3.9 times the standard deviation of the measurement for all


FIGURE 15.2. Example 15.2.

measurements, and resilience is requested at a level of 3 times the standard deviation for all flow rate measurements, then the solution is again x_M = (1%, 3%, 1%, 1%), with cost c_T = 8300. Relaxing (increasing) the resilience levels while maintaining the error detectability at the same level may actually lead to solutions of higher cost, or even to infeasibility.

Example 15.2

Consider a network proposed by Madron and Veverka (1992) (Figure 15.2). It is of larger size and contains several measured variables. Originally, the problem was posed as selecting what measurements should be added to make streams S1 and S5 observable. Meyer et al. (1994) added the costs of Table 15.2. We will consider the flow rates of Table 15.3. Madron and Veverka report the solution x_M = {S2, S4, S8}, with a cost c_T = 35. Meyer et al. (1994) reported the same solution using their own cost data. If the requirement of precision is sufficiently relaxed, the same solution is found, that is, x_M = (S2, S4, S8). When flowmeters of 2.5% precision are used, this network has a precision of 27.8% in stream S5. Because this value is too high, resulting in poor monitoring performance, consider requiring a 2.5% precision on streams S1 and S5. The solution is x_M = {S2, S4, S5, S8}, with a cost c_T = 60. However, the network is nonredundant, so it does not have error detectability. If error detectability is requested at a level of κ_D = 3.9 (with γ = 50%) for flows S1 and S3, which are the larger flows, the solution obtained is x_M = {S1, S2, S4, S5, S6, S7}, with a cost of c_T = 90.

TABLE 15.2. Cost of Flowmeters.

Stream   1    2    3    4    5    6    7    8    9
Cost     19   17   13   12   25   10   7    6    5

Source: Meyer et al. (1994).


TABLE 15.3. Flow Rates.

Stream   Flow      Stream   Flow
1        140       13       10
2        20        14       10
3        130       15       90
4        40        16       100
5        10        17       5
6        45        18       135
7        15        19       45
8        10        20       30
9        10        21       80
10       100       22       10
11       80        23       5
12       40        24       45

NOMENCLATURE

a: vector of adjustments (difference between measured and reconciled values)
A_i^v(p, t): variable estimation availability as a function of p
A_i*(t): variable estimation threshold
c_{M,i,k}: cost of sensor k for variable i
c_P: purchase cost
c_T: total cost
C_R: incidence matrix for redundant systems
Cov(.): covariance
e_i: unit vector
E_i: estimability of variable i
E_k*: threshold of estimability
E(.): expected value
F_R^+: measured flow rate of stream i
F̃_R: estimated flow rate of stream i
M_A: set of key variables for availability
M_1: set of variables where sensors can be placed
M_E: set of key variables for estimability
M_P: set of key variables for precision
M_R: set of key variables for reliability
M_U: set of key variables compulsorily measured
M_δ: set of gross error locations
N(0,1): normal distribution
p_{i,k}: binary vector indicating whether sensor k is located (1) or not (0) in variable i
q: binary vector indicating whether sensors are located (1) or not (0) in variables
Q_R: variance matrix of redundant measurements
Q̃_R: variance matrix of estimates
r: residuals
S_i: stream i
x: vector of state variables
x_{α/2}: threshold value for measurement test
x_M: set of measured variables

Greek Letters α: β: γ: δ: δ∗: δD : δR : ξ: φ: χm2 : 2 χm,α : κD : κR : η: σ: σ ∗: ψi (q, k): ψi∗ (k):

confidence level for hypothesis testing auxiliary variable defined by Equation (15.25) probability of committing Type II error in hypothesis testing gross error smallest size of detectable gross error threshold for error detectability adimensional gross error nodal test statistics noncentrality parameter of the noncentral chi-squared distribution chi-squared distribution chi-squared distribution threshold adimensional gross errors for gross error detectability adimensional gross errors for gross error resilience measurement test statistics precision precision threshold residual precision of order k of variable i as a function of q residual precision threshold

REFERENCES

Albuquerque J. S. and L. T. Biegler. Data Reconciliation and Gross-Error Detection for Dynamic Systems. AIChE J., 42, 10, pp. 2841 (1996).
Bagajewicz M. Design and Retrofit of Sensor Networks in Process Plants. AIChE J., 43, 9, pp. 2300–2306 (1997).
Bagajewicz M. and Q. Jiang. Gross Error Modeling and Detection in Plant Linear Dynamic Reconciliation. Computers and Chemical Engineering, 22, 12, pp. 1789–1810 (1998).
Bagajewicz M. and Q. Jiang. A Mixed Integer Linear Programming-Based Technique for the Estimation of Multiple Gross Error in Process Measurements. Chemical Engineering Communications, 177, pp. 139–155 (2000).


Bagajewicz M., Q. Jiang and M. Sánchez. Removing Singularities and Assessing Uncertainties in Two Efficient Gross Error Collective Compensation Methods. Chemical Engineering Communications, to appear (2000).
Crowe C. M., Y. A. Garcia Campos and A. Hrymak. Reconciliation of Process Flow Rates by Matrix Projection. I. The Linear Case. AIChE J., 29, pp. 818 (1983).
Iordache C., R. Mah and A. Tamhane. Performance Studies of the Measurement Test for Detection of Gross Errors in Process Data. AIChE J., 31, pp. 1187 (1985).
Jiang Q. and M. Bagajewicz. On a Strategy of Serial Identification with Collective Compensation for Multiple Gross Error Estimation in Linear Data Reconciliation. Ind. & Eng. Chem. Research, 38, 5, pp. 2119–2128 (1999).
Jiang Q., M. Sánchez and M. Bagajewicz. On the Performance of Principal Component Analysis in Multiple Gross Error Identification. Ind. & Eng. Chem. Research, 38, 5, pp. 2005–2012 (1999).
Kao C. S., A. C. Tamhane and R. S. H. Mah. Gross Error Detection in Serially Correlated Process Data 2. Dynamic Systems. Ind. & Eng. Chem. Research, 31, pp. 254 (1992).
Madron F. Process Plant Performance: Measurement and Data Processing for Optimization and Retrofits. Ellis Horwood, West Sussex, England (1992).
Madron F. and V. Veverka. Optimal Selection of Measuring Points in Complex Plants by Linear Models. AIChE J., 38, 2, pp. 227 (1992).
Mah R. S. H. and A. C. Tamhane. Detection of Gross Errors in Process Data. AIChE J., 28, pp. 828 (1982).
Mah R. S. H. and A. Tamhane. Detection of Gross Errors in Process Data. AIChE J., 33, pp. 1514 (1987).
Meyer M. J., M. Le Lann, B. Koehret and M. Enjalbert. Optimal Selection of Sensor Location on a Complex Plant Using a Graph Oriented Approach. Computers and Chemical Engineering, 18, Suppl., pp. S535–S540 (1994).
Narasimhan S. and C. Jordache. Data Reconciliation & Gross Error Detection. Gulf Publishing Company, Houston, Texas (2000).
Narasimhan S. and R. S. H. Mah. Generalized Likelihood Ratio Method for Gross Error Identification. AIChE J., 33, 9, pp. 1514–1521 (1987).
Narasimhan S. and R. S. H. Mah. Generalized Likelihood Ratios for Gross Error Identification in Dynamic Processes. AIChE J., 34, pp. 1321 (1988).
Nogita S. Statistical Test and Adjustment of Process Data. Ind. Eng. Chem. Process Des. Develop., 2, pp. 197 (1972).
Renganathan T. and S. Narasimhan. A Strategy for Detection of Gross Errors in Nonlinear Processes. Ind. & Eng. Chem. Research, 38, 6, pp. 2391 (1999).
Ripps D. L. Adjustment of Experimental Data. Chem. Eng. Progr. Symp. Ser. No. 55, 61, pp. 8–13 (1965).
Rollins D. K. and J. F. Davis. Unbiased Estimation of Gross Errors in Process Measurements. AIChE J., 38, 4, pp. 563–572 (1992).
Rollins D. K. and S. Devanathan. Unbiased Estimation in Dynamic Data Reconciliation. AIChE J., 39, 8, pp. 1330 (1993).
Romagnoli J. and G. Stephanopoulos. On the Rectification of Measurement Errors for Complex Chemical Plants. Chem. Eng. Sci., 35, 5, pp. 1067–1081 (1980).
Rosenberg J., R. S. H. Mah and C. Iordache. Evaluation of Schemes for Detecting and Identifying Gross Errors in Process Data. Ind. & Eng. Chem. Research, 26, pp. 555 (1987).


Sánchez M. and J. Romagnoli. Data Processing and Reconciliation for Chemical Process Operations. Academic Press, San Diego, California (2000).
Sánchez M., J. Romagnoli, Q. Jiang and M. Bagajewicz. Simultaneous Estimation of Biases and Leaks in Process Plants. Computers and Chemical Engineering, 23, 7, pp. 841–858 (1999).
Tong H. and C. M. Crowe. Detection of Gross Errors in Data Reconciliation by Principal Component Analysis. AIChE J., 41, 7, pp. 1712 (1995).

CHAPTER 16

Genetic Algorithms

INTRODUCTION

The tree enumeration procedure presented in Chapter 6 can be used to solve all the sensor network design models. However, it works reasonably well only for relatively small problems and cannot handle large systems. Recently, Chmielewski et al. (1999) proposed an alternative method based on traditional MILP techniques. This chapter covers recent advances in the use of genetic algorithms to solve the problem. Other approaches, such as specially tailored MINLP solution techniques or the investigation of necessary conditions of optimality to reduce the integer search space, have not been investigated.

GENETIC ALGORITHMS

If the design problems are posed using binary variables, as in model (6.3), then the standard genetic algorithm (GA), as described in Goldberg (1989), can be applied. Sen et al. (1998) proposed a graph-based genetic algorithm for the design of sensor networks. The graph-theoretic operation used to design the genetic operators is the elementary tree transformation, or cyclic interchange (Deo, 1974), which generates a new spanning tree from an initial spanning tree by adding a chord and deleting a branch belonging to the fundamental circuit of the added chord. The genetic operators of crossover and mutation are implemented using this operation.


Selection

As in a standard genetic algorithm, one starts with an initial population of randomly chosen spanning tree solutions of the process graph. The operator for selecting the mating pool is the same as in a standard GA implementation, where selection is based on the value of the objective function (fitness) for each spanning tree. Sen et al. (1998) used the tournament selection strategy.

Crossover

Consider two spanning trees T1 and T2 that have one or more chords not in common. Let cj be a chord of T1 and a branch of T2. The following property can be proved (Deo, 1974):

• Let bi be one of the branches of T1 that is also an element of the fundamental circuit formed by cj. Then bi must be a chord of T2. (A circuit is a sequence of connected edges that starts and ends at the same node.)

The crossover operation is defined as the cyclic interchange of bi and cj (satisfying the above properties) in both trees T1 and T2. The crossover operation results in two new spanning trees (offspring) that have one more chord in common between them compared with T1 and T2. Because a chord set uniquely defines a spanning tree, the "transfer" of a chord from one spanning tree to another essentially transfers the "characteristics" of the spanning tree, which is the objective of the crossover operator in a GA. The exact implementation of the crossover operator between two spanning trees T1 and T2 is as follows:

—Step 1. Find the chords not common between T1 and T2.
—Step 2. Select, in sequence, an uncommon chord of T1, say cj, identified in step 1, and find a branch bi satisfying the property described above.
—Step 3. Generate a uniform random number between 0 and 1. If this random number exceeds pc, the crossover probability, then interchange bi and cj in both spanning trees. Update the chord sets of the spanning trees and their fundamental circuits.
—Step 4. If all chords identified in step 1 have been tried, then stop. Otherwise, return to step 2.

The two spanning trees obtained at the end of the above procedure are the offspring produced by crossover. This operation ensures that both offspring are feasible spanning trees. As an example, consider the two spanning trees of the graph of Figure 11.1, shown in Figure 16.1. These spanning trees are represented by the chord sets T1 = (S2, S5, S6) and T2 = (S6, S7, S8).
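A sketch of the cyclic-interchange operators on a small hypothetical four-node graph (not the graph of Figure 16.1): `crossover` follows steps 1–4 above, and `mutate` follows the mutation operator described later in this chapter. The swap is accepted here with probability pc, the common GA convention; the extra check that the chord lies in the branch's circuit of the second tree is one practical way to keep both offspring as spanning trees.

```python
import random

# Cyclic-interchange crossover and mutation on spanning trees stored as edge
# sets. The 4-node graph below is a small hypothetical example.
EDGES = {"e1": ("A", "B"), "e2": ("B", "C"), "e3": ("C", "D"),
         "e4": ("D", "A"), "e5": ("A", "C")}

def fundamental_circuit(tree, chord):
    """Edges of the circuit created when `chord` is added to spanning tree
    `tree`: the unique tree path between the chord's endpoints plus the chord."""
    u, v = EDGES[chord]
    stack, seen = [(u, [])], {u}
    while stack:
        node, path = stack.pop()
        if node == v:
            return set(path) | {chord}
        for e in tree:
            a, b = EDGES[e]
            if node in (a, b):
                nxt = b if node == a else a
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, path + [e]))
    raise ValueError("tree does not connect the chord's endpoints")

def crossover(t1, t2, pc=0.45):
    """Steps 1-4: for each chord of t1 that is a branch of t2, find a branch
    of t1 in its fundamental circuit that is a chord of t2 (and whose circuit
    in t2 contains the chord, so both swaps give spanning trees), then swap."""
    t1, t2 = set(t1), set(t2)
    for cj in sorted((set(EDGES) - t1) & t2):
        if cj in t1 or cj not in t2:          # roles changed by an earlier swap
            continue
        for bi in sorted(fundamental_circuit(t1, cj) - {cj}):
            if bi not in t2 and cj in fundamental_circuit(t2, bi):
                if random.random() < pc:      # interchange with probability pc
                    t1 = (t1 - {bi}) | {cj}
                    t2 = (t2 - {cj}) | {bi}
                break
    return t1, t2

def mutate(tree, pm=0.03):
    """For each chord, with probability pm, interchange it with a randomly
    selected branch of its fundamental circuit."""
    tree = set(tree)
    for chord in sorted(set(EDGES) - tree):
        if chord not in tree and random.random() < pm:
            branch = random.choice(sorted(fundamental_circuit(tree, chord) - {chord}))
            tree = (tree - {branch}) | {chord}
    return tree
```

Because a cyclic interchange deletes a branch from the very circuit that the added chord closes, every offspring and every mutant is guaranteed to remain a feasible spanning tree.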


FIGURE 16.1. Crossover operation: parent solutions.

The number of chords in each set not common to the two trees is equal to two. Indeed, the set of chords in T1 and not in T2 is (S2, S5), whereas the set of chords in T2 but not in T1 is (S7, S8). Let chord S2 of T1 be selected and approved as a candidate for crossover. The fundamental circuit formed by chord S2 in T1 is (S2, S3, S4, S8, S1). It can be deduced from the two spanning trees that the chord-branch pair (S2, S8) of tree T1 occurs as a branch-chord pair in T2. Thus, by interchanging S2 and S8 in both trees, one obtains two new trees, T1′ = (S5, S6, S8) and T2′ = (S2, S6, S7), as shown in Figure 16.2. It can be observed that although the parents have two uncommon chords, each offspring has only one chord uncommon with either of its parents. Thus, both offspring are a cross between their parents, having some chords drawn from each parent.

FIGURE 16.2. Crossover operation: children solutions.

Mutation

A cyclic interchange is also used to implement the mutation operation. For each fundamental circuit of each spanning tree, depending on the mutation probability, a cyclic interchange with a randomly selected branch of the fundamental circuit is carried out. For example, if for chord S6 of the spanning tree shown in Figure 16.1(a) the mutation probability is exceeded, then a randomly selected branch from its fundamental circuit (S6, S1, S2, S3, S4, S7), say branch S2, is selected for interchange. The chords of the resulting tree after mutation are (S2, S5, S8), as shown in Figure 16.3.

FIGURE 16.3. Mutant spanning tree.

Both crossover and mutation operators allow adequate propagation of building blocks and help genetic algorithms quickly find the optimal solution without digressing into many infeasible solutions (Goldberg, 1989).

Extensions to the Algorithm

The algorithm proposed in the previous section is useful for designing a sensor network that optimizes any one selected criterion. In practice, it is necessary to optimize several performance measures simultaneously. Sen et al. (1998) suggested the use of the modified genetic algorithms for multiobjective optimization proposed by Srinivas and Deb (1995). In addition, the above crossover and mutation operators are useful for the design of nonredundant networks.

Objective Functions

Different objective functions can be used for the design of these minimal networks. The four shown in Table 16.1 will be tried in the example below. For the case of maximum precision, ki is the number of occurrences of chord i in all the fundamental cutsets of the spanning tree.

Example 16.1

The above algorithm was tested on the example of the steam-metering network shown in Figure 11.9. The data for this network were obtained from Sen et al. (1998), who in turn quote Bansal et al. (1994), and are reproduced in


TABLE 16.1. Objectives Used in Conjunction with GA.

Cost                          Σ i∈chords ci
System reliability            Π i∈chords (1 − Ris)
Maximum branch reliability    min i∈branches Ri
Maximum precision             Σ i∈chords (ki + 1)σi²
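A sketch of how the four objectives of Table 16.1 can be evaluated for a candidate chord set. All numbers, the cutset counts ki, and the branch reliabilities are illustrative assumptions, not the steam-metering data of Table 16.2, and the reliability expression is one reading of the table entry (the probability that no chord sensor fails):

```python
# Evaluating the four objectives of Table 16.1 for a candidate chord set.
# All sensor data, cutset counts k_i and branch reliabilities below are
# illustrative assumptions.
chords = ["S1", "S2", "S3"]
branches = ["S4", "S5"]
cost = {"S1": 3.7, "S2": 4.5, "S3": 132.2}            # sensor costs
fail_prob = {"S1": 0.141, "S2": 0.174, "S3": 0.104}   # sensor failure probabilities
variance = {"S1": 0.0215**2, "S2": 0.025**2, "S3": 2.8**2}
k = {"S1": 2, "S2": 1, "S3": 3}          # occurrences of chord i in the
                                         # fundamental cutsets (assumed given)
branch_rel = {"S4": 0.93, "S5": 0.88}    # branch estimate reliabilities (assumed)

total_cost = sum(cost[i] for i in chords)
# one reading of the reliability entry: probability that no chord sensor fails
system_reliability = 1.0
for i in chords:
    system_reliability *= 1.0 - fail_prob[i]
inaccuracy = sum((k[i] + 1) * variance[i] for i in chords)
min_branch_reliability = min(branch_rel[i] for i in branches)

print(total_cost, system_reliability, inaccuracy, min_branch_reliability)
```

Each candidate spanning tree in the GA population is scored this way, and the fitness used for selection is the chosen objective.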

Table 16.2. The parameters of the GA used in all simulation runs are given in Table 16.3. Although these parameters were not optimized, they were suitably selected after a few trial runs. These parameter values maintained the diversity of the population and prevented premature convergence to a suboptimal solution.

The GA was applied to optimize the sensor network for each of the above four objectives individually. A summary of the simulation results is given in Table 16.4. The first column gives the objective function used in the simulation. The second column shows the chords of the spanning tree corresponding to the best solution obtained in the final population. The last four columns give the values of the four criteria for this solution. For all four objective functions, the best solution obtained corresponds to the global optimum, as verified through explicit generation of all 10^6 spanning trees of the process graph. In the implementation of the GA, about 50 × 10 = 500 spanning trees are evaluated, which corresponds to less than 0.05% of the total number of feasible solutions. Furthermore, for the objective functions of cost, system reliability and accuracy there is a unique global optimum solution, whereas for the minimum branch reliability objective five different spanning tree solutions correspond to the global optimum. Despite the fact that there is a unique (or, in the case of minimum branch reliability, few) global optimum solution, the GA has been able to identify it.

It can also be observed that there exists a trade-off between the different criteria. The minimum cost solution (first row) has a low reliability, whereas the one with maximum reliability has high cost and low accuracy. This points to the need for multiobjective optimization to obtain adequate trade-offs. In Chapter 8, a special type of duality embedded in all these problems was pointed out. For example, a minimum cost network is a dual of a maximum precision network. Likewise, the same type of duality exists between cost and reliability. It is no coincidence that many of the solutions obtained have the same value of these parameters, as seen in the last column of Table 16.4. Sen et al. (1998) also compared the efficiency of the GA with graph-theoretic algorithms for maximizing the minimum branch reliability or accuracy. To


TABLE 16.2. Data for Steam Metering Network.

Variable   Failure Probability   Standard Deviation of Error   Cost
S1         0.141                 0.0215                        3.7
S2         0.174                 0.025                         4.5
S3         0.104                 2.8                           132.2
S4         0.07                  2.749                         129.2
S5         0.174                 1.332                         65.3
S6         0.096                 2.807                         132.4
S7         0.164                 0.058                         5.0
S8         0.165                 4.101                         193.9
S9         0.055                 0.0215                        2.06
S10        0.099                 1.31                          62.8
S11        0.153                 0.3715                        20.2
S12        0.092                 1.682                         80.0
S13        0.052                 2.782                         130.4
S14        0.154                 2.296                         109.8
S15        0.095                 1.5                           71.6
S16        0.116                 0.591                         29.7
S17        0.077                 0.8182                        39.5
S18        0.08                  0.4057                        20.4
S19        0.099                 0.1987                        11.1
S20        0.074                 0.2625                        13.6
S21        0.071                 2.182                         102.9
S22        0.094                 0.1362                        8.1
S23        0.170                 0.0648                        6.3
S24        0.066                 1.166                         55.5
S25        0.088                 2.136                         101.0
S26        0.143                 2.033                         93.7
S27        0.128                 1.769                         84.7
S28        0.075                 1.806                         85.4

assess this efficiency, they used the number of sensor network design solutions evaluated (function evaluations). The algorithm for maximizing accuracy developed by Bansal et al. (1994) requires 30–70 function evaluations, depending on the initial spanning tree used, whereas the algorithm by Ali and Narasimhan (1993) for maximizing the minimum branch reliability requires

TABLE 16.3. Parameter Values of GA.

Population size           40–60
Number of generations     10
Crossover probability     0.45
Mutation probability      0.03
Tournament size           10% of population size

TABLE 16.4. Simulation Results of GA.

                                                              Performance Criteria
Objective          Optimal Spanning Tree              Cost     System        Inaccuracy   Minimum Branch
                                                               Reliability                Reliability
Cost               S1 S2 S7 S9 S10 S11 S16–S24        555.5    0.141         85           0.265
                   S27 S28
System             S3 S4 S6 S9 S10 S12 S13 S15        1178.2   0.235         267.86       0.265
reliability        S17–S22 S24 S25 S28
Inaccuracy         S1 S2 S7 S9 S10 S11 S16–S24        555.5    0.141         85           0.265
                   S27 S28
Minimum branch     S5 S8–S11 S13 S15 S17 S22–S28      1281.9   0.144         242.26       0.528
reliability

50 function evaluations. On the other hand, the GA requires about 500 function evaluations. However, the GA also provides near-optimal solutions, which may be useful in practice, in addition to the fact that it is applicable to objectives other than reliability.

NOMENCLATURE

bx: branch x of a spanning tree
cp: chord p of a spanning tree
ci: cost of sensor i
Ris(t): sensor service reliability
Si: stream i
T: spanning tree
Ui: unit i
x: vector of state variables

Greek Letters

σ: precision

REFERENCES

Ali Y. Sensor Network Design for Maximizing Reliability of Processes. Ph.D. Thesis, Indian Institute of Technology, Kanpur, India (1993).
Bansal P., Y. Ali and S. Narasimhan. Sensor Network Design for Linear Processes. Proceedings of IFAC Workshop on Integration of Process Design and Control, Baltimore, Maryland (1994).


Chmielewski D., T. E. Palmer and V. Manousiouthakis. Cost Optimal Retrofit of Sensor Networks with Loss Estimation Accuracy. AIChE Annual Meeting, Dallas (1999).
Deo N. Graph Theory with Applications to Engineering and Computer Science. Prentice Hall, Englewood Cliffs, NJ (1974).
Goldberg D. E. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA (1989).
Sen S., S. Narasimhan and K. Deb. Sensor Network Design of Linear Processes Using Genetic Algorithms. Comp. & Chem. Eng., 22, 3, pp. 385–390 (1998).
Srinivas N. and K. Deb. Multi-Objective Function Optimization Using Non-Dominated Sorting Genetic Algorithms. Evolutionary Computation, 2 (3), pp. 221–248 (1995).

CHAPTER 17

Design of Sensors Networks for Process Fault Diagnosis

INTRODUCTION

Alarm systems differ from normal monitoring systems. Although the basic goal of monitoring systems is to provide a good estimate of the state of the system, alarm systems are designed to detect process malfunction. In turn, process faults, which typically have a single source in some unit, propagate throughout the process, altering the readings of instruments (pressures, temperatures, flow rates, etc.). Thus, these sensors should be able to detect departures from normal operation. In this sense, this task is different from that of gross error detection, which concentrates on instrument malfunction instead. As a consequence, discriminating process faults from instrument malfunction is an additional task of the alarm system. Thus, the problem of designing an alarm system consists of determining the cost-optimal position of sensors such that all process faults, single or multiple and simultaneous, can be detected and distinguished from instrument malfunction (biases).

FAULT DETECTION, DIAGNOSIS AND ALARMS

A process fault is a departure from an acceptable range of operation or "degradation from normal operation conditions, and includes symptoms of a physical change (such as deviations in measured temperature or pressure) as well as the physical changes themselves (scaling, tube plugging, etc.) and deviations in parameters (such as a heat transfer coefficient)" (Wilcox and Himmelblau, 1994a). Faults originate in a process and propagate to a set of sensors (Figure 17.1). These sensors are also subject to faults themselves, which are either biases or catastrophic failures. In the latter case, the fault detection


FIGURE 17.1. Fault diagnosis and alarm system.

is compromised, whereas in the former case the process fault can either go undetected or induce false alarms. Therefore, a good alarm system should be able to filter out the disturbances affecting the sensors and the gross errors induced by their faults. The next step is the process fault detection itself, although these two steps can be performed simultaneously. This procedure also needs some capability of distinguishing process disturbances from real faults. Once a process fault has been identified, the final step consists of taking corrective actions or determining a shutdown of the process. This is performed by implementing an alarm logic.

Fault detection and diagnosis have been addressed in several books (Himmelblau, 1978; Pau, 1981; Gertler, 1998). We briefly review some of the different approaches, putting emphasis on those that have later been used in sensor location methodologies.

Fault Trees

Fault trees have been used to build a library of diagnostic symptoms. A set of undesired events (top events) is selected. For each top event, a digraph is developed modeling the interaction between variables, and later the fault tree itself is built. Lapp and Powers (1977) presented an algorithm to build the fault tree automatically. These trees are composed of AND and OR logical gates connecting the top events to their causes. We briefly illustrate fault tree analysis with a centrifugal pump (Figure 17.2). This system consists of a pump, pressure and flow indicators, a check valve and a control valve. Consider the event consisting of backflow to the pump feed. The corresponding fault tree is shown in Figure 17.3. By assigning probabilities to events, failure rates, mean times to repair and frequencies of events that are certain, the failure rate of the top event can be


FIGURE 17.2. Centrifugal pump system.

calculated by combining probabilities across the logical gates, that is, P(A1 ∪ A2) for an OR gate (A1 OR A2) and P(A1 ∩ A2) for an AND gate (A1 AND A2). The probabilities are calculated using the known formulas for the probability of a union and of an intersection of sets. In addition to the failure rate of the top event, one can determine the minimal cutsets, that is, the smallest sets of events whose joint occurrence causes the top event. Qualitatively, the fewer the minimal cutsets and the larger the number of members in each minimal cutset, the safer the process. Fault trees have been used to perform risk analysis and suggest flowsheet modifications. The reader can consult a variety of such trees for the pump system of Figure 17.2 in Aelion and Powers (1993). Hazard identification can also be performed by using a model-based approach (Catino and Ungar, 1995).
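The gate-by-gate probability combination can be sketched as follows, assuming independent basic events (so the OR gate becomes 1 − Π(1 − Pi) and the AND gate Π Pi). The tree and its probabilities below are a simplified, made-up stand-in for Figure 17.3:

```python
# Recursive evaluation of a fault tree's top-event probability, assuming
# independent basic events. The gate structure and numbers are illustrative.
def top_event_probability(node, p_basic):
    kind = node[0]
    if kind == "event":
        return p_basic[node[1]]
    probs = [top_event_probability(child, p_basic) for child in node[1]]
    if kind == "AND":                 # P(A1 ∩ A2 ∩ ...) = Π P(Ai)
        out = 1.0
        for p in probs:
            out *= p
        return out
    if kind == "OR":                  # P(A1 ∪ A2 ∪ ...) = 1 - Π (1 - P(Ai))
        out = 1.0
        for p in probs:
            out *= 1.0 - p
        return 1.0 - out
    raise ValueError(kind)

# Backflow occurs if the check valve sticks open AND (the pump stops OR the
# control valve fails) -- a simplified stand-in for Figure 17.3.
tree = ("AND", [("event", "check_valve_stuck"),
                ("OR", [("event", "pump_stops"),
                        ("event", "control_valve_fails")])])
p = {"check_valve_stuck": 0.01, "pump_stops": 0.05, "control_valve_fails": 0.02}
prob = top_event_probability(tree, p)
print(prob)
```

Here the single minimal cutsets are {check valve stuck, pump stops} and {check valve stuck, control valve fails}; larger cutsets mean more simultaneous failures are needed, hence a safer process.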

FIGURE 17.3. Fault tree for backflow to feed event in Figure 17.2.


FIGURE 17.4. Two tanks in series.

presented in Figure 17.4, which is adapted from Wilcox and Himmelblau (1994a). The digraph shown in Figure 17.4(b) is similar to the one presented in Chapter 5 for a cyclohexane plant (Figure 5.1). The nodes of the digraph are the units, and the edges are the streams. To represent the level changes (increasing in the first tank and decreasing in the second), pseudostreams are added.

Consider now another type of digraph in which the nodes represent the streams of the process and the edges represent the influence that one exerts on the other. Such a digraph is shown in Figure 17.5. The influence can be either positive or negative, indicating that a positive change in one node will result in a positive or negative change in the other. In this example, a positive change in a flow rate has a positive effect on the level of the tank into which it discharges, that is, it will increase it, but it has a negative effect on the level of the tank from which it comes. This is represented by the plus and minus signs (+ and −) on the edges of the digraph. The same can be said about the level: it has a positive effect on the flow rate of the stream leaving the tank and no effect on the one discharging into it. More formally, a signed digraph is obtained by adding a function that maps the branches of the graph to the set {+, −}. Thus, in the SDG, the branches of the graph represent the immediate influences between variables, and the function indicates the sign of those influences. This representation of positive and negative influences between variables in a process is due to Iri et al. (1979, 1980).

FIGURE 17.5. Signed directed digraph.


TABLE 17.1. A Possible Pattern for Two Tanks in Series.

Node ni          F0   L1   F1   L2   F2
Pattern p(ni)    +    +    +    0    0

The state of a system is described qualitatively by a pattern p, which is a function from the nodes of the graph to the set {+, 0, −}. A node mapping to +, 0, or − indicates that the corresponding process variable is high, normal or low, respectively. Table 17.1 describes a possible pattern for the two tanks in series. In this pattern, the flow rate F0, the level L1 and the flow rate F1 are higher than normal, whereas the level of tank 2, L2, and its flow rate F2 remain normal. Two additional definitions are required to understand how the signed digraph is used for diagnosis: valid nodes and consistent branches.

Definition: A valid node in the signed digraph is any node that is abnormal (i.e., p maps the node to + or −).

Definition: A consistent branch bk is a branch whose sign multiplied by the signs of its initial and terminal nodes is positive.

For example, in the case of the pattern of Table 17.1, the branch connecting nodes F0 and L1 is consistent because the branch has a positive sign and the two nodes also have a positive state. However, the branch connecting nodes F1 and L2 is not consistent because the state of node L2 is zero. Thus, a consistent branch can be thought of as a consistent path for the propagation of the influence of its initial node to its terminal node. On the basis of the state of the system and the signed digraph (SDG), one can construct the cause-effect graph (CEG). For the case of the two tanks in series, given the pattern in Table 17.1 and the signed digraph of Figure 17.5, the corresponding CE graph is shown in Figure 17.6. The arc from F1 to L1 is inconsistent on the basis of the assumption that an "effect cannot compensate for its own cause" (Kramer and Palowitch, 1987) and has been cut in the CE digraph. To aid in the fault identification algorithms, a special type of node is defined next.

FIGURE 17.6. Cause-effect (CE) digraph.
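The valid-node and consistent-branch checks for the two-tank example can be sketched as follows, encoding the pattern {+, 0, −} as {+1, 0, −1} and using the branch signs described in the text for Figure 17.5:

```python
# Valid nodes and consistent branches for the two-tank SDG of Figure 17.5,
# with the pattern of Table 17.1. Signs {+, 0, -} are encoded as {+1, 0, -1}.
branches = {                      # (initial node, terminal node) -> branch sign
    ("F0", "L1"): +1, ("L1", "F1"): +1, ("F1", "L1"): -1,
    ("F1", "L2"): +1, ("L2", "F2"): +1, ("F2", "L2"): -1,
}
pattern = {"F0": +1, "L1": +1, "F1": +1, "L2": 0, "F2": 0}   # Table 17.1

valid_nodes = [n for n, s in pattern.items() if s != 0]

def is_consistent(branch):
    """A branch is consistent when its sign times the signs of its initial
    and terminal nodes is positive."""
    u, v = branch
    return branches[branch] * pattern[u] * pattern[v] > 0

consistent = [b for b in branches if is_consistent(b)]
print(valid_nodes, consistent)
```

Note that the F1 → L1 branch drops out automatically: its negative sign times two positive node states is negative, matching the "effect cannot compensate for its own cause" cut in the CE digraph, and the branches into the normal (zero-state) nodes L2 and F2 drop out as well.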


Definition: A maximally strongly connected component (MSCC) in the SDG consists of a node or a cycle with no input arcs (Iri et al., 1979).

For example, in the case of the SDG of Figure 17.6, the node F0 is a maximally strongly connected component. Thus, in a CE digraph, the faults are assumed to originate from an element in one of the MSCCs (Iri et al., 1979). The assumption that only nodes in the maximally strongly connected components can be the root causes of faults is a trivial one that can be made without any loss of generality (Raghuraj et al., 1999). Indeed, if there is a fault node that is not a maximally strongly connected component, then a pseudonode with just one output arc can be attached to this fault node; the pseudonode then becomes the fault node corresponding to the original one. If the single-fault assumption is valid (one fault at a time) (Iri et al., 1979; Kramer and Palowitch, 1987), then a single MSCC exists in the CE digraph.

Every fault in the process has its own rooted digraph and valid nodes in the SDG, representing the process variables that are influenced by the particular fault. Thus, the problem of sensor location for fault observability can be viewed as a systematic procedure of identifying the root nodes corresponding to all the faults and placing the sensors on the measurable valid nodes in the SD graph. The partial pattern obtained from these sensors then helps in detecting symptoms of every fault with a given resolution.

In a series of articles, Iri et al. (1979), Umeda et al. (1980) and Shiozaki et al. (1985) proposed the use of implicit enumeration to assign qualitative states to each unmeasured node in the SDG. A depth-first search algorithm (Tarjan, 1972) is used to locate maximally strongly connected components (MSCCs). If a single MSCC is found, the enumeration continues until all nodes have a qualitative state assigned.
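Locating MSCCs amounts to a standard strongly-connected-components pass followed by a filter for components with no incoming arcs. A sketch using Kosaraju's algorithm (not the specific Tarjan-based implementation of the cited papers) on an illustrative chain digraph loosely based on the two-tank example:

```python
# Locating maximally strongly connected components (MSCCs): strongly
# connected components with no arcs entering from outside the component.
def sccs(nodes, arcs):
    """Kosaraju's algorithm: returns (components, node -> component label)."""
    adj = {n: [] for n in nodes}
    radj = {n: [] for n in nodes}
    for u, v in arcs:
        adj[u].append(v)
        radj[v].append(u)

    order, seen = [], set()
    def dfs1(n):                       # first pass: finish order on the graph
        seen.add(n)
        for m in adj[n]:
            if m not in seen:
                dfs1(m)
        order.append(n)
    for n in nodes:
        if n not in seen:
            dfs1(n)

    comp = {}
    def dfs2(n, label):                # second pass: on the reversed graph
        comp[n] = label
        for m in radj[n]:
            if m not in comp:
                dfs2(m, label)
    for n in reversed(order):
        if n not in comp:
            dfs2(n, n)

    groups = {}
    for n, label in comp.items():
        groups.setdefault(label, set()).add(n)
    return list(groups.values()), comp

def msccs(nodes, arcs):
    components, comp = sccs(nodes, arcs)
    return [c for c in components
            if not any(comp[u] != comp[v] and v in c for u, v in arcs)]

# Illustrative chain digraph loosely based on the two-tank example:
nodes = ["F0", "L1", "F1", "L2", "F2"]
arcs = [("F0", "L1"), ("L1", "F1"), ("F1", "L2"), ("L2", "F2")]
print(msccs(nodes, arcs))
```

For this chain only F0 qualifies, matching the observation above that F0 is the maximally strongly connected component and hence the candidate root cause.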
If more than one MSCC is found, the enumeration backtracks to change the last qualitative state assignment. This procedure is computationally very expensive. Finally, Chang and Hwang (1992, 1994) developed a digraph-based approach for the synthesis of fault trees.

Rule-Based Approach

Rule-based algorithms are constructed by using SDG representations. Kramer and Palowitch (1987) proposed the determination of "if-then" rules involving tests of the measured variables only. These rules are developed off-line by using the SDG. Two rules are produced for each node, one testing whether the +1 state of the node is a failure and one testing the −1 state. This method has nevertheless been challenged on several fronts (Mohindra and Clark, 1993). First, some inconsistencies appear when handling control loops, which generate a path from a nonzero state at a disturbance variable, through a zero state at a controlled variable, to a nonzero state at the controller input. Second, little


guidance is given on how to remove unmeasured nodes, and finally, no proof is given that the results are identical to those produced by other methods. Chang and Yu (1990) addressed the same task by simplifying the SDG according to states. Finally, Mohindra and Clark (1993) introduced the concept of the path-SDG (PSDG), which consists of measured nodes only. This work addresses the computationally expensive task of finding all MSCCs and the disadvantages of the method proposed by Kramer and Palowitch (1987). They propose to find single-node and single-loop strongly connected components by partitioning the PSDG into "fragments." A distributed reasoning strategy is used to determine the faults.

Failure Propagation Networks

Kokawa and Shingai (1982) and Kokawa et al. (1983) proposed to construct a digraph where the nodes are the devices and the arcs indicate the direction of propagation of failure. Failure propagation times and failure probabilities are also used. A backward search from the abnormal nodes to the source nodes is performed. However, the state of all nodes is assumed to be known, and cycles cannot be present. Picking up on the deficiencies of the work by Kokawa et al., Qian (1990) proposed to use a failure propagation graph using OR nodes and arrows. The OR nodes represent the devices of the plant or the failure modes of these devices. The arrows show the failure propagation direction between adjacent nodes. Branches are then classified into several types according to the interaction between the nodes, indicating the positive or negative abnormality condition of the pointed node. Certain rules are then outlined to indicate how a consistent branch can be identified so that backtracking from an abnormal node is properly performed. Failure propagation time is not required, and the failure probability is replaced by a degree of confidence.

Neural Networks

Neural networks have very good abilities to "learn" and recognize patterns.
Thus, a neural network can recognize deviations from a pattern of values of process variables and detect and possibly diagnose a fault. Venkatasubramanian and Chan (1989) compared such an approach with the performance of a rule-based methodology. Watanabe et al. (1989) used a two-stage neural network efficient only for single faults. Fan et al. (1993) proposed orthonormal transformations of the inputs, thus allowing the successful identification of multiple fault causes and their severity. Finally, Watanabe et al. (1994) provided a long list of applications of artificial neural networks for fault detection and diagnosis. They also proposed a hierarchical artificial neural network structure for detecting simultaneous "slight" faults with the same degree of deviation in the process variable. Kavuri and Venkatasubramanian (1993a) used a hidden layer of

322

DESIGN OF SENSORS NETWORKS FOR PROCESS FAULT DIAGNOSIS

ellipsoidal activation functions to obtain bounded regions of process faults and later fuzzy clustering techniques to improve the fault classification (Kavuri and Venkatasubramanian, 1993b). Chen et al. (1998) used a wavelet transform to extract features of the transients, which are later processed by a neural network. Finally, variable structure neural networks have been recently proposed (Luan et al., 1998). Knowledge-Based Approach Knowledge-based systems have also been proposed for fault diagnosis. Kramer (1987) introduced nonboolean reasoning in expert systems. Venkatasubramanian and Rich (1988) proposed an expert system that integrates compiled with deep level knowledge, which is in turn based on constraints (material and energy balances), confluence equations (qualitative differential equations that represent the influence of one variable on another), fault models to explain local causes of faults and causal models of process units to indicate the local effect of unit failures on variable. Fathi et al. (1993) presented an integrated approach of analytical redundancy obtained by using extended Kalman filters with compensation and a knowledge base organized on the basis of malfunction hierarchy. Su and Fan (1993) proposed a rule-based expert system integrated with a neural network, which also makes use of wavelet decomposition. Finally, Leung and Romagnoli (1998) proposed an expert system based on possible cause and effect graphs (PCEG) with Bayesian belief networks and applied it successfully to a distillation column. Model-Based Approach Quantitative models used for fault detection and isolation are usually linear models. Several review articles and books have been devoted to cover all the advances. Among the most important review articles are the ones by Gertler (1988), Frank (1990) and Patton (1995) and the books by Himmelblau (1978), Patton et al. (1989) and Gertler (1998), among others. 
Quantitative modeling uses a dynamic model of the system being monitored to generate estimates of measured and unmeasured variables and parameters. In turn, estimates of the measured variables are contrasted with the measurements, generating error signals known as residuals. Patton (1995) offers the explanatory diagram of Figure 17.7. The residual generation, the decision making and the operator are assisted by a knowledge base. There are four approaches to residual generation:
—Kalman filter: The prediction error of the Kalman filter can be used as a residual. Because these residuals are white, diagnosis can be performed by using statistical tests. This is the most common approach.


FIGURE 17.7. Quantitative model-based fault diagnosis. Adapted from Patton, 1995.

—Diagnostic observers: Observer innovations are used as residuals, which are decoupled from disturbances with "unknown input" design techniques. Residuals are not white, so statistical tests are somewhat more complex.
—Parity relations: These are rearranged input-output equations. Residuals generated by this technique are also colored. These relations generate results identical to the observer-based techniques when the residuals are generated for the same specification (Gertler, 1998).
—Parameter estimation: According to Gertler (1998), this is the natural approach to fault detection and isolation. A reference model is used, identifying the plant in a fault-free situation. Parameters that are reidentified on-line are used to generate residuals. A weak relation between parity relations and parameter estimation has also been found (Gertler, 1998).
The literature on this approach to fault detection and diagnosis is very large, because it is one of the approaches that has received the most attention. Because sensor location techniques have not been proposed using these tools, this area is not covered in any more detail, and the reader is referred to the review by Patton (1995) and the book by Gertler (1998) for additional material.
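As a rough illustration of the Kalman-filter route to residual generation (this sketch is not from the original text; the scalar random-walk model, the noise variances and the injected bias are all assumptions made for the example), the normalized innovations stay near zero until a sensor fault appears:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar random-walk process x_k = x_{k-1} + w_k, measured as z_k = x_k + v_k.
# The model and both noise variances are assumed purely for illustration.
q, r = 0.01, 0.25        # process and measurement noise variances
x_hat, p = 0.0, 1.0      # state estimate and its error covariance
x_true = 0.0
res = []                 # normalized innovations (the residuals)

for k in range(300):
    x_true += rng.normal(0.0, np.sqrt(q))
    z = x_true + rng.normal(0.0, np.sqrt(r))
    if k >= 150:
        z += 2.0                   # sensor bias fault injected at k = 150
    p += q                         # time update (state is a random walk)
    nu, s = z - x_hat, p + r       # innovation and its variance
    res.append(nu / np.sqrt(s))    # approximately N(0, 1) while fault-free
    kg = p / s                     # Kalman gain
    x_hat += kg * nu               # measurement update
    p *= 1.0 - kg

# Whiteness breaks right after the fault: the first post-fault residuals sit
# several standard deviations from zero before the filter absorbs the bias.
print(np.mean(res[:150]), np.mean(res[150:160]))
```

Because the fault-free innovations are white with unit variance after normalization, a simple statistical test on their mean flags the bias shortly after it appears.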


Multivariate Statistical Methods

In process monitoring it is customary to arrange the measurements into two sets of data, process variables (X) and product variables (Y). It is further assumed that there is a relation between them. Principal component analysis (Jackson, 1991) relies on an eigenvector decomposition, which usually finds that data can be described with far fewer factors than the original variables, simplifying the monitoring task. PCA models have been combined with univariate statistical testing or scoring. Multivariate regression modeling includes inverse least squares, principal component regression and partial least squares (PLS) (Wise and Gallagher, 1995). PLS is similar to PCA, except that it simultaneously reduces the dimensions of process and product variables (X and Y), whereas PCA reduces only the process variables. Of the three, PLS is the most popular. In both cases, the monitoring task consists of performing tests on the loadings and latent scores, determining whether they are out of range (Kresta et al., 1991). Both methods and their application to process monitoring are excellently reviewed by Kresta et al. (1991) and Wise and Gallagher (1995). A good example of PLS use for fault detection is provided by Vinson and Ungar (1994). Finally, Gertler et al. (1998) recently showed the connections between PCA and parity relations.
Extensions of these two techniques have flourished in recent years. McGregor et al. (1994) used multiblock PLS methods to analyze processes with subsections. Dunia and Qin (1998) recognize that each multidimensional fault is described in a subspace on which the displacement occurs. They propose a reconstruction procedure that enables fault isolation. Extensions to nonlinear cases have been presented by Qin and McAvoy (1992), who embedded neural networks into a PLS framework. Dynamic systems have been addressed by Lakshminarayanan et al. (1997). Dong and McAvoy (1996) used nonlinear PCA based on neural networks, and Jia et al. (1998) improved on the same idea by using nonlinear PCA based on input-training neural networks. Shao et al. (1998) used a combination of wavelet filtering followed by nonlinear PCA, control limits and contribution plots. Vedam and Venkatasubramanian (1998) proposed to use signed directed graphs to identify root causes of the error in PCA model predictions. Finally, Qin (1998) proposed a recursive PLS method.

Other Techniques

In a recent workshop of IFAC (Lyon, France, 1998) some new approaches to the problem have been proposed. Semiquantitative model-based techniques using fuzzy rules were proposed by Ozyurt et al. (1998). Parallel evolutionary algorithms of the genetic type were proposed to perform parameter estimation to generate residuals (Marcu and Frank, 1998). Aggregative learning for neural networks was also presented (Akhmentov and Dote, 1998).


Statistical techniques and relationships between fuzzy systems and radial basis function networks were applied to prune a trained neural network and to extract qualitative rules (Gomm et al., 1998). Causal reasoning using fuzzy cognitive maps was used in a forward inference engine (Kim and Lee, 1998). Singular value decomposition was proposed to obtain least squares residuals (Han and Frank, 1998). Szigeti and Tarantino (1998) proposed to augment the state space with instrument and control dynamics and proposed a filter based on this model. A hierarchical procedure based on a triangular representation of process trends (Cheung and Stephanopoulos, 1990) in combination with principal component analysis was proposed by Cho and Han (1998). Stanley and Vaidhayanathan (1998) studied the use of generic fault propagation in causal directed graphs by an event correlator. Finally, Bensaker and Ouchene (1998) proposed to perform parameter estimation by using Poisson moment functionals and generating residuals.

Role of Sensor Failure

Until recently, most of the literature downplayed sensor fault detection, treating it as a filtering problem to be handled simultaneously with process fault detection. Dunia et al. (1996a,b) presented a PCA-based method to capture the correlation between the sensors, which can later be used for sensor fault identification. Busson et al. (1998) presented an analytical residual generation approach for a steam condenser that allows them to obtain signatures for sensor faults. These attempts, however, do not seem capable of distinguishing a sensor fault from a process fault. Zafiriou and coworkers (Naidu et al., 1989; Yao and Zafiriou, 1990) discussed the use of neural networks to detect sensor failures in control systems. Negiz and Cinar (1994) developed a multipass PLS technique that allows the elimination of incipient faulty sensors. Luo et al. (1998) presented a new approach to sensor validation in real time. In this approach, the signal is represented by wavelets, which are decomposed into different frequency ranges. The features at different frequencies are analyzed by using statistical testing. The authors claim that the physical changes in the sensor can be separated from the noise and the process changes. Vedam et al. (1998) used a B-spline-based algorithm that achieves data compression by denoising the data from each sensor. Later, they perform a trend analysis, detecting abnormal frequencies and changes in correlation among sensors, and perform a root-cause analysis. A knowledge base is used to distinguish between process changes and abnormalities in the sensors. These are very encouraging results, even though the techniques have only been tested on a few simulated examples. Quite recently, Dunia and Qin (1998) presented a unified PCA geometric approach for a unidimensional fault, which is capable of distinguishing a sensor from a process fault and seems to be amenable to generalization to multiple faults.
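The PCA idea behind several of the methods above can be sketched as follows. This is a minimal illustration and not any of the cited methods: the latent-factor data model, the noise level and the empirical 99% control limit on the squared prediction error (Q statistic) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fault-free training data: 5 correlated "sensors" driven by 2 latent factors.
# The factor model and noise level are made-up stand-ins for plant data.
n_samples, n_sensors, n_pc = 500, 5, 2
scores = rng.normal(size=(n_samples, n_pc))
loadings = rng.normal(size=(n_pc, n_sensors))
X = scores @ loadings + 0.05 * rng.normal(size=(n_samples, n_sensors))

mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:n_pc].T                       # retained principal directions

def spe(x):
    """Squared prediction error (Q statistic) of one sample."""
    xc = x - mu
    return float(np.sum((xc - P @ (P.T @ xc)) ** 2))

# Empirical 99% control limit from the fault-free training data.
limit = np.quantile([spe(x) for x in X], 0.99)

sample = scores[0] @ loadings + 0.05 * rng.normal(size=n_sensors)
faulty = sample.copy()
faulty[3] += 2.0                      # bias on one sensor breaks the correlation
print(spe(sample), spe(faulty), limit)
```

A bias on a single sensor leaves the principal-component subspace, so its SPE exceeds the control limit even though each individual reading may still look plausible.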


Although, by using the methodologies proposed throughout this book, sensor networks can be designed to exhibit the needed accuracy, reliability and appropriate gross error detection robustness, these properties are not enough for the design of a fault detection network. No work has yet been performed to design sensor networks combining these robustness features with fault detection.

Alarm Logic

Some incipient work is starting to be performed on the design of the appropriate alarm logic. Tsai and Chang (1997) proposed a method that is based first on data reconciliation and that minimizes the economic losses due to Type I and Type II errors.
It is abundantly clear that the sophistication of the tools being applied to the detection and diagnosis of faults has far exceeded the humble level of the optimal sensor location problem for the same purpose. We now proceed to review the few existing results and hope that this fertile research ground flourishes in the coming years.

FAULT OBSERVABILITY

Observability of a fault refers to the ability of a sensor network to detect its presence. Thus, a fault is observable if the sensor network can measure its effects in locations that are not necessarily the location of the fault. As in the case of the estimation of variables in steady-state processes, observability of faults can be achieved by using many different sets of sensors. The design task consists of determining which combination is the most cost-effective. In addition, in the same way it was done for variable estimation, one may add reliability constraints and other filtering abilities to the problem, such as the ability to distinguish disturbances from process faults, the ability to identify gross errors in the instruments, etc. Such a model has not been presented yet.

FAULT RESOLUTION

A fault can be "observed" when its symptoms are observed. However, the same type of symptoms can correspond to different faults. Thus, the term resolution has been coined to refer to the ability to determine which exact process fault occurred, given the set of symptoms. This is, therefore, another constraint that can be added to the problem of designing a robust sensor network.


SENSOR NETWORK DESIGN

In previous paragraphs some aspects of the problem of designing a cost-effective sensor network for fault handling have been suggested. As outlined in Chapter 2, the problem can be defined as follows:

Minimize {Total Cost}
s.t.
    Desired Observability of Faults
    Desired Level of Resolution of Faults
    Desired Level of Reliability of Fault Observation
    Desired Level of Gross-Error Robustness in the Sensor Network

The first attempt to present a technique to locate sensors was done by Lambert (1977), who used fault trees based on failure probabilities. Failure probabilities are hard to assess, fault trees cannot handle cycles and the construction of the tree is cumbersome for large-scale systems. Because of these limitations, the technique has not been developed further. We now concentrate on a recent sensor location technique for fault observability.

SENSOR LOCATION FOR FAULT OBSERVABILITY

We start with the definition of the problem: Given a process DG, the sensor location problem for observability is one of finding a set of nodes that is connected to all the nodes with only output arcs (root nodes).
For the purpose of defining a strategy for positioning sensors, Raghuraj et al. (1999) used directed graphs (DG), that is, graphs without signs. In this way, the arcs of a DG represent a "will cause" relationship, that is, an arc from node A to node B implies that A is a sufficient condition for B. This in general is not true for an SDG. As an example, the effect of a fault on a connected variable might be offset by the opposing effect of some other fault on that variable. Another example might be the case of a controlled variable that would assume a normal value at steady state, though there would be nodes with abnormal values connected to it. Therefore, the arcs in the SDG represent a "can cause" relationship, that is, an arc from node A to node B only implies that A can cause B, and not that A will cause B. If the signs of the arcs were to be considered, then suitable modifications could be made to the algorithms presented by Raghuraj et al. (1999). Because the notion of signs is not used, the concepts of consistent and inconsistent cycles are also not considered. An important point to note is that the consistent and inconsistent cycles do not really change the cause-effect (CE) behavior analysis of the process. A methodology for the optimization of different possible sensor locations in the DG representing the process is the focus of this section.
The strategy used to solve the problem is based on the following claim.
Lemma 17.1: In a DG that is weakly connected (i.e., the corresponding undirected graph is connected) with no cycles, there is at least one directed path from a root node (node with only output arcs) to one node of the observability set.
Proof: Consider a node with only output arcs. Consider the longest directed path from the node to some other node in the DG. Now, the last node in the directed path should be a node with only input arcs; otherwise, it would not be the longest path. Hence, every root node is connected through a directed path to one of the nodes of the observability set. It is clear that this claim is valid only for a DG with no cycles. Q.E.D.
Procedure: Given a process DG, the observability problem is solved through the following sequence of steps:
(1) As a first step in the algorithm, a cycle is located. All the nodes in the cycle are collapsed into a supernode.
(2) This procedure is repeated until there are no more cycles in the DG. This should make the DG acyclic.
(3) Choose all the nodes with only input arcs. This results in a set of sensors that can observe all the faults. If one of the chosen nodes happens to be a supernode, then any sensor that is a part of the supernode can be used.
To illustrate the algorithm, consider the digraph of Figure 17.8(a) (Raghuraj et al., 1999). The reduced digraph in which the cycles have been eliminated is shown in Figure 17.8(b). There are two cycles, consisting of nodes N9, N10 and nodes N11, N12. Figure 17.8(b) shows the DG after the two cycles have been collapsed to supernodes SN2 and SN1, respectively.
Definition: The observability set is the set of nodes in the reduced graph that have only input arcs.
From Figure 17.8, one can conclude that the observability set is given by three nodes, namely [C6, C7, C8]. This set of sensors would ensure the observability condition, but the set might not be minimal. The possibility of obtaining observability sets that are not minimal is illustrated through Figure 17.9. In this figure, the nodes with only input arcs are [C6, C7]. Clearly, these nodes would not give the observability set, as root node R1 cannot be observed with these two sensors. Now, if nodes N13 and N12 are collapsed into a single node, then this supernode would also become a node with only input arcs. Now, the observability set could be either [C6, C7, N12] or [C6, C7, N13].
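The cycle-collapsing procedure can be sketched in code. The digraph below is hypothetical (it is not the digraph of Figure 17.8), and strongly connected components are found by mutual reachability, which is adequate for small graphs:

```python
from collections import defaultdict

def observability_set(edges):
    """Collapse cycles (strongly connected components) of a directed graph
    and return the components of the condensation that have only input
    arcs. `edges` is a list of (u, v) arcs."""
    graph = defaultdict(set)
    nodes = set()
    for u, v in edges:
        graph[u].add(v)
        nodes.update((u, v))

    def reach(s):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for w in graph[u] - seen:
                seen.add(w)
                stack.append(w)
        return seen

    r = {u: reach(u) for u in nodes}
    # Supernode of u: all nodes mutually reachable with u (collapsed cycle).
    scc = {u: frozenset(w for w in nodes if u in r[w] and w in r[u]) for u in nodes}
    sinks = set()
    for c in set(scc.values()):
        # c has only input arcs iff no arc leaves it toward another component
        if all(scc[w] == c for u in c for w in graph[u]):
            sinks.add(c)
    return sinks

# Hypothetical digraph: R1 and R2 are root (fault) nodes; N2-N3 form a cycle.
edges = [("R1", "N1"), ("N1", "N2"), ("N2", "N3"), ("N3", "N2"),
         ("N3", "C1"), ("R2", "N4"), ("N4", "C2"), ("N4", "N2")]
print(sorted(sorted(c) for c in observability_set(edges)))  # [['C1'], ['C2']]
```

Placing a sensor on each returned component (any member node, for a supernode) observes every root, exactly as steps (1)-(3) prescribe.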


FIGURE 17.8. Collapsing of nodes in the digraph.

Minimum Set of Sensors Ensuring Observability

To generate a minimal set of sensors, a minimal subset of sensors from the observability set that have at least one directed path from every root node needs to be picked. Clearly, enumeration is one way of exactly solving the problem, but with an increasing number of nodes in the observability set and root nodes, the problem will become combinatorially complex. Raghuraj et al. (1999) proposed

FIGURE 17.9. Illustration of nonminimal observability sets.


FIGURE 17.10. Bipartite graph.

a greedy search, which is later modified to remove redundant members of the observability set. They do not guarantee optimality, that is, a minimum set may not be found by this algorithm. The greedy search consists of first drawing a bipartite graph between all the nodes of the observability set (top row) and the root nodes. A bipartite graph is one whose vertex set can be partitioned into two sets in such a way that each edge joins a vertex of the first set to a vertex of the second set. The bipartite graph corresponding to Figure 17.8(b) is shown in Figure 17.10. Now the problem is one of choosing the minimum number of sensors (nodes in the top row) that would cover all the root nodes. This is the well-known "minimum set covering problem" (Parker and Rardin, 1988). All the root nodes are said to be "covered" if a directed path exists from every root node to at least one of the nodes of the observability set.
Greedy algorithm: In this algorithm, the node of the observability set that has the maximum number of arcs incident on it is first chosen. A check is made to determine if all the root nodes are covered by the chosen component. If they are covered, then this gives the minimum number of sensors for observability. If some of the root nodes are not covered with this node, then all the arcs from already covered root nodes to other nodes of the observability set are deleted. After deletion of the arcs, the node with the maximum number of arcs incident on it is chosen again. At this point, a check is made to see if all the root nodes are covered. The result of the application of the greedy algorithm to Figure 17.10 is shown in Figure 17.11, where nodes [C6, C7, C8] form the observability set, because these are the nodes in the DG with only input arcs. It is also clear that by placing sensors on these nodes, one could detect all the faults. However, the set [C6, C7, C8] is not the minimal set. The minimal set is, in fact, [C7, C8], which results from the application of the greedy algorithm. As a first step in the algorithm, node C7 is chosen, because it has the maximum number of arcs incident on it. In the next step, all the arcs other than the arcs from R5 to C8 are cut, because all the other root nodes are already covered by C7. These arcs are shown as dotted lines in the figure. Node C7 is marked, because this node is already chosen. Now C8 is chosen as the next node, and with this it is


FIGURE 17.11. Result of the application of the greedy algorithm.

clearly seen that all the root nodes are now covered. This, then, is the minimal observability set, as given by Algorithm 1.
This algorithm may not give the actual minimum number of sensors. To explain this, let us consider Figure 17.12. By using the greedy algorithm, C1 would be chosen as the first key node, because it has the maximum number of arcs incident on it (C1 covers the root nodes [f1, f2, f3, f4]). As the second step, the arcs from [f1, f2, f3, f4] to [C2, C3, C4, C5] are removed. One still has to choose [C2, C3, C4, C5] to cover root nodes [f5, f6, f7, f8]. Thus, the greedy algorithm identifies [C1, C2, C3, C4, C5] as the minimal set. But clearly, the minimal set is actually [C2, C3, C4, C5]. The presence of a redundant component, C1, makes the observability set nonminimal. Although the node C1 was chosen first, the nodes chosen afterward cover, by themselves, all the root nodes covered by C1. Hence node C1 becomes redundant in the observability set. To solve this problem, a backtracking procedure is used in the algorithm to facilitate the identification and removal of redundant key nodes.
Backtracking algorithm: In this algorithm, the key node with the maximum number of arcs incident on it is chosen first and marked. All the arcs from the root nodes covered by the selected node to all the previously marked nodes are deleted. All the other arcs from the root nodes covered by the chosen node to

FIGURE 17.12. Illustration of the failure of the greedy algorithm.


unmarked nodes are stored in a buffer. Now, a node is selected again based on the maximum number of incident arcs. The number of arcs incident on a node is taken to be the difference between the actual number of arcs incident on the node and the number of its arcs stored in the buffer. This procedure is continued until all the root nodes are covered. At the end of the algorithm, all the marked nodes that have no arcs incident on them are removed from the minimal set. This ensures the removal of the redundant key nodes. The algorithm is given in Figure 17.13 in terms of a flow chart (Raghuraj et al., 1999).
To illustrate the use of this algorithm, consider again the example of Figure 17.12. Denote an arc from Ri to Cj as aij. As a first step in the algorithm, component C1 is chosen and marked. No arcs are deleted, because there is only one marked node. Arcs a12, a23, a34 and a45 are stored in a buffer, because they connect covered root nodes (R1, R2, R3 and R4). Now, all the other nodes have one arc incident on them (two actual arcs minus one arc in the buffer). Choose C2. Root nodes R1 and R5 are covered by this sensor. At this stage arc a11 is deleted, because it is connected to an already marked node from R1, which is a root node covered by the currently chosen node C2. Similarly, arcs a21, a31 and a41 are deleted when components C3, C4 and C5 are chosen, respectively. At the end of the algorithm, nodes C1, C2, C3, C4 and C5 would be chosen as the minimal set. Now, because node C1 has no arcs incident on it, this node would be deleted. Hence set [C2, C3, C4, C5] would be identified as the minimal set. It is obvious that although nodes are being removed at the end of the algorithm, observability is always ensured.

SENSOR LOCATION FOR FAULT RESOLUTION

The minimum requirement for a fault-monitoring system is that it ensures observability. In the previous section, this problem was formulated and solved.
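That greedy covering search, with a final pass that removes redundant key nodes, can be sketched in code. This is a simplified illustration: the `cover` mapping plays the role of the bipartite graph, and a plain pruning pass stands in for the buffer-based bookkeeping of the backtracking algorithm. The data reproduce the example of Figure 17.12.

```python
def greedy_cover_with_pruning(cover):
    """Greedy minimum set covering with a final pruning pass that drops
    redundant picks. `cover` maps each candidate sensor to the set of
    root (fault) nodes it observes."""
    chosen = []
    uncovered = set().union(*cover.values())
    while uncovered:
        # pick the sensor covering the most still-uncovered roots
        best = max(cover, key=lambda s: len(cover[s] & uncovered))
        if not cover[best] & uncovered:
            raise ValueError("some root nodes cannot be covered")
        chosen.append(best)
        uncovered -= cover[best]
    # pruning: drop any pick whose roots the remaining picks already cover
    for s in list(chosen):
        rest = set().union(*(cover[t] for t in chosen if t != s))
        if cover[s] <= rest:
            chosen.remove(s)
    return chosen

# Example of Figure 17.12: C1 alone covers f1-f4, but the sensors needed
# for f5-f8 end up covering f1-f4 as well, making C1 redundant.
cover = {"C1": {"f1", "f2", "f3", "f4"},
         "C2": {"f1", "f5"}, "C3": {"f2", "f6"},
         "C4": {"f3", "f7"}, "C5": {"f4", "f8"}}
print(sorted(greedy_cover_with_pruning(cover)))  # ['C2', 'C3', 'C4', 'C5']
```

As in the text, plain greedy selection would keep C1; the pruning pass detects that the later picks cover its roots by themselves and drops it.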
For a fault-monitoring system to be useful, it should not only be able to observe all the faults but also resolve them to the maximum extent possible. The resolution is of course restricted by the topology of the digraph and the position of the fault or root nodes in the digraph. Also, the single-fault and multiple-fault assumptions lead to different resolutions. In this section, the problem of sensor location for maximum resolution under single-fault and multiple-fault assumptions is formulated and solved. The problem of sensor location for maximum resolution under the single-fault assumption could be formally stated as follows:
Definition: A sensor network for maximum fault resolution is such that each fault has one and only one set of nodes from which it is observable.
Assume Ai is the subset of nodes of the observability set such that its elements are connected to the root node Ri. Assume now that for each Ri, a set Ci ⊆ Ai


FIGURE 17.13. Flowchart of backtracking algorithm.

is picked. By construction, the set C = ∪ Ci allows full observability of the faults. The problem then consists of determining a minimal set C such that Ci ≠ Cj whenever Ai ≠ Aj and at least one element from every Ai is a member of C. It follows that two faults are indistinguishable if and only if Ci = Cj. Whenever they are not equal, there is at least one sensor that can be used to distinguish the faults. We illustrate this as follows: consider three fault nodes


FIGURE 17.14. Maximum fault resolution.

R1, R2 and R3 (Figure 17.14). In this figure A1 = [C1, C5], A2 = [C2, C3] and A3 = [C1, C3, C4]. Clearly, if only one fault is expected to occur at a time, then the set [C1, C3] would be adequate to distinguish between the three faults. Indeed, a fault in R1 is reflected in C1, but not in C3. Similarly, a fault in R2 is reflected in C3, but not in C1. Finally, a fault in R3 is reflected in both C1 and C3 simultaneously.
To obtain the minimum set for maximum fault resolution, Raghuraj et al. (1999) proposed the following procedure:
(1) Define Bij = Bji = Ai ⊕ Aj = Ai ∪ Aj − Ai ∩ Aj. There are n × (n − 1)/2 such sets.
(2) Denote each Bij = Bji by a node. Draw a bipartite graph between this node and the nodes in the set Bij = Bji. Repeat this procedure for all the new nodes generated.
(3) Without the observability condition, the minimum set of nodes that would cover all the root nodes in the bipartite graph is the solution to the problem.
The sets Bij represent the ring sum of the observability subsets of Ri and Rj. They contain nodes that can observe either one of the roots Ri and Rj, but not both simultaneously. If the root node Bij = Bji is covered, then faults Ri and Rj can be differentiated. If this procedure is repeated for all the faults pairwise, then all the faults could be distinguished except for the ones that affect the identical set of sensors. Our aim here is to choose a minimal set of sensors that would perform single-fault identification.
Consider the example of Figure 17.15. Three sets are generated: B12 = B21 = [C1, C2, C3, C5], B23 = B32 = [C1, C2, C4], B13 = B31 = [C3, C4, C5]. The application of the backtracking algorithm renders the following: Node C1 is chosen and marked. Thus, the connections from B12 to C2, C3 and C5 as well as the connections from B23 to C2 and C4 are stored in the buffer. These are shown in dotted lines in Figure 17.16. Because node B13 remains uncovered, the next step consists of picking the node with the largest number of connections. Nodes C3, C4 and C5 have one connection each. Assume one picks C3. Then, node B13 is also covered and the solution C = {C1, C3} has been found. If C4 or C5 is picked, the sets


FIGURE 17.15. Example to illustrate the backtracking algorithm.

C = {C1, C4} or C = {C1, C5} are also solutions. The reader can verify that if, for example, C2 is marked first, then the backtracking algorithm will end up eliminating it.
The only aspect of the formulation that has not been considered in this solution procedure is the requirement of the presence of at least one element from every Ai in C. For example, Raghuraj et al. (1999) showed that the observability of fault Ri is not ensured by the above algorithm if Ai ⊆ Aj, ∀j ≠ i. Indeed, no element from Ai would be present in the sets Bij. However, the observability of fault Ri could still be ensured only if some element of Ai is present in some other set Bk,l, where k, l ≠ i. This is not possible either, because if Ai ⊆ Ak, then Bk,i ∩ Ai = ∅ ∀k ≠ i. Hence, the observability of fault i cannot be ensured. To overcome this difficulty, Raghuraj et al. (1999) suggested also adding Ai as a root node in the bipartite graph.

Multiple Faults

The sensor-location problem for multiple faults and maximum resolution can be solved as an extension of the single-fault assumption problem. Let us consider a specific case where single faults and two simultaneous faults are important. Raghuraj et al. (1999) suggested the use of the following steps:
(1) Define Aij = Aji = Ai ∪ Aj and add them to the observability set in the bipartite graph.

FIGURE 17.16. Application of the backtracking algorithm.


FIGURE 17.17. Illustration of multiple faults.

(2) Solve the single-fault-assumption problem for System 1. This will give the solution to the problem just posed.
In the solution to the double-fault case, some redundant sets might be generated, and one could considerably reduce the number of root nodes for which the observability problem has to be solved. Instead of attempting this here, we simply point out how the single-fault-assumption solution can be used to handle the double-fault case also. In fact, this approach provides us with a framework for posing and solving various kinds of sensor-location problems. This is a more important aspect of the proposed solution strategy. In a typical plant scenario, one might not be concerned with all multiple- or double-fault possibilities. There might be some faults that have a high probability of occurring together. One could simply add such sets of faults to the original single-fault sets and get a solution to the sensor-location problem.

Illustrative Example

Consider the system of Figure 17.17. The backtracking algorithm is applied to get the minimal set of sensors for observability. This set is {C7, C8}. We now obtain the maximum resolution set of sensors. Table 17.2 shows the observability sets for each root. Notice that the roots R1 and R2 should be merged into one, because they are indistinguishable.

TABLE 17.2. Observability Subsets for Figure 17.17.

Root Nodes     Observability Sets
{R1, R2}       A1 = {C6, SN2, C7}
R3             A3 = {C7, SN1}
R4             A4 = {C7, C8}
R5             A5 = {C8}

The algorithm for maximum fault resolution is applied, rendering C = {C6, SN2, C8}. As anticipated, faults R1 and R2 are indistinguishable. Under the double-fault assumption, the solution is C = {SN2, C7, SN1, C8}. Clearly, the number of sensors required for the double-fault diagnosis is more than for the single-fault case. Indistinguishable faults can also be identified, just as in the single-fault assumption case. The auxiliary sets Aij and Bij are listed in Table 17.3.

TABLE 17.3. Auxiliary Sets Aij and Bij.

A12 = {C6, SN2, C7, SN1}     B12 = {C6, SN2, SN1}
A13 = {C6, SN2, C7, C8}      B13 = {C6, SN2, C8}
A14 = {C6, SN2, C7, C8}      B14 = {C6, SN2, C7, C8}
A23 = {C7, SN1, C8}          B23 = {SN1, C8}
A24 = {C7, SN1, C8}          B24 = {C7, SN1, C8}
A34 = {C7, C8}               B34 = {C7}

For example, from Table 17.3, A23 = A24. This means that the occurrence of A2 and A3 together cannot be distinguished from the occurrence of A2 and A4 together. From Table 17.2, we also see that A3, A4 and A5 correspond to faults R3, R4 and R5. Hence, if they occur together, faults R3 and R4 cannot be distinguished from the simultaneous occurrence of faults R3 and R5. The application of the algorithms to a CSTR and an FCC unit is presented by Raghuraj et al. (1999).

Future Scope

According to Raghuraj et al. (1999), there are a number of enhancements that can be incorporated into the proposed algorithm. First, signs could be placed on the DG, and sensor-location problems could be solved using the SDG. Instead of a purely CE analysis, which is used in both DG and SDG, one could pose the sensor-location problem based on semi-qualitative, order-of-magnitude models. Other aspects that have not been considered yet are failure probabilities, sensor failure probability, severity of particular faults and the cost of the sensors.
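The ring-sum construction for maximum resolution can also be sketched in code. This is a simplified illustration: a plain greedy cover is used instead of the backtracking algorithm, and the data are those of Figure 17.14.

```python
from itertools import combinations

def resolution_set(A):
    """Pick a small sensor set that covers every ring sum B_ij = A_i ^ A_j
    (distinguishability) and every A_i (observability). A plain greedy
    cover stands in for the backtracking algorithm."""
    targets = [A[i] ^ A[j] for i, j in combinations(range(len(A)), 2)]
    targets += list(A)                   # keep every fault observable too
    targets = [t for t in targets if t]  # empty ring sum: faults identical
    sensors = set().union(*A)
    chosen = set()
    while targets:
        # pick the sensor appearing in the most still-uncovered target sets
        best = max(sensors, key=lambda s: sum(s in t for t in targets))
        chosen.add(best)
        targets = [t for t in targets if best not in t]
    return chosen

# Figure 17.14: A1 = {C1, C5}, A2 = {C2, C3}, A3 = {C1, C3, C4}.
A = [{"C1", "C5"}, {"C2", "C3"}, {"C1", "C3", "C4"}]
print(sorted(resolution_set(A)))  # ['C1', 'C3']
```

As in the text, sensors C1 and C3 suffice: each of the three single faults produces a distinct pattern on them. A pair of faults with an empty ring sum (identical observability subsets) is dropped from the targets, mirroring the merging of indistinguishable roots.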

NOMENCLATURE

Ai: subset of nodes connected to node Ri
Ci: node
Ri: node
SNi: supernode
Fi: flow rate of stream i
Li: level in unit i
Si: stream i


DESIGN OF SENSORS NETWORKS FOR PROCESS FAULT DIAGNOSIS

REFERENCES

Aelion V. and G. J. Powers. Risk Reduction of Operating Procedures and Process Flowsheets. Ind. Eng. Chem. Res., 32, pp. 82–90 (1993).
Akhmentov D. F. and Y. Dote. Aggregative Learning Method and Its Applications to Fault Diagnosis. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Bensaker B. and F. Ouchene. Fault Detection for Continuous-Time Systems. The PMF Identification Approach. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Busson F., A. Aitouche, B. O. Boumama and M. Staroswiecki. Sensor Failure Detection in Steam Condensers. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Catino C. A. and L. H. Ungar. Model-Based Approach to Automated Hazard Identification of Chemical Plants. AIChE J., 41, 1, pp. 97–109 (1995).
Chang C. C. and C. C. Yu. On-Line Fault Diagnosis Using Signed Directed Graphs. Ind. Eng. Chem. Res., 29, pp. 1290 (1990).
Chang C. T. and H. C. Hwang. New Developments of the Digraph-Based Technique for Fault Tree Synthesis. Ind. Eng. Chem. Res., 31, pp. 1490 (1992).
Chang C. T. and H. C. Hwang. Studies on the Digraph-Based Approach for Fault Tree Synthesis. Ind. Eng. Chem. Res., 33, pp. 1520 (1994).
Chen B. H., X. Z. Wang and C. McGreavy. On-Line Operational Support System for Fault Diagnosis in Process Plants. Comp. & Chem. Eng., 22, Suppl., pp. S973–S976 (1998).
Cheung J. T.-Y. and G. Stephanopoulos. Representation of Process Trends. Part I. A Formal Representation Framework. Comp. & Chem. Eng., 14, pp. 495–510 (1990).
Cho H.-W. and C. Han. Hierarchical Plant-Wide Monitoring and Triangular Representation-Based Diagnosis. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Dong D. and T. J. McAvoy. Nonlinear Principal Component Analysis Based on Principal Curves and Neural Networks. Comp. & Chem. Eng., 20, 1, pp. 65–78 (1996).
Dunia R. and S. J. Qin. Subspace Approach to Multidimensional Fault Identification and Reconstruction. AIChE J., 44, 8, pp. 1813–1829 (1998).
Dunia R. and S. J. Qin. A Unified Geometric Approach to Process and Sensor Fault Identification and Reconstruction: The Unidimensional Fault Case. Comp. & Chem. Eng., 22, 7–8, pp. 927–943 (1998).
Dunia R., S. J. Qin, T. F. Edgar and T. J. McAvoy. Use of Principal Component Analysis for Sensor Fault Identification. Comp. & Chem. Eng., 20, Suppl., pp. S713–S718 (1996a).
Dunia R., S. J. Qin, T. F. Edgar and T. J. McAvoy. Identification of Faulty Sensors Using Principal Component Analysis. AIChE J., 42, 10, pp. 2797–2812 (1996b).
Fan J. Y., M. Nikolau and R. E. White. An Approach to Fault Diagnosis of Processes via Neural Networks. AIChE J., 39, 1, pp. 82–88 (1993).
Fathi Z., W. F. Ramirez and J. Korbicz. Analytical and Knowledge-Based Redundancy for Fault Diagnosis in Process Plants. AIChE J., 39, 1, pp. 42–56 (1993).
Frank P. M. Fault Diagnosis in Dynamic Systems Using Analytical and Knowledge-Based Redundancy: A Survey of Some Results. Automatica, 26, 3, pp. 459–474 (1990).



Gertler J. J. A Survey of Model-Based Failure Detection and Isolation in Complex Plants. IEEE Control Systems Magazine, 8, 6, pp. 3–11 (1988).
Gertler J. J. Fault Detection and Diagnosis in Engineering Systems. Marcel Dekker, New York (1998).
Gertler J. J., W. Li, Y. Huang and T. McAvoy. Isolation Enhanced Principal Component Analysis. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Gomm J. B., M. Weerasinghe and D. Williams. Pruning and Extraction of Qualitative Fault Diagnosis Rules from a Neural Network. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Han Z. and P. M. Frank. A New Scheme of Identification-Based Fault Detection. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Himmelblau D. Fault Detection and Diagnosis in Chemical and Petrochemical Processes. Elsevier, Amsterdam (1978).
Iri M., K. Aoki, E. O'Shima and H. Matsuyama. An Algorithm for Diagnosis of System Failures in the Chemical Process. Comp. & Chem. Eng., 3, pp. 489–493 (1979).
Iri M., K. Aoki, E. O'Shima and H. Matsuyama. A Graphical Approach to the Problem of Locating the Origin of the System Failure. J. Opers Res. Soc. Jap., 23, pp. 295–311 (1980).
Jackson J. E. A User's Guide to Principal Components. Wiley, New York (1991).
Jia F., E. B. Martin and A. J. Morris. Nonlinear Principal Component Analysis for Process Fault Detection. Comp. & Chem. Eng., 22, Suppl., pp. S851–S854 (1998).
Kavuri S. N. and V. Venkatasubramanian. Representing Bounded Fault Classes Using Neural Networks with Ellipsoidal Activation Functions. Comp. & Chem. Eng., 17, 2, pp. 139–163 (1993a).
Kavuri S. N. and V. Venkatasubramanian. Using Fuzzy Clustering and Ellipsoidal Units in Neural Network to Improve Fault Classifications. Comp. & Chem. Eng., 17, 8, pp. 765–784 (1993b).
Kim S.-H. and K.-S. Lee. A Study on the Development of Robust Fault Diagnostic System Based on Neuro-Fuzzy Scheme. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Kokawa M. and S. Shingai. Failure Propagation Simulation and Non-Failure Path Search in Network Systems. Automatica, 18, pp. 335 (1982).
Kokawa M., S. Miyasaki and S. Shingai. Fault Location Using Digraph and Inverse Direction Search with Application. Automatica, 19, pp. 729 (1983).
Kramer M. A. Malfunction Diagnosis Using Quantitative Models with Non-Boolean Reasoning in Expert Systems. AIChE J., 33, 1, pp. 130–140 (1987).
Kramer M. A. and B. L. Palowitch, Jr. A Rule-Based Approach to Fault Diagnosis Using the Signed Directed Graph. AIChE J., 33, 7, pp. 1067–1078 (1987).
Kresta J. V., J. F. McGregor and T. Marlin. Multivariate Statistical Monitoring of Process Operating Performance. Can. J. Chem. Eng., 69, pp. 35–47 (1991).
Lambert H. E. Fault Trees for Locating Sensors in Process Systems. CEP, August, pp. 81–85 (1977).
Lakshminarayanan S., S. L. Shah and K. Nandakumar. Modeling and Control of Multivariable Processes: Dynamic PLS Approach. AIChE J., 43, 9, pp. 2307–2322 (1997).
Lapp S. A. and G. J. Powers. Computer-Aided Synthesis of Fault Trees. IEEE Trans. Reliab., 2, pp. 13 (1977).



Leung D. and J. Romagnoli. Fault Diagnosis Expert System with Probability Calculations. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Luan D.-N., D.-S. Chang, M.-P. Chang and C.-C. Yu. Diagnosis of Abrupt Faults Using Variable-Structure Neural Network. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Luo R., M. Misra, S. J. Qin, R. Barton and D. Himmelblau. Sensor Fault Detection via Multiscale Analysis and Nonparametric Statistical Inference. Ind. Eng. Chem. Res., 37, 3, pp. 1024–1032 (1998).
McGregor J. F., C. Jaeckle, C. Kiparissides and M. Koutoudi. Process Monitoring and Diagnosis by Multiblock PLS Methods. AIChE J., 40, 5, pp. 826–838 (1994).
Marcu T. and P. M. Frank. Parallel Evolutionary Approach to System Identification for Process Fault Diagnosis. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Mohindra S. and P. A. Clark. A Distributed Fault Diagnosis Method Based on Digraph Models: Steady-State Analysis. Comp. & Chem. Eng., 17, 2, pp. 193–209 (1993).
Naidu S., E. Zafiriou and T. J. McAvoy. Application of Neural Networks on the Detection of Sensor Failure during Operation of a Control System. Proc. ACC, Pittsburgh, pp. 1136 (1989).
Negiz A. and A. Cinar. Automated Detection of Incipient Sensor Faults by Multivariable Statistical Methods. Proc. PSE'94, pp. 919–924 (1994).
Ozyurt I. B., A. K. Sunol and L. O. Hall. Semi-Quantitative Model-Based Chemical Process Fault Diagnosis via Episodic Fuzzy Rules. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Parker R. G. and R. L. Rardin. Discrete Optimization. Academic Press, San Diego (1988).
Patton R. J. Robustness in Model-Based Fault Diagnosis: The 1995 Situation. Proceedings of the IFAC On-Line Fault Detection and Supervision in the Chemical Process Industries, Newcastle upon Tyne, UK (1995).
Patton R. J., P. M. Frank and R. N. Clark. Fault Diagnosis in Dynamic Systems: Theory and Application. Prentice Hall, Englewood Cliffs, NJ (1989).
Pau L. F. Failure Diagnosis and Performance Monitoring. Marcel Dekker, New York (1981).
Qian D.-Q. An Improved Method for Fault Location of Chemical Plants. Comp. & Chem. Eng., 14, 1, pp. 41–48 (1990).
Qin S. J. Recursive PLS Algorithms for Adaptive Data Modeling. Comp. & Chem. Eng., 22, 4/5, pp. 503–514 (1998).
Qin S. J. and T. J. McAvoy. Nonlinear PLS Modeling Using Neural Networks. Comp. & Chem. Eng., 16, 4, pp. 379–391 (1992).
Raghuraj R., M. Bhushan and R. Rengaswamy. Locating Sensors in Complex Chemical Plants Based on Fault Diagnostic Observability Criteria. AIChE J., 45, 2, pp. 310–322 (1999).
Shao R., F. Jia, B. Martin and A. J. Morris. Fault Detection Using Wavelet Filtering and Non-Linear Principal Component Analysis. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Shiozaki J., H. Matsuyama, E. O'Shima and M. Iri. An Improved Algorithm for Diagnosis of System Failures in the Chemical Process. Comp. & Chem. Eng., 9, pp. 285 (1985).
Stanley G. M. and R. Vaidhayanathan. A Generic Fault Propagation Modeling Approach to On-Line Diagnosis and Event Correlation. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).



Su H. B. and L. T. Fan. A Rule-Based Expert System Integrated with Neural Networks for Process Fault Diagnosis. Proceedings of the AIChE Annual Meeting, St. Louis (1993).
Szigeti F. and R. Tarantino. Augmented Space Method for Fault Detection and Isolation Filter. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Tarjan R. Depth-First Search and Linear Graph Algorithms. SIAM J. Comput., 1, pp. 146 (1972).
Tsai C.-S. and C.-T. Chang. Optimal Alarm Logic Design for Mass Flow Networks. AIChE J., 43, 11, pp. 3021–3030 (1997).
Umeda T., T. Kuriyama, E. O'Shima and H. Matsuyama. A Graphical Approach to Cause and Effect Analysis of Chemical Processing Systems. Chem. Eng. Sci., 35, pp. 2379 (1980).
Vedam H. and V. Venkatasubramanian. Automated Interpretation of PCA-Based Process Monitoring and Fault Diagnosis Using Signed Digraphs. Proc. IFAC Workshop on On-Line Fault Detection and Supervision in the Chemical Process Industries, Lyon, France (1998).
Vedam H., V. Venkatasubramanian and M. Bhalodia. A B-Spline-Based Method for Data Compression, Process Monitoring and Diagnosis. Comp. & Chem. Eng., 22, Suppl., pp. S827–S830 (1998).
Venkatasubramanian V. and K. Chan. A Neural Network Methodology for Process Fault Diagnosis. AIChE J., 35, pp. 1993 (1989).
Venkatasubramanian V. and S. H. Rich. An Object Oriented Two-Tier Architecture for Integrating Compiled and Deep-Level Knowledge for Process Diagnosis. Comp. & Chem. Eng., 12, 9/10, pp. 903–921 (1988).
Vinson J. M. and L. H. Ungar. Using PLS for Fault Analysis: A Case Study. AIChE Annual Meeting (1994).
Watanabe K., I. Matsura, M. Abe, M. Kubota and D. M. Himmelblau. Incipient Fault Diagnosis of Chemical Processes via Artificial Neural Networks. AIChE J., 35, pp. 1803 (1989).
Watanabe K., S. Hirota, L. Hou and D. Himmelblau. Diagnosis of Multiple Simultaneous Fault via Hierarchical Artificial Neural Networks. AIChE J., 40, 5, pp. 839–848 (1994).
Wilcox N. A. and D. M. Himmelblau. The Possible Cause and Effect Graphs (PCEG) Model for Fault Diagnosis. I. Methodology. Comp. & Chem. Eng., 18, 2, pp. 103–116 (1994a).
Wilcox N. A. and D. M. Himmelblau. The Possible Cause and Effect Graphs (PCEG) Model for Fault Diagnosis. II. Applications. Comp. & Chem. Eng., 18, 2, pp. 117–127 (1994b).
Wise B. M. and N. B. Gallagher. The Process Chemometrics Approach to Process Monitoring and Fault Detection. Proceedings of the IFAC On-Line Fault Detection and Supervision in the Chemical Process Industries, Newcastle upon Tyne, UK (1995).
Yao S. C. and E. Zafiriou. Control System Sensor Failure Detection via Networks of Localized Receptive Fields. Proc. ACC, pp. 2472 (1990).

INDEX

Accuracy, 5, 74, 76
Alarm logic, 326
Alarms, 26, 315
Availability, 211
  Sensor service availability, 211
  System availability, 222
Bias, 69
Calibration
  Base conditions, 35
  Compensation, 35, 75
  Curves, 74
  Testing conditions, 35
Canonical matrices, 100
Canonical matrix, 87
Canonical systems, 87
Compulsory measurements and upgrade, 140
Conformity, 74
Connectivity, 83
Controlled variables, 18
Corrective maintenance, 281
Cost-efficient nonrepairable sensor networks, 269
  Maximum reliability model, 271
  Minimal networks, 270
  Minimum cost model, 269
Cutset, 103
  Determination, 107
  Efficiency of estimability, 108
  Estimable cutsets, 104
  Fundamental cutsets, 106
  Ring sum, 227
Cycle, 100
Data reconciliation, 7, 159, 170
  Dynamic data reconciliation, 170
  Linear data reconciliation, 162
  Nonlinear steady state data reconciliation, 169
  Precision of estimates, 165, 185
  Steady state linear data reconciliation, 163
  Variance estimation, 168
  Variance of observable quantities, 165
Dead band, 72
Degeneracy, 234
Degree of estimability, 98
Degree of observability, 97
Degree of redundancy, 95, 97
Density instrumentation, 58
Design for estimability, 128
Design for estimability efficiency, 139
Design of bilinear multicomponent systems, 259
Digital protocols, 61
Dynamic data reconciliation, 170
Efficiency of estimability, 108
Energy networks, 266
Energy sensor networks, 151
Entering variables, 230, 232, 239, 261
Error detectability, 297




Errors in measurements, 65
Estimability, 25, 96
Estimation availability, 217
Estimation reliability, 218
Expected number of repairs, 280
Failure density, 212
Failure intensity, 279
Failure rate, 212
Fault detection, 2, 11, 26
Fault diagnosis, 26, 315
Fault observability, 327
Fault resolution, 326, 332
Fault trees, 316
Flowrate exchange cost, 145
Flowrate instrumentation, 31
  Coriolis flowmeter, 42
  Differential pressure meters, 31, 44
  Discharge coefficient, 34
  Electromagnetic meters, magmeters, 35
  Flow nozzle, 32
  Mass meters, 31, 40, 45
  Orifice plate, 32
  Pitot tube, 39
  Positive displacement meters, 31, 40, 45
  Thermal flowmeter, 42
  Turbine meters, 35
  Ultrasonic meters, 37
  Variable area meters, 31, 44, 45
  Velocity meters, 31, 35, 44
  Venturi tube, 32
  Vortex meter, 36
General networks, 137
Generalized maximum precision models, 180
Genetic algorithms, 307
Global test, 293
Graphs, 82
Gross errors
  Detectability, 25
  Detection, 8
  Estimation, 9
  Inaccuracy in gross error detection, 296
  Leaks, 292
  Minimum cost model, 300
  Multiple gross error identification, 296
  Origin, 291
  Outliers, 292
  Resilience, 26, 298
  Robustness, 25
  Size estimation, 297
  Test for gross error presence, 293
GSENNET, 240
Hardware redundancy, 6, 87, 160, 176
Hill climbing, 234
Hysteresis, 72
Inaccuracy in gross error detection, 296
Incidence matrix, 84
Inferential control, 23
Influence variables, 65
Instrumentation design goals, 25
Key variables, 15
Key variables in control, 17
Knowledge-based fault detection, 322
Leaks, 292
Leaving variables, 230, 232, 238, 261
Level instrumentation, 46
  Differential pressure meters, 46
  Displacer meters, 46
  Radiation-based meters, 46, 50
  Radio-frequency admittance meters, 46, 49
  Signal reflection meters, 46, 48
  Strapping tables, 50
Linear data reconciliation, 162
Linearity, 73
Linearly estimable deficient networks, 144
Linearly estimable sensor networks, 143
Maintenance, 26
Manipulated variables, 19
Matrix projection, 93
Maximum precision models, 178, 180
Maximum reliability model, 271
Mean time to failure, 216
Measured variables, 15, 19
Measurement test, 294
Minimal networks, 129, 270
Minimally estimable sensor networks, 143
Model based monitoring, 4
Model-based fault detection, 322
Monitoring, 2, 4
Monitoring variables, 16

Multiple gross error identification, 296
Multiple instruments, 176
Multivariate statistical methods, 324
Networks with maximum reliability, 225
Neural networks, 321
Nonlinear steady state data reconciliation, 169
Nonrepairable sensor networks, 211
  Reliability, 211
  Sensor service availability, 211
  Sensor service reliability, 212
Observability, 85
Observability analysis
  Bilinear systems, 114
  Full nonlinear systems, 120
  Linear systems, 87, 93
On-line analyzers, 60
On-line optimization, 3
Optimal bilinear sensor networks, 256
Outliers, 71, 292
Parameter estimation, 2, 184
Planning, 2, 3
Precise sensor networks, 175
Precision, 5, 7, 25, 66
Precision upgrade of networks, 191
  Resource reallocation, 198
Pressure instrumentation, 56
  Elastic element devices, 56
  Electrical sensing devices, 56
  Liquid column devices, 56
Precision of estimates, 165, 185
QR decomposition, 93
Range, 65
Rangeability, 65
Readability, 66
Redundancy, 86, 95
Redundant and restricted networks, 246
Redundant networks with maximum reliability, 237
Reliability, 5, 8, 25, 212
  Estimation reliability, 218
  Evaluation in bilinear systems, 254
  System reliability, 222
Reliable bilinear nonrepairable sensor networks, 253


Reliable linear nonrepairable sensor networks, 225
Repair intensity, 280
Repairable sensor networks, 279
  Minimum cost model, 286
Reproducibility (see precision), 5
Residual precision, 26, 284
Resilience, 298
Resource reallocation, 198
Ring sum of cutsets, 227
Robust sensor networks, 291, 299
Rule-based fault detection algorithms, 320
SENNET, 232
Sensitivity, 71
Sensor networks for bilinear systems, 142
  Energy sensor networks, 151
  Flowrate exchange cost, 145
  Linearly estimable deficient networks, 144
  Linearly estimable sensor networks, 143
  Minimally estimable sensor networks, 143
  Networks with splitters, 148
Sensor networks for fault diagnosis, 315
Sensor networks for fault resolution, 332
Sensor service availability, 211
Sensor service reliability, 212
Signed digraph, 317
Singular value analysis, 20
Smart sensors, 61
Software redundancy, 6
Span, 65
Span shift, 70
Spanning tree, 100, 146
  Branch, 100
  Chord, 100
  Distance, 102
Speed of response, 71
Stopping criteria, 183
Subminimal networks, 132
System availability, 222
System connectivity, 83
System reliability, 222
Systematic error, 69
Temperature instrumentation, 51
  Electrical thermometers, 51, 53
  Radiation-based thermometers, 51, 55
  Thermal expansion thermometers, 51
Tree, 100
Tree-searching algorithm, 133
Tuy duality, 182, 272



Upgrade of instrumentation, 27, 127
  Compulsory measurements and upgrade, 140
  Precision upgrade, 191
  Upgrade of bilinear sensor networks, 152
Variables
  Classification, 81, 110
  Measured, 86
  Nonredundant, 86
  Observable, 86
  Redundant, 86
  Unmeasured, 86
  Unobservable, 86
Variance estimation, 168
Variance of observable quantities, 165
