Table of Contents
05 March 2018, Volume 69 Issue 3
    Improved whale optimization algorithm and its application in optimization of residue hydrogenation parameters
    XU Yufei, QIAN Feng, YANG Minglei, DU Wenli, ZHONG Weimin
    2018, 69(3):  891-899.  doi:10.11949/j.issn.0438-1157.20171128

    An improved whale optimization algorithm (DEOBWOA) based on differential evolution and elite opposition-based learning is proposed to address the tendency of intelligent optimization algorithms to fall into local optima and their poor convergence precision on nonlinear optimization problems. The algorithm combines opposition-based initialization and elite opposition-based learning with differential evolution, which effectively improves the convergence precision and convergence speed of the whale optimization algorithm (WOA) and strengthens its ability to escape local optima. Simulation experiments on 8 standard test functions show that DEOBWOA outperforms WOA, heterogeneous comprehensive learning particle swarm optimization (HCLPSO) and differential evolution (DE). Finally, a kinetic model of residue hydrogenation, which involves many typical nonlinear constraints, was established, and DEOBWOA was used to optimize its parameters for a refinery residue hydrogenation process, indicating that the algorithm can handle practical engineering optimization problems.
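    The two ingredients named here, opposition-based initialization and a differential-evolution mutation step layered on WOA, can be sketched as below. This is a minimal illustration under assumed bounds, population size and scale factor F, not the authors' DEOBWOA implementation.

```python
import numpy as np

def opposition_init(pop_size, lb, ub, rng):
    """Generate a random population plus its opposite points x' = lb + ub - x."""
    X = lb + rng.random((pop_size, lb.size)) * (ub - lb)
    return X, lb + ub - X

def de_mutation(X, F=0.5, rng=None):
    """DE/rand/1 mutation used to perturb the whale population."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    V = np.empty_like(X)
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        V[i] = X[a] + F * (X[b] - X[c])
    return V

# toy usage on a 2-D sphere function
rng = np.random.default_rng(0)
lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
X, X_op = opposition_init(20, lb, ub, rng)
f = lambda P: (P ** 2).sum(axis=1)
pool = np.vstack([X, X_op])
X = pool[np.argsort(f(pool))[:20]]          # elite selection over population and opposites
V = np.clip(de_mutation(X, rng=rng), lb, ub)
```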

    Automatic structure and parameter tuning method for deep neural network soft sensors in chemical industries
    WANG Kangcheng, SHANG Chao, KE Wensi, JIANG Yongheng, HUANG Dexian
    2018, 69(3):  900-906.  doi:10.11949/j.issn.0438-1157.20171435

    Deep learning has been applied to soft sensing in the process industries. However, the structure and parameters of a deep neural network (DNN) have to be tuned manually, which requires solid fundamental knowledge of machine learning and rich experience in parameter tuning. This complicated tuning procedure restricts the wider application of deep learning in the chemical industry. A structure and parameter tuning method for DNN soft sensors with little manual intervention was proposed through systematic analysis of the selection process for each essential DNN parameter, based on extensive experiments. The presented method greatly simplifies the tuning procedure and offers a reference for engineers learning and using deep learning. Studies on a crude-oil distillation and a coal gasification process verified the effectiveness and generality of the proposed method.

    Research and application of feature extraction derived functional link neural network
    ZHU Qunxiong, ZHANG Xiaohan, GU Xiangbai, XU Yuan, HE Yanlin
    2018, 69(3):  907-912.  doi:10.11949/j.issn.0438-1157.20171416

    The traditional functional link neural network (FLNN) cannot effectively model the multi-dimensional, noisy and strongly coupled data of chemical processes. A principal component analysis based FLNN (PCA-FLNN) model was proposed to improve modeling effectiveness. Feature extraction before the FLNN functional expansion block not only removes linear correlations between variables but also selects the main components of the data, which alleviates the complexity of the data the FLNN has to learn. The proposed PCA-FLNN model was tested on the UCI Airfoil Self-Noise dataset and a purified terephthalic acid (PTA) production process. Simulation results indicate that PCA-FLNN achieves faster convergence and higher modeling accuracy than the traditional FLNN.
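    A functional link model expands the inputs with fixed nonlinear basis functions and then fits a linear output layer; prepending PCA removes linear correlations first. The sketch below uses a trigonometric expansion and ridge regression as generic stand-ins; it only illustrates the PCA-then-expand pipeline, not the exact FLNN used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

def trig_expansion(Z):
    """Typical FLNN functional expansion: [z, sin(pi z), cos(pi z)]."""
    return np.hstack([Z, np.sin(np.pi * Z), np.cos(np.pi * Z)])

pca_flnn = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),                 # keep 95% of the variance
    FunctionTransformer(trig_expansion),
    Ridge(alpha=1e-2),                      # linear output layer
)
# pca_flnn.fit(X_train, y_train); y_hat = pca_flnn.predict(X_test)
```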

    Improved schedule model for batch production by state unit network
    YAN Xueli, HAN Yuxin, GU Xingsheng
    2018, 69(3):  913-922.  doi:10.11949/j.issn.0438-1157.20171406

    Establishing an effective model for scheduling batch processes has always been a hot spot of production planning research. Continuous-time models based on unit-specific events have evolved into a promising tool for optimizing short-term schedules of batch processes. A nonlinear programming model for batch production was developed on the basis of a state unit network and unit-specific event time points. To overcome the challenges of solving a nonlinear model, a replacement technique was used to linearize the nonlinear terms, so that the mixed-integer nonlinear programming model became a mixed-integer linear one. Because it contains no big-M relaxation terms, the linear mixed-integer model has a compact search space and improved solving efficiency. Simulation results on three batch processes illustrate the excellent efficiency and stability of the new model. Furthermore, constraints of the new model under different storage states are provided to expand its applicability.

    Recirculation and reaction hybrid intelligent modeling and simulation for industrial ethylene cracking furnace
    HUA Feng, FANG Zhou, QIU Tong
    2018, 69(3):  923-930.  doi:10.11949/j.issn.0438-1157.20171195

    Simulation of naphtha pyrolysis in industrial ethylene cracking furnaces, which usually requires both firebox and reactor models, is nonlinear and strongly coupled. The firebox model involves a large number of variables and is time-consuming to solve. An intelligent hybrid model was proposed by first training an artificial neural network (ANN) on data from the firebox model and then combining the ANN with the reactor model. The hybrid modelling and simulation was developed for an industrial ethylene cracking furnace. Using actual process data, it is demonstrated that the hybrid simulation agrees well with industrial production. The hybrid model significantly reduces simulation time and largely meets the requirements of industrial modeling.
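    One way to build such a hybrid model is to fit a neural-network surrogate to input/output samples of the rigorous firebox model and call the surrogate inside the reactor simulation. The sketch below uses scikit-learn's MLPRegressor as a generic stand-in; the data shapes and network size are placeholders, not the configuration reported in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X_firebox: operating inputs sampled from the rigorous firebox model (placeholder data)
# y_heatflux: corresponding heat-flux / tube-wall-temperature profiles
X_firebox = np.random.rand(500, 6)
y_heatflux = np.random.rand(500, 10)

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
surrogate.fit(X_firebox, y_heatflux)

# inside the hybrid simulation, the expensive firebox solve is replaced by:
heat_profile = surrogate.predict(X_firebox[:1])   # milliseconds instead of a full CFD-style solve
```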

    Modeling basic fraction data of petroleum distillation
    MEI Hua, HUANG Biao, QIAN Feng
    2018, 69(3):  931-935.  doi:10.11949/j.issn.0438-1157.20171453

    Properties of petroleum fractions are important data for petrochemical processes. However, the tremendous amount of on-site data, containing redundant information and measurement errors, poses a great challenge to the routine operation of chemical processes. A basic fraction data modelling method was proposed based on characterization techniques for the state space of petroleum fractions, in which an initial basic fraction data model is obtained via non-negative matrix factorization and then updated by an iterative strategy, so that the size of the model basis set is minimized while the required modelling accuracy is maintained. Simulation results verify that the proposed method is effective and suitable for wide application in petrochemical processes.
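    The basis-set idea, representing many measured fraction property vectors as non-negative combinations of a small set of basic fractions, maps naturally onto non-negative matrix factorization. A minimal scikit-learn sketch is shown below; the data shape and rank are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

# rows: measured petroleum fractions, columns: characterization properties (all non-negative)
V = np.abs(np.random.rand(200, 15))          # placeholder property matrix

nmf = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
W = nmf.fit_transform(V)                     # mixing coefficients of each sample
H = nmf.components_                          # 5 "basic fraction" property vectors

relative_error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```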

    Simultaneous design and control of polyethylene process based on uncertainty Kriging model
    WANG Kuanglei, XIE Lei, CHEN Junghui, SU Hongye, WANG Jingdai
    2018, 69(3):  936-942.  doi:10.11949/j.issn.0438-1157.20171168

    The ethylene polymerization process exhibits strong nonlinearity and multiple metastable states, driven by the interaction between mass and energy transfer and the coupled effects of polymerization and transport. The traditional sequential approach of process design followed by control optimization cannot provide sufficient control freedom, so high-quality products are difficult to manufacture by relying solely on the designed controller in the presence of disturbances and uncertain process parameters. A new approach was proposed to integrate steady-state design and control optimization for stable production of high-performance polyethylene. A surrogate (Kriging) model was introduced to predict both model dynamics and uncertainty, where the model uncertainty is the feasible region of the uncertain parameters bounded by confidence coefficients. A design performance index was defined to quantify, at the process design stage, the impact of the steady-state design on closed-loop dynamic behavior. Closed-loop operating variability was quantified by a model predictive controller that keeps the process operating close to its constraints, with an MPC cost function that penalizes deviations of the predicted control outputs from the reference operating point. The proposed method is illustrated with the integrated optimization of process design and operation control for gas-phase ethylene polymerization, and its effectiveness is verified by process simulation under parameter uncertainty and disturbances.
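    The surrogate in this approach predicts both a response and its uncertainty. A Gaussian-process (Kriging) regressor does exactly that; the snippet below is a generic sketch of how a mean prediction and a confidence band can be obtained, not the authors' polyethylene model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# X: design/operating variables sampled from the process model, y: a key quality index (placeholders)
X = np.random.rand(60, 3)
y = np.sin(X).sum(axis=1) + 0.05 * np.random.randn(60)

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
kriging.fit(X, y)

X_new = np.random.rand(5, 3)
mean, std = kriging.predict(X_new, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% confidence band on the prediction
```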

    Process optimization of CO2 membrane and cryogenic hybrid separation
    HU Yongxin, LIAO Zuwei, WANG Jingdai, DONG Hongguang, YANG Yongrong
    2018, 69(3):  943-952.  doi:10.11949/j.issn.0438-1157.20171127

    Separation and purification of carbon dioxide is a current research hotspot. With increasing demands for environmental protection, traditional single separation techniques can hardly meet discharge specifications, and the integration of multiple separation methods has gradually gained research attention. Separation and recovery of carbon dioxide in the integrated gasification combined cycle (IGCC) process was studied by process simulation, and polynomial state equations were obtained for the system. Furthermore, a mathematical superstructure model of hybrid membrane-cryogenic flash distillation was established. The optimal separation sequence was found by minimizing total annual cost subject to pre-set lower bounds on product purity and recovery. The optimized process features a membrane-after-flash configuration and a multistage membrane structure with excellent separation performance. Compared with the reference process, the optimized process not only guarantees product recovery and purity but also decreases total annual cost by almost 27%.

    Decomposition algorithm for PVC plant planning optimization based on piecewise linear approximation
    GAO Xiaoyong, FENG Zhenhui, WANG Yuhong, HUANG Dexian
    2018, 69(3):  953-961.  doi:10.11949/j.issn.0438-1157.20171532

    In previous work, a multiperiod planning optimization model of the whole production process was presented. Because PVC production by the calcium carbide method is highly energy intensive, piecewise linear models were introduced to approximate the nonlinear terms of the real process and a mixed integer linear programming (MILP) model was established. However, the model is difficult to solve because of its large scale and complex nonconvexity. Thus, a hierarchical decomposition algorithm is proposed to accelerate the computation. The problem is divided into two levels: in the first level, the operating states of the equipment, which correspond to the hard-to-solve binary variables of the plantwide planning model, are optimized; in the second level, the determined binary variables are fixed in the plantwide planning model and a reduced-scale scheduling optimization is executed. A case study verifies the effectiveness of the proposed algorithm. Computational results show that the algorithm accelerates the computation greatly, with production costs close to or even better than those reported in the previous work.

    Quality-related fault detection based on weighted mutual information principal component analysis
    ZHAO Shuai, SONG Bing, SHI Hongbo
    2018, 69(3):  962-973.  doi:10.11949/j.issn.0438-1157.20171009

    Quality-related fault detection has become a new research hotspot in recent years. It aims at a higher detection rate for quality-related faults and a lower alarm rate for quality-unrelated faults. Traditional principal component analysis (PCA) alarms on all faults and cannot satisfy these requirements, which causes excessive downtime and seriously affects normal production. Moreover, quality variables are usually difficult to measure online in actual industrial production. This paper therefore proposes weighted mutual information principal component analysis (WMIPCA) to solve these problems. First, the supervision relationship between process variables and quality variables is established via mutual information and Bayesian inference. Then the set of process variables containing the largest amount of quality-variable information is selected and a PCA model is built on it. After that, the principal components containing more quality-variable information are selected and used to establish the monitoring statistics. Finally, the feasibility and effectiveness of WMIPCA are verified by experiments.
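    Two building blocks named here, mutual information between process and quality variables and a PCA monitoring model on the selected variables, can be sketched as below. The selection threshold and T2 control limit are simplified and the Bayesian fusion and weighting steps are omitted; this is not the full WMIPCA procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.preprocessing import StandardScaler

# X: process variables (n x m), y: a quality variable, both from normal operation (placeholders)
X = np.random.randn(500, 20)
y = X[:, 0] + 0.5 * X[:, 3] + 0.1 * np.random.randn(500)

mi = mutual_info_regression(X, y)
selected = np.argsort(mi)[-8:]               # keep the most quality-relevant variables

scaler = StandardScaler().fit(X[:, selected])
pca = PCA(n_components=4).fit(scaler.transform(X[:, selected]))

def t2_statistic(x_new):
    """Hotelling T2 of a new sample in the quality-relevant PCA subspace."""
    t = pca.transform(scaler.transform(x_new[selected][None, :]))
    return float(np.sum(t ** 2 / pca.explained_variance_))
```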

    Computing and application analysis of maximum tolerable delay index for chemical reactor systems
    HUANG Weiqing, TAN Guiping, QIAN Yu
    2018, 69(3):  974-981.  doi:10.11949/j.issn.0438-1157.20171103

    Operation safety deserves special attention for chemical reactor systems subject to uncertainty and time delay, in order to prevent environmental pollution and human injury. Since the uncertainty deviation may be unadjustable during practical operation, the maximum tolerable delay of a chemical reactor system under uncertainty should be determined to guarantee operation safety. A delay tolerability index (DTI) problem and a computing framework are proposed to find the maximum tolerable delay for continuous chemical reactor systems. First, the continuous reactor system is linearized by Taylor series expansion and modelled as a transfer function via the Laplace transform. Second, the dynamic response of the PID-controlled system is tested, with the control action optimized by the nonlinear control design (NCD) package. Finally, the DTI problem is solved by a bisection search combined with dynamic response analysis. A continuous reactor-separator system is investigated to compute the maximum tolerable delay index when the expected uncertainty deviation is treated as unadjustable during operation. The analysis results demonstrate that the proposed framework provides a simple and effective tool for computing and applying the delay tolerability index, which is also useful for the operation safety and reliability of chemical reactor systems under uncertainty.
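    The bisection part of the framework is easy to illustrate. Below, response_is_acceptable is a hypothetical stand-in for the dynamic-response test (the paper evaluates a PID-controlled, NCD-tuned linearized reactor model); the bisection loop itself is the generic element.

```python
def max_tolerable_delay(response_is_acceptable, lo=0.0, hi=100.0, tol=1e-2):
    """Bisection search for the largest delay that still passes the response test.

    response_is_acceptable(delay) -> bool is assumed monotone: acceptable for
    small delays, unacceptable beyond some threshold.
    """
    if not response_is_acceptable(lo):
        return None                          # even the smallest delay fails
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if response_is_acceptable(mid):
            lo = mid                         # still acceptable, push the delay up
        else:
            hi = mid
    return lo

# example with a hypothetical test: acceptable up to a delay of 7.3 time units
dti = max_tolerable_delay(lambda d: d <= 7.3)
```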

    Causation mechanism analysis of urban haze based on FTA method: taking Tianjin as a case study
    HUANG Weiqing, XU Pingru, QIAN Yu
    2018, 69(3):  982-991.  doi:10.11949/j.issn.0438-1157.20171045

    Haze weather characterized by PM2.5 has recently become a serious environmental pollution problem in major Chinese cities, adversely affecting air quality and human health. Coal burning may be one of the critical contributors to haze pollution because coal is the most important energy source in China. Identifying the pollutant sources and causation mechanism of urban haze is significant for guiding atmospheric pollution prevention and control and for providing its theoretical basis. From a systematic-methodology perspective, the fault tree analysis (FTA) method is employed to analyze the causation mechanism and manage the risk of coal-burning-related urban haze in Tianjin. The important risk factors are identified and discussed using this deductive FTA method. After the fault tree "haze weather-excess emission of coal-fired exhausts" is established, qualitative and quantitative assessments based on the minimal cut sets and on the structure, probability and critical importance degrees of the risk factors in the fault tree are carried out for Tianjin. The analysis results show that "unreasonable energy structure" and "lack of sustainable cleaner energy" are the most important risk factors causing excess coal burning. This study may provide a new, scientific and effective tool for the causation mechanism analysis and risk management of haze pollution in China.

    Research on hot metal Si-content prediction based on LSTM-RNN
    LI Zelong, YANG Chunjie, LIU Wenhui, ZHOU Heng, LI Yuxuan
    2018, 69(3):  992-997.  doi:10.11949/j.issn.0438-1157.20171534

    Blast furnace ironmaking is a dynamic process with large delays and complex operating conditions. Traditional methods for predicting the silicon content of hot metal are mostly based on statistics or simple neural networks, leading to low accuracy. In this paper, a model based on the long short-term memory recurrent neural network (LSTM-RNN) is proposed to exploit the temporal dependencies within the time series. The independent variables are selected according to time-series trends and correlation coefficients, and the silicon content is then predicted from these inputs with the network parameters optimized automatically. To verify the constructed model, highly complex production data are used to compare the LSTM-RNN with a simple RNN model. The results show that the prediction error of the LSTM-RNN model is stable and its prediction accuracy is high.
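    A minimal LSTM regressor for one-step-ahead silicon-content prediction might look like the PyTorch sketch below. The window length, hidden size and number of input variables are placeholders, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class SiLSTM(nn.Module):
    """LSTM that maps a window of blast-furnace variables to the next Si content."""
    def __init__(self, n_inputs, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, window, n_inputs)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # use the last time step

model = SiLSTM(n_inputs=8)
x = torch.randn(16, 20, 8)                   # 16 windows of 20 past samples
y_hat = model(x)                             # predicted Si content, shape (16, 1)
loss = nn.MSELoss()(y_hat, torch.randn(16, 1))
loss.backward()
```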

    PCR-multi-case fusion method for setting optimal process indices of coking flue gas denitration
    LI Yaning, WANG Xuelei, TAN Jie
    2018, 69(3):  998-1007.  doi:10.11949/j.issn.0438-1157.20170807

    Because of the complex process mechanism, the frequently changing inlet flue gas indices induced by upstream coking conditions, and severe interference from unknown process factors, it is difficult to determine process indices with traditional exact mathematical models for the first domestic integrated coking flue gas desulfurization and denitration unit. A case-based reasoning method was proposed to optimize the indices of the coking flue gas denitration process. Meanwhile, abrupt changes of some correlation description indices caused by coke oven reversal may bias the results, because the traditional case reuse method describes the current working condition with a single feature. A case retrieval and reuse method based on principal component regression and multi-case fusion was therefore further proposed. Numerical simulation and application in a coking plant show that this method can appropriately set the operating parameters under different characteristic conditions, effectively control the NOx outlet concentration within the process specification, and greatly reduce the power consumption of the equipment.

    Simulation and optimization of shale gas purification process by sensitivity analysis
    LI Weida, LIU Linlin, ZHANG Lei, WANG Shaojing, DU Jian
    2018, 69(3):  1008-1013.  doi:10.11949/j.issn.0438-1157.20171442

    Removal of acid gases and moisture from raw shale gas is a basic requirement for downstream shale gas processing and utilization. Shale gas sweetening and dehydration processes were simulated in Aspen Plus v8.6 to provide theoretical and technical guidance for the shale gas purification industry. In the sweetening process, sensitivity analysis was used to analyze and optimize process parameters such as the feed stage for absorbent regeneration and the reflux ratio of the regenerator. In the dehydration process, a correlation between the water content and the water dew point of shale gas was established, which provides a basis for determining the dehydration specification. Moreover, stripping gas was employed to regenerate the absorbent to a high concentration and to meet stringent dehydration requirements.

    Performance assessment method of chemical process based on multi-space total projection of latent structures
    DU Yupeng, WANG Zhenlei, WANG Xin
    2018, 69(3):  1014-1021.  doi:10.11949/j.issn.0438-1157.20171477

    An online performance assessment method based on multi-space total projection of latent structures (MsT-PLS) was proposed for assessing the operating status of chemical processes with an "offline modeling, online assessment" strategy. After the whole historical input data space is decomposed, interference information unrelated to the quality variable is eliminated by a multi-space basis vector extraction method. In offline modeling, a classification model of different operating performance grades is established in the input multi-space related to the quality variable. During online assessment, a sliding-time-window assessment unit is used to divide process performance into steady and transition performance grades and to match the degree of similarity between online data and historical performance grades. According to the relative contributions of the process variables, the variables that decisively influence performance are identified and their contributions analyzed, providing a reference for identifying the causes of system degradation. Application to performance assessment of an ethylene cracking process shows that the proposed method can accurately assess system performance online.

    Design of multi-period heat exchanger networks for overdesign control
    KANG Lixia, LIU Yongzhong
    2018, 69(3):  1022-1029.  doi:10.11949/j.issn.0438-1157.20171132

    A practical multi-period heat exchanger network (HEN) design must satisfy not only the operating requirements of several discrete sub-periods, but also flexibility requirements in all sub-periods that compensate for potential parameter fluctuations within each sub-period. However, overdesign arises in most cases where the heat transfer areas are determined by adding margins or by the maximum-area concept during multi-period HEN design, leading to a waste of resources and capital. To address these problems, a multi-period HEN design method considering overdesign control is proposed, which takes both multi-period operation and single-period flexibility requirements into account. In the proposed method, an initial multi-period HEN is first obtained by single-period HEN optimization. A HEN modification model, based on the stage-wise superstructure model and the flexibility index model, is then developed and solved to determine the optimal modifications of the initial multi-period HEN in each sub-period. The final multi-period HEN is obtained by combining the initial multi-period HEN with the corresponding modifications. The applicability and effectiveness of the proposed method are verified through a comparison of the results obtained in this work with those in the literature.

    Algebraic analysis modeling method for complex reactions
    ZHOU Leihao, LIU Guilian
    2018, 69(3):  1030-1037.  doi:10.11949/j.issn.0438-1157.20171224

    In chemical production design, reactors are usually designed only on the basis of experimental data and practical experience, especially for complicated chemical reactions. Without an effective model that simulates and predicts the behavior of the reaction system, the system is often not operated under optimal conditions, resulting in increased costs or a decreased yield of the desired product. A model is therefore needed to optimize the system. Starting from the atomic level, this paper uses an algebraic analysis modeling method to explore the relations among the components of a complicated chemical reaction, to seek all possible reaction steps, and to establish a plausible reaction network. The reaction model is then implemented in Aspen, and the best approximation describing the complicated chemical reaction is analyzed and chosen. Together with its implications, the model allows optimization of the design and even control over the direction of the reaction. In this paper, a coal gasification reaction is optimized by this method.

    Optimization method for hydrogen network integration considering coupled sinks and its application
    HUANG Lingjun, LI Wei, WANG Yingjia, LIU Guilian, WANG Zhiwei
    2018, 69(3):  1038-1045.  doi:10.11949/j.issn.0438-1157.20171243

    In a hydrogen network, there is generally a pair of sink and source connected to the same reactor. Such a sink-source pair is coupled and affected by the reactor operating parameters, so the variation of these streams caused by the equipment must be considered in hydrogen network integration. Based on an analysis of the relation between the coupled sink and source and the hydrogen utility consumption of the network, an equation is derived to describe the relation among the hydrogen utility adjustment (HUA), the hydrogen consumption of the reactor, and the coupled sink and source. On this basis, a graphical method is developed to optimize the reactor operating parameters and integrate the hydrogen network. A case study shows that the proposed method is simple and easy to understand, and gives clear insight into the interrelation between the HUA and the inlet temperature of a reactor.

    Online monitoring of aspirin crystallization process
    LI Lanju, LI Xiuxi, XU San
    2018, 69(3):  1046-1052.  doi:10.11949/j.issn.0438-1157.20171147

    Size distribution and geometry are key properties of crystal products, affecting product quality and the downstream processes of filtration, drying, storage and transportation. An on-line process analysis system, including an ultrasonic particle size analyzer, attenuated total reflection Fourier transform infrared spectroscopy (ATR-FTIR), a turbidity analyzer and a two-dimensional imaging system, was developed to monitor the crystallization of aspirin from ethanol solution at various stirring and cooling rates. The results show that a lower cooling rate or faster stirring produces a larger amount of fine crystals, while a higher cooling rate yields product with larger mean size and higher aspect ratio (AR). Therefore, adjusting the cooling and stirring rates is an effective way to control the particle size distribution and shape of aspirin crystals.

    Monitoring and diagnosis of abnormal condition in ethylene production process based on SVM-BOXPLOT
    HUA Li, YU Haichen, SHAO Cheng, GONG Shixin
    2018, 69(3):  1053-1063.  doi:10.11949/j.issn.0438-1157.20170907

    Ethylene is an important raw material for chemical production and its demand has greatly increased, but its production consumes large amounts of energy. The operating status of ethylene production is directly related to energy efficiency and hence to the economic benefit of the enterprise, so intelligent identification of ethylene production operating conditions is of great significance for energy saving and consumption reduction. Therefore, a comprehensive method for abnormality identification in ethylene production is presented, using an IPSO-optimized SVM-BOXPLOT method based on the key energy-efficiency indicators: ethylene yield, propylene yield and comprehensive energy consumption. Specifically, the data dimensionality is reduced based on a deep analysis of ethylene production technology and of the data themselves. The working conditions are then classified by SVM to narrow the scope of abnormality recognition, and abnormal data are finally identified by BOXPLOT. Combined with an on-line monitoring system, the scheme is applied to the production of a petrochemical enterprise. The monitoring and diagnosis scheme for abnormal working conditions shows higher model precision and faster convergence. The method not only realizes the monitoring and diagnosis of abnormal working conditions in ethylene production, but also meets the technological requirements of actual operating conditions, ensuring real-time and accurate abnormality identification.
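    The two stages named here, an SVM that narrows down the operating-condition class followed by a boxplot (IQR) rule that flags abnormal samples of a key indicator, can be sketched as follows. The IPSO tuning of the SVM hyperparameters is omitted, and the features, labels and thresholds are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def boxplot_outliers(values, k=1.5):
    """Classic boxplot rule: flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

# stage 1: classify operating conditions (placeholder training data)
X = np.random.randn(300, 5)                  # energy-efficiency related features
cond = np.random.randint(0, 3, size=300)     # known condition labels
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, cond)

# stage 2: within each predicted condition, screen a key indicator with the boxplot rule
yield_ethylene = np.random.normal(30, 1.5, size=300)
labels = clf.predict(X)
abnormal = np.zeros(300, dtype=bool)
for c in np.unique(labels):
    idx = labels == c
    abnormal[idx] = boxplot_outliers(yield_ethylene[idx])
```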

    Prediction research and application of a combination model based on FEEMD-AE and feedback extreme learning machine
    XU Yuan, ZHANG Wei, ZHANG Mingqing, HE Yanlin
    2018, 69(3):  1064-1070.  doi:10.11949/j.issn.0438-1157.20171399

    To facilitate feature extraction and prediction of non-stationary time series in industrial processes, a combination model was proposed based on fast ensemble empirical mode decomposition (FEEMD), approximate entropy (AE), and a feedback extreme learning machine (FELM). First, FEEMD decomposes the complex non-stationary time series into relatively stationary intrinsic mode function components and a residual, ordered from high to low frequency. Second, the complexity of these FEEMD components is reduced by AE complexity calculation and feature reconstruction. Third, a feedback mechanism is introduced into the traditional ELM structure by creating a feedback layer between the output layer and the hidden layer to memorize the hidden-layer output, calculate its trend change rate, and dynamically update the feedback-layer output, forming a feedback extreme learning machine (FELM) that predicts the output of the nonlinear dynamic system at the next time point. Finally, the combination model was tested on a UCI standard data set and a purified terephthalic acid (PTA) solvent system. Simulation results show that the proposed method achieves high prediction accuracy and can provide guidance for the operation optimization of actual production processes.

    Sequence-decision PID parameter tuning approach towards control system decoupling
    GAO Yue, SU Chong, LI Hongguang
    2018, 69(3):  1071-1080.  doi:10.11949/j.issn.0438-1157.20171478

    Traditional decoupling control methods are difficult to apply industrially because they rely heavily on model identification, and re-designing controllers for mature systems is also difficult. Besides, it is hard to extract and inherit expert experience from loop-decoupling control. Based on equivalent transfer function theory, a sequence-decision PID parameter tuning method was developed, in which the control sequences of experienced operators are effectively extracted from historical data. When control performance deteriorates due to loop coupling, junior staff can operate by inheriting the digitalized PID parameter tuning experience of experienced engineers. The effectiveness and accuracy of the method were verified on two coupled loop systems in the Matlab and Aspen simulation platforms.

    Full-cycle operation optimization of acetylene hydrogenation reactor
    XIE Fuming, XU Feng, LIANG Zhishan, LUO Xionglin, SHI Fengyong
    2018, 69(3):  1081-1091.  doi:10.11949/j.issn.0438-1157.20170844

    The acetylene hydrogenation reactor is an important unit operation in the ethylene process, and its operation strongly influences the yield and purity of the ethylene product. Within an operating cycle, the catalyst activity gradually declines with time and the operating point slowly drifts from the initial steady-state design point, so the ethylene yield drops. To implement full-cycle operation optimization, a kinetic model of catalyst deactivation considering green oil accumulation is developed from deactivation mechanism research, and a two-dimensional heterogeneous dynamic model of the acetylene hydrogenation reactor with the modified catalyst deactivation equation is then established. Full-cycle simulation in gPROMS verifies the correctness of the modified model, and the full-cycle operation optimization is solved by a Matlab optimizer in an upper layer interacting with the gPROMS simulation. The optimization results show that full-cycle operation optimization is superior to fixed-value temperature compensation in both economic benefit and reactor regeneration cycle, and that simultaneously optimizing the inlet temperature and hydrogen feed offers an even greater advantage.

    Multivariable control system based on PID diagonal dominant compensation matrix
    WANG Qihang, XU Feng, LUO Xionglin
    2018, 69(3):  1092-1101.  doi:10.11949/j.issn.0438-1157.20171137

    Chemical processes are multivariable systems whose input and output variables are often coupled, so a conventional decentralized PID control system can hardly maintain good control performance. To weaken the coupling in a multivariable system, a PID dynamic pre-compensation matrix was designed by weighted optimization over multiple frequency points with a diagonal dominance design criterion. A controller for the compensated, diagonally dominant system was then designed using the direct Nyquist array method. According to the Nyquist stability criterion for diagonally dominant systems, the stable parameter range of the feedback matrix was determined by plotting the dominance degree curve and the Gershgorin bands. After that, the dynamic compensator was designed following single-input single-output design principles, so that the closed-loop system meets the dynamic control performance requirements. Case studies show that centralized control systems designed by this method are more advantageous, simpler and easier to design than decentralized control systems.
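    Row diagonal dominance of a compensated transfer-function matrix can be checked numerically by comparing, at each frequency, the diagonal magnitude with the Gershgorin radius (sum of off-diagonal magnitudes). The sketch below assumes the compensated matrix Q(s) is available as a callable; the 2x2 example and pre-compensator K are hypothetical and purely illustrative.

```python
import numpy as np

def dominance_ratio(Q, omegas):
    """For each frequency and each row, |q_ii| / sum_{j!=i} |q_ij| (>1 means dominant)."""
    ratios = []
    for w in omegas:
        M = Q(1j * w)                            # evaluate the compensated TFM at s = jw
        diag = np.abs(np.diag(M))
        radius = np.abs(M).sum(axis=1) - diag    # Gershgorin radii per row
        ratios.append(diag / np.maximum(radius, 1e-12))
    return np.array(ratios)                      # shape (n_freq, n_outputs)

# hypothetical 2x2 example: Q(s) = G(s) @ K with a constant pre-compensator K
G = lambda s: np.array([[1 / (s + 1), 0.5 / (s + 2)], [0.3 / (s + 3), 1 / (s + 1)]])
K = np.array([[1.0, -0.4], [-0.2, 1.0]])
r = dominance_ratio(lambda s: G(s) @ K, np.logspace(-2, 2, 50))
dominant_everywhere = bool((r > 1).all())
```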

    Inverse Nyquist array for multivariable control system using constant diagonal dominant pre-compensation matrix
    XU Feng, WANG Qihang, LUO Xionglin
    2018, 69(3):  1102-1113.  doi:10.11949/j.issn.0438-1157.20171138

    Coupling exists between the input and output variables of the multivariable systems found in chemical processes. In control system design, the transfer function matrix is usually first made diagonally dominant and several single-loop controllers are then deployed. Since diagonal dominance is easy to assess for the inverse of the transfer function matrix, a pseudo diagonal dominance method was used to develop a constant diagonally dominant compensation matrix for the inverse transfer function. First, diagonal dominance is obtained by minimizing the sum of squared magnitudes of the off-diagonal elements in each row of the inverse open-loop transfer function matrix at one or more frequency points. Then, the inverse Nyquist array design method is adopted to design the controller for the compensated system. Based on the inverse Nyquist stability criterion, the degree of diagonal dominance is determined from the dominance curve and the Gershgorin diagram, and the parameter range of the feedback matrix is also selected from the Gershgorin diagram. Dynamic compensators are designed following single-variable system methods so that the system meets the dynamic control performance requirements. Finally, three examples show that the design method is simple and the control performance is excellent.

    A process monitoring method based on informative principal component subspace reconstruction
    CANG Wentao, YANG Huizhong
    2018, 69(3):  1114-1120.  doi:10.11949/j.issn.0438-1157.20171369

    Principal component analysis (PCA), a classical feature-extraction algorithm, has been widely used in multivariate process monitoring. Conventional PCA selects the principal components with larger variance in order to retain more information from the modeling samples. However, when the process information changes, principal components with smaller variance may show more obvious variation, meaning they are more informative and more beneficial for fault detection. Hence, a new process monitoring method based on informative principal component subspace reconstruction (Info-PCA) was proposed. Info-PCA calculates the change rates of the cumulative T2 of process data along different principal component directions and reconstructs the principal component subspace from the components with larger change rates, on which a statistical process monitoring model is then built. Finally, the feasibility and validity of the Info-PCA monitoring method were demonstrated in a case study of a chemical process.

    Industrial process soft sensor method based on deep learning ensemble support vector machine
    MA Jian, DENG Xiaogang, WANG Lei
    2018, 69(3):  1121-1128.  doi:10.11949/j.issn.0438-1157.20171050

    Soft sensor modeling based on the support vector machine (SVM) has been widely used in industrial process control. However, the traditional support vector machine directly models the original measured variables without fully extracting the intrinsic information in the data that could improve prediction accuracy. To address this problem, a soft sensor modeling method based on a deep ensemble support vector machine (DESVM) is proposed. First, a deep belief network (DBN) is used for deep information mining to extract intrinsic data features. Then an ensemble learning strategy based on the Bagging algorithm is introduced to construct an ensemble support vector machine model on these deep features, which enhances the generalization ability of the soft sensor prediction model. Finally, applications to a numerical system and real industrial data are used to validate the proposed method. The results show that it effectively improves the prediction accuracy of the support vector machine soft sensor model and better predicts changes in the process quality index.
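    The ensemble part, bagging several support vector regressors on deep features, is easy to reproduce with scikit-learn. The deep-belief-network feature extractor is not shown; below, Z_train stands for features assumed to have been extracted already by whatever deep model is used.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.svm import SVR

# Z_train: deep features extracted from raw process variables (placeholder),
# y_train: quality variable to be soft-sensed
Z_train = np.random.randn(400, 10)
y_train = 2 * Z_train[:, 0] + 0.1 * np.random.randn(400)

desvm = BaggingRegressor(SVR(C=10.0, epsilon=0.01),
                         n_estimators=15,    # 15 bootstrap-trained SVR sub-models
                         max_samples=0.8,
                         random_state=0)
desvm.fit(Z_train, y_train)
y_hat = desvm.predict(Z_train[:5])           # averaged prediction of the sub-models
```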

    Partial approximate least absolute deviation for multivariable nonlinear system identification
    XU Baochang, ZHANG Hua, WANG Xuemin
    2018, 69(3):  1129-1135.  doi:10.11949/j.issn.0438-1157.20171518

    Based on approximate least absolute deviation and principal component analysis, partial approximate least absolute deviation is developed for nonlinear system identification, aimed at multivariable Hammerstein models with linearly correlated input signals. An approximate least absolute deviation objective function is established by introducing a deterministic function to replace the absolute value under certain conditions. The proposed method overcomes the large squared residuals of the least squares criterion when the identification data are disturbed by impulse noise obeying a symmetric alpha-stable (SαS) distribution. By using principal component analysis to eliminate the linear correlation among the elements of the data vector of the nonlinear system, a unique solution of the model parameters is easily obtained. Simulations show that, under these conditions, the proposed method is more robust than the partial least squares (PLS) method in identifying multivariable Hammerstein models with white noise and impulse noise.
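    The key trick, replacing the non-differentiable absolute value in the LAD criterion with a smooth surrogate before fitting, can be illustrated as below. The smoothing function sqrt(e^2 + delta) is one common choice and may differ from the deterministic function used in the paper, and the model here is plain linear regression rather than a Hammerstein structure.

```python
import numpy as np
from scipy.optimize import minimize

def approx_lad_fit(Phi, y, delta=1e-3):
    """Minimise sum sqrt(e_i^2 + delta), a smooth surrogate of sum |e_i|."""
    def cost(theta):
        e = y - Phi @ theta
        return np.sum(np.sqrt(e ** 2 + delta))
    theta0 = np.linalg.lstsq(Phi, y, rcond=None)[0]   # least-squares starting point
    return minimize(cost, theta0, method="BFGS").x

# data with heavy-tailed (impulsive) noise: LAD-type fits are far less distorted
rng = np.random.default_rng(1)
Phi = rng.standard_normal((200, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = Phi @ theta_true + rng.standard_t(df=1.2, size=200)   # heavy-tailed stand-in for SaS noise
theta_hat = approx_lad_fit(Phi, y)
```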

    Spontaneous path selection for hazardous chemical transportation based on real-time traffic condition
    XU Wenxing, BIAN Weibin, WANG Wanhong, LIU Cai, ZHUANG Jun
    2018, 69(3):  1136-1140.  doi:10.11949/j.issn.0438-1157.20171522

    With more automobiles on the road, the chance of accidents involving hazardous chemical transportation vehicles increases. To handle complex road conditions and improve transportation efficiency and service quality, real-time traffic conditions were introduced into the routing of hazardous chemical vehicles and a method of spontaneous path re-planning was proposed. First, historical traffic data are used for initial route planning. Then, as the vehicle moves forward, real-time road information for the next segment of the planned path is updated and an alternative local path may be selected according to the traffic conditions. The iteration continues until the destination is reached. In this way, global planning is combined with spontaneous local optimization of the transportation path for hazardous chemical vehicles. The effectiveness of the method was validated on route planning between the Beijing Petroleum Co. Ltd of China Aviation Oil and the Beiyuan gas station of China Sinopec.

    FGCN modeling on iron precipitation process in mineral goethite
    CHEN Ning, ZHOU Jiaqi, GUI Weihua, WANG Lei
    2018, 69(3):  1141-1148.  doi:10.11949/j.issn.0438-1157.20171443

    Iron precipitation consists of several continuous reactors and involves a series of complex chemical reactions such as oxidation, hydrolysis and neutralization. Owing to its strong nonlinearity and uncertainty, it is difficult to establish an accurate mathematical model of the iron precipitation process. A modeling method based on a fuzzy gray cognitive network (FGCN) was proposed, built from expert experience and historical data, with the weights learned by a nonlinear Hebbian learning algorithm with terminal constraints. Analysis of the system at various levels of uncertainty shows that the FGCN can effectively simulate complex industrial systems in highly uncertain environments; the simulated system converges to a gray-number equilibrium point with very small or zero gray scale, from which an accurate control output is obtained by a whitening function.

    Multi-delay identification by trend-similarity analysis and its application to hydrocracking process
    WANG Yalin, XIA Haibing, YUAN Xiaofeng, GUI Weihua
    2018, 69(3):  1149-1157.  doi:10.11949/j.issn.0438-1157.20171188

    Multiple time delays widely exist among variables between the production units of complex industrial processes and are difficult to detect. A trend-similarity analysis was proposed to identify such multiple delays. The trend similarity is defined from the derivatives obtained by polynomial least-squares fitting of strongly correlated key variables between production units. Multi-delay identification is formulated as minimizing the trend-similarity measure after shifting by candidate sampling delays; the L2 norm is used to quantify the trend-similarity vector, so that multi-delay identification is transformed into an L2-norm minimization problem. The optimal sampling delays are determined by fast optimization with an improved adaptive particle swarm algorithm. The proposed method was applied to identify variable sampling delays in a hydrocracking process, and a prediction model for the flash point of diesel fuel was then established with locally weighted kernel principal component regression based on the identified delays. Experimental results show that the multi-delay prediction model improves accuracy by 19.05%, which verifies the effectiveness of the proposed multi-delay identification method.
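    A bare-bones version of delay identification by trend matching: shift one variable's trend sequence over candidate delays and pick the shift that minimizes the L2 distance to the other variable's trend. The paper uses polynomial fitting and an improved PSO to search; here a crude finite-difference trend and an exhaustive grid stand in for both.

```python
import numpy as np

def identify_delay(x, y, max_delay):
    """Return the delay d (in samples) minimizing ||trend(x) - shifted trend(y)||_2."""
    tx, ty = np.diff(x), np.diff(y)          # crude trend via first differences
    n = len(tx) - max_delay                  # fixed comparison window for a fair cost
    best_d, best_cost = 0, np.inf
    for d in range(max_delay + 1):
        cost = np.linalg.norm(tx[:n] - ty[d:d + n])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# toy check: y is x delayed by 15 samples plus noise
t = np.linspace(0, 20, 600)
x = np.sin(t) + 0.02 * np.random.randn(600)
y = np.roll(x, 15) + 0.02 * np.random.randn(600)
print(identify_delay(x, y, max_delay=40))    # expected to be close to 15
```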

    Kinetic model optimization of n-butane isomerization by improved biogeography optimization algorithm
    LUO Ruihan, CHEN Juan, WANG Qi
    2018, 69(3):  1158-1166.  doi:10.11949/j.issn.0438-1157.20171083

    To overcome the tendency of the biogeography-based optimization (BBO) algorithm toward premature convergence, a three-dimensional variation biogeography-based optimization (Tdv-BBO) algorithm was proposed. By introducing three-dimensional variation into BBO, the improved algorithm remedies the lack of search capability in the late stage of BBO and accelerates its convergence. Tdv-BBO was then applied to optimizing and fitting the kinetic model parameters of n-butane isomerization. Simulation results show that Tdv-BBO improves population diversity, enhances the search capability, and speeds up optimization. The kinetic model optimized by Tdv-BBO has high precision and good generalization ability, so Tdv-BBO provides an effective technique for modeling n-butane isomerization.

    Recursive optimization of batch processes based on load cosine similarity in latent variable space
    LIU Xiaofeng, LUAN Xiaoli, LIU Fei
    2018, 69(3):  1167-1172.  doi:10.11949/j.issn.0438-1157.20171174

    A recursive optimization strategy using load cosine similarity in a unitized latent variable space was proposed to address the loss of variable-correlation information when batch processes are optimized by principal component similarity. An extended matrix of time segments and index variables is decomposed by principal component analysis; the resulting information redistribution yields an orthogonal, unitized latent variable space of principal components and a non-unitized loading matrix that retains much more variable information. The load cosine similarity between time segments and index variables in the latent variable space, together with the index increment between time segments, is calculated to recursively correct the operation trajectory. The non-unitized loading matrix obtained from principal component decomposition not only reduces the loss of variable-correlation information in the latent variable space but also simplifies the recursive algorithm for updating the operation trajectory. Finally, the effectiveness of the proposed method was verified by the batch optimization of a chemical crystal purification process.

    A novel fault diagnosis method based on multilayer optimized PCC-SDG
    DONG Yuxi, LI Lening, TIAN Wende
    2018, 69(3):  1173-1181.  doi:10.11949/j.issn.0438-1157.20171104

    Chemical process failures are often caused by a series of variables with a chain effect. This study uses variable correlation characteristics, the PCC (Pearson correlation coefficient) statistical index, and the SDG (signed directed graph) to describe the causal relationships among variables, and then proposes a PCC-SDG fault diagnosis method based on a multilayer optimization structure. Taking the topological network structure of the whole process as a reference, the method first performs an initial optimization of the selected variables. An optimal PCC-SDG network is then constructed on the variables with large PCA (principal component analysis) weights in the multilayer correlation coefficient set. After that, a rule based on the aggregated weighting coefficient Q is established to identify process faults. Application to the Tennessee Eastman process illustrates that the PCC-SDG method can perform fault detection and isolation effectively. Because its modeling and diagnosis procedures are simple and the SDG can readily be probed for the root cause, the proposed method is advantageous for process supervision.

    Control of dissolved oxygen in wastewater treatment by interval type-2 fuzzy neural networks
    HAN Honggui, LIU Zheng, QIAO Junfei
    2018, 69(3):  1182-1190.  doi:10.11949/j.issn.0438-1157.20171454

    An intelligent controller based on interval type-2 fuzzy neural networks (IT2FNN) was proposed for controlling the dissolved oxygen (DO) concentration in municipal wastewater treatment processes. First, the IT2FNN was applied to design a DO concentration controller. Second, an adaptive learning algorithm was used to adjust the controller parameters online, improving the self-adaptability of the IT2FNN-based DO controller. Finally, the controller was tested in the benchmark simulation model no. 2 (BSM2). The experimental results demonstrate that the controller can accurately control the DO concentration in the fifth unit and maintain excellent control performance.

    Structure design for recurrent RBF neural network based on recursive orthogonal least squares
    QIAO Junfei, MA Shijie, YANG Cuili
    2018, 69(3):  1191-1199.  doi:10.11949/j.issn.0438-1157.20170771

    Aiming at the difficulty of making the structure of the recurrent radial basis function (RRBF) neural network self-adaptive, this paper proposes a structure design method based on the recursive orthogonal least squares (ROLS) algorithm. First, ROLS is used to calculate the contribution and loss function of the hidden-layer neurons, which determines whether neurons are added or grouped as inactive, and the network topology is adjusted accordingly. At the same time, singular value decomposition (SVD) is applied to determine the best number of hidden-layer neurons so that the inactive group can be deleted, which effectively resolves the redundancy and poor self-adaptability of the RRBF network structure. Second, the gradient descent algorithm is used to update the RRBF network parameters to ensure accuracy. Finally, experiments on Mackey-Glass time series prediction, nonlinear system identification, and dynamic modeling of key water quality parameters in a wastewater treatment process demonstrate the feasibility and effectiveness of the proposed structure design method.

    Monitoring non-Gaussian and non-linear batch process based on multi-way kernel entropy component analysis
    CHANG Peng, QIAO Junfei, WANG Pu, GAO Xuejin, LI Zheng
    2018, 69(3):  1200-1206.  doi:10.11949/j.issn.0438-1157.20171329

    Multiway kernel independent component analysis (MKICA) has been widely used for monitoring non-Gaussian and nonlinear processes. It achieves a nonlinear extension of linear independent component analysis (ICA) through KPCA whitening; however, KPCA whitening only maximizes the retained information and ignores the cluster structure of the data. To solve this problem, kernel entropy component analysis (KECA) was proposed to replace KPCA whitening in process monitoring. First, the three-dimensional batch data are unfolded into a two-dimensional matrix. Second, the nonlinearity of the data is handled during KECA whitening. Third, an ICA monitoring model is established for monitoring non-Gaussian production processes. The method was applied to a simulated and an actual industrial penicillin fermentation process, and comparison with the MKICA method shows its effectiveness.
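    Kernel entropy component analysis differs from KPCA in how components are ranked: instead of the largest eigenvalues, it keeps the eigenpairs that contribute most to a Renyi entropy estimate, proportional to lambda_i * (1^T e_i)^2. A compact sketch of that selection step on an uncentered RBF kernel matrix is given below; it is a generic KECA illustration, not the paper's multiway monitoring model.

```python
import numpy as np

def keca_scores(X, n_components, gamma=0.5):
    """Project X onto the kernel entropy components of an (uncentered) RBF kernel."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    lam, E = np.linalg.eigh(K)               # eigenvalues in ascending order
    ones = np.ones(len(X))
    entropy = lam * (E.T @ ones) ** 2        # Renyi-entropy contribution of each eigenpair
    idx = np.argsort(entropy)[::-1][:n_components]
    # score of each sample along the selected entropy components
    return E[:, idx] * np.sqrt(np.clip(lam[idx], 0, None))

Z = keca_scores(np.random.randn(100, 6), n_components=3)
```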

    The best priority and variable neighborhood search algorithm for production furnace grouping in NdFeB enterprises
    LIU Yefeng, CHAI Tianyou
    2018, 69(3):  1207-1214.  doi:10.11949/j.issn.0438-1157.20171578

    Production furnace grouping is critical in the routine operation of NdFeB processes, and the grouping results directly affect production efficiency. Based on the actual requirements of a production unit, a mathematical model with a grouping performance index, constraint conditions and decision variables was established, and a best-priority and variable neighborhood search algorithm was proposed for furnace grouping. The algorithm has three components: a multi-layer quick sorting algorithm to determine furnace sequences, the best-priority and variable neighborhood search itself, and a heuristic based on production rules for the various grades of raw materials in stock. When the algorithm was used for furnace grouping of 20 production work orders, the sum of deviations in delivery time, product grade and production priority decreased from 58 to 42, a reduction of 27.59%, and the satisfaction rate of raw material preparation increased from 4 to 6, an increase of 50%. Compared with manual furnace grouping of 40 production work orders, the algorithm used two fewer furnace batches. Further comparison with a discrete particle swarm algorithm, an acoustic variable neighborhood search algorithm, and an adaptive variable neighborhood search algorithm demonstrated the effectiveness of the proposed algorithm and the validity of the mathematical model.

    Prediction of fine particulate matter concentrations based on generalized hidden Markov model
    ZHANG Hao, YU Junyi, LIU Xiaohui, LEI Hong
    2018, 69(3):  1215-1220.  doi:10.11949/j.issn.0438-1157.20171113

    In recent years, severe haze pollution episodes have occurred frequently in China, causing heavy losses to the national economy and to residents' health. Accurate early warning of severe haze pollution episodes can not only remind people to avoid the hazards, but also give the government enough time for emergency management before air quality substantially improves. Considering the non-Gaussian distribution of PM2.5 precursors and meteorological factors, and the limitation of traditional hidden Markov models (HMMs) that the number of hidden states must be known, generalized hidden Markov models (GHMMs) were employed to predict PM2.5 concentrations at 11 nationally controlled monitoring sites (excluding the Dingling site) in Beijing from January 2013 to January 2017. Data from January 2013 to December 2015 were used to train the GHMM models and data from January 2016 to January 2017 were used to validate them. The same data were also used to train a traditional HMM with 2 hidden states and 6 Gaussian distributions for comparison with the GHMM. Results show that the true prediction rate of the GHMMs is significantly higher than that of the traditional HMMs for samples above 250 μg·m−3, while the two models perform similarly for samples below 150 μg·m−3.

    Soft-sensing modeling of marine protease fermentation process based on improved PSO-RBFNN
    ZHU Xianglin, LING Jing, WANG Bo, HAO Jianhua, DING Yuhan
    2018, 69(3):  1221-1227.  doi:10.11949/j.issn.0438-1157.20170598

    Some key parameters in the fermentation process of marine protease (MP) are difficult to measure online, while off-line measurement suffers from large time delays and a risk of bacterial contamination. A soft sensor modeling method based on an improved PSO-RBFNN for the MP fermentation process was proposed. First, an exponentially decreasing inertia weight (EDIW) strategy was used to improve the PSO algorithm and overcome the drawbacks of PSO with fixed or adaptive inertia weight, which easily falls into local minima, converges slowly in the late stage of evolution, and has weak global search ability. The improved PSO algorithm was then used to optimize the connection weights of the RBFNN and determine its topology, and the RBFNN soft sensor model was finally constructed from the input/output vectors of the MP fermentation process. Simulation results show that the training time of the EDIW-PSO-RBFNN model was reduced by about 40%, and the prediction accuracy of the model was improved by more than 3%.
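    The exponentially decreasing inertia weight itself is a one-line schedule; one common exponential-decay form is shown inside a bare-bones PSO loop below. The exact decay law and the RBFNN weight encoding used in the paper are not reproduced here; this only illustrates how the inertia weight enters the velocity update.

```python
import numpy as np

def ediw_pso(f, dim, n_particles=30, iters=200, w_start=0.9, w_end=0.4, c1=2.0, c2=2.0):
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (n_particles, dim))
    V = np.zeros_like(X)
    pbest, pbest_f = X.copy(), np.array([f(x) for x in X])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        # exponentially decreasing inertia weight (one common EDIW form, assumed here)
        w = w_end + (w_start - w_end) * np.exp(-5.0 * t / iters)
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        fx = np.array([f(x) for x in X])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = X[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best_x, best_f = ediw_pso(lambda x: np.sum(x ** 2), dim=5)   # toy objective
```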

    Fault detection method based on minimum sufficient statistics pattern analysis
    SUN Shuanzhu, DONG Shun, JIANG Yefeng, ZHOU Ting, LI Yiguo
    2018, 69(3):  1228-1237.  doi:10.11949/j.issn.0438-1157.20171054

    Recently, statistics pattern analysis (SPA) has found widespread application in fault detection. Its essence is to use a matrix of data statistics, rather than the original data matrix, for process monitoring. However, SPA lacks a reasonable method for choosing the statistics, and complex nonlinear interactions exist among these statistics, so fault detection cannot be handled by the ordinary principal component analysis (PCA) algorithm. To solve these problems, a minimum sufficient statistics pattern analysis (MSSPA) fault detection method is proposed. The method first eliminates the correlations among variables by an orthogonal transformation of the raw data matrix, then estimates the probability density function of individual variables or the joint probability density function of multiple variables to obtain the minimum sufficient statistics of the original data, and constructs the statistics matrix from them. Introducing minimum sufficient statistics also helps handle non-Gaussian distributions of the raw data. Finally, the feasibility and validity of the method for fault detection are verified on the Tennessee Eastman (TE) process.

    Application of variation coefficient to fast detection on surface defects of industrial products
    LI Chengfei, TIAN Guo, DONG Chaojun, JI Dengqing
    2018, 69(3):  1238-1243.  doi:10.11949/j.issn.0438-1157.20171486

    To improve the accuracy and speed of detecting surface defects on industrial products, a fast detection approach based on machine vision was proposed. With the introduction of the variation coefficient, a difference image is obtained by differencing the test image against the template image; the threshold is determined from the variation coefficient of the difference image and the defects are then located by dividing the image into blocks. Experimental results on wallpaper surface defect detection show that the proposed method improves detection accuracy and robustness and greatly reduces the false detection rate.
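    The core of the approach, differencing the test image against a defect-free template and setting a threshold from statistics of the difference image, can be sketched with NumPy alone. The exact way the paper derives the threshold from the variation coefficient is not reproduced; the mean plus k·std rule below is a hypothetical stand-in.

```python
import numpy as np

def detect_defects(test_img, template_img, k=3.0, block=16):
    """Flag blocks of |test - template| whose mean exceeds an adaptive threshold."""
    diff = np.abs(test_img.astype(float) - template_img.astype(float))
    cv = diff.std() / max(diff.mean(), 1e-6)         # variation coefficient of the difference image
    threshold = diff.mean() + k * diff.std()         # assumed thresholding rule (illustrative)
    h, w = diff.shape
    defects = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            if diff[i:i + block, j:j + block].mean() > threshold:
                defects.append((i, j))               # top-left corner of a defective block
    return cv, defects

# toy usage: identical images -> no defects reported
img = np.random.randint(0, 256, (128, 128))
print(detect_defects(img, img))
```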

    Soft sensor of wet ball mill load parameter based on domain adaptation with manifold regularization
    DU Yonggui, LI Sisi, YAN Gaowei, CHENG Lan
    2018, 69(3):  1244-1251.  doi:10.11949/j.issn.0438-1157.20170918

    Aiming at the challenge of measuring the key load parameters of ball mills under multiple operating conditions, a soft sensor model based on domain adaptation with manifold regularization (DAMR) is proposed for measuring wet ball mill load parameters. First, a feature transformation matrix is found by jointly using manifold constraints, variance maximization and the maximum mean discrepancy. Then, the feature information of the source domain and the target domain is projected into a common subspace. Finally, a model established in this subspace is used to predict the critical load parameters. The results show that the proposed method can predict the critical load parameters of a wet ball mill under unknown conditions with high precision, and it offers a useful reference for soft sensing under multiple operating conditions and for process monitoring in the process industries.
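    One of the criteria used to find the transformation, the maximum mean discrepancy between source- and target-domain features, is shown below for an RBF kernel. This is a generic MMD estimator, not the full DAMR objective (the manifold and variance terms are omitted).

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d)

def mmd2(Xs, Xt, gamma=1.0):
    """Squared maximum mean discrepancy between source and target feature sets."""
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2 * rbf_kernel(Xs, Xt, gamma).mean())

# toy check: a shifted target domain gives a larger MMD than an identically distributed one
Xs = np.random.randn(100, 4)
print(mmd2(Xs, np.random.randn(100, 4)), mmd2(Xs, np.random.randn(100, 4) + 2.0))
```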