AUTOMATIC LINEAR DIFFERENTIAL EQUATION IDENTIFICATION IN ANALYTICAL FORM


Abstract

In this paper we suggest a reduction of the linear dynamics identification problem to a global optimization task. The proposed approach automatically determines the structure and the parameters of a linear differential equation using a modified hybrid evolutionary algorithm for extremum seeking. The only a priori information the algorithm needs is the initial point of the dynamic system (or an estimate of it) and a sample of measurements: the system output and, if present, the system input.

Full Text

There are many different approaches to estimating the parameters of linear differential equations, but when there is no a priori information about the system itself, most of them become inapplicable. For particular tasks there are special techniques that allow solving the problem. In this paper we consider the situation when the data may be noised and the system structure is unknown.

We can, for example, use stochastic difference equations [1] and build a model from the output observations. But this approach has restrictions: we still need information about the order of the differential equation, and we must observe the system output on the unit step function. To simply estimate the reaction of a linear dynamic system to a different control input, or to smooth the output data, we can use nonparametric methods, neural networks or fuzzy output modeling. There is also a possibility to estimate the solution of the differential equation for the current situation [2] via genetic programming. With nonparametric or neural network approaches it is possible to define the system output for a different control function using the Cauchy equation, but the system cannot be presented in an analytical form. With the genetic programming technique we can still find the output for a different control, but only numerically; moreover, the models are very complex, and an analytical solution recovered from them tends to be very long and large.

Here we suggest seeking the model in differential equation form. The benefits are the following: it is easy to estimate the system output numerically for any control function with any desired precision; in some cases it is easy to obtain an analytical solution via eigenvalue evaluation; and there are plenty of control methods and analysis techniques for models in differential equation form.
In article [3] the approximation of a dynamic system with a second order linear differential equation via a genetic algorithm is examined. The genetic algorithm is well known as an effective global optimization technique. Its only drawback is that the search works on a compact set with given borders, and the real values have to be quantized. In this paper we suggest using an evolutionary strategies algorithm with local optimization and some modifications to approximate not only the parameters but also the structure of an ordinary differential equation (ODE). Linear differential equation models are useful in filtering and in articulatory identification [4; 5], where the stochastic ODEs involved can be identified in the same way. With some modifications, this method can be used for Bessel equation identification. It is also applicable to Markov processes [6], again as stochastic ODEs. That is why linear differential equation identification can be useful in some fields related to the speech recognition problem.

Let us have a sample {y_i, u_i, t_i}, i = 1, ..., s, where s is its size, y_i ∈ R are dynamic system output measurements at times t_i, and u_i = u(t_i) are control measurements. It is also known that the system is linear and dynamic, so it can be described with an ordinary differential equation (ODE):

a_k · x^(k) + a_{k-1} · x^(k-1) + ... + a_0 · x = b · u(t), x(0) = x_0. (1)

Here x_0 is supposed to be known. In the case of transition observation, we can put forward a hypothesis about the initial point: the system output is known at the initial time, and the derivative values can be set to zero, because the system observation usually starts in its steady state. In general, the initial point can be approximated. Using the sample data we need to identify the parameters and the system order m, which is assumed to be limited, so m ≤ M, M ∈ N. M is a parameter set by the researcher; it limits the structure of the differential equation, i.e., the ODE order.
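Equation (1), once the leading coefficient is normalized to 1, can be evaluated numerically by rewriting it as a first-order system. The following is a minimal illustrative sketch (our own function names, not the authors' code) using the classical fourth-order Runge-Kutta scheme:

```python
import numpy as np

def simulate_ode(a, b, u, x0, t_grid):
    """Integrate x^(m) + a_m*x^(m-1) + ... + a_1*x = b*u(t) as the
    first-order system y' = f(t, y), y = (x, x', ..., x^(m-1)),
    with the classical RK4 scheme.

    a      : coefficients (a_1, ..., a_m) of x, x', ..., x^(m-1)
    x0     : initial state (x(0), x'(0), ..., x^(m-1)(0))
    Returns the output x(t) on t_grid.
    """
    a = np.asarray(a, dtype=float)
    m = len(a)

    def f(t, y):
        dy = np.empty(m)
        dy[:-1] = y[1:]            # each state component is the next derivative
        dy[-1] = b * u(t) - a @ y  # highest derivative solved from the ODE
        return dy

    y = np.asarray(x0, dtype=float)
    xs = [y[0]]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        k1 = f(t0, y)
        k2 = f(t0 + h / 2, y + h / 2 * k1)
        k3 = f(t0 + h / 2, y + h / 2 * k2)
        k4 = f(t1, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(y[0])
    return np.array(xs)
```

For the first order system x' + x = u(t) with a unit step input, the response approaches 1 - e^{-t}, which matches the transition processes considered below.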
It is also assumed that there is an additive noise ξ with E(ξ) = 0 and D(ξ) < ∞ that affects the output measurements:

y_i = x(t_i) + ξ. (2)

Without information on the system order we would not be able to solve the identification task, but because of the maximum order limitation the task can be partially parameterized. The maximum order is supposed to be chosen a priori; it specifies the dimension of the optimization problem space. Without loss of generality, let the leading coefficient of the ODE be the constant equal to 1, so that

x^(k) + (a_{k-1}/a_k) · x^(k-1) + ... + (a_0/a_k) · x = (b/a_k) · u(t), (3)

or

x^(k) + a_k · x^(k-1) + ... + a_1 · x = b · u(t). (4)

Then we can seek the solution of the identification task as a linear differential equation of order m,

x^(m) + a_m · x^(m-1) + ... + a_1 · x = b · u(t), x(0) = x_0, (5)

where the vector of equation parameters a = (0, ..., 0, a_m, ..., a_1, a_0) ∈ R^n, n = m + 1, delivers an extremum to the functional

I(a) = Σ_{i=1}^{s} |y_i − x(t_i)| → min over a ∈ R^n. (6)

In the general case the solution x(t) is evaluated with a numerical integration method, because the control function has no analytical form, but rather is given algorithmically. We prefer the criterion (6) to quadratic criteria because of its robustness.

For a correct realization of the numerical scheme, let us impose a coefficient restriction for equation (3): |a_k| ≥ 0.05. Otherwise this parameter is set equal to zero, so a_k = 0 and m = m − 1. That condition prevents extra computational effort in the numerical evaluation scheme and is necessary for the local optimization algorithm to affect the system structure.

Now let us consider a specific modelling issue. The identification of a system of linear differential equations is connected with the optimization problem for the system of equations

a^i_{k_i} · x_i^{(k_i)} + ... + a^i_1 · x_i = Σ_{j=1}^{n_o} b_j · x_j + b_0 · u_i(t), (7)

where x_i, i = 1, ..., n_o, is an observed system output and n_o is the number of outputs.
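The robust criterion (6) and the |a_k| ≥ 0.05 structure-reduction rule can be sketched as follows. This is illustrative code with our own naming; the model response x(t) is assumed to come from any numerical integrator and is passed in as a callable:

```python
def identification_error(x_model, sample):
    """Criterion (6): sum of absolute residuals between measured
    outputs y_i and the model response x(t_i).  The L1 norm is
    preferred to a quadratic criterion for robustness to outliers."""
    return sum(abs(y - x_model(t)) for t, y in sample)

def reduce_structure(coeffs, eps=0.05):
    """Restriction |a_k| >= 0.05: a leading coefficient smaller than
    eps in absolute value is set to zero, which lowers the effective
    ODE order m by one.  coeffs = [a_1, ..., a_k], leading term last."""
    coeffs = list(coeffs)
    while coeffs and abs(coeffs[-1]) < eps:
        coeffs.pop()   # a_k := 0, m := m - 1
    return coeffs
```

Dropping near-zero leading coefficients is what lets the optimizer move between ODE orders, i.e. search the structure and the parameters with one vector.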
Equation (7) shows that the system is considered not in a general way: every system output depends on the other outputs, but not on their derivatives. Also, there is only one control input for every equation; this can easily be extended to the case with many control inputs. The identification problem for the system of equations (7) is important, and the ability to solve it could be useful. It is clear that the functional (6) can be transformed into the functional

J(a) = Σ_{j=1}^{n_o} Σ_{i=1}^{s} |y_i^j − x_j(t_i)| (8)

for system (7). The criterion can easily be extended to the matrix form of the differential equation.

The reason why the basics of the optimization technique were borrowed from the evolutionary strategies algorithm [7] is that the identification problem leads to a multimodal optimization task. The goal of the given approach is the identification of the parameters and the structure simultaneously: the system structure and its parameters are defined with one vector. The criteria (6) and (8) are complex for this vector and sensitive to its components, which are changed by the stochastic search operators. This is why we had to develop a specific modification of the global optimization technique.

Let every individual be represented with the tuple H_i = (op^i, sp^i, fitness(op^i)), i = 1, ..., N_I, where op_j^i ∈ R, j = 1, ..., k, is the set of objective parameters of the differential equation; sp_j^i ∈ R+, j = 1, ..., k, is the set of strategic parameters; N_I is the population size; and

fitness(x): R^k → (0, 1], fitness(x) = 1 / (1 + J(x))

is the fitness function. Proportional, rank-based and tournament-based selections were chosen as the selection types. The algorithm produces one offspring from two parents, and every next population has the same size as the previous one. The recombination types are intermediate and discrete. The mutation of every offspring's gene happens with a chosen probability p_m.
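The fitness mapping and one tournament-selection step might look as follows. This is a sketch with our own function names; the tournament fraction is a parameter, not a value fixed by the formulas above:

```python
import random

def fitness(J):
    """fitness = 1 / (1 + J) maps a criterion value J >= 0 into
    (0, 1]; a smaller identification error means a higher fitness."""
    return 1.0 / (1.0 + J)

def tournament_select(population, fitnesses, frac=0.25, rng=random):
    """Tournament selection: draw frac * N distinct individuals at
    random and return the one with the highest fitness."""
    size = max(2, int(len(population) * frac))
    contenders = rng.sample(range(len(population)), size)
    winner = max(contenders, key=lambda i: fitnesses[i])
    return population[winner]
```

With frac = 1.0 the tournament covers the whole population and selection becomes purely elitist, which is useful for checking the implementation.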
Let z ∈ {0, 1}, P(z = 1) = p_m, be a random value generated for every current objective gene and its strategic parameter; then

op_j^offspring = op_j^offspring + z · N(0, sp_j^offspring), sp_j^offspring = sp_j^offspring + z · |N(0, 1)|,

where N(m, σ²) is the normally distributed random value with mean m and variance σ². We suggest a new operation that can increase the efficiency of the given algorithm: for every individual, the real value is rounded down to the nearest integer. This provides searching among solutions with nearly the same structure. Also, for N_1 randomly chosen individuals and N_2 randomly chosen objective chromosomes, we make N_3 iterations of local search with the step h_l to determine a better solution; this is a random coordinate-wise optimization.

To make an investigation, 100 systems were generated: 10 for every order from the first to the tenth. The parameters of the systems were generated randomly: a_k^i ~ U(−5, 5), b^i ~ U(−5, 5), i = 1, ..., 10, k = 1, ..., i, where U(−5, 5) is the uniform distribution. The time of the process was set to 5. The control function was the step function, and the control for every system was known, so u(t) = 1. Let {x_i, t_i}, i = 1, ..., T/h_i, be the numerical solution for the system. We take s ≤ T/h_i, s = 100 points randomly. For every system, 10 runs of the algorithm were executed with every combination of its parameters. To estimate the efficiency of the different approaches, we considered identification without any noise. Having different types of selection and crossover, we also varied the mutation probability p_m to find the most effective combination of the algorithm settings. As a pre-set we used a population size of 50, a number of populations of 50, N_1 = 50, N_2 = 50 and N_3 = 1 with h_l = 0.05.
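The per-gene mutation and the rounding modification described above can be sketched like this. This is our reconstruction: the original formula for the strategy-parameter update is partly garbled, so treating sp_j as a standard deviation and adding |N(0, 1)| to it are assumptions:

```python
import math
import random

def mutate(op, sp, pm, rng=random):
    """Per-gene mutation: for every gene j, draw z in {0, 1} with
    P(z = 1) = pm; if z = 1, perturb the objective parameter op_j by
    N(0, sp_j) and its strategy parameter sp_j by |N(0, 1)|.
    Assumption: sp_j is used as the standard deviation."""
    op, sp = list(op), list(sp)
    for j in range(len(op)):
        if rng.random() < pm:                  # z = 1 for this gene
            op[j] += rng.gauss(0.0, sp[j])
            sp[j] += abs(rng.gauss(0.0, 1.0))  # keeps sp_j positive
    return op, sp

def round_down(op):
    """Suggested modification: round every real-valued gene down to
    the nearest integer, so that neighbouring individuals share
    (nearly) the same equation structure."""
    return [float(math.floor(v)) for v in op]
```

Note that math.floor rounds toward minus infinity, so a small negative gene such as −0.3 becomes −1, not 0; combined with the |a_k| ≥ 0.05 restriction this is what couples mutation to the structure search.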
We compared the efficiency of the following algorithms: 1 - the evolutionary strategies (ES) algorithm; 2 - ES with local optimization, i.e. hybrid evolutionary strategies (HES); 3 - HES with the modified mutation; 4 - HES with turning real numbers into integers; 5 - HES with both the modified mutation and turning real numbers into integers. After testing the algorithms on different samples of systems, the efficient preset was found: the modified HES algorithm with turning real numbers into integers, 50 individuals for 50 populations, N_1 = 50, N_2 = 50 and N_3 = 1 with h_l = 0.05, tournament selection with a tournament size of 25 %, the discrete crossover and mutation with the probability p_m.

For proper structure and parameter determination we need an adequate sample that reflects the whole transient process. Let us take some stable systems that come into the steady state in time T = 5. Table 1 presents an efficiency investigation of the modified HES algorithm; 20 runs of the algorithm were made for every system. We say that the algorithm determines the structure and the parameters if max|a − â| < 0.05, where â is the vector of estimated parameters. As we can see from Table 1, a high fitness value does not guarantee success in identifying the real structure. Let us note that for most of the solutions found in this study for stable systems, the order was found correctly.

Let us describe the identification problem for the hexadecane chemical reaction. The disintegration of hexadecane gives the following products: alcohols and carbonyl compounds. The initial point is known, and there is no control input in this identification problem. We set the maximum order for the first equation to 10. The 50 runs of the algorithm gave several different solutions, which are shown in Table 2.
Table 1
The efficiency of "true" parameters estimation

Order   P(max|a − â| < 0.05)   Fitness
1       0.65                   0.9593
2       0.95                   0.9979
3       0.90                   0.9977
4       0.95                   1.0000
5       0.80                   0.9961

Table 2
The hexadecane disintegration model

Models and the error (I)
4.05·x' + 0.9·x = 1, I = 0.3022
1.05·x' + 0.4·x = 1, I = 0.2834
2.1·x' + 0.55·x = 1, I = 0.1822
−1.05·x''' − 0.15·x'' − 6.85·x' − 0.9·x = 1, I = 0.227
−3.4·x' − 0.45·x = 0, I = 0.202

As we can see, the found parameters and system structure form a first order differential equation, and that fact does not contradict the hypothesis [8], which states that disintegration chemical reactions can be presented as first order linear differential equations. Knowing the structure of the equations, we can identify the system itself in matrix form. The given optimization procedure is a stochastic algorithm, which is why the best solution from the 20 runs was taken. The system outputs and the sample are shown in fig. 1 for hexadecane, alcohols and carbonyl compounds. As we can see in the figures, the measurement at the point t = 7 seems to be an abnormal measurement, but it did not affect the model. The solution for the system can be represented in the matrix form x' = A·x + B, where B = (0.3477, 0, 0)^T and the first row of A is (−0.1671, 0.7630, −0.3625).

The modifications of the evolutionary strategies algorithm increase the accuracy of the model and allow solving two tasks at the same time. Further investigation should concentrate on estimating the performance of the algorithm with different local optimization and mutation parameters. Also, the differential evolution algorithm and particle swarm optimization are to be tested as the basic optimization procedure.

About the authors

I. S. Ryzhikov

Siberian state aerospace university named after academician M. F. Reshetnev

Email: ryzhikov-88@yandex.ru
Master of engineering and technologies, graduate student of the chair of system analysis and operations research, Siberian State Aerospace University named after academician M. F. Reshetnev. Graduated from the same university in 2011. Areas of scientific interest: system analysis, global optimization, dynamic systems, identification, control.

References

  1. Zoteev V. Parametrical identification of a linear dynamical system on the basis of stochastic difference equations // Matem. Mod. 2008. Vol. 20, No. 9. P. 120-128.
  2. Evolutionary modeling of systems of ordinary differential equations with genetic programming / H. Cao, L. Kang, Y. Chen, J. Yu // Genetic Programming and Evolvable Machines. 2000. Vol. 1, No. 4. P. 309-337.
  3. Parmar G., Prasad R., Mukherjee S. Order reduction of linear dynamic systems using stability equation method and GA // International J. of Computer and Information Engineering. 2007. Vol. 1, No. 1.
  4. Reimer M., Rudzicz F. Identifying articulatory goals from kinematic data using principal differential analysis // Proceedings of Interspeech 2010, Makuhari, Japan. 2010. P. 1608-1611.
  5. Mineiro P., Movellan J. R., Williams R. J. Modeling path distributions using partially observable diffusion networks: a Monte-Carlo approach //
  6. Saerens M. Viterbi algorithm for acoustic vectors generated by a linear stochastic differential equation // Proceedings of the IEEE Intern. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Detroit, 1995. P. 233-236.
  7. Schwefel H.-P. Evolution and Optimum Seeking. New York : Wiley & Sons, 1995.
  8. Romanovskii B. V. The foundations of chemical kinetics. Moscow : Ekzamen, 1996.


Copyright (c) 2012 Ryzhikov I.S.

This work is licensed under a Creative Commons Attribution 4.0 International License.
