VIDEO-BASED FLAME DETECTION ALGORITHM



Abstract

Video surveillance cameras already installed, and currently being installed, in urban areas, on industrial sites, and in natural parks and forests make it possible to monitor ignitions at the early stages of a fire. Flame detection from video data is a relevant task, since it allows the source of ignition to be eliminated in time, thereby avoiding possible economic losses and even human casualties. For flame detection, an algorithm is proposed based on motion segmentation, the colour characteristics of flame, and the analysis of its dynamic properties. The experimental studies used video sequence databases of Bilkent University and Dyntex. The test video sequences comprise 6,853 frames, with a total duration of about 5 minutes. The average flame detection accuracy over all videos was 94.47 %, which is a good result and confirms the effectiveness of the proposed algorithm.


Introduction. In open spaces, visual fire detection has an advantage over traditional methods such as sampling of air particles or ambient temperature measurements. Flame detection methods based on ultraviolet or infrared multi-spectral sensing generally require substantial material costs. Flame detection from video increases the likelihood of early fire detection and shortens the response time to ignition, since video analysis detects the fire in its initial phase. In addition, video-based flame detection gives the precise location of a fire hazard. Adding a flame detection module to video surveillance systems will expand their scope of application and improve the fire safety of monitored sites. This helps to prevent possible losses and to significantly reduce the damage caused by ignition. With the development of video surveillance systems and image analysis technologies, it has become possible to use video data to detect flame as a reliable sign of fire.

Signs of flame on video. Flame has many distinctive characteristics, such as colour, motion, shape and extent, and behaviour. Many processes in a flame are very rapid, so they often cannot be followed with the naked eye. The colour of a flame depends on many factors. First, the chemical composition of the burning object may change the shades of the flame during combustion. Second, the saturation of the air with various gases, for example oxygen, has a great impact. The colour of the flame is also influenced by its temperature [1]. A second, equally important characteristic for detecting flame in video sequences is its dynamics, or movement. Burning is a highly dynamic process: flames constantly change their shape and direction, so these processes can be detected. In a video image, flame and smoke appear as dynamic 2D textures [2]. Such dynamic textures may contain both stochastic and regular components [3].

To classify candidate regions into "flame" and "no flame", approaches such as fuzzy logic, production (rule-based) systems, the support vector machine (SVM), artificial neural networks, ensembles of decision trees, and other techniques are used. The signs of fire in video thus include [4-6]: movement, colour, changing boundaries, flickering at the edges, and the glow of the flame. Among the methods for detecting flame in video sequences, one can distinguish approaches based on stochastic models and other mathematical tools, as well as methods based on motion segmentation and chromaticity.

Video-based flame detection methods. Many different methods of flame detection in video sequences exist today. Most of them rely on motion cues combined with a search for pixels that match the colour of flame. Flame zones are characterized by flickering [7], i.e. their boundaries change randomly from frame to frame at a rate of about 10 Hz [8]; to assess the energy components of the image at the boundary of a candidate zone, frequency-domain methods of image analysis are used. Thus, in [5] a two-stage filtering scheme consisting of a high-pass and a low-pass filter is applied. In [9] a stochastic model is employed to describe the spatial and temporal characteristics of candidate zones: hidden Markov models trained on a set of images containing smoke and flame are applied.
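As a rough illustration of the frequency-domain analysis mentioned above (not part of any specific method cited here), the following sketch estimates how much of a candidate pixel's temporal energy lies near the ~10 Hz flicker band; the function name, the band limits, and the use of a plain FFT are assumptions:

```python
import numpy as np

def flicker_energy(intensity_trace, fps, band=(7.0, 13.0)):
    """Fraction of a pixel's temporal energy near the ~10 Hz flame
    flicker band reported in the literature [7; 8].

    intensity_trace : 1-D array of pixel intensities over consecutive frames
    fps             : frame rate of the video sequence
    band            : frequency band (Hz) treated as 'flicker' (assumption)
    """
    trace = np.asarray(intensity_trace, dtype=float)
    trace -= trace.mean()                       # drop the DC component
    spectrum = np.abs(np.fft.rfft(trace)) ** 2  # one-sided power spectrum
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return spectrum[in_band].sum() / total if total > 0 else 0.0
```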
To detect flame in video sequences, the method of [10] processes the image of the flame sequentially, starting with a low-level pixel-based representation and ending with a high-level semantic representation of the video. Each pixel of a given image that matches certain colour rules and motion characteristics is marked as a "flame-coloured pixel". A candidate region of flame-like pixels is then roughly formed, and the image is divided into separate blocks. The blocks are processed using specially trained dictionaries that identify and recognize the marked pixels, which allows more accurate segmentation of candidate flame regions and the exclusion of "no flame" zones, as shown in fig. 1.

Fig. 1. Image processing when flame is detected: a - initial video image; b - detection of flame-coloured pixels; c - pixel motion processing

Another method of flame detection in video sequences is based on foreground image processing and optical flow techniques [11]. Foreground images extracted by frame differencing are accumulated. Two criteria are used to distinguish candidate flame regions from candidate smoke regions: flame zones are recognized by a statistical model built on the accumulated foreground images, while smoke zones are identified using optical flow and a motion-function model. Burning is viewed as a turbulent movement with a source. If there is no wind or air flow, the zones of continuous flame and the zones of intermittent flame recur at regular intervals in a given area, so the weight of flame-region pixels in the accumulated foreground grows.

Flame detection in video sequences is also possible using logistic regression and temporal smoothing [12]. Since the colour of flame is usually heavily saturated in the red band, the red component of a fire pixel is typically larger than the other components in the RGB colour space. Because RGB values are sensitive to changes in illumination, the RGB colour is transformed into a colour space that separates brightness from chromaticity. The YCbCr colour space describes colour with a luminance (Y) component and two chrominance (Cb, Cr) components. Background pixels show a different shape and a different location of the chromaticity coefficient distribution (fig. 2). Because of the similarity in colour, fire flames and fire-like objects have similar distributions, but with different mean values along the chromaticity axis.

Fig. 2. Chromaticity coefficient distribution

Video-based flame detection algorithm. In this work, the task of flame detection in video sequences is solved as follows.

1. The video sequence (audio is ignored) is split into a series of frames at 24 frames per second.

2. The resulting frames are searched for motion in order to separate background zones from candidate flame zones. For this, the BackgroundSubtractor of the OpenCV computer vision library [13] is used. The background model is based on a mixture of Gaussian distributions. The Gaussian mixture model is a weighted sum of M components and can be written as

p(x | λ) = Σ_{i=1}^{M} p_i b_i(x),   (1)

where x is a D-dimensional vector of random values, b_i(x) are the densities of the model components, and p_i, i = 1, ..., M, are the component weights. The parameter λ is the set

λ = {p_i, μ_i, Σ_i}, i = 1, ..., M.   (2)

Each component is a D-dimensional Gaussian distribution function. Once motion is found, the extreme detected pixels along the x and y axes define a rectangle that bounds the candidate flame zone.
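A minimal sketch of this motion step is shown below; the paper specifies OpenCV's BackgroundSubtractor [13], while the particular MOG2 variant, the parameter values, and the helper function are assumptions:

```python
import cv2
import numpy as np

# Gaussian-mixture background model, cf. eq. (1)-(2); the history and shadow
# settings here are illustrative defaults, not the authors' parameters.
subtractor = cv2.createBackgroundSubtractorMOG2(history=120, detectShadows=False)

def motion_bounding_box(frame):
    """Return the rectangle spanning the extreme moving pixels, or None."""
    mask = subtractor.apply(frame)     # foreground mask from the mixture model
    ys, xs = np.nonzero(mask)          # coordinates of moving pixels
    if xs.size == 0:
        return None
    # Extreme pixels along x and y define the candidate flame zone (step 2).
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

cap = cv2.VideoCapture("Bilkent/barbeq.avi")   # test sequence from tab. 1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    box = motion_bounding_box(frame)
    if box is not None:
        x0, y0, x1, y1 = box
        cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)
cap.release()
```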
3. In the zones where motion is identified, a flame colour mask is applied. A combination of the RGB and HSV colour spaces is used to highlight flame-coloured zones:

R > G ≥ B,   (3)
R > RT,   (4)
S ≥ (255 − R) × ST / RT.   (5)

In expressions (4) and (5), RT is the threshold value of R; S is the pixel saturation; and ST is the saturation when the R value of the same pixel equals RT. Rules (3) and (4) reflect the fact that in flame pixels the value of the R channel is greater than in other objects.

4. The analysis of the dynamic properties of flame (fig. 3) is performed by checking the size of the rectangular block. The change of the block size between the previous and current frames is taken into account:

sd = s1 / s2,   (6)

where s1 is the size of the candidate block in the previous frame and s2 is its size in the current frame.

Fig. 3. Change of the block size: a - previous frame; b - current frame

5. The geometry of the flame, resulting from the formation of ions during combustion, is taken into account as follows:

circularity = s × (4π × s / P²),   (7)
squareness = s / (x × y),   (8)
aspectRatio = s × (min(x, y) / max(x, y)),   (9)
roughness = s × (P1 / P),   (10)

where s is the area of the candidate zone, P is the perimeter of the candidate zone, x and y are the width and height of the candidate zone, and P1 is the perimeter of the image.

6. The frame rate of the source video is also checked against the rate of change of the selected zones:

fr = FPS / MAXS × C,   (11)

where MAXS is the maximum block size among all frames of the video sequence, C is the number of changes of the maximum block size, and FPS is the frame rate of the sequence. By comparing the frame rate with the frequency of change of the candidate zone, the presence of flame motion in the video sequence can be confirmed, since each change represents a flame shift.

The flow chart of the algorithm is presented in fig. 4. After steps 1-6 are completed, the candidate region is assigned to one of two classes, "flame" or "no flame". For this, the support vector machine (SVM) classification method is used.

Fig. 4. Flow chart of the video-based flame detection algorithm
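A minimal sketch of the colour rules (3)-(5) and the geometric features (7)-(10) for a single candidate zone is given below; the threshold values RT and ST and all function names are illustrative assumptions rather than the authors' settings:

```python
import cv2
import numpy as np

R_T, S_T = 115, 55  # illustrative thresholds for rules (4)-(5), not from the paper

def flame_colour_mask(frame_bgr):
    """Pixel-wise colour rules (3)-(5) over the combined RGB and HSV spaces."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    s = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.float32)
    rule3 = (r > g) & (g >= b)               # R > G >= B, rule (3)
    rule4 = r > R_T                          # R above its threshold, rule (4)
    rule5 = s >= (255.0 - r) * S_T / R_T     # saturation rule (5)
    return (rule3 & rule4 & rule5).astype(np.uint8) * 255

def geometry_features(contour, frame_shape):
    """Features (7)-(10) for a candidate zone given as an OpenCV contour."""
    s = cv2.contourArea(contour)
    p = cv2.arcLength(contour, closed=True)
    x, y, w, h = cv2.boundingRect(contour)
    h_img, w_img = frame_shape[:2]
    p1 = 2.0 * (w_img + h_img)               # perimeter of the image
    return {
        "circularity": s * (4.0 * np.pi * s / p**2) if p > 0 else 0.0,
        "squareness":  s / (w * h) if w * h > 0 else 0.0,
        "aspectRatio": s * (min(w, h) / max(w, h)),
        "roughness":   s * (p1 / p) if p > 0 else 0.0,
    }
```

The size ratio (6) and the frequency check (11) then follow from tracking the candidate rectangles across consecutive frames.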
Experimental studies. Video sequences from the Bilkent [14] and Dyntex [15] databases were used for the experimental studies. The test video sequences and their properties are listed in tab. 1. The total set of video images includes 4,031 frames with flame and 2,822 frames without flame; the total length of the videos is about 5 minutes. The training set comprises 80 % and the test set 20 % of the total sample.

Table 1. Test video sequences

Video sequences with flame:

Test sequence (sample frame)          Resolution, pixels   Number of frames
Bilkent\fBackYardFile (frame 334)     320×240              1,251
Bilkent\barbeq.avi (frame 186)        320×240              516
Bilkent\forest4.avi (frame 113)       400×256              251
Bilkent\forest5.avi (frame 45)        400×256              246
Bilkent\ForestFire1.avi (frame 54)    400×256              247
Bilkent\fire1.avi (frame 146)         320×240              542
Bilkent\forest2.avi (frame 154)       400×256              273
Bilkent\controlled1.avi (frame 67)    400×256              275
Dyntex/66ammj00.avi (frame 158)       720×576              227
Dyntex/64cac10.avi (frame 104)        720×576              203

Video sequences without flame:

Test sequence (sample frame)          Resolution, pixels   Number of frames
Bilkent\sEmptyR1.avi (frame 134)      400×256              458
Bilkent\sEmptyR2.avi (frame 5)        400×256              437
Bilkent\sParkingLot.avi (frame 563)   400×256              1,136
Dyntex/648ab10 (frame 1)              384×288              716
Dyntex/649h320.avi (frame 120)        720×576              206
Dyntex/6489610.avi (frame 47)         720×576              201

The results of the experimental studies are shown in tab. 2 and 3. The performance of the flame detection algorithm was evaluated using the true recognition rate (TR) and the false alarm rate (FAR). TR is calculated as the ratio of the number of frames in which the flame is correctly detected to the total number of frames in the video sequence. FAR is the ratio of the number of frames with a false positive to the total number of frames in the video sequence.

Table 2. Flame detection results (video sequences with flame)

Video sequence              Total frames   True detections   TR, %   FAR, %
Bilkent\fBackYardFile.avi   1251           1127              90.09   9.91
Bilkent\barbeq.avi          516            507               98.26   1.74
Bilkent\forest4.avi         251            235               93.63   6.37
Bilkent\forest5.avi         246            234               95.12   4.88
Bilkent\ForestFire1.avi     247            240               97.17   2.83
Bilkent\fire1.avi           542            529               97.60   2.40
Bilkent\forest2.avi         273            264               96.70   3.30
Bilkent\controlled1.avi     275            246               89.45   10.55
Dyntex\6ammj00.avi          227            217               95.59   4.41
Dyntex\64cac10.avi          203            185               91.13   8.87
Average values              -              -                 94.47   5.53

Table 3. Flame detection results (video sequences without flame)

Video sequence              Total frames   False detections   FAR, %
Bilkent\sEmptyR1.avi        458            3                  0.65
Bilkent\sEmptyR2.avi        437            12                 2.74
Bilkent\sParkinLot.avi      1136           5                  0.44
Dyntex\648ab10.avi          384            6                  1.56
Dyntex\6489610.avi          201            1                  0.49
Dyntex\649h320.avi          206            2                  0.97
Average value               -              -                  1.37

As an example, fig. 5 and 6 present frames of flame detection in the Bilkent\barbeq.avi and Bilkent\ForestFire1.avi video sequences, showing the initial video image, the highlighted colour-and-motion mask, the flame geometry and flickering, and the result of the algorithm.
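As a sketch of the final classification stage with the 80 %/20 % split described above (the feature file names, the feature layout, and the classifier's default RBF kernel are illustrative assumptions, not details given in the paper):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per candidate zone, columns are the
# features from steps 3-6 (colour statistics, size ratio (6), geometry
# (7)-(10), frequency check (11)); labels are 1 = "flame", 0 = "no flame".
X = np.load("flame_features.npy")   # placeholder file names, not from the paper
y = np.load("flame_labels.npy")

# 80 % training / 20 % test split, as in the experimental setup.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = SVC()                          # support vector machine classifier
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.4f}")
```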
Fig. 5. Steps of the flame detection algorithm, video sequence Bilkent\barbeq.avi: a - initial frame; b - flame mask; c - geometry and flickering; d - result of the algorithm

Fig. 6. Steps of the flame detection algorithm, video sequence Bilkent\ForestFire1.avi: a - initial frame; b - flame mask; c - geometry and flickering; d - result of the algorithm

Conclusion. This work proposes an algorithm for detecting flame zones in video sequences. The algorithm is based on motion analysis, the dynamic properties of flame, and flame colour. In experimental studies on video sequences containing flame, an average detection accuracy of 94.47 % was obtained, which is a good result: the flame was missed in only 247 of 4,031 frames. False alarms, investigated on video sequences without flame, occurred in 29 of 2,822 frames, averaging 1.37 %. The experimental results thus confirm the effectiveness of the proposed algorithm for flame detection in video sequences.

About the authors

A. V. Pyataeva

Siberian Federal University, Institute of Space and Information Technologies

Email: anna4u@list.ru
26 Akademika Kirenskogo St., Krasnoyarsk, 660074, Russian Federation

O. E. Bandeev

Siberian Federal University, Institute of Space and Information Technologies

26 Akademika Kirenskogo St., Krasnoyarsk, 660074, Russian Federation

References

  1. Spichkin Yu. V., Kalach A. V., Sorokina Yu. N. On the features of the emergence and development of combustion of dispersed materials // Vestnik Voronezhskogo instituta GPS MChS Rossii. 2014. Iss. 3 (12). P. 7-12. (in Russian)
  2. Favorskaya M., Pyataeva A., Popov A. Spatio-temporal smoke clustering in outdoor scenes based on boosted random forests // Procedia Computer Science. 2016. Vol. 96. P. 762-771.
  3. Goncalves W. N., Machado B. B., Bruno O. M. A complex network approach for dynamic texture recognition // Neurocomputing. 2015. Vol. 153. P. 211-220.
  4. Bogush R. P., Tychko D. A. An algorithm for combined smoke and flame detection based on the analysis of video surveillance system data // Tekhnicheskoe zrenie v sistemakh upravleniya. Moscow, 2015. P. 65-71. (in Russian)
  5. Brovko N. V., Bogush R. P. Analysis of video sequence processing methods as applied to early fire detection // Vestnik Polotskogo gosudarstvennogo universiteta. 2011. No. 12. P. 42-50. (in Russian)
  6. Han D., Lee B. Flame and Smoke Detection Method for Early Real-Time Detection of a Tunnel Fire // Fire Safety Journal. 2009. Vol. 44 (7). P. 951-961.
  7. Toreyin B. U., Dedeoglu Y., Gueduekbay U. Computer vision based method for real-time fire and flame detection // Pattern Recognition Letters. 2006. Vol. 27, No. 1. P. 49-58.
  8. Toreyin B. U., Dedeoglu Y., Cetin A. E. Wavelet based real-time smoke detection in video // Signal Processing: Image Communication, EURASIP. 2005. Vol. 20. P. 255-260.
  9. Toreyin B. U., Dedeoglu Y., Cetin A. E. Contour based smoke detection in video using wavelets // 14th European Signal Processing Conference (EUSIPCO - 2006). Italy, 2006. P. 1-5.
  10. Yaqin Z., Guizhong T., Mingming X. Hierarchical detection of wildfire flame video from pixel level to semantic level // Expert Systems with Applications. 2015. Vol. 42, iss. 8. P. 4097-4104.
  11. Chunyu Y., Zhibin M., Xi Zh. A Real-time Video Fire Flame and Smoke Detection Algorithm // Procedia Engineering. 2013. Vol. 62. P. 891-898.
  12. Seong G. K. [et al.] Fast flame detection in surveillance video using logistic regression and temporal smoothing // Fire Safety Journal. 2016. Vol. 79. P. 37-43.
  13. Open Source Computer Vision Library [Electronic resource]. URL: http://opencv.org/ (accessed: 09.10.2017).
  14. Bilkent database [Electronic resource]. URL: http://signal.ee.bilkent.edu.tr/VisiFire/Demo/FireClips/ (accessed: 09.10.2017).
  15. Renaud P., Fazekas S., Huiskes M. J. DynTex: A comprehensive database of dynamic textures // Pattern Recognition Letters. 2010. Vol. 31, No. 12. P. 1627-1632.


© Pyataeva A. V., Bandeev O. E., 2017

Creative Commons License
This article is available under the Creative Commons Attribution 4.0 International License.
