A DESIGN SOLUTION FOR TRANSMITTING A VIDEO SIGNAL FROM A FULL HD CAMERA TO A WVGA TOUCHSCREEN DISPLAY


Abstract

The paper describes a design solution for transmitting a video signal from a camera to a touchscreen display. The solution is developed on the basis of FPGA technology using the Xilinx Vivado 2015.2 and SDK 2015.2 software. The demonstrator presented in the work includes a MicroZed 7020 carrier board, a 7-inch touchscreen display from Avnet and a Toshiba Industrial 1080P60 camera from Avnet. The camera transmits a full HD video signal at 60 frames per second to the MicroZed 7020 board, which processes the video signal and sends it to an LCD display with an active area of 800×480 pixels. Since the display resolution is lower than the camera resolution, only a fragment of the whole video frame is shown on the display, while the full image is stored in the on-board memory. The whole image can be viewed by panning across the touchscreen with a finger and looking at individual fragments of the image. This design solution can be used, for example, as a rear-view monitor in a car, thus benefiting from touchscreen technology.

Full text

Introduction. Touchscreen technology has steadily entered our life. It is hard to imagine that only a few decades ago it was something unattainable. Not much time has passed from the first finger-driven touchscreens, invented by E. A. Johnson in 1965, and the development of resistive and capacitive touchscreens to the latest optical touchscreen technology based on detection of an object's shadow [1; 2]. The rapid development of touchscreen technologies, driven by the large interest in this area, is obvious. There is a growing number of devices that benefit from it: smartphones, tablets, computer monitors, eBook readers, game devices, GPS navigators, etc. Touchscreens are also expected to appear in home appliances. Researchers continue working on touchscreens that use so-called "microfluid" technology, where buttons rise up due to fluid pressure on the cover layer when the keypad is in use [2].

The present paper describes a field-programmable gate array (FPGA) design that demonstrates one of the applications touchscreen technology can be used for. The design is based on two demonstrators available on Avnet's webpages. The first one, the Toshiba TCM3232PB Frame Buffer Design Tutorial, shows the reference design for using the Toshiba camera and getting HDMI output. It describes step by step how to set up the Toshiba camera module and run a simple design that initializes the image sensor and the HDMI output interface and implements a frame buffer in the programmable logic (PL) [3]. The second one is the ALI3 Display Reference Design, which demonstrates the capabilities of the touchscreen display and presents simple interactive GUIs that can serve as a starting point for more complex applications [4]. However, no design available from Avnet sends a video stream from the camera to a display with touchscreen capabilities, so that an end user could act on the image he or she sees on the display. This work aims at combining the two Avnet example designs mentioned above in such a way that a cropped image from the Full HD camera is received on the WVGA display and the whole scene can be observed by moving the image, thus benefiting from touchscreen technology.

Briefly speaking, the present design illustrates how to get a video signal from a full HD camera (1920×1080 pixels) to a WVGA LCD display with an active area of 800×480 pixels. A video frame is transmitted by the camera at 60 fps with a 148.5 MHz pixel clock to the MicroZed 7020 board. The board processes the signal and sends it to the LCD display working at a 33.33 MHz pixel clock. The output signal is in RGB format. The smaller display resolution results in video cropping: only a fragment of the whole frame is seen on the display, whereas the full image is stored in memory. By touching the screen one can move the image and look through the whole picture. The scene of interest could then be zoomed if the presented design were extended with two-finger interaction and zoom capabilities. The design can be used as a car rearview mirror or for home door entry applications.

Moreover, it is worth noting that the camera-to-touchscreen design benefits from FPGA technology. An FPGA is an integrated circuit that a designer can configure after its manufacturing. It is widely used in common embedded applications. To create an FPGA design, Xilinx tools are required: Vivado 2015.2 for the hardware part and SDK 2015.2 for the software.

Methodology. The camera-to-touchscreen design described in the paper is created using the following HW components:
1. MicroZed Embedded Vision Carrier Card and MicroZed 7020 board with a processing system containing two Cortex-A9 cores and 28 nm programmable logic. It includes two DDR3 memory components totaling 1 GB of random access memory [3; 4].
2. Avnet Toshiba TCM3232PB full HD color image sensor capable of delivering a video signal at 60 fps. Two technologies are implemented in it: High Dynamic Range (HDR) and Color Noise Reduction (CNR) [3].
3. Avnet 7-inch WVGA TFT-LCD display with an industrial projected capacitive touch sensor. The display has an active area of 800×480 pixels, a frame rate of 60 Hz, a pixel rate of 33.33 MHz and a 24-bit RGB pixel format [4].

The hardware design is prepared in Xilinx Vivado 2015.2. This software is intended for synthesis and analysis of HDL designs and is suitable for system-on-a-chip (SoC) development as well as for high-level synthesis, which enables C, C++ and SystemC programs to be directly targeted at Xilinx devices without manually creating RTL.

The hardware design has been created using IP Integrator and the list of available IP cores. The IP cores used in the design are shown in fig. 1. The camera sends frames coded in a Bayer matrix, which are received by the TCM receiver at 24 MHz. These frames pass through the Color Filter Array Interpolation (CFA) and Color Correction Matrix (CCM) blocks.

The CFA block reconstructs the missing color components of an image obtained from an RGB or CMY Bayer-filtered sensor by means of interpolation using information from neighboring pixels. This process is called CFA demosaicing [5]. In the present design it also converts the video signal to the RGB color space.

The CCM block is used for adjusting white balance, color, brightness or contrast in an image. It multiplies the pixel values by coefficients that strengthen or weaken them. Mathematically this is expressed as a 3×3 matrix multiplication, whose weights define the color-correction matrix. An example of the color-correction equation for RGB data is given below; for more information refer to [6]:

$$\begin{bmatrix} R_C \\ G_C \\ B_C \end{bmatrix} = \begin{bmatrix} K_{11} & K_{12} & K_{13} \\ K_{21} & K_{22} & K_{23} \\ K_{31} & K_{32} & K_{33} \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} O_1 \\ O_2 \\ O_3 \end{bmatrix},$$

where R_C, G_C, B_C are the corrected colors for the RGB input data; K_{ij} are the weights; O_1, O_2, O_3 are offsets used for achieving black levels [6].
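In the demonstrator this operation is performed by the CCM IP core in the programmable logic. Purely as a software illustration of the same per-pixel arithmetic, a minimal C sketch is given below; the Q2.12 fixed-point format and the coefficient values are assumptions chosen for the example and are not the coefficients used in the design.

    #include <stdint.h>

    /* Illustrative color-correction step: out = K * in + O, applied per pixel.
     * Coefficients are in Q2.12 fixed point (1.0 == 4096); values are examples only. */
    static const int32_t K[3][3] = {
        { 4915,  -410,  -410 },   /* K11 K12 K13 */
        { -205,  4506,  -205 },   /* K21 K22 K23 */
        { -328,  -328,  4752 }    /* K31 K32 K33 */
    };
    static const int32_t O[3] = { 0, 0, 0 };      /* offsets for black level */

    static uint8_t clamp8(int32_t v)
    {
        return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }

    /* in[] and out[] hold one pixel as {R, G, B}. */
    void ccm_apply(const uint8_t in[3], uint8_t out[3])
    {
        for (int i = 0; i < 3; ++i) {
            int32_t acc = K[i][0] * in[0] + K[i][1] * in[1] + K[i][2] * in[2];
            out[i] = clamp8((acc >> 12) + O[i]);  /* scale back from Q2.12, add offset, clamp */
        }
    }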
Video Direct Memory Access (VDMA) writes the 24-bit RGB frames to DDR3 and reads them back. This core handles three frame buffers with an internal lock between them to avoid image tearing. The VDMA output is sent to the Video Out block, which maps the video data onto the output timing coming from the Video Timing Controller (VTC). The ALI3 Controller passes the video signal through the physical interface to the display with a 33 MHz pixel clock. Communication between the blocks runs at a clock rate of 150 MHz, while peripheral configuration uses a 100 MHz clock. The Zynq processing system performs the whole video chain initialization. The image processing operations are carried out outside the processor, which contributes to the fast functioning of the system. To detect touches on the display, the system uses interrupts. The interrupt routine analyzes each touch and performs the appropriate action (a video frame shift in a certain direction on the display).

The next step after finishing the design is to validate it. Validation helps to find errors in the design that could prevent the hardware from working properly. The most frequent errors appear in connections between blocks or in the parameter settings of individual blocks. If validation is successful, a so-called HDL (Hardware Description Language) wrapper can be generated; it is basically a top-level description of the system. The synthesis process in its turn generates all source files for the IP blocks as well as any relevant constraint files. After design implementation, i. e. placing and routing the netlist onto the FPGA device resources, and generation of a bitstream file with configuration data for the PL, the building of the hardware image is complete and the hardware platform can be exported to the SDK (Software Development Kit) environment, where different software applications can be created. The previously mentioned designs from Avnet are used as a basis for the software development of the camera-to-touchscreen application [3; 4].

Results. The complete camera-to-touchscreen demo kit can be seen in fig. 2 [7]. Its components are the same as those described in the previous chapters. The resultant software is programmed in such a way that it transfers a full HD image from the TCM receiver to the touchscreen LCD display with a resolution of 800×480 pixels at a frame rate of 60 fps. By touching the screen we can see the whole picture part by part. When an interrupt occurs, the processor knows that a touch event has taken place and evaluates two consecutive touch events: it saves the X and Y coordinates of both events and computes the differences X = X2 - X1 and Y = Y2 - Y1. If the difference for some axis is 0, there is no movement in that direction. If X is non-zero, the image moves 40 pixels left or right depending on the sign of the variable; the same holds for Y, but the shift is down or up. The directions of movement are similar to those on a touchscreen mobile phone [7]. The number of pixels moved in each direction was chosen after testing incremental image movement; the chosen value is suitable for demonstrating the capabilities of the design, but it can be changed if necessary. An example of the camera-to-touchscreen demo in operation is shown in fig. 3.
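The shift logic described above can be illustrated by the minimal bare-metal C sketch below. The helper vdma_set_read_offset() and the mapping of swipe direction to crop movement are assumptions made for illustration; the actual demonstrator code [7] may differ in these details.

    #define SHIFT_STEP   40           /* pixels moved per recognized swipe, as in the demo */
    #define FRAME_W    1920
    #define FRAME_H    1080
    #define DISP_W      800
    #define DISP_H      480

    /* Hypothetical helper that points the VDMA read channel at a new crop origin. */
    extern void vdma_set_read_offset(int x_off, int y_off);

    static int x_off = 0, y_off = 0;  /* top-left corner of the displayed crop */

    static int clamp(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* Called from the touch interrupt routine with two consecutive touch samples. */
    void handle_touch(int x1, int y1, int x2, int y2)
    {
        int dx = x2 - x1;
        int dy = y2 - y1;

        if (dx != 0)   /* horizontal swipe: move the crop left or right by 40 pixels */
            x_off = clamp(x_off + (dx > 0 ? -SHIFT_STEP : SHIFT_STEP), 0, FRAME_W - DISP_W);
        if (dy != 0)   /* vertical swipe: move the crop up or down by 40 pixels */
            y_off = clamp(y_off + (dy > 0 ? -SHIFT_STEP : SHIFT_STEP), 0, FRAME_H - DISP_H);

        vdma_set_read_offset(x_off, y_off);
    }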
Fig. 1. Hardware block diagram

Fig. 2. Complete camera-to-touchscreen demo kit

Fig. 3. Example of SW application

Discussion. The present design demonstrates a camera-to-touchscreen technology which can be used for different applications. It can be complemented with more sophisticated designs such as face detection, vehicle recognition, edge detection, optical flow and many others. For these purposes the SDSoC (Software-Defined System-on-Chip) environment is a good tool to use. SDSoC is a system compiler which targets a base platform and is capable of compiling C/C++ functions into programmable logic. The system compiler works one level above the Vivado HLS compiler. After analyzing a program and determining the data flow between software and hardware functions, it generates an application-specific SoC including a complete boot image with firmware, operating system and application executable. Compilation itself is performed by the Xilinx HLS compiler. HLS compiles the transformed C/C++ into HDL code. The HDL code and the corresponding cores are automatically packed into the IP-XACT format and serve as input for the Vivado IP integrator. The Xilinx SDSoC environment also automatically generates compatible data-mover IP cores and the interface IP cores for the programmable logic part of the Zynq device. This can result in automated generation of a new SoC system with new HW accelerators, which replace the original SW-based system. It can reduce the energy per pixel in the case of video processing algorithms [8].

It is also worth noting that the SDSoC system includes OpenCV libraries, which comprise different mathematical functions such as Gaussian, Median and Bilateral filtering, Canny edge detection, SVM, LK Optical Flow, etc. [9].
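As an illustration of the kind of plain C function that SDSoC could compile into programmable logic, a minimal 3×3 Sobel edge filter is sketched below. The function, the fixed 800×480 frame size and the HLS pipeline pragma are illustrative assumptions and are not part of the presented design.

    #include <stdint.h>
    #include <stdlib.h>

    #define IMG_W 800
    #define IMG_H 480

    /* Sobel gradient magnitude on an 8-bit grayscale frame; border pixels are left untouched.
     * Marked for hardware in SDSoC, such a loop nest would typically be pipelined. */
    void sobel_filter(const uint8_t in[IMG_H][IMG_W], uint8_t out[IMG_H][IMG_W])
    {
        for (int y = 1; y < IMG_H - 1; ++y) {
            for (int x = 1; x < IMG_W - 1; ++x) {
    #pragma HLS PIPELINE II=1
                int gx = -in[y-1][x-1] + in[y-1][x+1]
                         - 2*in[y][x-1] + 2*in[y][x+1]
                         - in[y+1][x-1] + in[y+1][x+1];
                int gy = -in[y-1][x-1] - 2*in[y-1][x] - in[y-1][x+1]
                         + in[y+1][x-1] + 2*in[y+1][x] + in[y+1][x+1];
                int mag = abs(gx) + abs(gy);
                out[y][x] = (uint8_t)(mag > 255 ? 255 : mag);
            }
        }
    }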
Moreover, there is still room for improvement and optimization of the presented design. One direction for improvement is to solve the problem of false touch event registration, which was not fully managed in the present demonstrator. Although the design functions satisfactorily for demonstration purposes, in the future it would be desirable to eliminate these errors by more precise calibration and, perhaps, by implementing some additional filtering.

Besides, the display supports two-finger touches and gesture recognition. This means that image scaling could be used in the way we are used to on our mobile phones. This function could be a useful extension of the presented demonstrator, bringing further benefits for some applications including, e. g., home entry systems or car in-cabin systems.

There are also other possibilities which can be profitable for some applications. For example, one more block, the On-Screen Display block, can be connected in the hardware design; it enables alpha blending and composition of external video inputs. It supports several layers, so the user can configure multiple input video sources. Each video source layer can be displayed at a different cropped size, position and transparency [10].
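For illustration, the per-pixel alpha-blending operation that such a block performs can be expressed by the generic formula out = alpha * fg + (1 - alpha) * bg. A minimal C sketch of this formula (not the Xilinx core's implementation) is shown below.

    #include <stdint.h>

    /* Generic alpha blending of a foreground layer over a background,
     * with alpha given as an 8-bit value in [0, 255]. */
    static uint8_t blend_channel(uint8_t fg, uint8_t bg, uint8_t alpha)
    {
        return (uint8_t)((fg * alpha + bg * (255 - alpha) + 127) / 255);
    }

    /* Blend one 24-bit RGB pixel of the overlay layer onto the video layer. */
    void blend_pixel(const uint8_t fg_rgb[3], const uint8_t bg_rgb[3],
                     uint8_t alpha, uint8_t out_rgb[3])
    {
        for (int i = 0; i < 3; ++i)
            out_rgb[i] = blend_channel(fg_rgb[i], bg_rgb[i], alpha);
    }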
Conclusion. The present paper describes a camera-to-touchscreen-display design which receives a full HD image from the camera module and sends it to an 800×480 display, so that only a part of the whole image is shown on the display, while the full image is stored in memory. The whole image can be seen part by part by touching the display and thus moving the image right/left and/or up/down. The mathematical models are a part of the system generation process and are hidden in the separate blocks of the design. The design can also be complemented with complex mathematical functions for edge detection, optical flow or image recognition applications using the SDSoC environment, which compiles C/C++ functions for the target platform.

The present design is based on two original designs from Avnet, Inc., one for the camera and one for the display. However, the Avnet designs do not allow getting the image from the camera to the display and moving along it as described in the present article. In this paper the principle of the design was presented in more detail. Besides, this paper summarizes the strengths and drawbacks of the design and names further possibilities and room for improvement, such as getting rid of false detection of touch events, adding image scaling, using the On-Screen Display block to benefit from several layers for multiple input video sources, or using complex algorithms for edge detection and image recognition. Such displays can be used as a car rear-view monitor with extended functionality or, for example, in home entry systems. The application field of such displays can be quite broad and is not limited to the examples mentioned above. There is room for engineers' imagination.

About the authors

R. N. Likhonina

The Czech Academy of Sciences, Institute of Information Theory and Automation

Email: likhonina@utia.cas.cz
Czech Republic, CZ-182 08, Prague 8, Pod Vodárenskou věží, 4

L. Kohout

The Czech Academy of Sciences, Institute of Information Theory and Automation

Czech Republic, CZ-182 08, Prague 8, Pod Vodárenskou věží, 4

J. Kadlec

The Czech Academy of Sciences, Institute of Information Theory and Automation

Czech Republic, CZ-182 08, Prague 8, Pod Vodárenskou věží, 4

References

  1. Types of Touch Screen Technologies. Baanto. Available at http://baanto.com/types-of-touch-screen-technologies (accessed 05.03.2017).
  2. Ion F. The past, present, and future of touch: From touch displays to the Surface: A brief history of touchscreen technology. ARSTechnica, 2013, P. 1-3. Available at http://arstechnica.com/gadgets/2013/04/from-touch-displays-to-the-surface-a-brief-history-of-touchscreen-technology/1/ (accessed 05.03.2017).
  3. Toshiba Industrial 1080P60 Camera Module: Getting Started Guide. Avnet Electronics, 2015, Version 1.2, P. 4-12. Available at https://zedboard.org/content/getting-started-guide-3-2 (accessed 10.03.2017).
  4. MicroZed Embedded Vision Carrier (EMBV): ALI3 Display Reference Design and Tutorial. Avnet Electronics, 2015, Version 2014.4, P. 1-4. Available at http://zedboard.org/product/microzed-embedded-vision-kits (accessed 10.03.2017).
  5. Color Filter Array Interpolation v7.0. LogiCORE IP Product Guide. Xilinx, 2015, PG002, P. 4-7. Available at http://www.xilinx.com/support/documentation/ipdocumentation/vcfa/v70/pg002vcfa.pdf (accessed 11.03.2017).
  6. Color Correction Matrix v6.0. LogiCORE IP Product Guide. Xilinx, 2015, PG001, P. 4-6. Available at http://www.xilinx.com/support/documentation/ipdocumentation/vccm/v60/pg001vccm.pdf (accessed 11.03.2017).
  7. Likhonina R., Kohout L., Kadlec J. Camera to Touchscreen Demonstration for MicroZed 7020 carrier board, Avnet 7-inch Zed Touch Display and Avnet Toshiba Industrial 1080P60 Camera Module. Czech Acad Sci, Inst Inform Th & Autom, 2016, P. 3-9.
  8. Kadlec J., Pohl Z., van der Vlugt S., Jääskeläinen P., Koskinen L. Algorithms, Design Methods, and Many-Core Execution Platform for Low-Power Massive Data-Rate Video and Image Processing. Almarvi, 2016, P. 45-46.
  9. SDSoC Environment User Guide. Xilinx, 2017, UG1027 (v2017.2), P. 6-9. Available at https://www.xilinx.com/cgi-bin/docs/rdoc?v=2017.2;d=ug1027-sdsoc-user-guide.pdf (accessed 15.08.2017).
  10. Video On-Screen Display v6.0. LogiCORE IP Product Guide. Xilinx, 2015, PG010, P. 4-7. Available at http://www.xilinx.com/support/documentation/ipdocumentation/vosd/v60/pg010vosd.pdf (accessed 15.03.2017).


© Likhonina R. N., Kohout L., Kadlec J., 2018

Creative Commons License
This article is available under the Creative Commons Attribution 4.0 International License.
