CN110636221A - System and method for super frame rate of sensor based on FPGA - Google Patents

System and method for super frame rate of sensor based on FPGA

Info

Publication number
CN110636221A
Authority
CN
China
Prior art keywords
module
data
frame
sensor
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910901213.4A
Other languages
Chinese (zh)
Inventor
陈东亮 (Chen Dongliang)
朱健立 (Zhu Jianli)
李庆新 (Li Qingxin)
王汝杰 (Wang Rujie)
唐波 (Tang Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Tian Di Man And Enterprise Management Consulting Co Ltd
Original Assignee
Tianjin Tian Di Man And Enterprise Management Consulting Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Tian Di Man And Enterprise Management Consulting Co Ltd
Priority to CN201910901213.4A
Publication of CN110636221A
Legal status: Pending (current)

Abstract

The invention provides an FPGA-based system and method for a sensor super frame rate. The system generates video with a sensor chip module; a video receiving module receives the video and parses the data; the parsed video is written into a DDR controller module to form front- and back-frame image data. After the back frame is formed, the front-frame image data is read out of the DDR controller module and both frames are passed to the neural network. The neural network module's processing carries out frame transmission and generates an intermediate frame in the process, realizing the super frame rate; data belonging to different frames is stored in the line buffer and transmitted out one frame at a time. Based on the original frame-rate video stream input by the image sensor, the sensor super-frame-rate method can use a deep-learning GAN to generate the fractional-frame-rate video stream required by the back-end processor: controllable, with uniform frame intervals and continuous motion of moving objects in the scene.

Description

System and method for super frame rate of sensor based on FPGA
Technical Field
The invention belongs to the field of communication, and in particular relates to an FPGA-based sensor super-frame-rate system and method.
Background
When the original frame rate of the image sensor does not match the stable frame rate required by the back-end processor (for example, when the back-end processing algorithm requires a frame rate higher than the sensor's design frame rate, or when the sensor generates special frames such as residual frames and ROI frames in specific applications), the frame rate of the video stream received by the back-end processor is unstable, which interferes with the back-end video processing algorithm. The FPGA serves as the relay station for the video and generally drives the sensor directly, so the working state of the sensor is fully known to it. Previously, a FIFO could be used to generate a stable frame rate, but frame data may then be repeated and moving objects may appear discontinuous, which strongly affects the back-end video processing algorithm.
Frame interpolation as commonly implemented matches targets by computing corner points of moving objects with traditional methods; it adapts poorly to lighting and scene changes, is computationally expensive, and cannot make good use of the architectural characteristics of the FPGA.
Disclosure of Invention
In view of this, the present invention is directed to an FPGA-based method for generating a super-frame-rate video stream from an image sensor, so as to reduce the difficulty of the back-end processor's algorithmic processing.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a sensor super frame rate system based on FPGA comprises a sensor module, a video receiving module, a DDR controller module, a neural network module and a line buffer module;
An output port of the sensor module is connected to an input port of the video receiving module; an output port of the video receiving module is connected to an input port of the DDR controller module; an output port of the DDR controller module is connected to an input port of the neural network module; an output port of the neural network module is connected to an input port of the line buffer module; and frame data is output through the line buffer.
The sensor module is used for generating a video source;
the neural network module is used for processing the transmission of frame data;
the DDR module is used for storing data;
the video receiving module is used for analyzing sensor data, and the analyzed data is stored in the DDR module;
the line buffer module is used to store data belonging to different frames, and the stored data is transmitted out together.
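The module chain above can be sketched in software as a simple dataflow pipeline. The following Python stand-in is illustrative only (the patent gives no code): linear interpolation stands in for the neural network module, and all function names are invented for the sketch.

```python
# Software sketch of the FPGA module chain: sensor -> video receive -> DDR
# (front-frame store) -> "neural network" (interpolation stand-in) -> output.

def sensor_module(num_frames, height=4, width=4):
    """Stand-in for the image sensor: yields raw frames."""
    for i in range(num_frames):
        # A flat gray frame whose brightness encodes the frame index.
        yield [[i * 10] * width for _ in range(height)]

def video_receive(raw_frame):
    """Stand-in for the video receiving module: parse the sensor data."""
    return [row[:] for row in raw_frame]  # copy = "parsed" frame

def interpolate(prev_frame, next_frame, t=0.5):
    """Stand-in for the neural network module: generate an intermediate
    frame between the front (prev) and back (next) frames."""
    return [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
            for ra, rb in zip(prev_frame, next_frame)]

def super_frame_rate(num_frames):
    ddr = None    # DDR controller holds the front frame
    output = []   # frames emitted through the line buffer, one by one
    for frame in sensor_module(num_frames):
        parsed = video_receive(frame)
        if ddr is not None:
            output.append(ddr)                       # front frame
            output.append(interpolate(ddr, parsed))  # generated middle frame
        ddr = parsed  # back frame becomes the next front frame
    if ddr is not None:
        output.append(ddr)  # flush the last stored frame
    return output

frames = super_frame_rate(3)  # 3 input frames -> 5 output frames
```

Inserting one intermediate frame per input pair doubles the frame rate; evaluating `interpolate` at other values of `t` yields other ratios.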
Further, the sensor module is an image sensor, and scans an external image to form an image or a video stream.
Further, the neural network module is a network formed by inputting a large amount of training data.
Further, the DDR module forms front- and back-frame image data; after the back frame is formed, the front-frame image data is read out of the DDR module and both frames are passed to the neural network together.
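The front/back-frame handover described here is double buffering: while the back frame is being written into one buffer, the front frame stays readable in the other. A minimal sketch (class and method names are illustrative, not from the patent):

```python
class PingPongFrameStore:
    """Double-buffered frame store modeling the DDR handover: while the
    back frame is being written, the front frame remains readable."""

    def __init__(self):
        self.buffers = [None, None]
        self.write_idx = 0  # buffer currently being written (back frame)

    def write_frame(self, frame):
        self.buffers[self.write_idx] = frame
        self.write_idx ^= 1  # swap: the written buffer becomes the front frame

    def read_pair(self):
        """Return (front, back) once both buffers hold a frame, else None."""
        front = self.buffers[self.write_idx]      # older frame
        back = self.buffers[self.write_idx ^ 1]   # most recently written
        if front is None or back is None:
            return None
        return front, back

store = PingPongFrameStore()
store.write_frame("frame0")
store.write_frame("frame1")
pair = store.read_pair()  # ("frame0", "frame1")
```

On an actual FPGA the two buffers would be regions of external DDR addressed by the controller; the swap of `write_idx` corresponds to exchanging the read and write base addresses each frame.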
A sensor super frame rate method based on FPGA includes the following steps:
A. initializing network data;
B. inputting data to the network, generating a frame image at a specific fractional position between the front-frame data and the back-frame data, and continuing training;
C. judging whether the number of training steps has reached the target: if so, training is finished; if not, the LOSS value is judged, and training is finished when the loss value is small enough or continued while the loss value is still large.
The input data continues to cycle in sequence until the loss value is small enough and training is complete.
Further, the loss value is an error in training.
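Steps A to C amount to a standard training loop with two stopping criteria: a step budget and a loss threshold. A schematic version, with a toy one-parameter "network" and illustrative thresholds standing in for the real GAN training:

```python
def train(max_steps=1000, loss_threshold=1e-3):
    """Schematic of steps A-C: initialize, train, stop on step budget
    or on a sufficiently small loss. The quadratic loss and gradient
    step below are toy stand-ins for the real network update."""
    # A. initialize network data (here: a single toy parameter)
    w, target = 0.0, 1.0
    step, loss = 0, float("inf")
    while True:
        # B. feed front/back-frame data, generate the intermediate frame,
        #    and take one training step (toy gradient step stands in here)
        loss = (w - target) ** 2
        w -= 0.1 * 2 * (w - target)
        step += 1
        # C. stop when the step budget is reached or the loss is small enough
        if step >= max_steps or loss < loss_threshold:
            break
    return step, loss

steps, final_loss = train()
```

The error shrinks geometrically here, so the loss branch of criterion C triggers well before the step budget; with a slowly converging network the step budget is what guarantees termination.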
Compared with the prior art, the method for the super frame rate of the sensor based on the FPGA has the following advantages:
based on the original frame-rate video stream input by the image sensor, the sensor super-frame-rate method can use a neural network to generate the fractional-frame-rate video stream required by the back-end processor: controllable, with uniform frame intervals and continuous motion of moving objects in the scene.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is an overall schematic diagram of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a neural network according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of training according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The present invention will be described in detail below with reference to the embodiments and the accompanying drawings.
As shown in fig. 1 to 3, a system for super frame rate of a sensor based on an FPGA is characterized in that: the system comprises a sensor module, a video receiving module, a DDR controller module, a neural network module and a line buffer module;
An output port of the sensor module is connected to an input port of the video receiving module; an output port of the video receiving module is connected to an input port of the DDR controller module; an output port of the DDR controller module is connected to an input port of the neural network module; an output port of the neural network module is connected to an input port of the line buffer module; and frame data is output through the line buffer.
The sensor module is used for generating a video source; the neural network module is used for processing the transmission of frame data; the DDR module is used for storing data; the video receiving module is used for parsing the sensor data, and the parsed data is stored in the DDR module; the line buffer module stores data belonging to different frames and transmits it out together. The sensor module is an image sensor that scans an external scene to form an image or a video stream. The neural network module is a network trained on a large amount of input data. The DDR module forms front- and back-frame image data; after the back frame is formed, the front-frame image data is read out of the DDR module and both frames are passed to the neural network together.
A sensor super frame rate method based on FPGA is characterized in that: the method comprises the following steps:
A. initializing network data;
B. inputting data to the network, generating a frame image at a specific fractional position between the front-frame data and the back-frame data, and continuing training;
C. judging whether the number of training steps has reached the target: if so, training is finished; if not, the LOSS value is judged, and training is finished when the loss value is small enough or continued while the loss value is still large.
The loss value is the error in training.
The working process of the embodiment is as follows:
In this embodiment, the sensor chip module generates the video; the video receiving module receives it and parses the data; the parsed video is written into the DDR module to form front- and back-frame image data. After the back frame is formed, the front-frame image data is read out of the DDR module and both frames are passed to the neural network. The neural network module's processing carries out frame transmission and generates an intermediate frame in the process, realizing the super frame rate; data belonging to different frames is stored in the line buffer and transmitted out one frame at a time.
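Generating the intermediate frame "in the process" means evaluating the network at a fractional time between the stored front frame and the incoming back frame. How a target output frame rate maps onto an input-frame pair plus a fractional position can be sketched as follows (the frame rates and the function name are illustrative, not from the patent):

```python
def output_schedule(fps_in, fps_out, num_in_frames):
    """For each output frame, find the input-frame pair (i, i+1) and the
    fractional position t in [0, 1] at which to generate it."""
    schedule = []
    duration = (num_in_frames - 1) / fps_in  # seconds covered by the input
    n_out = int(duration * fps_out) + 1
    for k in range(n_out):
        x = k * fps_in / fps_out             # position in input-frame units
        i = min(int(x), num_in_frames - 2)   # front-frame index
        t = x - i                            # fractional offset toward the back frame
        schedule.append((i, t))
    return schedule

# 25 fps input upconverted to 50 fps: every other output frame is generated
# exactly halfway (t = 0.5) between two captured frames.
sched = output_schedule(25, 50, num_in_frames=3)
```

Non-integer ratios (the "fractional frame rate" mentioned in the abstract) fall out of the same mapping: the schedule simply yields varying values of `t` while keeping the output frame intervals uniform.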
The neural network applies line buffering and pixel buffering to the front- and back-frame image data: the line and pixel buffers store the two frames, which are put together and passed to the image-processing stage. The neural network is formed by repeating this image-data input and image-processing process many times.
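The line-and-pixel buffering described here pairs corresponding lines of the front and back frames before handing them to the processing stage. A minimal Python sketch, with deques standing in for the FPGA line buffers (class and method names are illustrative):

```python
from collections import deque

class LineBuffer:
    """Pairs corresponding lines of the front and back frames, modeling the
    FPGA line buffers that feed the image-processing stage together."""

    def __init__(self):
        self.front_lines = deque()
        self.back_lines = deque()

    def push(self, front_line, back_line):
        """Buffer one line from each frame as it streams in."""
        self.front_lines.append(front_line)
        self.back_lines.append(back_line)

    def pop_pair(self):
        """Emit one (front, back) line pair, ready to be processed together."""
        return self.front_lines.popleft(), self.back_lines.popleft()

buf = LineBuffer()
buf.push([0, 0, 0], [10, 10, 10])
pair = buf.pop_pair()  # ([0, 0, 0], [10, 10, 10])
```

On the FPGA these would be on-chip block-RAM FIFOs; the point of the structure is that only a few lines of each frame need to be resident at once, not whole frames.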
Based on the original frame-rate video stream input by the image sensor, the system and method can use a neural network to generate the fractional-frame-rate video stream required by the back-end processor: controllable, with uniform frame intervals and continuous motion of moving objects in the scene. The method implements deep-learning inter-frame feature matching on an FPGA and uses a GAN to generate the required frame at a specific moment; after training on a large number of traffic scenes, frame images can be generated automatically. Thanks to the large amount of training data, the method adapts well to vehicles, matching scale, color, and fine details well; in addition, it makes full use of the FPGA architecture, so it is readily realizable and processes in real time.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

CN201910901213.4A, filed 2019-09-23 (priority 2019-09-23): System and method for super frame rate of sensor based on FPGA. Pending. Published as CN110636221A (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910901213.4A (CN110636221A, en) | 2019-09-23 | 2019-09-23 | System and method for super frame rate of sensor based on FPGA

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910901213.4A (CN110636221A, en) | 2019-09-23 | 2019-09-23 | System and method for super frame rate of sensor based on FPGA

Publications (1)

Publication Number | Publication Date
CN110636221A | 2019-12-31

Family

ID=68974113

Family Applications (1)

Application Number | Priority Date | Filing Date | Status
CN201910901213.4A (CN110636221A, en) | 2019-09-23 | 2019-09-23 | Pending

Country Status (1)

Country | Link
CN | CN110636221A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106686472A (en)* | 2016-12-29 | 2017-05-17 | 华中科技大学 | A method and system for generating high frame rate video based on deep learning
CN108416422A (en)* | 2017-12-29 | 2018-08-17 | 国民技术股份有限公司 | A kind of convolutional neural networks implementation method and device based on FPGA
CN108696727A (en)* | 2018-08-10 | 2018-10-23 | 杭州言曼科技有限公司 | Industrial camera
CN108921182A (en)* | 2018-09-26 | 2018-11-30 | 苏州米特希赛尔人工智能有限公司 | The feature-extraction image sensor that FPGA is realized
CN109068174A (en)* | 2018-09-12 | 2018-12-21 | 上海交通大学 | Video frame rate upconversion method and system based on cyclic convolution neural network
CN109379550A (en)* | 2018-09-12 | 2019-02-22 | 上海交通大学 | Video frame rate up-conversion method and system based on convolutional neural network
US20190095776A1 (en)* | 2017-09-27 | 2019-03-28 | Mellanox Technologies, Ltd. | Efficient data distribution for parallel processing
US20190141088A1 (en)* | 2017-11-07 | 2019-05-09 | ConnectWise Inc. | Systems and methods for remote control in information technology infrastructure
US20190178631A1 (en)* | 2014-05-22 | 2019-06-13 | Brain Corporation | Apparatus and methods for distance estimation using multiple image sensors
CN109922372A (en)* | 2019-02-26 | 2019-06-21 | 深圳市商汤科技有限公司 | Video data processing method and device, electronic equipment and storage medium
CN110248102A (en)* | 2019-07-22 | 2019-09-17 | 中国大恒(集团)有限公司北京图像视觉技术分公司 | A kind of industrial camera way to play for time

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张猛 (Zhang Meng): "Video inter-frame image generation based on spatially continuous generative adversarial networks", China Master's Theses Full-text Database (Monthly)*


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2019-12-31

