Disclosure of Invention
In view of this, the present invention is directed to an FPGA-based method for generating a super-frame-rate video stream from an image sensor, so as to reduce the difficulty of the algorithmic processing performed by the back-end processor.
To achieve this purpose, the technical solution of the invention is realized as follows:
an FPGA-based sensor super-frame-rate system comprises a sensor module, a video receiving module, a DDR controller module, a neural network module and a line buffer module;
an output port of the sensor module is connected to an input port of the video receiving module, an output port of the video receiving module is connected to an input port of the DDR controller module, an output port of the DDR controller module is connected to an input port of the neural network module, an output port of the neural network module is connected to an input port of the line buffer module, and frame data are output through the line buffer module.
The sensor module is used for generating a video source;
the neural network module is used for processing the frame data to be transmitted;
the DDR module is used for storing data;
the video receiving module is used for parsing the sensor data, and the parsed data is stored in the DDR module;
the line buffer module is used for storing data belonging to different frames, so that they are transmitted together at the output.
Further, the sensor module is an image sensor, which scans the external scene to form an image or a video stream.
Further, the neural network module is a network trained with a large amount of training data.
Further, the DDR module stores previous-frame and subsequent-frame image data; after the subsequent frame has been formed, the previous-frame image data is read out of the DDR module and transmitted to the neural network together with the subsequent frame.
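The double-buffering behavior described above can be sketched in software as a simple frame-pair buffer. The following is a minimal Python sketch, not the patent's FPGA implementation; the class and method names are illustrative assumptions:

```python
class FramePairBuffer:
    """Minimal sketch of the DDR double-buffering scheme: the previous
    frame is held until the subsequent frame has been formed, then both
    frames are released together to the neural network stage."""

    def __init__(self):
        self._prev = None  # previous-frame image data held in "DDR"

    def push(self, frame):
        """Store the incoming frame; once two frames exist,
        return them as a (previous, subsequent) pair."""
        pair = (self._prev, frame) if self._prev is not None else None
        self._prev = frame
        return pair

# Frames 0, 1, 2 yield the pairs (0, 1) and (1, 2):
buf = FramePairBuffer()
pairs = [p for f in (0, 1, 2) if (p := buf.push(f)) is not None]
```

Each incoming frame thus serves twice: first as the "subsequent" frame of one pair, then as the "previous" frame of the next.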
An FPGA-based sensor super-frame-rate method includes the following steps:
A. initializing network data;
B. inputting data to the network, generating a frame image at a specified fractional frame position from the previous-frame and subsequent-frame data, and continuing the training;
C. judging whether the number of training steps has reached the threshold; if so, finishing the training; otherwise, judging the loss value: finishing the training when the loss value is sufficiently small, and continuing the training when the loss value is still large.
The input data continues to cycle sequentially until the loss value is sufficiently small to complete the training.
Further, the loss value is the training error.
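Steps A to C above can be sketched as a training loop with the two stopping criteria (step budget and loss threshold). This is an illustrative Python sketch; `network.init`, `network.step` and the dummy network are assumed interfaces, not the patent's implementation:

```python
import itertools

def train(network, data_iter, max_steps=10000, loss_threshold=1e-3):
    """Sketch of steps A-C: initialize the network, feed previous/
    subsequent frame pairs, and stop either when the step budget is
    reached or when the loss is sufficiently small."""
    network.init()                                    # A. initialize network data
    step, loss = 0, float("inf")
    for prev_frame, next_frame, target in data_iter:  # B. input data, train
        loss = network.step(prev_frame, next_frame, target)
        step += 1
        if step >= max_steps:                         # C. step budget reached
            break
        if loss < loss_threshold:                     # C. loss small enough
            break
    return step, loss

# A dummy network whose loss shrinks each step, for illustration:
class _DummyNet:
    def __init__(self, losses):
        self._losses = iter(losses)
    def init(self):
        pass
    def step(self, *_):
        return next(self._losses)

steps, final_loss = train(_DummyNet([1.0, 0.5, 1e-4]),
                          itertools.cycle([(None, None, None)]))
```

With the dummy losses above, training stops as soon as the loss drops below the threshold, matching the "cycle until the loss value is sufficiently small" behavior.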
Compared with the prior art, the FPGA-based sensor super-frame-rate method has the following advantages:
on the basis of the original-frame-rate video stream input by the image sensor, the method can generate, through a neural network, the video stream required by the back-end processor, with a controllable fractional frame rate, uniform frame intervals, and continuous motion of moving objects in the scene.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; the connection may be mechanical or electrical; it may be direct, indirect through intervening media, or an internal communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art in light of the specific situation.
The present invention will be described in detail below with reference to the embodiments and the attached drawings.
As shown in figs. 1 to 3, an FPGA-based sensor super-frame-rate system is characterized in that: the system comprises a sensor module, a video receiving module, a DDR controller module, a neural network module and a line buffer module;
an output port of the sensor module is connected to an input port of the video receiving module, an output port of the video receiving module is connected to an input port of the DDR controller module, an output port of the DDR controller module is connected to an input port of the neural network module, an output port of the neural network module is connected to an input port of the line buffer module, and frame data are output through the line buffer module.
The sensor module is used for generating a video source; the neural network module is used for processing the frame data to be transmitted; the DDR module is used for storing data; the video receiving module is used for parsing the sensor data, and the parsed data is stored in the DDR module; the line buffer module is used for storing data belonging to different frames, so that they are transmitted together at the output. The sensor module is an image sensor, which scans the external scene to form an image or a video stream. The neural network module is a network trained with a large amount of training data. The DDR module stores previous-frame and subsequent-frame image data; after the subsequent frame has been formed, the previous-frame image data is read out of the DDR module and transmitted to the neural network together with the subsequent frame.
An FPGA-based sensor super-frame-rate method is characterized by comprising the following steps:
A. initializing network data;
B. inputting data to the network, generating a frame image at a specified fractional frame position from the previous-frame and subsequent-frame data, and continuing the training;
C. judging whether the number of training steps has reached the threshold; if so, finishing the training; otherwise, judging the loss value: finishing the training when the loss value is sufficiently small, and continuing the training when the loss value is still large.
The loss value is the training error.
The working process of the embodiment is as follows:
A video is generated by the sensor chip module and received by the video receiving module, which parses the data. The parsed video is written into the DDR module to form previous-frame and subsequent-frame image data; after the subsequent frame has been formed, the previous-frame image data is read out of the DDR module and transmitted to the neural network together with the subsequent frame. Through the processing of the neural network module the frames are transmitted, and an intermediate frame is generated in the process to achieve the super frame rate; data belonging to different frames are stored in line buffers and transmitted to the output one by one.
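The working process above can be sketched end to end as a stream transformation: each new frame is paired with the previous one and an intermediate frame is synthesized between them. This Python sketch is illustrative only; `interpolate` stands in for the neural network stage, and the midpoint average used in the example is an assumption, not the GAN of the patent:

```python
def super_frame_rate_stream(sensor_frames, interpolate):
    """Sketch of the working pipeline: each new frame is paired with
    the previous one, an intermediate frame is synthesized, and the
    frames are emitted in order, roughly doubling the frame rate."""
    prev = None
    for frame in sensor_frames:
        if prev is not None:
            yield prev                      # original previous frame
            yield interpolate(prev, frame)  # generated intermediate frame
        prev = frame
    if prev is not None:
        yield prev                          # final original frame

# Midpoint averaging stands in for the GAN-based interpolation:
out = list(super_frame_rate_stream([0.0, 2.0, 4.0], lambda a, b: (a + b) / 2))
```

Three input frames thus become five output frames with uniform spacing, which is the sense in which the frame intervals of the output stream are uniform.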
The neural network performs line-buffer and pixel-buffer processing on the previous-frame and subsequent-frame image data: the line buffers and pixel buffers store the data of the two frames, which are put together and passed to the image processing program. The neural network is formed by repeating this image-data input and image-processing process many times.
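The line-buffer pairing described above can be sketched as follows. This is a software illustration of the idea, assuming hypothetical names; the actual design would use FPGA block-RAM line buffers:

```python
class LineBuffer:
    """Minimal sketch of the line/pixel buffering: corresponding lines
    of the previous and subsequent frames are held side by side and
    released as pairs to the downstream image-processing stage."""

    def __init__(self):
        self._prev_lines = []
        self._next_lines = []

    def push(self, prev_line, next_line):
        """Buffer one line from each of the two frames."""
        self._prev_lines.append(prev_line)
        self._next_lines.append(next_line)

    def flush(self):
        """Release the buffered lines together as (prev, next) pairs."""
        paired = list(zip(self._prev_lines, self._next_lines))
        self._prev_lines.clear()
        self._next_lines.clear()
        return paired

# Two lines from each frame come out as matched pairs:
lb = LineBuffer()
lb.push("p0", "n0")
lb.push("p1", "n1")
line_pairs = lb.flush()
```

Keeping the two frames' lines paired is what lets the processing stage see spatially aligned data from both frames at once.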
Based on the original-frame-rate video stream input by the image sensor, the system and the method can generate, through a neural network, the video stream required by the back-end processor, with a controllable fractional frame rate, uniform frame intervals and continuous motion of moving objects in the scene. The method implements deep-learning inter-frame feature matching on an FPGA and uses a GAN to generate the required frame at a specific moment; after training on a large number of traffic scenes, frame images can be generated automatically. Thanks to the large amount of training data, the method adapts well to vehicles, matching their scale, color and specific details closely; in addition, the method makes full use of the FPGA architecture and offers good realizability and strong real-time processing performance.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included within its scope.