Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
As described in the related art, current thermal imaging cameras produce poor imaging results. A thermal imaging camera equipped with a visible light camera improves the result by fusing a visible light image with an infrared light image, but such cameras place high demands on the structure and the lens, the resolution of the resulting fused image is low, and although the imaging effect is improved, the equipment cost rises and the result is still not optimal.
Therefore, the image fusion method provided by the present application acquires an infrared light image and a visible light image captured at the same moment, extracts the edge information of the visible light image, and fuses that edge information into the infrared light image, thereby enhancing the imaging effect of the infrared light image.
The image fusion method is applied to a terminal device. For example, the terminal device may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a thermal imaging camera, or the like, provided with an infrared camera and a visible light camera; the specific type of the terminal device is not limited in this embodiment.
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may be a general term for devices that apply wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the clothing or accessories of the user. The wearable device is not only a hardware device, but also realizes powerful functions through software support, data interaction, and cloud interaction. Generalized wearable intelligent devices are full-featured, relatively large in size, and able to realize all or part of their functions without relying on a smartphone, such as a smart watch or smart glasses.
By way of example and not limitation, when the terminal device is a thermal imaging camera, the thermal imaging camera may be a fully functional camera that can operate without relying on an intelligent terminal such as a computer, for example a surveillance camera for video surveillance; distinguished by shape, such a surveillance camera may be a gun-type (bullet) camera, a dome camera, an indoor panoramic camera (such as a fisheye or sky-eye camera), a small pan-tilt camera, a miniature camera, and the like. The thermal imaging camera may also be a device focused on a single application function that must be used together with another intelligent device such as a computer, for example a network camera for daily activities such as video chatting and image capture. The infrared camera and the visible light camera provided in the terminal device have substantially the same imaging field of view, even though their orientations are not strictly identical, which ensures that the scene of the infrared light image collected by the infrared camera is the same as the scene of the visible light image collected by the visible light camera. For example, the infrared camera is disposed adjacent to the visible light camera; it is understood that the arrangement of the two cameras is not limited to an adjacent arrangement.
Further, the image fusion method applied to the terminal device processes images using algorithms or principles such as the affine transformation vector space principle, bilinear interpolation, and the Canny edge detection algorithm. It should be understood that the image fusion method of the embodiments of the present application may use one or more of the algorithms or principles described above for image processing, and may also use other algorithms or principles not described here.
For ease of understanding and explanation, the affine transformation vector space principle, the bilinear interpolation method, and the Canny edge detection algorithm applied to the embodiments of the present application are further explained below.
The affine transformation vector space principle, called the affine transformation principle or affine mapping principle for short, refers to the geometric process in which a vector space undergoes a linear transformation followed by a translation to yield another vector space. The linear transformation includes, but is not limited to, rotation, scaling, translation, and shear operations. For example, let the vector of a point in image A before transformation be

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix},$$

let the affine transformation matrix be

$$M = \begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix},$$

and let the vector of the corresponding point in the transformed image B be

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}.$$

Then

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = M \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.$$

When image A is converted into image B by rotating it clockwise by θ degrees about the point (x₀, y₀) as the axis, the variables of the affine transformation matrix take the values

$$\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta & x_0(1-\cos\theta) - y_0\sin\theta \\ -\sin\theta & \cos\theta & y_0(1-\cos\theta) + x_0\sin\theta \end{bmatrix},$$

where a, b, d, and e are the rotation variables and c and f are the translation variables. It should be understood that embodiments of the present application may perform a linear transformation on an image only once, without a subsequent translation.
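For illustration only, the following Python sketch shows how such a clockwise rotation about a point can be carried out in practice; it assumes the OpenCV and NumPy libraries, which are not part of the present disclosure, and the image, center point, and angle are hypothetical.

```python
import cv2
import numpy as np

def rotate_clockwise(image: np.ndarray, cx: float, cy: float, theta_deg: float) -> np.ndarray:
    """Rotate `image` clockwise by `theta_deg` degrees about the point (cx, cy).

    cv2.getRotationMatrix2D returns the 2x3 affine matrix [[a, b, c], [d, e, f]]
    described above; OpenCV treats positive angles as counter-clockwise, so a
    negative angle is passed to obtain a clockwise rotation.
    """
    m = cv2.getRotationMatrix2D((cx, cy), -theta_deg, 1.0)  # scale = 1.0 (rotation only)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, m, (w, h))

# Hypothetical usage with a dummy image:
image_a = np.zeros((480, 640, 3), dtype=np.uint8)
image_b = rotate_clockwise(image_a, cx=320.0, cy=240.0, theta_deg=30.0)
```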
The bilinear interpolation method, also called bilinear interpolation, performs linear interpolation of an interpolation function of two variables (x, y) in two directions (the X-axis and Y-axis directions of the coordinate system) in turn; the result of the linear interpolation is independent of the interpolation order. For example, to obtain the value of an unknown function f at the point P = (x, y), linear interpolation is performed using the known values of f at the four points K = (x1, y1), L = (x1, y2), M = (x2, y1), and N = (x2, y2). First, linear interpolation in the X-axis direction gives the values at the points R = (x, y1) and S = (x, y2):

$$f(R) \approx \frac{x_2 - x}{x_2 - x_1} f(K) + \frac{x - x_1}{x_2 - x_1} f(M), \qquad f(S) \approx \frac{x_2 - x}{x_2 - x_1} f(L) + \frac{x - x_1}{x_2 - x_1} f(N).$$

Then, linear interpolation in the Y-axis direction between R and S gives the value at the point P = (x, y):

$$f(P) \approx \frac{y_2 - y}{y_2 - y_1} f(R) + \frac{y - y_1}{y_2 - y_1} f(S).$$
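As a minimal illustration of the formulas above (assuming NumPy and unit spacing between the four known grid points), the following sketch evaluates a two-dimensional array at a non-integer position:

```python
import numpy as np

def bilinear_interpolate(f: np.ndarray, x: float, y: float) -> float:
    """Evaluate the gridded function `f` at the real-valued point P = (x, y).

    K = (x1, y1), L = (x1, y2), M = (x2, y1) and N = (x2, y2) are the four
    neighbouring grid points (unit spacing, so the denominators x2 - x1 and
    y2 - y1 equal 1); R and S are the intermediate X-direction interpolations.
    """
    x1, y1 = int(np.floor(x)), int(np.floor(y))
    x2, y2 = x1 + 1, y1 + 1
    f_r = (x2 - x) * f[y1, x1] + (x - x1) * f[y1, x2]  # value at R = (x, y1)
    f_s = (x2 - x) * f[y2, x1] + (x - x1) * f[y2, x2]  # value at S = (x, y2)
    return (y2 - y) * f_r + (y - y1) * f_s             # value at P = (x, y)

# Hypothetical usage on a small grid:
grid = np.array([[0.0, 1.0], [2.0, 3.0]])
value = bilinear_interpolate(grid, 0.5, 0.5)  # 1.5
```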
It should be understood that, in addition to the above linear interpolation in the two-dimensional space, the embodiment of the present application may also apply linear interpolation in the three-dimensional space.
The Canny edge detection algorithm refers to a process of detecting the edges between different objects, or between different media, in an image. Specifically, the original image is smoothed by Gaussian filtering, the intensity gradients of the smoothed image are computed, non-maximum suppression is applied to eliminate falsely detected edges, a double-threshold method is used to determine possible boundaries, and finally the boundaries are tracked by hysteresis.
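A minimal sketch of this sequence of steps, assuming OpenCV (Gaussian smoothing followed by cv2.Canny, which performs the gradient computation, non-maximum suppression, double thresholding, and hysteresis tracking internally); the kernel size and thresholds are illustrative:

```python
import cv2
import numpy as np

def canny_edges(gray: np.ndarray, low: int = 50, high: int = 150) -> np.ndarray:
    """Return an 8-bit edge map (255 on edges, 0 elsewhere) for a grayscale image."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)   # Gaussian filtering
    return cv2.Canny(smoothed, low, high)          # gradients, NMS, double threshold, hysteresis

# Hypothetical usage with a dummy grayscale frame:
frame = np.zeros((576, 704), dtype=np.uint8)
edges = canny_edges(frame)
```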
Fig. 1 shows a schematic flow diagram of an image fusion method provided herein, which may be applied, by way of example and not limitation, in the thermal imaging camera described above.
S101, acquiring a frame of infrared light image and a frame of visible light image under the same timestamp.
The timestamp is the difference between the time point at which a certain action is performed and a fixed reference time point; here the action is the acquisition of the infrared light image or of the visible light image. The measurement unit of the timestamp can be accurate to the microsecond, so that the thermal imaging camera acquires the infrared light image and the visible light image at essentially the same time and the environmental state captured in the two images is consistent. For example, to acquire an infrared light image and a visible light image of a person in motion, and to ensure that the person's pose in the infrared light image collected by the infrared camera matches the pose in the visible light image collected by the visible light camera, the times at which the two cameras acquire their images must be as close to identical as possible. It should be understood that in other embodiments the minimum unit of measure of the timestamp may be another unit of time, such as seconds or milliseconds.
By acquiring the infrared light image and the visible light image under the same timestamp, the environmental state at the moment the two images are acquired is kept consistent, the high demands that current cameras place on the structure and the lens are avoided, and the equipment cost is reduced.
And S102, cutting out a local size image of the visible light image matched with the infrared light image.
The local size image may be an image, obtained by removing local pixel points from the visible light image, that contains the complete edge information and part of the visible light information. Because the infrared light image suffers from blurred imaging and the boundaries between objects in it are difficult to distinguish, this embodiment adds the boundaries from the visible light image to the infrared light image. Since the visible light image contains many pixel points, and in order to reduce unnecessary computation, the local size image of the visible light image that matches the infrared light image and contains the object boundaries is cut out.
S103, extracting edge information of the local size image and generating a gray level image with the edge information.
The edge information is the boundary information between different objects or different media in the local size image, for example, the boundary between a person and the background in the image. The cropped local size image still contains a large amount of visible light information, and only the edge information between objects needs to be added to the infrared light image to strengthen the boundaries between objects in it; therefore, the edge information of the local size image is extracted to generate a grayscale image carrying that edge information, which is then fused with the infrared light image.
Optionally, the embodiment of the present application may detect edge information of the local size image through a canny edge detection algorithm.
And S104, performing fusion operation on the infrared light image and the gray level image to obtain a fusion image.
The fusion operation computes, for each pixel point of the infrared light image, a result with the pixel point of the grayscale image at the corresponding position, so that the edge information of the grayscale image is added to the infrared light image.
In the present application, the infrared light image and the visible light image are obtained according to the timestamp, so that the two images are captured by the camera at the same moment and the fusion precision of the fused image is guaranteed. The local size image of the visible light image that matches the infrared light image is cut out, its edge information is extracted, and the grayscale image carrying that edge information is fused with the infrared light image; the edge information in the grayscale image is thereby added to the infrared light image, the fused image has a stronger edge effect, and the problem that existing thermal imaging cameras produce no image or a blurred image is alleviated.
On the basis of the embodiment shown in fig. 1, fig. 2 shows a schematic flow chart of another image fusion method provided in the embodiment of the present application. As shown in fig. 2, the step S101 specifically includes steps S201 to S203. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S201, acquiring one frame of infrared light image and a preset number of consecutive frames of visible light images in real time, recording a first timestamp at which the infrared light image is acquired, and recording a second timestamp at which each frame of the visible light image is acquired;
optionally, the timestamps are accurate to the microsecond and are not affected by the system time. For example, taking the time point at which the camera starts capturing as the starting point, the first timestamp is the difference, accurate to the microsecond, between the time point at which the infrared light image is acquired and that starting point; similarly, the second timestamp is the difference between the time point at which each frame of visible light image is acquired and the starting point. It should be understood that taking the time point at which the camera starts capturing as the starting point is only for illustration and is not intended to limit the specific means by which the present application is implemented.
Because the time points at which the infrared camera and the visible light camera receive the signal to collect an image may differ considerably, a preset number of consecutive frames of visible light images are acquired for each frame of infrared light image, making it easy to find, among those frames, one frame of visible light image whose timestamp is the same as that of the infrared light image; the preset number of consecutive frames may be, for example, 10. It can be understood that while the infrared camera acquires one infrared light image, the visible light camera continuously acquires several visible light images, i.e. the visible light camera acquires images faster than the infrared camera.
S202, matching the first time stamp with each second time stamp;
the matching process is a process of calculating a difference between the first timestamp and each of the second timestamps, and determining whether each difference is within a preset difference range.
S203, when the difference value between the first time stamp and the second time stamp is within a preset difference value range, judging that the first time stamp is the same as the second time stamp, and acquiring a visible light image corresponding to the second time stamp which is the same as the first time stamp.
The preset difference range may be 0 to 10 microseconds. When the difference between the first timestamp and a second timestamp falls within this range, the environmental states of the acquired visible light image and infrared light image are practically identical, so the visible light image corresponding to that second timestamp is obtained for matching and fusion with the infrared light image. Further, when the differences between several second timestamps and the first timestamp all fall within the preset range, the second timestamp with the smallest difference may be taken as the one judged to be the same as the first timestamp. Still further, when no second timestamp differs from the first timestamp by an amount within the preset range, the second timestamp with the smallest difference may likewise be taken as the one judged to be the same as the first timestamp.
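As an illustrative sketch of S201 to S203 (the frame buffer layout, function name, and 10-microsecond tolerance are assumptions consistent with the description above, not a definitive implementation):

```python
def match_visible_frame(ir_timestamp_us, visible_frames, max_diff_us=10):
    """Pick the visible-light frame whose second timestamp matches the first timestamp.

    `visible_frames` is assumed to be a list of (timestamp_us, image) pairs for
    the preset number of consecutive frames. The frame with the smallest
    difference is returned; per the description above, it is used whether or
    not the difference falls inside the preset range.
    """
    best_ts, best_image = min(visible_frames, key=lambda item: abs(item[0] - ir_timestamp_us))
    within_range = abs(best_ts - ir_timestamp_us) <= max_diff_us
    return best_image, within_range
```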
On the basis of the embodiment shown in fig. 1, the present application provides another embodiment of an image fusion method. Step S1011 is also included after step S101. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
And S1011, converting the formats of the infrared light image and the visible light image into a preset format.
In this embodiment, the image format is converted into a preset format to facilitate the data operations of the image processor during image processing. Optionally, the preset format may be the YUV format, and the format conversion may be implemented by an ISP (image signal processing) pipeline.
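For illustration, a software conversion to the YUV preset format could look as follows (assuming OpenCV; in a real device this step would typically run inside the ISP, and the frame sizes below are dummy values):

```python
import cv2
import numpy as np

# Dummy stand-ins for the captured frames; in practice these come from the cameras.
visible_bgr = np.zeros((1080, 1920, 3), dtype=np.uint8)
infrared_bgr = np.zeros((288, 384, 3), dtype=np.uint8)

visible_yuv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV)    # visible light image in YUV
infrared_yuv = cv2.cvtColor(infrared_bgr, cv2.COLOR_BGR2YUV)  # infrared light image in YUV
```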
On the basis of the embodiment shown in fig. 1, fig. 3 shows a schematic flow chart of another image fusion method provided in the embodiment of the present application. As shown in fig. 3, the step S102 specifically includes steps S301 to S303. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S301, amplifying the infrared light image to a preset size;
the predetermined size may be D1, i.e., 704 × 576 mm. The resolution ratio of the infrared image acquired by the infrared camera is low, so that the infrared image needs to be amplified to a preset size, the infrared camera with high resolution ratio does not need to be adopted, and the equipment cost is reduced. The problem that the current camera has high requirements on the structure and the lens is solved by a software code mode.
S302, matching the amplified infrared light image to a local size image of the visible light image through an affine transformation vector space principle;
in this embodiment, because the infrared camera and the visible light camera are not arranged coaxially but may, for example, be arranged adjacently, the two cameras inevitably capture slightly different fields of view owing to the difference in viewing angle. This embodiment matches the infrared light image to the local size image of the visible light image through the affine transformation principle so that the fields of view of the infrared light image and the visible light image coincide; strict requirements on the camera structure are therefore unnecessary, and the equipment cost is reduced.
Optionally, in order to reduce unnecessary operations, this embodiment uses only the linear transformation of the affine transformation principle, without a translation operation.
It should be understood that the local size image may be an image containing edge information and part of the visible light information, obtained by removing local pixel points from the visible light image, and its image size is the same as that of the visible light image.
S303, cutting out a local size image of the visible light image, and zooming the local size image to the preset size.
In this embodiment, the visible light camera costs less than the infrared camera and can collect a high-resolution visible light image; therefore, to facilitate the subsequent image fusion, the local size image of the visible light image is scaled to the same size as the infrared light image.
Optionally, the local size image is scaled to a preset size by bilinear interpolation to improve the scaling accuracy of the local size image.
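The following sketch strings S301 to S303 together (assuming OpenCV; the 2 × 3 affine matrix and the crop rectangle are assumed to come from a one-off calibration of the two cameras and are not specified by the method itself):

```python
import cv2
import numpy as np

D1_SIZE = (704, 576)  # (width, height) in pixels, the preset D1 size

def prepare_for_fusion(ir_image, visible_image, affine_matrix, crop_rect):
    """Return the enlarged, aligned infrared image and the scaled local size image."""
    # S301: enlarge the low-resolution infrared image to the preset size.
    ir_d1 = cv2.resize(ir_image, D1_SIZE, interpolation=cv2.INTER_LINEAR)
    # S302: affine-transform the enlarged infrared image so that its field of
    # view lines up with the matching region of the visible light image.
    ir_aligned = cv2.warpAffine(ir_d1, affine_matrix, D1_SIZE)
    # S303: cut the matching local size region out of the visible light image
    # and scale it to the preset size with bilinear interpolation.
    x, y, w, h = crop_rect
    local = visible_image[y:y + h, x:x + w]
    local_d1 = cv2.resize(local, D1_SIZE, interpolation=cv2.INTER_LINEAR)
    return ir_aligned, local_d1

# Hypothetical usage with dummy data and an identity alignment:
ir = np.zeros((288, 384), dtype=np.uint8)
vis = np.zeros((1080, 1920), dtype=np.uint8)
identity = np.float32([[1, 0, 0], [0, 1, 0]])
ir_out, local_out = prepare_for_fusion(ir, vis, identity, crop_rect=(600, 250, 704, 576))
```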
Optionally, the visible light camera may adopt a zoom lens to obtain a fused image with higher precision and higher resolution.
On the basis of the embodiment shown in fig. 1, fig. 4 shows a schematic flow chart of another image fusion method provided in the embodiment of the present application. As shown in fig. 4, the step S104 specifically includes steps S401 to S402. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S401, obtaining the value of each first pixel point in the infrared light image and obtaining the value of each second pixel point in the gray level image;
S402, performing an OR operation on the value of each first pixel point and the value of the second pixel point at the corresponding position to obtain the fused image.
For the above S401 and S402, the OR operation is a computer logic operation in which the result is false only when both operands are false and is true otherwise; applied bitwise to pixel values, with the value "0" taken as false and any non-zero bit as true, it gives, for example, 0|0 = 0 and 0|1 = 1, where "|" denotes the OR operator. In this embodiment, the value of a pixel point of the grayscale image that contains no edge information may be set to "0". When the value of a first pixel point is 129 and the value of the second pixel point at the corresponding position is 0, the value of the pixel point at the corresponding position in the fused image is 129|0 = 129; when the value of the first pixel point is 129 and the value of the second pixel point at the corresponding position is 132, the value of the pixel point at the corresponding position in the fused image is 129|132 = 133. Performing the OR operation on the value of each first pixel point and the value of the second pixel point at the corresponding position preserves the original image information of the infrared light image while adding the rich detail of the visible light image, enhancing its edge information so that the infrared light image is clearer and has a better imaging effect.
It should be understood that the above-described image or operation process is only used for illustration, and other forms of or operation processes are also possible in other embodiments.
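As an illustrative sketch of S401 to S402 (assuming OpenCV/NumPy and single-channel 8-bit images of equal size; a bitwise OR on the pixel values reproduces the 129|0 = 129 and 129|132 = 133 examples above):

```python
import cv2
import numpy as np

def fuse_images(ir_image: np.ndarray, edge_gray: np.ndarray) -> np.ndarray:
    """OR every infrared pixel with the grayscale edge pixel at the same position."""
    return cv2.bitwise_or(ir_image, edge_gray)

# The same numerical examples as in the text:
assert int(np.bitwise_or(np.uint8(129), np.uint8(0))) == 129
assert int(np.bitwise_or(np.uint8(129), np.uint8(132))) == 133
```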
On the basis of the embodiment shown in fig. 1, the present application provides another embodiment of an image fusion method. Step S1041 is also included before step S104. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S1041, performing time domain filtering processing on the gray level image to obtain a filtered gray level image.
In this embodiment, the temporal filtering process includes, but is not limited to, Wiener filtering or Kalman filtering. Temporal filtering removes noise from the grayscale image, eliminates isolated useless information, and enhances the edge information, so that the edges of the grayscale image become clearer and a better edge effect is obtained.
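For illustration only, a very simple recursive temporal filter is sketched below as a stand-in for the Wiener or Kalman filtering named above (the blending weight is an assumption); it suppresses edge pixels that flicker from frame to frame:

```python
import numpy as np

class SimpleTemporalFilter:
    """Blend each new grayscale edge map with a running estimate (exponential average)."""

    def __init__(self, alpha: float = 0.6):
        self.alpha = alpha   # weight given to the newest frame (illustrative value)
        self.state = None

    def update(self, gray_edge_map: np.ndarray) -> np.ndarray:
        frame = gray_edge_map.astype(np.float32)
        if self.state is None:
            self.state = frame
        else:
            self.state = self.alpha * frame + (1.0 - self.alpha) * self.state
        return self.state.astype(np.uint8)
```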
On the basis of the embodiment shown in fig. 1, fig. 5 shows a schematic flow chart of another image fusion method provided in the embodiment of the present application. As shown in fig. 5, the above step S104 is followed by steps S501 to S502. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S501, acquiring a fused image corresponding to each frame of infrared light image;
and S502, splicing the fused images according to the same sequence as the infrared light images to obtain a real-time fused stream video.
With regard to the above S501 and S502, in this embodiment the sequence may be the same frame-number order as the infrared light images, or the chronological order in which the infrared light images were acquired. Splicing the fused images in the same order as the infrared light images yields a real-time fused video stream, so that the user can view clear infrared video pictures, improving the user experience.
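An illustrative way to assemble the fused frames into a video stream, assuming OpenCV; the file name, codec, and frame rate are hypothetical:

```python
import cv2

def write_fused_stream(fused_frames, path="fused.avi", fps=25, size=(704, 576)):
    """Write the fused frames, already ordered like the infrared frames, to a video file."""
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"MJPG"), fps, size)
    for frame in fused_frames:
        if frame.ndim == 2:                              # VideoWriter expects 3-channel frames
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        writer.write(frame)
    writer.release()
```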
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a block diagram of an image fusion apparatus 600 provided in the embodiment of the present application, corresponding to the image fusion method described in the above embodiment; for convenience of description, only the parts relevant to the embodiment of the present application are shown.
Referring to fig. 6, the apparatus includes:
an obtaining module 601, configured to obtain a frame of infrared light image and a frame of visible light image under the same timestamp;
a clipping module 602, configured to clip out a local size image of the visible light image that matches the infrared light image;
an extracting module 603, configured to extract edge information of the local size image and generate a grayscale image with the edge information;
and a fusion module 604, configured to perform a fusion operation on the infrared light image and the grayscale image to obtain a fused image.
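By way of illustration only, the module split of apparatus 600 might be mirrored in code as follows (assuming OpenCV; the calibration inputs and threshold values are hypothetical, not taken from the disclosure):

```python
import cv2

class ImageFusionApparatus:
    """Illustrative grouping of modules 601-604."""

    def __init__(self, crop_rect, size=(704, 576)):
        self.crop_rect, self.size = crop_rect, size

    def obtain(self, ir_timestamp_us, visible_frames):
        # Obtaining module 601: closest visible frame by timestamp.
        return min(visible_frames, key=lambda f: abs(f[0] - ir_timestamp_us))[1]

    def clip(self, visible_image):
        # Clipping module 602: cut out and rescale the local size image.
        x, y, w, h = self.crop_rect
        local = visible_image[y:y + h, x:x + w]
        return cv2.resize(local, self.size, interpolation=cv2.INTER_LINEAR)

    def extract(self, local_image):
        # Extracting module 603: grayscale edge map via Canny.
        return cv2.Canny(cv2.GaussianBlur(local_image, (5, 5), 0), 50, 150)

    def fuse(self, ir_image, gray_image):
        # Fusion module 604: bitwise OR of corresponding pixels.
        return cv2.bitwise_or(ir_image, gray_image)
```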
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one shown in fig. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, wherein the processor 70 implements the steps of any of the various image fusion method embodiments described above when executing the computer program 72.
The terminal device 7 may be a mobile phone, a desktop computer, a notebook, a palm computer, a thermal imaging camera, or another computing device. The terminal device may include, but is not limited to, a processor 70 and a memory 71. Those skilled in the art will appreciate that fig. 7 is only an example of the terminal device 7 and does not constitute a limitation on the terminal device 7, which may include more or fewer components than those shown, or combine certain components, or use different components, and may, for example, further include input/output devices, network access devices, and the like.
The processor 70 may be a Central Processing Unit (CPU), and may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 71 may, in some embodiments, be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. In other embodiments, the memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 71 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.