Disclosure of Invention
In view of the above, embodiments of the present application provide a picture detection method, apparatus, display device, and computer program product that reduce the storage cost during picture detection.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a picture detection method, applied to a display device in which a detection template is stored, the method including:
acquiring first picture data, wherein the first picture data comprises one line of picture data to be detected in a detection area;
detecting the first picture data according to a first detection unit in the detection template, and, in a case where target data units are detected, recording the number of detected target data units and the target position, in the row direction, of the first detected target data unit, wherein the detection template includes N detection units arranged along the column direction, each detection unit includes gray-scale thresholds corresponding to a plurality of pixels adjacent in the row direction, and a target data unit is a pixel unit in the picture data that matches the detection unit used in the detection;
sequentially acquiring N-1 lines of second picture data after the first picture data, detecting, after each line of second picture data is acquired, the second picture data from the target position according to the corresponding detection unit, and recording the number of detected target data units, wherein the N-1 lines of second picture data correspond one-to-one to the last N-1 detection units in the detection template; and
determining the number of target data blocks according to the recorded numbers of target data units corresponding to the first picture data and each line of second picture data, wherein a target data block is a pixel block in the picture data that matches the detection template.
In a possible implementation manner of the first aspect, the number of target data blocks is the minimum of the recorded numbers of target data units corresponding to the first picture data and each line of second picture data.
In a possible implementation manner of the first aspect, the received picture data is stored in a target storage area, and the picture data of the next line overwrites the picture data of the previous line.
In a possible implementation manner of the first aspect, after detecting the first picture data according to the first detection unit in the detection template, the method further includes:
returning to the step of acquiring first picture data in a case where no target data unit is detected and detection of the detection area is not yet complete.
In a possible implementation manner of the first aspect, the method further includes: determining a total number of the target data blocks in a case where detection of the detection area is complete;
and starting a picture detection function in a case where the total number of the target data blocks satisfies a target condition.
In a possible implementation manner of the first aspect, the target condition includes that a ratio of the total number of target data blocks to the total number of pixel blocks included in the detection area is greater than a target ratio.
In a second aspect, an embodiment of the present application provides a picture detection apparatus, including an acquisition module, a detection module, and a determination module, wherein:
the acquisition module is configured to acquire first picture data, where the first picture data includes one line of picture data to be detected in a detection area;
the detection module is configured to detect the first picture data according to a first detection unit in the detection template, and, in a case where target data units are detected, record the number of detected target data units and the target position, in the row direction, of the first detected target data unit, where the detection template includes N detection units arranged along the column direction, each detection unit includes gray-scale thresholds corresponding to a plurality of pixels adjacent in the row direction, and a target data unit is a pixel unit in the picture data that matches the detection unit used in the detection;
the detection module is further configured to sequentially acquire N-1 lines of second picture data after the first picture data, detect, after each line of second picture data is acquired, the second picture data from the target position according to the corresponding detection unit, and record the number of detected target data units, where the N-1 lines of second picture data correspond one-to-one to the last N-1 detection units in the detection template; and
the determination module is configured to determine the number of target data blocks according to the recorded numbers of target data units corresponding to the first picture data and each line of second picture data, where a target data block is a pixel block in the picture data that matches the detection template.
In a possible implementation manner of the second aspect, the number of target data blocks is the minimum of the recorded numbers of target data units corresponding to the first picture data and each line of second picture data.
In a possible implementation manner of the second aspect, the received picture data is stored in a target storage area, and the picture data of the next line overwrites the picture data of the previous line.
In a possible implementation manner of the second aspect, the detection module is further configured to:
return to the step of acquiring first picture data in a case where no target data unit is detected and detection of the detection area is not yet complete.
In a possible implementation manner of the second aspect, the determination module is further configured to determine a total number of the target data blocks in a case where detection of the detection area is complete.
In a possible implementation manner of the second aspect, the apparatus further includes a starting module, where the starting module is configured to:
start a picture detection function in a case where the total number of the target data blocks satisfies a target condition.
In a possible implementation manner of the second aspect, the target condition includes that a ratio of a total number of target data blocks to a total number of pixel blocks included in the detection area is greater than a target ratio.
In a third aspect, an embodiment of the present application provides a display device, including a memory and a processor, where the memory is configured to store a computer program, and the processor is configured to execute the method according to the first aspect or any implementation manner of the first aspect when the computer program is called.
In a fourth aspect, an embodiment of the present application provides a computer program product which, when run on a display device, causes the display device to perform the picture detection method according to the first aspect or any implementation manner of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the method according to the first aspect or any implementation of the first aspect.
An embodiment of the present application provides a picture detection method, apparatus, display device, and computer program product. The method is applied to a display device in which a detection template is stored, and includes the following steps:
First picture data is acquired, where the first picture data includes one line of picture data to be detected in a detection area. The first picture data is detected according to a first detection unit in the detection template, and, in a case where target data units are detected, the number of detected target data units and the target position, in the row direction, of the first detected target data unit are recorded, where the detection template includes N detection units arranged along the column direction, each detection unit includes gray-scale thresholds corresponding to a plurality of pixels adjacent in the row direction, and a target data unit is a pixel unit in the picture data that matches the detection unit used in the detection. N-1 lines of second picture data after the first picture data are sequentially acquired; after each line of second picture data is acquired, the second picture data is detected from the target position according to the corresponding detection unit, and the number of detected target data units is recorded, where the N-1 lines of second picture data correspond one-to-one to the last N-1 detection units in the detection template. The number of target data blocks is then determined according to the recorded numbers of target data units corresponding to the first picture data and each line of second picture data, where a target data block is a pixel block in the picture data that matches the detection template. The technical solution provided by the present application can reduce the storage cost during picture detection.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings. The terminology used in the description of the embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
Compared with a conventional CRT (Cathode Ray Tube) display or a plasma display, a liquid crystal display device has the advantage of low power consumption and is also more environmentally friendly. Because there is no high-voltage component like that of a CRT inside a liquid crystal display device, excessive radiation caused by high voltage does not occur; the display area of the liquid crystal display device itself does not radiate at all, and only a small amount of electromagnetic waves comes from the driving circuit. In practical production, electromagnetic interference (Electromagnetic Interference, EMI) can be further reduced by tightly sealing the casing, so that the radiation index of a liquid crystal display is generally lower than that of a CRT. The visible area of a liquid crystal display is also larger. A liquid crystal display achieves display by controlling the state of liquid crystal molecules through electrodes on the display panel, so its volume does not increase in proportion even if the screen is enlarged, and it is much lighter than a traditional display with the same display area, weighing about 1/3 as much. For these reasons, the liquid crystal display is also called a cold display or an environment-friendly display. At present, liquid crystal displays are developing toward higher resolution, higher display quality, and larger sizes.
When a thin film transistor liquid crystal display (Thin Film Transistor Liquid Crystal Display, TFT-LCD) drives a display panel to display a picture, the driving mode may be line-by-line (progressive) driving. When the display panel operates, a scanning signal is applied to a scanning line, and when the scanning signal is at a high level, the switch circuit is turned on. A gray-scale voltage is applied to the data line, so that the data line writes the gray-scale voltage into the corresponding pixel unit through the switch circuit to charge the pixel unit. In the process of displaying one frame of image, the display panel outputs scanning signals line by line from the first scanning line, so as to control the pixel units to emit light line by line. Each pixel unit includes a liquid crystal capacitor Clc and a storage capacitor Cst. The input end of the pixel unit is connected to the output end of a corresponding TFT, the control end of the TFT is connected to the scanning line responsible for transmitting the switching signal, and the input end of the TFT is connected to the data line responsible for transmitting the data signal. One end of each of Cst and Clc is the pixel electrode, and the other end is connected to the common line (VCOM).
When the display panel displays one or more rows, the gray-scale voltages of all the data lines with positive and negative polarities in a row affect VCOM (the data of the current row generally affects only the current row and the previously written data). If the polarity is positive, the VCOM voltage jitters upward; if the polarity is negative, the VCOM voltage jitters downward. This jitter easily causes crosstalk (cross talk) between the common line and the data lines, making the screen display brighter or darker, causing abnormal pictures, and affecting the actual display of the picture, resulting in poor quality.
In the prior art, PDF is usually adopted to solve this problem. Referring to Fig. 1 and Fig. 2, the black-and-white interleaved area in the middle is an area that causes abnormal display around it, so the display mode of the screen needs to be changed; in LCD display technology, the display polarity of the data is changed to eliminate the display abnormality. Specifically, a minimum detection square is designed inside the TCON, for example the 2×2 detection squares in Fig. 2. It is then detected whether the gray level of the minimum detection square is consistent with the gray level of the display data: if so, it is counted as 1 (thick dashed box in the figure); if not, it is counted as 0 (thin dashed box in the figure). When detection of the whole picture is finished, it is determined whether the number of 1s exceeds a preset number, so as to decide whether to start the PDF function.
In this method, at least two lines of line buffers are occupied for receiving display data when the minimum squares are compared, and the total number of hit squares in the two lines is then calculated. At a resolution of 3840×2160, 2×3840 data items need to be stored, which is a significant memory consumption; moreover, in addition to 2×2, the detection grid may be 4×4, 6×6, or larger, which requires even more line-buffer space. To meet the storage requirement, a larger wafer area is required when manufacturing the TCON, thereby increasing the cost of the TCON.
In view of this, the embodiment of the application provides a picture detection method, which can save storage space when loading picture data, thereby reducing storage cost.
In order to facilitate understanding of the technical solutions in the embodiments of the present application, the following first explains some terms related to the embodiments of the present application:
Picture data: includes image information of a plurality of frames. Within one frame, the picture data includes at least the gray level of each sub-pixel.
Detection area: a part or all of the display screen.
Gray scale: each pixel on the display panel of the display device has a corresponding gray-scale value, which represents the light-emitting brightness of the pixel. In general, the gray-scale value is a value between 0 and 255, and a higher value corresponds to higher brightness. In addition, it can be understood that the gray-scale values of the individual pixels in the display panel change dynamically.
The following explains the picture detection method provided in the embodiment of the present application in detail.
In a specific implementation, the method may be carried out as shown in Fig. 3 and may be applied to a display device in which a detection template is stored. Fig. 3 is a flowchart of a picture detection method according to an embodiment of the present application. As shown in Fig. 3, the method includes the following steps:
Step S110, acquiring first picture data.
Taking a timing controller (Timing Controller, TCON) as an example, the TCON may be a main control chip of the display device; it may receive picture data from a front-end chip and convert the received picture data into synchronous line control signals and data output signals, so as to implement picture display. The front-end chip includes, but is not limited to, a main board chip or a System on Chip (SOC). Within one frame of image, the first picture data may include one line of picture data to be detected in the detection area; the picture data may be used to form the digital signal of the picture and may include at least the gray scale of each pixel.
Step S120, detecting the first picture data according to a first detection unit in the detection template.
The detection template may include N detection units arranged along the column direction, where N is a positive integer greater than 1, and each detection unit may include gray-scale thresholds corresponding to a plurality of pixels adjacent in the row direction. The detection template may be a square template such as 2×2, 4×4, or 6×6. Referring to Fig. 4, taking a 2×2 square template as an example, where N is 2, the template may be split into 2 detection units arranged along the column direction, namely a first detection unit a and a second detection unit b, and each detection unit includes gray-scale thresholds corresponding to 2 row-adjacent pixels.
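Purely as an illustrative aid (not part of the claimed embodiments), the detection template described above could be represented by a C data structure along the following lines; the field names and the fixed maximum sizes are assumptions made for this sketch.

```c
#include <stdint.h>

#define MAX_TEMPLATE_N  8   /* assumed upper bound on N (e.g. 2, 4, 6)          */
#define MAX_UNIT_WIDTH  8   /* assumed upper bound on pixels per detection unit */

/* One detection unit: gray-scale thresholds for several row-adjacent pixels. */
typedef struct {
    uint8_t width;                         /* number of row-adjacent pixels    */
    uint8_t threshold[MAX_UNIT_WIDTH];     /* gray-scale threshold per pixel   */
} detection_unit_t;

/* Detection template: N detection units arranged along the column direction. */
typedef struct {
    uint8_t n;                             /* N, e.g. 2 for a 2x2 template     */
    detection_unit_t unit[MAX_TEMPLATE_N]; /* unit[0] is first detection unit a */
} detection_template_t;
```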
After the first picture data is acquired, the detection template may be used to match (also referred to as detect) the first picture data in the row direction. Referring to Fig. 4 and Fig. 5, taking the case where the first detection unit a moves by one pixel in the row direction at a time and the first picture data is the first line of data in Fig. 5 as an example, after the TCON receives the first picture data, the first detection unit a may first match the gray scale of the pixel unit A1, and then move one pixel to the right to continue matching, until all the pixels of the first picture data have been matched.
Step S130, judging whether the target data unit is detected, if yes, executing step S140, and if no, executing step S150.
After the gray-scale thresholds of the first detection unit have been matched against the first line of picture data, different steps are executed according to the matching result, that is, according to whether a target data unit is detected. A target data unit may be a pixel unit in the picture data that matches the detection unit used in the detection.
Step S140, recording the number of detected target data units and the corresponding target positions of the detected first target data unit in the row direction.
For example, referring to Fig. 5, in a case where the first detection unit successfully matches the pixel unit A1 located at the 4th column of the data, the target position x(1) of the successfully matched pixel unit A1 may be recorded, the pixel unit A1 is determined to be a target data unit, and the count is recorded as 1. The first detection unit can detect 2 target data units in this line of picture data, so the number of target data units is S(A2) = 2. The target position and the number of target data units may be stored in any location of the display device, for example in the memory of the TCON, which is not particularly limited in the embodiments of the present application.
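The scan of one line by a single detection unit (steps S120 to S140) can be sketched as follows, reusing the illustrative structure above. The exact matching rule (here, equality with the gray-scale threshold) and all function names are assumptions; the embodiment itself only requires that the pixel unit "matches" the detection unit.

```c
#include <stdint.h>
#include <stddef.h>

/* Returns 1 if the pixels starting at column `col` match the detection unit.
 * The matching rule used here (gray level equal to the threshold) is an
 * assumption of this sketch.                                                */
static int unit_matches(const detection_unit_t *u,
                        const uint8_t *line, size_t line_len, size_t col)
{
    if (col + u->width > line_len)
        return 0;
    for (size_t i = 0; i < u->width; i++) {
        if (line[col + i] != u->threshold[i])
            return 0;
    }
    return 1;
}

/* Steps S120-S140 (sketch): scan one line of first picture data with the
 * first detection unit, counting target data units and recording the column
 * of the first one (the target position x(1)).                              */
static size_t scan_first_line(const detection_unit_t *u,
                              const uint8_t *line, size_t line_len,
                              size_t *first_hit_col)
{
    size_t count = 0;
    *first_hit_col = line_len;                /* sentinel value: no hit yet   */
    for (size_t col = 0; col + u->width <= line_len; col++) {  /* step = 1 px */
        if (unit_matches(u, line, line_len, col)) {
            if (count == 0)
                *first_hit_col = col;         /* first target data unit       */
            count++;
        }
    }
    return count;
}
```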
Step S150, judging whether the detection area is detected, if not, returning to step S110, otherwise, executing step S180.
With continued reference to Fig. 5, when the first detection unit does not match any target data unit in a line of data (for example, the 3rd line in Fig. 5), the TCON may continue to acquire new first picture data, and the first detection unit continues to match the newly received first picture data.
It should be noted that, after the TCON acquires new picture data, it may store the most recently received line of picture data in the target storage area, overwriting the previously received line of picture data in a cyclic manner. In this way, only the target position and the number of detected target data units need to be retained, and the data of the previous line is no longer needed, so that a single line buffer can be reused cyclically. When the detection template has another size, for example 4×4, still only the target position and the number of target data units of each line need to be retained. The storage space can therefore be reduced, lowering the cost. The target storage area may be a line buffer.
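A minimal sketch of the single-line-buffer reuse described here, continuing the illustrative code above and assuming one byte of gray-scale data per pixel and a fixed line width; the buffer and variable names are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

#define LINE_WIDTH 3840   /* assumed horizontal resolution of one line */

/* Single line buffer: each newly received line overwrites the previous one,
 * so the stored picture data never exceeds one line, regardless of N.       */
static uint8_t line_buf[LINE_WIDTH];

/* The only state that must survive the overwrite: the target position x(1)
 * and the per-line counts of target data units, e.g. S(A2), S(B2), ...      */
static size_t target_pos;
static size_t unit_count[MAX_TEMPLATE_N];  /* MAX_TEMPLATE_N from the sketch above */

static void store_line(const uint8_t *line)
{
    for (size_t i = 0; i < LINE_WIDTH; i++)
        line_buf[i] = line[i];             /* cyclic overwrite of the old line */
}
```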
Step S160, sequentially acquiring N-1 rows of second picture data after the first picture data, detecting the second picture data from the target position according to the corresponding detection unit after each row of second picture data is acquired, and recording the number of the detected target data units.
For example, when N is 2, after matching of the first picture data is completed, the 5th line of data in Fig. 5 (i.e., the 1st line of second picture data) may be acquired. After reception of the 5th line of data is completed, the second detection unit b may start detection from the target position x(1), that is, from the pixel unit B2 of the second picture data, and record the corresponding number S(B2) of target data units, where S(B2) = 2. Starting from the target position can improve the matching speed.
After detection of the 5th line of data is completed, both detection units of the detection template have each completed one line of detection, that is, the detection template has completed one round of detection. In other words, the 1 line of second picture data corresponds one-to-one to the last 1 detection unit in the detection template.
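Step S160 can be sketched in the same illustrative style: after each line of second picture data arrives, its corresponding detection unit starts matching from the recorded target position rather than from the first column, which is the source of the speed-up mentioned above. The names and the reuse of unit_matches() are assumptions carried over from the earlier sketch.

```c
/* Step S160 (sketch): scan one line of second picture data with its
 * corresponding detection unit, starting from the recorded target position
 * x(1) rather than from column 0.                                           */
static size_t scan_second_line(const detection_unit_t *u,
                               const uint8_t *line, size_t line_len,
                               size_t start_col)
{
    size_t count = 0;
    for (size_t col = start_col; col + u->width <= line_len; col++) {
        if (unit_matches(u, line, line_len, col))
            count++;
    }
    return count;
}
```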
Step S170, determining the number of target data blocks according to the number of target data units corresponding to the recorded first picture data and each second picture data.
A target data block may be a pixel block in the picture data that matches the detection template. After the detection template completes one round of detection, the number S(AB2) of target data blocks determined by the detection template in the first round of detection can be determined according to the recorded number S(A2) of target data units corresponding to the first picture data and the recorded number S(B2) of target data units corresponding to the second picture data, where S(A2) = S(B2) = S(AB2), and each such data block includes a pixel unit A1 and a pixel unit B2.
In consideration of the case where S(A2) is not equal to S(B2), in order to improve the detection accuracy, the minimum of the recorded numbers of target data units corresponding to the first picture data and each line of second picture data may be taken as the number of target data blocks.
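A minimal sketch of this per-round aggregation (step S170), under the same assumed names, simply takes the minimum of the per-line counts:

```c
/* Step S170 (sketch): the number of target data blocks for one round is the
 * minimum of the per-line target-data-unit counts, e.g. min(S(A2), S(B2)).  */
static size_t blocks_in_round(const size_t *counts, unsigned n)
{
    size_t min = counts[0];
    for (unsigned i = 1; i < n; i++) {
        if (counts[i] < min)
            min = counts[i];
    }
    return min;
}
```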
Then the 6th line of data may be acquired (the 6th line of data serves as the first picture data at this time), the first detection unit detects the 6th line of data again, and the number S(A3) of detected target data units and the target position x(2), in the row direction, of the first detected target data unit are recorded.
Step S180, determining the total number of target data blocks in a case where detection of the detection area is complete, and starting the picture detection function in a case where the total number of target data blocks satisfies the target condition.
After detection of the detection area is complete, the total number of target data blocks can be determined by accumulating the numbers of target data blocks determined by the detection template in each round. The target condition may be that the ratio of the total number of target data blocks to the total number of pixel blocks included in the detection area is greater than a target ratio. The target ratio may be stored in the display device in advance. After the picture detection function (the PDF function described above) is started, the voltage polarity, the displayed gray scale, and the like can be changed to improve the display quality.
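The target-condition check of step S180 might look like the following sketch; how the total number of pixel blocks in the detection area and the target ratio are represented is not fixed by the embodiment, so the types used here are assumptions.

```c
/* Step S180 (sketch): once the whole detection area has been processed,
 * decide whether to start the picture detection (PDF) function.             */
static int should_start_function(size_t total_blocks,
                                 size_t total_blocks_in_area,
                                 double target_ratio)
{
    /* Target condition: ratio of matched blocks to all pixel blocks in the
     * detection area is greater than the target ratio.                      */
    return (double)total_blocks > target_ratio * (double)total_blocks_in_area;
}
```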
It can be understood that, in the technical solution provided by the present application, the larger the number of squares in the detection template, the more storage space the solution saves.
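As a rough numerical illustration of this saving (assuming one stored entry per pixel of a 3840-pixel-wide line and ignoring control overhead): the prior-art approach buffers 2 × 3840 = 7680 entries for a 2×2 template and 6 × 3840 = 23040 entries for a 6×6 template, whereas the scheme above keeps roughly 1 × 3840 = 3840 entries plus one target position and N per-line counts per round, so the relative saving grows as the template becomes larger.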
It will be appreciated by those skilled in the art that the above embodiments are exemplary and are not intended to limit the application. Where possible, the order of execution of one or more of the above steps may be modified, or the steps may be selectively combined to obtain one or more other embodiments. Those skilled in the art may select and combine any of the above steps according to need without departing from the spirit of the present application.
An embodiment of the present application provides a picture detection method, which is applied to a display device in which a detection template is stored, and the method includes the following steps:
First picture data is acquired, where the first picture data includes one line of picture data to be detected in a detection area. The first picture data is detected according to a first detection unit in the detection template, and, in a case where target data units are detected, the number of detected target data units and the target position, in the row direction, of the first detected target data unit are recorded, where the detection template includes N detection units arranged along the column direction, each detection unit includes gray-scale thresholds corresponding to a plurality of pixels adjacent in the row direction, and a target data unit is a pixel unit in the picture data that matches the detection unit used in the detection. N-1 lines of second picture data after the first picture data are sequentially acquired; after each line of second picture data is acquired, the second picture data is detected from the target position according to the corresponding detection unit, and the number of detected target data units is recorded, where the N-1 lines of second picture data correspond one-to-one to the last N-1 detection units in the detection template. The number of target data blocks is then determined according to the recorded numbers of target data units corresponding to the first picture data and each line of second picture data, where a target data block is a pixel block in the picture data that matches the detection template. The technical solution provided by the present application can reduce the storage cost during picture detection.
Based on the same inventive concept, as an implementation of the above method, an embodiment of the present application provides a picture detection apparatus. The apparatus embodiment corresponds to the foregoing method embodiment; for ease of reading, the details of the foregoing method embodiment are not repeated one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the content of the foregoing method embodiment.
Fig. 6 is a schematic structural diagram of a picture detection apparatus according to an embodiment of the present application. As shown in Fig. 6, the apparatus provided in this embodiment includes an acquisition module 110, a detection module 120, and a determination module 130, wherein:
the acquisition module 110 is configured to acquire first picture data, where the first picture data includes one line of picture data to be detected in a detection area;
the detection module 120 is configured to detect the first picture data according to a first detection unit in the detection template, and, in a case where target data units are detected, record the number of detected target data units and the target position, in the row direction, of the first detected target data unit, where the detection template includes N detection units arranged along the column direction, and each detection unit includes gray-scale thresholds corresponding to a plurality of pixels adjacent in the row direction;
the detection module 120 is further configured to sequentially acquire N-1 lines of second picture data after the first picture data, detect, after each line of second picture data is acquired, the second picture data from the target position according to the corresponding detection unit, and record the number of detected target data units, where the N-1 lines of second picture data correspond one-to-one to the last N-1 detection units in the detection template; and
the determination module 130 is configured to determine the number of target data blocks according to the recorded numbers of target data units corresponding to the first picture data and each line of second picture data, where a target data block is a pixel block in the picture data that matches the detection template.
In one possible implementation, the number of target data blocks is the minimum of the recorded numbers of target data units corresponding to the first picture data and each line of second picture data.
In one possible implementation, the received picture data is stored in a target storage area, and the picture data of the next line overwrites the picture data of the previous line.
In one possible implementation, the detection module 120 is further configured to:
return to the step of acquiring first picture data in a case where no target data unit is detected and detection of the detection area is not yet complete.
In a possible implementation, the determination module 130 is further configured to determine the total number of target data blocks in a case where detection of the detection area is complete.
In a possible implementation, the apparatus further includes a starting module 140, where the starting module 140 is configured to:
start a picture detection function in a case where the total number of target data blocks satisfies a target condition.
In one possible implementation, the target condition includes that the ratio of the total number of target data blocks to the total number of pixel blocks included in the detection area is greater than the target ratio.
The picture detection apparatus provided in this embodiment can perform the above method embodiment; its implementation principle and technical effects are similar and will not be described again here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Based on the same inventive concept, the embodiment of the application also provides a display device. Fig. 7 is a schematic structural diagram of a display device according to an embodiment of the present application, and as shown in fig. 7, the display device according to this embodiment includes a memory 210 and a processor 220, where the memory 210 is configured to store a computer program, and the processor 220 is configured to execute the method described in the foregoing method embodiment when the computer program is called.
The display device provided in this embodiment can perform the above method embodiment; its implementation principle and technical effects are similar and will not be described again here.
An embodiment of the present application further provides a computer program product which, when run on a display device, causes the display device to perform the method described in the foregoing method embodiments.
The embodiment of the present application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the method according to the first aspect or any implementation manner of the first aspect.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form of a computer program product in whole or in part. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted via a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that all or part of the above-described method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the above-described method embodiments may be performed. The storage medium may include a ROM, a random access memory RAM, a magnetic disk, an optical disk, or any other medium that can store program code.
The naming or numbering of the steps in the present application does not mean that the steps in the method flow must be executed according to the time/logic sequence indicated by the naming or numbering, and the execution sequence of the steps in the flow that are named or numbered may be changed according to the technical purpose to be achieved, so long as the same or similar technical effects can be achieved.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for a part that is not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In the description of the present application, "/" means that the related objects are in an "or" relationship; for example, A/B may mean A or B. "And/or" in the present application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone, where A and B may be singular or plural, unless otherwise stated.
Also, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "At least one of the following" or a similar expression means any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may represent a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be singular or plural.
As used in this specification and the appended claims, the term "if" may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]", depending on the context.
Furthermore, in the description of the present specification and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise.
It should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solutions described in the above embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.