Disclosure of Invention
In order to overcome the problems in the related art, the disclosure provides a camera shooting method, a camera shooting device and a terminal.
According to a first aspect of an embodiment of the present disclosure, there is provided an image capturing method including:
reading the shooting time period and the shooting range of the next task to be shot in a shooting task list, wherein the shooting time periods and the corresponding shooting ranges of a plurality of shooting tasks are recorded in the shooting task list;
and when the starting time of the shooting time period is reached, shooting the shooting range.
Optionally, the shooting of the shooting range includes:
shooting the shooting range according to the shooting angle corresponding to the shooting range, wherein a plurality of different shooting ranges correspond to a plurality of different shooting angles in a one-to-one manner.
Optionally, before reading the shooting time period and the shooting range of the next task to be shot in the shooting task list, the method further includes:
receiving a first user instruction, wherein the first user instruction carries a shooting time period and a corresponding shooting range;
extracting the shooting time period and the shooting range in the first user instruction;
and taking the shooting time period and the shooting range as a task to be shot and storing the task to be shot in the shooting task list.
Optionally, before reading the shooting time period and the shooting range of the next task to be shot in the shooting task list, the method further includes:
receiving a second user instruction, wherein the second user instruction carries a shooting time period;
extracting the shooting time period in the second user instruction;
acquiring a current shooting range, wherein the current shooting range is a shooting range adjusted by a user;
and taking the shooting time period and the shooting range as a task to be shot and storing the task to be shot in the shooting task list.
Optionally, after the shooting of the shooting range, the method further includes:
analyzing the shooting data to determine whether potential safety hazards exist;
if the potential safety hazard exists, sending an analysis result of the potential safety hazard to intelligent equipment; or,
and if the potential safety hazard exists, outputting alarm information.
Optionally, the analyzing the shooting data to determine whether a potential safety hazard exists includes:
recognizing a face image within the photographing range;
matching the facial image with a preset facial image of a legal user to obtain similarity;
and if the similarity is lower than a set threshold, determining that potential safety hazards exist.
Optionally, the analyzing the shooting data to determine whether a potential safety hazard exists includes:
collecting motion information of a child or a pet in the shooting range;
and determining whether the child or the pet has a potential safety hazard according to the motion information of the child or the pet and the surrounding environment.
Optionally, the intelligent device includes an intelligent terminal, an intelligent appliance, a wearable device, or a community security center.
Optionally, after the shooting of the shooting range, the method further includes:
generating a security report based on the shooting data and the analysis result at preset intervals;
and sending the security report to an intelligent terminal or wearable equipment.
According to a second aspect of the embodiments of the present disclosure, there is provided an image pickup apparatus including: a reading module and a shooting module;
the reading module is configured to read a shooting time period and a shooting range of a next task to be shot in a shooting task list, and the shooting time periods and the corresponding shooting ranges of a plurality of shooting tasks are recorded in the shooting task list;
the shooting module is configured to shoot the shooting range when the starting time of the shooting time period read by the reading module is reached.
Optionally, the shooting module includes: a shooting submodule;
the shooting submodule is configured to shoot the shooting range according to the shooting angle corresponding to the shooting range read by the reading module, and a plurality of different shooting ranges correspond to a plurality of different shooting angles in a one-to-one mode.
Optionally, the apparatus further comprises: a first instruction receiving module, a first extraction module and a first storage module;
the first instruction receiving module is configured to receive a first user instruction, wherein the first user instruction carries a shooting time period and a corresponding shooting range;
the first extraction module is configured to extract the shooting time period and the shooting range in the first user instruction received by the first instruction receiving module;
the first storage module is configured to store the shooting time period and the shooting range extracted by the first extraction module into the shooting task list as a task to be shot.
Optionally, the apparatus further comprises: a second instruction receiving module, a second extraction module, a shooting range acquisition module and a second storage module;
the second instruction receiving module is configured to receive a second user instruction, and the second user instruction carries a shooting time period;
the second extraction module is configured to extract the shooting time period from the second user instruction received by the second instruction receiving module;
the shooting range acquisition module is configured to acquire a current shooting range, and the current shooting range is a shooting range adjusted by a user;
the second storage module is configured to store the shooting time period extracted by the second extraction module and the shooting range acquired by the shooting range acquisition module as a task to be shot in the shooting task list.
Optionally, the apparatus further comprises: an analysis module, an analysis result sending module and an alarm module;
the analysis module is configured to analyze the shooting data and determine whether potential safety hazards exist;
the analysis result sending module is configured to send the analysis result of the potential safety hazard to the intelligent device if the analysis module determines that the potential safety hazard exists; or,
the alarm module is configured to output alarm information if the analysis module determines that the potential safety hazard exists.
Optionally, the analysis module includes: a recognition sub-module, a matching sub-module and a first hidden danger determining sub-module;
the recognition sub-module is configured to recognize a face image within the photographing range;
the matching submodule is configured to match the facial image identified by the identification submodule with a preset facial image of a legal user to obtain similarity;
the first hidden danger determining submodule is configured to determine that a potential safety hazard exists if the similarity obtained by the matching submodule is lower than a set threshold.
Optionally, the analysis module includes: an acquisition sub-module and a second hidden danger determining sub-module;
the acquisition sub-module is configured to acquire motion information of children or pets in the shooting range;
the second hidden danger determining submodule is configured to determine whether the child or the pet has a potential safety hazard or not according to the motion information of the child or the pet and the surrounding environment, which are acquired by the acquisition submodule.
Optionally, the intelligent device includes an intelligent terminal, an intelligent appliance, a wearable device, or a community security center.
Optionally, the apparatus further comprises: a report generation module and a report sending module;
the report generation module is configured to generate a security report based on shooting data and an analysis result at preset intervals;
the report sending module is configured to send the security report generated by the report generating module to a smart terminal or a wearable device.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, including: a processor; a memory configured to store processor-executable instructions; wherein the processor is configured to:
reading the shooting time period and the shooting range of the next task to be shot in a shooting task list, wherein the shooting time periods and the corresponding shooting ranges of a plurality of shooting tasks are recorded in the shooting task list;
and when the starting time of the shooting time period is reached, shooting the shooting range.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
the imaging device in the present disclosure may pre-store a shooting task list including a plurality of shooting tasks, in which a shooting time period and a shooting range of each shooting task may be recorded, and may shoot a set shooting range based on the shooting time period of each shooting task. In this way, the camera device can shoot different ranges in different time periods, so that the utilization rate of the camera device is improved, multiple purposes are realized, convenience is provided for users, and user experience is optimized.
The shooting range stored in the shooting device in the disclosure may also be a shooting angle, and the shooting range is determined based on the shooting angle, so that the shooting range can more accurately meet the requirements of the user.
The camera device in the disclosure can receive an instruction carrying a shooting time period and a shooting range input by a user, and store the instruction as a task to be shot into the shooting task list. The mode for setting the shooting task list is simple and easy to realize.
The camera device can also receive an instruction carrying a shooting time period from a user, acquire a shooting range adjusted by the user in real time, and store the shooting time period and the shooting range as a task to be shot into a shooting task list. In the mode, the shooting range is adjusted in real time by the user, so that the shooting range can better meet the requirements of the user.
The camera device can also analyze the shot data and, when the analysis indicates that a potential safety hazard exists, notify the intelligent device of the result or send an alarm, so as to remind the user in time to handle the potential safety hazard event and avoid unnecessary loss.
The camera device in the disclosure can determine whether a potential safety hazard exists by identifying whether a captured facial image is the facial image of a legitimate user. This mode can effectively determine whether a stranger has broken in, thereby playing a security monitoring role.
The camera device can determine whether potential safety hazards exist through analysis of motion information and surrounding environment of children or pets, and the mode can effectively monitor the safety conditions of the children and the pets.
Whether or not the analysis result shows that a potential safety hazard exists, the camera device can generate a security report and send it to the intelligent terminal or the wearable device of the user. When the user is on a business trip, traveling, or away from home for a long time, this allows the user to learn about the security situation at home in time, because the intelligent terminal or wearable device is usually carried by the user. If no potential safety hazard has occurred at home, the user can attend to work or other matters with peace of mind; if a potential safety hazard has occurred, the user can handle it in time so as to avoid unnecessary loss.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
As shown in fig. 1, fig. 1 is a flowchart illustrating an image capturing method, which may be used in an image capturing apparatus, according to an exemplary embodiment, and includes the steps of:
step 101, reading a shooting time period and a shooting range of a next task to be shot in a shooting task list.
In the embodiment of the present disclosure, the shooting task list is a list including a plurality of shooting tasks, and may be stored in the image capturing apparatus, and the shooting time periods of the respective shooting tasks and the corresponding shooting ranges are recorded in the list.
And 102, shooting the shooting range when the starting time of the shooting time period is reached.
In the above-described embodiment, the image capturing apparatus may pre-store a shooting task list including a plurality of shooting tasks, the shooting time period and the shooting range of each shooting task may be recorded in the list, and the image capturing apparatus may shoot the set shooting range based on the shooting time period of each shooting task. In this way, the camera device can shoot different ranges in different time periods, which improves the utilization rate of the camera device, allows one device to serve multiple purposes, provides convenience for users, and improves the user experience.
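The flow of steps 101 and 102 can be sketched as follows. This is a minimal illustration, assuming an in-memory task list and a `shoot` callback; the names `ShootingTask`, `next_task`, and `tick` are illustrative and not part of the disclosed apparatus.

```python
from dataclasses import dataclass

@dataclass
class ShootingTask:
    start: int           # start of the shooting time period (minutes since midnight)
    end: int             # end of the shooting time period
    shooting_range: str  # label of the range to aim at, e.g. "front gate"

def next_task(task_list, now):
    # Step 101: read the next task to be shot -- the pending task whose
    # start time is closest to the current time.
    pending = [t for t in task_list if t.start >= now]
    return min(pending, key=lambda t: t.start) if pending else None

def tick(task_list, now, shoot):
    # Step 102: when the start time of the shooting time period is reached,
    # shoot the corresponding shooting range.
    task = next_task(task_list, now)
    if task is not None and now >= task.start:
        shoot(task.shooting_range)
```

In a real device, `tick` would be driven by a timer and `shoot` would aim the camera at the stored range before capturing.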
As shown in fig. 2, fig. 2 is a flowchart illustrating another image capturing method according to an exemplary embodiment, which may be used in an image capturing apparatus, including the steps of:
step 201, receiving a first user instruction, where the first user instruction carries a shooting time period and a corresponding shooting range.
In the embodiment of the present disclosure, a user may set a shooting task list in advance, for example, the user sends a user instruction to the image capturing device, where the instruction carries a shooting time period and a shooting range of a shooting task.
Step 202, extracting a shooting time period and a shooting range in a first user instruction.
The shooting time periods correspond to the shooting ranges one to one, and different shooting ranges correspond to different shooting angles one to one. For example, in a shooting task for monitoring whether a stranger intrudes into the house, the shooting time period is from 9:00 a.m. to 5:00 p.m. and the shooting range is aimed at the gate; in another shooting task for monitoring the learning condition of a child, the shooting time period is from 5:00 p.m. to 9:00 p.m. and the shooting range is aimed at the study room.
And step 203, storing the shooting time period and the shooting range as a task to be shot in a shooting task list.
In this manner, the shooting tasks and the shooting task list are set.
In another embodiment of the disclosure, the shooting tasks and the shooting task list may be set as follows.
Receiving a second user instruction, wherein the second user instruction carries a shooting time period; extracting a shooting time period in a second user instruction; acquiring a current shooting range, wherein the current shooting range is a shooting range adjusted by a user; and storing the shooting time period extracted from the second user instruction and the acquired current shooting range into a shooting task list as a task to be shot.
In this implementation, the shooting time period is set by the user and carried in the second user instruction sent to the camera device; the shooting range is determined by the camera device acquiring, in real time, the current shooting range as adjusted by the user. For example, the user inputs the start time and the end time of a shooting task, and the camera device stores the shooting time period; the user then adjusts the shooting angle, and the camera device collects the shooting range adjusted by the user and stores it in correspondence with the shooting time period input by the user.
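The two ways of populating the shooting task list (steps 201-203, and the second-instruction variant just described) can be sketched as below. The dictionary keys and the `get_current_range` callback are assumptions made for illustration, not part of the disclosure.

```python
def handle_first_instruction(task_list, instruction):
    # First user instruction: carries both a shooting time period and a
    # corresponding shooting range; extract them and store the pair as a
    # task to be shot (steps 201-203).
    task_list.append((instruction["period"], instruction["range"]))

def handle_second_instruction(task_list, instruction, get_current_range):
    # Second user instruction: carries only the shooting time period; the
    # shooting range is the current range as adjusted by the user, acquired
    # from the device in real time.
    task_list.append((instruction["period"], get_current_range()))
```

Either way, the stored pair of time period and range later serves as one task to be shot in the shooting task list.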
Step 204, reading the shooting time period and the shooting range of the next task to be shot in the shooting task list, wherein the shooting time periods and the shooting ranges corresponding to the shooting tasks are recorded in the shooting task list.
After the shooting task list is set in steps 201 to 203, the image capturing apparatus can read the shooting time period and the shooting range of the next task to be shot. The shooting task whose start time is closest to the current time is the next task to be shot.
And step 205, when the starting time of the shooting time period is reached, shooting the shooting range according to the shooting angle corresponding to the shooting range.
In the embodiment of the disclosure, when the shooting start time of the task to be shot is reached, the shooting angle of the camera is adjusted to be aligned with the shooting range. The camera in this disclosure has a motor mounted below it for driving the camera to rotate so as to align different shooting ranges. The camera can rotate in the horizontal direction, can also rotate to a certain extent in other directions, and can adopt a wide-angle lens. In addition, the camera device can be provided with a wireless communication module for wireless connection with the router, so that the camera device can communicate with an intelligent terminal of a user and an intelligent household appliance with the wireless communication module.
And step 206, analyzing the shooting data to determine whether potential safety hazards exist.
In one embodiment of the disclosure, whether a potential safety hazard exists may be determined as follows:
identifying a face image within a shooting range; matching the face image with a preset face image of a legal user to obtain similarity; and if the similarity is lower than a set threshold, determining that potential safety hazards exist.
In this manner, the face images of legitimate users may be stored in advance. If the shooting range is the indoor range of a house, the legitimate users may be the family members of the house or relatives and friends approved by the family members. If the similarity is higher than the set threshold, the person appearing in the shooting range is considered a legitimate, trusted user, so no potential safety hazard exists; otherwise, the person appearing in the shooting range is considered a stranger, and it is determined that a potential safety hazard exists.
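A minimal sketch of this face-matching decision follows. The similarity scores would come from a face recognition model (not shown here), and the threshold value is an illustrative assumption rather than a value given by the disclosure.

```python
SIMILARITY_THRESHOLD = 0.8  # illustrative value; tuned by the implementer

def face_hazard(similarities):
    # `similarities` holds the similarity between the captured face image and
    # each pre-stored face image of a legitimate user. A potential safety
    # hazard exists only if the captured face matches none of them, i.e. the
    # best similarity is below the set threshold.
    best = max(similarities, default=0.0)
    return best < SIMILARITY_THRESHOLD
```

Note that an empty score list (no legitimate faces registered, or no match attempted) conservatively counts as a hazard in this sketch.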
In another embodiment of the disclosure, whether a potential safety hazard exists may also be determined as follows:
collecting motion information of children or pets in a shooting range; and determining whether the children or the pets have potential safety hazards or not according to the motion information of the children or the pets and the surrounding environment.
In this manner, home equipment that a child or a pet should not touch may be regarded as a dangerous object, and the image information of such dangerous objects may be stored in advance. Whether an object in the shooting range is a child or a pet is determined based on the collected height and shape information; the motion information of the child or pet, such as a touch action, and the surrounding environment, namely whether dangerous objects exist around the child or pet, are then determined. If the child or pet is within a set range of a dangerous object, or performs a touch action on a dangerous object, it is determined that a potential safety hazard exists.
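The child/pet check above can be sketched as a proximity-or-touch test against pre-registered dangerous objects. Positions, the distance threshold, and the set of touched objects are illustrative assumptions standing in for the device's actual perception pipeline.

```python
import math

def child_pet_hazard(subject_pos, touched, dangerous_objects, safe_distance):
    # A potential safety hazard exists if the child or pet is within the set
    # range of any pre-registered dangerous object, or has performed a touch
    # action on one of them.
    for name, pos in dangerous_objects.items():
        if math.dist(subject_pos, pos) <= safe_distance or name in touched:
            return True
    return False
```

Here `dangerous_objects` maps object names to positions in the camera's coordinate frame, and `touched` is the set of object names on which a touch action was detected.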
Step 207, if the potential safety hazard exists, sending an analysis result of the potential safety hazard to the intelligent equipment; or if the potential safety hazard exists, outputting alarm information.
In this step of the disclosure, when it is determined that a potential safety hazard exists, the analysis result may be sent to an intelligent device of the user; for example, a message such as "a stranger has been captured" or the captured facial image of the stranger may be sent to the user's mobile phone. Alternatively, alarm information may be output directly through an alarm, or an alarm may be sent to the security center of a smart community to warn off the stranger or seek help from the security center.
In the embodiment of the disclosure, a security report may be generated based on the shooting data and the analysis result at preset intervals and sent to an intelligent terminal or a wearable device.
In this manner, whether or not the analysis result shows that a potential safety hazard exists, a security report is generated and sent to the intelligent terminal or the wearable device of the user. When the user is on a business trip, traveling, or away from home for a long time, this allows the user to learn about the security situation at home in time, because the intelligent terminal or wearable device is usually carried by the user. If no potential safety hazard has occurred at home, the user can attend to work or other matters with peace of mind; if a potential safety hazard has occurred, the user can handle it in time so as to avoid unnecessary loss.
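The periodic report step can be sketched as follows; the interval, field names, and report shape are illustrative assumptions, not details specified by the disclosure.

```python
REPORT_INTERVAL_S = 24 * 3600  # illustrative: one security report per day

def maybe_make_report(shots, hazards, last_report_time, now):
    # Generate a security report at preset intervals, whether or not any
    # potential safety hazard was found; the caller then sends it to the
    # user's intelligent terminal or wearable device.
    if now - last_report_time < REPORT_INTERVAL_S:
        return None
    return {
        "generated_at": now,
        "shots_taken": len(shots),
        "hazards_found": len(hazards),
        "status": "hazard detected" if hazards else "all clear",
    }
```

Returning `None` between intervals lets a periodic scheduler call this unconditionally and send a report only when one is produced.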
The intelligent device in the embodiments of the present disclosure may include: an intelligent terminal, a wearable device, an intelligent household appliance, or a smart community security center.
In addition, the camera device can also transmit the shot video to a designated device of the user for storage. For example, since video occupies a large amount of space, the captured video may be transmitted at set intervals to another device, such as a smart television, for storage so that new footage can still be recorded, or video determined to contain no potential safety hazard may be cleared at regular intervals.
As shown in fig. 3, fig. 3 is a schematic view of a camera application scenario according to an exemplary embodiment of the present disclosure. The scenario shown in fig. 3 includes a smart camera as the imaging device and a smartphone as the intelligent device. The smart camera reads the shooting time period and the shooting range of the next task to be shot in the stored shooting task list; the start time of the shooting time period is 9:00 a.m., and the shooting range is the range facing the gate. When 9:00 a.m. is reached, the camera device adjusts the orientation of the camera to face the gate and starts shooting the range facing the gate.
In the application scenario shown in fig. 3, reference may be made to the foregoing description of fig. 1 and fig. 2 for a specific process of implementing image capturing, and details are not described here again.
Corresponding to the embodiment of the image pickup method, the disclosure also provides an embodiment of an image pickup device and a terminal applied by the image pickup device.
As shown in fig. 4, fig. 4 is a block diagram of an image pickup apparatus according to an exemplary embodiment of the present disclosure, the apparatus may include: a reading module 410 and a shooting module 420.
The reading module 410 is configured to read a shooting time period and a shooting range of a next task to be shot in a shooting task list, where the shooting time periods and the shooting ranges corresponding to the multiple shooting tasks are recorded in the shooting task list;
and a photographing module 420 configured to photograph the photographing range when a start time of the photographing time period read by the reading module is reached.
In the above-described embodiment, the image capturing apparatus may pre-store a shooting task list including a plurality of shooting tasks, the shooting time period and the shooting range of each shooting task may be recorded in the list, and the image capturing apparatus may shoot the set shooting range based on the shooting time period of each shooting task. In this way, the camera device can shoot different ranges in different time periods, which improves the utilization rate of the camera device, allows one device to serve multiple purposes, provides convenience for users, and improves the user experience.
As shown in fig. 5, fig. 5 is a block diagram of another image capturing apparatus according to an exemplary embodiment of the present disclosure, and on the basis of the foregoing embodiment shown in fig. 4, the capturing module 420 may include: a photographing sub-module 421.
The shooting sub-module 421 is configured to shoot the shooting range according to the shooting angle corresponding to the shooting range read by the reading module 410, where the plurality of different shooting ranges correspond to the plurality of different shooting angles in a one-to-one manner.
In the above embodiment, the image capturing range stored in the image capturing apparatus may also be an image capturing angle, and the image capturing range is determined based on the image capturing angle, so that the image capturing range can more accurately meet the requirements of the user.
As shown in fig. 6, fig. 6 is a block diagram of another image capturing apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 4, and the apparatus may further include: a first instruction receiving module 430, a first extraction module 440 and a first storage module 450.
The first instruction receiving module 430 is configured to receive a first user instruction, where the first user instruction carries a shooting time period and a corresponding shooting range;
a first extraction module 440 configured to extract a photographing time period and a photographing range in the first user instruction received by the first instruction receiving module 430;
the first storage module 450 is configured to store the shooting time period and the shooting range extracted by the first extraction module 440 as one task to be shot in the shooting task list.
In the above embodiment, the image capturing apparatus may receive an instruction carrying a shooting time period and a shooting range input by a user, and store the instruction as a task to be shot in the shooting task list. The mode for setting the shooting task list is simple and easy to realize.
As shown in fig. 7, fig. 7 is a block diagram of another image capturing apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 4, and the apparatus may further include: a second instruction receiving module 460, a second extracting module 470, a shooting range acquiring module 480, and a second storing module 490.
The second instruction receiving module 460 is configured to receive a second user instruction, where the second user instruction carries a shooting time period;
a second extraction module 470 configured to extract the photographing time period in the second user instruction received by the second instruction receiving module 460;
a shooting range obtaining module 480 configured to obtain a current shooting range, where the current shooting range is a shooting range adjusted by a user;
a second storing module 490 configured to store the shooting time period extracted by the second extracting module 470 and the shooting range acquired by the shooting range acquiring module 480 as one task to be shot in the shooting task list.
In the above embodiment, the camera device may further receive an instruction carrying a shooting time period from the user, acquire a shooting range adjusted by the user in real time, and store the shooting time period and the shooting range as the task to be shot in the shooting task list. In the mode, the shooting range is adjusted in real time by the user, so that the shooting range can better meet the requirements of the user.
As shown in fig. 8, fig. 8 is a block diagram of another image capturing apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 4, and the apparatus may further include: an analysis module 4100, an analysis result sending module 4110, and an alarm module 4120.
The analysis module 4100 is configured to analyze the shooting data and determine whether a potential safety hazard exists;
an analysis result sending module 4110 configured to send an analysis result of the potential safety hazard to the intelligent device if the analysis module 4100 determines that the potential safety hazard exists; or,
an alarm module 4120 configured to output alarm information if the analysis module 4100 determines that a potential safety hazard exists.
In the above embodiment, the camera device may further analyze the shot data and, when the analysis indicates that a potential safety hazard exists, notify the intelligent device of the result or send an alarm, so as to prompt the user to handle the potential safety hazard event in time and avoid unnecessary loss.
As shown in fig. 9, fig. 9 is a block diagram of another image capturing apparatus according to an exemplary embodiment of the present disclosure; on the basis of the foregoing embodiment shown in fig. 8, the analysis module 4100 may include: a recognition sub-module 4101, a matching sub-module 4102 and a first risk determination sub-module 4103.
Wherein the recognition sub-module 4101 is configured to recognize a face image within a photographing range;
a matching sub-module 4102 configured to match the face image identified by the identifying sub-module 4101 with a preset face image of a legal user, so as to obtain a similarity;
a first potential risk determining submodule 4103 configured to determine that a potential safety risk exists if the similarity obtained by the matching submodule 4102 is lower than a set threshold.
In the above embodiment, the camera device can determine whether a potential safety hazard exists by identifying whether a captured face image is the face image of a legal user. This mode can effectively detect whether a stranger has intruded into the shooting range, thereby achieving a security monitoring effect.
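The similarity-threshold check performed by sub-modules 4102 and 4103 can be illustrated with a minimal sketch. Cosine similarity over face-feature vectors and the 0.8 default threshold are assumptions made here for illustration; the disclosure does not specify a similarity measure or threshold value.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def has_potential_hazard(face, legal_faces, threshold=0.8):
    """A potential safety hazard exists if the captured face matches no
    registered legal user with similarity at or above the threshold."""
    if not legal_faces:
        return True
    best = max(cosine_similarity(face, legal) for legal in legal_faces)
    return best < threshold
```

For example, a face vector identical to a registered user's scores 1.0 and is not flagged, while an orthogonal vector scores 0.0 and is flagged as a hazard.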
As shown in fig. 10, fig. 10 is a block diagram of another image capturing apparatus according to an exemplary embodiment of the present disclosure. On the basis of the foregoing embodiment shown in fig. 8, the analysis module 4100 may include: an acquisition sub-module 4104 and a second risk determination sub-module 4105.
Wherein the acquisition sub-module 4104 is configured to acquire motion information of a child or pet within the shooting range;
a second risk determination sub-module 4105 configured to determine whether the child or pet is exposed to a safety risk according to the motion information acquired by the acquisition sub-module 4104 and the surrounding environment.
In the above embodiment, whether a potential safety hazard exists can be determined by analyzing the motion information of the child or pet together with the surrounding environment. This mode can effectively monitor the safety of children and pets.
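One simple way to combine motion information with the surrounding environment, as sub-module 4105 does, is a rule-based check. The zone labels, the 600-second idle limit, and the function names below are all hypothetical; the disclosure does not prescribe specific rules.

```python
DANGER_ZONES = {"balcony", "kitchen", "stairs"}  # illustrative labels, not from the disclosure

def assess_child_pet_risk(track, zone):
    """Combine motion information with the environment: flag movement into a
    danger zone, or prolonged inactivity that may indicate distress."""
    if track["moving"] and zone in DANGER_ZONES:
        return "entering_danger_zone"
    if not track["moving"] and track["idle_seconds"] > 600:
        return "prolonged_inactivity"
    return None  # no safety risk detected
```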
In the above embodiment, the intelligent device includes an intelligent terminal, an intelligent appliance, a wearable device, or a community security center.
As shown in fig. 11, fig. 11 is a block diagram of another image capturing apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 4, and further includes: a report generation module 4130 and a report transmission module 4140.
The report generation module 4130 is configured to generate a security report based on the shooting data and the analysis result at preset intervals;
a report sending module 4140 configured to send the security report generated by the report generation module 4130 to the intelligent terminal or the wearable device.
In the above embodiment, a security report is generated and sent to the user's intelligent terminal or wearable device regardless of whether the analysis result indicates a potential safety hazard. When the user is on a business trip, traveling, or away from home for a long time, this mode lets the user learn the security situation at home in time, because the intelligent terminal or wearable device is usually carried by the user. If no potential safety hazard has occurred at home, the user can attend to work or other matters with peace of mind; if a potential safety hazard has occurred, the user can handle it in time to avoid unnecessary loss.
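The periodic report described above can be outlined as a simple summary structure. The field names and the status labels are illustrative assumptions; the disclosure only requires that a report be produced at preset intervals from the shooting data and analysis results.

```python
def build_security_report(captures, hazards, period):
    """Summarize one reporting interval; a report is produced whether or not
    any hazard was detected during the period."""
    return {
        "period": period,                 # e.g. a date range string
        "captures": len(captures),        # amount of shooting data covered
        "hazards": len(hazards),          # analysis results flagged as hazards
        "status": "hazard_detected" if hazards else "all_clear",
    }
```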
The embodiments of the camera device shown in fig. 4 to 11 can be applied to a terminal.
The implementation of the functions and actions of each unit in the above apparatus is described in detail in the implementation of the corresponding steps of the above method, and is not repeated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Correspondingly, the disclosure also provides a terminal, which comprises a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
reading the shooting time period and the shooting range of the next task to be shot in a shooting task list, wherein the shooting time periods and the corresponding shooting ranges of a plurality of shooting tasks are recorded in the shooting task list;
and when the starting time of the shooting time period is reached, shooting the shooting range.
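The two processor steps above can be sketched as follows. The dictionary task representation and the function names are assumptions made for illustration; the claim only requires reading the next task's time period and range and shooting when the period's start time is reached.

```python
def next_task_to_shoot(task_list, now):
    """Read the next task to be shot: the recorded task whose shooting
    period ends latest in the future and starts earliest."""
    pending = [t for t in task_list if t["end"] > now]
    return min(pending, key=lambda t: t["start"]) if pending else None

def should_shoot(task, now):
    """Shoot the task's range once the start time of its period is reached."""
    return task is not None and task["start"] <= now < task["end"]
```

For instance, with tasks for a "window" range (period 5-8) and a "door" range (period 10-20), the window task is read first; once its period has passed, the door task becomes the next task to be shot.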
As shown in fig. 12, fig. 12 is a schematic structural diagram of an apparatus 1200 for imaging according to an exemplary embodiment of the present disclosure. For example, the apparatus 1200 may be a mobile phone with routing capability, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to fig. 12, the apparatus 1200 may include one or more of the following components: a processing component 1202, a memory 1204, a power component 1206, a multimedia component 1208, an audio component 1210, an input/output (I/O) interface 1212, a sensor component 1214, and a communication component 1216.
The processing component 1202 generally controls the overall operation of the apparatus 1200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1202 may include one or more processors 1220 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1202 may include one or more modules that facilitate interaction between the processing component 1202 and other components. For example, the processing component 1202 may include a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
The memory 1204 is configured to store various types of data to support operation at the apparatus 1200. Examples of such data include instructions for any application or method operating on the device 1200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1204 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 1206 provides power to the various components of the apparatus 1200. The power component 1206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 1200.
The multimedia component 1208 includes a screen that provides an output interface between the apparatus 1200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1208 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 1200 is in an operation mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
Audio component 1210 is configured to output and/or input audio signals. For example, audio component 1210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1204 or transmitted via the communication component 1216. In some embodiments, audio assembly 1210 further includes a speaker for outputting audio signals.
The I/O interface 1212 provides an interface between the processing component 1202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1214 includes one or more sensors for providing various aspects of state assessment for the apparatus 1200. For example, the sensor assembly 1214 may detect the open/closed state of the apparatus 1200 and the relative positioning of components, such as the display and keypad of the apparatus 1200. The sensor assembly 1214 may also detect a change in the position of the apparatus 1200 or of a component of the apparatus 1200, the presence or absence of user contact with the apparatus 1200, the orientation or acceleration/deceleration of the apparatus 1200, and a change in the temperature of the apparatus 1200. The sensor assembly 1214 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a microwave sensor, or a temperature sensor.
The communications component 1216 is configured to facilitate communications between the apparatus 1200 and other devices in a wired or wireless manner. The apparatus 1200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1216 receives the broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 1204 comprising instructions, executable by processor 1220 of apparatus 1200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.