CN120639928A - Event reminder method, electronic device and system - Google Patents

Event reminder method, electronic device and system

Info

Publication number
CN120639928A
Authority
CN
China
Prior art keywords
real-time monitoring
screen
event
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410283815.9A
Other languages
Chinese (zh)
Inventor
李澄
孙略
杨智勇
李小花
张旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202410283815.9A
Priority to PCT/CN2025/071238 (WO2025189946A1)
Publication of CN120639928A
Legal status: Pending

Abstract

The application provides an event reminding method, which includes: obtaining a real-time monitoring picture of a first object; and, when it is determined that a first event occurs to the first object, displaying the real-time monitoring picture of the first object on the screen of a first device, where the first device is the device the user is looking at, a device in a using state, or the device nearest to the user. Through the method, linkage between the camera and home smart devices can be realized: a user can care for a specific object through a home smart device while still using that device normally for activities such as viewing.

Description

Event reminding method, electronic equipment and system
Technical Field
The present application relates to the field of event reminding, and more particularly, to a method, an electronic device, and a system for event reminding.
Background
With the development of monitoring-camera technology, more and more users install monitoring cameras at home to care for specific objects, for example children or pets. However, constrained by factors such as the camera's size and cost, the intelligent service experience the camera provides still falls far short of the convenience and usability users expect in daily life. When a user watches a specific object in real time through a monitoring camera, the user is never truly freed from the care task, which greatly affects the user experience.
Disclosure of Invention
The application provides an event reminding method, an electronic device, and a system. Through the method, linkage between a camera and a home smart device can be realized: a user can care for a specific object through the home smart device while still using it normally for activities such as viewing. When a specific event occurs to the object, the user is reminded through the viewing screen and the corresponding real-time care picture is played. Care quality is thus improved while the user is freed from the current care task, which greatly reduces the anxiety of a user who uses an electronic device while caring.
In a first aspect, an event reminding method is provided, which includes: obtaining a real-time monitoring picture of a first object; and, when it is determined that a first event occurs to the first object, displaying the real-time monitoring picture of the first object on a screen of a first device, where the first device is the device the user is currently looking at, a device currently in use, or the device nearest to the user.
Optionally, the first device is any one of a smart television, a tablet, a smart phone, and a central control screen.
In one example, the first object is a "child," and the first event is an event triggered by the child's activity, such as the child kicking off a quilt or moving to the edge of the bed.
In yet another example, the first object is a "pet," and the first event is an event triggered by the pet entering a managed area, such as the pet entering a restricted area.
In yet another example, the first object is a "kitchen," and the first event is an event triggered by a change in cooking progress, such as water boiling or porridge boiling over.
According to the embodiment of the application, linkage between the camera and the home smart device can be realized. A user can care for a specific object through the home smart device while continuing to use it normally for activities such as viewing. When a specific event occurs to the object, the user is reminded through the viewing screen and the corresponding real-time care picture is played. Care quality is thus improved while the user is freed from the current care task: the user can devote more energy to other activities (for example, watching television or attending a video conference) while caring, and still receives timely reminders of care-related events, thereby greatly reducing the user's anxiety.
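The device-selection rule in this aspect (device being looked at, else a device in use, else the nearest device) can be pictured as a simple priority check. The sketch below is illustrative only; the `Device` fields and the `select_first_device` helper are assumptions, not part of the application:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    user_is_looking: bool   # e.g. inferred from gaze or attention detection
    in_use: bool            # e.g. screen on with a foreground activity
    distance_m: float       # estimated distance to the user

def select_first_device(devices):
    """Pick the device to show the care picture on, in the priority
    order the first aspect describes: looked-at, then in use, then nearest."""
    looking = [d for d in devices if d.user_is_looking]
    if looking:
        return looking[0]
    in_use = [d for d in devices if d.in_use]
    if in_use:
        return in_use[0]
    return min(devices, key=lambda d: d.distance_m)
```

In practice each signal (gaze, usage state, distance) would come from the devices themselves or from the home hub; here they are plain fields for clarity.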
With reference to the first aspect, in a possible implementation, before it is determined that the first event occurs to the first object, the method further includes displaying the real-time monitoring picture of the first object in a small window on the screen of the first device. Prominently displaying the real-time monitoring picture on the screen of the first device then includes: switching its display form from a small window to a large window or full screen; or switching its display form from a statically displayed small window to a highlighted or flashing small window.
In the embodiment of the application, when a user cares for a specific object through the home smart device, the device can still be used normally for activities such as viewing. When no specific event has occurred, the real-time care picture of the object can be shown in a small window on the viewing screen; when a specific event occurs, the user is reminded by prominently displaying the real-time care picture on the viewing screen. Care quality is thus improved while the user is freed from the current care task, can devote more energy to other activities, and still receives timely reminders of care-related events, greatly reducing the user's anxiety.
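The form switching described in this implementation amounts to a small display state machine: a quiet small window that becomes larger, highlighted, or flashing when the event fires. The sketch below is illustrative; the form and style names are assumptions:

```python
# Possible display forms and styles of the care picture on the viewing screen.
SMALL, LARGE, FULLSCREEN = "small_window", "large_window", "full_screen"
STATIC, HIGHLIGHT, FLASH = "static", "highlight", "flash"

class CareWindow:
    """Tracks how the real-time care picture is shown; on an event it
    switches to a more prominent form, as the implementation describes."""
    def __init__(self):
        self.form = SMALL     # small window before any event
        self.style = STATIC   # statically displayed

    def on_event(self, mode="enlarge"):
        if mode == "enlarge":      # small window -> large window -> full screen
            self.form = FULLSCREEN if self.form == LARGE else LARGE
        elif mode == "highlight":  # static small window -> highlighted small window
            self.style = HIGHLIGHT
        elif mode == "flash":      # static small window -> flashing small window
            self.style = FLASH
```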
With reference to the first aspect, in a possible implementation, before the first event occurs to the first object, the method further includes acquiring the real-time monitoring picture of the first object in the background of the first device. Displaying the real-time monitoring picture on the screen of the first device then includes popping up the real-time monitoring picture on the screen, in the form of a small window, a large window, or full screen. Before the pop-up, the real-time monitoring picture is either not displayed on the screen of the first device or is displayed on it in a hidden manner.
The implementation can also be described as:
Before it is determined that the first event occurs, the method further includes acquiring the real-time monitoring picture of the first object in the background of the first device. Prominently displaying the real-time monitoring picture on the screen of the first device includes switching its display state from not displayed to popped up on the screen, or from hidden display to popped up on the screen, where the popped-up picture takes the form of a small window, a large window, or full screen.
Hidden display may be understood as hiding the real-time monitoring picture of the first object in the background of the first device.
In the embodiment of the application, when a user cares for a specific object through the home smart device, the device can still be used normally for activities such as viewing. When no specific event has occurred, the real-time care picture of the object can be hidden in the background of the device; when a specific event occurs, the user is reminded by popping up the real-time care picture on the viewing screen. Care quality is thus improved while the user is freed from the current care task, can devote more energy to other activities, and still receives timely reminders of care-related events, greatly reducing the user's anxiety.
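The acquire-in-background-then-pop-up behavior can be sketched as follows. The class and its fields are hypothetical; in a real device the frames would arrive from the camera's video stream:

```python
class BackgroundCareFeed:
    """Acquires the care picture in the background and only pops it up
    on the viewing screen when an event occurs, as this implementation
    describes (hidden or not displayed before that)."""
    def __init__(self, hidden=True):
        self.visible = False
        self.hidden = hidden    # hidden display vs. not displayed at all
        self.frames = []        # frames keep arriving while off screen

    def ingest(self, frame):
        self.frames.append(frame)   # background acquisition continues

    def on_event(self, form="small_window"):
        self.visible = True         # pop-up display on the viewing screen
        self.hidden = False
        self.form = form            # small window / large window / full screen
        return self.frames[-1] if self.frames else None
```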
With reference to the first aspect, in a possible implementation, the method further includes, after the real-time monitoring picture of the first object is popped up on the screen of the first device, further enlarging its display on the screen, either automatically or in response to an enlarging operation by the user.
The enlarged display may be, for example, switching the display form from a small window to a large window or full screen, or from a large window to full screen.
In the embodiment of the application, when a specific event occurs to a specific object, the user can be reminded by popping up the real-time care picture of the object on the viewing screen, and the popped-up picture can be further enlarged automatically or in response to a user operation. This further increases the probability that the user notices the occurrence of the specific event and further relieves the user's anxiety.
With reference to the first aspect, in a possible implementation, the method further includes determining that the first event occurs to the first object by analyzing the real-time monitoring picture of the first object.
In one example, when the first object is a "child," the real-time monitoring picture of the first object is analyzed according to a smart child-care algorithm to determine whether the first event has occurred.
In another example, when the first object is a "pet," the real-time monitoring picture of the first object is analyzed according to a smart pet-care algorithm to determine whether the first event has occurred.
In the embodiment of the application, the home smart device (for example, a smart television) analyzes the real-time monitoring picture and determines from the analysis result whether an event requiring a reminder has occurred to the object being cared for. Compared with the camera device, the home smart device has stronger processing capability, and by deploying multiple different algorithms it can provide intelligent care reminders for different scenarios, giving the scheme of the application a wider range of application.
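The per-scene analysis can be pictured as dispatching monitoring frames to scene-specific detectors. The detectors below are toy stand-ins (a real child-care or pet-care algorithm would run computer-vision models on the frames); every name and frame field is an assumption for illustration:

```python
# Toy per-scene detectors: each inspects a frame's (assumed) annotations
# and returns the event requiring a reminder, or None.
def child_care(frame):
    return "kicked_quilt" if frame.get("quilt_off") else None

def pet_care(frame):
    return "entered_restricted_area" if frame.get("in_restricted_area") else None

def kitchen_care(frame):
    return "porridge_overflow" if frame.get("overflow") else None

ALGORITHMS = {"child": child_care, "pet": pet_care, "kitchen": kitchen_care}

def analyze(object_type, frame):
    """Route the monitoring frame to the algorithm for its scene and
    report any event that requires a reminder."""
    detector = ALGORITHMS.get(object_type)
    return detector(frame) if detector else None
```

Registering detectors in a table like `ALGORITHMS` is one way to reflect the point above: adding a scenario means adding one entry, without touching the reminder pipeline.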
With reference to the first aspect, in a possible implementation, the method further includes: when the first device is in a screen-off state and it is determined that the first event occurs to the first object, sending first indication information to a second device, where the first indication information instructs the second device to remind the user that the first event has occurred to the first object.
Optionally, the second device may be a device the user is currently using, such as a mobile phone, tablet, or notebook computer.
In the embodiment of the application, when the home smart device is in the screen-off state, it can still analyze the real-time monitoring picture of the object being cared for and determine from the result whether an event requiring a reminder has occurred. When such an event occurs, the home smart device notifies another device currently being used by the user, and that device sends the event reminder to the user, further increasing the probability that the reminder reaches the user.
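The screen-off forwarding can be sketched as a single branch: display locally when the screen is on, otherwise send the first indication information to the second device. The function and message fields below are illustrative assumptions:

```python
def remind(first_device_screen_on, event, second_device_notify):
    """If the viewing screen is off, forward a first-indication message
    to the user's second device instead of displaying on screen."""
    if first_device_screen_on:
        # Normal path: prominently display the care picture locally.
        return {"action": "display_on_screen", "event": event}
    # Screen-off path: the first indication information instructs the
    # second device to remind the user that the event occurred.
    second_device_notify({"indication": "first", "event": event})
    return {"action": "forwarded", "event": event}
```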
With reference to the first aspect, in a possible implementation, the method further includes receiving second indication information indicating that the first event has occurred to the first object, and displaying the real-time monitoring picture of the first object on the screen of the first device based on the second indication information.
In the embodiment of the application, the home smart device does not need to analyze the real-time monitoring picture of the object being cared for itself: upon receiving an indication message that an event requiring a reminder has occurred, it can remind the user by prominently displaying the real-time care picture on its screen, thereby saving the computing overhead of the home smart device.
With reference to the first aspect, in a possible implementation, obtaining the real-time monitoring picture of the first object includes receiving, through the Open Network Video Interface Forum (ONVIF) protocol or a custom protocol, the real-time monitoring picture of the first object transmitted by a first camera, where the first camera is used to collect the real-time monitoring picture of the first object.
In the embodiment of the application, the video stream can be transmitted between the camera and the home smart device (for example, a smart television) through the ONVIF protocol. The video stream thus flows directly between the camera and the home smart device without passing through the cloud, which better protects the user's privacy.
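The direct camera-to-device path can be sketched with a toy in-process stand-in for the camera. In a real ONVIF flow the device would call the camera's Media service (GetStreamUri) and then pull the RTSP stream over the local network; here both sides are simulated, and every class, method, and URI is an assumption for illustration:

```python
class Camera:
    """Toy stand-in for an ONVIF camera: answers a stream-URI request
    and then serves frames directly to the requester (no cloud hop)."""
    def __init__(self, host):
        self.host = host
        self._frames = iter(())

    def load(self, frames):
        self._frames = iter(frames)

    def get_stream_uri(self):
        # Real ONVIF: the Media service's GetStreamUri returns an RTSP URL.
        return f"rtsp://{self.host}/stream1"

    def pull(self):
        # Real flow: the device reads frames from the RTSP stream.
        return next(self._frames, None)

def connect_direct(camera):
    """Device-side sketch: fetch the stream URI, then pull frames
    peer-to-peer from the camera on the home network."""
    uri = camera.get_stream_uri()
    frames = []
    while (f := camera.pull()) is not None:
        frames.append(f)
    return uri, frames
```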
With reference to the first aspect, in a possible implementation, the method further includes: when it is determined that the first event occurs to the first object, displaying a first text reminder on the screen of the first device, the first text reminder reminding the user that the first event has occurred to the first object; and/or sending a first voice reminder through the first device, the first voice reminder reminding the user that the first event has occurred to the first object.
In some embodiments, when it is determined that the first event occurs to the first object, a text reminder or a voice reminder is first sent through the first device, and the real-time monitoring picture of the first object is then prominently displayed on the screen of the first device in response to the user's operation of viewing the real-time monitoring picture.
In the embodiment of the application, when it is determined that an event requiring a reminder has occurred to the object being cared for, the user can be reminded by prominently displaying the real-time care picture on the viewing screen, assisted by text reminders, voice reminders, and the like. This further increases the probability that the reminder reaches the user and further reduces the anxiety of a user who uses an electronic device while caring.
With reference to the first aspect, in a possible implementation, while the real-time monitoring picture of the first object is displayed on the screen of the first device, the method further includes displaying one or more shortcut controls on the screen of the first device, the one or more shortcut controls allowing the user to respond remotely to the first event.
In the embodiment of the application, when it is determined that an event requiring a reminder has occurred and the user is reminded by prominently displaying the real-time care picture on the viewing screen, one or more controls for responding remotely to the event can be displayed on the viewing screen at the same time. The user can thus respond quickly upon learning that the specific event has occurred, reducing the number of trips between the care area and the user's current position and further improving the user's psychological comfort.
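The shortcut controls can be pictured as an event-to-actions mapping rendered alongside the care picture. The control names below are hypothetical examples, not taken from the application:

```python
# Hypothetical mapping from event type to remote-response shortcuts.
SHORTCUTS = {
    "child_kicked_quilt": ["play_lullaby", "call_caregiver"],
    "pet_in_restricted_area": ["play_warning_tone", "close_smart_door"],
    "porridge_overflow": ["turn_off_stove"],
}

def build_reminder_ui(event):
    """Compose the prominent care picture together with the shortcut
    controls that let the user respond remotely to the event."""
    return {
        "picture": "live_care_feed",
        "controls": SHORTCUTS.get(event, []),
    }
```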
With reference to the first aspect, in a possible implementation, the method further includes acquiring first sensing data of a first area in real time, and, when it is determined from the first sensing data that a second event occurs in the first area, reminding the user through the first device that the second event has occurred in the first area.
In one example, the first area is a "bathroom" and the second event is "someone falls."
In the embodiment of the application, the home smart device can obtain not only the real-time monitoring picture collected by the camera but also sensing data from sensors deployed in the home scenario, determine from the sensing data whether an event requiring a reminder has occurred, and remind the user when it has. This further widens the range of application of the scheme of the application.
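A sensing-data check of this kind can be sketched as a simple threshold heuristic, for instance over accelerometer magnitudes from a wearable or a radar-derived motion signal. Both the heuristic and its thresholds are assumptions for illustration, not the application's method:

```python
def detect_fall(samples, impact_g=2.5, still_window=3):
    """Toy fall heuristic over motion magnitudes (in g): a sharp impact
    followed by near-stillness for a few samples suggests a fall."""
    for i, a in enumerate(samples):
        if a >= impact_g:                       # sharp impact spike
            after = samples[i + 1 : i + 1 + still_window]
            # near-stillness after the impact (close to 1 g at rest)
            if len(after) == still_window and all(x < 1.2 for x in after):
                return True
    return False
```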
In a second aspect, an event reminding method is provided, which includes: collecting a real-time monitoring picture of a first object; determining a first device, where the first device is the device the user is currently looking at, a device currently in use, or the device nearest to the user; and transmitting the real-time monitoring picture of the first object to the first device through the Open Network Video Interface Forum (ONVIF) protocol or a custom protocol, so that the first device prominently displays the real-time monitoring picture on its screen when it is determined that the first event occurs to the first object.
Optionally, the first device is any one of a smart television, a tablet, a smart phone, and a central control screen.
In one example, the first object is a "child," and the first event is an event triggered by the child's activity, such as the child kicking off a quilt or moving to the edge of the bed.
In yet another example, the first object is a "pet," and the first event is an event triggered by the pet entering a managed area, such as the pet entering a restricted area.
In yet another example, the first object is a "kitchen," and the first event is an event triggered by a change in cooking progress, such as water boiling or porridge boiling over.
According to the embodiment of the application, linkage between the camera and the home smart device can be realized. A user can care for a specific object through the home smart device while continuing to use it normally for activities such as viewing. When a specific event occurs to the object, the user is reminded through the viewing screen and the corresponding real-time care picture is played. Care quality is thus improved while the user is freed from the current care task: the user can devote more energy to other activities (for example, watching television or attending a video conference) while caring, and still receives timely reminders of care-related events, thereby greatly reducing the user's anxiety.
In a third aspect, an electronic device is provided, the electronic device including a processor configured to obtain a real-time monitoring picture of a first object, and a display configured to display the real-time monitoring picture of the first object on the screen of the electronic device when it is determined that a first event occurs to the first object, where the electronic device is the device the user is currently looking at, a device currently in use, or the device nearest to the user.
Optionally, the electronic device is any one of a smart television, a tablet, a smart phone and a central control screen.
In one example, the first object is a "child," and the first event is an event triggered by the child's activity, such as the child kicking off a quilt or moving to the edge of the bed.
In yet another example, the first object is a "pet," and the first event is an event triggered by the pet entering a managed area, such as the pet entering a restricted area.
In yet another example, the first object is a "kitchen," and the first event is an event triggered by a change in cooking progress, such as water boiling or porridge boiling over.
According to the embodiment of the application, linkage between the camera and the home smart device can be realized. A user can care for a specific object through the home smart device while continuing to use it normally for activities such as viewing. When a specific event occurs to the object, the user is reminded through the viewing screen and the corresponding real-time care picture is played. Care quality is thus improved while the user is freed from the current care task: the user can devote more energy to other activities (for example, watching television or attending a video conference) while caring, and still receives timely reminders of care-related events, thereby greatly reducing the user's anxiety.
With reference to the third aspect, in a possible implementation, the display is further configured to display the real-time monitoring picture of the first object in a small window on the screen of the electronic device before it is determined that the first event occurs to the first object, and is further specifically configured to, when it is determined that the first event occurs: switch the display form of the real-time monitoring picture from a small window to a large window or full screen; or switch its display form from a statically displayed small window to a highlighted or flashing small window.
In the embodiment of the application, when a user cares for a specific object through the home smart device, the device can still be used normally for activities such as viewing. When no specific event has occurred, the real-time care picture of the object can be shown in a small window on the viewing screen; when a specific event occurs, the user is reminded by prominently displaying the real-time care picture on the viewing screen. Care quality is thus improved while the user is freed from the current care task, can devote more energy to other activities, and still receives timely reminders of care-related events, greatly reducing the user's anxiety.
With reference to the third aspect, in a possible implementation, before it is determined that the first event occurs to the first object, the real-time monitoring picture of the first object is hidden in the background of the electronic device, and the display is further specifically configured to pop up the real-time monitoring picture on the screen of the electronic device when it is determined that the first event occurs, in the form of a small window, a large window, or full screen. Before the pop-up, the real-time monitoring picture is either not displayed on the screen or is displayed on it in a hidden manner.
The implementation can also be described as:
Before the first event occurs to the first object, the real-time monitoring picture of the first object is hidden in the background of the electronic device, and the display is further specifically configured to switch the display state of the real-time monitoring picture from not displayed to popped up on the screen, or from hidden display to popped up on the screen, where the popped-up picture takes the form of a small window, a large window, or full screen.
Hidden display may be understood as hiding the real-time monitoring picture of the first object in the background of the electronic device.
In the embodiment of the application, when a user cares for a specific object through the home smart device, the device can still be used normally for activities such as viewing. When no specific event has occurred, the real-time care picture of the object can be hidden in the background of the device; when a specific event occurs, the user is reminded by popping up the real-time care picture on the viewing screen. Care quality is thus improved while the user is freed from the current care task, can devote more energy to other activities, and still receives timely reminders of care-related events, greatly reducing the user's anxiety.
With reference to the third aspect, in a possible implementation, the display is further configured to, after the real-time monitoring picture of the first object is popped up on the screen of the electronic device, further enlarge its display on the screen, either automatically or in response to an enlarging operation by the user.
In the embodiment of the application, the enlarged display may be, for example, switching the display form from a small window to a large window or full screen, or from a large window to full screen.
In the embodiment of the application, when a specific event occurs to a specific object, the user can be reminded by popping up the real-time care picture of the object on the viewing screen, and the popped-up picture can be further enlarged automatically or in response to a user operation. This further increases the probability that the user notices the occurrence of the specific event and further relieves the user's anxiety.
With reference to the third aspect, in a possible implementation, the processor is further configured to determine that the first event occurs to the first object by analyzing the real-time monitoring picture of the first object.
In one example, when the first object is a "child," the analysis module analyzes the real-time monitoring picture of the first object according to a smart child-care algorithm to determine whether the first event has occurred.
In another example, when the first object is a "pet," the analysis module analyzes the real-time monitoring picture of the first object according to a smart pet-care algorithm to determine whether the first event has occurred.
In the embodiment of the application, the home smart device (for example, a smart television) analyzes the real-time monitoring picture and determines from the analysis result whether an event requiring a reminder has occurred to the object being cared for. Compared with the camera device, the home smart device has stronger processing capability, and by deploying multiple different algorithms it can provide intelligent care reminders for different scenarios, giving the scheme of the application a wider range of application.
With reference to the third aspect, in a possible implementation, the electronic device further includes a transceiver configured to send first indication information to a second device when the electronic device is in a screen-off state and it is determined that the first event occurs to the first object, where the first indication information instructs the second device to remind the user that the first event has occurred to the first object.
Alternatively, the second device may be a mobile phone, tablet, notebook, etc. device that the user is using.
The transceiver may be any device having both a receiving function and a transmitting function, or may be configured by a plurality of devices, some of which have a receiving function and some of which have a transmitting function.
In the embodiment of the application, when the home intelligent equipment is in the screen-off state, the home intelligent equipment can analyze and process the real-time monitoring picture of the nursing object, determine whether the nursing object has an event needing to be reminded according to the analysis and processing result, and inform other equipment currently used by a user when the nursing object has the event needing to be reminded, and send event reminding to the user by the other equipment, so that the probability of reminding the event to the user can be further improved.
With reference to the third aspect, in a possible implementation manner, the electronic device further includes a transceiver configured to receive second indication information, where the second indication information is configured to indicate that the first event occurs to the first object.
In the embodiment of the application, the home intelligent device can send out the prompt to the user in a mode of obviously displaying the real-time nursing picture of the nursing object on the screen when receiving the indication message for indicating the event needing to be reminded of the nursing object without analyzing and processing the real-time monitoring picture of the nursing object, so that the calculation cost of the home intelligent device can be saved.
With reference to the third aspect, in a possible implementation manner, the processor is specifically configured to obtain, through an open network video interface forum on vif protocol or a custom protocol, a real-time monitoring picture of the first object transmitted by a first camera, where the first camera is configured to collect the real-time monitoring picture of the first object.
In the embodiment of the application, the video stream can be transmitted between the camera and the household intelligent equipment (for example, the intelligent television) through the ONVIF protocol, so that the video stream can be directly transmitted between the camera and the household intelligent equipment, and the privacy of a user can be better protected without passing through a cloud.
With reference to the third aspect, in a possible implementation manner, the display is further configured to send a first text alert on a screen of the electronic device when the first object is determined to have the first event, where the first text alert is used to alert a user that the first object has the first event, and the electronic device further includes a sounder configured to send a first voice alert when the first object is determined to have the first event, where the first voice alert is used to alert the user that the first object has the first event.
In the embodiment of the application, when the event needing to be reminded of the nursing object is determined, the reminding can be sent to the user in a mode of obviously displaying the real-time nursing picture of the nursing object on the viewing screen, and the reminding can be sent to the user in a mode of assisting in text reminding, voice reminding and the like, so that the probability of reminding the event to the user can be further improved, and the anxiety of the user in using the electronic equipment while nursing can be further reduced.
With reference to the third aspect, in a possible implementation manner, the display is further configured to display one or more shortcut controls on a screen of the electronic device while the real-time monitoring screen of the first object is displayed on the screen of the electronic device, where the one or more shortcut controls are used for a user to remotely respond to the first event.
In the embodiment of the application, when the event needing to be reminded of the nursing object is determined, and the prompt is sent to the user in a mode of obviously displaying the real-time nursing picture of the nursing object on the viewing screen, one or more controls for remotely responding to the event needing to be reminded can be simultaneously displayed on the viewing screen, so that the user can quickly respond after knowing that the specific event of the nursing object occurs, the round-trip frequency of the user between the nursing area and the current position can be reduced, and the psychological comfort of the user is further improved.
With reference to the third aspect, in a possible implementation manner, the processor is further configured to acquire first sensing data of a first area in real time, and the display and/or the sounder is further configured to prompt a user that a second event occurs in the first area when it is determined that the second event occurs in the first area according to the first sensing data.
In one example, the first area is a "bathroom" and the second event is a "someone falls.
In the embodiment of the application, the household intelligent equipment can not only acquire the real-time monitoring picture acquired by the camera, but also acquire the sensing data of the sensor arranged in the household scene, and determine whether an event needing to be reminded occurs according to the sensing data, and when the event needing to be reminded occurs, the household intelligent equipment sends out a reminder to the user, so that the application range of the electronic equipment provided by the application can be further enlarged.
In a fourth aspect, a camera device is provided, which includes an acquisition module configured to acquire a real-time monitoring picture of a first object, a determination module configured to determine that the first device is a device currently being watched by a user, or the first device is a device currently being used, or the first device is a device closest to the user, and a transmission module configured to transmit the real-time monitoring picture of the first object to the first device through an Open Network Video Interface Forum (ONVIF) protocol or a custom protocol, so that the first device significantly displays the real-time monitoring picture of the first object on a screen of the first device when determining that the first event occurs to the first object.
Optionally, the first device is any one of a smart television, a tablet, a smart phone, and a central control screen.
In one example, the first object is a "child," and the first event is an activity-triggered event that occurs with the child, such as a child kicking off a quilt, the child's position moving to the bedside, or the like.
In yet another example, the first object is a "pet" and the first event is an event triggered by the pet entering the administration area, such as the pet entering the exclusion area.
In yet another example, the first object is a "kitchen" and the first event is an event triggered by a change in cooking progress, such as a water boiling, a porridge overflow, etc.
According to the embodiment of the application, the linkage between the camera and the home intelligent device can be realized, a user can realize the nursing of a specific object through the home intelligent device, and the user can normally use the home intelligent device to perform activities such as watching while nursing the specific object, when a specific event occurs to the specific object, the user can be reminded through the watching screen and a corresponding real-time nursing picture is played, so that the user can be liberated from a current nursing task while the nursing quality is improved, the user can pay more energy to perform other activities (such as watching a television, starting a video conference and the like) while nursing, and the reminding of the specific event related to nursing can be timely received when performing other activities, thereby greatly reducing the anxiety of the user.
In a fifth aspect, an electronic device is provided, the electronic device comprising a memory for storing computer program code and a processor for executing the computer program code stored in the memory for implementing the method of the first aspect or any of the possible implementations of the first aspect or for implementing the method of the second aspect or any of the possible implementations of the second aspect.
In a sixth aspect, a system is provided, which comprises the electronic device in the third aspect or any one of the possible implementation manners of the third aspect, and one or more camera devices in the fourth aspect or any one of the possible implementation manners of the fourth aspect.
In a seventh aspect, a computer readable storage medium is provided, in which a computer program or instructions is stored which, when executed, implement the method of the first aspect or any one of the possible implementations of the first aspect or the method of the second aspect or any one of the possible implementations of the second aspect.
In an eighth aspect, there is provided a chip having instructions stored therein which, when run on a device, cause the chip to perform the method of or the method of any of the possible implementations of the first aspect or the second aspect.
A ninth aspect provides a computer program product having a computer program or instructions stored therein which, when executed, performs the method of the first aspect or any one of the possible implementations of the first aspect or the second aspect or performs the method of the second aspect or any one of the possible implementations of the second aspect.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a block diagram of a software architecture of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a method of monitoring visual analysis;
fig. 4 is a schematic view of an application scenario of the present application provided by the embodiment of the present application;
fig. 5 is a schematic view of an application scenario of another embodiment of the present application;
FIG. 6 is a schematic flow chart diagram of a method for event alerting provided by an embodiment of the present application;
FIG. 7 is a schematic flow chart diagram of a method of event alerting provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a system architecture corresponding to a system for event reminding according to an embodiment of the present application;
FIG. 9 is a schematic flow chart diagram of a method of event alerting provided by an embodiment of the present application;
FIG. 10 is a schematic flow chart diagram of a method of event alerting provided by an embodiment of the present application;
FIG. 11 is a schematic flow chart diagram of a method for providing yet another event reminder according to an embodiment of the present application.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application.
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. In the description of the embodiment of the present application, unless otherwise indicated, "/" means or, for example, a/B may represent a or B, and "and/or" herein is merely an association relationship describing an association object, which means that three relationships may exist, for example, a and/or B, and that three cases, i.e., a alone, a and B together, and B alone, exist. In addition, in the description of the embodiments of the present application, "plural" or "plurality" means two or more than two.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the application and the appended claims, the singular forms "a," "an," "the," and "the" are intended to include, for example, "one or more" such forms of expression, unless the context clearly indicates to the contrary. It should also be understood that in the following embodiments of the present application, "at least one", "one or more" means one, two or more than two. The term "and/or" is used to describe an associative relationship of associative objects, and indicates that three relationships may exist, for example, a and/or B may indicate that a exists alone, while a and B exist together, and B exists alone, where A, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "one embodiment," "some embodiments," "another embodiment," "other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more, but not all, embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The method provided by the embodiment of the application can be applied to electronic equipment such as mobile phones, tablet computers, wearable equipment, vehicle-mounted equipment, augmented reality (augmented reality, AR)/Virtual Reality (VR) equipment, notebook computers, ultra-mobile personal computer (UMPC), netbooks, personal digital assistants (personal DIGITAL ASSISTANT, PDA) and the like, and the embodiment of the application does not limit the specific type of the electronic equipment.
By way of example, fig. 1 shows a schematic diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an ear-piece interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a user identification (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (IMAGE SIGNAL processor, ISP), a controller, a memory, a video codec, a digital signal processor (DIGITAL SIGNAL processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-INTEGRATED CIRCUIT, I2C) interface, an integrated circuit built-in audio (inter-INTEGRATED CIRCUIT SOUND, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example, the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (WIRELESS FIDELITY, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation SATELLITE SYSTEM, GNSS), frequency modulation (frequency modulation, FM), near field communication (NEAR FIELD communication, NFC), infrared (IR), etc., applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques can include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (GENERAL PACKET radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation SATELLITE SYSTEM, GLONASS), a beidou satellite navigation system (beidou navigation SATELLITE SYSTEM, BDS), a quasi zenith satellite system (quasi-zenith SATELLITE SYSTEM, QZSS) and/or a satellite based augmentation system (SATELLITE BASED AUGMENTATION SYSTEMS, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a Liquid Crystal Display (LCD) CRYSTAL DISPLAY, an organic light-emitting diode (OLED), an active-matrix organic LIGHT EMITTING diode (AMOLED), a flexible light-emitting diode (FLED), miniled, microLed, micro-oLed, a quantum dot LIGHT EMITTING diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. Thus, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent recognition of the electronic device 100, for example, image recognition, face recognition, voice recognition, text understanding, etc., can be realized through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an App (such as a sound playing function, an image playing function, etc.) and the like required for at least one function of the operating system. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
The microphone 170C, also referred to as a "mic" or "mike", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. Touch operations on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as a time reminder, receiving information, an alarm clock, or a game) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into or removed from the SIM card interface 195 to contact or separate from the electronic device 100. The electronic device 100 may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
It should be appreciated that the phone cards in the embodiments of the present application include, but are not limited to, SIM cards, eSIM cards, universal subscriber identity modules (universal subscriber identity module, USIM), universal integrated circuit cards (universal integrated circuit card, UICC), and the like.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 2 is a software structure block diagram of the electronic device 100 according to an embodiment of the present application. The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer, an application framework layer, the Android runtime and system libraries, and a kernel layer. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100, for example, management of call states (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages, which can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give a message alert, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. Such as surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
It should be understood that the technical solutions in the embodiments of the present application can be used in Android, iOS, HarmonyOS, and other systems.
The technical solutions of the embodiments of the present application can be applied to an electronic device with a screen. As examples, they may be applied to a television, a desktop computer, or a notebook computer; a portable electronic device such as a mobile phone, a foldable-screen device, or a tablet computer; a camera or video recorder, or a camera or sensor such as a monitoring camera or an infrared sensor; other electronic devices with a screen display function; or electronic devices in a 5G network or in a future evolved public land mobile network (public land mobile network, PLMN). The main application scenario may be a reminder scenario for a specific event, for example, a specific-event reminder in a child care scenario, a specific-event reminder in a pet care scenario, a specific-event reminder in a kitchen care scenario, or a reminder that a sensor has detected a person falling in a bathroom.
With the development of monitoring camera technology, more and more users install monitoring cameras at home to care for specific objects, such as children or pets. Camera manufacturers have put forward various intelligent solutions for child and pet care scenarios; for example, when a specific event occurs, the camera sounds an alarm or pushes a reminder message to the mobile phone. However, due to factors such as the volume and cost of a monitoring camera, these camera-side intelligent solutions fall far short of the convenience and usability users expect in daily life. For example, inaccurate detection of a specific event easily causes false reminders; a relevant reminder may not be delivered to the user in time, so the user cannot handle the specific event promptly; and the user cannot see a real-time picture and therefore cannot remotely judge whether handling is needed. As a result, the user's normal activities are affected, and the user also develops strong anxiety.
That is, in the current care strategy, the monitoring picture is analyzed and processed in real time at the camera end. Limited by factors such as the volume and cost of the monitoring camera, the computing power of the camera is weak, so it can only complete the monitoring function or only support simple AI identification on the camera side.
For the problem of weak image analysis and processing capability on the camera side, fig. 3 shows, by way of example, a schematic diagram of a method of monitoring-picture analysis. As shown in fig. 3, a monitoring camera supporting the open network video interface forum (open network video interface forum, ONVIF) protocol can access a network-attached storage (network-attached storage, NAS) device. The real-time video stream corresponding to the real-time monitoring picture collected by the monitoring camera flows to the NAS device, and the NAS device analyzes and processes the real-time video stream in real time, for example, providing services such as people-flow statistics and traffic-flow statistics. A user can log in to the NAS device from a computer to view the corresponding real-time monitoring picture.
In this method, the user can view the picture of the externally connected camera only by logging in to the NAS device through a computer. The operation is cumbersome, the user cannot be freed from the care task, and the user's normal living activities are seriously affected. Moreover, the real-time monitoring-picture analysis service provided by this method is mainly oriented to commercial clients and is not suitable for a home care scenario.
In view of the above, the embodiments of the present application provide an event reminder method, an electronic device, and a system. Through the method, linkage between a camera and smart devices in the home can be realized: the real-time occurrence of a specific event is determined through real-time pictures or real-time data monitored by the camera, a sensor, and the like, and a reminder is sent to the user through a smart device in the home scene (for example, the smart device the user is currently gazing at or using, or the smart device closest to the user). In this way, the user can be freed from the current care task while the care quality is improved, so that the user can devote more energy to other activities (for example, watching television, holding a video conference, or using a mobile phone or tablet) while providing care, and can receive reminders of care-related specific events in time during those activities, thereby greatly reducing the user's anxiety.
It can be understood that with the popularization of smart home devices (such as smart televisions and central control screens) and portable electronic devices (such as mobile phones and tablets), and with the continuous improvement of chip processing capability, many smart home devices and portable electronic devices have computing power far higher than that of cameras and are better able to manage and analyze the images of monitoring cameras. Therefore, the solutions of the embodiments of the present application can combine the display devices, computing centers, and image/information collection devices (such as monitoring cameras or sensors) in a smart home scenario to realize linkage between the cameras and/or sensors and the smart home devices.
Among the smart home devices in a normally powered state, the chip processing capability of the smart television is high, so linkage is preferably realized between the smart television and the camera and/or sensor, so as to provide an intelligent home care reminder service for the user.
Fig. 4 is an exemplary schematic diagram of an application scenario corresponding to the scheme of the present application according to the embodiment of the present application.
As shown in fig. 4, a monitoring camera 1 (not shown) is used to monitor child activity in the children's room, i.e., to care for a child; a monitoring camera 2 (not shown) is used to monitor pet activity in the balcony area, i.e., to watch whether a pet enters a forbidden area A; and a monitoring camera 3 (not shown) is used to monitor cooking progress in the kitchen, e.g., whether the water has boiled or the cooking time has been reached.
As shown in (a) of fig. 4, a user in the living room area is watching a video picture (for example, a television series or a movie) on the smart television in the living room. The real-time monitoring picture is displayed in the form of a small window on the screen of the smart television and forms a picture-in-picture effect with the video picture the user is watching; for example, the real-time monitoring pictures of the children's room, the kitchen, and the balcony are played in turn in the small window. As shown in (b) of fig. 4, when the child is found to be active through the real-time monitoring picture of the children's room, the small window displaying the real-time monitoring picture on the screen of the smart television is switched to a large window or to full screen, and the picture displayed in the switched large window or full screen is the real-time monitoring picture of the children's room, i.e., the child care picture. At the same time, a corresponding text reminder, for example "The child under care is active!", appears on the screen of the smart television.
It should be understood that when a specific event has not occurred to a care object (for example, a cared-for child moving, or a pet entering a forbidden area), the real-time monitoring picture collected by the corresponding camera can be hidden in the background and not displayed on the screen of the smart television; when the specific event occurs, the corresponding real-time monitoring picture pops up on the screen of the smart television, and/or a corresponding reminder (for example, a text reminder or a voice reminder) is sent to the user through the smart television. This greatly reduces the interference with the user's current viewing activity. Alternatively, the real-time monitoring picture collected by the camera can be displayed on the screen of the smart television in other forms, for example, in a half-screen display form, which is not limited in the present application.
In some embodiments, the user may close the current reminder window by means of the remote control or by voice, and restore the real-time monitoring picture to the small-window display or hide it in the background.
In some embodiments, the user may move the position of the small window displaying the real-time monitoring picture through a remote-control operation or a voice operation, or hide the small window in the background.
In some embodiments, when the user is watching a video picture on the smart television in the living room area, a plurality of real-time monitoring pictures can be displayed on the screen of the smart television. The plurality of real-time monitoring pictures may be played in turn in a floating small window, and the playing frequency and single-play duration of each real-time monitoring picture may be set automatically by the system, determined according to user input, or determined according to the priority of each care event. Alternatively, some or all of the plurality of real-time monitoring pictures may be displayed simultaneously in a multi-picture mode within one window.
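The turn-by-turn playback described above can be sketched as a small scheduler that rotates the feeds through the floating window. This is an illustrative sketch only; the feed names, dwell durations, and the tick-driven clock are assumptions, and a real television system would drive the switch from its own frame clock or timer service.

```python
from itertools import cycle

class WindowRotator:
    """Rotates several monitoring feeds through one floating small window.

    Each feed gets its own dwell time, standing in for the per-picture
    "single-play duration" mentioned in the text.
    """
    def __init__(self, feeds):
        # feeds: list of (feed_name, dwell_seconds) in rotation order
        self._feeds = cycle(feeds)
        self._current, self._remaining = next(self._feeds)

    @property
    def current(self):
        return self._current

    def tick(self, seconds=1):
        # Advance the clock; switch to the next feed when the dwell expires.
        self._remaining -= seconds
        while self._remaining <= 0:
            self._current, dwell = next(self._feeds)
            self._remaining += dwell
        return self._current

# Hypothetical configuration: the children's room gets a longer dwell.
rot = WindowRotator([("child_room", 10), ("kitchen", 5), ("balcony", 5)])
```

A priority-based variant would simply sort or weight the feed list before constructing the rotator.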
In some embodiments, the monitoring camera 1, the monitoring camera 2, and the monitoring camera 3 each transfer the collected real-time monitoring picture to the smart television through the ONVIF standard protocol or a custom protocol.
In a specific implementation, the smart television can automatically search for and access cameras that support the ONVIF standard protocol or the same custom protocol within the same local area network. In addition, the smart television can also automatically search for and access sensors in the same local area network.
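The automatic search step above typically relies on WS-Discovery, which ONVIF uses for device discovery: the television multicasts a SOAP Probe to 239.255.255.250:3702, and each camera answers with a ProbeMatches message carrying its service address (XAddrs). The sketch below, using only the standard library, builds such a Probe and extracts XAddrs from a reply; the sample reply and its address are made up for illustration, and a real implementation would send the probe over a UDP socket and collect responses.

```python
import uuid
from xml.etree import ElementTree as ET

def build_probe() -> bytes:
    """Builds a WS-Discovery Probe for ONVIF NetworkVideoTransmitter devices."""
    return ("""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:%s</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body><d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe></e:Body>
</e:Envelope>""" % uuid.uuid4()).encode()

def extract_xaddrs(probe_match_xml: str) -> list:
    """Pulls the XAddrs (camera service URLs) out of a ProbeMatches reply."""
    root = ET.fromstring(probe_match_xml)
    addrs = []
    for elem in root.iter():
        # Match namespaced tags like {…/discovery}XAddrs regardless of prefix.
        if elem.tag.endswith("XAddrs") and elem.text:
            addrs.extend(elem.text.split())
    return addrs

# Illustrative reply a camera might send back (address is hypothetical).
sample_reply = """<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
  xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery">
  <e:Body><d:ProbeMatches><d:ProbeMatch>
    <d:XAddrs>http://192.168.3.7/onvif/device_service</d:XAddrs>
  </d:ProbeMatch></d:ProbeMatches></e:Body>
</e:Envelope>"""
addrs = extract_xaddrs(sample_reply)
```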
In some embodiments, the smart television may be replaced with another screen the user is currently gazing at (e.g., a mobile phone, a PC, a notebook computer, or a tablet), with another screen currently in use, or with the screen closest to the user.
In some embodiments, on the basis of the embodiment shown in (b) of fig. 4, after the real-time monitoring picture of the children's room is prominently displayed on the screen of the smart television and the reminder "The child under care is active!" is sent, controls corresponding to recommended operations may further be displayed on the screen, such as "play sleep music in the child room" or "turn on the child room air conditioner", and the user can quickly make a remote response to the specific event through these controls.
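The pairing of detected events with recommended one-tap operations can be sketched as a simple lookup table. The event keys and action strings below are hypothetical and mirror the examples in the text; they are not from any real device API.

```python
# Hypothetical mapping from a detected care event to the quick-response
# controls shown next to the reminder.
RECOMMENDED_ACTIONS = {
    "child_active": ["play sleep music in the child room",
                     "turn on the child room air conditioner"],
    "pet_in_forbidden_area": ["play a recall voice on the balcony speaker"],
    "water_boiled": ["turn off the smart kettle"],
}

def actions_for(event: str) -> list:
    """Returns the recommended one-tap operations for a detected event,
    or an empty list when no recommendation is configured."""
    return RECOMMENDED_ACTIONS.get(event, [])
```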
In the embodiments of the present application, the user can care for a specific object while using the device normally. The real-time monitoring picture of the specific object can be displayed in a small window on the screen or hidden in the background. When a specific event occurs to the specific object (i.e., when the real-time monitoring picture satisfies the reminder trigger condition), a reminder can be sent to the user through the screen, and the corresponding real-time monitoring picture is played. In this way, the user is freed from the current care task while the care quality is improved, can devote more energy to other activities (such as watching television, holding a video conference, or using a mobile phone or tablet) while providing care, and can receive reminders of care-related specific events in time during those activities, thereby greatly reducing the user's anxiety.
Fig. 5 illustrates an application scenario schematic diagram corresponding to another embodiment of the present application.
As shown in fig. 5, a monitoring camera 1 (not shown) is used to monitor child activity in the children's room, i.e., to care for a child; a monitoring camera 2 (not shown) is used to monitor pet activity in the balcony area, i.e., to watch whether a pet enters a forbidden area A; and a monitoring camera 3 (not shown) is used to monitor cooking progress in the kitchen, e.g., whether the water has boiled or the cooking time has been reached.
As shown in (a) of fig. 5, a user in the living room area is watching a video picture (for example, a television series or a movie) on the smart television in the living room. The real-time monitoring picture is displayed in the form of a small window on the screen of the smart television and forms a picture-in-picture effect with the video picture the user is watching; for example, the real-time monitoring pictures of the children's room, the kitchen, and the balcony are played in turn in the small window, and the small window is currently playing the real-time monitoring picture of the children's room. As shown in (b) of fig. 5, when a pet is found to have entered the forbidden area A through the real-time monitoring picture of the balcony, the small window displaying the real-time monitoring picture on the screen of the smart television is switched to a large window or to full screen, and the picture displayed in the switched large window or full screen is the real-time monitoring picture of the balcony, i.e., the pet care picture. At the same time, a corresponding text reminder appears on the screen of the smart television, and/or a voice reminder can be played to remind the user to attend to the pet.
Other descriptions of this embodiment of the present application are the same as those of the embodiment shown in fig. 4, and are not repeated here for brevity.
Illustratively, FIG. 6 shows a schematic flow chart of a method 600 for event alerting provided by an embodiment of the present application. As shown in fig. 6, the method 600 includes:
s601, a first camera collects real-time monitoring pictures of a first object.
And S602, the first camera sends a real-time video stream of the first object to the first intelligent device in real time.
The first camera transfers the collected real-time monitoring picture to the first smart device in the form of a real-time video stream through the ONVIF standard protocol or a custom protocol.
In some embodiments, when the first camera and the first smart device are of the same vendor, the custom protocol may be, for example, a vendor custom protocol.
When the first camera transfers the collected real-time monitoring pictures to the first intelligent device in a real-time video stream mode through the ONVIF standard protocol, the first camera is a camera supporting the ONVIF standard protocol.
In one implementation, the first camera and the first smart device are located in the same local area network, and the first smart device can automatically search for and access to a first camera supporting the ONVIF standard protocol in the local area network.
The first smart device may be a smart device the user is gazing at, a smart device in use, or the smart device nearest to the user, which is not limited in the present application.
More specifically, the first smart device may be, for example, a smart television, and may also be a device such as a central control screen, a tablet, a mobile phone, a notebook computer, a PC, a smart watch, or the like.
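The device-preference order described above (a gazed-at device first, then a device in use, then the nearest device) can be sketched as a small selection function. The dictionary field names (`gazed_at`, `in_use`, `distance_m`) and the sample devices are illustrative assumptions, not part of any real device-management API.

```python
def pick_first_device(devices):
    """Chooses which device receives the reminder, in the preference order
    described above: gazed-at, then in use, then nearest to the user."""
    gazed = [d for d in devices if d.get("gazed_at")]
    if gazed:
        return gazed[0]["name"]
    in_use = [d for d in devices if d.get("in_use")]
    if in_use:
        return in_use[0]["name"]
    # Fall back to physical proximity when nothing is gazed at or in use.
    return min(devices, key=lambda d: d["distance_m"])["name"]

# Hypothetical home: the tablet is being looked at, so it wins even though
# the watch is physically closer.
devices = [
    {"name": "tv", "gazed_at": False, "in_use": True, "distance_m": 3.0},
    {"name": "tablet", "gazed_at": True, "in_use": True, "distance_m": 1.0},
    {"name": "watch", "gazed_at": False, "in_use": False, "distance_m": 0.1},
]
```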
And S603, the first intelligent device acquires a real-time monitoring picture of the first object according to the real-time video stream of the first object.
In some implementations, after receiving the real-time video stream of the first object, the first smart device decodes the real-time video stream of the first object to obtain the real-time monitoring picture of the first object.
In a specific implementation, the screen of the first smart device is currently in a normal use state (for example, playing a television series or a movie). After the first smart device obtains the real-time monitoring picture of the first object, it displays the real-time monitoring picture in the form of a small window on the current display interface of its screen, or hides the real-time monitoring picture in the background without displaying it on the screen.
S604, the first intelligent device determines that the first object has a first event.
The first event refers to a specific event related to the activity of the first object. For example, when the first object is a child, the first event may be "the child is active"; when the first object is a pet, the first event may be "the pet has entered a forbidden area"; and when the first object is a cooking appliance, the first event may be "the water has boiled" or "the food cooking time is up", and the like.
In one implementation, the first smart device has picture analysis and processing capability. The first smart device analyzes and processes the obtained real-time monitoring picture of the first object and determines, according to the analysis and processing result, whether the first event has occurred to the first object.
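As a minimal stand-in for this analysis step, a "child is active" style event can be flagged by comparing consecutive grayscale frames and counting changed pixels. The thresholds and tiny 8×8 frames below are illustrative; a production device would run much richer analysis (person detection, zone intrusion, and so on) rather than this simple frame difference.

```python
def motion_detected(prev_frame, cur_frame, pixel_thresh=25, ratio_thresh=0.01):
    """Flags motion when the fraction of pixels whose grayscale value
    changes by more than pixel_thresh reaches ratio_thresh."""
    changed = total = 0
    for row_p, row_c in zip(prev_frame, cur_frame):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(p - c) > pixel_thresh:
                changed += 1
    return changed / total >= ratio_thresh

# Two toy 8x8 grayscale frames: in the second, one pixel jumps from 10 to
# 200, i.e. 1/64 of the pixels changed, which exceeds the 1% threshold.
still = [[10] * 8 for _ in range(8)]
moving = [row[:] for row in still]
moving[0][0] = 200
```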
In some embodiments, when the first smart device is currently in a screen-off state and determines, according to the analysis and processing result, that the first event has occurred to the first object, the first smart device sends a first indication message to a second device. The first indication message is used to instruct the second device to remind the user that the first event has occurred to the first object, and after receiving the first indication message, the second device reminds the user accordingly. The second device may be, for example, a mobile phone or tablet the user is using, or another device nearest to the user.
In yet another implementation, the first smart device receives a second indication message indicating that the first event has occurred to the first object. The second indication message may come, for example, from another smart device or other device that has picture analysis and processing capability and that also receives the real-time video stream of the first object.
S605, the first smart device prominently displays the real-time monitoring picture of the first object on the screen of the first smart device.
In one implementation, a television series is playing on the current screen of the first smart device, and the real-time monitoring picture of the first object is displayed in the form of a small window on the screen, forming a picture-in-picture scene with the playing picture. At this time, if the first smart device determines that the first event has occurred to the first object, the small window displaying the real-time monitoring picture is switched to a large-window or full-screen display; that is, the real-time monitoring picture of the first object is displayed in a large window or full screen on the current display screen. Alternatively, the display form of the real-time monitoring picture of the first object may be switched from a statically displayed small window to a highlighted or blinking small window. At the same time, corresponding reminder information may be displayed on the screen, or a corresponding voice prompt may be issued, and so on.
In yet another implementation, a television series is playing on the current screen of the first smart device, the real-time monitoring picture of the first object is hidden in the background and not displayed on the screen, and the user is watching the series in full screen. At this time, if the first smart device determines that the first event has occurred to the first object, the real-time monitoring picture of the first object pops up on the screen of the first smart device, specifically as a small-window, large-window, or full-screen display; that is, this process can equally be understood as prominently displaying the real-time monitoring picture of the first object on the current display screen. At the same time, corresponding reminder information may be displayed on the screen, or a corresponding voice prompt may be issued, and so on.
In some embodiments, when it is determined that the first event has occurred to the first object, a text reminder or a voice reminder is first sent through the first smart device, and then, in response to the user's operation of viewing the real-time monitoring picture, the real-time monitoring picture of the first object is prominently displayed on the screen of the first smart device.
Popping up the real-time monitoring picture of the first object on the screen of the first smart device may, at the implementation level, mean first popping up a small window of the real-time monitoring picture and then automatically enlarging it to a large-window or full-screen display. When the time interval between the pop-up and the enlarged display is very short, the user does not perceive this process; at the user level, the large window or full screen appears directly. When the time interval between the pop-up and the enlarged display is long enough for the naked eye to perceive the enlarging process, the user perceives the enlarging process.
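The window behaviors discussed in these paragraphs amount to a small display-state machine: the picture is hidden or shown small during normal viewing, promoted to a large window or full screen when the first event fires, and restored or re-hidden when the user dismisses the reminder. The sketch below is illustrative; the state names and method names are assumptions, not any real TV framework API.

```python
class MonitorWindow:
    """Display states for the real-time monitoring picture, following the
    behaviour described above."""
    STATES = ("hidden", "small", "large", "fullscreen")

    def __init__(self, state="hidden"):
        assert state in self.STATES
        self.state = state

    def on_event(self, fullscreen=False):
        # S605: prominently display the picture when the first event occurs.
        self.state = "fullscreen" if fullscreen else "large"
        return self.state

    def dismiss(self, restore_small=True):
        # The user closes the reminder via remote control or voice; the
        # picture returns to the small window or is hidden in the background.
        self.state = "small" if restore_small else "hidden"
        return self.state

w = MonitorWindow("small")
```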
Alternatively, displaying the real-time monitoring picture of the first object on the screen of the first intelligent device in a pop-up manner may mean first popping up the picture in a small window and then, in response to an enlargement operation by the user, further displaying it in a large window or in full screen.
In some embodiments, the care function of the first smart device may be started, for example, through a settings function, a specific application, or by voice on the first smart device; it may also be started in any other possible manner, which is not limited in the present application.
It should be understood that the number of cameras is not limited in the embodiment of the present application, and the first intelligent device in the embodiment of the present application may acquire real-time monitoring pictures of a plurality of care objects acquired by a plurality of cameras, so as to implement care of a plurality of care objects at the same time.
In the embodiment of the application, a user can care for a specific object while going about other activities normally. The real-time care picture of the specific object can be displayed in a small window on the screen being watched, or hidden in the background; when a specific event occurs to the object (that is, when the real-time care picture meets a reminder trigger condition), a reminder is sent to the user through the screen and the corresponding real-time care picture is played. This improves care quality while freeing the user from the current care task, so that the user can devote more energy to other activities (such as watching a television, starting a video conference, or using a mobile phone or tablet) while caring, and still receive timely notice of care-related events during those activities, thereby greatly reducing the user's anxiety.
In some embodiments, the first intelligent device may further receive sensing data collected in real time by a first sensor, determine whether a specific event occurs according to the sensing data, and send a reminder to the user through the first intelligent device when the specific event is determined to occur. The sensor is used for detecting whether a specific event occurs, and may be used independently or together with devices such as cameras.
In one example, the first sensor may be a sensor installed in a bathroom; for example, the first sensor may be an infrared sensor, an acceleration sensor, or an integrated combination of multiple types of sensors, and the acquired sensing data is used for monitoring emergencies such as falls of elderly people.
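As a concrete illustration of how acceleration data could flag a fall, the sketch below uses a common heuristic (a near-free-fall dip followed by an impact spike); the thresholds and the rule itself are assumptions for illustration, not the algorithm specified in this application.

```python
import math

FALL_IMPACT_G = 2.5  # impact spike threshold in g (assumed value)
FREE_FALL_G = 0.3    # near-free-fall magnitude band in g (assumed value)

def detect_fall(samples):
    """Flag a fall when a free-fall dip is followed by an impact spike.

    samples: list of (ax, ay, az) accelerometer readings in g.
    Illustrative heuristic only.
    """
    saw_free_fall = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag < FREE_FALL_G:
            saw_free_fall = True  # body briefly in near free fall
        elif saw_free_fall and mag > FALL_IMPACT_G:
            return True           # impact shortly after free fall
    return False
```

A production system would additionally check for post-impact stillness and fuse the result with the monitoring picture, as described below.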
In some embodiments, the first intelligent device may further receive a video stream from a visual doorbell; when it is detected that a person has stayed at the doorway for a long time, or that a parcel has been left at the doorway for a period of time, the real-time picture collected by the visual doorbell may be popped up on the screen of the first intelligent device and a reminder sent to the user.
On the basis of the embodiment shown in fig. 6, taking the first smart device being a smart television as an example, fig. 7 shows a schematic flowchart of a method 700 for event reminding according to another embodiment of the present application. As shown in fig. 7, the method 700 includes:
S701, the first camera collects a real-time monitoring picture of the first object.
Wherein the first object may be understood as a first care object.
S702, the second camera collects real-time monitoring pictures of the second object.
Wherein the second subject may be understood as a second care subject.
Here, S701 and S702 are continuously performed steps; their execution order is not limited, and they may or may not start at the same time.
And S703, the first camera sends the real-time video stream of the first object to the intelligent television in real time.
The explanation of this step is the same as that of S602 in the embodiment shown in fig. 6, and is not repeated here for brevity.
And S704, the second camera sends the real-time video stream of the second object to the intelligent television in real time.
The second camera is used for transferring the acquired real-time monitoring pictures to the intelligent television in a real-time video stream mode through an ONVIF standard protocol or a custom protocol.
In some embodiments, when the second camera and the smart television are of the same vendor, the custom protocol may be, for example, a vendor custom protocol.
When the second camera transfers the collected real-time monitoring pictures to the intelligent television in a real-time video stream mode through the ONVIF standard protocol, the second camera is a camera supporting the ONVIF standard protocol.
In one implementation, the second camera and the smart television are located in the same local area network, and the smart television can automatically search for and access to the second camera supporting the ONVIF standard protocol in the local area network.
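ONVIF device discovery is built on WS-Discovery: the television multicasts a SOAP Probe to 239.255.255.250:3702 and cameras answer with a ProbeMatch carrying their service URLs (XAddrs). Below is a minimal, illustrative sketch of that exchange; a real ONVIF stack would also handle scopes, authentication, and the media service before pulling the RTSP stream.

```python
import re
import uuid

# WS-Discovery multicast group and port used by ONVIF device discovery
MULTICAST_ADDR = ("239.255.255.250", 3702)

def build_probe() -> bytes:
    """Build a minimal WS-Discovery Probe for ONVIF video transmitters."""
    msg_id = uuid.uuid4()
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:{msg_id}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body><d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe></e:Body>
</e:Envelope>""".encode()

def parse_xaddrs(probe_match: str) -> list:
    """Pull device service URLs (XAddrs) out of a ProbeMatch reply."""
    m = re.search(r"<d:XAddrs>(.*?)</d:XAddrs>", probe_match, re.S)
    return m.group(1).split() if m else []
```

In use, the probe bytes would be sent over a UDP socket to `MULTICAST_ADDR`, and each reply passed through `parse_xaddrs` to obtain the camera's service endpoint.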
Here, S703 and S704 are also continuously performed steps, and the execution order of S703 and S704 is not limited.
And S705, the intelligent television acquires a real-time monitoring picture of the first object according to the real-time video stream of the first object.
The explanation of this step is the same as that of S603 in the embodiment shown in fig. 6, and is not repeated here for brevity.
S706, the intelligent television acquires a real-time monitoring picture of the second object according to the real-time video stream of the second object.
In some implementations, after receiving the real-time video stream of the second object, the smart television decodes the real-time video stream of the second object to obtain the real-time monitoring picture of the second object.
And S707, the intelligent television displays the real-time monitoring picture of the first object and the real-time monitoring picture of the second object on the current screen in the form of small windows.
In a specific implementation manner, the current screen of the smart television is in a normal use state (for example, a television series or a movie is being played), and after the smart television acquires the real-time monitoring picture of the first object and the real-time monitoring picture of the second object, the real-time monitoring picture of the first object and the real-time monitoring picture of the second object are displayed in a small window form on the current display interface of the screen of the smart television.
When the real-time monitoring picture of the first object and the real-time monitoring picture of the second object are displayed in the form of small windows, the real-time monitoring picture of the first object and the real-time monitoring picture of the second object can be alternately displayed in the same small window, or the real-time monitoring picture of the first object and the real-time monitoring picture of the second object are simultaneously displayed in the same small window in the form of multiple pictures.
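The two small-window policies just described, alternating between sources and showing all sources at once in a multi-picture layout, can be sketched as follows; the `dwell` parameter and the frame representation are illustrative assumptions.

```python
def alternate(frames_by_source, tick, dwell=1):
    """Round-robin: show one source's latest frame per display tick.

    frames_by_source: dict of source_id -> latest frame
    tick: monotonically increasing display tick
    dwell: ticks to stay on one source before switching (assumed knob)
    """
    sources = sorted(frames_by_source)
    idx = (tick // dwell) % len(sources)
    return frames_by_source[sources[idx]]

def multi_picture(frames_by_source):
    """Split view: every source's frame at once, in a fixed source order."""
    return [frames_by_source[s] for s in sorted(frames_by_source)]
```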
It should be understood that, as an alternative to step S707, after acquiring the real-time monitoring picture of the first object and that of the second object, the smart television may instead hide both pictures in the background and not display them on the screen.
And S708, the intelligent television determines whether the first event occurs to the first object according to the real-time monitoring picture of the first object, and/or the intelligent television determines whether the second event occurs to the second object according to the real-time monitoring picture of the second object.
The first event is a specific event related to the activity of the first object. For example, when the first object is a child, the first event may be an event triggered by the child's activity, such as the child kicking off the quilt or moving to the edge of the bed; when the first object is a pet, the first event may be an event triggered by the pet entering a controlled area, for example a forbidden area; and when the first object is a kitchen scene, the first event may be an event triggered by a change in the cooking process, such as water boiling, porridge overflowing, or the cooking time of food being reached.
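The "moved to the bedside" and "entered a forbidden area" triggers above both reduce to checking whether a detected subject's position falls inside a configured region of the frame. A minimal sketch of that rule (the detection boxes would come from the recognition algorithm; the region is user-configured):

```python
def box_center(box):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def point_in_region(point, region):
    """True if a point lies inside an (x1, y1, x2, y2) region."""
    x, y = point
    rx1, ry1, rx2, ry2 = region
    return rx1 <= x <= rx2 and ry1 <= y <= ry2

def region_event(detections, region):
    """Raise an event when any detected subject's center enters the region.

    detections: list of (label, box) in frame coordinates.
    Illustrative trigger rule only.
    """
    for label, box in detections:
        if point_in_region(box_center(box), region):
            return f"{label} entered restricted area"
    return None
```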
The explanation of the second event is similar to that of the first event, and is not repeated here for brevity.
In one implementation, the smart television has picture-analysis capability: it analyzes the acquired real-time monitoring pictures of the first object and the second object, thereby monitoring for the first event and the second event.
S709, when the smart television determines, according to the real-time monitoring picture of the first object, that the first event has occurred to the first object, it prominently displays the real-time monitoring picture of the first object on its screen; and/or, when the smart television determines, according to the real-time monitoring picture of the second object, that the second event has occurred to the second object, it prominently displays the real-time monitoring picture of the second object on its screen.
In one implementation, a television series is playing on the current screen of the smart television, and the real-time monitoring pictures of the first object and the second object are alternately displayed in a small window on the screen, forming a picture-in-picture scene with the playing series. If the smart television then determines that the first event has occurred to the first object, the small window is switched to a large-window or full-screen display showing the real-time monitoring picture of the first object; if it determines that the second event has occurred to the second object, the small window is switched to a large-window or full-screen display showing the real-time monitoring picture of the second object; and if it determines that both the first event and the second event have occurred, the small window may be switched to a large-window or full-screen display in which the real-time monitoring pictures of the first object and the second object are shown simultaneously. At the same time, corresponding reminder information may be displayed on the screen, or a corresponding voice prompt may be issued.
In a further implementation, a television series is playing on the current screen of the smart television, the real-time monitoring pictures of the first object and the second object are hidden in the background and not displayed, and the user is watching the series in full screen. If the smart television then determines that the first event has occurred to the first object, it pops up the real-time monitoring picture of the first object on the screen, specifically in a small window, a large window, or full screen; if it determines that the second event has occurred to the second object, it pops up the real-time monitoring picture of the second object in the same manner; and if it determines that both events have occurred, it may pop up the real-time monitoring pictures of both objects, for example in a multi-picture form. At the same time, corresponding reminder information may be displayed on the screen, or a corresponding voice prompt may be issued.
In the embodiment of the application, a user can remotely care for children, pets, a cooking process, and other objects in the home while watching the smart television. For example, during normal viewing, if an infant in the children's room stirs or a pet enters a forbidden area, the television automatically pops up the picture of the corresponding monitoring camera and reminds the user of the corresponding event. This improves care quality while freeing the user from the current care task, so that the user can devote more energy to other activities while caring, and at the same time greatly relieves the user's anxiety.
Taking the first smart device being a smart television as an example, fig. 8 shows a system architecture diagram of a system 800 for event reminding according to an embodiment of the present application.
As shown in fig. 8, the system for event reminding includes a plurality of cameras and a smart television. The plurality of cameras may be arranged by the user at different positions in the home scene according to actual needs, and are used to collect real-time pictures of a plurality of care objects. In the example shown in fig. 8, the system 800 includes a camera 810, a camera 820, a camera 830, a camera 840, and a smart television 850, where the smart television 850 includes a protocol interface module 851, a video decoding module 852, a visual analysis module 853, a display module 854, and a reminder module 855. Specifically:
The camera 810 is configured to collect a real-time monitoring picture of the first object.
The camera 820 is used for acquiring a real-time monitoring picture of the second object.
The camera 830 is configured to collect a real-time monitoring picture of the third object.
The camera 840 is used for acquiring a real-time monitoring picture of the fourth object.
Wherein, camera 810, camera 820, camera 830, and camera 840 all support the ONVIF protocol or custom protocol.
The protocol interface module 851 is configured to integrate an ONVIF protocol interface and/or a custom protocol interface, so as to search for, discover, and access the video streams of monitoring cameras in the local area network.
That is, the protocol interface module 851 is configured to obtain a video stream collected by a camera supporting the ONVIF protocol or the custom protocol in the lan.
Specifically, in the example shown in fig. 8, the protocol interface module 851 is configured to obtain a real-time video stream of a first object transmitted by the camera 810, further configured to obtain a real-time video stream of a second object transmitted by the camera 820, further configured to obtain a real-time video stream of a third object transmitted by the camera 830, and further configured to obtain a real-time video stream of a fourth object transmitted by the camera 840.
In some embodiments, the functions implemented by the protocol interface module 851 may be performed by a processor integrated in the smart television 850, for example through a video-stream acquisition interface.
The video decoding module 852 is configured to perform decoding processing on the acquired video stream.
Specifically, in the example shown in fig. 8, the video decoding module 852 is configured to perform parallel decoding processing on the real-time video stream of the first object, the real-time video stream of the second object, the real-time video stream of the third object, and the real-time video stream of the fourth object, so as to obtain a real-time monitoring picture of the first object, a real-time monitoring picture of the second object, a real-time monitoring picture of the third object, and a real-time monitoring picture of the fourth object.
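The parallel decoding described here can be sketched as one worker per stream feeding a shared frame queue. In the sketch the decode step is simulated by tagging each packet, since a real television would hand the packets to its H.264/H.265 decoder; the structure, not the codec, is the point.

```python
import threading
import queue

def decode_worker(stream_id, packets, out_q):
    """Stand-in for a per-stream decoder thread (decoding is simulated)."""
    for pkt in packets:
        out_q.put((stream_id, f"frame:{pkt}"))

def decode_in_parallel(streams):
    """streams: dict of stream_id -> list of encoded packets.

    Runs one worker per stream and collects all decoded frames.
    """
    out_q = queue.Queue()
    workers = [
        threading.Thread(target=decode_worker, args=(sid, pkts, out_q))
        for sid, pkts in streams.items()
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()  # wait for every stream to finish decoding
    frames = []
    while not out_q.empty():
        frames.append(out_q.get())
    return frames
```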
In some embodiments, the functions performed by video decoding module 852 may be performed by a processor integrated in smart television 850, such as a decoding device.
As the core module of the system 800, the visual analysis module 853 integrates multiple image recognition algorithms and, through these algorithms, performs visual analysis on the real-time monitoring pictures according to different user requirements; that is, it determines whether a care object has generated an event that triggers a reminder. Put another way, the visual analysis module 853 is configured to determine, according to the real-time monitoring picture, whether a specific event has occurred, and to remind the user of the specific event.
Specifically, in the example shown in fig. 8, the first object may be, for example, a child, and algorithm 1 may be, for example, a smart child-care algorithm: the visual analysis module 853 performs visual analysis on the real-time monitoring picture of the first object to determine whether the first event has occurred, where the first event may be, for example, an activity of the cared-for child. The second object may be, for example, a pet, and algorithm 2 may be, for example, a smart pet-care algorithm: the visual analysis module 853 performs visual analysis on the real-time monitoring picture of the second object to determine whether the second event has occurred, where the second event may be, for example, the cared-for pet entering a forbidden area. The third object may be, for example, a kitchen scene: the visual analysis module 853 performs visual analysis on the real-time monitoring picture of the third object to determine whether the third event has occurred, where the third event may be, for example, water boiling.
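One way to integrate several per-object algorithms behind a single analysis entry point is a registry keyed by object type. The sketch below is an illustrative structure only; the `frame` dictionaries stand in for real recognition results, and the algorithm names are placeholders.

```python
ALGORITHMS = {}

def register(object_type):
    """Decorator binding a care algorithm to an object type."""
    def wrap(fn):
        ALGORITHMS[object_type] = fn
        return fn
    return wrap

@register("child")
def child_care(frame):
    # Placeholder: a real algorithm would run activity recognition.
    return "child activity" if frame.get("moving") else None

@register("pet")
def pet_care(frame):
    # Placeholder: a real algorithm would run region-intrusion detection.
    return "pet in forbidden area" if frame.get("in_forbidden_area") else None

def analyze(object_type, frame):
    """Route a monitoring frame to the algorithm configured for its object."""
    algo = ALGORITHMS.get(object_type)
    return algo(frame) if algo else None
```

New scenes (elderly care, doorway anti-theft) would then be added by registering one more function, without touching the dispatch path.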
It should be appreciated that the visual analysis module 853 of the smart television 850 may further integrate algorithms for other scenes; for example, it may integrate a smart elderly-care algorithm that analyzes, from sensor data or monitoring-picture data, whether an elderly person has had an emergency such as a fall, and a smart home anti-theft algorithm that analyzes, from the monitoring pictures of a visual doorbell or from sensor data, whether a person is lingering at the doorway.
Wherein the visual analysis module 853 may be an AI visual analysis module. In some embodiments, the functionality implemented by the visual analysis module 853 may be performed by a processor integrated in the smart television 850, such as an image processing chip.
In some embodiments, multiple cameras or a combination of sensors and cameras may be used to co-care the same object, where the visual analysis module 853 may perform fusion analysis and processing on the real-time monitoring frames collected by the multiple cameras, or fusion analysis and processing on the real-time monitoring frames collected by the cameras and the real-time sensing data collected by the sensors, so as to obtain a more accurate analysis result, thereby improving the prompt timeliness and accuracy of a specific event.
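A simple way to realize the fusion described above is to combine per-source confidences before thresholding, so that two moderate, agreeing cues can trigger an event even when neither source alone is decisive. The weights and threshold below are illustrative assumptions, not values from this application.

```python
def fuse(camera_conf, sensor_conf, w_cam=0.6, w_sensor=0.4, threshold=0.5):
    """Weighted fusion of camera and sensor event confidences.

    camera_conf / sensor_conf: confidences in [0, 1] from each source.
    Weights and threshold are assumed tuning knobs.
    """
    score = w_cam * camera_conf + w_sensor * sensor_conf
    return score >= threshold
```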
The display module 854 is configured to display a corresponding real-time monitoring picture on the screen of the smart tv 850 after the video decoding module 852 decodes the real-time video stream.
Specifically, in the example shown in fig. 8, the display module 854 is configured to display a real-time monitoring screen of a first object on a screen of the smart tv 850, is further configured to display a real-time monitoring screen of a second object on a screen of the smart tv 850, is further configured to display a real-time monitoring screen of a third object on a screen of the smart tv 850, and is further configured to display a real-time monitoring screen of a fourth object on a screen of the smart tv 850.
In some implementations, the display module 854 is specifically configured to alternately display, in a small window on the screen of the smart television 850, the real-time monitoring pictures of the first, second, third, and fourth objects.

In yet other implementations, the display module 854 is specifically configured to simultaneously display, in a multi-picture form within a single small window on the screen of the smart television 850, the real-time monitoring pictures of the first, second, third, and fourth objects.
In still other implementations, the display module 854 is specifically configured to display a real-time monitoring screen of the first object on a screen of the smart tv 850 when the first event occurs, the display module 854 is also specifically configured to display a real-time monitoring screen of the second object on a screen of the smart tv 850 when the second event occurs, the display module 854 is also specifically configured to display a real-time monitoring screen of the third object on a screen of the smart tv 850 when the third event occurs, and the display module 854 is also specifically configured to display a real-time monitoring screen of the fourth object on a screen of the smart tv 850 when the fourth event occurs.
In some embodiments, the functions performed by the display module 854 may be performed by a display integrated in the smart television 850.
The reminding module 855 is configured to, when the visual analysis module 853 determines that a specific event occurs, remind the user of the event by means of a screen display or an audio prompt on the smart tv.
Specifically, in the example shown in fig. 8, the reminding module 855 is configured to remind a user of a first event by means of a screen display or an audio prompt on the smart tv when the first object has the first event, the reminding module 855 is configured to remind the user of a second event by means of a screen display or an audio prompt on the smart tv when the second object has the second event, the reminding module 855 is configured to remind the user of a third event by means of a screen display or an audio prompt on the smart tv when the third object has the third event, and the reminding module 855 is configured to remind the user of a fourth event by means of a screen display or an audio prompt on the smart tv when the fourth object has the fourth event.
The alert module 855 may be, for example, an acousto-optic alert module.
In some embodiments, the functionality implemented by the reminder module 855 can be accomplished by one or more of a display, a sounder, and an optical device integrated in the smart television 850.
Optionally, the smart tv 850 may further include a remote control module for performing a selection control on the related smart device based on a control operation performed on the smart tv by the user after a specific event occurs and the user is reminded.
In one example, when the smart television reminds the user of a child's activity: if the child sneezes, the user can remotely perform the operation of turning off the air conditioner in the child's room through the smart television; if the child cries, the user can remotely turn on the sleep music in the child's room through the smart television, or remotely control a milk-warming operation in another space (such as the kitchen).
In yet another example, when the smart television alerts the user that water on the induction cooker is boiling, the user may perform a "power off the induction cooker" operation through the smart television's remote control.
In the embodiment of the application, a user can remotely care for children, pets, a cooking process, and other objects in the home while watching the smart television. For example, during normal viewing, if an infant in the children's room makes a movement that triggers a marked event, or a pet enters a forbidden area, the television automatically pops up the picture of the corresponding monitoring camera and reminds the user of the corresponding event. This improves care quality while freeing the user from the current care task, so that the user can devote more energy to other activities while caring, and at the same time greatly relieves the user's anxiety.
In addition, video streams can be transmitted between the camera and the smart television through the ONVIF protocol, so that the video streams pass directly between the two devices without going through a cloud, which better protects user privacy.
In addition, in this method, the smart television completes the analysis of the real-time monitoring pictures. As a household device that is normally powered, the smart television has strong analysis and processing capability, and by deploying a plurality of different algorithms it can realize more flexible and convenient intelligent care reminders in different scenes.

Illustratively, fig. 9 shows a schematic flowchart of a method 900 of event reminding provided by an embodiment of the present application.
As shown in fig. 9, the method 900 includes:
S901, a camera A collects a real-time monitoring picture of the children's room.
And S902, the camera A sends real-time video stream of the child room to the intelligent television in real time.
The explanation of this step is similar to that of S602 in the embodiment shown in fig. 6, and is not repeated here for brevity.
And S903, the intelligent television acquires a real-time monitoring picture of the child room according to the real-time video stream of the child room.
The explanation of this step is similar to that of S603 in the embodiment shown in fig. 6, and is not repeated here for brevity.
S904, the smart television analyzes the real-time monitoring picture of the children's room to determine whether the cared-for child has an activity.
In some implementations, the smart tv analyzes real-time monitoring pictures of the child's room based on a smart child care algorithm.
S905, when the smart television determines that the cared-for child has an activity, it prominently displays the real-time monitoring picture of the children's room on its screen.
The explanation about this step is similar to that of S605 in the embodiment shown in fig. 6, and is not repeated here for brevity.
S906, the smart television sends a text reminder and/or a voice reminder to the user while prominently displaying the real-time monitoring picture of the children's room.
S907, the smart television displays, on the screen, shortcut operation controls associated with the child's specific activity, where the associated shortcut operation controls are used by the user to remotely respond to and control the "cared-for child activity" event.
In some embodiments, when the child's activity is "sneezing", the associated shortcut control may be "turn off the air conditioner in the child's room"; when the child's activity is "crying", the associated shortcut control may be "play the sleep music in the child's room".
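The activity-to-control association can be sketched as a lookup table that the television consults when it renders the reminder. The device/room/command tuples below are illustrative assumptions, not a fixed control API.

```python
# Hypothetical mapping from a recognized activity to on-screen shortcut
# controls, each expressed as (device, room, command).
SHORTCUTS = {
    "sneeze": [("air_conditioner", "child_room", "off")],
    "cry": [("speaker", "child_room", "play_sleep_music"),
            ("milk_warmer", "kitchen", "start")],
}

def shortcuts_for(activity):
    """Controls to surface on screen for a recognized child activity."""
    return SHORTCUTS.get(activity, [])
```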
It should be noted that S906 and S907 described above are optional steps.
It should be appreciated that throughout the execution of the method 900, the screen of the smart tv is in a normal use state, for example, in a state of playing a tv show or movie normally, and the watching experience of the user is not affected in the child care process.
In some embodiments, for example, the nursing function of the smart tv may be started through a setting function of the smart tv, a specific application function installed on the smart tv, or a voice manner, and the nursing function of the smart tv may also be started through any other possible manner, which is not limited in the present application.
According to the embodiment of the application, the user can remotely care for a child in the family while using the smart television; the smart television analyzes the real-time monitoring picture through its own AI capability and assists in reminding the user when a specific event is recognized, which can reduce the user's anxiety when using the smart television.
And when reminding a user of a specific event, the shortcut control associated with the specific event can be provided for the user through the intelligent television, so that the user can respond to the specific event conveniently through the intelligent television, and anxiety of the user when using the intelligent television is reduced to a greater extent.
Illustratively, FIG. 10 shows a schematic flow chart of a method 1000 for event alerting provided by an embodiment of the present application. As shown in fig. 10, the method 1000 includes:
S1001, a camera B collects a real-time monitoring picture of the first area.
The first area comprises an area A, and the area A is a pet forbidden area.
In some embodiments, the area A is an area selected by the user with a frame in the real-time monitoring picture of the first area displayed on the smart television; for example, it may be framed manually with the smart television's remote control when the smart pet-care function is started. After the area has been set, the user can continue viewing or other operations normally.
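Since the user draws the frame on the television screen while the forbidden-area check runs on the camera frame, the selected rectangle needs to be mapped between the two coordinate systems. A minimal sketch, assuming the monitoring picture fills the screen area it is drawn on (letterboxing and aspect-ratio correction are omitted):

```python
def screen_to_frame(rect, screen_size, frame_size):
    """Map a rectangle drawn on the TV screen to camera-frame coordinates.

    rect: (x1, y1, x2, y2) in screen pixels.
    screen_size / frame_size: (width, height) of screen and camera frame.
    """
    sw, sh = screen_size
    fw, fh = frame_size
    x1, y1, x2, y2 = rect
    return (round(x1 * fw / sw), round(y1 * fh / sh),
            round(x2 * fw / sw), round(y2 * fh / sh))
```

The mapped rectangle is then stored as area A and compared against detected pet positions in every decoded frame.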
And S1002, the camera B sends real-time video streams of the first area to the intelligent television in real time.
The explanation of this step is similar to that of S602 in the embodiment shown in fig. 6, and is not repeated here for brevity.
And S1003, the intelligent television acquires a real-time monitoring picture of the first area according to the real-time video stream of the first area.
The explanation of this step is similar to that of S603 in the embodiment shown in fig. 6, and is not repeated here for brevity.
After the real-time monitoring picture of the first area is obtained, the real-time monitoring picture of the first area can be displayed on the current screen of the intelligent television in a floating window mode or hidden in the background.
And S1004, the intelligent television analyzes the real-time monitoring picture of the first area to determine whether the pet enters the area A.
In some implementations, the smart television analyzes the real-time monitoring view of the first area based on a smart pet care algorithm.
S1005, when the smart television determines that the pet has entered the area A, it prominently displays the real-time monitoring picture of the first area on its screen.
The explanation about this step is similar to that of S605 in the embodiment shown in fig. 6, and is not repeated here for brevity.
S1006, while prominently displaying the real-time monitoring picture of the first area, the smart TV issues a text reminder and/or a voice reminder to the user.
S1007, the smart TV displays on its screen a shortcut control associated with the event "pet entered area A"; the associated shortcut control allows the user to remotely respond to and handle the event.
In some embodiments, the associated shortcut control may be "play alert voice in the first area".
It should be noted that S1006 and S1007 described above are optional steps.
It should be appreciated that throughout the execution of the method 1000, the screen of the smart TV remains in a normal use state, for example, normally playing a TV show or movie, so the user's viewing experience is not affected during pet care.
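The flow of S1004 to S1007 can be sketched as a per-frame handler on the smart TV side. This is a hedged illustration only: the detector, display, and reminder callables below stand in for the intelligent pet care algorithm and the TV's UI, none of which the application specifies at code level.

```python
from typing import Callable, Optional, Tuple

def handle_frame(
    frame: object,
    detect_pet: Callable[[object], Optional[Tuple[int, int]]],
    in_area_a: Callable[[int, int], bool],
    show_prominently: Callable[[object], None],
    remind_user: Callable[[str], None],
    show_shortcut: Callable[[str], None],
) -> bool:
    """Process one monitoring frame (S1004-S1007).

    Returns True if the pet was found inside area A and the
    reminder flow was triggered, False otherwise.
    """
    pos = detect_pet(frame)              # S1004: analyze the frame
    if pos is None or not in_area_a(*pos):
        return False                     # nothing to do; TV keeps playing
    show_prominently(frame)              # S1005: prominent display
    remind_user("Pet entered area A")            # S1006 (optional step)
    show_shortcut("Play alert voice in area A")  # S1007 (optional step)
    return True

# Minimal stand-ins to exercise the flow:
events = []
triggered = handle_frame(
    frame="frame-0",
    detect_pet=lambda f: (10, 10),   # pretend the pet was detected at (10, 10)
    in_area_a=lambda x, y: True,     # ...and that point lies in area A
    show_prominently=lambda f: events.append("display"),
    remind_user=lambda msg: events.append("remind"),
    show_shortcut=lambda label: events.append("shortcut"),
)
print(triggered, events)
```

Passing the detector and UI actions as callables mirrors the text's point that S1006 and S1007 are optional: a caller can supply no-op functions for them without changing the core flow.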
According to this embodiment of the application, a user can watch over a pet at home while using the smart TV, preventing the pet from entering areas such as bedrooms and kitchens where the user does not want it to go, and reducing the user's anxiety while using the smart TV.
Moreover, when reminding the user of a specific event, the smart TV can provide the user with a shortcut control associated with that event, making it convenient for the user to respond to the event through the smart TV and further reducing the user's anxiety while using it.
Illustratively, FIG. 11 shows a schematic flow chart of a method 1100 of event alerting provided by an embodiment of the present application. As shown in FIG. 11, the method 1100 includes:
S1101, camera C collects a real-time monitoring picture of the kitchen.
S1102, camera C sends the real-time video stream of the kitchen to the smart TV in real time.
The explanation of this step is similar to that of S602 in the embodiment shown in fig. 6, and is not repeated here for brevity.
S1103, the smart TV obtains the real-time monitoring picture of the kitchen from the real-time video stream of the kitchen.
The explanation of this step is similar to that of S603 in the embodiment shown in fig. 6, and is not repeated here for brevity.
S1104, after obtaining the real-time monitoring picture of the kitchen, the smart TV displays it in a small window on its screen.
In some embodiments, when the real-time monitoring picture of the kitchen includes a plurality of cooking appliances, each appliance is associated with a "pin" control. When the user wishes to remotely watch over one or more cooking processes in the kitchen, the user can click the "pin" control associated with the corresponding appliance or appliances, so that the real-time monitoring picture displayed in the small window on the smart TV screen shows the cooking process or processes the user has pinned.
S1105, the smart TV displays on its screen the shortcut controls associated with the real-time monitoring picture of the kitchen.
In some embodiments, when an induction cooker in the real-time monitoring picture of the kitchen is in use and the user has pinned it, the associated shortcut control may be "turn off the power of the induction cooker".
It should be appreciated that throughout the execution of the method 1100, the screen of the smart TV remains in a normal use state, for example, normally playing a TV show or movie, so the user's viewing experience is not affected during kitchen care.
According to this embodiment of the application, a user can remotely watch over the cooking process in the kitchen while using the smart TV, avoiding frequent trips between the kitchen and the area where the smart TV is located, preventing dangerous accidents caused by the user failing to attend to the cooking process in time, and reducing the user's anxiety while using the smart TV.
Moreover, the smart TV can provide the user with shortcut controls associated with the cared-for object, making it convenient for the user to respond through the smart TV and further reducing the user's anxiety while using it.
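The pinning behavior of S1104 and S1105 amounts to filtering the visible cooking appliances down to the ones the user has pinned, and deriving both the small-window feeds and the shortcut controls from that subset. A hedged sketch; the appliance names, the "pin" control, and the shortcut wording are illustrative, not from the application:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Appliance:
    name: str
    in_use: bool
    pinned: bool = False

@dataclass
class KitchenCareView:
    appliances: List[Appliance] = field(default_factory=list)

    def pin(self, name: str) -> None:
        """User clicks the 'pin' control associated with an appliance."""
        for a in self.appliances:
            if a.name == name:
                a.pinned = True

    def small_window_feeds(self) -> List[str]:
        """S1104: the small window shows the pinned cooking processes."""
        return [a.name for a in self.appliances if a.pinned]

    def shortcut_controls(self) -> List[str]:
        """S1105: shortcut controls for pinned appliances that are in use."""
        return [f"Turn off the power of the {a.name}"
                for a in self.appliances if a.pinned and a.in_use]

view = KitchenCareView([
    Appliance("induction cooker", in_use=True),
    Appliance("oven", in_use=False),
    Appliance("rice cooker", in_use=True),
])
view.pin("induction cooker")
print(view.small_window_feeds())
print(view.shortcut_controls())
```

Deriving the shortcut list from the pinned-and-in-use subset matches the text's example, where the "turn off the power" control appears only once the induction cooker is both in use and pinned.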
It will be appreciated that, in other embodiments, if the computing power of the device carrying the monitoring module (such as a camera or a sensor) allows, that device may itself analyze the acquired monitoring data to determine whether a relevant event has occurred, and send real-time monitoring data to a device capable of presentation, such as the smart TV, when the event occurs, so that the presenting device alerts the user through at least one of a reminder picture, a text notification, and a voice reminder. In still other embodiments, with the user's authorization, some of the monitoring data may also be analyzed on a cloud server.
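The placement choice described above, where analysis runs on the monitoring device when its compute allows, on a cloud server only with user authorization, and otherwise on the presenting device, can be expressed as a small decision function. The preference order and names here are assumptions for illustration; the application only lists the alternatives:

```python
def analysis_site(camera_can_analyze: bool,
                  user_authorized_cloud: bool) -> str:
    """Pick where monitoring data is analyzed.

    Assumed preference order: the monitoring device itself if capable,
    then the cloud if the user authorized it, otherwise the presenting
    device (e.g. the smart TV), as in the main embodiments.
    """
    if camera_can_analyze:
        return "camera"
    if user_authorized_cloud:
        return "cloud"
    return "presenting-device"

print(analysis_site(True, False))
print(analysis_site(False, True))
print(analysis_site(False, False))
```

Note that cloud analysis is gated on explicit user authorization, matching the privacy condition stated in the text.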
One or more of the modules or units described herein may be implemented in software, hardware, or a combination of both. When any of the above modules or units is implemented in software, the software exists in the form of computer program instructions stored in a memory, and a processor can execute those instructions to implement the above method flows. The processor may include, but is not limited to, at least one of a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a microcontroller unit (MCU), or an artificial intelligence processor, and each such computing device may include one or more cores for executing software instructions to perform operations or processing. The processor may be built into a system on a chip (SoC) or an application-specific integrated circuit (ASIC), or may be a stand-alone semiconductor chip. In addition to the cores for executing software instructions, the processor may further include necessary hardware accelerators, such as a field-programmable gate array (FPGA), a programmable logic device (PLD), or logic circuits implementing dedicated logic operations.
When the modules or units described herein are implemented in hardware, the hardware may be any one or any combination of a CPU, a microprocessor, a DSP, an MCU, an artificial intelligence processor, an ASIC, an SoC, an FPGA, a PLD, dedicated digital circuitry, a hardware accelerator, or a non-integrated discrete device, which may run the necessary software, or operate independently of software, to perform the above method flows.
When the modules or units described herein are implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.
The foregoing is merely a specific implementation of the present application, and the protection scope of the present application is not limited thereto. Any variation or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed by the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (37)

1. A method for event reminding, wherein the method comprises:
obtaining a real-time monitoring picture of a first object;
when it is determined that a first event occurs on the first object, prominently displaying the real-time monitoring picture of the first object on a screen of a first device, wherein the first device is the device that the user is currently looking at, or the first device is the device that is currently in use, or the first device is the device closest to the user.
2. The method according to claim 1, wherein before determining that the first event occurs on the first object, the method further comprises:
displaying the real-time monitoring picture of the first object in a small window on the screen of the first device;
and the prominently displaying the real-time monitoring picture of the first object on the screen of the first device comprises:
switching the display form of the real-time monitoring picture of the first object from a small window to a large window or full screen, or
switching the display form of the real-time monitoring picture of the first object from a statically displayed small window to a highlighted small window or a flashing small window.
3. The method according to claim 1, wherein before determining that the first event occurs on the first object, the method further comprises:
acquiring the real-time monitoring picture of the first object in the background of the first device;
and the prominently displaying the real-time monitoring picture of the first object on the screen of the first device comprises:
popping up the real-time monitoring picture of the first object on the screen of the first device, the real-time monitoring picture of the first object being displayed on the screen of the first device in the form of a small window, a large window, or full screen, wherein before the real-time monitoring picture of the first object pops up on the screen of the first device, the real-time monitoring picture of the first object is not displayed on the screen of the first device, or is hidden on the screen of the first device.
4. The method according to claim 3, further comprising:
after the real-time monitoring picture of the first object pops up on the screen of the first device, further displaying the real-time monitoring picture of the first object enlarged on the screen of the first device; or
after the real-time monitoring picture of the first object pops up on the screen of the first device, in response to a zoom-in operation by the user, further displaying the real-time monitoring picture of the first object enlarged on the screen of the first device.
5. The method according to any one of claims 1 to 4, further comprising:
determining, by analyzing the real-time monitoring picture of the first object, that the first event occurs on the first object.
6. The method according to claim 5, further comprising:
when the first device is in a screen-off state and it is determined that the first event occurs on the first object, sending first indication information to a second device, the first indication information being used to instruct the second device to remind the user that the first event has occurred on the first object.
7. The method according to any one of claims 1 to 4, further comprising:
receiving second indication information, the second indication information being used to indicate that the first event occurs on the first object;
prominently displaying the real-time monitoring picture of the first object on the screen of the first device based on the second indication information.
8. The method according to any one of claims 1 to 7, wherein the obtaining a real-time monitoring picture of the first object comprises:
receiving, through the Open Network Video Interface Forum (ONVIF) protocol or a custom protocol, the real-time monitoring picture of the first object transmitted by a first camera, the first camera being used to collect the real-time monitoring picture of the first object.
9. The method according to any one of claims 1 to 8, further comprising:
when it is determined that the first event occurs on the first object, displaying a first text reminder on the screen of the first device, the first text reminder being used to remind the user that the first event has occurred on the first object; and/or
when it is determined that the first event occurs on the first object, issuing a first voice reminder through the first device, the first voice reminder being used to remind the user that the first event has occurred on the first object.
10. The method according to any one of claims 1 to 9, wherein while the real-time monitoring picture of the first object is prominently displayed on the screen of the first device, the method further comprises:
displaying one or more shortcut controls on the screen of the first device, the one or more shortcut controls being used for the user to remotely respond to the first event.
11. The method according to any one of claims 1 to 10, wherein:
the first object is a child, and the first event is an event triggered by an activity of the child;
the first object is a pet, and the first event is an event triggered by the pet entering a controlled area; or
the first object is a kitchen, and the first event is an event triggered by a change in a cooking process.
12. The method according to any one of claims 1 to 11, further comprising:
acquiring first sensing data of a first area in real time;
when it is determined according to the first sensing data that a second event occurs in the first area, reminding the user through the first device that the second event has occurred in the first area.
13. The method according to any one of claims 1 to 12, wherein the first device is any one of a smart TV, a tablet, a smartphone, or a central control screen.
14. A method for event reminding, wherein the method comprises:
collecting a real-time monitoring picture of a first object;
determining a first device, the first device being the device that the user is currently looking at, the device that is currently in use, or the device closest to the user;
transmitting the real-time monitoring picture of the first object to the first device through the Open Network Video Interface Forum (ONVIF) protocol or a custom protocol, so that when the first device determines that a first event occurs on the first object, the first device prominently displays the real-time monitoring picture of the first object on its screen.
15. The method according to claim 14, wherein:
the first object is a child, and the first event is an event triggered by an activity of the child;
the first object is a pet, and the first event is an event triggered by the pet entering a controlled area; or
the first object is a kitchen, and the first event is an event triggered by a change in a cooking process.
16. The method according to claim 14 or 15, wherein the first device is any one of a smart TV, a tablet, a smartphone, or a central control screen.
17. An electronic device, comprising:
a processor, configured to obtain a real-time monitoring picture of a first object;
a display, configured to prominently display the real-time monitoring picture of the first object on the screen of the electronic device when it is determined that a first event occurs on the first object, wherein the electronic device is the device that the user is currently looking at, the device that is currently in use, or the device closest to the user.
18. The electronic device according to claim 17, wherein the display is further configured to:
before it is determined that the first event occurs on the first object, display the real-time monitoring picture of the first object in a small window on the screen of the electronic device;
and the display is further specifically configured to:
when it is determined that the first event occurs on the first object, switch the display form of the real-time monitoring picture of the first object from a small window to a large window or full screen, or
when it is determined that the first event occurs on the first object, switch the display form of the real-time monitoring picture of the first object from a statically displayed small window to a highlighted small window or a flashing small window.
19. The electronic device according to claim 17, wherein before it is determined that the first event occurs on the first object, the real-time monitoring picture of the first object is hidden in the background of the electronic device, and the display is further specifically configured to:
when it is determined that the first event occurs on the first object, pop up the real-time monitoring picture of the first object on the screen of the electronic device, the real-time monitoring picture of the first object being displayed on the screen in the form of a small window, a large window, or full screen, wherein before the real-time monitoring picture of the first object pops up on the screen, it is not displayed on the screen, or is hidden on the screen.
20. The electronic device according to claim 19, wherein the display is further configured to:
after the real-time monitoring picture of the first object pops up on the screen of the electronic device, further display the real-time monitoring picture of the first object enlarged on the screen; or
after the real-time monitoring picture of the first object pops up on the screen of the electronic device, in response to a zoom-in operation by the user, further display the real-time monitoring picture of the first object enlarged on the screen.
21. The electronic device according to any one of claims 18 to 20, wherein the processor is further configured to:
determine, by analyzing the real-time monitoring picture of the first object, that the first event occurs on the first object.
22. The electronic device according to claim 21, further comprising:
a transceiver, configured to, when the electronic device is in a screen-off state and it is determined that the first event occurs on the first object, send first indication information to a second device, the first indication information being used to instruct the second device to remind the user that the first event has occurred on the first object.
23. The electronic device according to any one of claims 18 to 20, further comprising:
a transceiver, configured to receive second indication information, the second indication information being used to indicate that the first event occurs on the first object.
24. The electronic device according to any one of claims 18 to 23, wherein the processor is specifically configured to:
obtain, through the Open Network Video Interface Forum (ONVIF) protocol or a custom protocol, the real-time monitoring picture of the first object transmitted by a first camera, the first camera being used to collect the real-time monitoring picture of the first object.
25. The electronic device according to any one of claims 18 to 24, wherein the display is further configured to:
when it is determined that the first event occurs on the first object, display a first text reminder on the screen of the electronic device, the first text reminder being used to remind the user that the first event has occurred on the first object; and/or
the electronic device further comprises:
a sounder, configured to issue a first voice reminder when it is determined that the first event occurs on the first object, the first voice reminder being used to remind the user that the first event has occurred on the first object.
26. The electronic device according to any one of claims 18 to 25, wherein the display is further configured to:
while the real-time monitoring picture of the first object is prominently displayed on the screen of the electronic device, display one or more shortcut controls on the screen, the one or more shortcut controls being used for the user to remotely respond to the first event.
27. The electronic device according to any one of claims 18 to 26, wherein:
the first object is a child, and the first event is an event triggered by an activity of the child;
the first object is a pet, and the first event is an event triggered by the pet entering a controlled area; or
the first object is a kitchen, and the first event is an event triggered by a change in a cooking process.
28. The electronic device according to claim 25, wherein the processor is further configured to:
acquire first sensing data of a first area in real time;
and the display and/or the sounder are further configured to:
when it is determined according to the first sensing data that a second event occurs in the first area, remind the user that the second event has occurred in the first area.
29. The electronic device according to any one of claims 18 to 28, wherein the electronic device is any one of a smart TV, a tablet, a smartphone, or a central control screen.
30. A camera device, comprising:
a collection module, configured to collect a real-time monitoring picture of a first object;
a determining module, configured to determine a first device, the first device being the device that the user is currently looking at, the device that is currently in use, or the device closest to the user;
a transmission module, configured to transmit the real-time monitoring picture of the first object to the first device through the Open Network Video Interface Forum (ONVIF) protocol or a custom protocol, so that when the first device determines that a first event occurs on the first object, the first device prominently displays the real-time monitoring picture of the first object on its screen.
31. The camera device according to claim 30, wherein:
the first object is a child, and the first event is an event triggered by an activity of the child;
the first object is a pet, and the first event is an event triggered by the pet entering a controlled area; or
the first object is a kitchen, and the first event is an event triggered by a change in a cooking process.
32. The camera device according to claim 30 or 31, wherein the first device is any one of a smart TV, a tablet, a smartphone, or a central control screen.
33. An electronic device, comprising:
one or more processors;
one or more memories;
and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories and include instructions that, when executed by the one or more processors, cause the electronic device to perform the method according to any one of claims 1 to 13, or the method according to any one of claims 14 to 16.
34. A system, comprising:
the electronic device according to any one of claims 17 to 29; and
one or more camera devices according to any one of claims 30 to 32.
35. A computer-readable storage medium, wherein the storage medium stores a program or instructions that, when run, implement the method according to any one of claims 1 to 13, or the method according to any one of claims 14 to 16.
36. A chip, wherein the chip stores instructions that, when run, implement the method according to any one of claims 1 to 13, or the method according to any one of claims 14 to 16.
37. A computer program product, wherein the computer program product stores a program or instructions that, when run, implement the method according to any one of claims 1 to 13, or the method according to any one of claims 14 to 16.
CN202410283815.9A | Priority: 2024-03-12 | Filed: 2024-03-12 | Event reminder method, electronic device and system | Status: Pending | Published as CN120639928A (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202410283815.9A / CN120639928A (en) | 2024-03-12 | 2024-03-12 | Event reminder method, electronic device and system
PCT/CN2025/071238 / WO2025189946A1 (en) | 2024-03-12 | 2025-01-08 | Event reminding method, electronic device, and system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202410283815.9A / CN120639928A (en) | 2024-03-12 | 2024-03-12 | Event reminder method, electronic device and system

Publications (1)

Publication Number | Publication Date
CN120639928A (en) | 2025-09-12

Family

ID=96966599

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202410283815.9A (Pending, CN120639928A (en)) | 2024-03-12 | 2024-03-12 | Event reminder method, electronic device and system

Country Status (2)

Country | Link
CN (1) | CN120639928A (en)
WO | WO2025189946A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR102058918B1 (en) * | 2012-12-14 | 2019-12-26 | 삼성전자주식회사 | Home monitoring method and apparatus
KR20150046814A (en) * | 2013-10-22 | 2015-05-04 | 한국전자통신연구원 | System for alarming pedestrian of approaching bicycle
CN206820884U (en) * | 2017-05-03 | 2017-12-29 | 南京朗坤自动化有限公司 | Intelligent video monitoring system
CN112905133A (en) * | 2019-12-03 | 2021-06-04 | 青岛海尔智能技术研发有限公司 | Method and device for display control and terminal equipment
CN114374930B (en) * | 2020-10-16 | 2023-05-16 | 华为技术有限公司 | Message prompting method, system, vehicle-mounted terminal and computer readable storage medium
CN114845064A (en) * | 2022-03-22 | 2022-08-02 | 南京巨鲨显示科技有限公司 | A display device and display method thereof

Also Published As

Publication number | Publication date
WO2025189946A1 (en) | 2025-09-18

Similar Documents

Publication | Title
CN109814766B (en) | Application display method and electronic device
CN111650840B (en) | Intelligent household scene arranging method and terminal
CN113272745B (en) | Smart home equipment sharing system and method and electronic equipment
CN114173204B (en) | Message prompting method, electronic equipment and system
CN112399390B (en) | Method and related device for Bluetooth back-up connection
CN110543289B (en) | Method for controlling volume and electronic equipment
KR102813045B1 (en) | Energy-efficient display processing method and device
CN117014567A (en) | Video call display method and related device applied to electronic equipment
WO2021213164A1 (en) | Application interface interaction method, electronic device, and computer readable storage medium
CN113885759A (en) | Notification message processing method, device, system, and computer-readable storage medium
CN114115770B (en) | Display control method and related device
CN115129410B (en) | Desktop wallpaper configuration method and device, electronic equipment and readable storage medium
WO2020063605A1 (en) | Method for generating screenshot, control method, and electronic device
JP7234379B2 (en) | Methods and associated devices for accessing networks by smart home devices
CN113496426A (en) | Service recommendation method, electronic device and system
CN114449110B (en) | Control method and device of electronic equipment
CN113986369B (en) | Internet of things equipment control method and system, electronic equipment and storage medium
CN111930335A (en) | Sound adjusting method and device, computer readable medium and terminal equipment
US20240295905A1 (en) | Screen display method and electronic device
CN114063806A (en) | False touch prevention method and electronic equipment
CN114237776B (en) | Interactive methods, devices and electronic devices
CN113572798B (en) | Device control method, system, device, and storage medium
WO2024114219A1 (en) | Method for reproducing scene of journey, and electronic device
CN120639928A (en) | Event reminder method, electronic device and system
CN113934352 (en) | Notification message processing method, electronic device, and computer-readable storage medium

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination

