US11243959B1 - Generating statistics using electronic device data - Google Patents

Generating statistics using electronic device data

Info

Publication number
US11243959B1
US11243959B1
Authority
US
United States
Prior art keywords
criminal
statistics
data
image data
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/239,036
Inventor
Aviv Gilboa
Mark Troughton
Eric S. Kuhn
Darrell Sommerlatt
Alex Jacobson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amazon Technologies Inc
Original Assignee
Amazon Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amazon Technologies Inc
Priority to US16/239,036
Assigned to Amazon Technologies, Inc. Assignors: Troughton, Mark; Kuhn, Eric; Sommerlatt, Darrell; Gilboa, Aviv; Jacobson, Alex
Application granted
Publication of US11243959B1
Status: Active
Adjusted expiration

Abstract

The present application is directed to techniques and processes for using various types of data to generate criminal statistics associated with a geographic area. For instance, a system may store first criminal statistics for a first geographic area. The system may further store image data generated by an electronic device and information describing the image data. Next, the system may determine a second geographic area for generating second criminal statistics, where at least a portion of the first geographic area is located within the second geographic area. The system may then determine that at least a portion of the first criminal statistics occurred in the second geographic area and that the electronic device is located in the second geographic area. Based on these determinations, the system may generate the second criminal statistics, where the second criminal statistics include at least a portion of the first criminal statistics and the information.
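The data flow described in the abstract can be sketched as follows. This is a minimal illustration only; the function name, data shapes, and area predicate are hypothetical assumptions, not part of the patent.

```python
# Hypothetical sketch of the statistics-generation flow described above.
# All names and data shapes are illustrative assumptions.

def generate_second_statistics(first_stats, devices, in_second_area):
    """Combine first criminal statistics and device data for a second area.

    first_stats: list of dicts with 'location' and 'category' keys.
    devices: list of dicts with 'location' and 'image_info' keys.
    in_second_area: predicate returning True if a location lies in the area.
    """
    # Keep only the first statistics that occurred in the second area.
    stats_in_area = [s for s in first_stats if in_second_area(s["location"])]
    # Keep image-data descriptions from devices located in the second area.
    device_info = [d["image_info"] for d in devices
                   if in_second_area(d["location"])]
    # The second statistics include a portion of the first statistics
    # plus the information describing the image data.
    return {"statistics": stats_in_area, "device_information": device_info}
```

A caller would supply a geographic predicate (for example, a bounding-box or polygon test) for the second area and receive the merged statistics back.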

Description

RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application No. 62/613,466, filed Jan. 4, 2018, titled “INTEGRATING AUDIO/VIDEO RECORDING AND COMMUNICATION DEVICE DATA INTO CRIMINAL STATISTICS,” the entire contents of which are incorporated herein by reference.
BACKGROUND
Home security is a concern for many homeowners and renters. Those seeking to protect or monitor their homes often wish to have video and audio communications with visitors, for example, those visiting an external door or entryway. Audio/Video (A/V) recording and communication devices, such as doorbells, provide this functionality, and may also aid in crime detection and prevention. For example, audio and/or video captured by an A/V recording and communication device may be uploaded to the cloud and recorded on a remote server. Subsequent review of the A/V footage may aid law enforcement in capturing perpetrators of home burglaries and other crimes. Further, the presence of one or more A/V recording and communication devices on the exterior of a home, such as a doorbell unit at the entrance to the home, acts as a powerful deterrent against would-be burglars.
BRIEF DESCRIPTION OF THE DRAWINGS
The various example embodiments for integrating audio/video (A/V) recording and communication device data into criminal statistics will now be discussed in detail, with an emphasis on highlighting their advantageous features. These embodiments depict novel and non-obvious techniques for integrating A/V recording and communication device data into criminal statistics, as shown in the accompanying drawings, which are for illustrative purposes only. These drawings include the following figures, in which like numerals indicate like parts:
FIG. 1 is a functional block diagram illustrating an example system for streaming and storing A/V content captured by an audio/video (A/V) recording and communication device according to various aspects of the present disclosure;
FIG. 2 is a flowchart illustrating an example process for streaming and storing A/V content from an A/V recording and communication device according to various aspects of the present disclosure;
FIG. 3 is a front view of an example A/V recording and communication doorbell according to various aspects of the present disclosure;
FIG. 4 is a rear view of the example A/V recording and communication doorbell of FIG. 3;
FIG. 5 is a functional block diagram of example components of the A/V recording and communication doorbell of FIGS. 3 and 4;
FIG. 6 is an upper front perspective view of an example A/V recording and communication security camera according to various aspects of the present disclosure;
FIG. 7 is a functional block diagram of example components of the A/V recording and communication security camera of FIG. 6;
FIG. 8 is a functional block diagram of example components of a floodlight controller with A/V recording and communication features according to various aspects of the present disclosure;
FIG. 9 is an upper front perspective view of an example floodlight controller with A/V recording and communication features according to various aspects of the present disclosure;
FIG. 10 is a front elevation view of the example floodlight controller with A/V recording and communication features of FIG. 9 in combination with a floodlight device according to various aspects of the present disclosure;
FIG. 11 is a functional block diagram illustrating an example system for communicating in a network according to various aspects of the present disclosure;
FIG. 12 is a functional block diagram illustrating one example embodiment of an A/V recording and communication device according to various aspects of the present disclosure;
FIG. 13 is a functional block diagram illustrating one example embodiment of a hub device according to various aspects of the present disclosure;
FIG. 14 is a functional block diagram illustrating one example embodiment of a backend server according to various aspects of the present disclosure;
FIG. 15 is a functional block diagram illustrating one example embodiment of a first client device according to various aspects of the present disclosure;
FIG. 16 is a functional block diagram illustrating one example embodiment of a second client device according to various aspects of the present disclosure;
FIG. 17 is a screenshot of a map illustrating a plurality of network areas according to an aspect of the present disclosure;
FIG. 18 is a sample screenshot of a graphical user interface (GUI) associated with a process for requesting criminal statistics associated with a geographic area according to various aspects of the present disclosure;
FIG. 19 is a screenshot of a GUI illustrating an example of providing a notification according to various aspects of the present disclosure;
FIG. 20 is a screenshot of a GUI illustrating an example of providing criminal statistics that are organized according to categories according to various aspects of the present disclosure;
FIG. 21 is a screenshot of a GUI illustrating an example of providing criminal statistics that are specific to one of the categories according to various aspects of the present disclosure;
FIG. 22 is a screenshot of a GUI illustrating an example of providing options for sharing criminal statistics according to various aspects of the present disclosure;
FIG. 23 is a screenshot of a GUI illustrating an example of transmitting the criminal statistics to another user according to various aspects of the present disclosure;
FIG. 24 is a flowchart illustrating an example process for integrating A/V recording and communication device data into criminal statistics according to various aspects of the present disclosure;
FIG. 25 is a flowchart illustrating an example process for utilizing first criminal statistics to create second criminal statistics for a user according to various aspects of the present disclosure;
FIG. 26 is a flowchart illustrating an example process for integrating A/V recording and communication device data into criminal statistics according to various aspects of the present disclosure;
FIG. 27 is a flowchart illustrating an example process for requesting criminal statistics according to various aspects of the present disclosure;
FIG. 28 is a flowchart illustrating an example process for displaying criminal statistics according to various aspects of the present disclosure;
FIG. 29 is a functional block diagram of a client device on which example embodiments may be implemented according to various aspects of the present disclosure; and
FIG. 30 is a functional block diagram of a general-purpose computing system on which example embodiments may be implemented according to various aspects of the present disclosure.
DETAILED DESCRIPTION
The following detailed description describes the present embodiments with reference to the drawings. In the drawings, reference numbers label elements of the present embodiments. These reference numbers are reproduced below in connection with the discussion of the corresponding drawing features.
With reference to FIG. 1, the present embodiments include an audio/video (A/V) recording and communication device 102. While the present disclosure provides numerous examples of methods and systems including A/V recording and communication doorbells, the present embodiments are equally applicable to A/V recording and communication devices other than doorbells. For example, the present embodiments may include one or more A/V recording and communication security cameras instead of, or in addition to, one or more A/V recording and communication doorbells. An example A/V recording and communication security camera may include substantially all of the structure and/or functionality of the doorbells described herein, but without the front button and related components. In another example, the present embodiments may include one or more A/V recording and communication floodlight controllers instead of, or in addition to, one or more A/V recording and communication doorbells.
The A/V recording and communication device 102 may be located near the entrance to a structure (not shown), such as a dwelling, a business, a storage facility, etc. The A/V recording and communication device 102 includes a camera 104, a microphone 106, and a speaker 108. The camera 104 may comprise, for example, a high definition (HD) video camera, such as one capable of capturing video images at an image display resolution of 720p, 1080p, 4K, or any other image display resolution. While not shown, the A/V recording and communication device 102 may also include other hardware and/or components, such as a housing, a communication module (which may facilitate wired and/or wireless communication with other devices), one or more motion sensors (and/or other types of sensors), a button, etc. The A/V recording and communication device 102 may, in some examples, further include componentry and/or functionality similar to that of the wireless communication doorbells described in US Patent Application Publication Nos. 2015/0022620 (application Ser. No. 14/499,828) and 2015/0022618 (application Ser. No. 14/334,922), both of which are incorporated herein by reference in their entireties as if fully set forth.
With further reference to FIG. 1, the A/V recording and communication device 102 communicates with a user's network 110, which may be, for example, a wired and/or wireless network. If the user's network 110 is wireless, or includes a wireless component, the network 110 may be, for example, a Wi-Fi network compatible with the IEEE 802.11 standard and/or other wireless communication standard(s). The user's network 110 is connected to another network 112, which may comprise, for example, the Internet and/or a public switched telephone network (PSTN). As described below, the A/V recording and communication device 102 may communicate with the user's client device 114 via the user's network 110 and/or the network 112 (Internet/PSTN). The user's client device 114 may comprise, for example, a mobile telephone (may also be referred to as a cellular telephone), such as a smartphone, a personal digital assistant (PDA), or another communication device. The user's client device 114 may comprise a display (not shown) and related components capable of displaying streaming and/or recorded video images. The user's client device 114 may also comprise a speaker and related components capable of broadcasting streaming and/or recorded audio, and may also comprise a microphone.
The A/V recording and communication device 102 may also communicate, via the user's network 110 and the network 112 (Internet/PSTN), with a network(s) 116 of servers and/or backend devices, such as (but not limited to) one or more remote storage devices 118 (may be referred to interchangeably as "cloud storage device(s)"), one or more backend servers 120, and one or more backend APIs 122. While FIG. 1 illustrates the storage device 118, the server 120, and the backend API 122 as components separate from the network 116, it is to be understood that the storage device 118, the server 120, and/or the backend API 122 may be considered to be components of the network 116.
The network 116 may be any wireless network, any wired network, or a combination thereof, configured to operatively couple the above-mentioned modules, devices, and systems as shown in FIG. 1. For example, the network 116 may include one or more of the following: a PSTN (public switched telephone network), the Internet, a local intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a MAN (Metropolitan Area Network), a virtual private network (VPN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1, or E3 line, a Digital Data Service (DDS) connection, a DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port such as a V.90, V.34, or V.34bis analog modem connection, a cable modem, an ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection. Furthermore, communications may also include links to any of a variety of wireless networks, including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), LTE, VoLTE, LoRaWAN, LPWAN, RPMA, LTE Cat-"X" (e.g., LTE Cat 1, LTE Cat 0, LTE Cat M1, LTE Cat NB1), CDMA (Code Division Multiple Access), TDMA (Time Division Multiple Access), FDMA (Frequency Division Multiple Access), and/or OFDMA (Orthogonal Frequency Division Multiple Access) cellular phone networks, GPS, CDPD (cellular digital packet data), RIM (Research In Motion, Limited) duplex paging networks, Bluetooth networks, ZigBee networks, or IEEE 802.11-based radio frequency networks.
The network may further include or interface with any one or more of the following: an RS-232 serial connection, an IEEE-1394 (FireWire) connection, a Fibre Channel connection, an IrDA (infrared) port, a SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection, or another wired or wireless, digital or analog, interface or connection, mesh or Digi® networking.
According to one or more aspects of the present embodiments, when a person (may be referred to interchangeably as "visitor") arrives at the A/V recording and communication device 102, the A/V recording and communication device 102 detects the visitor's presence and begins capturing video images within a field of view of the camera 104. The A/V recording and communication device 102 may also capture audio through the microphone 106. The A/V recording and communication device 102 may detect the visitor's presence by, for example, detecting motion using the camera 104 and/or a motion sensor, and/or by detecting that the visitor has pressed a front button of the A/V recording and communication device 102 (if the A/V recording and communication device 102 is a doorbell).
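The trigger conditions just described (motion sensed by the camera or a motion sensor, or a front-button press on doorbell-type devices) can be summarized in a small sketch. The function and parameter names are hypothetical, chosen only for illustration:

```python
def should_start_capture(motion_detected, button_pressed, is_doorbell):
    """Hypothetical sketch: begin A/V capture when motion is sensed,
    or when the front button is pressed on a doorbell-type device.
    Button presses on non-doorbell devices (e.g., a security camera
    without a front button) do not apply."""
    return motion_detected or (is_doorbell and button_pressed)
```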
In response to the detection of the visitor, the A/V recording and communication device 102 transmits an alert to the user's client device 114 (FIG. 1) via the user's network 110 and the network 112. The A/V recording and communication device 102 also transmits streaming video, and may also transmit streaming audio, to the user's client device 114. If the user answers the alert, two-way audio communication may then occur between the visitor and the user through the A/V recording and communication device 102 and the user's client device 114. The user may view the visitor throughout the duration of the call, but the visitor may not see the user (unless the A/V recording and communication device 102 includes a display, which it may in some embodiments).
The video images captured by the camera 104 of the A/V recording and communication device 102 (and the audio captured by the microphone 106) may be uploaded to the cloud and recorded on the remote storage device 118 (FIG. 1). In some examples, the video images and/or audio may additionally or alternatively be stored locally by the A/V recording and communication device 102, a client device of the user, or another device in communication with the A/V recording and communication device 102. In some embodiments, the video and/or audio may be recorded on the remote storage device 118 even if the user chooses to ignore the alert sent to his or her client device 114.
With further reference to FIG. 1, the system may further comprise a backend API 122 including one or more components. A backend API (application programming interface) may comprise, for example, a server (e.g., a real server, a virtual machine, or a machine running in a cloud infrastructure as a service), or multiple servers networked together, exposing at least one API to the client(s) accessing it. These servers may include components such as application servers (e.g., software servers), depending upon what other components are included, such as a caching layer, database layers, or other components. A backend API may, for example, comprise many such applications, each of which communicates with the others using their public APIs. In some embodiments, the API backend may hold the bulk of the user data and offer the user management capabilities, leaving the clients to have very limited state.
The backend API 122 illustrated in FIG. 1 may include one or more APIs. An API is a set of routines, protocols, and tools for building software and applications. An API expresses a software component in terms of its operations, inputs, outputs, and underlying types, defining functionalities that are independent of their respective implementations, which allows definitions and implementations to vary without compromising the interface. Advantageously, an API may provide a programmer with access to an application's functionality without the programmer needing to modify the application itself, or even understand how the application works. An API may be for a web-based system, an operating system, or a database system, and it provides facilities to develop applications for that system using a given programming language. In addition to accessing databases or computer hardware such as hard disk drives or video cards, an API may ease the work of programming GUI components. For example, an API may facilitate the integration of new features into existing applications (a so-called "plug-in API"). An API may also assist otherwise distinct applications with sharing data, which may help to integrate and enhance the functionalities of the applications.
The backend API 122 illustrated in FIG. 1 may further include one or more services (also referred to as network services). A network service is an application that provides data storage, manipulation, presentation, communication, and/or other capability. Network services are often implemented using a client-server architecture based on application-layer network protocols. Each service may be provided by a server component running on one or more computers (such as a dedicated server computer offering multiple services) and accessed via a network by client components running on other devices. However, the client and server components may both be run on the same machine. Clients and servers may have a user interface, and sometimes other hardware associated with them.
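The core idea in the two preceding paragraphs, that an API exposes named operations whose implementations can change without breaking the interface, can be illustrated with a minimal in-process sketch. The class, route names, and URL below are invented for illustration and are not drawn from the patent:

```python
# Minimal illustration of an API as a set of named operations whose
# implementations can vary behind a stable interface (all names hypothetical).

class BackendAPI:
    def __init__(self):
        self._routes = {}

    def expose(self, name):
        """Register a function under a stable operation name."""
        def register(fn):
            self._routes[name] = fn
            return fn
        return register

    def call(self, name, **kwargs):
        """Clients invoke operations by name, not by implementation."""
        return self._routes[name](**kwargs)

api = BackendAPI()

@api.expose("get_video_url")
def _get_video_url(video_id):
    # The storage backend could change entirely; as long as the
    # "get_video_url" operation keeps its inputs and outputs, clients
    # that depend on the interface are unaffected.
    return f"https://storage.example.com/videos/{video_id}"
```

A client would then call `api.call("get_video_url", video_id=...)` without knowing or caring how the URL is produced.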
FIG. 2 is a flowchart illustrating a process for streaming and storing A/V content from the A/V recording and communication device 102 according to various aspects of the present disclosure. At block B202, the A/V recording and communication device 102 detects the visitor's presence and captures video images within a field of view of the camera 104. The A/V recording and communication device 102 may also capture audio through the microphone 106. As described above, the A/V recording and communication device 102 may detect the visitor's presence by detecting motion using the camera 104 and/or a motion sensor, and/or by detecting that the visitor has pressed a front button of the A/V recording and communication device 102 (if the A/V recording and communication device 102 is a doorbell). Also, as described above, the video recording/capture may begin when the visitor is detected, or may begin earlier, as described below.
At block B204, a communication module of the A/V recording and communication device 102 transmits a connection request, via the user's network 110 and the network 112, to a device in the network 112. For example, the network device to which the request is sent may be a server such as the server 120. The server 120 may comprise a computer program and/or a machine that waits for requests from other machines or software (clients) and responds to them. A server typically processes data. One purpose of a server is to share data and/or hardware and/or software resources among clients. This architecture is called the client-server model. The clients may run on the same computer or may connect to the server over a network. Examples of computing servers include database servers, file servers, mail servers, print servers, web servers, game servers, and application servers. The term server may be construed broadly to include any computerized process that shares a resource with one or more client processes. In another example, the network device to which the request is sent may be an API such as the backend API 122, which is described above.
In response to the request, at block B206 the network device may connect the A/V recording and communication device 102 to the user's client device 114 through the user's network 110 and the network 112. At block B208, the A/V recording and communication device 102 may record available audio and/or video data using the camera 104, the microphone 106, and/or any other available device/sensor. At block B210, the audio and/or video data is transmitted (streamed) from the A/V recording and communication device 102 to the user's client device 114 via the user's network 110 and the network 112. At block B212, the user may receive a notification on his or her client device 114 with a prompt to either accept or deny the call.
At block B214, the process determines whether the user has accepted or denied the call. If the user denies the notification, then the process advances to block B216, where the audio and/or video data is recorded and stored at a cloud server. The session then ends at block B218 and the connection between the A/V recording and communication device 102 and the user's client device 114 is terminated. If, however, the user accepts the notification, then at block B220 the user communicates with the visitor through the user's client device 114 while audio and/or video data captured by the camera 104, the microphone 106, and/or other devices/sensors is streamed to the user's client device 114. At the end of the call, the user may terminate the connection between the user's client device 114 and the A/V recording and communication device 102, and the session ends at block B220. In some embodiments, the audio and/or video data may be recorded and stored at a cloud server (block B216) even if the user accepts the notification and communicates with the visitor through the user's client device 114.
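The accept/deny branch described above can be sketched as a single decision function. This is a hypothetical simplification; the function name, the `record_to_cloud` flag, and the action strings are invented for illustration, and the block labels in the comments merely point back to the flowchart text:

```python
def handle_call(user_accepts, record_to_cloud=True):
    """Hypothetical sketch of the notification flow described above.

    Returns the ordered list of actions taken for the session.
    """
    actions = ["notify_user"]                   # cf. block B212: prompt on client device
    if user_accepts:
        actions.append("stream_two_way_audio")  # cf. block B220: user talks to visitor
    if record_to_cloud or not user_accepts:
        actions.append("record_to_cloud")       # cf. block B216: store A/V at a cloud server
    actions.append("end_session")               # terminate the connection
    return actions
```

Note that a denied call is still recorded to the cloud, and (per the last sentence of the flowchart description) an accepted call may also be recorded, which is why recording is controlled independently of the accept/deny branch.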
FIGS. 3-5 illustrate an audio/video (A/V) communication doorbell 302 (also referred to as a "doorbell 302" or "video doorbell 302") according to an aspect of the present embodiments. FIG. 3 is a front view, FIG. 4 is a rear view, and FIG. 5 is a functional block diagram of the components within or in communication with the doorbell 302. With reference to FIG. 3, the doorbell 302 includes a faceplate 304 mounted to a back plate 402 (FIG. 4). The faceplate 304 may comprise any suitable material, including, without limitation, metals, such as brushed aluminum or stainless steel, metal alloys, or plastics. The faceplate 304 protects the internal contents of the doorbell 302 and serves as an exterior front surface of the doorbell 302.
With reference to FIG. 3, the faceplate 304 includes a button 306 and a light pipe 308. The button 306 and the light pipe 308 may have various profiles that may or may not match the profile of the faceplate 304. The light pipe 308 may comprise any suitable material, including, without limitation, transparent plastic, that is capable of allowing light produced within the doorbell 302 to pass through. The light may be produced by one or more light-emitting components, such as light-emitting diodes (LED's), contained within the doorbell 302, as further described below. The button 306 may make contact with a button actuator (not shown) located within the doorbell 302 when the button 306 is pressed by a visitor. When pressed, the button 306 may trigger one or more functions of the doorbell 302, as further described below.
With further reference to FIG. 3, the doorbell 302 further includes an enclosure 310 that engages the faceplate 304. In the illustrated embodiment, the enclosure 310 abuts an upper edge 312 of the faceplate 304, but in alternative embodiments one or more gaps between the enclosure 310 and the faceplate 304 may facilitate the passage of sound and/or light through the doorbell 302. The enclosure 310 may comprise any suitable material, but in some embodiments the material of the enclosure 310 preferably permits infrared light to pass through from inside the doorbell 302 to the environment and vice versa. The doorbell 302 further includes a lens 314. In some embodiments, the lens 314 may comprise a Fresnel lens, which may be patterned to deflect incoming light into one or more infrared sensors located within the doorbell 302. The doorbell 302 further includes a camera 316, which captures video data when activated, as described below.
FIG. 4 is a rear view of the doorbell 302, according to an aspect of the present embodiments. As illustrated, the enclosure 310 may extend from the front of the doorbell 302 around to the back thereof and may fit snugly around a lip of the back plate 402. The back plate 402 may comprise any suitable material, including, without limitation, metals, such as brushed aluminum or stainless steel, metal alloys, or plastics. The back plate 402 protects the internal contents of the doorbell 302 and serves as an exterior rear surface of the doorbell 302. The faceplate 304 may extend from the front of the doorbell 302 and at least partially wrap around the back plate 402, thereby allowing a coupled connection between the faceplate 304 and the back plate 402. The back plate 402 may have indentations in its structure to facilitate the coupling.
With further reference to FIG. 4, spring contacts 404 may provide power to the doorbell 302 when mated with other conductive contacts connected to a power source. The spring contacts 404 may comprise any suitable conductive material, including, without limitation, copper, and may be capable of deflecting when contacted by an inward force, for example the insertion of a mating element. The doorbell 302 further comprises a connector 406, such as a micro-USB or other connector, whereby power and/or data may be supplied to and from the components within the doorbell 302. A reset button 408 may be located on the back plate 402, and may make contact with a button actuator (not shown) located within the doorbell 302 when the reset button 408 is pressed. When the reset button 408 is pressed, it may trigger one or more functions, as described below.
FIG. 5 is a functional block diagram of the components within or in communication with the doorbell 302, according to an aspect of the present embodiments. A bracket PCB 502 may comprise an accelerometer 504, a barometer 506, a humidity sensor 508, and a temperature sensor 510. The accelerometer 504 may be one or more sensors capable of sensing motion and/or acceleration. The barometer 506 may be one or more sensors capable of determining the atmospheric pressure of the surrounding environment in which the bracket PCB 502 may be located. The humidity sensor 508 may be one or more sensors capable of determining the amount of moisture present in the atmospheric environment in which the bracket PCB 502 may be located. The temperature sensor 510 may be one or more sensors capable of determining the temperature of the ambient environment in which the bracket PCB 502 may be located. The bracket PCB 502 may be located outside the housing of the doorbell 302 so as to reduce interference from heat, pressure, moisture, and/or other stimuli generated by the internal components of the doorbell 302.
With further reference to FIG. 5, the bracket PCB 502 may further comprise terminal screw inserts 512, which may be configured to receive terminal screws (not shown) for transmitting power to electrical contacts on a mounting bracket (not shown). The bracket PCB 502 may be electrically and/or mechanically coupled to the power PCB 514 through the terminal screws, the terminal screw inserts 512, the spring contacts 404, and the electrical contacts. The terminal screws may receive electrical wires located at the surface to which the doorbell 302 is mounted, such as the wall of a building, so that the doorbell may receive electrical power from the building's electrical system. Upon the terminal screws being secured within the terminal screw inserts 512, power may be transferred to the bracket PCB 502, and to all of the components associated therewith, including the electrical contacts. The electrical contacts may transfer electrical power to the power PCB 514 by mating with the spring contacts 404.
With further reference to FIG. 5, the front PCB 516 may comprise a light sensor 518, one or more light-emitting components, such as LED's 520, one or more speakers 522, and a microphone 524. The light sensor 518 may be one or more sensors capable of detecting the level of ambient light of the surrounding environment in which the doorbell 302 may be located. The LED's 520 may be one or more light-emitting diodes capable of producing visible light when supplied with power. The speakers 522 may be any electromechanical device capable of producing sound in response to an electrical signal input. The microphone 524 may be an acoustic-to-electric transducer or sensor capable of converting sound waves into an electrical signal. When activated, the LED's 520 may illuminate the light pipe 308 (FIG. 3). The front PCB 516 and all components thereof may be electrically coupled to the power PCB 514, thereby allowing data and/or power to be transferred to and from the power PCB 514 and the front PCB 516.
The speakers 522 and the microphone 524 may be coupled to the camera processor 526 through an audio CODEC 528. For example, the transfer of digital audio between the user's client device 114 and the speakers 522 and the microphone 524 may be compressed and decompressed using the audio CODEC 528, coupled to the camera processor 526. Once compressed by the audio CODEC 528, digital audio data may be sent through the communication module 530 to the network 112, routed by the one or more servers 120, and delivered to the user's client device 114. When the user speaks, after being transferred through the network 112, the digital audio data is decompressed by the audio CODEC 528 and emitted to the visitor via the speakers 522.
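The round-trip audio path described above can be illustrated with a minimal sketch. The function names are hypothetical, and `zlib` merely stands in for the audio CODEC (the disclosure does not specify the compression scheme):

```python
# Illustrative sketch of the two-way audio path: outgoing audio is
# compressed by the CODEC before the communication module sends it to
# the network, and incoming audio from the user's client device is
# decompressed before being emitted through the speakers.
# zlib is a stand-in for the actual audio CODEC (an assumption).
import zlib

def codec_compress(pcm_bytes):
    # Audio CODEC compresses captured microphone samples for transfer.
    return zlib.compress(pcm_bytes)

def codec_decompress(compressed):
    # Audio CODEC decompresses received audio before playback.
    return zlib.decompress(compressed)

spoken = b"\x01\x02" * 500       # dummy PCM samples from the microphone
sent = codec_compress(spoken)    # payload handed to the communication module
played = codec_decompress(sent)  # audio emitted via the speakers at the far end
```

Because the path is symmetric, the same CODEC handles both directions; the only difference is which endpoint compresses and which decompresses.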
With further reference to FIG. 5, the power PCB 514 may comprise a power management module 532, a microcontroller 534 (may also be referred to as “processor,” “CPU,” or “controller”), the communication module 530, and power PCB non-volatile memory 536. In certain embodiments, the power management module 532 may comprise an integrated circuit capable of arbitrating between multiple voltage rails, thereby selecting the source of power for the doorbell 302. The battery 538, the spring contacts 404, and/or the connector 406 may each provide power to the power management module 532. The power management module 532 may have separate power rails dedicated to the battery 538, the spring contacts 404, and the connector 406. In one aspect of the present disclosure, the power management module 532 may continuously draw power from the battery 538 to power the doorbell 302, while at the same time routing power from the spring contacts 404 and/or the connector 406 to the battery 538, thereby allowing the battery 538 to maintain a substantially constant level of charge. Alternatively, the power management module 532 may continuously draw power from the spring contacts 404 and/or the connector 406 to power the doorbell 302, while only drawing from the battery 538 when the power from the spring contacts 404 and/or the connector 406 is low or insufficient. Still further, the battery 538 may comprise the sole source of power for the doorbell 302. In such embodiments, the spring contacts 404 may not be connected to a source of power. When the battery 538 is depleted of its charge, it may be recharged, such as by connecting a power source to the connector 406. The power management module 532 may also serve as a conduit for data between the connector 406 and the microcontroller 534.
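The arbitration behavior described above (prefer external power, fall back to the battery only when external power is low or insufficient) can be sketched as follows. The rail names, threshold value, and function name are illustrative assumptions, not taken from the disclosure:

```python
# Sketch of voltage-rail arbitration: the power management module
# prefers the external rails (spring contacts / connector) and falls
# back to the battery only when external power is low or insufficient.
# The cutoff voltage is an illustrative assumption.

LOW_VOLTAGE_THRESHOLD = 4.5  # volts; example cutoff for "low or insufficient"

def select_power_source(battery_v, spring_contacts_v, connector_v):
    """Return which rail powers the device given current rail voltages."""
    external_v = max(spring_contacts_v, connector_v)
    if external_v >= LOW_VOLTAGE_THRESHOLD:
        return "external"   # draw from spring contacts and/or connector
    if battery_v >= LOW_VOLTAGE_THRESHOLD:
        return "battery"    # external power low; fall back to battery
    return "none"           # no rail can power the device
```

In the battery-only embodiments described above, the spring-contact rail would simply always read as unpowered, so the same arbitration logic degenerates to battery operation.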
With further reference to FIG. 5, in certain embodiments the microcontroller 534 may comprise an integrated circuit including a processor core, memory, and programmable input/output peripherals. The microcontroller 534 may receive input signals, such as data and/or power, from the PIR sensors 540, the bracket PCB 502, the power management module 532, the light sensor 518, the microphone 524, and/or the communication module 530, and may perform various functions as further described below. When triggered by the PIR sensors 540, the microcontroller 534 may perform one or more functions. When the light sensor 518 detects a low level of ambient light, the light sensor 518 may trigger the microcontroller 534 to enable “night vision,” as further described below. The microcontroller 534 may also act as a conduit for data communicated between various components and the communication module 530.
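The two trigger paths described above (PIR motion and low ambient light) can be sketched as a simple event handler. The action names, state dictionary, and lux threshold are illustrative assumptions:

```python
# Sketch of the microcontroller's trigger handling: a PIR trigger causes
# one or more functions to run, and an ambient-light reading below a
# threshold enables "night vision" (above it, night vision is disabled).
# Threshold and action names are illustrative, not from the disclosure.

NIGHT_VISION_LUX_THRESHOLD = 10  # example ambient-light cutoff

def handle_inputs(pir_triggered, ambient_lux, state):
    """Return the list of actions taken for one round of sensor inputs."""
    actions = []
    if pir_triggered:
        actions.append("record_video")
    if ambient_lux < NIGHT_VISION_LUX_THRESHOLD and not state.get("night_vision"):
        state["night_vision"] = True
        actions.append("enable_night_vision")
    elif ambient_lux >= NIGHT_VISION_LUX_THRESHOLD and state.get("night_vision"):
        state["night_vision"] = False
        actions.append("disable_night_vision")
    return actions
```

Tracking the current night-vision state avoids re-triggering the IR hardware on every reading while the light level stays on one side of the threshold.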
With further reference to FIG. 5, the communication module 530 may comprise an integrated circuit including a processor core, memory, and programmable input/output peripherals. The communication module 530 may also be configured to transmit data wirelessly to a remote network device, and may include one or more transceivers (not shown). The wireless communication may comprise one or more wireless networks, such as, without limitation, Wi-Fi, cellular, Bluetooth, and/or satellite networks. The communication module 530 may receive inputs, such as power and/or data, from the camera PCB 542, the microcontroller 534, the button 306, the reset button 408, and/or the power PCB non-volatile memory 536. When the button 306 is pressed, the communication module 530 may be triggered to perform one or more functions. When the reset button 408 is pressed, the communication module 530 may be triggered to erase any data stored at the power PCB non-volatile memory 536 and/or at the camera PCB memory 544. The communication module 530 may also act as a conduit for data communicated between various components and the microcontroller 534. The power PCB non-volatile memory 536 may comprise flash memory configured to store and/or transmit data. For example, in certain embodiments the power PCB non-volatile memory 536 may comprise serial peripheral interface (SPI) flash memory.
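The reset behavior described above (pressing the reset button erases data in both memory stores) can be sketched as below. The class, field names, and stored contents are hypothetical placeholders:

```python
# Sketch of the reset path: pressing the reset button triggers the
# communication module to erase data stored at the power PCB
# non-volatile memory and at the camera PCB memory.
# The stored example contents are illustrative assumptions.

class MemoryStores:
    def __init__(self):
        # Example contents; the disclosure does not enumerate what is stored.
        self.power_pcb_nvm = {"wifi_credentials": "ssid:pass"}
        self.camera_pcb_memory = {"buffered_frames": [b"frame0", b"frame1"]}

    def on_reset_pressed(self):
        # Erase both stores, mirroring the reset-button behavior.
        self.power_pcb_nvm.clear()
        self.camera_pcb_memory.clear()

stores = MemoryStores()
stores.on_reset_pressed()
```

After the reset, both stores are empty, which is consistent with the reset button serving as a factory-reset mechanism.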
With further reference to FIG. 5, the camera PCB 542 may comprise components that facilitate the operation of the camera 316. For example, an imager 546 may comprise a video recording sensor and/or a camera chip. In one aspect of the present disclosure, the imager 546 may comprise a complementary metal-oxide semiconductor (CMOS) array, and may be capable of recording high definition (e.g., 720p, 1080p, 4K, etc.) video files. A camera processor 526 may comprise an encoding and compression chip. In some embodiments, the camera processor 526 may comprise a bridge processor. The camera processor 526 may process video recorded by the imager 546 and audio recorded by the microphone 524, and may transform this data into a form suitable for wireless transfer by the communication module 530 to a network. The camera PCB memory 544 may comprise volatile memory that may be used when data is being buffered or encoded by the camera processor 526. For example, in certain embodiments the camera PCB memory 544 may comprise synchronous dynamic random-access memory (SDRAM). The IR LED's 548 may comprise light-emitting diodes capable of radiating infrared light. The IR cut filter 550 may comprise a system that, when triggered, configures the imager 546 to see primarily infrared light as opposed to visible light. When the light sensor 518 detects a low level of ambient light (which may comprise a level that impedes the performance of the imager 546 in the visible spectrum), the IR LED's 548 may shine infrared light through the doorbell 302 enclosure out to the environment, and the IR cut filter 550 may enable the imager 546 to see this infrared light as it is reflected or refracted off of objects within the field of view of the doorbell. This process may provide the doorbell 302 with the “night vision” function mentioned above. As also shown in FIG. 5, the camera PCB 542 includes a computer vision module 552, which is described in greater detail below.
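The video path described above (imager produces frames, the camera processor buffers and compresses them, the communication module transmits the result) can be sketched as a small pipeline. The function names are hypothetical, and `zlib` stands in for the encoding and compression chip:

```python
# Sketch of the video path: frames from the imager are buffered in
# volatile camera PCB memory, compressed by the camera processor, and
# handed to the communication module for wireless transfer.
# zlib is a stand-in for the actual encoder (an assumption).
import zlib

def encode_frames(frames):
    buffer = b"".join(frames)     # frames buffered in camera PCB memory
    return zlib.compress(buffer)  # camera processor compresses for transfer

def transmit(encoded):
    # Communication module would send this payload to the network.
    return {"payload": encoded, "size": len(encoded)}

frames = [b"\x00" * 1024, b"\x00" * 1024]  # two dummy raw frames
packet = transmit(encode_frames(frames))
```

The point of compressing before handing off to the communication module is that the wireless payload is far smaller than the raw imager output, which is what makes wireless transfer practical.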
As discussed above, the present disclosure provides numerous examples of methods and systems including A/V recording and communication doorbells, but the present embodiments are equally applicable to A/V recording and communication devices other than doorbells. For example, the present embodiments may include one or more A/V recording and communication security cameras instead of, or in addition to, one or more A/V recording and communication doorbells. An example A/V recording and communication security camera may include substantially all of the structure and functionality of the doorbell 302, but without the front button 306 and its associated components. An example A/V recording and communication security camera may further omit other components, such as, for example, the bracket PCB 502 and its associated components.
FIGS. 6 and 7 illustrate an example A/V recording and communication security camera according to various aspects of the present embodiments. With reference to FIG. 6, the security camera 602, similar to the video doorbell 302, includes a faceplate 604 that is mounted to a back plate 606 and an enclosure 608 that engages the faceplate 604. Collectively, the faceplate 604, the back plate 606, and the enclosure 608 form a housing that contains and protects the inner components of the security camera 602. However, unlike the video doorbell 302, the security camera 602 does not include a front button for activating a doorbell. The faceplate 604 may comprise any suitable material, including, without limitation, metals, such as brushed aluminum or stainless steel, metal alloys, or plastics. The faceplate 604 protects the internal contents of the security camera 602 and serves as an exterior front surface of the security camera 602.
With continued reference to FIG. 6, the enclosure 608 engages the faceplate 604 and abuts an upper edge 610 of the faceplate 604. As discussed above with reference to FIG. 3, in alternative embodiments, one or more gaps between the enclosure 608 and the faceplate 604 may facilitate the passage of sound and/or light through the security camera 602. The enclosure 608 may comprise any suitable material, but in some embodiments the material of the enclosure 608 preferably permits infrared light to pass through from inside the security camera 602 to the environment and vice versa. The security camera 602 further includes a lens 612. Again, similar to the video doorbell 302, in some embodiments, the lens may comprise a Fresnel lens, which may be patterned to deflect incoming light into one or more infrared sensors located within the security camera 602. The security camera 602 further includes a camera 614, which captures video data when activated, as described above and below.
With further reference to FIG. 6, the enclosure 608 may extend from the front of the security camera 602 around to the back thereof and may fit snugly around a lip (not shown) of the back plate 606. The back plate 606 may comprise any suitable material, including, without limitation, metals, such as brushed aluminum or stainless steel, metal alloys, or plastics. The back plate 606 protects the internal contents of the security camera 602 and serves as an exterior rear surface of the security camera 602. The faceplate 604 may extend from the front of the security camera 602 and at least partially wrap around the back plate 606, thereby allowing a coupled connection between the faceplate 604 and the back plate 606. The back plate 606 may have indentations (not shown) in its structure to facilitate the coupling.
With continued reference to FIG. 6, the security camera 602 further comprises a mounting apparatus 616. The mounting apparatus 616 facilitates mounting the security camera 602 to a surface, such as an interior or exterior wall of a building, such as a home or office. The faceplate 604 may extend from the bottom of the security camera 602 up to just below the camera 614, and connect to the back plate 606 as described above. The lens 612 may extend and curl partially around the side of the security camera 602. The enclosure 608 may extend and curl around the side and top of the security camera 602, and may be coupled to the back plate 606 as described above. The camera 614 may protrude from the enclosure 608, thereby giving it a wider field of view. The mounting apparatus 616 may couple with the back plate 606, thereby creating an assembly including the security camera 602 and the mounting apparatus 616. The couplings described in this paragraph, and elsewhere, may be secured by, for example and without limitation, screws, interference fittings, adhesives, or other fasteners. Interference fittings may refer to a type of connection in which a material relies on pressure and/or gravity, coupled with the material's physical strength, to support a connection to a different element.
FIG. 7 is a functional block diagram of the components of the A/V recording and communication security camera of FIG. 6. With reference to FIG. 7, the interior of the wireless security camera 602 comprises a plurality of printed circuit boards, including a front PCB 702, a camera PCB 704, and a power PCB 706, each of which is described below. The camera PCB 704 comprises various components that enable the functionality of the camera 614 of the security camera 602, as described below. Infrared light-emitting components, such as infrared LED's 708, are coupled to the camera PCB 704 and may be triggered to activate when a light sensor detects a low level of ambient light. When activated, the infrared LED's 708 may emit infrared light through the enclosure 608 and/or the camera 614 out into the ambient environment. The camera 614, which may be configured to detect infrared light, may then capture the light emitted by the infrared LED's 708 as it reflects off objects within the camera's 614 field of view, so that the security camera 602 may clearly capture images at night (may be referred to as “night vision”).
The front PCB 702 comprises various components that enable the functionality of the audio and light components, including a light sensor 710, LED's 712, one or more speakers 714, and a microphone 716. The light sensor 710 may be one or more sensors capable of detecting the level of ambient light of the surrounding environment in which the security camera 602 may be located. The speakers 714 may be any electromechanical device capable of producing sound in response to an electrical signal input. The microphone 716 may be an acoustic-to-electric transducer or sensor capable of converting sound waves into an electrical signal. The front PCB 702 and all components thereof may be electrically coupled to the power PCB 706, thereby allowing data and/or power to be transferred to and from the power PCB 706 and the front PCB 702.
The speakers 714 and the microphone 716 may be coupled to a camera processor 718 on the camera PCB 704 through an audio CODEC 720. For example, the transfer of digital audio between the user's client device 114 and the speakers 714 and the microphone 716 may be compressed and decompressed using the audio CODEC 720, coupled to the camera processor 718. Once compressed by the audio CODEC 720, digital audio data may be sent through the communication module 722 to the network 112, routed by one or more servers 120, and delivered to the user's client device 114. When the user speaks, after being transferred through the network 112, the digital audio data is decompressed by the audio CODEC 720 and emitted to the visitor via the speakers 714.
With continued reference to FIG. 7, the power PCB 706 comprises various components that enable the functionality of the power and device-control components, including a power management module 724, a processor 726, a communication module 722, and power PCB non-volatile memory 728. In certain embodiments, the power management module 724 may comprise an integrated circuit capable of arbitrating between multiple voltage rails, thereby selecting the source of power for the security camera 602. The battery 730 and/or the connector 732 (which may be similar to the connector 406) may each provide power to the power management module 724. The power management module 724 may have separate power rails dedicated to the battery 730 and the connector 732. The power management module 724 may control charging of the battery 730 when the connector 732 is connected to an external source of power, and may also serve as a conduit for data between the connector 732 and the processor 726.
With further reference to FIG. 7, in certain embodiments the processor 726 may comprise an integrated circuit including a processor core, memory, and programmable input/output peripherals. The processor 726 may receive input signals, such as data and/or power, from the PIR sensors 734, the power management module 724, the light sensor 710, the microphone 716, and/or the communication module 722, and may perform various functions as further described below. When triggered by the PIR sensors 734, the processor 726 may perform one or more functions, such as initiating recording of video images via the camera 614. When the light sensor 710 detects a low level of ambient light, the light sensor 710 may trigger the processor 726 to enable “night vision,” as further described below. The processor 726 may also act as a conduit for data communicated between various components and the communication module 722.
With further reference to FIG. 7, the security camera 602 further comprises a communication module 722 coupled to the power PCB 706. The communication module 722 facilitates communication with devices in one or more remote locations, as further described below. The communication module 722 may comprise an integrated circuit including a processor core, memory, and programmable input/output peripherals. The communication module 722 may also be configured to transmit data wirelessly to a remote network device, such as the user's client device 114, the remote storage device 118, and/or the remote server 120, and may include one or more transceivers (not shown). The wireless communication may comprise one or more wireless networks, such as, without limitation, Wi-Fi, cellular, Bluetooth, and/or satellite networks. The communication module 722 may receive inputs, such as power and/or data, from the camera PCB 704, the processor 726, the reset button 736 (which may be similar to the reset button 408), and/or the power PCB non-volatile memory 728. When the reset button 736 is pressed, the communication module 722 may be triggered to erase any data stored at the power PCB non-volatile memory 728 and/or at the camera PCB memory 738. The communication module 722 may also act as a conduit for data communicated between various components and the processor 726. The power PCB non-volatile memory 728 may comprise flash memory configured to store and/or transmit data. For example, in certain embodiments the power PCB non-volatile memory 728 may comprise serial peripheral interface (SPI) flash memory.
With continued reference to FIG. 7, the power PCB 706 further comprises the connector 732 described above and a battery 730. The connector 732 may protrude outward from the power PCB 706 and extend through a hole in the back plate 606. The battery 730, which may be a rechargeable battery, may provide power to the components of the security camera 602.
With continued reference to FIG. 7, the power PCB 706 further comprises passive infrared (PIR) sensors 734, which may be secured on or within a PIR sensor holder (not shown) that resides behind the lens 612 (FIG. 6). The PIR sensors 734 may be any type of sensor capable of detecting and communicating the presence of a heat source within their field of view. Further, alternative embodiments may comprise one or more motion sensors either in place of or in addition to the PIR sensors 734. The motion sensors may be configured to detect motion using any methodology, such as a methodology that does not rely on detecting the presence of a heat source within a field of view.
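A PIR sensor reports the change in received infrared radiation as a heat source enters or moves within its field of view. A minimal sketch of that behavior, with an assumed change threshold and hypothetical function name:

```python
# Sketch of PIR-style motion detection: flag motion when successive
# sensor readings jump by more than a threshold, as happens when a heat
# source enters or moves within the field of view.
# Threshold and units are illustrative assumptions.

PIR_DELTA_THRESHOLD = 0.5  # example change in sensor units counted as motion

def detect_motion(readings):
    """Return the indices at which successive readings differ by more
    than the threshold (i.e., where motion would be reported)."""
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > PIR_DELTA_THRESHOLD]
```

A steady warm background produces no detections; only a change in the sensed heat pattern does, which is why PIR sensors respond to moving heat sources rather than static ones.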
With further reference to FIG. 7, the camera PCB 704 may comprise components that facilitate the operation of the camera 614. For example, an imager 740 may comprise a video recording sensor and/or a camera chip. In one aspect of the present disclosure, the imager 740 may comprise a complementary metal-oxide semiconductor (CMOS) array, and may be capable of recording high definition (e.g., 720p or better) video files. A camera processor 718 may comprise an encoding and compression chip. In some embodiments, the camera processor 718 may comprise a bridge processor. The camera processor 718 may process video recorded by the imager 740 and audio recorded by the microphone 716, and may transform this data into a form suitable for wireless transfer by the communication module 722 to a network. The camera PCB memory 738 may comprise volatile memory that may be used when data is being buffered or encoded by the camera processor 718. For example, in certain embodiments the camera PCB memory 738 may comprise synchronous dynamic random-access memory (SDRAM). The IR LED's 708 may comprise light-emitting diodes capable of radiating infrared light. The IR cut filter 742 may comprise a system that, when triggered, configures the imager 740 to see primarily infrared light as opposed to visible light. When the light sensor 710 detects a low level of ambient light (which may comprise a level that impedes the performance of the imager 740 in the visible spectrum), the IR LED's 708 may shine infrared light through the security camera 602 enclosure out to the environment, and the IR cut filter 742 may enable the imager 740 to see this infrared light as it is reflected or refracted off of objects within the field of view of the security camera 602. This process may provide the security camera 602 with the “night vision” function mentioned above.
The camera PCB 704 further includes a computer vision module 744. Functionality of the computer vision module 744 is described in greater detail below.
As discussed above, the present disclosure provides numerous examples of methods and systems including A/V recording and communication doorbells, but the present embodiments are equally applicable to A/V recording and communication devices other than doorbells. For example, the present embodiments may include one or more A/V recording and communication floodlight controllers instead of, or in addition to, one or more A/V recording and communication doorbells. FIGS. 8-10 illustrate an example A/V recording and communication floodlight controller according to various aspects of the present embodiments. FIG. 8 is a functional block diagram illustrating various components of the floodlight controller 802 and their relationships to one another. For example, the floodlight controller 802 comprises an AC/DC adapter 804. The floodlight controller 802 is thus configured to be connected to a source of external AC (alternating-current) power, such as a household AC power supply (may also be referred to as AC mains). The AC power may have a voltage in the range of 110-220 VAC, for example. The incoming AC power may be received by the AC/DC adapter 804, which may convert the incoming AC power to DC (direct-current) and may step down the voltage from 110-220 VAC to a lower output voltage of about 12 VDC and an output current of about 2 A, for example. In various embodiments, the output of the AC/DC adapter 804 may be in a range of from about 9 V to about 15 V, for example, and in a range of from about 0.5 A to about 5 A, for example. These voltages and currents are only examples provided for illustration and are not limiting in any way.
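The example output ranges given above (about 9-15 V and about 0.5-5 A around a nominal 12 VDC / 2 A) can be captured in a small validation sketch; the function name is hypothetical and the bounds are taken directly from the figures stated in the text:

```python
# Sketch of a range check over the AC/DC adapter's example output
# envelope: about 9 V to 15 V and about 0.5 A to 5 A. The nominal
# operating point given in the text is 12 VDC at 2 A.

def adapter_output_ok(volts, amps):
    """Return True if the measured DC output falls within the example
    ranges stated for the adapter."""
    return 9.0 <= volts <= 15.0 and 0.5 <= amps <= 5.0
```

A reading of 12 V / 2 A (the stated nominal output) passes, while an unconverted rectified mains voltage or an overcurrent condition would fail the check.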
With further reference to FIG. 8, the floodlight controller 802 further comprises other components, including a processor 806 (may also be referred to as a controller), a photosensor 808, an audio CODEC (coder-decoder) 810, at least one speaker 812 (which may be similar to the speaker 108), at least one microphone 814 (which may be similar to the microphone 106), at least one motion sensor 816, an infrared (IR) light source 818, an IR cut filter 820, an image sensor 822 (may be a component of the camera 104, and may be referred to interchangeably as the camera 104), volatile memory 824, non-volatile memory 826, a communication module 828, a button 830, a switch 832 for controlling one or more floodlights, and a plurality of light indicators 834. Each of these components is described in detail below.
With further reference to FIG. 8, the processor 806 may perform data processing and various other functions, as described below. The processor 806 may comprise an integrated circuit including a processor core, the volatile memory 824, the non-volatile memory 826, and/or programmable input/output peripherals (not shown). The volatile memory 824 may comprise, for example, DDR3 SDRAM (double data rate type three synchronous dynamic random-access memory). The non-volatile memory 826 may comprise, for example, NAND flash memory. In the embodiment illustrated in FIG. 8, the volatile memory 824 and the non-volatile memory 826 are illustrated outside the box representing the processor 806. The embodiment illustrated in FIG. 8 is, however, merely an example, and in some embodiments the volatile memory 824 and/or the non-volatile memory 826 may be physically incorporated with the processor 806, such as on the same chip. The volatile memory 824 and/or the non-volatile memory 826, regardless of their physical location, may be shared by one or more other components (in addition to the processor 806) of the present floodlight controller 802.
With further reference to FIG. 8, the image sensor 822 (camera 104), the IR light source 818, the IR cut filter 820, and the photosensor 808 are all operatively coupled to the processor 806. As described in detail below, the IR light source 818 and the IR cut filter 820 facilitate “night vision” functionality of the image sensor 822. For example, the photosensor 808 is configured to detect the level of ambient light about the floodlight controller 802. The processor 806 uses the input from the photosensor 808 to control the states of the IR light source 818 and the IR cut filter 820 to activate and deactivate night vision, as described below. In some embodiments, the image sensor 822 may comprise a video recording sensor or a camera chip. In some embodiments, the IR light source 818 may comprise one or more IR light-emitting diodes (LEDs).
With further reference to FIG. 8, the at least one speaker 812 and the at least one microphone 814 are operatively coupled to the audio CODEC 810, which is operatively coupled to the processor 806. The transfer of digital audio between the user and a visitor (or intruder) may be compressed and decompressed using the audio CODEC 810, as described below. The motion sensor(s) 816 is also operatively coupled to the processor 806. The motion sensor(s) 816 may comprise, for example, passive infrared (PIR) sensors, or any other type of sensor capable of detecting and communicating to the processor 806 the presence and/or motion of an object within its field of view. When the processor 806 is triggered by the motion sensor(s) 816, the processor 806 may perform one or more functions, as described below.
With further reference to FIG. 8, the communication module 828 is operatively coupled to the processor 806. The communication module 828, which includes at least one antenna 836, is configured to handle communication links between the floodlight controller 802 and other, external devices or receivers, and to route incoming/outgoing data appropriately. For example, inbound data from the antenna(s) 836 may be routed through the communication module 828 before being directed to the processor 806, and outbound data from the processor 806 may be routed through the communication module 828 before being directed to the antenna(s) 836. The communication module 828 may include one or more transceiver modules capable of transmitting and receiving data, and using, for example, one or more protocols and/or technologies, such as GSM, UMTS (3GSM), IS-95 (CDMA One), IS-2000 (CDMA 2000), LTE, FDMA, TDMA, W-CDMA, CDMA, OFDMA, Wi-Fi, WiMAX, Bluetooth, or any other protocol and/or technology. In the illustrated embodiment, the communication module 828 includes a Wi-Fi chip 838 and a Bluetooth chip 840, but these components are merely examples and are not limiting. Further, while the Wi-Fi chip 838 and the Bluetooth chip 840 are illustrated within the box representing the communication module 828, the embodiment illustrated in FIG. 8 is merely an example, and in some embodiments the Wi-Fi chip 838 and/or the Bluetooth chip 840 are not necessarily physically incorporated with the communication module 828.
In some embodiments, the communication module 828 may further comprise a wireless repeater (not shown, may also be referred to as a wireless range extender). The wireless repeater is configured to receive a wireless signal from a wireless router (or another network device) in the user's network 110 and rebroadcast the signal. Wireless devices that are not within the broadcast range of the wireless router, or that only weakly receive the wireless signal from the wireless router, may receive the rebroadcast signal from the wireless repeater of the communication module 828, and may thus connect to the user's network 110 through the floodlight controller 802. In some embodiments, the wireless repeater may include one or more transceiver modules (not shown) capable of transmitting and receiving data, and using, for example, one or more protocols and/or technologies, such as Wi-Fi (IEEE 802.11), WiMAX (IEEE 802.16), or any other protocol and/or technology.
With further reference to FIG. 8, when a visitor (or intruder) who is present in the area about the floodlight controller 802 speaks, audio from the visitor (or intruder) is received by the microphones 814 and compressed by the audio CODEC 810. Digital audio data is then sent through the communication module 828 to the network 112 (FIG. 1) via the user's network 110, routed by the server 120 and/or the API 122, and delivered to the user's client device 114. When the user speaks, after being transferred through the network 112, the user's network 110, and the communication module 828, the digital audio data from the user is decompressed by the audio CODEC 810 and emitted to the visitor through the speaker 812, which may be driven by a speaker driver (not shown).
With further reference to FIG. 8, the button 830 is operatively coupled to the processor 806. The button 830 may have one or more functions, such as changing an operating mode of the floodlight controller 802 and/or triggering a reset of the floodlight controller 802. For example, when the button 830 is pressed and released, it may cause the communication module 828 of the floodlight controller 802 to enter access point (AP) mode, which may facilitate connecting the floodlight controller 802 to the user's network 110. Alternatively, or in addition, when the button 830 is pressed and held down for at least a threshold amount of time, it may trigger the erasing of any data stored at the volatile memory 824 and/or at the non-volatile memory 826, and/or may trigger a reboot of the processor 806.
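The two button behaviors described above (press-and-release enters AP mode; press-and-hold past a threshold erases memory and reboots) can be distinguished by the hold duration. A minimal sketch, where the threshold value and action names are illustrative assumptions since the disclosure does not specify them:

```python
# Sketch of the button semantics: a short press-and-release enters
# access point (AP) mode; a hold of at least a threshold duration
# triggers a memory erase and a processor reboot.
# The 10-second threshold is an illustrative assumption.

HOLD_THRESHOLD_S = 10.0  # example hold time for the reset path

def on_button_release(held_seconds):
    """Return the actions taken when the button is released after
    being held for the given duration."""
    if held_seconds >= HOLD_THRESHOLD_S:
        return ["erase_memory", "reboot"]
    return ["enter_ap_mode"]
```

Dispatching on hold duration at release time is a common way to give a single physical button multiple functions without extra hardware.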
With reference to FIG. 9, the floodlight controller 802 comprises a housing 902 for containing and protecting the interior components of the floodlight controller 802. The housing 902 includes a front wall 904, a rear wall 906, opposing side walls 908, 910, an upper wall 912, and a tapered lower portion 914. The front wall 904 includes a central opening that receives an upper shield 916 and a lower grill 918. In the illustrated embodiment, front surfaces of the upper shield 916 and the lower grill 918 are substantially flush with a front surface of the front wall 904, but in alternative embodiments these surfaces may not be flush with one another. The upper shield 916 is substantially rectangular, and includes a semicircular indentation 920 along its lower edge 922. The lower grill 918 is substantially rectangular, and includes a semicircular indentation 924 along its upper edge 926. Together, the semicircular indentations 920, 924 in the upper shield 916 and the lower grill 918 form a circular opening 928 that accommodates a light pipe 930. A cover extends across and closes an outer open end of the light pipe 930. The upper shield 916, the lower grill 918, the light pipe 930, and the cover are all described in further detail below. The camera (not shown) is located in the circular opening 928 formed by the upper shield 916 and the lower grill 918, behind the cover, and is surrounded by the light pipe 930.
With reference to FIG. 8, the floodlight controller 802 further comprises the microphones 814. In the illustrated embodiment, a first one of the microphones 814 is located along the front of the floodlight controller 802 behind the upper shield 916 (FIG. 9) and a second one of the microphones 814 is located along the left side of the floodlight controller 802 behind the left-side wall 910 (FIG. 9) of the housing 902. Including two microphones that are spaced from one another and located on different sides of the floodlight controller 802 provides the illustrated embodiment of the floodlight controller 802 with advantageous noise cancelling and/or echo cancelling for clearer audio. The illustrated embodiment is, however, just one example and is not limiting. Alternative embodiments may include only one microphone 814, or may include two microphones 814 in different locations than as illustrated in FIG. 8.
With reference to FIG. 9, the upper shield 916 may include a first microphone opening 932 located in front of the first microphone 814 to facilitate the passage of sound through the upper shield 916 so that sounds from the area about the floodlight controller 802 may reach the first microphone 814. The left-side wall 910 of the housing 902 may include a second microphone opening (not shown) located in front of the second microphone 814 that facilitates the passage of sound through the left-side wall 910 of the housing 902 so that sounds from the area about the floodlight controller 802 may reach the second microphone 814.
With further reference toFIG. 9, thefloodlight controller802 may further comprise alight barrier934 surrounding inner and outer surfaces of thelight pipe930. Thelight barrier934 may comprise a substantially opaque material that prevents the light generated by thelight indicators834 from bleeding into the interior spaces of thefloodlight controller802 around thelight pipe930. Thelight barrier934 may comprise a resilient material, such as a plastic, which may also advantageously provide moisture sealing at the junctures between thelight pipe930 and theupper shield916 and thelower grill918. Portions of thelight barrier934 may also extend between the junctures between theupper shield916 and thelower grill918.
With further reference toFIG. 9, thefloodlight controller802 further comprises connecting hardware configured for connecting thefloodlight controller802 to a floodlight device1002 (FIG. 10) and a power source (not shown). Thefloodlight controller802 further comprises a plurality of wires for connecting thefloodlight controller802 to the power supply and to the floodlight(s)1004 (FIG. 10) of the floodlight device1002 (for enabling thefloodlight controller802 to turn the floodlight(s)1004 on and off). In the illustrated embodiment, three wires may be used, but the illustrated embodiment is merely one example and is not limiting. In alternative embodiments, any number of wires may be provided.
Some of the present embodiments may comprise computer vision for one or more aspects, such as object and/or facial recognition. Computer vision includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. Computer vision seeks to duplicate the abilities of human vision by electronically perceiving and understanding an image. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that may interface with other thought processes and elicit appropriate action. This image understanding may be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Computer vision has also been described as the enterprise of automating and integrating a wide range of processes and representations for vision perception. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data may take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems.
One aspect of computer vision comprises determining whether or not the image data contains some specific object, feature, or activity. Different varieties of computer vision recognition include: Object Recognition (also called object classification)—One or several pre-specified or learned objects or object classes may be recognized, usually together with their 2D positions in the image or 3D poses in the scene. Identification—An individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or identification of a specific vehicle. Detection—The image data are scanned for a specific condition. Examples include detection of possible abnormal cells or tissues in medical images or detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data that may be further analyzed by more computationally demanding techniques to produce a correct interpretation.
Several specialized tasks based on computer vision recognition exist, such as: Optical Character Recognition (OCR)—Identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g., ASCII). 2D Code Reading—Reading of 2D codes such as data matrix and QR codes. Facial Recognition. Shape Recognition Technology (SRT)—Differentiating human beings (e.g., head and shoulder patterns) from objects.
Typical functions and components (e.g., hardware) found in many computer vision systems are described in the following paragraphs. The present embodiments may include at least some of these aspects. For example, with reference toFIGS. 3-5, embodiments of the present A/V recording andcommunication doorbell302 may include acomputer vision module552. In addition, with reference toFIGS. 6-7, embodiments of thepresent security camera602 may include acomputer vision module744. Thecomputer vision module552 may include any of the components (e.g., hardware) and/or functionality described herein with respect to computer vision, including, without limitation, one or more cameras, sensors, and/or processors. In some of the present embodiments, with reference toFIGS. 3-5, themicrophone524, thecamera316, and/or theimager546 may be components of thecomputer vision module552.
Image acquisition—A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, may include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data may be a 2D image, a 3D volume, or an image sequence. The pixel values may correspond to light intensity in one or several spectral bands (gray images or color images), but may also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or nuclear magnetic resonance.
Pre-processing—Before a computer vision method may be applied to image data in order to extract some specific piece of information, it is usually beneficial to process the data in order to assure that it satisfies certain assumptions implied by the method. Examples of pre-processing include, but are not limited to re-sampling in order to assure that the image coordinate system is correct, noise reduction in order to assure that sensor noise does not introduce false information, contrast enhancement to assure that relevant information may be detected, and scale space representation to enhance image structures at locally appropriate scales.
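Two of the pre-processing steps named above, noise reduction and contrast enhancement, can be sketched in a few lines of pure Python. This is an illustrative example only; the function names and the tiny test image are invented for the sketch and are not part of any described embodiment.

```python
# Illustrative sketch of two pre-processing steps: noise reduction via a
# 3x3 mean filter, and contrast stretching. A grayscale image is modeled
# as a list of rows of 0-255 pixel values.

def mean_filter(img):
    """Replace each pixel with the mean of its 3x3 neighborhood (edge
    pixels use only the neighbors that exist), reducing sensor noise."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

def contrast_stretch(img):
    """Linearly rescale pixel values so the darkest pixel becomes 0 and
    the brightest becomes 255, enhancing contrast."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [row[:] for row in img]
    return [[(p - lo) * 255 // (hi - lo) for p in row] for row in img]

noisy = [[100, 102, 101],
         [ 99, 180, 100],   # single bright outlier (noise)
         [101, 100, 102]]
smoothed = mean_filter(noisy)
stretched = contrast_stretch(smoothed)
```

The mean filter suppresses the single-pixel outlier before the contrast stretch spreads the remaining values across the full 0-255 range.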
Feature extraction—Image features at various levels of complexity are extracted from the image data. Typical examples of such features are: Lines, edges, and ridges; Localized interest points such as corners, blobs, or points; More complex features may be related to texture, shape, or motion.
Detection/segmentation—At some point in the processing a decision may be made about which image points or regions of the image are relevant for further processing. Examples are: Selection of a specific set of interest points; Segmentation of one or multiple image regions that contain a specific object of interest; Segmentation of the image into nested scene architecture comprising foreground, object groups, single objects, or salient object parts (also referred to as spatial-taxon scene hierarchy).
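The segmentation step above can be illustrated with a minimal sketch: thresholding a grayscale frame and then labeling each connected foreground region with a flood fill. The function name, the test frame, and the threshold are invented for this example.

```python
# Illustrative sketch: segment the image regions that contain objects of
# interest by thresholding plus 4-connected component labeling.

def segment(img, threshold):
    """Return a label grid (0 = background, 1..N = distinct foreground
    regions whose pixels exceed the threshold) and the region count N."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] > threshold and labels[y][x] == 0:
                next_label += 1
                stack = [(y, x)]
                while stack:                     # flood fill one region
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and img[cy][cx] > threshold
                            and labels[cy][cx] == 0):
                        labels[cy][cx] = next_label
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, next_label

frame = [[0, 200, 0, 0],
         [0, 210, 0, 190],
         [0, 0, 0, 180]]
labels, count = segment(frame, 128)   # two bright regions of interest
```

Each labeled region can then be handed to the more computationally demanding high-level processing described below.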
High-level processing—At this step, the input may be a small set of data, for example a set of points or an image region that is assumed to contain a specific object. The remaining processing may comprise, for example: Verification that the data satisfy model-based and application-specific assumptions; Estimation of application-specific parameters, such as object pose or object size; Image recognition—classifying a detected object into different categories; Image registration—comparing and combining two different views of the same object. Decision making—Making the final decision required for the application, for example match/no-match in recognition applications.
One or more of the present embodiments may include a vision processing unit (not shown separately, but may be a component of the computer vision module552). A vision processing unit is an emerging class of microprocessor; it is a specific type of AI (artificial intelligence) accelerator designed to accelerate machine vision tasks. Vision processing units are distinct from video processing units (which are specialized for video encoding and decoding) in their suitability for running machine vision algorithms such as convolutional neural networks, SIFT, etc. Vision processing units may include direct interfaces to take data from cameras (bypassing any off-chip buffers), and may have a greater emphasis on on-chip dataflow between many parallel execution units with scratchpad memory, like a manycore DSP (digital signal processor). But, like video processing units, vision processing units may have a focus on low precision fixed-point arithmetic for image processing.
Some of the present embodiments may use facial recognition hardware and/or software, as a part of the computer vision system. Various types of facial recognition exist, some or all of which may be used in the present embodiments.
Some face recognition identifies facial features by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.
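The landmark-based approach above can be sketched as comparing distances between facial landmarks, normalized so the comparison is invariant to image scale. The landmark names, coordinates, and scoring function here are invented for the example and do not reflect any particular embodiment.

```python
# Illustrative sketch of geometric, landmark-based face matching:
# a face is reduced to its normalized pairwise landmark distances.
import math

def signature(landmarks):
    """Normalize every pairwise landmark distance by the inter-eye
    distance so the signature does not depend on image scale."""
    eye_dist = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    keys = sorted(landmarks)
    return [math.dist(landmarks[a], landmarks[b]) / eye_dist
            for i, a in enumerate(keys) for b in keys[i + 1:]]

def match_score(sig_a, sig_b):
    """Sum of absolute signature differences; smaller is more similar."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))

probe = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60)}
same_face_scaled = {k: (x * 2, y * 2) for k, (x, y) in probe.items()}
other_face = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 90)}

score_same = match_score(signature(probe), signature(same_face_scaled))
score_other = match_score(signature(probe), signature(other_face))
```

The same face photographed at twice the scale scores as identical, while a face with a different landmark geometry scores as a mismatch.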
Recognition algorithms may be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, which is a statistical approach that distills an image into values and compares the values with templates to eliminate variances.
Popular recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis, elastic bunch graph matching using the Fisherface algorithm, the hidden Markov model, the multilinear subspace learning using tensor representation, and the neuronal motivated dynamic link matching.
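The core of the eigenfaces idea mentioned above is principal component analysis: mean-center a set of face vectors, find the dominant axis of variation, and describe each face by its coordinate along that axis. The sketch below uses power iteration on a tiny invented data set; it is a minimal illustration, not an implementation of any production algorithm.

```python
# Illustrative sketch of PCA for eigenfaces via power iteration, in pure
# Python. Each "face" is a short vector of pixel values (invented data).

def principal_component(vectors, iters=100):
    """Return the mean face and the top principal axis of the
    mean-centered data, found by power iteration on X^T X."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[i] for v in vectors) / n for i in range(d)]
    centered = [[v[i] - mean[i] for i in range(d)] for v in vectors]
    w = [1.0] * d
    for _ in range(iters):
        # w <- (X^T X) w, computed as X^T (X w), then renormalized
        proj = [sum(c[i] * w[i] for i in range(d)) for c in centered]
        w = [sum(proj[j] * centered[j][i] for j in range(n)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return mean, w

faces = [[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [3.0, 6.0, 3.0]]
mean, axis = principal_component(faces)
# each face is compressed to a single number: its projection on the axis
coords = [sum((f[i] - mean[i]) * axis[i] for i in range(3)) for f in faces]
```

Projecting a probe face onto the same axis and comparing coordinates is the compressed comparison that eigenface-style recognizers perform.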
Further, a newly emerging trend, claimed to achieve improved accuracy, is three-dimensional face recognition. This technique uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin.
One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It may also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. 3D research is enhanced by the development of sophisticated sensors that do a better job of capturing 3D face imagery. The sensors work by projecting structured light onto the face. Up to a dozen or more of these image sensors may be placed on the same CMOS chip—each sensor captures a different part of the spectrum.
Another variation is to capture a 3D picture by using three tracking cameras that point at different angles; one camera pointing at the front of the subject, a second one to the side, and a third one at an angle. All these cameras work together to track a subject's face in real time and perform face detection and recognition.
Another emerging trend uses the visual details of the skin, as captured in standard digital images or scanned images, for example. This technique, called skin texture analysis, turns the unique lines, patterns, and spots apparent in a person's skin into a mathematical space.
Another form of taking input data for face recognition is by using thermal cameras, which may detect only the shape of the head while ignoring subject accessories such as glasses, hats, or makeup.
Further examples of automatic identification and data capture (AIDC) and/or computer vision that may be used in the present embodiments to verify the identity and/or authorization of a person include, without limitation, biometrics. Biometrics refers to metrics related to human characteristics. Biometrics authentication (or realistic authentication) is used in various forms of identification and access control. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers may be physiological characteristics and/or behavioral characteristics. Physiological characteristics may be related to the shape of the body. Examples include, but are not limited to, fingerprints, palm veins, facial recognition, three-dimensional facial recognition, skin texture analysis, DNA, palm prints, hand geometry, iris recognition, retina recognition, and odor/scent recognition. Behavioral characteristics may be related to the pattern of behavior of a person, including, but not limited to, typing rhythm, gait, and voice recognition.
The present embodiments may use any one, or any combination of more than one, of the foregoing biometrics to identify and/or authenticate a person who is either suspicious or who is authorized to take certain actions with respect to a property or expensive item of collateral. For example, with reference toFIGS. 6-7, thecomputer vision module744, and/or thecamera614 and/or theprocessor726 may receive information about the person using any one, or any combination of more than one, of the foregoing biometrics.
As described above, one aspect of the present embodiments includes the realization that homeowners and property owners may rely on and utilize criminal statistics for various reasons, such as to protect the user's family, pets, and property. However, users may not currently be able to access user interfaces that include the functionality to provide the criminal statistics the user desires in an easily digestible format. For example, a user may be able to access criminal statistics for a town, city, or state, but may not be able to access criminal statistics that are unique to the user's location, such as criminal statistics for the user's neighborhood, or for a user defined area surrounding the user's property. For another example, a user may be able to view criminal statistics that are based on actual crimes, but the criminal statistics may not include information pertaining to suspicious activity or unreported criminal behavior, such as information shared by other users of a network of users located within the surrounding area. As a result of the criminal statistics being inadequate and not easily digestible for the user, the user may be unable to determine which criminal activities and suspicious activities/persons are common in the areas surrounding the user's property. As a result, the user may be unable to take necessary actions to protect the user's family, pets, and/or property, which may put the user's family, pets, and/or property at a greater risk of harm.
The present embodiments solve this problem by, for example, creating criminal statistics that are tailored to the users requesting the criminal statistics and/or include data associated with suspicious activities that occur in a geographic area specific to the user, such as the user's neighborhood or a user defined area including the user's property. For example, a backend device may receive a criminal statistics request from a client device of a user (e.g., in response to a notification on the user's client device that the criminal statistics are available for viewing), where the criminal statistics request indicates a defined geographic area for which the user is requesting criminal statistics. For instance, the geographic area may include an area that surrounds the user's property, such as, but not limited to, a radius around the user's property. In response, the backend server may use data representing criminal activity, such as criminal reports, that occurred within the geographic area to create the criminal statistics for the user. In some examples, the backend server may further utilize image data, audio data, and/or other data that is generated by one or more A/V recording and communication devices located within the geographic area to create the criminal statistics for the user. The backend server may then transmit the criminal statistics to the client device of the user for viewing. In some examples, the criminal statistics may be curated and transmitted to the user's client device without a statistics request from the user, such that the user receives a notification that the user may view the criminal statistics, and upon selecting the notification, the user is provided with a customized criminal statistics report unique to the user.
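One possible shape for this flow is sketched below: given incident records and a user-defined radius around the user's property, keep only the incidents inside that geographic area and tally them by category. The record format, field names, coordinates, and distance threshold are all invented for the example; a real backend would work against its own data model.

```python
# Illustrative sketch: filter criminal-incident records to a radius
# around the user's property and aggregate them into simple statistics.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def local_statistics(incidents, home, radius_km):
    """Count incidents of each type within radius_km of the home."""
    stats = {}
    for inc in incidents:
        if haversine_km(home[0], home[1], inc["lat"], inc["lon"]) <= radius_km:
            stats[inc["type"]] = stats.get(inc["type"], 0) + 1
    return stats

incidents = [
    {"type": "parcel theft", "lat": 34.0525, "lon": -118.2440},
    {"type": "car break-in", "lat": 34.0530, "lon": -118.2430},
    {"type": "parcel theft", "lat": 34.2000, "lon": -118.6000},  # far away
]
stats = local_statistics(incidents, home=(34.0522, -118.2437), radius_km=2.0)
```

Only the two nearby incidents survive the radius filter; the distant parcel theft is excluded from the user's tailored statistics.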
As a result of tailoring the criminal statistics to the user, and/or by including the additional data from the one or more A/V recording and communication devices, the user may be better informed about the crimes (reported and unreported) and suspicious activity that are occurring around the area surrounding the user's property. Using this information, the user may take various actions to protect the user's family, pets, and/or property, as well as protect families, pets, and/or properties in the surrounding area (e.g., the street on which the property is located, the neighborhood the property is in, etc.). For example, the user may park his or her car in the garage if there are car break-ins reported in the user's neighborhood, or the user may purchase a video doorbell if the user is made aware of parcel thefts being a common occurrence in his or her neighborhood.
FIG. 11 is a functional block diagram illustrating asystem1100 for communicating in a network according to various aspects of the present disclosure. Thesystem1100 may include one or more A/V recording andcommunication devices1102 configured to access a user's network1104 (which may correspond to the user's network110) to connect to a network (Internet/PSTN)1106 (in some embodiments, thedevices1102 may be configured to connect directly to the network (Internet/PSTN)1106, such as over a cellular connection). The one or more A/V recording andcommunication devices1102 may include any or all of the components and/or functionality of the A/V recording and communication device102 (FIGS. 1-2), the A/V recording and communication doorbell302 (FIGS. 3-5), the security camera602 (FIGS. 6-7), and/or the floodlight controller802 (FIGS. 8-10).
The user's network1104 may include any or all of the components and/or functionality of the user's network110 described herein.
Thesystem1100 may further include a smart-home hub device1112 (which may alternatively be referred to herein as the hub device1112) connected to the user's network1104. The smart-home hub device1112 (also known as a home automation hub, gateway device, etc.), may comprise any device that facilitates communication with and control of thesensors1114,automation devices1116, and/or the one or more A/V recording andcommunication devices1102. For example, the smart-home hub device1112 may be a component of a home automation system installed at a property. In some embodiments, the A/V recording andcommunication devices1102, thesensors1114, and/or theautomation devices1116 may communicate with the smart-home hub device1112 directly and/or indirectly via the user's network1104 and/or the network (Internet/PSTN)1106. In some of the present embodiments, the A/V recording andcommunication devices1102, thesensors1114, and/or theautomation devices1116 may, in addition to or in lieu of communicating with the smart-home hub device1112, communicate with thefirst client devices1108,1110 and/or one or more of the components of the network of servers/backend devices1118 directly and/or indirectly via the user's network1104 and/or the network (Internet/PSTN)1106.
Home automation, or smart home, is building automation for the home. It involves the control and automation of various devices and/or systems, such as lighting, heating (such as smart thermostats), ventilation, air conditioning (HVAC), blinds/shades, and security, as well as home appliances, such as washers/dryers, ovens, or refrigerators/freezers. Wi-Fi is often used for remote monitoring and control. Smart home devices (e.g., thehub device1112, thesensors1114, theautomation devices1116, the A/V recording andcommunication devices1102, etc.), when remotely monitored and controlled via the network (Internet/PSTN)1106, may be considered to be components of the Internet of Things. Smart home systems may include switches and/or sensors (e.g., the sensors1114) connected to a central hub such as the smart-home hub device1112, sometimes called a gateway, from which the system may be controlled with a user interface. The user interface may include any or all of a wall-mounted terminal (e.g., a keypad, a touchscreen, etc.), software installed on thefirst client devices1108,1110 (e.g., a mobile application), a tablet computer or a web interface, often but not always via Internet cloud services. The home automation system may use one or more communication protocols, including either or both of wired and wireless protocols, including but not limited to Wi-Fi, X10, Ethernet, RS-485, 6LoWPAN, Bluetooth LE (BTLE), ZigBee, and Z-Wave.
The one ormore sensors1114 may include, for example, at least one of a door sensor, a window sensor, a contact sensor, a tilt sensor, a temperature sensor, a carbon monoxide sensor, a smoke detector, a light sensor, a glass break sensor, a motion sensor, and/or other sensors that may provide the user/owner of the security system a notification of a security event at his or her property.
The one ormore automation devices1116 may include, for example, at least one of an outdoor lighting system, an indoor lighting system, an indoor/outdoor lighting system, a temperature control system (e.g., a thermostat), a shade/blind control system, a locking control system (e.g., door lock, window lock, etc.), a home entertainment automation system (e.g., TV control, sound system control, etc.), an irrigation control system, and/or other automation devices.
As described herein, in some of the present embodiments, some or all of the user's network1104, thefirst client devices1108,1110, the A/V recording andcommunication device1102, the smart-home hub device1112, thesensors1114, and theautomation devices1116 may be referred to as a security system, which may be installed at a property or premises.
With further reference toFIG. 11, thesystem1100 may also include various backend devices such as (but not limited to)storage devices1120,backend servers1122, andbackend APIs1124 that may be in network communication (e.g., over the user's network1104 and/or the network (Internet/PSTN)1106) with the A/V recording andcommunication devices1102, thehub device1112, thefirst client devices1108,1110, thesensors1114, and/or theautomation devices1116. In some embodiments, thestorage devices1120 may be a separate device from the backend servers1122 (as illustrated) or may be an integral component of thebackend servers1122. Thestorage devices1120 may be similar in structure and/or function to the storage device118 (FIG. 1). In addition, in some embodiments, thebackend servers1122 andbackend APIs1124 may be similar in structure and/or function to theserver120 and the backend API122 (FIG. 1), respectively.
With further reference toFIG. 11, thesystem1100 may also include asecurity monitoring service1126. Thesecurity monitoring service1126 may be operated by the same company that manufactures, sells, and/or distributes the A/V recording andcommunication devices1102, thehub device1112, thesensors1114, and/or theautomation devices1116. In other embodiments, thesecurity monitoring service1126 may be operated by a third-party company (e.g., a different company than the one that manufactured, sold, and/or distributed the A/V recording andcommunication devices1102, thehub device1112, thesensors1114, and/or the automation devices1116). In any of the present embodiments, thesecurity monitoring service1126 may have control of at least some of the features and components of the security system (e.g., thesecurity monitoring service1126 may be able to arm and/or disarm the security system, lock and/or unlock doors, activate and/or deactivate one or more of thesensors1114 and/or theautomation devices1116, etc.). For example, thesecurity monitoring service1126 may operate and control their own client devices and/or network of servers/backend devices for monitoring and/or controlling security systems. In such an example, the A/V recording andcommunication devices1102, thehub device1112, thesensors1114, and/or theautomation devices1116 may communicate with the client devices and/or one or more components of the network of servers/backend devices of thesecurity monitoring service1126 over the network (Internet/PSTN)1106 (in some embodiments, via one or more of the components of the network of backend servers/backend devices1118).
Thesystem1100 may also include one ormore client devices1108,1110 (alternatively referred to herein as the "first client devices1108,1110"), which in various embodiments may be configured to be in network communication and/or associated with the A/V recording andcommunication device1102. Thefirst client devices1108,1110 may comprise, for example, a mobile phone such as a smartphone, or a computing device such as a tablet computer, a laptop computer, a desktop computer, etc. In some embodiments, thefirst client devices1108,1110 may include a smart watch or a combination of a smart watch and a mobile phone. Thefirst client devices1108,1110 may include any or all of the components and/or functionality of the client device114 (FIG. 1) and/or the client device2902 (FIG. 29) described herein. In some embodiments, one or more of thefirst client devices1108,1110 may not be associated with the A/V recording andcommunication device1102.
With further reference toFIG. 11, thesystem1100 may also include at least one additional client device1128 (alternatively referred to as the "second client device1128"). Thesecond client device1128 may comprise, for example, a mobile phone such as a smartphone, or a computing device such as a tablet computer, a laptop computer, a desktop computer, etc. In some embodiments, thesecond client device1128 may include a smart watch or a combination of a smart watch and a mobile phone. In some examples, thesecond client device1128 may be associated with one or more A/V recording and communication devices (not shown). In other examples, thesecond client device1128 may not be associated with one or more A/V recording and communication devices.
With further reference toFIG. 11, thesystem1100 may include one or more third-party services1130(1)-1130(3) (alternatively referred to herein individually as third-party service1130 and/or in combination as third-party services1130). The backend server(s)1122 may be in network communication with the third-party services1130 to retrieve data (criminal data) representing criminal statistics. The criminal statistics may indicate incidents that have occurred within a geographic area. In some examples, the incidents may include criminal reports, such as, but not limited to, reported suspicious activity, reported crimes, and/or the like. In some examples, thebackend server1122 may retrieve the data continuously from the third-party services1130. In some examples, thebackend server1122 may retrieve the data from the third-party services1130 at given time intervals. The given time intervals may include, but are not limited to, every hour, day, week, month, or the like.
For a first example, a third-party service1130 may include a service that collects, stores, generates, filters, and/or provides criminal statistics for one or more geographic areas. The geographic areas may include, but are not limited to, countries, states, cities, and/or towns. For instance, the third-party services1130 may include CRIME REPORTS® or SPOTCRIME®. For a second example, a third-party service1130 may include a law enforcement agency that collects, stores, generates, filters, and/or provides criminal statistics for one or more geographic areas. As described above, the geographic areas may include, but are not limited to, countries, states, cities, and/or towns. For a third example, a third-party service1130 may include online resources that users can use to search for content, such as criminal statistics. For instance, the online resources may include, but are not limited to, search engines, social media sites, databases, and/or other online resources.
FIG. 12 is a functional block diagram illustrating an embodiment of an A/V recording andcommunication device1102 according to various aspects of the present disclosure. In some embodiments, the A/V recording andcommunication device1102 may represent, and further include one or more of the components from, the A/V recording andcommunication doorbell302, the A/V recording andcommunication security camera602, and/or thefloodlight controller802. Additionally, in some embodiments, the A/V recording andcommunication device1102 may omit one or more of the components shown inFIG. 12 and/or may include one or more additional components not shown inFIG. 12.
The A/V recording andcommunication device1102 may comprise aprocessing module1202 that is operatively connected to acamera1204, microphone(s)1206, amotion sensor1208, aspeaker1210, acommunication module1212, and a button1214 (in embodiments where the A/V recording andcommunication device1102 is a doorbell, such as the A/V recording and communication doorbell302). Theprocessing module1202 may comprise aprocessor1216,volatile memory1218, andnon-volatile memory1220, which includes adevice application1222. In various embodiments, thedevice application1222 may configure theprocessor1216 to captureimage data1224 using thecamera1204,audio data1226 using the microphone(s)1206,input data1228 using the button1214 (and/or thecamera1204 and/or themotion sensor1208, depending on the embodiment), and/ormotion data1230 using thecamera1204 and/or themotion sensor1208. In some embodiments, thedevice application1222 may also configure theprocessor1216 to generatetext data1232 describing theimage data1224, theaudio data1226, and/or theinput data1228, such as in the form of metadata, for example.
In addition, thedevice application1222 may configure theprocessor1216 to transmit theimage data1224, theaudio data1226, themotion data1230, theinput data1228, thetext data1232, and/or auser alert1234 to thefirst client devices1108,1110, thehub device1112, and/or thebackend server1122 using thecommunication module1212. In various embodiments, thedevice application1222 may also configure theprocessor1216 to generate and transmit anoutput signal1236 that may include theimage data1224, theaudio data1226, thetext data1232, theinput data1228, and/or themotion data1230. In some of the present embodiments, theoutput signal1236 may be transmitted to thebackend server1122 and/or thehub device1112 using thecommunication module1212, and thebackend server1122 and/or thehub device1112 may transmit (or forward) theoutput signal1236 to thefirst client devices1108,1110 and/or thebackend server1122 may transmit theoutput signal1236 to thehub device1112. In other embodiments, theoutput signal1236 may be transmitted directly to thefirst client devices1108,1110 and/or thehub device1112.
In further reference toFIG. 12, theimage data1224 may comprise image sensor data such as (but not limited to) exposure values and data regarding pixel values for a particular sized grid. Theimage data1224 may include still images, live video, and/or pre-recorded images and/or video. Theimage data1224 may be recorded by thecamera1204 in a field of view of thecamera1204.
In further reference to FIG. 12, the motion data 1230 may comprise motion sensor data generated in response to motion events. For example, the motion data 1230 may include an amount or level of a data type generated by the motion sensor 1208 (e.g., the voltage level output by the motion sensor 1208 when the motion sensor 1208 is a PIR type motion sensor). In some of the present embodiments, such as those where the A/V recording and communication device 1102 does not include the motion sensor 1208, the motion data 1230 may be generated by the camera 1204. In such embodiments, based on a frame-by-frame comparison of changes in the pixels of the image data 1224, it may be determined that motion is present.
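As an illustration, the camera-based motion determination described above might be sketched as a frame-by-frame pixel comparison. This is a minimal Python example only; the per-pixel difference and coverage thresholds are assumptions, not values from this disclosure:

```python
# Minimal sketch of camera-based motion detection by frame differencing.
# Frames are modeled as 2D lists of pixel intensities (0-255); the
# threshold values below are hypothetical.

PIXEL_DELTA = 25       # per-pixel change considered significant
MOTION_FRACTION = 0.1  # fraction of changed pixels that signals motion

def motion_detected(prev_frame, curr_frame):
    """Compare two frames pixel by pixel and report whether motion is present."""
    changed = total = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            total += 1
            if abs(c - p) > PIXEL_DELTA:
                changed += 1
    return total > 0 and changed / total >= MOTION_FRACTION

# Example: a 2x2 region where three of four pixels change sharply.
frame_a = [[10, 10], [10, 10]]
frame_b = [[200, 200], [200, 10]]
print(motion_detected(frame_a, frame_b))  # True
```

A production device would operate on camera frames rather than small lists, but the same thresholded-difference principle applies.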
The input data 1228 may include data generated in response to an input to the button 1214. The button 1214 (which may include similar design and functionality to that of the front button 306 (FIG. 3)) may receive an input (e.g., a press, a touch, a series of touches and/or presses, etc.) and may generate the input data 1228 in response that is indicative of the type of input. In embodiments where the A/V recording and communication device 1102 is not a doorbell, the A/V recording and communication device 1102 may not include the button 1214, and in such embodiments, the A/V recording and communication device 1102 may not generate the input data 1228 and/or the input data 1228 may be generated by another component of the A/V recording and communication device 1102 (e.g., the camera 1204).
With further reference to FIG. 12, a user alert 1234 may be generated by the processor 1216 and transmitted, using the communication module 1212, to the first client devices 1108, 1110, the hub device 1112, and/or the backend server 1122. For example, in response to detecting motion using the camera 1204 and/or the motion sensor 1208, the A/V recording and communication device 1102 may generate and transmit the user alert 1234. In some of the present embodiments, the user alert 1234 may include at least the image data 1224, the audio data 1226, the text data 1232, and/or the motion data 1230. Upon receiving the user alert 1234, the user of the first client device 1108, 1110 may be able to share the user alert 1234 (or at least some of the contents of the user alert 1234, such as the image data 1224) with a geographic area network, which is described in more detail below.
With further reference to FIG. 12, the non-volatile memory 1220 stores location data 1238. The location data 1238 may indicate the geographic location of the A/V recording and communication device 1102. For example, the location data 1238 may indicate, but is not limited to, the street address, zip code, city, state, property, neighborhood, GPS coordinates, distance from one or more cell towers, and/or the like of where the A/V recording and communication device 1102 is located. In some examples, the processor 1216 of the A/V recording and communication device 1102 may transmit, using the communication module 1212, the location data 1238 to the first client devices 1108, 1110, the hub device 1112, and/or the backend server 1122. In some embodiments, the location data 1238 may be used by the backend server 1122 when determining which crime reports, image data, and/or other data to include in the second criminal statistics provided to the user.
As described herein, at least some of the processes of the hub device 1112, the backend server 1122, and/or the first client device 1108, 1110 may be executed by the A/V recording and communication device 1102. For example, the device application 1222 may configure the processor 1216 to analyze the image data 1224 in order to determine if the image data 1224 depicts a suspicious activity, such as a criminal activity. For example, computer vision processing and/or image processing, as described herein, may be performed by the processor 1216 of the A/V recording and communication device 1102 to determine that the image data 1224 depicts the suspicious activity. For example, in any of the present embodiments, the image data 1224 generated by the camera 1204 may be analyzed to determine activity data 1240. In some of the present embodiments, one or more of the image data 1224, the motion data 1230, and the audio data 1226 may be used to determine the activity data 1240. The computer vision and/or image processing may be executed using computer vision and/or image processing algorithms. Examples of computer vision and/or image processing algorithms may include, without limitation, spatial gesture models that are 3D model-based and/or appearance-based. 3D model-based algorithms may include skeletal and volumetric models, where volumetric models may include NURBS, primitives, and/or super-quadrics, for example.
In some embodiments, the processor 1216 of the A/V recording and communication device 1102 may compare the activity data 1240 to an activity database 1242 to determine what, if any, activities the image data 1224 depicts in the field of view of the A/V recording and communication device 1102. For example, the activity database 1242 may store image data corresponding to images and/or video footage that depict various activities, where the image data may be labeled (e.g., tagged, such as in the form of metadata) to indicate an activity type 1244 (alternatively referred to herein as the "type of activity 1244") depicted by each image and/or video represented by the image data. Based on the comparing, the processor 1216 of the A/V recording and communication device 1102 may match the activity data 1240 from the image data 1224 to the image data stored in the activity database 1242. The processor 1216 of the A/V recording and communication device 1102 may then use the match to determine that the activity data 1240 represents an activity and/or to determine the type of activity 1244 that the activity data 1240 represents. In some examples, when the activity data 1240 represents multiple activities, the processor 1216 of the A/V recording and communication device 1102 may perform a similar analysis to identify each activity represented by the activity data 1240 and/or the respective type of activity 1244 associated with each of the activities represented by the activity data 1240.
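The comparison of the activity data 1240 against the labeled entries of the activity database 1242 can be illustrated with a simplified lookup. The feature signatures and labels below are hypothetical stand-ins; an actual implementation would compare image-derived features or learned embeddings rather than exact string tuples:

```python
# Hypothetical sketch of matching extracted activity data against a labeled
# activity database. The "signatures" here are simple tuples and the match
# is exact, purely for illustration.

ACTIVITY_DATABASE = {
    # feature signature -> activity type 1244
    ("person", "vehicle", "window_break"): "vehicle break-in",
    ("person", "person", "physical_contact"): "assault",
    ("person", "package", "carry_away"): "parcel theft",
}

def classify_activity(activity_data):
    """Return the activity type matching the extracted signature, if any."""
    return ACTIVITY_DATABASE.get(tuple(activity_data))

print(classify_activity(["person", "package", "carry_away"]))  # parcel theft
```

When no database entry matches, the function returns `None`, corresponding to the "what, if any, activities" determination above.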
In some examples, the types of activities 1244 may correspond to suspicious activities, such as various categories of criminal behavior. For example, the types of activities 1244 may include, but are not limited to, vehicle crimes (e.g., burglary, theft, etc.), assaults, break-ins (e.g., burglary to a property, vehicle, etc.), shootings, murder, homicides, kidnapping, sex crimes, robberies, fraud, arson, embezzlement, forgery, solicitation, parcel theft, stabbing, suspicious package, armed robberies, theft, breaking and entering, vandalism, and/or any types of attempted crime and/or suspicious activity. As such, in some examples, the activity database 1242 may include image data that depicts each of the suspicious activities, which the processor 1216 of the A/V recording and communication device 1102 may use to match with the activity data 1240 from the image data 1224.
For a first example, the processor 1216 of the A/V recording and communication device 1102 may match the activity data 1240 from the image data 1224 to image data, stored in the activity database 1242, that depicts a person breaking into a vehicle. Based on the match, the processor 1216 of the A/V recording and communication device 1102 may determine that the image data 1224 depicts a break-in (e.g., a burglary) of a vehicle (e.g., that the image data 1224 depicts an activity). For a second example, the processor 1216 of the A/V recording and communication device 1102 may match the activity data 1240 from the image data 1224 to image data, stored in the activity database 1242, that depicts a first person assaulting a second person. Based on the match, the processor 1216 of the A/V recording and communication device 1102 may determine that the image data 1224 depicts an assault (e.g., that the image data 1224 depicts an activity). For a third example, the processor 1216 of the A/V recording and communication device 1102 may match the activity data 1240 from the image data 1224 to image data, stored in the activity database 1242, that depicts a person grabbing an object from a property and walking off with the object. Based on the match, the processor 1216 of the A/V recording and communication device 1102 may determine that the image data 1224 depicts a larceny (e.g., that the image data 1224 depicts an activity).
In some embodiments, in addition to or in place of comparing the image data 1224 to image data stored in the activity database 1242, certain sequences, patterns, and/or characteristics of events may be determined based on an analysis of the image data 1224, motion data 1230, input data 1228, and/or audio data 1226, and the sequences, patterns, and/or characteristics may be compared against sequences, patterns, and/or characteristics stored in the activity database 1242 that are indicative of activity types 1244. For example, it may be determined from the image data 1224 and/or audio data 1226 that a person loiters in proximity to a vehicle for a threshold period of time, then a door of the vehicle opens, and then an alarm of the vehicle sounds. This information (e.g., person loitering near car, car door opening, alarm sounding) may be compared against the sequences, patterns, and/or characteristics stored in the activity database 1242, and the result may be that the image data 1224 and/or audio data 1226 are indicative of a vehicle break-in. If the analysis of the image data 1224 further yielded the determination that the vehicle moved from its original location, the determination may be that the vehicle was stolen. In either example, the activity type 1244 may be determined to be a vehicle-related crime and/or suspicious activity.
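The sequence-based determination described above might be sketched as matching an ordered list of detected events against stored event patterns. The event names and patterns below are illustrative assumptions modeled on the vehicle break-in example:

```python
# Sketch of event-sequence matching: an observed ordered list of events is
# compared against stored patterns indicative of an activity type 1244.
# Pattern contents are illustrative, not from the disclosure.

KNOWN_PATTERNS = {
    ("loiter_near_vehicle", "vehicle_door_opens", "alarm_sounds"): "vehicle break-in",
    ("loiter_near_vehicle", "vehicle_door_opens", "alarm_sounds",
     "vehicle_departs"): "vehicle theft",
}

def match_sequence(events):
    """Return the activity for the longest stored pattern matching a prefix of events."""
    best = None
    for pattern, activity in KNOWN_PATTERNS.items():
        if tuple(events[:len(pattern)]) == pattern:
            if best is None or len(pattern) > best[0]:
                best = (len(pattern), activity)
    return best[1] if best else None

events = ["loiter_near_vehicle", "vehicle_door_opens", "alarm_sounds",
          "vehicle_departs"]
print(match_sequence(events))  # vehicle theft
```

Preferring the longest matching pattern mirrors the refinement in the text: the same three initial events indicate a break-in, while the added departure event escalates the determination to a theft.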
In some examples, based on determining that the image data 1224 depicts the activity and/or based on determining the type of activity 1244, the processor 1216 of the A/V recording and communication device 1102 may generate a user alert 1234 that indicates that the image data 1224 depicts the activity and/or the type of activity 1244. The processor 1216 of the A/V recording and communication device 1102 may then transmit, using the communication module 1212, the user alert 1234 to the first client device 1108, 1110, the hub device 1112, the backend server 1122, and/or the security monitoring service 1126.
In some examples, in addition to generating the user alert 1234, the processor 1216 of the A/V recording and communication device 1102 may generate information 1246 associated with the image data 1224. The information 1246 may describe the activity and/or the type of activity 1244 depicted by the image data 1224. After generating the information 1246, the processor 1216 of the A/V recording and communication device 1102 may transmit, using the communication module 1212, the information 1246 to the first client device 1108, 1110, the hub device 1112, the backend server 1122, and/or the security monitoring service 1126.
FIG. 13 is a functional block diagram illustrating an example of the smart-home hub device 1112 (alternatively referred to herein as the hub device 1112) according to various aspects of the present disclosure. The hub device 1112 may be, for example, one or more of a Wi-Fi hub, a smart-home hub, a hub of a home security/alarm system, a gateway device, a hub for a legacy security/alarm system (e.g., a hub for connecting a pre-existing security/alarm system to the network (Internet/PSTN) 1106 for enabling remote control of the hub device 1112), and/or another similar device. The hub device 1112 may comprise a processing module 1302 that is operatively connected to a communication module 1304. In some examples, the hub device 1112 may comprise one or more of a camera (not shown), a microphone (not shown), and a speaker (not shown). The processing module 1302 may comprise volatile memory 1306, a processor 1308, and non-volatile memory 1310, which includes a smart-home hub application 1312.
In various embodiments, the smart-home hub application 1312 may configure the processor 1308 to receive sensor data from the sensors 1114 and/or the automation devices 1116. For example, the sensor data may include a current state (e.g., opened/closed for door and window sensors, motion detected for motion sensors, living room lights on/off for a lighting automation system, etc.) of each of the sensors 1114 and/or the automation devices 1116. In some examples, the sensor data may be received in response to sensor triggers. The sensor triggers may be a door opening/closing, a window opening/closing, lights being turned on/off, blinds being opened/closed, etc. As such, the sensor data may include the current state of the sensors 1114 and/or the automation devices 1116 as well as any updates to the current state based on sensor triggers.
With further reference to FIG. 13, the smart-home hub application 1312 may configure the processor 1308 to receive the audio data 1226, the text data 1232, the image data 1224, the motion data 1230, the input data 1228, the user alerts 1234, and/or the location data 1238 from the A/V recording and communication device 1102 (in some embodiments, via the backend server 1122 and/or the first client devices 1108, 1110) using the communication module 1304. For example, the hub device 1112 may receive and/or retrieve (e.g., after receiving a signal from the A/V recording and communication device 1102 that the A/V recording and communication device 1102 has been activated) the image data 1224, the input data 1228, and/or the motion data 1230 from the A/V recording and communication device 1102 and/or the backend server 1122 in response to motion being detected by the A/V recording and communication device 1102. Additionally, the smart-home hub application 1312 may configure the processor 1308 to transmit, using the communication module 1304, the audio data 1226, the text data 1232, the image data 1224, the motion data 1230, the input data 1228, the user alerts 1234, and/or the location data 1238 to the first client devices 1108, 1110 and/or the backend server 1122.
As described herein, at least some of the processes of the A/V recording and communication device 1102, the backend server 1122, and/or the first client device 1108, 1110 may be executed by the hub device 1112. For example, the smart-home hub application 1312 may configure the processor 1308 to analyze the image data 1224 in order to determine if the image data 1224 depicts an activity and/or the type of activity 1244 depicted by the image data, using the processes described above. In some examples, based on determining that the image data 1224 depicts the activity and/or based on determining the type of activity 1244, the processor 1308 of the hub device 1112 may generate a user alert 1234 that indicates that the image data 1224 depicts the activity and/or the type of activity 1244. The processor 1308 of the hub device 1112 may then transmit, using the communication module 1304, the user alert 1234 to the first client devices 1108, 1110, the backend server 1122, and/or the security monitoring service 1126. Additionally, the processor 1308 of the hub device 1112 may generate information 1246 for the image data 1224, where the information 1246 may describe the activity and/or the type of activity 1244.
As further illustrated in FIG. 13, the non-volatile memory 1310 may store consent data 1314. For example, the processor 1308 of the hub device 1112 may receive, using the communication module 1304, consent to share the image data 1224 from the A/V recording and communication devices 1102, the first client devices 1108, 1110, and/or the backend server 1122, where the consent may be represented by the consent data 1314. In addition to receiving the consent (and, in some embodiments, included with the consent), in some examples, the processor 1308 of the hub device 1112 may further receive, using the communication module 1304, information 1246 describing the image data 1224 from the first client device 1108, 1110 and/or the A/V recording and communication device 1102. The information 1246 may include, but is not limited to, comments, messages, and/or tags that describe one or more activities (e.g., suspicious behavior) depicted by the image data 1224.
In some embodiments, as described herein, a user uploading the image data 1224, audio data 1226, text data 1232, input data 1228, and/or other data to a network of users associated with a geographic area (e.g., a neighborhood) may be indicative of his or her consent. As such, in response to the user uploading (or sharing) the data with the network, the consent data 1314 may be generated and associated with the data.
For a first example, the information 1246 may include a tag that indicates a category of criminal behavior that is depicted by the image data 1224, such as larceny. For a second example, the information 1246 may include a comment describing an activity depicted by the image data 1224. For instance, the comment may include "A person picked up a package from my porch and then walked off with the package." After receiving the consent data 1314 and/or the information 1246, the processor 1308 of the hub device 1112 may transmit, using the communication module 1304, the consent data 1314 and/or the information 1246 to the backend server 1122.
FIG. 14 is a functional block diagram illustrating one embodiment of the backend server 1122 according to various aspects of the present disclosure. The backend server 1122 may comprise a communication module 1402 and a processing module 1404, which includes a processor 1406, volatile memory 1408, and non-volatile memory 1410. The communication module 1402 may allow the backend server 1122 to access and communicate with devices connected to the network (Internet/PSTN) 1106 (e.g., the A/V recording and communication device 1102, the hub device 1112, the first client devices 1108, 1110, a device controlled by the security monitoring service 1126, the second client device 1128, and/or the third-party services 1130). The non-volatile memory 1410 may include a server application 1412 that configures the processor 1406 to receive and/or retrieve (e.g., obtain), using the communication module 1402, the audio data 1226, the text data 1232, the input data 1228, the user alerts 1234, the image data 1224, and/or the motion data 1230 from the A/V recording and communication device 1102 (e.g., in the output signal 1236) and/or the hub device 1112. Additionally, the server application 1412 may configure the processor 1406 to receive and/or retrieve, using the communication module 1402, the location data 1238, the consent data 1314, and/or the information 1246 from the A/V recording and communication device 1102, the first client devices 1108, 1110, and/or the hub device 1112. The server application 1412 may also configure the processor 1406 to transmit (and/or forward) the audio data 1226, the text data 1232, the input data 1228, the user alerts 1234, the image data 1224, and/or the motion data 1230 to the first client devices 1108, 1110 and/or the hub device 1112 using the communication module 1402.
In further reference to FIG. 14, the non-volatile memory 1410 may also include source identifying data 1414 that may be used to identify the A/V recording and communication device 1102, the hub device 1112, and/or the first client devices 1108, 1110. In addition, the source identifying data 1414 may be used by the processor 1406 of the backend server 1122 to determine whether the first client devices 1108, 1110 are associated with the A/V recording and communication device 1102 and/or the hub device 1112.
In some embodiments, the server application 1412 may further configure the processor 1406 to generate and transmit a report signal (not shown) to a third-party client device (not shown) using the communication module 1402, which may be associated with a law enforcement agency or the security monitoring service, for example. The report signal, which may be the user alert 1234 in some examples, may include the image data 1224, the audio data 1226, and/or the text data 1232. In such embodiments, an operator of the third-party client device may be able to view the image data 1224 and/or the text data 1232 to help in making a determination of whether a person in the field of view of the A/V recording and communication device 1102 is suspicious and/or performing suspicious activities. The processor 1406 of the backend server 1122 may then receive, using the communication module 1402, data from the third-party client device that indicates the suspicious activities.
As described herein, at least some of the processes of the A/V recording and communication device 1102, the hub device 1112, and/or the first client device 1108, 1110 may be executed by the backend server 1122. For example, the server application 1412 may configure the processor 1406 to analyze the image data 1224 in order to determine if the image data 1224 depicts an activity and/or the type of activity 1244 depicted by the image data, using the processes described above. In some examples, based on determining that the image data 1224 depicts the activity and/or based on determining the type of activity 1244, the processor 1406 of the backend server 1122 may generate a user alert 1234 that indicates that the image data 1224 depicts the activity and/or the type of activity 1244. The processor 1406 of the backend server 1122 may then transmit, using the communication module 1402, the user alert 1234 to the first client devices 1108, 1110, the hub device 1112, and/or the security monitoring service 1126. Furthermore, the processor 1406 of the backend server 1122 may generate information 1246 for the image data 1224, where the information 1246 describes the activity and/or the type of activity 1244.
As further illustrated in FIG. 14, the non-volatile memory 1410 stores first criminal data 1416. The first criminal data 1416 may represent criminal statistics (referred to as "first criminal statistics") that the backend server 1122 receives and/or retrieves from the third-party services 1130. For example, the server application 1412 may configure the processor 1406 to receive and/or retrieve, using the communication module 1402, the first criminal data 1416 from the third-party services 1130. In some examples, the processor 1406 of the backend server 1122 retrieves and/or receives the first criminal data 1416 at given time intervals, such as, but not limited to, every hour, day, week, month, year, and/or the like. In some examples, the processor 1406 of the backend server 1122 retrieves and/or receives the first criminal data 1416 continuously (and/or perpetually) as the first criminal data 1416 is collected, stored, generated, filtered, and/or provided by the third-party services 1130. Still, in some examples, the processor 1406 of the backend server 1122 retrieves and/or receives the first criminal data 1416 in response to receiving requests for criminal statistics (referred to as "second criminal statistics"), as described below.
In some embodiments, the first criminal statistics represented by the first criminal data 1416 may correspond to a geographic area, such as a town, city, state, or country. Additionally, the first criminal statistics represented by the first criminal data 1416 may be organized into first categories 1418. In some examples, the first categories 1418 may correspond to categories of criminal behavior (e.g., categories of crimes). For example, the first categories 1418 may include, but are not limited to, one or more of vehicle crimes (e.g., burglary, theft, etc.), assaults, break-ins (e.g., burglary to a property, vehicle, etc.), shootings, murder, homicides, kidnapping, sex crimes, robberies, fraud, arson, embezzlement, forgery, solicitation, parcel theft, stabbing, suspicious package, armed robberies, theft, breaking and entering, vandalism, and/or any types of attempted crime and/or suspicious activity.
As further illustrated in FIG. 14, the non-volatile memory 1410 stores statistics requests 1420. For example, the server application 1412 may configure the processor 1406 to receive, using the communication module 1402, the statistics requests 1420 from client devices, such as the first client devices 1108, 1110 and/or the second client device 1128. A statistics request 1420 may include a geographic area 1422, a time period 1424, a frequency 1426, and/or categories 1428 (referred to as "second categories 1428"). The geographic area 1422 may correspond to an area for which a user is requesting second criminal statistics. For example, the geographic area 1422 may include, but is not limited to, a zip code, a town, a city, a state, a country, or the like. As another example, the geographic area 1422 may include an area of interest from which the user wishes to gather the second criminal statistics. The area of interest may include an area around the user's address, such as a radius around the user's address (e.g., a five-mile radius, a ten-mile radius, etc.), a polygon around the user's address, a triangle around the user's address, a user-defined area including the user's address, or the like.
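One way to realize the "radius around the user's address" area of interest is a great-circle distance test between the user's address and a device's GPS coordinates. The sketch below is an assumption about how such a test could be implemented; the coordinates are illustrative:

```python
# Sketch of a radius-based area-of-interest test using the haversine
# great-circle distance formula (Earth radius approximated as 3,959 miles).
import math

def within_radius(center, point, radius_miles):
    """Return True if point (lat, lon) lies within radius_miles of center."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*center, *point))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * math.asin(math.sqrt(a)) <= radius_miles

home = (47.6205, -122.3493)    # user's address (illustrative coordinates)
device = (47.6101, -122.3421)  # a nearby A/V device
print(within_radius(home, device, 5))  # True
```

Polygonal or user-defined areas of interest would substitute a point-in-polygon test for the distance comparison, but the containment decision plays the same role.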
The time period 1424 may include a range of dates and/or times for which the second criminal statistics are requested. For a first example, the time period 1424 may indicate a range of dates, such as October 17 through October 24. For a second example, the time period 1424 may indicate a time range, such as, but not limited to, the last day, week, month, year, and/or the like. The frequency 1426 may correspond to a time interval at which the user wishes to receive the second criminal statistics. For example, the frequency 1426 may include, but is not limited to, each hour, each day, each week, each month, and/or the like. In some examples, the backend server 1122 uses the frequency 1426 to determine when to create and/or transmit the second criminal statistics to the client device that transmitted the statistics request 1420, such as the first client devices 1108, 1110 and/or the second client device(s) 1128.
The second categories 1428 may indicate categories of criminal behavior (e.g., categories of crimes and/or suspicious activities) for which the user is requesting the second criminal statistics and/or that the user wishes to use in order to organize the second criminal statistics. For example, the second categories 1428 may include, but are not limited to, one or more of vehicle crimes (e.g., burglary, theft, etc.), assaults, break-ins (e.g., burglary to a property, vehicle, etc.), shootings, murder, homicides, kidnapping, sex crimes, robberies, fraud, arson, embezzlement, forgery, solicitation, parcel theft, stabbing, suspicious package, armed robberies, theft, breaking and entering, vandalism, and/or any types of attempted crime and/or suspicious activity. In some examples, the second categories 1428 may be similar to the first categories 1418. In some examples, one or more of the second categories 1428 differ from one or more of the first categories 1418. For example, as described herein, the one or more first categories 1418 may be filtered and/or sorted into the one or more second categories 1428.
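A statistics request 1420 carrying the elements described above might be modeled as a simple record. The field names below are assumptions that mirror the geographic area 1422, time period 1424, frequency 1426, and second categories 1428:

```python
# Illustrative data structure for a statistics request 1420; field names
# are assumptions mirroring the elements described in the text.
from dataclasses import dataclass, field

@dataclass
class StatisticsRequest:
    geographic_area: str                 # e.g., a zip code or user-defined region
    time_period: tuple                   # (start, end) date range
    frequency: str = "weekly"            # delivery interval for second statistics
    categories: list = field(default_factory=list)  # second categories 1428

request = StatisticsRequest(
    geographic_area="98109",
    time_period=("2018-10-17", "2018-10-24"),
    categories=["vehicle crime", "parcel theft"],
)
print(request.frequency)  # weekly
```

Optional fields with defaults reflect the point below that a request may omit some elements, leaving the server to fall back to default values.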
As further illustrated in FIG. 14, the non-volatile memory 1410 may store user accounts 1430. The server application 1412 may configure the processor 1406 to generate user accounts 1430 for users that request second criminal statistics from the backend server 1122, such as by transmitting statistics requests 1420. A user account 1430 associated with a user may include data indicating an identity of the user, a location of the user and/or the user's A/V recording and communication devices and/or hub device (e.g., physical address, physical location, GPS coordinates, etc.), contact information (e.g., phone number, email address, fax number, etc.) for a client device associated with the user, a region for which the user is interested in receiving the criminal statistics (e.g., as defined by the user, a radius around the location of the user, etc.), and one or more identities of one or more A/V recording and communication devices associated with the user and/or the client device. The user account 1430 may further include data that associates the user account 1430 with a geographic area 1422 included in a statistics request 1420, a time period 1424 included in the statistics request 1420, a frequency 1426 included in the statistics request 1420, and/or second categories 1428 included in the statistics request 1420. In some examples, the processor 1406 of the backend server 1122 may then use the user accounts 1430 to determine how to create the second criminal statistics for the users and/or when to transmit the second criminal statistics to the users.
For example, based on receiving a statistics request 1420 from a client device, such as the second client device 1128 (and/or, similarly, the first client devices 1108, 1110), the processor 1406 of the backend server 1122 may generate the second criminal statistics for the user associated with the second client device 1128. To generate the second criminal statistics, the processor 1406 of the backend server 1122 may analyze the first criminal data 1416 representing the first criminal statistics in order to identify a portion (referred to as the "first portion") of the first criminal data 1416 that corresponds to the geographic area 1422 included in the statistics request 1420. For example, the processor 1406 of the backend server 1122 may analyze the first criminal data 1416 in order to identify criminal statistics (e.g., criminal reports) that are associated with crimes and/or suspicious activities that occurred within the geographic area 1422. In some examples, the processor 1406 of the backend server 1122 then uses the first portion of the first criminal data 1416 to generate the second criminal statistics.
Additionally, or alternatively, in some examples, the processor 1406 of the backend server 1122 may further analyze the first portion of the first criminal data 1416 to identify a portion (referred to as the "second portion") of the first criminal data 1416 that corresponds to both the geographic area 1422 and the time period 1424 included in the statistics request 1420. For example, the processor 1406 of the backend server 1122 may analyze the first criminal data 1416 in order to identify criminal statistics (e.g., criminal reports) that are associated with crimes and/or suspicious activities that occurred within the geographic area 1422 during the time period 1424. In such examples, the processor 1406 of the backend server 1122 may then generate the second criminal statistics using the second portion of the first criminal data 1416.
Additionally, or alternatively, in some examples, the processor 1406 of the backend server 1122 may further analyze the first portion of the first criminal data 1416 to identify a portion (referred to as the "third portion") of the first criminal data 1416 that corresponds to both the geographic area 1422 and the second categories 1428 included in the statistics request 1420. For example, the processor 1406 of the backend server 1122 may analyze the first criminal data 1416 in order to identify criminal statistics (e.g., criminal reports) that are associated with crimes and/or suspicious activities that occurred within the geographic area 1422 and are associated with the second categories 1428 of crimes and/or suspicious activities that the user is requesting. In such examples, the processor 1406 of the backend server 1122 may then generate the second criminal statistics using the third portion of the first criminal data 1416.
Additionally, or alternatively, in some examples, the processor 1406 of the backend server 1122 may further analyze the first portion, the second portion, and/or the third portion of the first criminal data 1416 to identify a portion (referred to as the "fourth portion") of the first criminal data 1416 that corresponds to each of the geographic area 1422, the time period 1424, and the second categories 1428. For example, the processor 1406 of the backend server 1122 may analyze the first criminal data 1416 in order to identify criminal statistics (e.g., criminal reports) that are associated with crimes and/or suspicious activities that occurred within the geographic area 1422, during the time period 1424, and which are associated with the second categories 1428 of crimes and/or suspicious activities that the user is requesting. In such examples, the processor 1406 of the backend server 1122 may then generate the second criminal statistics using the fourth portion of the first criminal data 1416.
In some examples, when creating the second criminal statistics, the processor 1406 of the backend server 1122 may determine one or more of the geographic area, the time period, and/or the second categories when the one or more of the geographic area 1422, the time period 1424, and/or the second categories 1428 are not included in the statistics request 1420. For a first example, the processor 1406 of the backend server 1122 may utilize a default geographic area, such as, but not limited to, the zip code, the town, the city, the state, the country, a radius, or the like in which the user's address is included. Additionally, or alternatively, and for a second example, the processor 1406 of the backend server 1122 may utilize a default time period, such as, but not limited to, a day, a week, a month, a year, and/or the like. Additionally, or alternatively, and for a third example, the processor 1406 of the backend server 1122 may utilize one or more default second categories, such as, but not limited to, one or more of vehicle crimes (e.g., burglary, theft, etc.), assaults, break-ins (e.g., burglary to a property, vehicle, etc.), shootings, murder, homicides, kidnapping, sex crimes, robberies, fraud, arson, embezzlement, forgery, solicitation, parcel theft, stabbing, suspicious package, armed robberies, theft, breaking and entering, vandalism, and/or any types of attempted crime and/or suspicious activity.
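The selection of the first, second, third, or fourth portion of the first criminal data 1416, including the fallback to default values just described, can be sketched as a single filter. The report records and default values below are hypothetical:

```python
# Sketch of filtering the first criminal data 1416 down to the portion
# matching a request's area, time period, and categories, with fallback to
# hypothetical default values when an element is omitted from the request.

DEFAULT_AREA = "98109"
DEFAULT_CATEGORIES = ["burglary", "assault", "parcel theft"]

def filter_reports(reports, area=None, period=None, categories=None):
    """Keep reports matching the requested area/period/categories."""
    area = area or DEFAULT_AREA
    categories = categories or DEFAULT_CATEGORIES
    selected = []
    for r in reports:
        if r["area"] != area:
            continue
        if period and not (period[0] <= r["date"] <= period[1]):
            continue
        if r["category"] not in categories:
            continue
        selected.append(r)
    return selected

reports = [
    {"area": "98109", "date": "2018-10-20", "category": "burglary"},
    {"area": "98109", "date": "2018-09-01", "category": "burglary"},
    {"area": "10001", "date": "2018-10-20", "category": "assault"},
]
print(len(filter_reports(reports, period=("2018-10-17", "2018-10-24"))))  # 1
```

With no time period supplied, the same call would return both in-area burglary reports, corresponding to the "first portion" that is constrained by geographic area alone.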
In some examples, theprocessor1406 of thebackend server1122 may further generate the second criminal statistics using image data generated by one or more A/V recording and communication devices that are located within thegeographic area1422 and/or information describing the image data. For example, theprocessor1406 of thebackend server1122 may utilize thelocation data1238 to determine that the A/V recording andcommunication device1102 is located within thegeographic area1422. Based on the determination, theprocessor1406 of thebackend server1122 may determine that the user associated with the A/V recording andcommunication devices1102 has provided consent to shareimage data1224 generated by the A/V recording andcommunication device1102. Furthermore, theprocessor1406 of thebackend server1122 may determine that theimage data1224 was generated by the A/V recording andcommunication device1102 during the time period corresponding to the second criminal statistics, such as thetime period1424. Additionally, in some examples, theprocessor1406 of thebackend server1122 may determine that theimage data1224 depicts an activity that corresponds to one of the second categories corresponding to the second criminal statistics, such as one of thesecond categories1428. In response, theprocessor1406 of thebackend server1122 may utilize theimage data1224 and/or theinformation1246 when creating the second criminal statistics.
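The eligibility checks for contributing a device's image data (device located in the area, owner consent on file, clip recorded in the time period, activity in a requested category) can be sketched as one predicate. The dictionary fields and the injected `in_area`/`consented` callables are assumptions standing in for the server's location and consent lookups.

```python
def image_data_eligible(device, clip, area, start, end, categories,
                        in_area, consented):
    """Decide whether a device's clip may contribute to the second criminal statistics.

    `device` and `clip` are hypothetical dicts; `in_area(location, area)` and
    `consented(device_id)` stand in for the server's location and consent checks.
    """
    if not in_area(device["location"], area):        # device within the geographic area?
        return False
    if not consented(device["id"]):                  # owner opted in to sharing?
        return False
    if not (start <= clip["recorded_at"] <= end):    # recorded in the time period?
        return False
    return clip["activity"] in categories            # depicts a requested category?
```

Each check mirrors one of the determinations in the paragraph above, applied in order so the cheapest rejections happen first.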
For example, if theinformation1246 describes a suspicious activity depicted within theimage data1224, theprocessor1406 of thebackend server1122 may analyze theinformation1246 to determine asecond category1428 corresponding to theimage data1224. For instance, if theinformation1246 describes that theimage data1224 depicts a person breaking into a vehicle, then theprocessor1406 of thebackend server1122 may determine that theimage data1224 corresponds to a break-in (e.g., burglary)category1428 and/or avehicle crime category1428. For a second example, if theinformation1246 describes that theimage data1224 depicts a first person assaulting a second person, then theprocessor1406 of thebackend server1122 may determine that theimage data1224 corresponds to anassault category1428. For a third example, if theinformation1246 describes that theimage data1224 depicts a person grabbing an object from a property and walking off with the object, then theprocessor1406 of thebackend server1122 may determine that theimage data1224 corresponds to alarceny category1428. In any of these examples, in response to determining that theimage data1224 corresponds to one of thecategories1428, theprocessor1406 of thebackend server1122 may add a statistic associated with theimage data1224 to the second criminal statistics. For example, if theimage data1224 is associated with larceny, theprocessor1406 of thebackend server1122 may add an additional larceny to the second criminal statistics.
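The description-to-category mapping and the per-category tally can be sketched as keyword matching over the free-text information. The keyword table below is a hypothetical stand-in; the specification does not define how descriptions are matched to categories.

```python
from collections import Counter

# Illustrative keyword-to-category table; the actual mapping is unspecified.
KEYWORD_CATEGORIES = {
    "breaking into a vehicle": ("break-in", "vehicle crime"),
    "assaulting": ("assault",),
    "grabbing an object": ("larceny",),
}

def categorize(description):
    """Return every category whose keyword appears in the free-text description."""
    matched = []
    for keyword, cats in KEYWORD_CATEGORIES.items():
        if keyword in description:
            matched.extend(cats)
    return matched

def add_to_statistics(stats, description):
    """Increment the per-category tallies for one described clip."""
    for category in categorize(description):
        stats[category] += 1

stats = Counter()
add_to_statistics(stats, "video depicts a person grabbing an object from a porch")
```

A single description may map to more than one category (as in the vehicle break-in example above), in which case each matching tally is incremented.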
Additionally, or alternatively, to using theinformation1246, theprocessor1406 of thebackend server1122 may utilize the analysis of the image data1224 (e.g., by comparing the image data to theactivity database1242 and/or by comparing patterns, sequences, and/or characteristics from the data generated by the A/V recording andcommunication device1102 to theactivity database1242, as described above) to determine thesecond category1428 corresponding to theimage data1224. For example, theprocessor1406 of thebackend server1122 may match theactivity data1240 from theimage data1224 to image data, stored in theactivity database1242, that depicts a person grabbing an object from a property and walking off with the object. Based on the match, theprocessor1406 of thebackend server1122 may determine that theimage data1224 depicts a larceny. In response, theprocessor1406 of thebackend server1122 may generateinformation1246 describing that theimage data1224 depicts a larceny. Additionally, theprocessor1406 of thebackend server1122 may add a statistic associated with theimage data1224 to the second criminal statistics. For example, theprocessor1406 of thebackend server1122 may add an additional larceny to the second criminal statistics.
In some examples, when using theimage data1224 to create the second criminal statistics, theprocessor1406 of thebackend server1122 may include theimage data1224 with the second criminal statistics. As such, the user requesting the second criminal statistics may be able to view theimage data1224 after receiving the second criminal statistics. In other examples, when using theimage data1224 to create the second criminal statistics, theprocessor1406 of thebackend server1122 may include a link to view theimage data1224. As such, the user requesting the second criminal statistics may be able to use the link to request theimage data1224 from thebackend server1122.
After generating the second criminal statistics, theserver application1412 may configure theprocessor1406 to transmit, using thecommunication module1402, secondcriminal data1432 representing the second criminal statistics to the client device that transmitted thestatistics request1420, such as thesecond client device1128 in the examples above. As discussed above, the second criminal statistics may be organized using one or more second categories, such as thesecond categories1428. In some examples, the second categories may be similar to thefirst categories1418. In some examples, one or more of the second categories may be different than one or more of thefirst categories1418.
As further illustrated inFIG. 14, thenon-volatile memory1410further stores notifications1434. Theserver application1412 may configure theprocessor1406 to generate anotification1434, which may indicate that the second criminal statistics have been created and/or are ready for review by the user. For example, after creating the second criminal statistics, theprocessor1406 of thebackend server1122 may generate thenotification1434 indicating that the second criminal statistics are ready for the user of thesecond client device1128 to review. Additionally, theprocessor1406 of thebackend server1122 may transmit, using thecommunication module1402, thenotification1434 to thesecond client device1128. After sending thenotification1434, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402, a message from thesecond client device1128 that the user wishes to receive the second criminal statistics, which may also be indicated by astatistics request1420. In response, theprocessor1406 of thebackend server1122 may transmit, using thecommunication module1402, the secondcriminal data1432 representing the second criminal statistics to thesecond client device1128.
In some examples, such as when thestatistics request1420 indicates the frequency1426 (which may then be associated with auser account1430 of the user) for receiving the second criminal statistics, theserver application1412 may configure theprocessor1406 to generate the second criminal statistics for the user based on thefrequency1426. For example, if thefrequency1426 indicates that the user wishes to receive the second criminal statistics every week, then theserver application1412 may configure theprocessor1406 to generate the second criminal statistics each week using new firstcriminal data1416,new image data1224, and/ornew information1246 describing thenew image data1224, using the processes described above. Theserver application1412 may then configure theprocessor1406 to transmit, using thecommunication module1402, and to thesecond client device1128, thenotification1434 indicating that the second criminal statistics are ready for viewing and/or the secondcriminal data1432 representing the second criminal statistics. As such, the user associated with thesecond client device1128 may continue to receive updated criminal statistics from thebackend server1122 at time intervals that are based on thefrequency1426.
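The frequency-driven regeneration can be sketched as a scheduling check over stored user accounts. The frequency strings and account fields are illustrative assumptions.

```python
from datetime import datetime, timedelta

def next_run(last_run, frequency):
    """Compute when the next recap should be generated.

    `frequency` is a hypothetical string setting stored with the user account.
    """
    intervals = {"daily": timedelta(days=1),
                 "weekly": timedelta(weeks=1),
                 "monthly": timedelta(days=30)}
    return last_run + intervals[frequency]

def due_requests(accounts, now):
    """Return the accounts whose recurring statistics are due for regeneration."""
    return [a for a in accounts if next_run(a["last_run"], a["frequency"]) <= now]
```

A periodic server job could call `due_requests`, regenerate statistics for each returned account from fresh criminal data and image data, and then send the corresponding notification.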
FIG. 15 is a functional block diagram illustrating one embodiment of afirst client device1108,1110 according to various aspects of the present disclosure. Thefirst client device1108,1110 may comprise aprocessing module1502 that is operatively connected to aninput interface1504, microphone(s)1506, speaker(s)1508, and acommunication module1510. Thefirst client device1108,1110 may further comprise a camera (not shown) operatively connected to theprocessing module1502. Theprocessing module1502 may comprise aprocessor1512,volatile memory1514, andnon-volatile memory1516, which includes aclient application1518. In various embodiments, theclient application1518 may configure theprocessor1512 to receive input(s) to the input interface1504 (e.g., requests for access to the A/V recording and communication device1102), for example.
In addition, theclient application1518 may configure theprocessor1512 to receive, using thecommunication module1510, theinput data1228, theimage data1224, theaudio data1226, theoutput signal1236, theuser alerts1234, and/or thelocation data1238 from one or more of the A/V recording andcommunication device1102, thehub device1112, or thebackend server1122. Furthermore, theclient application1518 may configure theprocessor1512 to receive and/or retrieve, using thecommunication module1510, the notifications1434(1) and/or the second criminal data1432(1) from one or more of thehub device1112 or thebackend server1122. Furthermore, theclient application1518 may configure theprocessor1512 to transmit, using thecommunication module1510, theinput data1228, theimage data1224, theaudio data1226, theoutput signal1236, theuser alerts1234, thelocation data1238, theinformation1246, and/or the statistics requests1420(1) to one or more of thehub device1112 or thebackend server1122.
In some embodiments, thefirst client device1108,1110 may generate, using thecommunication module1510, theconsent data1314 using theimage data1224, theaudio data1226, and/or thetext data1232. Additionally, thedevice application1518 may configure theprocessor1512 to transmit, using thecommunication module1510, theconsent data1314 to one or more of thehub device1112 and thebackend server1122. As further described below, theconsent data1314 may be received by thebackend server1122 and, in response, at least theimage data1224 may be provided to members of a geographic area using client devices such as (but not limited to) thefirst client devices1108,1110 and thesecond client device1128.
With further reference toFIG. 15, theinput interface1504 may include adisplay1520. Thedisplay1520 may include a touchscreen, such that the user of thefirst client device1108,1110 may provide inputs directly to the display1520 (e.g., a request for access to the A/V recording and communication device1102). In some embodiments, thefirst client device1108,1110 may not include a touchscreen. In such embodiments, the user may provide an input using any input device, such as, without limitation, a mouse, a trackball, a touchpad, a joystick, a pointing stick, a stylus, etc.
In some examples, based at least in part on receiving auser alert1234, thedevice application1518 may configure theprocessor1512 to cause thedisplay1520 to display theuser alert1234. While displaying theuser alert1234, theuser interface1504 may receive input from the user to answer theuser alert1234. In response, thedevice application1518 may configure theprocessor1512 to display the receivedimage data1224 using thedisplay1520. Additionally, thedevice application1518 may configure theprocessor1512 to output audio represented by theaudio data1226 using thespeaker1508. Furthermore, thedevice application1518 may configure theprocessor1512 to receive, using theuser interface1504, input associated with theinformation1246 describing theimage data1224.
In some examples, thedevice application1518 may further configure theprocessor1512 to display a graphical user interface (GUI)1522 using thedisplay1520. TheGUI1522 may allow the user to request the second criminal statistics from thebackend server1122. For example, theprocessor1512 of thefirst client device1108,1110 may receive, using theinput interface1504, input indicating a geographic location. Based on receiving the input, thedevice application1518 may configure theprocessor1512 to display, using thedisplay1520, amap1524 corresponding to an initial geographic area, wherein the initial geographic area includes at least the inputted location. Next, theprocessor1512 of thefirst client device1108,1110 may receive, using theinput interface1504, one or more inputs indicating one or more of the geographic area1422(1) (which may correspond to a geographic area1422), the time period1424(1) (which may correspond to a time period1424), the frequency1426(1) (which may correspond to a frequency1426), and the categories1428(1) (which may correspond to categories1428). In response, thedevice application1518 may configure theprocessor1512 to generate a statistics request1420(1) (which may correspond to a statistics request1420), where the statistics request1420(1) includes the one or more of the geographic area1422(1), the time period1424(1), the frequency1426(1), and the categories1428(1). Thedevice application1518 may then configure theprocessor1512 to transmit, using thecommunication module1510, the statistics request1420(1) to the backend server1122 (which may be via the hub device1112).
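The statistics request assembled by the client application can be sketched as a serialized payload of the user's selections. The field names and JSON wire format are assumptions; the specification does not define how the request is encoded.

```python
import json

def build_statistics_request(area, period, frequency, categories):
    """Serialize the user's selections into a request payload.

    All field names are illustrative; the specification defines no wire format.
    """
    payload = {"area": area,            # geographic area drawn or selected on the map
               "time_period": period,   # requested reporting window
               "frequency": frequency,  # optional recurring-delivery setting
               "categories": sorted(categories)}  # requested crime categories
    return json.dumps(payload)
```

Any of the four fields could be omitted by the user, in which case the server would fall back to defaults as described earlier.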
In some examples, thedevice application1518 may configure theprocessor1512 to display notifications1434(1) (which may correspond to a notification1434) when thefirst client device1108,1110 receives the notifications1434(1) from thebackend server1122. When displaying the notifications1434(1), theprocessor1512 of thefirst client device1108,1110 may receive, using theinput interface1504, input indicating that the user would like to receive the second criminal statistics. In response, thedevice application1518 may configure theprocessor1512 to transmit, using thecommunication module1510, a request to thebackend server1122 for the second criminal statistics, which may also be represented by a statistics request1420(1). Based on transmitting the request, theprocessor1512 of thefirst client device1108,1110 may receive, from thebackend server1122, the second criminal data1432(1) representing the second criminal statistics.
In response to receiving the second criminal data, thedevice application1518 may configure theprocessor1512 to display at least a portion of the second criminal statistics using thedisplay1520. As shown in detail below, the at least the portion of the second criminal statistics may indicate a number of incidents (e.g., crimes, suspicious activity, etc.) per category1428(1), locations of one or more of the incidents (e.g., using the map1524), whether one or more of the incidents corresponds to image data (e.g., the image data1224) and/or information (e.g., information1246) describing the image data, and/or the like. In some examples, theprocessor1512 of thefirst client device1108,1110 may then receive input associated with the at least the portion of the second criminal statistics and perform an action in response. For a first example, theprocessor1512 of thefirst client device1108,1110 may receive input selecting one of the incidents and, in response, display additional information (e.g., a criminal report associated with the incident, image data of the incident, etc.) using thedisplay1520. For a second example, theprocessor1512 of thefirst client device1108,1110 may receive input selecting one of the incidents associated with theimage data1224 and, in response, display theimage data1224 using the display. For a third example, theprocessor1512 of thefirst client device1108,1110 may receive input that causes theprocessor1512 to display a second portion of the second criminal statistics using the display. In such an example, the input may correspond to the user scrolling through the second criminal statistics and/or the user selecting a new geographic area to display on theGUI1522.
FIG. 16 is a functional block diagram illustrating one embodiment of asecond client device1128 according to various aspects of the present disclosure. Thesecond client device1128 may comprise aprocessing module1602 that is operatively connected to aninput interface1604, microphone(s)1606, speaker(s)1608, and acommunication module1610. Thesecond client device1128 may further comprise a camera (not shown) operatively connected to theprocessing module1602. Theprocessing module1602 may comprise aprocessor1612,volatile memory1614, andnon-volatile memory1616, which includes adevice application1618.
With further reference toFIG. 16, theinput interface1604 may include adisplay1620. Thedisplay1620 may include a touchscreen, such that the user of thesecond client device1128 may provide inputs directly to the display1620 (e.g., a request for access to the A/V recording and communication device1102). In some embodiments, thesecond client device1128 may not include a touchscreen. In such embodiments, the user may provide an input using any input device, such as, without limitation, a mouse, a trackball, a touchpad, a joystick, a pointing stick, a stylus, etc.
In some examples, thedevice application1618 may be similar to thedevice application1518 and may configure theprocessor1612 to perform similar processes as theprocessor1512 described above. For example, thedevice application1618 may configure theprocessor1612 to display aGUI1622 using thedisplay1620, where theGUI1622 is configured to request the second criminal statistics (which may differ (e.g., based on location information) from the second criminal statistics requested/received by thefirst client device1108,1110) from thebackend server1122. For example, theprocessor1612 of thesecond client device1128 may receive, using theinput interface1604, input indicating a location. Based on receiving the input, thedevice application1618 may configure theprocessor1612 to display amap1624 on the GUI, where themap1624 includes an initial geographic area that includes at least the location. While displaying themap1624, theprocessor1612 of thesecond client device1128 may receive one or more inputs indicating one or more of a geographic area1422(2) (which may correspond to a geographic area1422), a time period1424(2) (which may correspond to a time period1424), a frequency1426(2) (which may correspond to a frequency1426), and/or categories1428(2) (which may correspond to categories1428). In response, thedevice application1618 may configure theprocessor1612 to generate a statistics request1420(2) (which may correspond to a statistics request1420) that includes the one or more of the geographic area1422(2), the time period1424(2), the frequency1426(2), and/or the categories1428(2). Thedevice application1618 may then configure theprocessor1612 to transmit, using thecommunication module1610, the statistics request1420(2) to thebackend server1122.
Additionally, thedevice application1618 may configure theprocessor1612 to receive, using thecommunication module1610, a notification1434(2) (which may correspond to a notification1434) that the second criminal statistics are ready for viewing. Theprocessor1612 of thesecond client device1128 may then receive input associated with viewing the second criminal statistics and, in response, transmit another statistics request1420(2) to thebackend server1122 using thecommunication module1610. After transmitting the second statistics request1420(2), thedevice application1618 may configure theprocessor1612 to receive, using thecommunication module1610, second criminal data1432(2) representing the second criminal statistics. Thedevice application1618 may then configure theprocessor1612 to display at least a portion of the second criminal statistics, using similar processes as described above for thefirst client device1108,1110. Additionally, while displaying the at least the portion of the second criminal statistics, thedevice application1618 may configure theprocessor1612 to perform one or more actions (similar to the actions above) in response to receiving input using the input interface.
In some embodiments, the geographic area1422(1), the time period1424(1), the frequency1426(1), and the categories1428(1) included in the statistics request1420(1) may be similar to the geographic area1422(2), the time period1424(2), the frequency1426(2), and the categories1428(2) included in the statistics request1420(2). In some embodiments, one or more of the geographic area1422(1), the time period1424(1), the frequency1426(1), and the categories1428(1) included in the statistics request1420(1) may be different than the geographic area1422(2), the time period1424(2), the frequency1426(2), and the categories1428(2) included in the statistics request1420(2). In such embodiments, a first user associated with thefirst client device1108,1110 may request second criminal statistics that differ from a second user of thesecond client device1128, even if an address of the first user is located proximate (e.g., within 100 feet, 100 yards, a block, a mile, 20 miles, etc.) to an address of the second user. In other words, each user is capable of tailoring the second criminal statistics based on preferences of the respective user. For example, each user may be able to define a geographic area for which the user desires to receive the second criminal statistics, such as by defining a radius, by dragging and dropping markers to create an amorphous area, and/or by other methods.
In the illustrated embodiment ofFIGS. 12-16, the various components including (but not limited to) theprocessing modules1202,1302,1404,1502,1602 and thecommunication modules1212,1304,1402,1510,1610 are represented by separate boxes. The graphical representations depicted in each ofFIGS. 12-16 are, however, merely examples, and are not intended to indicate that any of the various components of the A/V recording andcommunication device1102, thehub device1112, thebackend server1122, thefirst client device1108,1110, and thesecond client device1128 are necessarily physically separate from one another, although in some embodiments they might be. In other embodiments, however, the structure and/or functionality of any or all of the components of the A/V recording andcommunication device1102 may be combined. In addition, in some embodiments thecommunication module1212 may include its own processor, volatile memory, and/or non-volatile memory. Likewise, the structure and/or functionality of any or all of the components of thehub device1112 may be combined. In addition, in some embodiments thecommunication module1304 may include its own processor, volatile memory, and/or non-volatile memory. Moreover, the structure and/or functionality of any or all of the components of thebackend server1122 may be combined. In addition, in some embodiments thecommunication module1402 may include its own processor, volatile memory, and/or non-volatile memory. Furthermore, the structure and/or functionality of any or all of the components of thefirst client device1108,1110 may be combined. In addition, in some embodiments thecommunication module1510 may include its own processor, volatile memory, and/or non-volatile memory. Finally, the structure and/or functionality of any or all of the components of thesecond client device1128 may be combined. In addition, in some embodiments thecommunication module1610 may include its own processor, volatile memory, and/or non-volatile memory.
FIG. 17 is a screenshot of amap1702 illustrating a plurality of geographic areas according to an aspect of the present disclosure. In many embodiments, each of the plurality of geographic areas may include a subset of the users of the network of users corresponding to a geographical area. In some embodiments, an individual geographic area of the plurality of geographic areas may include a grouping of A/V recording and communication devices that are located in an area that may be defined using various methods. For example, the individual geographic area may be associated with a town, city, state, neighborhood, or country. In a further example, the individual geographic area may be determined by thebackend server1122 based on grouping a particular number of A/V recording and communication devices about a particular vicinity. In a further example, a user may customize a geographic area (e.g., geographic area1422). In some embodiments, various A/V recording and communication devices may be grouped into only one geographic area or more than one geographic area. In other embodiments, various A/V recording and communication devices may not be grouped into any geographic area.
In reference toFIG. 17, themap1702 illustrates a firstgeographic area1704 identified as Santa Monica, Calif. In many embodiments, the firstgeographic area1704 may include one or more A/V recording and communication devices, such as (but not limited to) the A/V recording andcommunication device1102 and a second A/V recording andcommunication device1706. As described above, the A/V recording andcommunication device1102 may be associated with thefirst client device1108 and configured to share video footage (e.g., the image data1224) captured by the A/V recording andcommunication device1102. Likewise, the second A/V recording andcommunication device1706 may be associated with another client device (not shown) and configured to share video footage (e.g., the second image data) captured by the second A/V recording andcommunication device1706. Further, themap1702 illustrates a secondgeographic area1708. In many embodiments, the secondgeographic area1708 may include one or more A/V recording and communication devices, including the A/V recording andcommunication device1102, a third A/V recording andcommunication device1708, and a fourth A/V recording andcommunication device1710.
In some embodiments, a first user may use thefirst client device1108 to customize the first geographic area1704 (which may correspond to the geographic area1422(1) in some examples) and a second user may use thesecond client device1128 to customize the second geographic area1708 (which may correspond to the geographic area1422(2) in some examples). For example, thebackend server1122 may create criminal statistics associated with the firstgeographic area1704 for the first user. The criminal statistics may include data corresponding to one or more suspicious activities and/or crimes depicted by image data generated by the A/V recording andcommunication device1102 and the second A/V recording andcommunication device1706. Additionally, thebackend server1122 may create criminal statistics associated with the secondgeographic area1708 for the second user. The criminal statistics may include data corresponding to one or more suspicious activities depicted by image data generated by the A/V recording andcommunication device1102, the third A/V recording andcommunication device1708, and the fourth A/V recording andcommunication device1710.
FIG. 18 illustrates one example of a graphical user interface (GUI)1802 (which may correspond to theGUI1522 and/or the GUI1622) associated with a process for requesting criminal statistics associated with a geographic area. For example, a requesting party may login through a user portal at a website (or using a mobile application) using a client device, which may include thesecond client device1128 in the example ofFIG. 18 (however, it could also include thefirst client device1108 performing similar processes in other examples). Using theGUI1802, the requester may enter an address that identifies a location around which the requester wishes to receive the criminal statistics. TheGUI1802 then displays a map1804 (which may correspond to the map1624) of the geographic area around the address. An icon of afirst type1806 may indicate the location of the entered address on themap1804. Themap1804 may further display one or more additional icons of a second type1808 (although only one is labeled) that indicate the locations of A/V recording and communication devices.
Using themap1804, the requester may then specify anarea1810 of interest (from which the requester wishes to gather the second criminal statistics) by indicating thearea1810 on themap1804. The requester may specify thearea1810 from which the second criminal statistics will be created in any of a variety of ways. For example, the requester may draw a polygon of any shape and size on themap1804 of theGUI1802 by specifying locations of vertices of the polygon, such as by touching, dragging, and/or dropping the locations of the vertices1822(1)-1822(5), if thesecond client device1128 includes a touchscreen, or by using a pointing device, such as a mouse or a trackball, and an onscreen pointer to specify the locations of the vertices1822(1)-1822(5). In certain embodiments, the polygon specified by the requester may not have any vertices1822(1)-1822(5), such as a circle. The requester in such an embodiment may specify an area of interest by providing a radius around the address that the requester has entered.
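Testing whether an incident falls inside a requester-drawn polygon is a standard point-in-polygon problem; a common approach is the ray-casting algorithm, sketched below. This is an illustrative method, not necessarily the one used by the described system, and it treats map coordinates as planar (x, y) pairs.

```python
def point_in_polygon(point, vertices):
    """Ray-casting test: is `point` inside the polygon drawn by the requester?

    `vertices` is the ordered list of (x, y) corners the user dropped on the map.
    """
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        # Does a horizontal ray cast rightward from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # odd number of crossings means inside
    return inside
```

For the circular (radius-based) variant mentioned above, the test reduces to comparing the distance from the entered address to the incident against the chosen radius.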
In some embodiments, thearea1810 of interest may be thesame area1810 of interest that the user defined when joining a network of users associated with the user's geographic area. For example, the user may have joined a “neighborhoods network”, a “local community watch network”, or another type of network that is associated with the user's geographic location. When joining, the user may have defined thearea1810 that the user is interested in receiving shared information from other users of the network, such as data generated by A/V recording and communication devices located within thearea1810 and/or data generated by security systems within thearea1810. As such, thebackend server1122, with or without user input, may utilize the previously definedarea1810 as thearea1810 for generating the second criminal statistics. In examples where user input is requested, the user may be able to select a “use existing area” button, or input “use existing area” within the first text box and/ordropdown menu1812.
With further reference toFIG. 18, theGUI1802 may also include a first text box and/ordropdown menu1812 that enables the requester to identify the geographic area (e.g., the geographic area1422(2)), a second text box and/ordropdown menu1814 that enables the requester to identify the time period (e.g., the time period1424(2)), a third text box and/ordropdown menu1816 that enables the requester to identify the frequency (e.g., the frequency1426(2)), and a fourth text box and/ordropdown menu1818 that enables the requester to identify the categories (e.g., the categories1428(2)). Furthermore, theGUI1802 includes a button and/or othergraphical element1820 that enables the requester to cause thesecond client device1128 to transmit a statistics request (e.g., the statistics request1420(2)) for the second criminal statistics.
FIG. 19 is a screenshot of aGUI1902 illustrating an example of providing anotification1904 according to various aspects of the present disclosure. For example, the second client device1128 (and/or similarly the first client device1108) may display theGUI1902 to a user, such as when thesecond client device1128 receives anotification1434 from thebackend server1122 that the second criminal statistics are ready to review. As shown, theGUI1902 may provide thenotification1904 indicating that the second criminal statistics are ready for review. For example, thenotification1904 may include, but is not limited to, a message reciting "Your weekly crime recap is now ready for review". In other examples, thenotification1904 may include any other type of message, graphical indicator, and/or alert (e.g., a sound output by the second client device1128) that indicates that the second criminal statistics are available to review.
In some embodiments, thenotification1904 may be included in a feed within thedevice application1618 operating on thesecond client device1128. For example, the user may be viewing a feed that includes the shared data (e.g., image data, audio data, textual data, etc.) from other users in the user's geographic area (e.g., neighborhood), news stories, and/or other information, and within the feed, one of the items may be thenotification1904. Thenotification1904, in such an example, may include information such as “Your weekly crime recap for the week of January 8-January 14 is now ready for review, this week there were 18 assaults in your neighborhood.”
FIG. 20 is a screenshot of aGUI2002 illustrating an example of providing second criminal statistics that are organized according to categories according to various aspects of the present disclosure. For example, the second client device1128 (and/or similarly the first client device1108) may display theGUI2002 to a user, such as when thesecond client device1128 receives the second criminal statistics from thebackend server1122. As shown, theGUI2002 may include categories2004(1)-2004(5), which, in the example, include an all category2004(1), a vehicle category2004(2), an assault category2004(3), a breaking category2004(4), and a shooting category2004(5). TheGUI2002 may further include a number of incidents2006 (criminal reports, suspicious activity, information describing image data, etc.) for each category2004(1)-2004(5), although, for illustrative purposes, only the number ofincidents2006 is labeled for the all category2004(1).
Additionally, theGUI2002 may include amap2008 of a geographic area, wherein themap2008 includes icons2010(1)-2010(3) indicating the locations of the incidents (e.g., police reports, suspicious activities, image data captured by A/V recording and communication devices and shared by the associated users, etc.). For example, a first icon2010(1) indicates the location of a first incident that is included within the vehicle category2004(2), a second icon2010(2) indicates the location of a second incident that is included in the assault category2004(3), and a third icon2010(3) indicates the location of a third incident that is included in the shooting category2004(5). Furthermore, themap2008 may include anicon2012 that indicates a location of the user's address. As such, the user may use theGUI2002 to determine which incidents are occurring near the user's address, such as within a geographic area including the user's address.
FIG. 21 is a screenshot of aGUI2102 illustrating an example of providing second criminal statistics that are specific to one of the categories according to various aspects of the present disclosure. For example, the second client device1128 (and/or similarly the first client device1108) may display theGUI2102 to a user, such as when thesecond client device1128 receives input selecting a category (e.g., the assault category2004(3) from theexample GUI2002 fromFIG. 20). As shown, theGUI2102 may include details for incidents2104(1)-2104(3) that are included in the selected category. For example, each of the details for the incidents2104(1)-2104(3) includes the type of incident, the location of the incident, and details about what occurred during the incident. In some embodiments, the incidents2104(1)-2104(3) may include a link (e.g., a displayed link, or a link that is activated when the incident is selected) to associated data related to the incident, such as image data, audio data, and/or other data. In some examples, thesecond client device1128 may receive input to scroll through details of additional incidents.
TheGUI2102 may further include a map2106 (which may be similar to themap2008 from theexample GUI2002 ofFIG. 20) of a geographic area, where themap2106 includes icons2108(1)-2108(3) indicating the locations of the incidents2104(1)-2104(3). In some examples, and as shown in the example ofFIG. 21, when an incident2104(1) is selected by a user, theGUI2102 causes the icon2108(1) corresponding to that incident to display differently than the icons2108(2)-2108(3) corresponding to the other incidents2104(2)-2104(3). Furthermore, themap2106 may include anicon2110 that indicates a location of the user's address. As such, the user may use theGUI2102 to determine which incidents included in the selected category occurred near the user's address.
FIG. 22 is a screenshot of aGUI2202 illustrating an example of providing options for sharing the second criminal statistics according to various aspects of the present disclosure. For example, the second client device1128 (and/or similarly the first client device1108) may display theGUI2202 to a user, such as when thesecond client device1128 receives input for sharing the second criminal statistics. As shown, theGUI2202 may include different options2204(1)-2204(3) for how to share the second criminal statistics. For example, and without limitation, the first option2204(1) may include transmitting data representing the second criminal statistics using a short message service (SMS), the second option2204(2) may include sharing the second criminal statistics using FACEBOOK® (or other social media network), and the third option2204(3) may include sharing the second criminal statistics using TWITTER® (or other social media network). TheGUI2202 may further include abutton2206 that the user may select for additional options for how the user may share the second criminal statistics (e.g., email).
FIG. 23 is a screenshot of aGUI2302 illustrating an example of transmitting the second criminal statistics to another user according to various aspects of the present disclosure. For example, the second client device1128 (and/or similarly the first client device1108) may display theGUI2302 to a user, such as when thesecond client device1128 receives input for sharing the second criminal statistics (e.g., receives input selecting the first option2204(1) from theexample GUI2202 ofFIG. 22). As shown, theGUI2302 includes a message2304 (e.g., an auto-populated message) that thesecond client device1128 and/or thebackend server1122, in response to the input to send themessage2304, may transmit to a client device of another user (e.g., the first client device1108). Themessage2304 may include the second criminal statistics and a request that the other user download the application (e.g., thedevice application1518, thedevice application1618, etc.) for creating criminal statistics. In other embodiments, the message may include a request to download the application in order to review the criminal statistics and/or to create a personalized criminal statistics report for the new user (e.g., the user that receives the message2304).
Each of the processes described herein, including theprocesses2400,2500,2600,2700, and2800, are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that may be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order and/or in parallel to implement the processes. Additionally, any number of the described blocks may be optional and omitted to implement the processes.
FIG. 24 is a flowchart illustrating anexample process2400 for integrating A/V recording and communication device data into criminal statistics according to various aspects of the present disclosure. Theprocess2400, at block B2402, receives, from a third-party server, first data representing first criminal statistics for a first geographic area. For example, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402, and from one or more third-party services1130, the firstcriminal data1416 representing the first criminal statistics. In some examples, the first criminal statistics represented by the firstcriminal data1416 may correspond to a geographic area, such as a city, state, or country. In addition, each incident/report from the firstcriminal data1416 may be associated with a location, such as an address, an intersection, GPS coordinates, or the like. In some embodiments, theprocessor1406 of thebackend server1122 may receive the firstcriminal data1416 continuously from the one or more third-party services1130 using thecommunication module1402. Additionally, or alternatively, in some embodiments, theprocessor1406 of thebackend server1122 may receive the firstcriminal data1416 from the one or more third-party services1130 using thecommunication module1402 at given time intervals and/or in response to statistics requests1420.
Theprocess2400, at block B2404, stores the first data in one or more databases. For example, based on receiving the firstcriminal data1416, theprocessor1406 of thebackend server1122 may store the firstcriminal data1416 in one or more databases, such as thestorage devices1120.
Theprocess2400, at block B2406, receives image data generated by an audio/video (A/V) recording and communication device. For example, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402, theimage data1224 generated by the A/V recording andcommunication device1102. In some examples, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402, theimage data1224 from the A/V recording andcommunication device1102 and/or thehub device1112. In some examples, theprocessor1406 of thebackend server1122 may receive theimage data1224 based on the A/V recording andcommunication device1102 detecting motion in its field of view. For instance, theimage data1224 may depict an activity that occurs in a field of view of thecamera1204 of the A/V recording andcommunication device1102.
Theprocess2400, at block B2408, receives consent for sharing the image data, the consent including at least information describing a suspicious activity depicted by the image data. For example, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402,consent data1314 from theclient device1108,1110 associated with the A/V recording andcommunication device1102 and/or thehub device1112. Theconsent data1314 may indicate consent for sharing theimage data1224 with other users that are members of one or more geographic areas, such as thegeographic area1704 and/orgeographic area1708. Theconsent data1314 may further includeinformation1246 that indicates that theimage data1224 depicts a suspicious activity, such as criminal behavior. In some examples, theinformation1246 may include a comment created by a user of thefirst client device1108,1110. In some examples, theinformation1246 may include a tag selected by the user, where the tag indicates the suspicious activity. In some examples, theinformation1246 may be generated by the A/V recording andcommunication device1102 and/or thehub device1112, which may describe the suspicious behavior (e.g., activity) depicted by theimage data1224.
Theprocess2400, at block B2410, stores the image data and the information in the one or more databases. For example, based on receiving theimage data1224 and theinformation1246, theprocessor1406 of thebackend server1122 may store theimage data1224 and theinformation1246 in the one or more databases, such as the storage device(s)1120.
Theprocess2400, at block B2412, receives a criminal statistics request for second criminal statistics, the criminal statistics request indicating a second geographic area that includes a portion of the first geographic area. For example, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402, astatistics request1420 from thefirst client device1108,1110 or thesecond client device1128, where thestatistics request1420 indicates the secondgeographic area1422 that includes a portion of the first geographic area. The secondgeographic area1422 may correspond to an area in which a user wishes to receive the second criminal statistics. In some examples, the statistics request1420 may further indicate thetime period1424 for the second criminal statistics, thefrequency1426 for which to create the second criminal statistics, and/or thecategories1428 for organizing the second criminal statistics. In some examples, theprocessor1406 of thebackend server1122 may store the statistics request1420 (e.g., the secondgeographic area1422, thetime period1424, thefrequency1426, and/or the categories1428) in association with auser account1430 associated with the user.
Theprocess2400, at block B2414, identifies a portion of the first criminal statistics that occurred within the second geographic area. For example, theprocessor1406 of thebackend server1122 may analyze the firstcriminal data1416 to identify a portion of the first criminal statistics that occurred in the secondgeographic area1422. The portion of the first criminal statistics may correspond to criminal reports describing crimes and/or other suspicious activities that occurred within the secondgeographic area1422. In some examples, theprocessor1406 of thebackend server1122 may further analyze thecriminal data1416 corresponding to the portion of the first criminal statistics to identify an additional portion of the first criminal statistics that corresponds to thetime period1424 and/or thecategories1428.
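The geographic filtering described for block B2414 can be sketched in a few lines. The following Python is illustrative only; the disclosure does not specify an implementation, so the record fields, the sample coordinates, and the choice of the haversine formula for comparing GPS coordinates are all assumptions:

```python
import math

EARTH_RADIUS_MI = 3958.8  # mean Earth radius in miles

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two GPS coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def incidents_within_area(incidents, center, radius_miles):
    """Keep only the incidents whose coordinates fall inside the requested area."""
    return [i for i in incidents
            if haversine_miles(i["lat"], i["lon"], center[0], center[1]) <= radius_miles]

# City-wide first criminal statistics; only some fall within one mile of the
# requesting user's address (hypothetical sample data).
city_incidents = [
    {"id": 1, "lat": 32.7157, "lon": -117.1611, "category": "assault"},
    {"id": 2, "lat": 32.7160, "lon": -117.1600, "category": "vehicle"},
    {"id": 3, "lat": 33.0000, "lon": -117.0000, "category": "shooting"},
]
nearby = incidents_within_area(city_incidents, (32.7157, -117.1611), 1.0)
```

A production system could equally use a geospatial index rather than scanning every incident; the linear scan here is only to show the selection criterion.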
Theprocess2400, at block B2416, determines that a location of the A/V recording and communication device is within the second geographic area. For example, theprocessor1406 of thebackend server1122 may utilize thelocation data1238 to determine the location of the A/V recording andcommunication device1102. Theprocessor1406 of thebackend server1122 may then determine that the location of the A/V recording andcommunication device1102 is within the secondgeographic area1422. In some examples, theprocessor1406 of thebackend server1122 determines that the location of the A/V recording andcommunication device1102 is within the secondgeographic area1422 in response to receiving thestatistics request1420.
Theprocess2400, at block B2418, creates the second criminal statistics using at least the portion of the first criminal statistics that occurred within the second geographic area and the information. For example, theprocessor1406 of thebackend server1122 may create the second criminal statistics using the portion of the firstcriminal data1416 that corresponds to the portion of the first criminal statistics that are associated with the secondgeographic area1422 and theinformation1246 describing theimage data1224. In some examples, theprocessor1406 of thebackend server1122 may create the second criminal statistics by curating the portion of the firstcriminal data1416 that corresponds to the portion of the first criminal statistics that are associated with the secondgeographic area1422 and theinformation1246 describing theimage data1224. In some examples, theprocessor1406 of thebackend server1122 creates the second criminal statistics by further organizing the second criminal statistics into thecategories1428.
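The curation at block B2418 amounts to merging two sources (criminal reports and device-derived information) into one categorized structure. This Python sketch is a simplified illustration, not the disclosed implementation; the record layout ("source", "category") and the "all" bucket are assumptions:

```python
def create_second_statistics(police_reports, device_reports, categories):
    """Curate both data sources into one statistics structure keyed by category."""
    stats = {c: [] for c in categories}
    for report in police_reports + device_reports:
        if report.get("category") in stats:
            stats[report["category"]].append(report)
        if "all" in stats:
            stats["all"].append(report)  # the "all" category sees every report
    return stats

# Hypothetical inputs: one criminal report and one item of information
# describing image data shared by a device owner.
police = [{"source": "police report", "category": "assault"}]
device = [{"source": "device information", "category": "vehicle"}]
second_stats = create_second_statistics(police, device, ["all", "vehicle", "assault"])
```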
Theprocess2400, at block B2420, transmits second criminal data representing the second criminal statistics. For example, theprocessor1406 of thebackend server1122 may transmit, using thecommunication module1402, the secondcriminal data1432 representing the second criminal statistics to thefirst client device1108,1110 or the second client device1128 (e.g., whichever client device transmitted the statistics request1420). In response, thefirst client device1108,1110 or thesecond client device1128 may receive the secondcriminal data1432 and display the second criminal statistics to the user.
Theprocess2400 ofFIG. 24 may be implemented in a variety of embodiments, including those discussed above. However, the below-detailed embodiments are not intended to be limiting, and are provided merely as example embodiments of the present disclosure. Other embodiments similar to those outlined herein may also fall within the scope of the present disclosure.
For example, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402 and from a third-party service1130, firstcriminal data1416 representing first criminal statistics for the city of San Diego (e.g., block B2402). Theprocessor1406 of thebackend server1122 may then store the firstcriminal data1416 in the storage databases1120 (e.g., block B2404). Next, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402,image data1224 generated by the A/V recording andcommunication device1102, where theimage data1224 depicts a larceny (e.g., block B2406). After receiving theimage data1224, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402 and from thefirst client device1108,consent data1314 that includesinformation1246 describing the image data1224 (e.g., block B2408). For example, theinformation1246 may indicate that theimage data1224 depicts a person stealing a package off of a porch. Theprocessor1406 of thebackend server1122 may then store theimage data1224 and theinformation1246 in the storage databases1120 (e.g., block B2410).
Theprocessor1406 of thebackend server1122 may then receive, using thecommunication module1402 and from thesecond client device1128, acriminal statistics request1420 that indicates ageographic area1422, where thegeographic area1422 includes a mile radius around a user's address within San Diego (e.g., block B2412). Based on receiving thecriminal statistics request1420, theprocessor1406 of thebackend server1122 may analyze the first criminal statistics for the city of San Diego to identify a portion of the first criminal statistics that corresponds to the mile radius around the user's address (e.g., block B2414). Additionally, theprocessor1406 of thebackend server1122 may analyze thelocation data1238 for the A/V recording andcommunication device1102 to determine that the A/V recording andcommunication device1102 is located within the mile radius around the user's address (e.g., block B2416). Next, theprocessor1406 of thebackend server1122 may create second criminal statistics that are associated with the mile radius around the user's address using the identified portion of the first criminal statistics and the information1246 (e.g., block B2418). After creating the second criminal statistics, theprocessor1406 of thebackend server1122 may transmit, using thecommunication module1402, secondcriminal data1432 representing the second criminal statistics to the second client device1128 (e.g., block B2420).
FIG. 25 is a flowchart illustrating anexample process2500 for utilizing first criminal statistics to create second criminal statistics for a user according to various aspects of the present disclosure. Theprocess2500, at block B2502, receives, from a third-party server, first data representing first criminal statistics that occurred in a first geographic area, the first criminal statistics being organized using one or more first categories. For example, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402, and from one or more third-party services1130, the firstcriminal data1416 representing the first criminal statistics, the first criminal statistics being organized intofirst categories1418. In some examples, the first criminal statistics represented by the firstcriminal data1416 may correspond to a geographic area, such as a city, state, or country. In some examples, thefirst categories1418 may correspond to categories of criminal behavior (e.g., categories of crimes). For example, thefirst categories1418 may include, but are not limited to, one or more of vehicle crimes (e.g., burglary, theft, etc.), assaults, break-ins (e.g., burglary to a property, vehicle, etc.), shootings, murder, homicides, kidnapping, sex crimes, robberies, fraud, arson, embezzlement, forgery, solicitation, parcel theft, stabbing, suspicious package, armed robberies, theft, breaking and entering, vandalism, and/or any types of attempted crime and/or suspicious activity.
Theprocess2500, at block B2504, stores the first data in one or more databases. For example, based on receiving the firstcriminal data1416, theprocessor1406 of thebackend server1122 may store the firstcriminal data1416 in one or more databases, such as thestorage devices1120.
Theprocess2500, at block B2504, may proceed to block B2412 of theexample process2400 ofFIG. 24.
Theprocess2500, at block B2506, may proceed from block B2414 of theexample process2400 ofFIG. 24.
Theprocess2500, at block B2508, determines one or more second categories for organizing second criminal statistics. For example, theprocessor1406 of thebackend server1122 may determine the one or more second categories for organizing the second criminal statistics. In some examples, theprocessor1406 of thebackend server1122 may determine the one or more second categories based on thecategories1428 from thestatistics request1420. In some examples, theprocessor1406 of thebackend server1122 may determine the one or more second categories based on using default categories that thebackend server1122 used when generating criminal statistics. For example, theprocessor1406 of thebackend server1122 may use the default categories when thestatistics request1420 does not include thecategories1428. Still, in some examples, theprocessor1406 of thebackend server1122 may determine the one or more second categories based on the one or more second categories (e.g., categories1428) being associated with auser account1430 of a user receiving the second criminal statistics.
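The precedence described above for block B2508 (request categories first, then user-account categories, then defaults) can be sketched as follows. This is an illustrative assumption about the data shapes, not the disclosed implementation:

```python
# Hypothetical default list used when neither the request nor the user
# account supplies categories.
DEFAULT_CATEGORIES = ["vehicle", "assault", "breaking", "shooting"]

def resolve_categories(statistics_request, user_account):
    """Use the request's categories, else the user account's, else defaults."""
    if statistics_request.get("categories"):
        return statistics_request["categories"]
    if user_account.get("categories"):
        return user_account["categories"]
    return DEFAULT_CATEGORIES
```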
In some examples, the one or more second categories (e.g., the categories1428) may correspond to categories of criminal behavior (e.g., categories of crimes). For example, the second categories may include, but are not limited to, one or more of vehicle crimes (e.g., burglary, theft, etc.), assaults, break-ins (e.g., burglary to a property, vehicle, etc.), shootings, murder, homicides, kidnapping, sex crimes, robberies, fraud, arson, embezzlement, forgery, solicitation, parcel theft, stabbing, suspicious package, armed robberies, theft, breaking and entering, vandalism, and/or any types of attempted crime and/or suspicious activity. In some examples, thefirst categories1418 may be similar to the one or more second categories. In some examples, one or more of thefirst categories1418 may be different than one or more of the second categories.
For example, thefirst categories1418 may include each of the types of crimes and/or suspicious activity listed above. However, because the list may be extensive and not easily digestible, thebackend server1122 may utilize an algorithm to sort the first categories into the second categories. In such an example, the second categories may only include twelve categories, where the twelve categories may be vehicle crimes, armed robbery, theft, breaking and entering, arson, vandalism, assault, homicide, sex crimes, shootings, stabbings, and suspicious packages. As such, the reports and/or other data/information that is included in the first category of kidnapping (which, in this example, is not included in the second categories) may be analyzed to determine which of the second categories the report and/or other data/information should be included in. In some examples, every kidnapping report and/or other data/information may automatically be included in the second category of assault and/or in the second category of breaking and entering. However, in other examples, each report and/or data/information may be separately analyzed (e.g., by analyzing the text associated with a report, or text associated with image data, and/or by using computer vision processing, image processing, audio processing, and/or other processing mechanisms to determine characteristics, features, patterns, and/or sequences of the data/information associated with the report and/or data/information pertaining to the crime and/or suspicious activity). For example, if a report and/or data/information included in the first category of kidnapping includes associated text such as "the victim was forcibly removed from their house and thrown into the back of a vehicle," then the report and/or data/information may be associated with the second categories of assault and/or breaking and entering.
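The re-sorting step above can be sketched with a simple keyword table. The following Python is a minimal sketch under stated assumptions: the keyword-to-category table, the report fields, and the direct-match shortcut are all hypothetical (the disclosure contemplates richer analysis such as computer vision and audio processing):

```python
# Hypothetical keyword table mapping report text to second categories.
KEYWORD_TO_SECOND = {
    "thrown into the back of a vehicle": "assault",
    "removed from their house": "breaking and entering",
}

def resort_report(report, second_categories, keyword_map=KEYWORD_TO_SECOND):
    """Return a second category for a first-category report, or None to filter it."""
    if report["category"] in second_categories:
        return report["category"]  # the first category maps directly
    text = report.get("text", "").lower()
    for keyword, category in keyword_map.items():
        if keyword in text:
            return category
    return None  # no second category applies; the report may be filtered out

second = {"assault", "breaking and entering", "vehicle crimes"}
kidnapping = {"category": "kidnapping",
              "text": "The victim was forcibly removed from their house and "
                      "thrown into the back of a vehicle"}
```

Note the `None` return: a kidnapping report whose text matches no keyword is simply dropped from the second statistics, mirroring the filtering behavior described in the surrounding passages.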
In some examples, the report and/or data/information of a crime and/or suspicious activity included in the first category of kidnapping may not be included in any of the second categories. For example, if the textual information associated with the report and/or data/information from the first category of kidnapping includes language such as "the victim is suspected to have been kidnapped," then the report and/or data/information may not relate to any of the second categories, and as a result, may not be included in any of the second categories. In some examples, all of the reports and/or data included in one or more of the first categories may not be analyzed at all, and may be filtered out.
Theprocess2500, at block B2512, creates the second criminal statistics using at least a portion of the first criminal statistics that occurred in a second geographic area, the creating based on the one or more second categories. For example, theprocessor1406 of thebackend server1122 may create the second criminal statistics using at least a portion of the first criminal statistics that occurred in a second geographic area, such as the secondgeographic area1422. When creating the second criminal statistics, theprocessor1406 of thebackend server1122 may organize the second criminal statistics into the one or more second categories. For example, theprocessor1406 of thebackend server1122 may place each incident (e.g., police report, suspicious activity, information describing image data) represented by the second criminal statistics into a category of the one or more second categories.
In some examples, when one or more of thefirst categories1418 are similar to one or more of the second categories, theprocessor1406 of thebackend server1122 may include incidents (e.g., police reports, suspicious activities, etc.) from the one or morefirst categories1418 into the corresponding one or more similar second categories. For example, if the one or morefirst categories1418 includes a burglary category, and the one or more second categories also includes a burglary category, then theprocessor1406 of thebackend server1122 may place each incident included in the burglary category from the one or morefirst categories1418 into the burglary category from the one or more second categories. For another example, if the one or morefirst categories1418 includes a vehicle crimes category, and the one or more second categories also includes a vehicle crimes category, then theprocessor1406 of thebackend server1122 may place each incident included in the vehicle crimes category from the one or morefirst categories1418 into the vehicle crimes category from the one or more second categories.
In some examples, when one or more of thefirst categories1418 differ from one or more of the second categories, theprocessor1406 of thebackend server1122 may analyze each of the incidents from the portion of the first criminal statistics that are included in afirst category1418 that does not have a corresponding second category. Based on the analysis, theprocessor1406 of thebackend server1122 may determine which category of the one or more second categories the respective incident should be included in. For example, theprocessor1406 of thebackend server1122 may analyze one or more words described by each incident to identify key words that are associated with the one or more second categories. Based on identifying a key word that is associated with a category from the one or more second categories, theprocessor1406 of thebackend server1122 may include the respective incident in the category. For example, if an incident includes the word "murder" or the word "killing", then theprocessor1406 of thebackend server1122 may include the incident in the murder category.
In some examples, theprocessor1406 of thebackend server1122 may determine in which category of the one or more second categories theinformation1246 describing theimage data1224 should be placed by analyzing theinformation1246. For example, if theinformation1246 includes a tag associated with a category, then theprocessor1406 of thebackend server1122 may include theinformation1246 in that category. For instance, if an incident includes the tag "package" or the tag "stolen parcel", then theprocessor1406 of thebackend server1122 may include the incident in the parcel theft category. For another example, if theinformation1246 includes a comment, message, and/or other description associated with theimage data1224, then theprocessor1406 of thebackend server1122 may analyze one or more words included within theinformation1246 to identify key words that are associated with the one or more second categories. Based on identifying a key word that is associated with a category from the one or more second categories, theprocessor1406 of thebackend server1122 may include the respective incident in the category. For instance, if theinformation1246 includes the word "murder" or the word "killing", then theprocessor1406 of thebackend server1122 may include the incident in the murder category.
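The tag-first, then free-text approach above can be sketched as a two-stage lookup. This Python is illustrative only; the tag table and the field names ("tags", "comment") are assumptions:

```python
# Hypothetical table mapping user tags and key words to second categories.
TAG_TO_CATEGORY = {
    "package": "parcel theft",
    "stolen parcel": "parcel theft",
    "murder": "murder",
    "killing": "murder",
}

def categorize_information(information):
    """Place image-data information by user tag first, then by free-text key words."""
    for tag in information.get("tags", []):
        if tag in TAG_TO_CATEGORY:
            return TAG_TO_CATEGORY[tag]
    for word in information.get("comment", "").lower().split():
        if word in TAG_TO_CATEGORY:
            return TAG_TO_CATEGORY[word]
    return None  # no category applies
```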
In some examples, theprocessor1406 of thebackend server1122 may determine in which category of the one or more second categories theinformation1246 describing theimage data1224 should be placed based on historical information. For example, in response to users of the service that provides the criminal statistics and/or in response to administrators of the service classifying the incidents, reports, and/or image data into criminal statistic categories, thebackend server1122 may learn which types ofinformation1246 should be associated with each of the second categories. For example, if an administrator is classifying the first criminal statistics categories into second criminal statistic categories, the association between the two criminal statistic categories may be learned by thebackend server1122 and applied to future sorting and filtering of the first criminal statistic categories into the second criminal statistic categories. As another example, based on user responses to reports, crime data, and/or image data that is sorted into the second criminal statistic categories, thebackend server1122 may learn the characteristics, patterns, and/or sequences of events from the types of reports, crime data, and/or image data that is most desired and/or intriguing to the users (e.g., based on length of viewing, based on sharing, etc.). For example, if multiple users view image data in the shooting category and stop watching on average after three seconds of viewing, the characteristics, patterns, and/or sequences from the image data may be dissociated with the shooting category.
As another example, if multiple users view image data, read reports, and/or view crime data that is in the vehicle theft category, and share the image data, the reports, and/or the crime data with other users, especially if one or more users include textual information such as “Check out this car theft,” thebackend server1122 may learn to associate the characteristics, sequences, and/or patterns from the image data, the reports, and/or the crime data with the vehicle theft category.
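The historical learning described above can be approximated with simple co-occurrence counting: each human classification increments a feature-to-category tally, and new items are assigned the category most often seen with their features. This Python sketch is an assumption-laden simplification (real systems might use trained models over image/audio characteristics); the feature strings and category names are hypothetical:

```python
from collections import Counter, defaultdict

class CategoryAssociationLearner:
    """Learn feature-to-category associations from human classifications."""

    def __init__(self):
        # feature -> Counter of categories it has co-occurred with
        self._counts = defaultdict(Counter)

    def observe(self, features, category):
        """Record one human classification (e.g., by an administrator)."""
        for feature in features:
            self._counts[feature][category] += 1

    def predict(self, features):
        """Return the category most often associated with these features."""
        tally = Counter()
        for feature in features:
            tally.update(self._counts[feature])
        return tally.most_common(1)[0][0] if tally else None

learner = CategoryAssociationLearner()
learner.observe(["car", "broken window"], "vehicle theft")
learner.observe(["car", "towing"], "vehicle theft")
learner.observe(["gunshot"], "shooting")
```

Negative signals (e.g., users abandoning a video after three seconds) could analogously decrement or down-weight the corresponding counts.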
Theprocess2500, at block B2514, transmits second criminal data representing the second criminal statistics. For example, theprocessor1406 of thebackend server1122 may transmit, using thecommunication module1402, the secondcriminal data1432 representing the second criminal statistics to thefirst client device1108,1110 or the second client device1128 (e.g., whichever client device transmitted the statistics request1420). In response, thefirst client device1108,1110 or thesecond client device1128 may receive the secondcriminal data1432 and display the second criminal statistics to the user.
FIG. 26 is a flowchart illustrating anexample process2600 for integrating A/V recording and communication device data into criminal statistics according to various aspects of the present disclosure. Theprocess2600, at block B2602, receives, from a third-party server, first data representing first criminal statistics associated with a geographic area. For example, theprocessor1406 of thebackend server1122 may receive, using thecommunication module1402, and from one or more third-party services1130, the firstcriminal data1416 representing the first criminal statistics. In some examples, the first criminal statistics represented by the firstcriminal data1416 may correspond to a geographic area, such as a city, state, or country. In some embodiments, theprocessor1406 of thebackend server1122 may receive the firstcriminal data1416 continuously from the one or more third-party services1130 using thecommunication module1402. Additionally, or alternatively, in some embodiments, theprocessor1406 of thebackend server1122 may receive the firstcriminal data1416 from the one or more third-party services1130 using thecommunication module1402 at given time intervals.
The process 2600, at block B2604, receives image data generated by an audio/video (A/V) recording and communication device. For example, the processor 1406 of the backend server 1122 may receive, using the communication module 1402, the image data 1224 generated by the A/V recording and communication device 1102. In some examples, the processor 1406 of the backend server 1122 may receive, using the communication module 1402, the image data 1224 from the A/V recording and communication device 1102 and/or the hub device 1112. In some examples, the processor 1406 of the backend server 1122 receives the image data 1224 based on the A/V recording and communication device 1102 detecting motion. For instance, the image data 1224 may depict an activity that occurs in a field of view of the camera 1204 of the A/V recording and communication device 1102.
The process 2600, at block B2606, obtains information describing the image data. For example, the processor 1406 of the backend server 1122 may obtain the information 1246 describing the image data 1224. In some examples, the processor 1406 of the backend server 1122 may obtain the information 1246 by receiving, using the communication module 1402, the information 1246 from the first client device 1108, 1110 and/or the hub device 1112 (e.g., such as within consent data 1314). In some examples, the processor 1406 of the backend server 1122 obtains the information 1246 by analyzing the image data 1224 using at least one of computer vision processing and image processing (described in detail above). Based on the analysis, the processor 1406 of the backend server 1122 may determine that the image data 1224 depicts suspicious activity. The processor 1406 of the backend server 1122 may then generate the information 1246, which describes the suspicious activity depicted by the image data 1224.
The process 2600, at block B2608, determines that a location of the A/V recording and communication device is within the geographic area. For example, the processor 1406 of the backend server 1122 may utilize the location data 1238 to determine the location of the A/V recording and communication device 1102. The processor 1406 of the backend server 1122 may then determine that the location of the A/V recording and communication device 1102 is within the geographic area. In some examples, when a statistics request 1420 indicates the second geographic area 1422, the processor 1406 of the backend server 1122 may determine that the location of the A/V recording and communication device 1102 is within the second geographic area 1422.
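For a geographic area modeled as a circle around a point of interest, the containment check at block B2608 could be implemented with a great-circle distance test. This is a minimal sketch under that assumed area model; the disclosure does not specify how areas are represented (polygons would require a point-in-polygon test instead).

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def device_in_area(device_location, area_center, radius_km):
    """True if the device's (lat, lon) falls within the circular area."""
    return haversine_km(*device_location, *area_center) <= radius_km
```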
The process 2600, at block B2610, creates the second criminal statistics using at least a portion of the first criminal statistics and the information. For example, the processor 1406 of the backend server 1122 may create the second criminal statistics using at least a portion of the first criminal data 1416 that corresponds to the at least the portion of the first criminal statistics and the information 1246 describing the image data 1224. In some examples, when the statistics request 1420 indicates the second geographic area 1422, the at least the portion of the first criminal data 1416 may represent the at least the portion of the first criminal statistics that occurred within the second geographic area 1422. In some examples, the processor 1406 of the backend server 1122 may further create the second criminal statistics by further organizing the second criminal statistics into the categories 1428.
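The combination at block B2610 — filter the first statistics to the requested area, fold in the device-derived information, and group by category — could be sketched as below. The incident schema (dicts with `category` and `location` keys) and the default `"suspicious_activity"` category are illustrative assumptions.

```python
from collections import defaultdict

def create_second_statistics(first_statistics, device_information, area_filter):
    """Combine the in-area portion of the first criminal statistics with
    information describing device-generated image data, grouped by category.
    `area_filter` is a predicate over an incident's location."""
    combined = defaultdict(list)
    for incident in first_statistics:
        if area_filter(incident["location"]):   # keep only in-area reports
            combined[incident["category"]].append(incident)
    for info in device_information:             # fold in device-derived info
        combined[info.get("category", "suspicious_activity")].append(info)
    return dict(combined)
```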
The process 2600, at block B2612, transmits second criminal data representing the second criminal statistics. For example, the processor 1406 of the backend server 1122 may transmit, using the communication module 1402, the second criminal data 1432 representing the second criminal statistics to the first client device 1108, 1110 or the second client device 1128 (e.g., whichever client device transmitted the statistics request 1420). In response, the first client device 1108, 1110 or the second client device 1128 may receive the second criminal data 1432 and display the second criminal statistics to the user.
FIG. 27 is a flowchart illustrating an example process 2700 for requesting criminal statistics according to various aspects of the present disclosure. The process 2700, at block B2702, displays a graphical user interface (GUI), the GUI for requesting criminal statistics. For example, the processor 1612 of the second client device 1128 may display the GUI 1622 using the display 1620 (and/or the processor 1512 of the first client device 1108, 1110 may display the GUI 1522 using the display 1520). The GUI 1622 (and/or the GUI 1522) may be for requesting criminal statistics, such as from the backend server 1122.
The process 2700, at block B2704, receives a first input indicating a location. For example, the processor 1612 of the second client device 1128 (and/or the processor 1512 of the first client device 1108, 1110) may receive the first input indicating the location. In some examples, the processor 1612 of the second client device 1128 (and/or the processor 1512 of the first client device 1108, 1110) may receive the first input using a first text box and/or dropdown menu 1812 that enables a user to identify the location. In some examples, the location may include an address of the user's property. In some examples, the location may be the location of the A/V recording and communication device associated with the user, such as the A/V recording and communication device 1102 associated with the first client device 1108, 1110. In other examples, the location may be the location of the client device attempting to request and/or access the criminal statistics. In such an example, the location may be determined using the location services of the client device, such as GPS coordinates generated by an onboard GPS chip.
The process 2700, at block B2706, displays, on the GUI, a map of a first geographic area that is associated with the location. For example, the processor 1612 of the second client device 1128 may display, on the GUI 1622, the map 1624 of the first geographic area that is associated with the location (and/or the processor 1512 of the first client device 1108, 1110 may display, on the GUI 1522, the map 1524 of the first geographic area that is associated with the location). For example, as illustrated on the example GUI 1802 (which may correspond to the GUI 1522 and/or the GUI 1622) of FIG. 18, the map 1804 (which may correspond to the map 1524 and/or the map 1624) may display the geographic area that includes the location 1806.
The process 2700, at block B2708, receives a second input indicating a second geographic area that includes a portion of the first geographic area. For example, the processor 1612 of the second client device 1128 may receive the second input indicating the second geographic area 1422(2) (and/or the processor 1512 of the first client device 1108, 1110 may receive the second input indicating the second geographic area 1422(1)). In some examples, the processor 1612 of the second client device 1128 (and/or the processor 1512 of the first client device 1108, 1110) may receive the second input via the map 1804 of the GUI 1802.
The process 2700, at block B2710, displays, on the GUI, an indication of the second geographic area. For example, the processor 1612 of the second client device 1128 may display, on the GUI 1622, an indication of the second geographic area 1422(2) (and/or the processor 1512 of the first client device 1108, 1110 may display, on the GUI 1522, an indication of the second geographic area 1422(1)). For example, and as illustrated on the example GUI 1802 of FIG. 18, the map 1804 may display an indication of the second geographic area 1810.
The process 2700, at block B2712, receives a third input indicating a time period. For example, the processor 1612 of the second client device 1128 may receive the third input indicating the time period 1424(2) (and/or the processor 1512 of the first client device 1108, 1110 may receive the third input indicating the time period 1424(1)). For example, and as illustrated by the example GUI 1802 of FIG. 18, the processor 1612 of the second client device 1128 (and/or the processor 1512 of the first client device 1108, 1110) may receive the third input using a second text box and/or dropdown menu 1814 that enables the user to identify the time period.
The process 2700, at block B2714, receives a fourth input indicating one or more categories. For example, the processor 1612 of the second client device 1128 may receive the fourth input indicating the one or more categories 1428(2) (and/or the processor 1512 of the first client device 1108, 1110 may receive the fourth input indicating the one or more categories 1428(1)). For example, and as illustrated by the example GUI 1802 of FIG. 18, the processor 1612 of the second client device 1128 (and/or the processor 1512 of the first client device 1108, 1110) may receive the fourth input using a fourth text box and/or dropdown menu 1818 that enables the user to identify the one or more categories.
The process 2700, at block B2716, receives a fifth input indicating a frequency. For example, the processor 1612 of the second client device 1128 may receive the fifth input indicating the frequency 1426(2) (and/or the processor 1512 of the first client device 1108, 1110 may receive the fifth input indicating the frequency 1426(1)). For example, and as illustrated by the example GUI 1802 of FIG. 18, the processor 1612 of the second client device 1128 (and/or the processor 1512 of the first client device 1108, 1110) may receive the fifth input using a third text box and/or dropdown menu 1816 that enables the user to identify the frequency.
The process 2700, at block B2718, receives a sixth input associated with requesting the criminal statistics. For example, the processor 1612 of the second client device 1128 may receive the sixth input associated with requesting the criminal statistics (and/or the processor 1512 of the first client device 1108, 1110 may receive the sixth input associated with requesting the criminal statistics). For example, and as illustrated by the example GUI 1802 of FIG. 18, the processor 1612 of the second client device 1128 (and/or the processor 1512 of the first client device 1108, 1110) may receive the sixth input using a button and/or other graphical element 1820 that enables the user to cause the second client device 1128 (and/or the first client device 1108, 1110) to request the criminal statistics from the backend server 1122.
The process 2700, at block B2720, generates a request for the criminal statistics. For example, based on receiving the sixth input, the processor 1612 of the second client device 1128 may generate a statistics request 1420(2) (and/or the processor 1512 of the first client device 1108, 1110 may generate a statistics request 1420(1)). In some examples, the statistics request 1420(2) may include the second geographic area 1422(2), the time period 1424(2), the frequency 1426(2), and/or the one or more categories 1428(2) (and/or the statistics request 1420(1) may include the second geographic area 1422(1), the time period 1424(1), the frequency 1426(1), and/or the one or more categories 1428(1)).
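A statistics request carrying these four fields could be assembled as a simple structured payload. The field names and example values below mirror the inputs described above but are assumptions about the wire format, which the disclosure does not specify.

```python
from dataclasses import dataclass, asdict

@dataclass
class StatisticsRequest:
    """Illustrative request object built from the GUI inputs."""
    geographic_area: dict   # e.g. {"center": (lat, lon), "radius_km": 2.0}
    time_period: str        # e.g. "last_week"
    frequency: str          # e.g. "weekly"
    categories: list        # e.g. ["murders", "larceny"]

def build_request(area, period, frequency, categories):
    """Assemble the payload a client might transmit to the backend server."""
    return asdict(StatisticsRequest(area, period, frequency, list(categories)))
```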
The process 2700, at block B2722, transmits the request to a backend server. For example, the processor 1612 of the second client device 1128 may transmit, using the communication module 1610, the statistics request 1420(2) to the backend server 1122 (and/or the processor 1512 of the first client device 1108, 1110 may transmit, using the communication module 1510, the statistics request 1420(1) to the backend server 1122).
The process 2700, at block B2724, receives data representing the criminal statistics from the backend server. For example, the processor 1612 of the second client device 1128 may receive, using the communication module 1610, the second criminal data 1432(2) representing the second criminal statistics from the backend server 1122 (and/or the processor 1512 of the first client device 1108, 1110 may receive, using the communication module 1510, the second criminal data 1432(1) representing the second criminal statistics from the backend server 1122). In some examples, the processor 1612 of the second client device 1128 may first receive, using the communication module 1610, a notification 1434(2) from the backend server 1122 that the second criminal statistics are ready to be viewed (and/or the processor 1512 of the first client device 1108, 1110 may first receive, using the communication module 1510, a notification 1434(1) from the backend server 1122 that the second criminal statistics are ready to be viewed). In such examples, and in response to receiving an input associated with the notification, the processor 1612 of the second client device 1128 may transmit, using the communication module 1610, an additional statistics request 1420(2) to the backend server 1122 for the second criminal statistics (and/or the processor 1512 of the first client device 1108, 1110 may transmit, using the communication module 1510, an additional statistics request 1420(1) to the backend server 1122 for the second criminal statistics).
The process 2700, at block B2726, displays at least a portion of the criminal statistics. For example, the processor 1612 of the second client device 1128 may display at least a portion of the second criminal statistics using the display 1620 (and/or the processor 1512 of the first client device 1108, 1110 may display at least a portion of the second criminal statistics using the display 1520).
The process 2700 of FIG. 27 may be implemented in a variety of embodiments, including those discussed above. However, the below-detailed embodiments are not intended to be limiting, and are provided merely as example embodiments of the present disclosure. Other embodiments similar to those outlined herein may also fall within the scope of the present disclosure.
For example, the processor 1612 of the second client device 1128 may cause a GUI 1802 (which may correspond to the GUI 1622) to be displayed on the display 1620 (e.g., block B2702). While displaying the GUI 1802, the processor 1612 of the second client device 1128 may receive, using the input interface 1604, a first input associated with a geographic location of a user's address, which may be located in San Diego (e.g., block B2704). In some examples, the first input is received using the first text box and/or dropdown menu 1812. In response, the processor 1612 of the second client device 1128 may cause a map 1804, which may include a portion of San Diego, to be displayed on the GUI 1802 (e.g., block B2706). The processor 1612 of the second client device 1128 may then receive, using the input interface 1604, a second input indicating a second geographic area 1422(2), such as an area 1810 around the user's address in San Diego (e.g., block B2708). In response, the processor 1612 of the second client device 1128 may cause the second geographic area 1810 to be displayed on the map 1804 (e.g., block B2710).
Next, the processor 1612 of the second client device 1128 may receive, using the input interface 1604, a third input indicating a time period 1424(2), such as the last week (e.g., block B2712). In some examples, the third input is received using the second text box and/or dropdown menu 1814. Next, the processor 1612 of the second client device 1128 may receive, using the input interface 1604, a fourth input indicating one or more categories 1428(2), such as murders, larceny, and the like (e.g., block B2714). In some examples, the fourth input is received using the fourth text box and/or dropdown menu 1818. Additionally, the processor 1612 of the second client device 1128 may receive, using the input interface 1604, a fifth input indicating frequency 1426(2), such as every week (e.g., block B2716). In some examples, the fifth input is received using the third text box and/or dropdown menu 1816. The processor 1612 of the second client device 1128 may then receive, using the input interface 1604, a sixth input associated with requesting criminal statistics (e.g., block B2718). In some examples, the sixth input is received using the button and/or other graphical element 1820.
In response, the processor 1612 of the second client device 1128 may generate a statistics request 1420(2) that includes the geographic area 1422(2), the time period 1424(2), the frequency 1426(2), and the categories 1428(2) (e.g., block B2720). The processor 1612 of the second client device 1128 may then transmit, using the communication module 1610, the statistics request 1420(2) to the backend server 1122 (e.g., block B2722). Based on transmitting the statistics request 1420(2), the processor 1612 of the second client device 1128 may receive, using the communication module 1610, the second criminal data 1432(2) representing the criminal statistics (e.g., block B2724). The criminal statistics may be associated with the area 1810 around the user's address, include incidents that occurred in the last week, and be organized based on murders, larceny, and the like. The processor 1612 of the second client device 1128 may then cause a display of a portion of the criminal statistics using the display 1620 (e.g., block B2726).
FIG. 28 is a flowchart illustrating an example process 2800 for displaying criminal statistics according to various aspects of the present disclosure. The process 2800, at block B2802, displays at least a first portion of criminal statistics. For example, the processor 1612 of the second client device 1128 may display at least a portion of the criminal statistics using the display 1620 (and/or the processor 1512 of the first client device 1108, 1110 may display at least a portion of the criminal statistics using the display 1520). In some examples, block B2802 of the example process 2800 from FIG. 28 may correspond to block B2726 of the example process 2700 of FIG. 27.
The process 2800, at block B2804, receives an input associated with the at least the first portion of the criminal statistics. For example, the processor 1612 of the second client device 1128 may receive, using the input interface 1604 (and/or the processor 1512 of the first client device 1108, 1110 may receive, using the input interface 1604), the input associated with the first portion of the criminal statistics. In some examples, the input may correspond to a selection of an icon associated with an incident (e.g., a criminal report, information 1246 describing image data 1224, the image data 1224 itself, etc.) that is included in the at least the portion of the criminal statistics. In some examples, the input may indicate that the user wishes to view a second portion of the criminal statistics.
Based on the input, the process 2800, at block B2806, may display additional information associated with an incident. For example, if the input corresponds to a selection of an icon associated with an incident, such as a criminal report, the processor 1612 of the second client device 1128 may display additional information associated with the incident using the display 1620 (and/or the processor 1512 of the first client device 1108, 1110 may display additional information associated with the incident using the display 1520). In some examples, the additional information may correspond to at least a portion of the criminal report. In some examples, the additional information may correspond to information about a suspect associated with the incident.
Additionally, or alternatively, based on the input, the process 2800, at block B2808, may display a second portion of the criminal statistics. For example, if the input indicates that the user wishes to view the second portion of the criminal statistics, the processor 1612 of the second client device 1128 may display the second portion of the criminal statistics using the display 1620 (and/or the processor 1512 of the first client device 1108, 1110 may display the second portion of the criminal statistics using the display 1520). In some examples, the input may correspond to a selection of a category, and the second portion of the criminal statistics may include incidents associated with the category. In some examples, the input may correspond to changing the geographic area being displayed, and the second portion of the criminal statistics may include incidents associated with the new geographic area. In some examples, the input may correspond to navigating through the criminal statistics, and the second portion of the criminal statistics may include additional incidents associated with the geographic area being displayed to the user.
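The category-selection and navigation cases above amount to filtering and paging a list of incidents. A minimal sketch, assuming incidents are dicts with a `category` key (an illustrative schema, not specified by the disclosure):

```python
def select_portion(statistics, category=None, page=0, page_size=10):
    """Return the portion of the criminal statistics to display:
    optionally filter incidents by a selected category, then page
    through the matching results."""
    incidents = [
        i for i in statistics
        if category is None or i["category"] == category
    ]
    start = page * page_size
    return incidents[start:start + page_size]
```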
Additionally, or alternatively, and based on the input, the process 2800, at block B2810, may display image data associated with an incident. For example, if the input corresponds to a selection of an icon associated with information 1246 describing image data 1224, the processor 1612 of the second client device 1128 may display the image data 1224 using the display 1620 (and/or the processor 1512 of the first client device 1108, 1110 may display the image data 1224 using the display 1520). In some examples, the processor 1612 of the second client device 1128 may further display additional information associated with the image data 1224, such as comments and/or additional details describing the suspicious behavior, using the display 1620 (and/or the processor 1512 of the first client device 1108, 1110 may further display the additional information using the display 1520).
The processes described herein may be used by systems, such as the backend server 1122, to create criminal statistics that are tailored to the users requesting them and/or include data associated with suspicious activities that occur in the geographic areas surrounding the users' properties. For example, based on receiving a statistics request 1420 that indicates a geographic area 1422, the backend server 1122 may use data representing criminal activities, such as criminal reports, that occurred within the geographic area 1422 to create the criminal statistics for the user. Additionally, in some examples, the backend server 1122 may utilize information 1246 describing image data 1224, the image data 1224, and/or other data that is generated by one or more A/V recording and communication devices 1102 located within the geographic area 1422 associated with the user to create the criminal statistics for the user. As such, by creating criminal statistics that are specific to the indicated geographic area 1422, and which include the additional information 1246, the backend server 1122 is capable of curating criminal statistics that are easily digestible and also tailored specifically to the user. As a result, the user may be better informed about the crimes and/or other suspicious activity that are occurring around the geographic area 1422 surrounding the user's property. Using this information, the user may take various actions to protect the user's family, pets, and/or property, as well as protect families, pets, and/or property in the surrounding area (e.g., the street on which the property is located, the neighborhood the property is in, etc.).
FIG. 29 is a functional block diagram of a client device 2902 on which the present embodiments may be implemented according to various aspects of the present disclosure. The user's client device 114 described with reference to FIG. 1 and/or the client device 1108, 1110 described with reference to FIG. 11 may include some or all of the components and/or functionality of the client device 2902. The client device 2902 may comprise, for example, a smartphone.
With reference to FIG. 29, the client device 2902 includes a processor 2904, a memory 2906, a user interface 2908, a communication module 2910, and a dataport 2912. These components are communicatively coupled together by an interconnect bus 2914. The processor 2904 may include any processor used in smartphones and/or portable computing devices, such as an ARM processor (a processor based on the RISC (reduced instruction set computer) architecture developed by Advanced RISC Machines (ARM)). In some embodiments, the processor 2904 may include one or more other processors, such as one or more conventional microprocessors, and/or one or more supplementary co-processors, such as math co-processors.
The memory 2906 may include both operating memory, such as random access memory (RAM), as well as data storage, such as read-only memory (ROM), hard drives, flash memory, or any other suitable memory/storage element. The memory 2906 may include removable memory elements, such as a CompactFlash card, a MultiMediaCard (MMC), and/or a Secure Digital (SD) card. In some embodiments, the memory 2906 may comprise a combination of magnetic, optical, and/or semiconductor memory, and may include, for example, RAM, ROM, flash drive, and/or a hard disk or drive. The processor 2904 and the memory 2906 each may be, for example, located entirely within a single device, or may be connected to each other by a communication medium, such as a USB port, a serial port cable, a coaxial cable, an Ethernet-type cable, a telephone line, a radio frequency transceiver, or other similar wireless or wired medium or combination of the foregoing. For example, the processor 2904 may be connected to the memory 2906 via the dataport 2912.
The user interface 2908 may include any user interface or presentation elements suitable for a smartphone and/or a portable computing device, such as a keypad, a display screen, a touchscreen, a microphone, and a speaker. The communication module 2910 is configured to handle communication links between the client device 2902 and other, external devices or receivers, and to route incoming/outgoing data appropriately. For example, inbound data from the dataport 2912 may be routed through the communication module 2910 before being directed to the processor 2904, and outbound data from the processor 2904 may be routed through the communication module 2910 before being directed to the dataport 2912. The communication module 2910 may include one or more transceiver modules capable of transmitting and receiving data, and using, for example, one or more protocols and/or technologies, such as GSM, UMTS (3GSM), IS-95 (CDMA one), IS-2000 (CDMA 2000), LTE, FDMA, TDMA, W-CDMA, CDMA, OFDMA, Wi-Fi, WiMAX, or any other protocol and/or technology.
The dataport 2912 may be any type of connector used for physically interfacing with a smartphone and/or a portable computing device, such as a mini-USB port or an IPHONE®/IPOD® 30-pin connector or LIGHTNING® connector. In other embodiments, the dataport 2912 may include multiple communication channels for simultaneous communication with, for example, other processors, servers, and/or client terminals.
The memory 2906 may store instructions for communicating with other systems, such as a computer. The memory 2906 may store, for example, a program (e.g., computer program code) adapted to direct the processor 2904 in accordance with the present embodiments. The instructions also may include program elements, such as an operating system. While execution of sequences of instructions in the program causes the processor 2904 to perform the process steps described herein, hard-wired circuitry may be used in place of, or in combination with, software/firmware instructions for implementation of the processes of the present embodiments. Thus, the present embodiments are not limited to any specific combination of hardware and software.
FIG. 30 is a functional block diagram of a general-purpose computing system on which the present embodiments may be implemented according to various aspects of the present disclosure. The computer system 3002 may be embodied in at least one of a personal computer (also referred to as a desktop computer) 3004, a portable computer (also referred to as a laptop or notebook computer) 3006, and/or a server 3008. A server is a computer program and/or a machine that waits for requests from other machines or software (clients) and responds to them. A server typically processes data. The purpose of a server is to share data and/or hardware and/or software resources among clients. This architecture is called the client-server model. The clients may run on the same computer or may connect to the server over a network. Examples of computing servers include database servers, file servers, mail servers, print servers, web servers, game servers, and application servers. The term server may be construed broadly to include any computerized process that shares a resource with one or more client processes.
The computer system 3002 may execute at least some of the operations described above. The computer system 3002 may include at least one processor 3010, memory 3012, at least one storage device 3014, and input/output (I/O) devices 3016. Some or all of the components 3010, 3012, 3014, 3016 may be interconnected via a system bus 3018. The processor 3010 may be single- or multi-threaded and may have one or more cores. The processor 3010 may execute instructions, such as those stored in the memory 3012 and/or in the storage device 3014. Information may be received and output using one or more I/O devices 3016.
The memory 3012 may store information, and may be a computer-readable medium, such as volatile or non-volatile memory. The storage device(s) 3014 may provide storage for the system 3002, and may be a computer-readable medium. In various aspects, the storage device(s) 3014 may be a flash memory device, a hard disk device, an optical disk device, a tape device, or any other type of storage device.
The I/O devices 3016 may provide input/output operations for the system 3002. The I/O devices 3016 may include a keyboard, a pointing device, and/or a microphone. The I/O devices 3016 may further include a display unit for displaying graphical user interfaces, a speaker, and/or a printer. External data may be stored in one or more accessible external databases 3020.
The features of the present embodiments described herein may be implemented in digital electronic circuitry, and/or in computer hardware, firmware, software, and/or in combinations thereof. Features of the present embodiments may be implemented in a computer program product tangibly embodied in an information carrier, such as a machine-readable storage device, and/or in a propagated signal, for execution by a programmable processor. Embodiments of the present method steps may be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output.
The features of the present embodiments described herein may be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and/or instructions from, and to transmit data and/or instructions to, a data storage system, at least one input device, and at least one output device. A computer program may include a set of instructions that may be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions may include, for example, both general and special purpose processors, and/or the sole processor or one of multiple processors of any kind of computer. Generally, a processor may receive instructions and/or data from a read only memory (ROM), or a random-access memory (RAM), or both. Such a computer may include a processor for executing instructions and one or more memories for storing instructions and/or data.
Generally, a computer may also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files. Such devices include magnetic disks, such as internal hard disks and/or removable disks, magneto-optical disks, and/or optical disks. Storage devices suitable for tangibly embodying computer program instructions and/or data may include all forms of non-volatile memory, including for example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices, magnetic disks such as internal hard disks and removable disks, magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, one or more ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features of the present embodiments may be implemented on a computer having a display device, such as an LCD (liquid crystal display) monitor, for displaying information to the user. The computer may further include a keyboard, a pointing device, such as a mouse or a trackball, and/or a touchscreen by which the user may provide input to the computer.
The features of the present embodiments may be implemented in a computer system that includes a back-end component, such as a data server, and/or that includes a middleware component, such as an application server or an Internet server, and/or that includes a front-end component, such as a client computer having a graphical user interface (GUI) and/or an Internet browser, or any combination of these. The components of the system may be connected by any form or medium of digital data communication, such as a communication network. Examples of communication networks may include, for example, a LAN (local area network), a WAN (wide area network), and/or the computers and networks forming the Internet.
The computer system may include clients and servers. A client and server may be remote from each other and interact through a network, such as those described herein. The relationship of client and server may arise by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The above description presents the best mode contemplated for carrying out the present embodiments, and of the manner and process of practicing them, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which they pertain to practice these embodiments. The present embodiments are, however, susceptible to modifications and alternate constructions from those discussed above that are fully equivalent. Consequently, the present invention is not limited to the particular embodiments disclosed. On the contrary, the present invention covers all modifications and alternate constructions coming within the spirit and scope of the present disclosure. For example, the steps in the processes described herein need not be performed in the same order as they have been presented, and may be performed in any order(s). Further, steps that have been presented as being performed separately may in alternative embodiments be performed concurrently. Likewise, steps that have been presented as being performed concurrently may in alternative embodiments be performed separately.
Example Clauses
In a first aspect, a method comprises: receiving, from a third-party server, first criminal data representing first criminal statistics from within a first geographic area; storing the first criminal data in one or more databases; receiving, from an audio/video (A/V) recording and communication device, image data generated by a camera of the A/V recording and communication device; receiving, from a first client device associated with the A/V recording and communication device, consent for sharing the image data, the consent including at least information describing a suspicious activity depicted by the image data; storing the image data and the information in the one or more databases; receiving, from at least one of the first client device and a second client device, a criminal statistics request for second criminal statistics, the criminal statistics request indicating a second geographic area that includes at least a portion of the first geographic area; based on the receiving of the criminal statistics request, identifying a portion of the first criminal statistics that occurred within the second geographic area; based on the receiving of the criminal statistics request, determining that a location of the A/V recording and communication device is within the second geographic area; curating the second criminal statistics using at least the portion of the first criminal statistics that occurred within the second geographic area and the information describing the suspicious activity depicted by the image data; and transmitting, to the at least one of the first client device and the second client device, second criminal data representing the second criminal statistics.
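Outside the claim language, the central curation step of the first aspect — identifying the portion of the first criminal statistics that occurred within the second geographic area, confirming that the A/V recording and communication device is located within that area, and merging in the device-derived information — can be illustrated with a minimal sketch. All names here are hypothetical, and the second geographic area is simplified to an axis-aligned bounding box over latitude/longitude coordinates:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Report:
    """One criminal-statistics entry: a location plus a description."""
    lat: float
    lon: float
    description: str


@dataclass
class BoundingBox:
    """A simplified second geographic area."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)


def curate_statistics(first_stats: List[Report],
                      device_location: Tuple[float, float],
                      device_info: str,
                      area: BoundingBox) -> List[Report]:
    """Keep first-statistics reports that occurred within the second
    geographic area and, when the A/V device is also located within that
    area, append the information describing its image data."""
    second_stats = [r for r in first_stats if area.contains(r.lat, r.lon)]
    if area.contains(*device_location):
        second_stats.append(Report(*device_location, device_info))
    return second_stats
```

The sketch treats both the third-party reports and the device-derived information as uniform `Report` entries; a production system would keep richer records and a real geographic containment test.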
In an embodiment of the first aspect, the method further comprises: determining that the first criminal statistics are associated with one or more first criminal categories, and wherein the curating of the second criminal statistics includes at least filtering at least the portion of the first criminal statistics that occurred within the second geographic area and the information describing the suspicious activity depicted by the image data into one or more second criminal categories.
In another embodiment of the first aspect, the criminal statistics request further indicates the one or more second criminal categories for filtering the second criminal statistics.
In another embodiment of the first aspect, the information indicates a criminal category of the one or more second criminal categories, and wherein the curating the second criminal statistics is further based on the information indicating the criminal category.
In another embodiment of the first aspect, the criminal statistics request further indicates a period of time associated with the second criminal statistics, and the method further comprises: determining that the portion of the first criminal statistics occurred during the period of time; and determining that the image data was captured by the A/V recording and communication device during the period of time.
In another embodiment of the first aspect, the criminal statistics request further indicates one or more criminal categories, and the method further comprises: determining that the portion of the first criminal statistics are associated with the one or more criminal categories; and determining, based on the information, that the suspicious activity depicted by the image data is associated with a criminal category of the one or more criminal categories.
In another embodiment of the first aspect, the criminal statistics request further indicates one or more criminal categories, and the method further comprises: analyzing the image data to determine that the image data depicts the suspicious activity; and determining that the suspicious activity is associated with a criminal category of the one or more criminal categories.
In another embodiment of the first aspect, the image data is analyzed using at least one of computer vision processing and image processing.
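One simple, purely illustrative way to associate a device-observed suspicious activity with one of the requested criminal categories — standing in for the computer vision or image processing step, which is beyond this sketch — is a keyword lookup over a textual description of the activity. The category names and keywords below are hypothetical:

```python
# Hypothetical keyword-to-category mapping; a real system would classify
# the image data itself via computer vision / image processing.
CATEGORY_KEYWORDS = {
    "burglary": ["break-in", "forced entry", "window"],
    "package theft": ["package", "parcel", "porch"],
    "vandalism": ["graffiti", "spray", "smashed"],
}


def categorize_activity(description: str, requested: list[str]):
    """Return the first requested criminal category whose keywords appear
    in the description of the suspicious activity, or None if no
    requested category matches."""
    text = description.lower()
    for category in requested:
        for keyword in CATEGORY_KEYWORDS.get(category, []):
            if keyword in text:
                return category
    return None
```

If the activity matches no requested category, the entry would simply be excluded from the curated second criminal statistics.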
In another embodiment of the first aspect, the criminal statistics request further indicates a frequency for receiving the second criminal statistics, and the curating of the second criminal statistics is further based on the frequency.
In another embodiment of the first aspect, the criminal statistics request is a first criminal statistics request, and the method further comprises: transmitting, to the at least one of the first client device and the second client device, a notification associated with the second criminal statistics; and receiving, from the at least one of the first client device and the second client device, a second criminal statistics request to view the second criminal statistics, the transmitting of the second criminal data representing the second criminal statistics to the at least one of the first client device and the second client device is based on the receiving of the second criminal statistics request to view the second criminal statistics.
In another embodiment of the first aspect, the receiving the image data from the A/V recording and communication device is via a hub device of a security system.
In another embodiment of the first aspect, the method further comprises: associating the second geographic area with a user profile, the user profile is associated with a user of the A/V recording and communication device; and after the transmitting of the second criminal data representing the second criminal statistics, curating at least third criminal statistics for the user using the second geographic area associated with the user profile.
In a second aspect, a method comprises: receiving first criminal data representing first criminal statistics that occurred within a geographic area; receiving, from an audio/video (A/V) recording and communication device, image data generated by a camera of the A/V recording and communication device; obtaining information describing the image data; determining that a location of the A/V recording and communication device is located within the geographic area; creating second criminal statistics using at least a portion of the first criminal statistics and the information describing the image data; and transmitting, to a client device, second criminal data representing the second criminal statistics.
In an embodiment of the second aspect, the geographic area is a first geographic area, and the method further comprises: determining a second geographic area associated with the second criminal statistics, the second geographic area corresponding to a portion of the first geographic area; and determining that the at least the portion of the first criminal statistics is associated with the second geographic area, the creating of the second criminal statistics using the at least the portion of the first criminal statistics is based on the determining that the at least the portion of the first criminal statistics is associated with the second geographic area.
In another embodiment of the second aspect, the method further comprises: receiving, from the client device, a criminal statistics request for the second criminal statistics, the criminal statistics request indicating the second geographic area.
In another embodiment of the second aspect, the method further comprises: associating the second geographic area with a user profile, the user profile is associated with a user of the A/V recording and communication device, and the determining the second geographic area associated with the second criminal statistics comprises analyzing the user profile to determine the second geographic area associated with the second criminal statistics.
In another embodiment of the second aspect, the method further comprises: determining that the first criminal statistics are associated with one or more first criminal categories, the creating of the second criminal statistics includes at least organizing at least the portion of the first criminal statistics and the information describing the image data into one or more second criminal categories.
In another embodiment of the second aspect, the method further comprises: receiving, from the client device, a criminal statistics request for the second criminal statistics, the criminal statistics request indicating the one or more second criminal categories.
In another embodiment of the second aspect, the information indicates a criminal category of the one or more second criminal categories, and the creating the second criminal statistics is based on the information indicating the criminal category.
In another embodiment of the second aspect, the method further comprises: determining that the portion of the first criminal statistics occurred during a period of time; and determining that the image data was captured by the A/V recording and communication device during the period of time, the creating of the second criminal statistics using at least the portion of the first criminal statistics and the information describing the image data is based on the determining that the portion of the first criminal statistics occurred during the period of time and the determining that the image data was captured by the A/V recording and communication device during the period of time.
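The period-of-time check in this embodiment — including a first-statistics report, or an A/V-device capture, only when its timestamp falls inside the requested window — reduces to a range comparison. The function and event shape below are illustrative:

```python
from datetime import datetime
from typing import Iterable, List, Tuple


def filter_by_period(events: Iterable[Tuple[datetime, str]],
                     start: datetime,
                     end: datetime) -> List[Tuple[datetime, str]]:
    """Keep only events (timestamp, description) -- whether third-party
    criminal reports or A/V-device image captures -- that occurred
    during the requested period of time, inclusive of both endpoints."""
    return [event for event in events if start <= event[0] <= end]
```

The same filter applies uniformly to both data sources before the second criminal statistics are created.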
In another embodiment of the second aspect, the method further comprises: determining that the portion of the first criminal statistics are associated with one or more criminal categories; and determining, based on the information, that the image data is associated with a criminal category of the one or more criminal categories, the creating of the second criminal statistics using the at least the portion of the first criminal statistics and the information describing the image data is based on the determining that the portion of the first criminal statistics are associated with the one or more criminal categories and the determining that the image data is associated with the criminal category of the one or more criminal categories.
In another embodiment of the second aspect, the method further comprises: analyzing the image data to determine that the image data depicts a suspicious activity, the obtaining of the information comprises generating the information describing the suspicious activity depicted by the image data.
In another embodiment of the second aspect, the method further comprises: determining that the suspicious activity is associated with a criminal category of one or more criminal categories, the creating of the second criminal statistics using the at least the portion of the first criminal statistics and the information describing the image data is based on the determining that the suspicious activity is associated with the criminal category of the one or more criminal categories.
In another embodiment of the second aspect, the image data is analyzed using at least one of computer vision processing and image processing.

In another embodiment of the second aspect, the method further comprises: determining that a given period of time has passed since creating previous criminal statistics, the creating of the second criminal statistics using the at least the portion of the first criminal statistics and the information describing the image data is based on the determining that the given period of time has passed.
In another embodiment of the second aspect, the method further comprises: transmitting, to the client device, a notification associated with the second criminal statistics; and receiving, from the client device, a request to view the second criminal statistics, the transmitting of the second criminal data representing the second criminal statistics to the client device is based on the receiving of the request to view the second criminal statistics.
In another embodiment of the second aspect, the receiving the image data from the A/V recording and communication device is via a hub device of a security system.
In a third aspect, a method comprises: receiving, from a third-party server, first criminal data representing first criminal statistics that occurred within a first geographic area, the first criminal statistics being organized using one or more first categories associated with one or more first types of crimes; storing the first criminal data in one or more databases; receiving, from a client device, a criminal statistics request for second criminal statistics, the criminal statistics request indicating a second geographic area that corresponds to a portion of the first geographic area; based on the receiving of the criminal statistics request, identifying a portion of the first criminal statistics that occurred within the second geographic area; determining one or more second categories for organizing the second criminal statistics, the one or more second categories being associated with one or more second types of crimes; creating the second criminal statistics using the at least the portion of the first criminal statistics that occurred within the second geographic area, the creating including at least organizing the second criminal statistics using the one or more second categories associated with the one or more second types of crimes; and transmitting, to the client device, second criminal data representing the second criminal statistics.
In an embodiment of the third aspect, the method further comprises: receiving, from an audio/video (A/V) recording and communication device, image data generated by the A/V recording and communication device; and determining that a location of the A/V recording and communication device is located within the second geographic area, the creating of the second criminal statistics further uses the image data.
In another embodiment of the third aspect, the receiving the image data from the A/V recording and communication device is via a hub device of a security system.
In another embodiment of the third aspect, the client device is a first client device, and the method further comprises: receiving, from at least one of the first client device and a second client device, information associated with the image data; and based on the information, determining that the image data is associated with a category of the one or more second categories, the creating the second criminal statistics using the image data comprises creating the second criminal statistics using the information based on the determining that the image data is associated with the category of the one or more second categories.
In another embodiment of the third aspect, the method further comprises: analyzing the image data to determine that the image data depicts a suspicious activity; and determining that the suspicious activity is associated with a category of the one or more second categories, the creating the second criminal statistics using the image data is based on the determining that the image data depicts the suspicious activity that is associated with the category of the one or more second categories.
In another embodiment of the third aspect, the criminal statistics request further indicates the one or more second categories for organizing the second criminal statistics.
In another embodiment of the third aspect, the criminal statistics request further indicates a period of time associated with the second criminal statistics, and the method further comprises: determining that the portion of the first criminal statistics occurred during the period of time.
In another embodiment of the third aspect, the criminal statistics request further indicates a frequency for receiving the second criminal statistics, and the creating of the second criminal statistics is based on the frequency.
In another embodiment of the third aspect, the method further comprises: transmitting, to the client device, a notification associated with the second criminal statistics; and receiving, from the client device, a request to view the second criminal statistics, the transmitting of the second criminal data representing the second criminal statistics to the client device is based on the receiving of the request to view the second criminal statistics.
In another embodiment of the third aspect, the method further comprises: associating the second geographic area with a user profile, the user profile is associated with a user; and after the transmitting of the second criminal data representing the second criminal statistics, creating at least third criminal statistics for the user using the second geographic area associated with the user profile.
In a fourth aspect, a method comprises: receiving, from a third-party source, first criminal data representing first criminal statistics that occurred within a first geographic area, the first criminal statistics being organized using one or more first categories; identifying a portion of the first criminal statistics that occurred within a second geographic area; determining one or more second categories for organizing the at least the portion of the first criminal statistics that occurred within the second geographic area; creating second criminal statistics using the portion of the first criminal statistics that occurred within the second geographic area, the second criminal statistics being organized using the one or more second categories; and transmitting, to a client device, second criminal data representing the second criminal statistics.
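The re-organization step of the fourth aspect — taking statistics grouped under a provider's first categories and regrouping them under the system's second categories — can be sketched as a category mapping plus a regroup. The category names and the mapping itself are hypothetical:

```python
from collections import defaultdict

# Hypothetical mapping from a third-party provider's categories
# (first categories) to the system's own categories (second categories).
FIRST_TO_SECOND = {
    "larceny": "theft",
    "auto theft": "theft",
    "breaking and entering": "burglary",
}


def reorganize(stats_by_first: dict) -> dict:
    """Regroup per-category counts keyed by first categories into counts
    keyed by second categories; any unmapped first category is folded
    into a catch-all 'other' category."""
    out = defaultdict(int)
    for first_cat, count in stats_by_first.items():
        out[FIRST_TO_SECOND.get(first_cat, "other")] += count
    return dict(out)
```

Because several first categories can map to one second category, counts are summed rather than copied.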
In an embodiment of the fourth aspect, the method further comprises: receiving, from the client device, a criminal statistics request for the second criminal statistics, the criminal statistics request indicating at least one of the second geographic area and the one or more second categories.
In another embodiment of the fourth aspect, the method further comprises: receiving, from an audio/video (A/V) recording and communication device, image data generated by the A/V recording and communication device; and determining that a location of the A/V recording and communication device is located within the second geographic area, the creating of the second criminal statistics further uses the image data.
In another embodiment of the fourth aspect, the receiving the image data from the A/V recording and communication device is via a hub device of a security system.
In another embodiment of the fourth aspect, the client device is a first client device, and the method further comprises: receiving, from at least one of the first client device and a second client device, information associated with the image data; and based on the information, determining that the image data is associated with a category of the one or more second categories, the creating the second criminal statistics using the image data comprises creating the second criminal statistics using the information based on the determining that the image data is associated with the category of the one or more second categories.
In another embodiment of the fourth aspect, the method further comprises: analyzing the image data to determine that the image data depicts a suspicious activity; and determining that the suspicious activity is associated with a category of the one or more second categories, the creating the second criminal statistics using the image data is based on the determining that the image data depicts the suspicious activity that is associated with the category of the one or more second categories.
In another embodiment of the fourth aspect, the method further comprises: determining that the portion of the first criminal statistics occurred during a period of time, the creating second criminal statistics using the portion of the first criminal statistics is further based on the determining that the portion of the first criminal statistics occurred during the period of time.
In another embodiment of the fourth aspect, the method further comprises: determining that a given period of time has passed since creating previous criminal statistics, the creating of the second criminal statistics using the at least the portion of the first criminal statistics is based on the determining that the given period of time has passed.
In another embodiment of the fourth aspect, the method further comprises: transmitting, to the client device, a notification associated with the second criminal statistics; and receiving, from the client device, a request to view the second criminal statistics, the transmitting of the second criminal data representing the second criminal statistics to the client device is based on the receiving of the request to view the second criminal statistics.
In another embodiment of the fourth aspect, the method further comprises: associating the second geographic area with a user profile, the user profile is associated with a user; and creating at least third criminal statistics for the user using the second geographic area associated with the user profile.
In a fifth aspect, a method is implemented by a client device that includes a display, a communication module, and a processor, the method comprising: causing, by the processor, a graphical user interface (GUI) to be displayed on the display, the GUI for requesting criminal statistics; receiving, by the processor, a first input indicating a location; based on the receiving of the first input, causing, by the processor, a map of a first geographic area that is associated with the location to be displayed on the GUI; receiving, by the processor, a second input indicating a second geographic area, the second geographic area including a portion of the first geographic area; based on the receiving of the second input, causing, by the processor, an indication of the second geographic area to be displayed on the GUI; receiving, by the processor, a third input associated with requesting the criminal statistics associated with the second geographic area; based on the receiving of the third input, transmitting, by the processor and using the communication module, a criminal statistics request to a network device, the criminal statistics request indicating at least the second geographic area; receiving, by the processor and using the communication module, and from the network device, first criminal data representing the criminal statistics, the criminal statistics being based on second criminal data associated with the second geographic area and information describing image data captured by one or more audio/video (A/V) recording and communication devices located within the second geographic area; and based on the receiving of the first criminal data, causing, by the processor, at least a portion of the criminal statistics to be displayed on the display.
In an embodiment of the fifth aspect, the method further comprises: receiving, using the processor, a fourth input indicating a period of time associated with the criminal statistics, the criminal statistics request further indicates the period of time, and the criminal statistics are associated with criminal reports that correspond to the period of time.
In another embodiment of the fifth aspect, the method further comprises: receiving, using the processor, a fourth input indicating one or more criminal categories associated with the criminal statistics, the criminal statistics request further indicates the one or more criminal categories, and the at least the portion of the criminal statistics are further displayed on the display based on the one or more criminal categories.
In another embodiment of the fifth aspect, the method further comprises: receiving, using the processor, a fourth input indicating a frequency for receiving the criminal statistics, the criminal statistics request further indicates the frequency.
In another embodiment of the fifth aspect, the method further comprises: causing, by the processor, one or more first indicators to be displayed on the display, the one or more first indicators illustrating locations of criminal reports that are associated with the at least the portion of the criminal statistics; and causing, by the processor, a second indicator to be displayed on the display, the second indicator illustrating a location associated with the image data.
In another embodiment of the fifth aspect, the method further comprises: receiving, using the processor, a fourth input indicating a selection of a first indicator of the one or more first indicators, the first indicator corresponding to a criminal report associated with the at least the portion of the criminal statistics; and based on the receiving of the fourth input, causing, by the processor, information associated with the criminal report to be displayed on the display.
In another embodiment of the fifth aspect, the method further comprises: receiving, using the processor, a fourth input indicating a selection of the second indicator associated with the image data; and based on the receiving of the fourth input, causing, by the processor, the image data to be displayed on the display.
In another embodiment of the fifth aspect, the at least the portion of the criminal statistics is a first portion of the criminal statistics, and the method further comprises: receiving, by the processor, a fourth input to view a second portion of the criminal statistics; and causing, by the processor, the second portion of the criminal statistics to be displayed on the display.
In another embodiment of the fifth aspect, the causing the at least the portion of the criminal statistics to be displayed on the display comprises causing, by the processor, the at least the portion of the criminal statistics to be displayed on the map of the first geographic area.
In another embodiment of the fifth aspect, the criminal statistics request is a first criminal statistics request, and the method further comprises: receiving, by the processor and using the communication module, a notification from the network device, the notification indicating that the criminal statistics are available; receiving, by the processor, a fourth input associated with viewing the criminal statistics; and transmitting, by the processor and using the communication module, and to the network device, a second criminal statistics request to view the criminal statistics, the receiving of the first criminal data representing the criminal statistics is based on the transmitting of the second criminal statistics request to view the criminal statistics.
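Taken together, the fifth-aspect embodiments describe a criminal statistics request that carries the selected second geographic area plus the optional period of time, criminal categories, and delivery frequency. A hypothetical wire shape for such a request, as the client device might serialize it before transmitting to the network device, could look like this (the field names and the center-plus-radius encoding of the area are assumptions, not part of the disclosure):

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional


@dataclass
class CriminalStatisticsRequest:
    # Second geographic area selected on the map, encoded here
    # (illustratively) as a center point and a radius in meters.
    center_lat: float
    center_lon: float
    radius_m: float
    # Optional filters from the embodiments: a period of time, one or
    # more criminal categories, and a frequency for recurring delivery.
    start: Optional[str] = None
    end: Optional[str] = None
    categories: List[str] = field(default_factory=list)
    frequency: Optional[str] = None

    def to_json(self) -> str:
        """Serialize the request for transmission via the communication
        module to the network device."""
        return json.dumps(asdict(self))
```

Fields left unset are transmitted as nulls or empty lists, which the network device can treat as "no filter".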
In a sixth aspect, a computer program product is provided, the computer program product embodied in code executable by a processor, which when executed causes the processor to perform operations comprising: causing a graphical user interface (GUI) to be displayed on a display, the GUI for requesting criminal statistics; receiving a first input indicating a location; based on the receiving of the first input, causing a map of a first geographic area that is associated with the location to be displayed on the GUI; receiving a second input indicating a second geographic area, the second geographic area including a portion of the first geographic area; based on the receiving of the second input, causing an indication of the second geographic area to be displayed on the GUI; receiving a third input indicating one or more criminal categories associated with the criminal statistics; transmitting a criminal statistics request to a network device, the criminal statistics request indicating at least the second geographic area and the one or more criminal categories; and based on the transmitting of the criminal statistics request, receiving, from the network device, first criminal data representing the criminal statistics, the criminal statistics being based on: second criminal data representing one or more criminal reports associated with the one or more criminal categories, the one or more criminal reports being associated with the second geographic area; and information describing image data captured by one or more audio/video (A/V) recording and communication devices located within the second geographic area.
In an embodiment of the sixth aspect, the operations further comprising: causing at least a portion of the criminal statistics to be displayed on the display.
In another embodiment of the sixth aspect, the operations further comprising: causing an indicator associated with a criminal report of the one or more criminal reports to be displayed on the map, the indicator is displayed at a location on the map that corresponds to a location associated with the criminal report.
In another embodiment of the sixth aspect, the operations further comprising: causing at least an indicator associated with a criminal category of the one or more criminal categories to be displayed on the display; and causing a value to be displayed on the display, the value corresponding to a number of criminal reports that correspond to the criminal category.
In another embodiment of the sixth aspect, the operations further comprising: receiving a fourth input indicating a period of time associated with the criminal statistics, the criminal statistics request further indicates the period of time, and the one or more criminal reports are associated with the period of time.
In another embodiment of the sixth aspect, the operations further comprising: receiving a fourth input indicating a frequency for receiving the criminal statistics, the criminal statistics request further indicates the frequency.
In another embodiment of the sixth aspect, the operations further comprising: causing indicators to be displayed on the display, the indicators depicting locations of at least a portion of the one or more criminal reports and a location associated with image data captured by an A/V recording and communication device of the one or more A/V recording and communication devices.
In another embodiment of the sixth aspect, the operations further comprising: receiving a fourth input indicating a selection of a criminal report of the one or more criminal reports; and based on the receiving of the fourth input, causing information associated with the criminal report to be displayed on the display.
In another embodiment of the sixth aspect, the operations further comprising: receiving a fourth input indicating a selection of an icon associated with the image data captured by an A/V recording and communication device of the one or more A/V recording and communication devices; and based on the receiving of the fourth input, causing the image data to be displayed on the display.
In another embodiment of the sixth aspect, the operations further comprising: causing a first portion of the criminal statistics to be displayed on the display; receiving a fourth input to view a second portion of the criminal statistics; and causing the second portion of the criminal statistics to be displayed on the display.
In another embodiment of the sixth aspect, the criminal statistics request is a first criminal statistics request, and the operations further comprise: receiving a notification from the network device, the notification indicating that the criminal statistics are available; receiving a fourth input associated with viewing the criminal statistics; and transmitting, to the network device, a second criminal statistics request to view the criminal statistics, the receiving of the first criminal data representing the criminal statistics is based on the transmitting of the second criminal statistics request to view the criminal statistics.
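As a non-authoritative illustration of the sixth-aspect client flow (collect the drawn geographic area, the selected criminal categories, and the optional period-of-time and frequency inputs, then transmit a single criminal statistics request to the network device), the following Python sketch uses hypothetical names (`CrimeStatsRequest`, `build_request`) that do not appear anywhere in this application:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CrimeStatsRequest:
    # Polygon vertices (lat, lon) for the user-indicated second geographic area.
    area: List[Tuple[float, float]]
    # Criminal categories selected via the GUI, e.g. ["burglary", "package theft"].
    categories: List[str]
    # Optional period of time (days) and delivery frequency from further inputs.
    period_days: Optional[int] = None
    frequency: Optional[str] = None

def build_request(area, categories, period_days=None, frequency=None):
    """Validate the GUI inputs and assemble one request for the network device."""
    if len(area) < 3:
        raise ValueError("a geographic area needs at least three vertices")
    if not categories:
        raise ValueError("select at least one criminal category")
    return CrimeStatsRequest(list(area), list(categories), period_days, frequency)
```

A GUI layer would populate `area` from the polygon the user draws on the map and `categories` from the selected category controls before handing the request to the transport layer.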
In a seventh aspect, a method is implemented by a client device that includes a display, a communication module, and a processor, the method comprising: causing, by the processor, a graphical user interface (GUI) to be displayed on the display, the GUI for requesting criminal statistics; receiving, by the processor, a first input indicating a location; based on the receiving of the first input, causing, by the processor, a map of a first geographic area that is associated with the location to be displayed on the GUI; receiving, by the processor, a second input indicating a second geographic area, the second geographic area including a portion of the first geographic area; transmitting, by the processor and using the communication module, a criminal statistics request to a network device, the criminal statistics request indicating at least the second geographic area; based on the transmitting of the criminal statistics request, receiving, by the processor and using the communication module, and from the network device, first criminal data representing the criminal statistics, the criminal statistics being based on second criminal data associated with the second geographic area; and causing, by the processor, at least a portion of the criminal statistics to be displayed on the display.
In an embodiment of the seventh aspect, the method further comprises: based on the receiving of the second input, causing, by the processor, an indication of the second geographic area to be displayed on the map.
In another embodiment of the seventh aspect, the method further comprises: receiving, using the processor, a third input indicating a period of time associated with the criminal statistics, the criminal statistics request further indicates the period of time, and the criminal statistics are associated with criminal reports that correspond to the period of time.
In another embodiment of the seventh aspect, the method further comprises: receiving, using the processor, a third input indicating one or more criminal categories associated with the criminal statistics, the criminal statistics request further indicates the one or more criminal categories, and the at least the portion of the criminal statistics are further displayed on the display based on the one or more criminal categories.
In another embodiment of the seventh aspect, the method further comprises: receiving, using the processor, a third input indicating a frequency for receiving the criminal statistics, the criminal statistics request further indicates the frequency.
In another embodiment of the seventh aspect, the method further comprises: causing, by the processor, one or more indicators to be displayed on the display, the one or more indicators illustrating locations of criminal reports that are associated with the at least the portion of the criminal statistics.
In another embodiment of the seventh aspect, the method further comprises: receiving, using the processor, a third input indicating a selection of a criminal report associated with the at least the portion of the criminal statistics; and based on the receiving of the third input, causing, by the processor, information associated with the criminal report to be displayed on the display.
In another embodiment of the seventh aspect, the criminal statistics are further based on information describing image data captured by one or more audio/video (A/V) recording and communication devices located within the second geographic area.
In another embodiment of the seventh aspect, the method further comprises: receiving, using the processor, a third input indicating a selection of an icon associated with the image data; and based on the receiving of the third input, causing, by the processor, the image data to be displayed on the display.
In another embodiment of the seventh aspect, the at least the portion of the criminal statistics is a first portion of the criminal statistics, and the method further comprises: receiving, by the processor, a third input to view a second portion of the criminal statistics; and causing, by the processor, the second portion of the criminal statistics to be displayed on the display.
In another embodiment of the seventh aspect, the causing the at least the portion of the criminal statistics to be displayed on the display comprises causing, by the processor, the at least the portion of the criminal statistics to be displayed on the map of the first geographic area.
In another embodiment of the seventh aspect, the criminal statistics request is a first criminal statistics request, and the method further comprises: receiving, by the processor and using the communication module, a notification from the network device, the notification indicating that the criminal statistics are available; receiving, by the processor, a third input associated with viewing the criminal statistics; and transmitting, by the processor and using the communication module, a second criminal statistics request to view the criminal statistics to the network device, the receiving of the first criminal data representing the criminal statistics is based on the transmitting of the second criminal statistics request to view the criminal statistics.
In an eighth aspect, a computer program product is provided, the computer program product embodied in code executable by a processor, which when executed causes the processor to perform operations comprising: causing a graphical user interface (GUI) to be displayed on a display, the GUI for requesting criminal statistics; receiving a first input indicating a location; based on the receiving of the first input, causing a map of a geographic area that is associated with the location to be displayed on the GUI; receiving a second input indicating one or more criminal categories; transmitting a criminal statistics request to a network device, the criminal statistics request indicating at least a portion of the geographic area and the one or more criminal categories; and based on the transmitting of the criminal statistics request, receiving, from the network device, first criminal data representing the criminal statistics, the criminal statistics being based on: second criminal data representing one or more criminal reports associated with the one or more criminal categories; and information describing image data captured by one or more audio/video (A/V) recording and communication devices located within the at least the portion of the geographic area.
In an embodiment of the eighth aspect, the operations further comprising: causing at least a portion of the criminal statistics to be displayed on the display.
In another embodiment of the eighth aspect, the operations further comprising: receiving a third input indicating a second geographic area, the second geographic area including a portion of the geographic area; and causing an indication of the second geographic area to be displayed on the map, the at least the portion of the geographic area indicated by the criminal statistics request is the second geographic area.
In another embodiment of the eighth aspect, the operations further comprising: causing an indicator associated with a criminal report of the one or more criminal reports to be displayed on the map, the indicator is displayed at a location on the map that corresponds to a location associated with the criminal report.
In another embodiment of the eighth aspect, the operations further comprising: causing at least an indicator associated with a criminal category of the one or more criminal categories to be displayed on the display; and causing a value to be displayed on the display, the value corresponding to a number of criminal reports that correspond to the criminal category.
In another embodiment of the eighth aspect, the operations further comprising: receiving a third input indicating a period of time associated with the criminal statistics, the criminal statistics request further indicates the period of time, and the one or more criminal reports and the information are associated with the period of time.
In another embodiment of the eighth aspect, the operations further comprising: receiving a third input indicating a frequency for receiving the criminal statistics, the criminal statistics request further indicates the frequency.
In another embodiment of the eighth aspect, the operations further comprising: causing one or more first indicators to be displayed on the display, the one or more first indicators depicting locations of at least a portion of the one or more criminal reports; and causing a second indicator to be displayed on the display, the second indicator depicting a location associated with the image data captured by an A/V recording and communication device of the one or more A/V recording and communication devices.
In another embodiment of the eighth aspect, the operations further comprising: receiving a third input indicating a selection of a criminal report of the one or more criminal reports; and based on the receiving of the third input, causing information associated with the criminal report to be displayed on the display.
In another embodiment of the eighth aspect, the operations further comprising: receiving a third input indicating a selection of an icon associated with the image data; and based on the receiving of the third input, causing the image data to be displayed on the display.
In another embodiment of the eighth aspect, the operations further comprising: causing a first portion of the criminal statistics to be displayed on the display; receiving a third input to view a second portion of the criminal statistics; and causing the second portion of the criminal statistics to be displayed on the display.
In another embodiment of the eighth aspect, the criminal statistics request is a first criminal statistics request, and the operations further comprise: receiving a notification from the network device, the notification indicating that the criminal statistics are available; receiving a third input associated with viewing the criminal statistics; and transmitting, to the network device, a second criminal statistics request to view the criminal statistics, the receiving of the first criminal data representing the criminal statistics is based on the transmitting of the second criminal statistics request to view the criminal statistics.
In a ninth aspect, a method comprises: receiving, from a third-party server, first criminal data representing first criminal statistics from within a first geographic area; storing the first criminal data in one or more databases; receiving, from an audio/video (A/V) device, image data generated by the A/V device; receiving, from a first client device associated with the A/V device, consent data representing consent for sharing the image data, the consent including at least information describing a suspicious activity depicted by the image data; storing the image data in the one or more databases; receiving, from a second client device, request data representing a request for second criminal statistics, the request indicating a second geographic area that includes at least a portion of the first geographic area; after receiving the request data, identifying a portion of the first criminal statistics that occurred within the second geographic area; determining that a location of the A/V device is within the second geographic area; generating the second criminal statistics using at least the portion of the first criminal statistics that occurred within the second geographic area and the information describing the suspicious activity depicted by the image data; and transmitting, to the second client device, second criminal data representing the second criminal statistics.
In an embodiment of the ninth aspect, wherein the request further indicates a period of time associated with the second criminal statistics, and wherein the method further comprises: determining that the portion of the first criminal statistics occurred during the period of time; and determining that the image data was captured by the A/V device during the period of time.
In another embodiment of the ninth aspect, wherein the request further indicates one or more criminal categories, and wherein the method further comprises: determining that the portion of the first criminal statistics are associated with the one or more criminal categories; and determining, using the information, that the suspicious activity depicted by the image data is associated with a criminal category of the one or more criminal categories.
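The ninth-aspect server flow (identify the portion of stored criminal reports inside the requested second geographic area, confirm that a consenting A/V device sits inside the same area, and merge the device's image-data description into the generated statistics) can be sketched as below. This is a minimal, assumption-laden illustration: the ray-casting point-in-polygon test stands in for whatever geospatial index the actual system uses, and all function and field names are hypothetical.

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (lat, lon) inside the polygon of (lat, lon) vertices?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the point's y coordinate can cross the ray.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def generate_statistics(reports, devices, area):
    """Combine third-party criminal reports with consented device observations in `area`."""
    in_area = [r for r in reports if point_in_polygon(r["location"], area)]
    observations = [d["description"] for d in devices
                    if d["consented"] and point_in_polygon(d["location"], area)]
    return {"reports": in_area, "device_observations": observations}
```

A device whose owner has not provided consent data is excluded, matching the ninth aspect's requirement that sharing be gated on received consent.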
In a tenth aspect, a method comprises: storing first data representing first criminal statistics that occurred within a geographic area; storing image data generated by an electronic device; storing information describing the image data; determining that a location of the electronic device is within the geographic area; generating second data representing second criminal statistics, the second criminal statistics including at least a portion of the first criminal statistics and the information describing the image data; and transmitting the second data to a client device.
In an embodiment of the tenth aspect, wherein the geographic area is a first geographic area, and wherein the method further comprises: determining a second geographic area associated with the second criminal statistics, the second geographic area including at least a portion of the first geographic area; and determining that the at least the portion of the first criminal statistics occurred within the second geographic area, wherein determining that the location of the electronic device is within the geographic area comprises determining that the location of the electronic device is within the second geographic area.
In another embodiment of the tenth aspect, the method further comprising receiving, from the client device, third data representing a request for the second criminal statistics, the request indicating the second geographic area.
In another embodiment of the tenth aspect, the method further comprising: associating the second geographic area with user profile data, wherein the user profile data is associated with the client device, wherein determining the second geographic area associated with the second criminal statistics comprises determining that the user profile data is associated with the second geographic area.
In another embodiment of the tenth aspect, the method further comprising: receiving third data representing consent for sharing the image data with a geographic network; and determining, using the third data representing the consent, to include the image data in the second criminal statistics.
In another embodiment of the tenth aspect, the method further comprising: determining that the first criminal statistics are associated with one or more first criminal categories, wherein the second data organizes the at least the portion of the first criminal statistics and the information into one or more second criminal categories.
In another embodiment of the tenth aspect, the method further comprising: receiving, from the client device, third data representing a request for the second criminal statistics, the request indicating one or more criminal categories, wherein the second data organizes the at least the portion of the first criminal statistics and the information into the one or more criminal categories.
In another embodiment of the tenth aspect, the method further comprising: determining at least a category associated with the second criminal statistics; determining that the information indicates the category; and after determining that the information indicates the category, determining to include the information in the second criminal statistics.
In another embodiment of the tenth aspect, the method further comprising: determining a period of time associated with the second criminal statistics; determining that the at least the portion of the first criminal statistics occurred during the period of time; and determining that the electronic device generated the image data during the period of time.
In another embodiment of the tenth aspect, the method further comprising: analyzing the image data; determining that the image data depicts a suspicious activity; and generating the information describing the image data, the information indicating at least the suspicious activity depicted by the image data.
In another embodiment of the tenth aspect, the method further comprising: determining that a given period of time has elapsed since generating third criminal statistics; wherein generating the second criminal statistics occurs after the given period of time has elapsed.
In another embodiment of the tenth aspect, the method further comprising: transmitting, to the client device, third data representing a notification associated with the second criminal statistics; and receiving, from the client device, fourth data representing a request for the second criminal statistics, wherein transmitting the second data to the client device occurs after receiving the fourth data.
In another embodiment of the tenth aspect, the method further comprising: receiving, from the client device, third data representing a request for the image data; and sending the image data to the client device.
In an eleventh aspect, a system comprises: one or more processors; and one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: storing first data representing first criminal statistics that occurred within a geographic area; storing image data generated by an electronic device; receiving, from a first client device, information associated with the image data; generating second data representing second criminal statistics, the second criminal statistics including at least a portion of the first criminal statistics and the information associated with the image data; and transmitting the second data to a second client device.
In an embodiment of the eleventh aspect, the one or more computer-readable media storing further instructions that, when executed by the one or more processors, cause the one or more processors to perform further operations comprising: determining that the electronic device is located within the geographic area; and determining to include the information describing the image data within the second criminal statistics.
In another embodiment of the eleventh aspect, wherein the geographic area is a first geographic area, and wherein the one or more computer-readable media store further instructions that, when executed by the one or more processors, cause the one or more processors to perform further operations comprising: determining a second geographic area associated with the second criminal statistics, the second geographic area including at least a portion of the first geographic area; determining that the at least the portion of the first criminal statistics occurred within the second geographic area; and determining that the electronic device is located within the second geographic area.
In another embodiment of the eleventh aspect, the one or more computer-readable media storing further instructions that, when executed by the one or more processors, cause the one or more processors to perform further operations comprising: determining a period of time associated with the second criminal statistics; determining that the at least the portion of the first criminal statistics occurred during the period of time; and determining that the electronic device generated the image data during the period of time.
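The elapsed-period behavior shared by the tenth and eleventh aspects (and by claims 1, 10, and 12 below) can be sketched as a guard around regeneration plus a notification hook. `maybe_regenerate`, `generate`, and `notify` are illustrative stand-ins, not names from this application:

```python
from datetime import datetime, timedelta

def maybe_regenerate(last_generated, period, now, generate, notify):
    """Regenerate statistics only once the configured period has elapsed
    since the previous generation, then notify the user device that fresh
    statistics are available. Returns the new statistics, or None."""
    if now - last_generated < period:
        return None            # period has not elapsed; keep prior statistics
    stats = generate()         # e.g. rebuild category counts for the area
    notify(stats)              # e.g. push a notification to the user device
    return stats
```

On this model, the client would respond to the notification with a follow-up request to view the statistics, matching the two-request pattern recited in the embodiments above.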

Claims (19)

What is claimed is:
1. A method comprising:
receiving location data associated with a first audio/video (A/V) device, the location data representing at least a first location;
receiving, from a user device associated with the first A/V device, an indication of a first geographic area that includes the first location;
sending, to the user device, first data representing first criminal statistics associated with the first geographic area;
receiving, from one or more computing devices, second data representing second criminal statistics associated with a second geographic area;
determining that a period of time has elapsed since the sending of the first data to the user device;
based at least in part on the period of time elapsing, determining that a portion of the second criminal statistics is associated with the first geographic area, the portion of the second criminal statistics representing at least:
a criminal category; and
a first number of incidents, associated with the criminal category, that occurred within the first geographic area during the period of time;
and
sending, to the user device, third data representing at least the portion of the second criminal statistics.
2. A method comprising:
storing first data representing first criminal statistics;
receiving location data associated with an electronic device;
storing image data generated by the electronic device;
storing information describing the image data;
determining, based at least in part on the first data, a first number of incidents, associated with a criminal category, that have occurred within a geographic area;
determining, based at least in part on the location data, that a location of the electronic device is within the geographic area;
determining, based at least in part on the information, that the image data is associated with the criminal category;
determining, based at least in part on the first number of incidents and the image data being associated with the criminal category, a second number of incidents, associated with the criminal category, that have occurred within the geographic area;
generating second criminal statistics that include at least the second number of incidents; and
sending, to a user device, second data representing the second criminal statistics.
3. The method of claim 2, further comprising receiving, from the user device, a request for the second criminal statistics, the request indicating the geographic area.
4. The method of claim 2, further comprising:
associating the geographic area with user profile data, wherein the user profile data is associated with the user device; and
determining the geographic area based at least in part on the user profile data.
5. The method of claim 2, further comprising:
receiving consent for sharing the image data; and
determining, using the consent, to include the information describing the image data in the second criminal statistics.
6. The method of claim 2, further comprising:
determining that the first criminal statistics are associated with the criminal category,
wherein the second criminal statistics are also associated with the criminal category.
7. The method of claim 2, further comprising:
receiving, from the user device, a request for the second criminal statistics, the request indicating the criminal category,
wherein the second criminal statistics are associated with the criminal category.
8. The method of claim 2, further comprising:
determining a period of time associated with the second criminal statistics;
determining that the first number of incidents occurred during the period of time; and
determining that the electronic device generated the image data during the period of time.
9. The method of claim 2, further comprising:
analyzing the image data;
determining that the image data represents a suspicious activity; and
generating the information describing the image data, the information indicating at least the suspicious activity represented by the image data.
10. The method of claim 2, further comprising:
determining that a given period of time has elapsed since generating third criminal statistics,
wherein the generating of the second criminal statistics is based at least in part on the given period of time elapsing.
11. The method of claim 2, further comprising:
sending, to the user device, a notification associated with the second criminal statistics; and
receiving, from the user device, a request for the second criminal statistics,
wherein the sending of the second data to the user device is based at least in part on the receiving of the request.
12. A system comprising:
one or more processors; and
one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
receiving location data associated with a first audio/video (A/V) device, the location data representing at least a first location;
receiving, from a user device associated with the first A/V device, an indication of a geographic area that includes the first location;
generating first criminal statistics associated with the geographic area;
sending, to the user device, first data representing the first criminal statistics;
storing second data representing second criminal statistics;
determining that a period of time has elapsed since the generating of the first criminal statistics;
determining, based at least in part on the second data, a portion of the second criminal statistics that are associated with the geographic area, the portion of the second criminal statistics representing a first number of incidents, associated with a criminal category, that have occurred within the geographic area during the period of time;
and
sending, to the user device, third data representing the portion of the second criminal statistics.
13. The system as recited in claim 12, the one or more computer-readable media storing further instructions that, when executed by the one or more processors, cause the one or more processors to perform further operations comprising: receiving, from the user device, a request for the second criminal statistics.
14. The method as recited in claim 1, further comprising:
receiving image data generated by a second A/V device;
determining that a second location associated with the second A/V device is within the first geographic area; and
sending the image data to the user device.
15. The method as recited in claim 1, further comprising:
receiving image data generated by a second A/V device;
determining that the image data is associated with the criminal category;
determining, based at least in part on the first number of incidents and the image data being associated with the criminal category, a second number of incidents, associated with the criminal category, that occurred within the first geographic area during the period of time; and
generating the third data, the third data representing the second number of incidents.
16. The method as recited in claim 1, further comprising:
storing user profile data, the user profile data associated with the location data; and
based at least in part on the receiving of the indication, associating the first geographic area with the user profile data.
17. The method as recited in claim 1, further comprising:
sending, to the user device, a notification associated with the portion of the second criminal statistics; and
receiving, from the user device, a request for the portion of the second criminal statistics,
wherein the sending of the third data is based at least in part on the receiving of the request.
18. The system as recited in claim 12, the one or more computer-readable media storing further instructions that, when executed by the one or more processors, cause the one or more processors to perform further operations comprising:
receiving image data generated by a second A/V device;
determining that a second location associated with the second A/V device is within the geographic area; and
sending the image data to the user device.
19. The system as recited in claim 12, the one or more computer-readable media storing further instructions that, when executed by the one or more processors, cause the one or more processors to perform further operations comprising:
receiving image data generated by a second A/V device;
determining that the image data is associated with the criminal category;
determining, based at least in part on the first number of incidents and the image data being associated with the criminal category, a second number of incidents, associated with the criminal category, that occurred within the geographic area during the period of time; and
generating the third data, the third data representing the second number of incidents.
US16/239,036 | US11243959B1 (en) | Priority date 2018-01-04 | Filed 2019-01-03 | Generating statistics using electronic device data | Active, expires 2039-06-04

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US16/239,036, US11243959B1 (en) | 2018-01-04 | 2019-01-03 | Generating statistics using electronic device data

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US201862613466P | 2018-01-04 | 2018-01-04 |
US16/239,036, US11243959B1 (en) | 2018-01-04 | 2019-01-03 | Generating statistics using electronic device data

Publications (1)

Publication Number | Publication Date
US11243959B1 (en) | 2022-02-08

Family

ID=80215775

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
US16/239,036, US11243959B1 (en), Active, expires 2039-06-04 | | 2018-01-04 | 2019-01-03

Country Status (1)

Country | Link
US (1) | US11243959B1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20230070108A1* | 2021-09-09 | 2023-03-09 | Selex Es Inc. | Systems And Methods For Electronic Signature Tracking And Analysis
US20230144497A1* | 2020-03-30 | 2023-05-11 | Signify Holding B.V. | A system for monitoring a space by a portable sensor device and a method thereof
US20230385405A1* | 2022-05-27 | 2023-11-30 | The Boeing Company | System, method, and program for analyzing vehicle system logs
US12060037B2* | 2022-12-19 | 2024-08-13 | Motorola Solutions, Inc. | Method and system for creation and application of parked vehicle security rules
US12347315B2 | 2022-01-24 | 2025-07-01 | Leonardo Us Cyber And Security Solutions Llc | Systems and methods for parking management
US20250310487A1* | 2024-03-28 | 2025-10-02 | Adeia Guides Inc. | Systems and methods for enhanced video doorbell experiences

Patent Citations (51)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6405213B1 (en)* | 1997-05-27 | 2002-06-11 | Hoyt M. Layson | System to correlate crime incidents with a subject's location using crime incident data and a subject location recording device
US7193644B2 (en) | 2002-10-15 | 2007-03-20 | Revolutionary Concepts, Inc. | Automated audio video messaging and answering system
US8139098B2 (en) | 2002-10-15 | 2012-03-20 | Revolutionary Concepts, Inc. | Video communication method for receiving person at entrance
US8144183B2 (en) | 2002-10-15 | 2012-03-27 | Revolutionary Concepts, Inc. | Two-way audio-video communication method for receiving person at entrance
US8154581B2 (en) | 2002-10-15 | 2012-04-10 | Revolutionary Concepts, Inc. | Audio-video communication system for receiving person at entrance
US8549028B1 (en)* | 2008-01-24 | 2013-10-01 | Case Global, Inc. | Incident tracking systems and methods
US20140266669A1 (en)* | 2013-03-14 | 2014-09-18 | Nest Labs, Inc. | Devices, methods, and associated information processing for security in a smart-sensored home
US20140368601A1 (en)* | 2013-05-04 | 2014-12-18 | Christopher deCharms | Mobile security technology
US9113052B1 (en) | 2013-07-26 | 2015-08-18 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9165444B2 (en) | 2013-07-26 | 2015-10-20 | SkyBell Technologies, Inc. | Light socket cameras
US8872915B1 (en) | 2013-07-26 | 2014-10-28 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US8823795B1 (en) | 2013-07-26 | 2014-09-02 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US8937659B1 (en) | 2013-07-26 | 2015-01-20 | SkyBell Technologies, Inc. | Doorbell communication and electrical methods
US8941736B1 (en) | 2013-07-26 | 2015-01-27 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US8947530B1 (en) | 2013-07-26 | 2015-02-03 | Joseph Frank Scalisi | Smart lock systems and methods
US8953040B1 (en) | 2013-07-26 | 2015-02-10 | SkyBell Technologies, Inc. | Doorbell communication and electrical systems
US9013575B2 (en) | 2013-07-26 | 2015-04-21 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9049352B2 (en) | 2013-07-26 | 2015-06-02 | SkyBell Technologies, Inc. | Pool monitor systems and methods
US9053622B2 (en) | 2013-07-26 | 2015-06-09 | Joseph Frank Scalisi | Light socket cameras
US9060103B2 (en) | 2013-07-26 | 2015-06-16 | SkyBell Technologies, Inc. | Doorbell security and safety
US9058738B1 (en) | 2013-07-26 | 2015-06-16 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9060104B2 (en) | 2013-07-26 | 2015-06-16 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9065987B2 (en) | 2013-07-26 | 2015-06-23 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9094584B2 (en) | 2013-07-26 | 2015-07-28 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US8780201B1 (en) | 2013-07-26 | 2014-07-15 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9113051B1 (en) | 2013-07-26 | 2015-08-18 | SkyBell Technologies, Inc. | Power outlet cameras
US9118819B1 (en) | 2013-07-26 | 2015-08-25 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9142214B2 (en) | 2013-07-26 | 2015-09-22 | SkyBell Technologies, Inc. | Light socket cameras
US9160987B1 (en) | 2013-07-26 | 2015-10-13 | SkyBell Technologies, Inc. | Doorbell chime systems and methods
US8842180B1 (en) | 2013-07-26 | 2014-09-23 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9736284B2 (en) | 2013-07-26 | 2017-08-15 | SkyBell Technologies, Inc. | Doorbell communication and electrical systems
US9342936B2 (en) | 2013-07-26 | 2016-05-17 | SkyBell Technologies, Inc. | Smart lock systems and methods
US9247219B2 (en) | 2013-07-26 | 2016-01-26 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9237318B2 (en) | 2013-07-26 | 2016-01-12 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9179107B1 (en) | 2013-07-26 | 2015-11-03 | SkyBell Technologies, Inc. | Doorbell chime systems and methods
US9179108B1 (en) | 2013-07-26 | 2015-11-03 | SkyBell Technologies, Inc. | Doorbell chime systems and methods
US9196133B2 (en) | 2013-07-26 | 2015-11-24 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9508239B1 (en) | 2013-12-06 | 2016-11-29 | SkyBell Technologies, Inc. | Doorbell package detection systems and methods
US9786133B2 (en) | 2013-12-06 | 2017-10-10 | SkyBell Technologies, Inc. | Doorbell chime systems and methods
US9179109B1 (en) | 2013-12-06 | 2015-11-03 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9172921B1 (en) | 2013-12-06 | 2015-10-27 | SkyBell Technologies, Inc. | Doorbell antenna
US9197867B1 (en) | 2013-12-06 | 2015-11-24 | SkyBell Technologies, Inc. | Identity verification using a social network
US9172922B1 (en) | 2013-12-06 | 2015-10-27 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9743049B2 (en) | 2013-12-06 | 2017-08-22 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9799183B2 (en) | 2013-12-06 | 2017-10-24 | SkyBell Technologies, Inc. | Doorbell package detection systems and methods
US9230424B1 (en)* | 2013-12-06 | 2016-01-05 | SkyBell Technologies, Inc. | Doorbell communities
US9253455B1 (en) | 2014-06-25 | 2016-02-02 | SkyBell Technologies, Inc. | Doorbell communication systems and methods
US9769435B2 (en) | 2014-08-11 | 2017-09-19 | SkyBell Technologies, Inc. | Monitoring systems and methods
US9172920B1 (en) | 2014-09-01 | 2015-10-27 | SkyBell Technologies, Inc. | Doorbell diagnostics
US20170289450A1 (en)* | 2016-02-26 | 2017-10-05 | BOT Home Automation, Inc. | Powering Up Cameras Based on Shared Video Footage from Audio/Video Recording and Communication Devices
US20180184244A1 (en)* | 2016-12-22 | 2018-06-28 | Motorola Solutions, Inc | Device, method, and system for maintaining geofences associated with criminal organizations

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20230144497A1 (en)* | 2020-03-30 | 2023-05-11 | Signify Holding B.V. | A system for monitoring a space by a portable sensor device and a method thereof
US20230070108A1 (en)* | 2021-09-09 | 2023-03-09 | Selex Es Inc. | Systems and methods for electronic signature tracking and analysis
US12236780B2 (en)* | 2021-09-09 | 2025-02-25 | Leonardo Us Cyber And Security Solutions, Llc | Systems and methods for electronic signature tracking and analysis
US12347315B2 (en) | 2022-01-24 | 2025-07-01 | Leonardo Us Cyber And Security Solutions Llc | Systems and methods for parking management
US20230385405A1 (en)* | 2022-05-27 | 2023-11-30 | The Boeing Company | System, method, and program for analyzing vehicle system logs
US12135782B2 (en)* | 2022-05-27 | 2024-11-05 | The Boeing Company | System, method, and program for analyzing vehicle system logs
US12060037B2 (en)* | 2022-12-19 | 2024-08-13 | Motorola Solutions, Inc. | Method and system for creation and application of parked vehicle security rules
US20250310487A1 (en)* | 2024-03-28 | 2025-10-02 | Adeia Guides Inc. | Systems and methods for enhanced video doorbell experiences

Similar Documents

Publication | Title
US11158067B1 (en) | Neighborhood alert mode for triggering multi-device recording, multi-camera locating, and multi-camera event stitching for audio/video recording and communication devices
US11399157B2 (en) | Augmenting and sharing data from audio/video recording and communication devices
US11532219B2 (en) | Parcel theft deterrence for A/V recording and communication devices
US10769914B2 (en) | Informative image data generation using audio/video recording and communication devices
US11232685B1 (en) | Security system with dual-mode event video and still image recording
US12096156B2 (en) | Customizable intrusion zones associated with security systems
US11196966B2 (en) | Identifying and locating objects by associating video data of the objects with signals identifying wireless devices belonging to the objects
US10475311B2 (en) | Dynamic assessment using an audio/video recording and communication device
US10885396B2 (en) | Generating composite images using audio/video recording and communication devices
US20180338120A1 (en) | Intelligent event summary, notifications, and video presentation for audio/video recording and communication devices
US10891839B2 (en) | Customizable intrusion zones associated with security systems
US20180233010A1 (en) | Neighborhood alert mode for triggering multi-device recording, multi-camera motion tracking, and multi-camera event stitching for audio/video recording and communication devices
US11393108B1 (en) | Neighborhood alert mode for triggering multi-device recording, multi-camera locating, and multi-camera event stitching for audio/video recording and communication devices
US11243959B1 (en) | Generating statistics using electronic device data
US20180247504A1 (en) | Identification of suspicious persons using audio/video recording and communication devices
US11195408B1 (en) | Sending signals for help during an emergency event
US11736665B1 (en) | Custom and automated audio prompts for devices
US11349707B1 (en) | Implementing security system devices as network nodes
US11659144B1 (en) | Security video data processing systems and methods
US10713928B1 (en) | Arming security systems based on communications among a network of security systems
US10943442B1 (en) | Customized notifications based on device characteristics
US12212895B1 (en) | Using motion sensors for direction detection
US12063458B1 (en) | Synchronizing security systems in a geographic network
WO2018187451A1 (en) | Augmenting and sharing data from audio/video recording and communication devices
US12380784B1 (en) | Sharing video footage having audio/video recording and communication device model identifiers

Legal Events

Date | Code | Title | Description

FEPP | Fee payment procedure
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF | Information on status: patent grant
Free format text: PATENTED CASE

MAFP | Maintenance fee payment
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 4

