The present application claims priority to Chinese Patent Application No. 202110155795.3, entitled "A Cross-Device Authentication Method and Electronic Device", filed with the Chinese Patent Office on February 4, 2021, the entire contents of which are incorporated herein by reference.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. In the description of the embodiments of the present application, the terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment", "some embodiments", or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", or the like in various places throughout this specification do not necessarily all refer to the same embodiment, but rather mean "in one or more but not all embodiments", unless specifically stated otherwise. The terms "comprising", "including", "having", and variations thereof mean "including but not limited to", unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless otherwise noted. The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
Before technical solutions of embodiments of the present application are introduced, some terms in the present application are explained to facilitate understanding by those skilled in the art.
Some concepts related to embodiments of the present application are presented below:
(1) A secure environment of an authentication device may include: a hardware secure element (inSE) level, a trusted execution environment (TEE) level, a white-box level, and a key-segment level.
In this embodiment of the present application, the hardware secure element may refer to an independent security unit built into the main chip, which provides functions such as secure storage of private information and secure execution of important programs. Protecting the root key at the inSE level provides a higher security level and prevents the hardware from being tampered with.
In the embodiments of the application, the TEE refers to a trusted execution environment: a hardware-isolated secure area of the main processor that provides functions such as confidentiality and integrity protection of code and data and secure access to external devices. Protecting the root key at the TEE level provides a higher security level and reaches a hardware security level.
(2) A distributed storage system stores data in a distributed manner on a plurality of independent devices. A traditional network storage system uses a centralized storage server to store all data; the storage server becomes a bottleneck of system performance as well as a focal point of reliability and security concerns, and cannot meet the requirements of large-scale storage applications. A distributed network storage system adopts an expandable architecture: it uses a plurality of storage servers to share the storage load and a location server to locate stored information, which not only improves the reliability, availability, and access efficiency of the system but also makes the system easy to expand.
(3) The authentication information may include user secret data, biometric data, and the like. The user secret data may include a screen-lock password of the user, a protection password of the user, and the like. The biometric data may include one or more of: a physical biometric, a behavioral biometric, and a soft biometric. Physical biometrics may include: face, fingerprint, iris, retina, deoxyribonucleic acid (DNA), skin, hand, and vein. Behavioral biometrics may include: voiceprint, signature, and gait. Soft biometrics may include: gender, age, height, weight, and the like.
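The categories of authentication information above can be represented as a simple lookup structure. The following is a minimal sketch; the dictionary keys and helper name are illustrative assumptions, not defined by the application:

```python
# Illustrative grouping of the authentication information described above.
# All names here are hypothetical, for exposition only.
AUTHENTICATION_INFO = {
    "user_secret_data": ["screen_lock_password", "protection_password"],
    "physical_biometric": ["face", "fingerprint", "iris", "retina",
                           "dna", "skin", "hand", "vein"],
    "behavioral_biometric": ["voiceprint", "signature", "gait"],
    "soft_biometric": ["gender", "age", "height", "weight"],
}

def is_biometric(item: str) -> bool:
    """Return True if the item falls under any biometric category."""
    return any(item in values
               for category, values in AUTHENTICATION_INFO.items()
               if category.endswith("biometric"))
```

For example, `is_biometric("gait")` is true, while a screen-lock password falls under user secret data rather than biometric data.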
(4) A service is a transaction process performed by a device to implement a function or provide a service. Illustratively, the service may be an unlocking service, a payment service, a door-opening service, an artificial intelligence (AI) computing service, various application services, a distribution service, and the like.
Currently, identity authentication methods of electronic devices do not support cross-device authentication. For example, a user has a PC and a mobile phone, and the user can connect the PC and the mobile phone in a wireless or wired manner so that they cooperate with each other, thereby realizing cooperative work of the PC and the mobile phone. When the PC and the mobile phone work together, the display interface of the PC also includes the interface of the mobile phone. Assume that the user does not operate the mobile phone for a long time, so the mobile phone locks its screen, as shown in fig. 1A. To unlock the screen, the user has to operate the mobile phone to perform face unlocking. For example, as shown in fig. 1B, the user has to face the mobile phone so that the front camera of the mobile phone can capture the user's face image. After the face image is captured, the mobile phone authenticates the captured face image; if the authentication succeeds, the mobile phone unlocks the screen. Illustratively, when authentication on the mobile phone succeeds, the mobile phone displays a main menu interface, and the mobile phone interface within the display interface of the PC synchronously displays the main menu interface as well.
Therefore, at present, when a user accesses data of another electronic device (such as a mobile phone) on an electronic device (such as a PC) and identity authentication is required, the user has to perform authentication information collection and authentication operations on the accessed electronic device (i.e., the mobile phone), which is inconvenient for the user.
To solve the above problem, an embodiment of the present application provides a cross-device authentication method. When a user accesses the interface and data of a second electronic device (e.g., a mobile phone) on a first electronic device (e.g., a PC), and the second electronic device needs to perform identity authentication on the operating user, the method can complete collection of the user's authentication information on the first electronic device that the user is currently operating, and then send the collected authentication information to the second electronic device, which authenticates the authentication information to generate an authentication result.
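The flow above (collect on the first device, authenticate on the second) can be sketched as follows. All class and method names are hypothetical, and the byte-equality check merely stands in for real biometric matching performed inside the second device's secure environment over an authenticated transport:

```python
from dataclasses import dataclass

@dataclass
class AuthResult:
    success: bool
    reason: str = ""

class SecondElectronicDevice:
    """e.g., the mobile phone: holds the enrolled reference template."""
    def __init__(self, enrolled_template: bytes):
        self._template = enrolled_template

    def authenticate(self, collected: bytes) -> AuthResult:
        # Placeholder comparison; a real device would run biometric
        # matching inside its secure environment (TEE/inSE).
        if collected == self._template:
            return AuthResult(True)
        return AuthResult(False, "authentication information mismatch")

class FirstElectronicDevice:
    """e.g., the PC: collects authentication information locally."""
    def collect(self) -> bytes:
        return b"face-sample"  # stands in for a camera capture

    def request_authentication(self, peer: SecondElectronicDevice) -> AuthResult:
        sample = self.collect()           # collection on the first device
        return peer.authenticate(sample)  # authentication on the second device

phone = SecondElectronicDevice(enrolled_template=b"face-sample")
pc = FirstElectronicDevice()
result = pc.request_authentication(phone)
```

The key design point is that collection and authentication are split across devices: only the collected sample travels to the second device, and the authentication result is generated there.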
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Please refer to fig. 2, which is a simplified diagram of a system architecture to which the above method can be applied according to an embodiment of the present disclosure. As shown in fig. 2, the system architecture may include at least: electronic device 100 and electronic device 200.
The electronic device 100 and the electronic device 200 may establish a connection in a wired or wireless manner. Based on the established connection, the electronic device 100 and the electronic device 200 may be used together in a cooperative manner. In this embodiment, the wireless communication protocol used when the electronic device 100 and the electronic device 200 establish a connection wirelessly may be a wireless fidelity (Wi-Fi) protocol, a Bluetooth protocol, a ZigBee protocol, a near field communication (NFC) protocol, various cellular network protocols, or the like, and is not particularly limited. Hereinafter, the electronic device 100 is also referred to as a first electronic device, and the electronic device 200 is also referred to as a second electronic device.
In a specific implementation, the electronic device 100 and the electronic device 200 may each be a mobile phone, a tablet computer, a handheld computer, a personal computer (PC), a cellular phone, a personal digital assistant (PDA), a wearable device (e.g., a smart watch), a smart home device (e.g., a television), a car computer, a game console, an augmented reality (AR)/virtual reality (VR) device, or the like; the specific device forms of the electronic device 100 and the electronic device 200 are not particularly limited in this embodiment. In this embodiment, the electronic device 100 and the electronic device 200 may have the same device form; for example, the electronic device 100 and the electronic device 200 may both be mobile phones. The device forms of the electronic device 100 and the electronic device 200 may also be different; for example, as shown in fig. 1, the electronic device 100 is a PC, and the electronic device 200 is a mobile phone.
The electronic device 100 and the electronic device 200 may be touch screen devices or non-touch screen devices. In this embodiment, the electronic device 100 and the electronic device 200 are both terminals that can run an operating system, install applications, and have a display (or display screen). A display screen that includes only a display processing module is not the electronic device 100 or the electronic device 200 described in this embodiment. The operating systems of the electronic device 100 and the electronic device 200 may be an Android system, an iOS system, a Windows system, a macOS system, a Linux system, or the like, which is not limited in this embodiment. The operating systems of the electronic device 100 and the electronic device 200 may be the same or different. As an example, the electronic device 100 and the electronic device 200 may each include a memory, a processor, and a display. The memory may be used to store an operating system, and the processor may be used to run the operating system stored in the memory.
In this embodiment of the application, when the electronic device 100 is connected to the electronic device 200, a user may use an input device (e.g., a mouse, a touch pad, or a touch screen) of the electronic device 100 to operate UI elements displayed on the display screen of the electronic device 100, such as an application window, a freeform widget, a video component, a floating window, a picture-in-picture, a widget, and a UI control, so as to implement multi-screen cooperative use.
In one possible embodiment of the present application, the system architecture may further include a server 300, and the electronic device 100 may establish a connection with the electronic device 200 through the server 300 in a wired or wireless manner.
The electronic device 100 and the electronic device 200 may be any electronic devices having a service processing function in the field; for example, they may be a combination of any two electronic devices having a service processing function, such as a smart speaker, a smart door lock, a smart screen, a mobile phone, a tablet computer, a smart wearable device, a computer, a smart camera, a car machine, a game console, a projector, and a remote controller.
Referring to fig. 3, a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure is shown. The method in the following embodiments may be implemented in the electronic device 100 having this hardware structure.
As shown in fig. 3, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and the like. Optionally, the electronic device 100 may further include a mobile communication module 150, a subscriber identity module (SIM) card interface 195, and the like.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be separate devices or may be integrated into one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate an operation control signal according to an instruction operation code and a timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. Avoiding repeated accesses reduces the waiting time of the processor 110, thereby increasing system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and/or a USB interface, etc.
The charging management module 140 is configured to receive charging input from a charger. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142. The power management module 141 is used to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 may also receive input from the battery 142 to power the electronic device 100.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single communication band or multiple communication bands. Different antennas can also be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in conjunction with a tuning switch.
When the electronic device 100 includes the mobile communication module 150, the mobile communication module 150 may provide a solution for wireless communication, including 2G/3G/4G/5G, applied on the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium- or high-frequency signal. The demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) such as Wi-Fi networks, Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), NFC, infrared (IR) technologies, and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert it into electromagnetic waves through the antenna 2 for radiation.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as music and video in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The electronic device 100 may implement audio functions, such as music playing and recording, via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
The pressure sensor 180A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensor 180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A.
The gyroscope sensor 180B may be used to determine the motion attitude of the electronic device 100. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a Hall sensor; the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip holster. The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The distance sensor 180F is used to measure distance. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, and then automatically turn off the screen to save power. The proximity light sensor 180G may also be used in a holster mode or a pocket mode to automatically unlock and lock the screen. The ambient light sensor 180L is used to sense the ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and the like. The temperature sensor 180J is used to detect temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation applied on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The motor 191 may generate a vibration prompt; the motor 191 may be used for incoming-call vibration prompts as well as touch vibration feedback. The indicator 192 may be an indicator light and may be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like.
When the electronic device 100 includes the SIM card interface 195, the SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
It should be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments, the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Referring to fig. 4, a schematic diagram of a software architecture provided in an embodiment of the present application is shown. As shown in fig. 4, the software architectures of the electronic device 100 and the electronic device 200 may each include an application layer and a framework layer (FWK).
In some embodiments, the application layer may include various applications installed in the electronic device, for example, settings, a calculator, a camera, short messages, a music player, a file manager, a gallery, a browser, a memo, news, a video player, mail, and the like. These applications may be system applications of the electronic device or third-party applications, which is not specifically limited in the embodiments of the present application. For example, the application layer of the electronic device 100 may include various applications installed in the electronic device 100, such as a file manager, a calculator, a music player, and a video player. As another example, the application layer of the electronic device 200 may include a file manager, a gallery, a memo, a video player, mail, and the like.
In some embodiments, the framework layer includes a window management module for implementing windowed display of the interface. Besides the window management module, the framework layer may include a resource management module for managing the collection capability, the authentication capability, and the authentication information on the electronic device 100 and the electronic device 200; optionally, the resource management module may further maintain the authentication method corresponding to each service in the electronic device 100 and the authentication method of each service in the electronic device 200. The framework layer may also include an authentication service module for completing identity authentication according to the authentication information.
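As a rough illustration of the bookkeeping such a resource management module might perform, the sketch below keeps a per-device table of capabilities plus a service-to-authentication-method mapping. The table layout and all names are assumptions for exposition, not prescribed by the application:

```python
# Hypothetical per-device resource table: collection capability and
# authentication capability, as managed by the resource management module.
RESOURCES = {
    "electronic_device_100": {   # e.g., the PC
        "collection_capability": ["camera", "mouse", "keyboard"],
        "authentication_capability": [],
    },
    "electronic_device_200": {   # e.g., the mobile phone
        "collection_capability": ["camera", "fingerprint_sensor", "touch_screen"],
        "authentication_capability": ["face", "fingerprint", "password"],
    },
}

# Illustrative mapping from each service to its authentication method.
SERVICE_AUTH_METHOD = {
    "unlock_service": "face",
    "payment_service": "fingerprint",
}

def can_authenticate(device: str, method: str) -> bool:
    """Check whether a device can authenticate with the given method."""
    return method in RESOURCES[device]["authentication_capability"]
```

In this sketch, the PC can collect a face image (it has a camera) but cannot authenticate it, while the phone holds the authentication capability, which mirrors the cross-device division of labor described above.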
In the embodiments of the present application, after the electronic device 100 and the electronic device 200 establish a connection, based on the above software architecture, the user may operate UI elements of the second electronic device displayed on the electronic device 100 using an input device (such as a mouse, a touch pad, or a touch screen) of the electronic device 100. In addition, the user may have authentication information (e.g., a face or a fingerprint) collected by a collection device (e.g., a camera or a sensor) of the electronic device 100, or the user may input authentication information (e.g., a password or a touch operation) on the electronic device 100 using its input device (e.g., a mouse, a touch pad, or a touch screen). The electronic device 100 may then send the collected authentication information to the electronic device 200, and the authentication service module of the electronic device 200 authenticates the authentication information to generate an authentication result, thereby implementing cross-device authentication.
It should be noted that the software architecture illustrated in this embodiment does not specifically limit the electronic device 100 and the electronic device 200. In other embodiments, the electronic device 100 and/or the electronic device 200 may include more or fewer layers than shown, or more or fewer modules, or a combination of certain modules, or a different arrangement of modules, which is not limited in the embodiments. For example, the software architecture shown above may include other layers in addition to the application layer and the framework layer, such as a kernel layer (not shown in fig. 4). The kernel layer is the layer between hardware and software and may include at least a display driver, a camera driver, an audio driver, a sensor driver, and the like.
Example one
Embodiment one relates to fig. 5 to 7D.
Fig. 5 is a flowchart illustrating a cross-device authentication method according to an embodiment of the present application. The following describes the method in detail by taking the first electronic device as a PC and the second electronic device as a mobile phone as an example. Illustratively, the mobile phone and the PC perform multi-screen collaborative office, the input device of the PC is a mouse, and the PC further has a camera. It should be noted that the cross-device authentication method shown in this embodiment is also applicable to other types of electronic devices.
As shown in fig. 5, the cross-device authentication method provided by the embodiment of the present application may include the following S501-S505.
S501, the PC receives a target operation of a user on a first object in a first window, where the first window is a display window projected from the mobile phone onto the PC.
The target operation is used for triggering the execution of the first service, and the first service is a service of the mobile phone, for example, an unlocking service of the mobile phone. Illustratively, as shown in fig. 7A, a first window of the mobile phone is displayed in the PC interface, and the first window is a lock screen interface 702. Assuming that the user wants to unlock the mobile phone, the user may click the face unlocking control 703 in the lock screen interface by operating the mouse of the PC. That is, the PC may receive a click operation of the user on the face unlocking control 703 in the lock screen interface 702.
Optionally, the PC may also trigger the unlocking of the interface 702 without receiving a target operation of the user; for example, the PC may monitor the user through the camera 701 and execute the subsequent steps after detecting the user's face.
S502, in response to the target operation, the PC acquires a target authentication mode corresponding to the first service.
Specifically, the first service is a service in the second electronic device, such as unlocking or payment.
In one possible manner, the PC sends an authentication request to the mobile phone, where the authentication request includes information of the first service currently being accessed; the mobile phone determines the authentication mode corresponding to the first service, and the PC then obtains that authentication mode from the mobile phone. Illustratively, the mobile phone determines that the target authentication mode of the face unlocking service is face authentication.
Optionally, the mobile phone may further query the resource pool and determine that the PC side has face acquisition capability. The mobile phone thereby determines that the authentication capability corresponding to the face unlocking operation is the face authentication capability on the mobile phone side, and the acquisition capability corresponding to the face unlocking operation is the face acquisition capability on the PC side. The mobile phone may then send the determined face authentication mode, together with information such as the face acquisition capability and face authentication capability associated with it, to the PC, so that the PC schedules the face acquisition capability to perform face acquisition.
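The capability matching described above can be sketched as follows. The resource-pool layout, device names, and the preference for collecting on the device the user is operating are illustrative assumptions, not details taken from the specification:

```python
# Hypothetical resource pool: which device can *collect* a modality and
# which device can *authenticate* it (i.e., holds the template).
RESOURCE_POOL = {
    "PC":    {"collect": {"face"}, "authenticate": set()},
    "phone": {"collect": {"face", "fingerprint"}, "authenticate": {"face", "fingerprint"}},
}

def assign_roles(modality, pool=RESOURCE_POOL, collector_pref="PC"):
    """Prefer collecting on the device the user is operating; authenticate
    on whichever device holds the matching authentication capability."""
    collector = collector_pref if modality in pool[collector_pref]["collect"] else "phone"
    authenticator = next(d for d, caps in pool.items() if modality in caps["authenticate"])
    return collector, authenticator
```

With this sketch, the face unlocking example resolves to collection on the PC and authentication on the mobile phone, matching the division of work described above.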
In another possible manner, the PC and the mobile phone may synchronize resources in advance to obtain a resource pool, where the resource pool includes the authentication modes corresponding to various operations in the mobile phone, so that the PC can query the local resource pool and determine the authentication mode corresponding to the target operation. Optionally, the resource pool may further include templates of various types of authentication information, such as a fingerprint template and a face template.
Specifically, in this step, the PC or the mobile phone may determine the target authentication mode corresponding to the first service in multiple manners:
In a first manner, as shown in fig. 6A, the resource pool in the PC or the mobile phone may include a preset configuration table, where the preset configuration table records the correspondence between operations/services and authentication modes. The PC or the mobile phone can determine the target authentication mode for the first service by querying the local configuration table: for example, if the target operation is face unlocking of the mobile phone screen, the target authentication mode is face authentication; if the first service is a payment service, the target authentication mode is face authentication or fingerprint authentication.
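The first manner, a lookup in the preset configuration table, can be sketched as follows; the table contents and service identifiers are hypothetical, chosen only to mirror the examples above:

```python
# Illustrative preset configuration table mapping services to the
# authentication mode(s) they accept.
PRESET_CONFIG_TABLE = {
    "face_unlock": ["face"],
    "payment": ["face", "fingerprint"],
}

def target_auth_modes(service, table=PRESET_CONFIG_TABLE):
    """Return the authentication mode(s) configured for the given service."""
    if service not in table:
        raise KeyError(f"no authentication mode configured for {service!r}")
    return table[service]
```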
It should be understood that, in this embodiment, in the case that the PC determines the target authentication manner, before performing the above S501, the PC may obtain a preset configuration table from the mobile phone, and then determine the target authentication manner corresponding to the first service by using the configuration table obtained from the mobile phone. Or, the mobile phone may upload a preset configuration table to the server, and the PC acquires the preset configuration table from the server, and then determines a target authentication mode corresponding to the first service by using the configuration table acquired from the server. In one possible case, the preset configuration table may be generated according to the configuration of a user using the mobile phone, and in another possible case, the preset configuration table may also be generated according to the configuration of a service requirement by a developer.
In a second manner, as shown in fig. 6B, the PC or the mobile phone may determine the security risk level corresponding to the operation/service, and then determine the authentication mode corresponding to that security risk level.
It should be understood that in one possible scenario, a developer may predefine a complete set of operations/services (e.g., opening a door, turning on a light, paying, accessing lockers), and then establish a fixed mapping between the operations/services in this set and security risk levels. For example, the security risk level corresponding to the door opening operation is defined as high, and the security risk level corresponding to the light turning-on operation is defined as low. Illustratively, as shown in table 1, a mapping table from operations to security risk levels is established for the predefined set of operations (opening a door, turning on a light, paying, accessing secure files, accessing general files, etc.) in the PC or mobile phone:
| Operations/services | Security risk level |
| --- | --- |
| Door opening | High |
| Turning on the light | Low |
| Payment | High |
| Accessing secure documents | High |
| Accessing general files | Medium |
| … | … |
In another possible scenario, the developer may dynamically determine the security risk level corresponding to the first service based on an analysis policy. For example, the analysis policy may be: determine a correlation coefficient between the first service and the private data of the user; when the correlation coefficient is low, determine that the security risk level corresponding to the target operation is low; when the correlation coefficient is medium, determine that the security risk level is medium; and when the correlation coefficient is high, determine that the security risk level is high. For example, the PC or the mobile phone determines, according to the analysis policy, that the correlation coefficient between the door opening operation shown in fig. 6B and the private data of the user is large, and thus determines that the security risk level of the door opening operation is high. Illustratively, the PC analyzes the degree of correlation between the target operation and the privacy data of the user by using artificial intelligence, and dynamically judges the security risk level of the current service execution action according to the data analysis result, as shown in the following table.
| Operations/services | Data involved | Security risk level |
| --- | --- | --- |
| Door opening | Home data | High |
| Payment | Payment data, user password | High |
| Turning on the light | Home data | Low |
| … | … | … |
In addition, it should be understood that in the second manner, the correspondence between security risk levels and authentication modes needs to be established in advance. A developer may predefine the reliability of different authentication modes and then match security risk levels to authentication modes according to reliability: the higher the security risk level, the higher the required reliability; the lower the security risk level, the lower the required reliability. Since the first service is a service of the mobile phone, the mobile phone may first determine the security risk level corresponding to the operation/service, then determine the authentication mode corresponding to that level, and send the determined target authentication mode for the first service to the PC.
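The reliability-based matching in the second manner can be sketched as follows; the numeric reliability scores and per-level thresholds are invented for illustration and are not values from the specification:

```python
# Assumed reliability score per authentication mode, and the minimum score
# each security risk level requires (illustrative values only).
RELIABILITY = {"password": 1, "fingerprint": 2, "face": 3}
REQUIRED_RELIABILITY = {"low": 1, "medium": 2, "high": 3}

def modes_for_risk_level(level):
    """Return the authentication modes whose reliability meets the level."""
    threshold = REQUIRED_RELIABILITY[level]
    return sorted(m for m, r in RELIABILITY.items() if r >= threshold)
```

Under these assumed scores, a high-risk service such as payment is restricted to the most reliable mode, while a low-risk service accepts any mode.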
S503, the PC collects the authentication information of the user according to the target authentication mode.
In this embodiment, the authentication information is user information that needs to be authenticated by the target authentication method, for example, information such as a fingerprint of a user, a face of the user, or a password input by the user.
For example, as shown in fig. 7A, when the PC receives a click operation of the user on the face unlocking control 703 in the lock screen interface 702, the PC determines that the target authentication mode is face authentication and the authentication information to be collected is a face image. The PC then calls its camera 701 to collect a face image of the user.
S504, the PC sends an authentication request message to the mobile phone, where the authentication request message includes the authentication information and is used to request authentication of the first service.
For example, in the scenario shown in fig. 7A, the PC acquires a face image from the camera, and sends a face authentication request message including the acquired face image to the mobile phone, where the face authentication request message is used to request face authentication for a face unlocking operation.
And S505, the mobile phone receives the authentication request message, acquires the authentication information from the authentication request message, authenticates the target operation by using the authentication information, and generates an authentication result.
Illustratively, the mobile phone receives the face image from the PC, performs face authentication using the face image and a face template stored in the mobile phone, and generates a face authentication result. When the authentication result is that authentication passes, the mobile phone responds by unlocking the screen and displaying the unlocked mobile phone interface, which the PC also displays synchronously. When the authentication result is authentication failure, the mobile phone responds by displaying an unlocking failure interface, which the PC also displays synchronously.
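A minimal sketch of the S505 handling on the mobile phone follows. It treats template matching as an exact comparison purely for illustration (a real biometric match computes a similarity score against a threshold), and the message shape and placeholder template values are assumptions:

```python
# Hypothetical template store on the mobile phone.
STORED_TEMPLATES = {"face": b"owner-face-template"}

def handle_auth_request(request, templates=STORED_TEMPLATES):
    """S505: extract the authentication information from the request,
    match it against the locally stored template, and return a result."""
    template = templates.get(request["modality"])
    passed = template is not None and request["sample"] == template
    return {"service": request["service"], "result": "pass" if passed else "fail"}
```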
In a possible embodiment, the method may further include step S506, in which the mobile phone sends a response message to the PC for the authentication request message, where the response message includes the authentication result. In this embodiment, the mobile phone sends the authentication result to the PC; if the authentication succeeds, the PC may prompt the user in the interface that the authentication succeeded, and if the authentication fails, the PC may prompt the user that the authentication failed.
To describe the above method more systematically, the mobile phone screen unlocking process shown in fig. 7A is described below with reference to fig. 7B. The software architecture of the PC includes a resource management unit and a scheduling unit, and the PC further includes a face acquisition unit. When the PC receives a face unlocking operation of a user, the service layer of the PC generates a face unlocking request. The resource management unit in the PC then determines the acquisition capability related to face authentication; when the resource management unit determines that the face acquisition capability in the PC is available, it instructs the face acquisition unit to perform face acquisition, and the face acquisition unit calls the camera to acquire the user's face. The scheduling unit in the PC then transmits the face image acquired by the face acquisition unit to the mobile phone, and the scheduling unit of the mobile phone schedules the face authentication unit to authenticate the face. Each functional unit in fig. 7B is a service located in the PC or the mobile phone and may be implemented in the internal memory 121 in fig. 2; the functional units may be implemented separately or integrated into one unit, which is not limited herein.
In a possible embodiment, if the PC determines in S502 that the target authentication mode corresponding to the target operation is a combination of at least two different authentication modes, the PC may collect the at least two corresponding kinds of authentication information in S503, and send them to the mobile phone in S504. The mobile phone then authenticates the authentication information collected by the PC by using the templates of the authentication information stored locally on the mobile phone.
As still another example, as shown in the interface 711 of fig. 7C, the interface 711 prompts the user to pay for purchasing a video. When the user decides to purchase, the user may click the immediate purchase control 712; after the PC receives the click operation on the immediate purchase control 712, the PC may determine that the target authentication mode corresponding to the payment operation includes face authentication and voiceprint authentication. As shown in fig. 7D, the PC calls its camera 701 to locally collect the face image corresponding to the face authentication mode, and a display window 721 in the PC displays a preview of the face image collected by the camera. In addition, the user utters the voice command "Xiaoyi please pay" according to the instruction in the display window 721; the PC calls the microphone 722 to locally collect the voiceprint corresponding to the voiceprint authentication mode, and then sends the face image and the voiceprint to the mobile phone. The mobile phone authenticates the face image using its locally stored face template, authenticates the voiceprint using its locally stored voiceprint template, and obtains the authentication result of the payment operation by combining the face authentication result and the voiceprint authentication result.
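The combined decision in the payment example, where every required modality must pass, can be sketched as a simple AND-combination (the dictionary shape is an assumption):

```python
# AND-combine per-modality results: the payment operation passes only if
# every required authentication mode (face, voiceprint, ...) passed.
def combine_results(per_modality_results):
    return bool(per_modality_results) and all(per_modality_results.values())
```

Other fusion policies (e.g., weighted scores) are possible; the text only requires that both individual results feed into one overall result.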
It can be seen from the foregoing embodiments that, in a multi-screen collaborative scenario, a user accesses services of other electronic devices (e.g., a mobile phone) on an electronic device (e.g., a PC), and when identity authentication is required, the user does not need to operate on the other devices, and the acquisition of authentication information can be completed on the electronic device currently operated by the user, so that convenience of authentication operation can be improved.
Example two
The difference between the second embodiment and the first embodiment is that the collecting and authenticating actions can both be performed on the same device. As shown in fig. 8, the cross-device authentication method provided by the embodiment of the present application may include the following S801 to S808.
S801 to S803 are the same as S501 to S503 described above.
S804, the PC sends a request message to the mobile phone, wherein the request message is used for requesting to acquire the template of the authentication information.
In this step, the template of the authentication information requested by the request message may be a secret template (e.g., a lock screen password) or a biometric template (e.g., a fingerprint template, a face template, etc.). Optionally, the request message includes a type of the authentication information to be requested, so that the mobile phone determines the template of the authentication information according to the type of the authentication information.
S805, the mobile phone sends a response message of the request message to the PC. The response message to the request message includes a template of authentication information.
It should be noted that, in one possible case, the PC and the mobile phone establish a secure channel in advance, and the mobile phone can send the response message to the PC through the secure channel. In another possible case, the PC and the mobile phone may negotiate a key in advance; the mobile phone encrypts the template of the authentication information with the negotiated key before transmission, and the PC decrypts the received data with the same key to obtain the template.
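The second case, protecting the template with a pre-negotiated key, can be illustrated with the round trip below. A real implementation would use an authenticated cipher such as AES-GCM; the SHA-256-derived XOR keystream here is only a demonstration that the same negotiated key both encrypts and decrypts:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the negotiated key (demo only)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, template: bytes) -> bytes:
    """XOR the template with the keystream; XOR-ing again decrypts."""
    return bytes(a ^ b for a, b in zip(template, _keystream(key, len(template))))

decrypt = encrypt  # the XOR operation is its own inverse
```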
It should be understood that if the PC and the mobile phone perform resource synchronization and the resource pool in the PC includes the template of the authentication information on the mobile phone, the above steps S804 and S805 may not be performed, that is, S804 and S805 are optional steps and are not necessarily performed steps. Illustratively, the PC acquires a face image, and performs face authentication by using the face image and a face template stored in the PC resource pool to generate an authentication result of the face. And when the authentication result of the face is that the authentication is passed, the mobile phone unlocks the screen and displays the unlocked mobile phone interface, and the PC also synchronously displays the unlocked mobile phone interface.
S806, the PC receives the template of the authentication information, and the PC authenticates the first service by using the authentication information and the template of the authentication information to generate an authentication result.
S807, the PC sends the authentication result to the mobile phone.
Optionally, the method further includes S808: the mobile phone receives the authentication result from the PC and responds to the target operation according to the authentication result. Optionally, after responding, the mobile phone may further send a response message of the authentication request message to the PC, where the response message includes the authentication result. In this embodiment, the mobile phone sends the authentication result to the PC; if the authentication succeeds, the PC may prompt the user in the interface that the authentication succeeded, and if it fails, the PC may prompt the user that the authentication failed.
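The S804-S808 exchange can be sketched end to end as follows; the message shapes, placeholder template values, and the exact-match template comparison are simplifying assumptions for illustration:

```python
# PC side of the second embodiment: fetch the template from the phone
# (S804/S805), authenticate locally (S806), and return only the result
# (S807), which the phone then acts on (S808).
def pc_authenticate(phone_templates, modality, sample):
    template = phone_templates.get(modality)              # S804/S805
    passed = template is not None and sample == template  # S806
    return {"result": "pass" if passed else "fail"}       # S807

def phone_respond(auth_result):
    """S808: the phone responds to the target operation per the result."""
    return "unlock" if auth_result["result"] == "pass" else "show_failure"
```

Note that only the authentication result, never the raw template or sample, travels back to the phone in this direction of the flow.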
Illustratively, as shown in fig. 7A, a first window of the mobile phone is displayed in the PC interface, and the first window is a lock screen interface 702. Assuming that the user wants to unlock the mobile phone, the user may click the face unlocking control 703 in the lock screen interface by operating the mouse of the PC. That is to say, the PC may receive a click operation of the user on the face unlocking control 703 in the lock screen interface 702, which triggers the PC to acquire the authentication mode (face authentication) from the mobile phone and then invoke the camera to collect a face; in addition, the PC acquires the face template from the mobile phone, authenticates the collected face information, and generates an authentication result. For a detailed interface example, reference may be made to the above embodiments, and details are not repeated here.
It can be seen from the foregoing embodiments that, in a multi-screen coordination scenario, when a user accesses a service of another electronic device (e.g., a mobile phone) on an electronic device (e.g., a PC) and the device where the service is located needs to authenticate the user's identity, the user does not need to operate on the accessed device: both the collection and the authentication of the authentication information can be completed on the electronic device currently operated by the user. This improves the convenience of the authentication operation; moreover, if the electronic device currently operated by the user has a secure execution environment and the other electronic device does not, the security of the authentication result can also be improved.
Example three
The third embodiment differs from the two embodiments described above in that it is not limited to a multi-screen collaborative scenario: a user may trigger, on a first electronic device, a service request related to the security of a second electronic device, thereby initiating cross-device authentication. The following describes the cross-device authentication method provided in this embodiment in detail by taking the first electronic device as a smart television and the second electronic device as a mobile phone as an example. Illustratively, the input device of the smart television is a remote controller, and the smart television further has a camera. It should be noted that the cross-device authentication method shown in this embodiment is also applicable to other types of electronic devices.
As shown in fig. 9, the cross-device authentication method provided by the embodiment of the present application may include the following S901 to S906.
S901, the smart television receives target operation acted on a display window of the smart television by a user.
The target operation is used to trigger execution of the first service. The first service is a service in the smart television, but it is associated with the mobile phone; in this embodiment, the first service is associated with sensitive data in the mobile phone.
Illustratively, as shown in fig. 10, a display window 1000 of a video application is displayed in the smart television interface, and the display window includes a buy now control 1001. Assuming that the user wants to pay to watch the full video series, the user can click the buy now control 1001 in the display window of the video application by operating the remote controller. That is, the smart television may receive a click operation of the user on the buy now control 1001 in the display window 1000. After the payment service is executed, the user data in the payment-related APP in the mobile phone changes, and such user data belongs to the sensitive data in the mobile phone; therefore, the payment service in the smart television is associated with the sensitive data in the mobile phone. In the present application, what counts as sensitive data varies with the scenario; for example, the sensitive data may be user data in the device, authentication information of the user, device information, and the like.
S902, responding to the target operation, and the smart television acquires a target authentication mode corresponding to the first service.
Illustratively, the resource pool of the smart television includes the authentication modes corresponding to the respective operations, and the smart television may determine, by querying the resource pool, that the target authentication mode corresponding to the click operation on the buy now control 1001 is face authentication. Optionally, the smart television may further determine, by querying the resource pool, that the face acquisition unit corresponding to the face authentication mode on the smart television is available.
In addition, optionally, the smart television may request the mobile phone to acquire the target authentication method.
And S903, the smart television collects the authentication information of the user according to the target authentication mode.
In this embodiment, the authentication information is user information that needs to be authenticated by the target authentication method, for example, information such as a fingerprint of a user, a face of the user, or a password input by the user.
For example, as shown in fig. 10, when the smart television determines that the target authentication mode is face authentication, the authentication information of the user to be acquired is a face image. The smart television calls a camera 1003 of the smart television to acquire a face image of the user.
S904, the smart television sends an authentication request message including the authentication information to the mobile phone.
Illustratively, in the scene shown in fig. 10, the smart television acquires a face image from the camera and transmits the acquired face image to the mobile phone.
S905, the mobile phone receives the authentication request message, acquires the authentication information from the authentication request message, authenticates the target operation by using the authentication information, and generates an authentication result.
Specific examples can be found in S505 above.
S906, the mobile phone sends a response message of the authentication request message to the smart television, where the response message includes the authentication result.
In this embodiment, the mobile phone sends the authentication result to the smart television, and if the authentication result is that the authentication is successful, the smart television can prompt the user that the authentication is successful in the interface. If the authentication fails, the smart television can prompt the user that the authentication fails in the interface.
And S907, the smart television makes a response corresponding to the target operation according to the authentication result.
Illustratively, if the authentication is successful, the smart television displays that the payment is successful; and if the authentication fails, the smart television displays that the payment fails.
Alternatively, in S904 to S907, the smart television may acquire the template of the authentication information from the mobile phone and perform the authentication on the smart television side; for details, refer to S804 to S808 in the second embodiment, which are not repeated here.
In this embodiment, when the first service triggered by the user on the smart television is associated with the mobile phone, the authentication process of the service needs to be executed by the mobile phone, so that the device security of the mobile phone or the security of sensitive data of the mobile phone can be ensured, and the mobile phone sends the authentication result to the smart television, so that the smart television responds according to the authentication result.
As another example, in a driving scenario, as shown in fig. 11, the in-vehicle terminal and the mobile phone may cooperate to complete cross-device authentication. Specifically, the user utters a voice instruction in the cockpit to activate the music application's VIP; after receiving the voice instruction, the in-vehicle terminal sends a payment authentication request to the mobile phone, and the mobile phone determines that the authentication mode corresponding to the payment operation is face authentication and performs the authentication. Alternatively, after the mobile phone determines that the authentication mode corresponding to the payment operation is face authentication, the in-vehicle terminal acquires the authentication mode (face authentication) and the face template from the mobile phone, and the in-vehicle terminal performs the face authentication and generates the authentication result. That is, the cross-device authentication method shown in the above embodiments is also applicable to cooperative authentication between the in-vehicle terminal and the mobile phone.
Referring to fig. 12, fig. 12 shows a schematic structural diagram of a communication system. The communication system may include a first communication device 1200 and a second communication device 1210, and the first communication device 1200 includes a first transceiving unit 1201 and an acquisition unit 1202; wherein:
the transceiving unit 1201 is configured to receive a target operation acting on a first interface of a first electronic device, where the target operation is used to trigger access to a first service, and the first service is associated with the second electronic device.
The transceiving unit 1201 is further configured to acquire a target authentication mode corresponding to the first service.
The acquisition unit 1202 is configured to acquire authentication information.
Specifically, the communication device 1200 may include at least one acquisition unit 1202, where one acquisition unit 1202 may be used to collect at least one type of authentication information. The embodiment of the present application takes an example in which one acquisition unit acquires one type of authentication information. The authentication information may be a fingerprint, a face, a heart rate, a pulse, a behavior habit, a device connection state, or the like. For example, the face acquisition unit may be used to acquire a face, and may be the camera 193 shown in fig. 3; the gait acquisition unit may be used to acquire gait, and may also be the camera 193 shown in fig. 3; the pulse acquisition unit may be used to acquire a pulse, and may be the pulse sensor 180N shown in fig. 3; the heart rate acquisition unit may be used to acquire a heart rate, and may be the heart rate sensor 180P shown in fig. 3; the touch screen behavior acquisition unit may be used to acquire touch screen behavior, and may be the display screen 194 shown in fig. 3; the acquisition unit of the trusted device may be configured to acquire a connection state and/or a wearing state of a wearable device.
The second communication device 1210 includes a second transceiving unit 1211, an authentication unit 1212 and a decision unit 1213. Wherein:
the second transceiving unit 1211 is configured to receive an authentication request, where the authentication request includes authentication information. Optionally, the second transceiving unit 1211 is further configured to receive a request message from the first electronic device, where the request message is used to request the target authentication mode corresponding to the first service.
The authentication unit 1212 is configured to perform authentication according to the authentication information and generate an authentication result. The authentication unit 1212 is typically an authentication service in a software implementation, integrated in the operating system.
The second communication device 1210 may include at least one authentication unit 1212, where one authentication unit 1212 may be configured to authenticate at least one type of authentication information to obtain an authentication result. The embodiment of the present application takes an example in which one authentication unit authenticates one type of authentication information. For example, the face authentication unit may be configured to authenticate a face to obtain a face authentication result; the gait authentication unit may authenticate gait information to obtain a gait authentication result; the pulse authentication unit may authenticate the collected pulse to obtain a pulse authentication result; the heart rate authentication unit may authenticate the collected heart rate to obtain a heart rate authentication result; the touch screen behavior authentication unit may authenticate the collected touch screen behavior information to obtain a touch screen behavior authentication result; the trusted device authentication unit may authenticate the collected connection state and/or wearing state of the electronic device to obtain a trusted device authentication result.
The decision unit 1213 is configured to determine the target authentication manner corresponding to the first service.
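One way such a decision unit could choose a target authentication manner is to keep, per service, an ordered preference list and select the first manner the device can currently perform. The service names, preference orderings, and function names below are assumptions for illustration, not drawn from the specification:

```python
# Illustrative sketch of the decision logic: pick the target authentication
# manner for a service from an ordered preference list, constrained by the
# manners currently available. All concrete names are assumptions.
from typing import Iterable, Optional

PREFERENCES = {
    "payment": ["face", "pulse", "heart_rate"],    # higher-security service
    "unlock": ["face", "gait", "trusted_device"],  # lower-security service
}


def target_authentication(service: str, available: Iterable[str]) -> Optional[str]:
    """Return the first preferred manner that the device can perform."""
    avail = set(available)
    for manner in PREFERENCES.get(service, []):
        if manner in avail:
            return manner
    # No suitable manner is available for this service.
    return None
```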
Optionally, the first communication apparatus and the second communication apparatus may further include a resource management unit 1220, configured to perform resource synchronization with other devices in the device networking, generate a resource pool, and maintain or manage the resource pool. The resource management unit 1220 includes the resource pool, and a resource in the resource pool may be an authentication factor (or information about an authentication factor), a collection capability of a device, an authentication capability of a device, or the like. The resource management unit 1220 may correspond to the internal memory 121 shown in fig. 3, where the internal memory 121 stores the resource pool.
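The resource pool described above can be sketched as a mapping from each networked device to the capabilities it reports, merged during synchronization. This is a minimal sketch under the assumption that capabilities are reported as sets of strings; the class and method names are illustrative:

```python
# Illustrative resource pool: each device in the networking reports its
# collection/authentication capabilities, which are merged into the pool.
# Class and method names are assumptions, not from the specification.
from typing import Dict, Set


class ResourcePool:
    def __init__(self) -> None:
        # device id -> set of capabilities (authentication factors, etc.)
        self._resources: Dict[str, Set[str]] = {}

    def synchronize(self, device_id: str, capabilities: Set[str]) -> None:
        """Merge a device's reported capabilities into the pool."""
        self._resources.setdefault(device_id, set()).update(capabilities)

    def devices_supporting(self, capability: str) -> Set[str]:
        """Return the devices in the networking that offer a capability."""
        return {d for d, caps in self._resources.items() if capability in caps}
```

A decision unit could then query the pool to find which device in the networking can collect or authenticate a required factor.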
Based on the same concept, fig. 13 illustrates an apparatus 1300 provided in the present application, where the apparatus 1300 may be the first electronic device or the second electronic device. The apparatus 1300 includes at least one processor 1310, a memory 1320, and a transceiver 1330. The processor 1310 is coupled with the memory 1320 and the transceiver 1330. In the embodiments of the present application, the coupling is an indirect coupling or a communication connection between apparatuses, units, or modules, may be in an electrical, mechanical, or another form, and is used for information interaction between the apparatuses, units, or modules. The connection medium between the transceiver 1330, the processor 1310, and the memory 1320 is not limited in the embodiments of the present application. For example, in fig. 13, the memory 1320, the processor 1310, and the transceiver 1330 may be connected by a bus, and the bus may be divided into an address bus, a data bus, a control bus, and the like.
Specifically, the memory 1320 is configured to store program instructions.
The transceiver 1330 is configured to receive or transmit data.
The processor 1310 is configured to call the program instructions stored in the memory 1320, to cause the apparatus 1300 to perform the steps performed by the electronic device 30, or the steps performed by the electronic device 10 or the electronic device 20, in the above-described methods.
In the embodiments of the present application, theprocessor 1310 may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
In the embodiments of the present application, the memory 1320 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, such as a random-access memory (RAM). The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may also be a circuit or any other apparatus capable of implementing a storage function, and is configured to store program instructions and/or data.
It should be understood that the apparatus 1300 may be used to implement the method shown in the embodiments of the present application, and the related features may refer to the above description, which is not repeated herein.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the relevant method steps to implement the cross-device authentication method in the foregoing embodiments.
An embodiment of the present application further provides a computer program product, which, when run on a computer, causes the computer to execute the above relevant steps to implement the cross-device authentication method in the foregoing embodiments.
In addition, an embodiment of the present application further provides an apparatus, which may specifically be a chip, a component, or a module, and may include a processor and a memory connected to each other, where the memory is configured to store computer-executable instructions. When the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip executes the cross-device authentication method in the foregoing method embodiments.
In addition, the electronic device, the computer storage medium, the computer program product, or the chip provided in the embodiments of the present application are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division into modules or units is merely a logical function division, and an actual implementation may use another division manner; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the shown or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above descriptions are merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.