CN113468351B - Intelligent device and image processing method - Google Patents

Intelligent device and image processing method

Info

Publication number
CN113468351B
Authority
CN
China
Prior art keywords
preset
label
user
images
labels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010342171.8A
Other languages
Chinese (zh)
Other versions
CN113468351A (en)
Inventor
孟卫明
高雪松
陈维强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Co Ltd
Priority to CN202010342171.8A
Publication of CN113468351A
Application granted
Publication of CN113468351B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses an intelligent device and an image processing method. The device comprises a memory for storing images and their preset labels, a receiver for receiving a user's search instruction, and a processor for extracting n preset labels from the search instruction and then performing layer-by-layer label matching against the images, in order of the importance of the n preset labels, to determine k target images. The matching comprises: matching the i-th preset label against the (i-1)-th matching result of the (i-1)-th preset label to determine the i-th matching result, and stopping the matching if the number k of corresponding target images is smaller than a first threshold; otherwise, continuing to match the i-th matching result against the (i+1)-th preset label to determine the (i+1)-th matching result, wherein the importance of the i-th preset label is greater than that of the (i+1)-th preset label. A display is used for displaying information of the target images. The image the user wants is thereby found quickly, and image search efficiency is improved.

Description

Intelligent device and image processing method
Technical Field
The application relates to the field of intelligent home, in particular to intelligent equipment and an image processing method.
Background
At present, when a user views an album on an intelligent device, images are generally found by paging through them with a remote controller; when there are very many images, the user has difficulty finding the desired one, resulting in a poor user experience.
In the prior art there are methods for searching images by keyword matching, but the number of results returned may be excessive and may not include what the user wants; the user then still needs to page through them with the remote controller, which is cumbersome.
Therefore, there is a need for an intelligent device and an image processing method that avoid returning too many results, or results that do not meet the user's actual needs, so that the user can find target images quickly.
Disclosure of Invention
The invention provides intelligent equipment and an image processing method, which are used for improving the efficiency of searching images by a user.
The intelligent device comprises a memory for storing images and preset labels of the images; a receiver for receiving a user's search instruction; and a processor for extracting n preset labels from the search instruction, where n is a positive integer, and performing layer-by-layer label matching against the images in the memory, in order of the importance of the n preset labels, to determine k target images, where k is a positive integer. The layer-by-layer label matching comprises: matching the i-th preset label in the search instruction against the (i-1)-th matching result of the (i-1)-th preset label to determine the i-th matching result; stopping the matching if the number k of target images in the i-th matching result is smaller than a first threshold; and otherwise continuing to match the i-th matching result against the (i+1)-th preset label to determine the (i+1)-th matching result, wherein the importance of the i-th preset label is greater than that of the (i+1)-th preset label, and i is a positive integer smaller than n. A display is used for displaying information of the k target images.
Illustratively, the memory of the smart device stores images and the preset tags corresponding to them. The receiver receives a search instruction input by the user, and the processor extracts n preset tags from the instruction, ranked by importance. The device then matches image tags in order of importance: it first searches the images for the most important preset tag (the first preset tag) and determines the k target images containing it, i.e. the first matching result. When k is greater than or equal to the first threshold, it searches the first matching result for images also containing the second preset tag to obtain the second matching result; when the number of target images is still greater than or equal to the first threshold, it searches the second matching result for the third preset tag, and so on, until the number k of target images in the j-th matching result is smaller than the first threshold, at which point matching stops and the information of the target images in the j-th matching result is output on the display. By matching image tags in order of tag importance, an appropriate number of target images is ensured, and the images obtained are the ones the user actually wants to find, avoiding the prior-art problem of keyword matching returning too many or too few target images, for a better user experience.
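The layer-by-layer matching described above can be sketched as follows. This is a minimal illustration, not the patented implementation: each image is assumed to carry a set of preset tags, and the function name, field names, and threshold value are hypothetical.

```python
def match_layer_by_layer(images, ordered_tags, first_threshold):
    """Narrow the candidate set one tag at a time, from the most important
    preset tag to the least, stopping as soon as the number of target
    images drops below first_threshold."""
    candidates = list(images)
    for tag in ordered_tags:  # ordered by decreasing importance
        candidates = [img for img in candidates if tag in img["tags"]]
        if len(candidates) < first_threshold:
            break  # small enough to display: this is the final result
    return candidates

# Hypothetical album: each image is a dict carrying a set of preset tags.
album = [
    {"id": 1, "tags": {"son", "park", "2019"}},
    {"id": 2, "tags": {"son", "park"}},
    {"id": 3, "tags": {"son"}},
    {"id": 4, "tags": {"daughter"}},
]
# "son" keeps 3 images, "park" keeps 2, "2019" keeps 1 (< 2), so matching stops.
print([img["id"] for img in match_layer_by_layer(album, ["son", "park", "2019"], 2)])  # → [1]
```

Note that each layer only ever shrinks the candidate set, so the most important tag is applied first while the full album is still available.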
In one possible design, the preset tags include a person identity tag. The processor is configured to recognize face features in an image in the memory and determine a recognition result. If the similarity between the recognition result and a user's face features in a person database is greater than a second threshold, the person identity tag corresponding to those face features is determined to be the person identity tag of the image.
The intelligent device receives images uploaded by the user and stores them in the memory; the processor then recognizes each image and determines its corresponding preset tags. Face feature recognition is performed on the image and a recognition result is determined. If the similarity between the recognition result and a pre-stored user face feature is greater than the second threshold, the person identity tag of the image is directly determined to be that of the corresponding user. In this scheme, the person identity tag of an image is determined by face recognition, which helps the intelligent device subsequently match against preset tags and improves search efficiency.
In one possible design, the processor is further configured to, when the similarity is smaller than the second threshold but greater than a third threshold, determine the age difference between the person age tag of the image and the person age tag corresponding to the user's face features, and, if the recognition error corresponding to that age difference is not smaller than the error of the similarity, determine the person identity tag corresponding to the user's face features to be the person identity tag of the image, wherein the error of the similarity is the difference between the second threshold and the similarity. By performing face recognition on the image, the person identity tag of the face is determined, helping the intelligent device subsequently match against preset tags and improving search efficiency.
In one possible design, the n preset labels include preset labels corresponding to center word segments and preset labels corresponding to auxiliary word segments, with the former more important than the latter. Presetting these importance relations lays the foundation for the intelligent device's subsequent layer-by-layer matching, improving search efficiency and finding the images the user really wants.
In one possible design, the search instruction is voice information, and the intelligent device further comprises a voice collector for collecting it. The processor is further configured to recognize corresponding text information and intonation information from the voice information, and to determine from them the center word segments and their preset labels, and the auxiliary word segments and their preset labels. This provides the basis on which the intelligent device subsequently performs layer-by-layer matching against the preset labels.
In one possible design, the processor is specifically configured to, when the voice information is determined from the intonation information to be a declarative sentence, take the subject, predicate, and object word segments in the text information as the center word segments and extract their corresponding preset labels, and take the attributive, adverbial, and complement word segments as the auxiliary word segments and extract their corresponding preset labels. This helps determine the preset labels of the voice information and improves matching efficiency.
In one possible design, the receiver is further configured to receive images uploaded by the user, and the processor is further configured to recognize each uploaded image and determine its corresponding preset tags, which include at least one of a person identity tag, a person age tag, a sharpness tag, a scene tag, an address tag, and a time information tag. Recognizing the uploaded images in this way prepares for the intelligent device's subsequent layer-by-layer matching against the preset tags.
In one possible design, the processor is further configured to, when the number of target images in the n-th matching result is greater than or equal to the first threshold, perform deduplication on the target images and/or remove target images whose sharpness does not meet a preset condition. The number of target images obtained thus meets the user's needs, achieving a better display effect and improving the user experience.
In one possible design, the first threshold is determined from the size of the display screen and the display size of the target-image information, achieving a better display effect and improving the user experience.
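One plausible reading of this design, sketched below with assumed names and values: set the first threshold to the number of thumbnails that fit on one screen, so that the matching layer at which the search stops fills the display without paging.

```python
def first_threshold_for(screen_w, screen_h, thumb_w, thumb_h):
    """Assumed rule: the threshold is the thumbnail capacity of one screen."""
    return (screen_w // thumb_w) * (screen_h // thumb_h)

print(first_threshold_for(1920, 1080, 480, 360))  # → 12 (4 columns x 3 rows)
```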
The beneficial effect of this method is that images in the intelligent device are searched quickly and accurately, the efficiency of human-machine interaction is improved, and the intelligent device quickly recognizes the user's intent, thereby accurately finding an appropriate number of high-quality target images.
In a second aspect, an embodiment of the invention further provides an image processing method comprising: receiving a user's search instruction; extracting n preset labels from the search instruction; and performing, by the intelligent device, layer-by-layer label matching against the images in order of the importance of the n preset labels to determine k target images, where k is a positive integer. The layer-by-layer label matching comprises: matching the i-th preset label in the search instruction against the (i-1)-th matching result of the (i-1)-th preset label to determine the i-th matching result; stopping the matching if the number k of target images in the i-th matching result is smaller than a first threshold; and otherwise continuing to match the i-th matching result against the (i+1)-th preset label in the search instruction to determine the (i+1)-th matching result, wherein the importance of the i-th preset label is greater than that of the (i+1)-th preset label, and i is a positive integer smaller than or equal to n. Finally, the intelligent device displays information of the k target images.
In one possible design, the smart device performs face feature recognition on an image to determine a recognition result. If the similarity between the recognition result and a user's face features in the person database is greater than a second threshold, the person identity tag corresponding to those face features is determined to be the person identity tag of the image.
In one possible design, when the intelligent device determines that the similarity is smaller than the second threshold but greater than the third threshold, it determines the age difference between the person age tag of the image and the person age tag corresponding to the user's face features. If the recognition error corresponding to the age difference is not smaller than the error of the similarity, the person identity tag corresponding to the user's face features is determined to be the person identity tag of the image, where the error of the similarity is the difference between the second threshold and the similarity.
In one possible design, the n preset labels include preset labels corresponding to the central word segment and preset labels corresponding to the auxiliary word segment, and the importance degree of the preset labels corresponding to the central word segment is greater than that of the preset labels corresponding to the auxiliary word segment.
In one possible design, when the search instruction is voice information, the intelligent device collects the voice information, then recognizes corresponding text information and intonation information according to the voice information, and further determines the center word and the preset label corresponding to the center word, the auxiliary word and the preset label corresponding to the auxiliary word according to the text information and the intonation information.
In one possible design, the smart device determining, from the text information and the intonation information, the center word segments and their preset labels and the auxiliary word segments and their preset labels comprises:
when the voice information is determined from the intonation information to be a declarative sentence, taking the subject, predicate, and object word segments in the text information as the center word segments and extracting their preset labels, and taking the attributive, adverbial, and complement word segments as the auxiliary word segments and extracting their preset labels.
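A minimal sketch of this split, assuming an upstream parser has already assigned each word segment a syntactic role; the role names, example sentence, and function are illustrative, not part of the patent.

```python
CENTER_ROLES = {"subject", "predicate", "object"}
AUXILIARY_ROLES = {"attributive", "adverbial", "complement"}

def split_word_segments(tagged_segments):
    """Separate center word segments from auxiliary ones by syntactic role."""
    center = [w for w, role in tagged_segments if role in CENTER_ROLES]
    auxiliary = [w for w, role in tagged_segments if role in AUXILIARY_ROLES]
    return center, auxiliary

# "Show photos of my son from last year" (roles assigned illustratively)
segments = [("show", "predicate"), ("photos", "object"),
            ("of my son", "attributive"), ("last year", "adverbial")]
print(split_word_segments(segments))
# → (['show', 'photos'], ['of my son', 'last year'])
```

The center segments would then yield the more important preset labels for the first matching layers, and the auxiliary segments the later ones.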
In one possible design, the intelligent device receives images uploaded by the user and recognizes each of them to determine its corresponding preset labels. The preset labels include at least one of a person identity label, a person age label, a sharpness label, a scene label, an address label, and a time information label.
In one possible design, when the number of the target images in the nth matching result is greater than or equal to a first threshold, performing de-duplication processing on the target images, and/or removing images whose sharpness does not meet a preset condition from the target images.
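This post-processing step might look like the following sketch; the content-hash field used for deduplication and the numeric sharpness score are assumed representations, not specified in the description.

```python
def postprocess_targets(targets, min_sharpness):
    """Drop duplicate images (same content hash) and images that are
    too blurry to meet the preset sharpness condition."""
    seen_hashes, kept = set(), []
    for img in targets:
        if img["hash"] in seen_hashes:
            continue  # duplicate of an image already processed
        seen_hashes.add(img["hash"])
        if img["sharpness"] >= min_sharpness:
            kept.append(img)
    return kept

targets = [{"hash": "a", "sharpness": 0.8},
           {"hash": "a", "sharpness": 0.8},   # duplicate
           {"hash": "b", "sharpness": 0.3},   # too blurry
           {"hash": "c", "sharpness": 0.9}]
print([img["hash"] for img in postprocess_targets(targets, 0.5)])  # → ['a', 'c']
```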
In one possible design, the first threshold is determined based on the size of the display screen and the display size of the information of the target image.
An embodiment of the invention provides a computing device comprising a memory for storing a computer program and a processor for calling the computer program stored in the memory and executing any of the image processing methods above according to the obtained program.
An embodiment of the present invention provides a computer-readable non-volatile storage medium including a computer-readable program which, when read and executed by a computer, causes the computer to execute the image processing method of any one of the above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario of an intelligent device according to an embodiment of the present invention;
Fig. 2 is a schematic hardware configuration block diagram of an intelligent device according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a display displaying large-sized target images according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a display displaying small-sized target images according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a computer according to an embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the invention will be readily understood, a further description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. However, the exemplary embodiments can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the exemplary embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus a repetitive description thereof will be omitted. The words expressing the positions and directions described in the present invention are described by taking the drawings as an example, but can be changed according to the needs, and all the changes are included in the protection scope of the present invention. The drawings of the present invention are merely schematic representations of relative positional relationships and are not intended to represent true proportions.
Fig. 1 is a schematic view of an application scenario of an intelligent terminal according to an embodiment of the present invention, including an intelligent device 20 and a client 10. Alternatively, the smart device 20 may be a digital television, a web television, an Internet Protocol Television (IPTV), a smart phone or an electronic album, etc., and the client 10 may be a smart phone, a wearable device, etc.
For example, the client 10 and the smart device 20 may each install a software application and establish connected communication through a network communication protocol, achieving one-to-one control operation and data communication. For example, the client 10 and the intelligent device 20 may establish a control instruction protocol so that functions such as key presses are realized by operating function keys or virtual controls of a user interface provided on the client 10; after the client 10 receives a search instruction in voice or text form from the user, it can send the corresponding search instruction to the smart device 20 over the connection. Audio and video content displayed on the client 10 can also be transmitted to the display 275 of the smart device 20 to realize a synchronized display function.
Fig. 2 shows a hardware configuration block diagram of the smart device 20. The smart device 20 may include a communicator 220, a detector 230, a controller 250 including a processor 254, a memory 260, and a display 275, where either the communicator 220 or the detector 230 may act as the receiver in this embodiment, that is, receive the search instruction input by the user. In other words, the user may input the search instruction through the client 10, which then transmits it to the smart device 20 through the communicator 220, or may directly issue a voice or text search instruction to the smart device 20 through the detector 230.
In some smart devices 20, such as televisions, smart device 20 also includes a modem 210, a detector 230, an external device interface 240, a user interface 265, a video processor 270, an audio processor 280, an audio output interface 285, a power supply 290, and the like.
Specifically, the memory 260 is used for storing images and their preset labels. The receiver, which may be the communicator 220 or the detector 230, is configured to receive a search instruction input by the user, and the controller 250 is then configured to perform tag matching over the images according to the search instruction, so as to find an appropriate number of the images the user wants.
The specific functions of the respective components and the roles in the present embodiment are described below, respectively.
The communicator 220 is a component for communicating with an external device or external server according to various communication protocol types. For example, the smart device 20 may transmit content data to an external device connected via the communicator 220, such as the client 10, or browse and download content data from such a device. The communicator 220 may include network or near-field communication protocol modules such as a WiFi module 221, a Bluetooth communication protocol module 222, and a wired Ethernet communication protocol module 223, so that under the control of the controller 250 it can receive control signals from the client 10 in the form of WiFi signals, Bluetooth signals, radio-frequency signals, and so on.
The detector 230 is a component of the smart device 20 for collecting signals of the external environment or interaction with the outside. The detector 230 may include a sound collector 231, such as a microphone, which may be used to receive the user's sound, such as a control command in the form of a voice message from the user to the smart device 20, or may collect ambient sound for identifying the type of ambient scene, so that the smart device 20 may adapt to ambient noise.
In other exemplary embodiments, the detector 230 may further include an image collector 232, such as a camera, a video camera, etc., that may be used to collect external environmental scenes to adaptively change display parameters of the smart device 20, and to collect attributes of the user or gestures to interact with the user to implement the functionality of the interaction between the smart device 20 and the user.
In other exemplary embodiments, the detector 230 may further include a light receiver for collecting ambient light intensity to adapt to changes in display parameters of the smart device 20, etc.
In other exemplary embodiments, the detector 230 may also include a temperature sensor, such as by sensing ambient temperature, the smart device 20 may adaptively adjust the display color temperature of the image. Illustratively, the smart device 20 may be adjusted to display a colder color temperature shade of the image when the temperature is higher, and the smart device 20 may be adjusted to display a warmer color temperature shade of the image when the temperature is lower.
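A toy sketch of such an adjustment, with an assumed linear mapping from ambient temperature to display color temperature around a 25 C neutral point (the function, neutral point, and step size are all illustrative):

```python
def display_color_temperature(ambient_temp_c, neutral_k=6500, step_k_per_deg=40):
    """Assumed linear rule: hotter rooms get a cooler (higher Kelvin) tone,
    colder rooms a warmer (lower Kelvin) tone."""
    return neutral_k + step_k_per_deg * (ambient_temp_c - 25)

print(display_color_temperature(35))  # → 6900 (cooler tone on a hot day)
print(display_color_temperature(15))  # → 6100 (warmer tone on a cold day)
```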
The controller 250 controls the operation of the smart device 20 and responds to the user's operations by running various software control programs (e.g., an operating system and various application programs) stored on the memory 260.
The controller 250 includes a random access memory (RAM) 251, a read-only memory (ROM) 252, a graphics processor 253, a processor 254, a communication interface 255, and a communication bus 256. The RAM 251, the ROM 252, the graphics processor 253, the processor 254, and the communication interface 255 are connected by the communication bus 256.
The ROM 252 stores various system boot instructions. When a power-on signal is received and the smart device 20 starts up, the processor 254 executes the system boot instructions in the ROM 252 and copies the operating system stored in the memory 260 into the RAM 251 to begin running it. Once the operating system has started, the processor 254 copies the various applications in the memory 260 into the RAM 251 and starts running them.
The graphics processor 253 generates various graphic objects such as icons, operation menus, and graphics displayed in response to user input instructions. The graphics processor 253 may include an operator that processes the various interactive instructions input by the user so that objects are displayed according to their display attributes, and a renderer that generates the objects produced by the operator and displays the rendered result on the display 275.
The processor 254 executes operating system and application program instructions stored in the memory 260, and processes various applications, data, and content according to received user input instructions, so as to finally display and play various audio and video content.
In some example embodiments, the processor 254 may include a plurality of processors: a main processor and one or more sub-processors. The main processor performs some initialization operations of the smart device 20 in a pre-load mode and/or displays a picture in normal mode. The sub-processor or sub-processors perform operations while the smart device 20 is in standby or similar states.
Communication interface 255 may include a first interface through an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
Controller 250 may control the overall operation of smart device 20. For example, in response to receiving a user input command for selecting a graphical user interface (graphic user interface, GUI) object displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user input command.
The object may be any selectable object, such as a hyperlink or an icon. Operations related to the selected object include, for example, displaying the linked hyperlink page, document, or image, or launching the program corresponding to the object. The user input command for selecting the GUI object may be a command input through an input device (e.g., mouse, keyboard, or touch pad) connected to the smart device 20, or a voice command corresponding to speech uttered by the user.
Specifically, the controller 250 is configured to extract n preset labels from the search instruction, perform layer-by-layer label matching against the images in the memory 260 in order of the importance of the n preset labels, and determine k target images. The layer-by-layer label matching comprises matching the i-th preset label against the (i-1)-th matching result of the (i-1)-th preset label to determine the i-th matching result, stopping the matching if the number k of target images in the i-th matching result is smaller than a first threshold, and otherwise continuing to match the i-th matching result against the (i+1)-th preset label to determine the (i+1)-th matching result, wherein the importance of the i-th preset label is greater than that of the (i+1)-th preset label.
The memory 260 is used to store various types of data, software programs, or applications that drive and control the operation of the smart device 20. The memory 260 may include volatile and/or non-volatile memory, and the term "memory" covers the memory 260, the RAM 251 and ROM 252 of the controller 250, and any memory card in the smart device 20.
In some embodiments, the memory 260 is specifically used to store an operating program that drives the controller 250 in the smart device 20, store various applications built into the smart device 20 and downloaded by a user from an external device, store data for configuring various GUIs provided by the display 275, various objects related to the GUIs, visual effect images of selectors for selecting GUI objects, and the like.
In some embodiments, the memory 260 is specifically configured to store drivers and related data for the modem 210, the communicator 220, the detector 230, the external device interface 240, the video processor 270, the display 275, the audio processor 280, etc., such as external data (e.g., audio-video data) received from the external device interface or user data (e.g., key information, voice information, touch information, etc.) received by the user interface 265, the receiver.
In some embodiments, memory 260 specifically stores software and/or programs representing an Operating System (OS), which may include, for example, kernels, middleware, application Programming Interfaces (APIs), and/or application programs. The kernel may illustratively control or manage system resources, as well as functions implemented by other programs, such as the middleware, API, or application, while the kernel may provide interfaces to allow the middleware, API, or application to access the controller to implement control or management of system resources.
Specifically, the memory 260 may also be used to store images and preset labels of the images in this embodiment. The preset label of the image is a mark for identifying the image information, which is determined according to an algorithm and the image information, and the specific determination process of the preset label is described later.
And a display 275 for receiving image signals from the video processor 270 and displaying video content, images, and menu manipulation interfaces. The display 275 may be a liquid crystal display, an organic light-emitting display, or a projection device; the specific display device type, size, and resolution are not limited. The display 275 may include a display assembly for presenting pictures and a drive assembly for driving the display of images. Alternatively, when the display 275 is a projection display, it may include a projection device and a projection screen.
Specifically, the display 275 is used to display the information of the target images, that is, information related to the images determined by layer-by-layer matching to conform to the search instruction, such as a thumbnail and basic information of each image (time, size, etc.). The modem 210 receives broadcast television signals in a wired or wireless manner and may perform modulation and demodulation processing such as amplification, mixing, and resonance, so as to demodulate, from among a plurality of wireless or wired broadcast television signals, the audio/video signal carried on the frequency of the television channel selected by the user, together with additional information (e.g., EPG data).
Under the control of the controller 250, the modem 210 responds to the frequency of the television channel selected by the user and the television signal carried on that frequency.
Depending on the broadcast system of the television signal, the modem 210 may receive signals in various ways, such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, or internet broadcasting; and depending on the modulation type and the type of the received television signal, it may demodulate both analog and digital signals.
In other exemplary embodiments, the modem 210 may also be in an external device, such as an external set-top box or the like. In this way, the set-top box outputs a television signal after modulation and demodulation, and inputs the television signal to the smart device 20 through the external device interface 240.
The external device interface 240 is a component through which the controller 250 controls data transmission between the smart device 20 and external devices. The external device interface 240 may be connected in a wired/wireless manner to external devices such as a set-top box, a game device, or a notebook computer, and may receive data from the external device such as a video signal (e.g., moving images), an audio signal (e.g., music), and additional information (e.g., an EPG).
The external device interface 240 may include any one or more of a High-Definition Multimedia Interface (HDMI) terminal 241, a Composite Video Blanking Sync (CVBS) terminal 242, an analog or digital Component terminal 243, a Universal Serial Bus (USB) terminal 244, a Component terminal (not shown), a Red-Green-Blue (RGB) terminal (not shown), and the like. The user interface 265 may be used to receive various user interactions; specifically, it transmits an input signal from the user to the controller 250, or transmits an output signal from the controller 250 to the user. For example, a remote controller may send an input signal entered by the user, such as a power switch signal, a channel selection signal, or a volume adjustment signal, to the user interface 265, which then passes it to the controller 250; or the remote controller may receive an output signal such as audio, video, or data output from the user interface 265 under the processing of the controller 250, and display the received output signal or output it in audio or vibration form.
In some embodiments, a user may enter user commands through a Graphical User Interface (GUI) displayed on the display 275. In particular, the user interface 265 may receive user input commands by which the user selects different objects or items through the position of the remote control in the GUI. A "user interface" here is a media interface for interaction and information exchange between an application or operating system and a user, which converts between an internal form of information and a form acceptable to the user. A commonly used presentation form of a user interface is the Graphical User Interface (GUI), which refers to a user interface related to computer operations that is displayed graphically. It may consist of interface elements such as icons, windows, and controls displayed on the display of the electronic device, where controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, channel bars, widgets, and the like.
The audio processor 280 is configured to receive an external audio signal, decompress and decode according to a standard codec of an input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification, so as to obtain an audio signal that can be played in the speaker 286.
Illustratively, the audio processor 280 may support various audio formats, such as MPEG-2, MPEG-4, Advanced Audio Coding (AAC), and High-Efficiency AAC (HE-AAC).
An audio output interface 285 for receiving the audio signal output from the audio processor 280 under the control of the controller 250; the audio output interface 285 may include a speaker 286, or an external audio output terminal 287, such as a headphone output terminal, for outputting audio to a sound-producing device of an external apparatus.
In other exemplary embodiments, video processor 270 may include one or more chip components. Audio processor 280 may also include one or more chip components.
And, in other exemplary embodiments, video processor 270 and audio processor 280 may be separate chips or integrated with controller 250 in one or more chips.
The power supply 290 is configured to provide power supply support for the smart device 20 with power input from an external power source under the control of the controller 250. The power supply 290 may be a built-in power supply circuit installed inside the smart device 20 or may be a power supply installed outside the smart device 20.
In connection with the above scenario, fig. 3 presents a flow chart of a method of image processing, comprising:
in step 301, the smart device 20 receives a search instruction from a user.
Illustratively, the user inputs a search instruction "find photos of hali playing basketball in a gym" to the smart device 20. The user may input the search instruction to the smart device through the client 10, which may be connected to the communicator 220 of the smart device 20 so that the two can exchange messages. A search instruction uttered by the user in voice form may also be detected and collected by the sound collector 231 of the detector 230.
In step 302, the smart device 20 extracts n preset tags in the search instruction.
Illustratively, the processor 254 of the smart device 20 processes and recognizes the search instruction and extracts its preset labels, including a first preset label "hali", a second preset label "play basketball", and a third preset label "gym". Extracting the preset labels by processing the search instruction prepares for the matching in subsequent steps, helps to better understand the user's intention, and improves the efficiency of human-computer interaction.
Step 303, the intelligent device 20 sequentially performs layer-by-layer label matching with the images according to the importance levels of the n preset labels, and determines k target images, where k is a positive integer, and the layer-by-layer label matching includes:
Layer-by-layer matching proceeds as follows: the i-th preset label in the search instruction is matched against the (i-1)-th matching result of the (i-1)-th preset label to determine the i-th matching result. If the number k of target images in the i-th matching result is smaller than a first threshold, matching stops; otherwise, the (i+1)-th preset label is matched against the i-th matching result to determine the (i+1)-th matching result. The importance level of the i-th preset label is greater than that of the (i+1)-th preset label, and i is a positive integer less than or equal to n.
For example, the processor 254 of the smart device 20 first matches the most important first preset label "hali" against the pre-stored images and their preset labels, obtaining a first matching result of 85 images containing the preset label "hali". Since 85 is not smaller than the first threshold of 20, it further searches the first matching result for images containing the second preset label "play basketball" to obtain a second matching result. If the number of images in the second matching result is 12, which is smaller than the first threshold of 20, matching stops, the images in the second matching result are output as target images, and the display 275 of the smart device 20 displays the information of the corresponding target images. If the number of images in the second matching result is 25, which is clearly not smaller than the first threshold of 20, matching continues in the same manner until the number of images in a matching result falls below the first threshold. With this technical scheme, layer-by-layer label matching is performed on the images according to the importance levels of the preset labels, ensuring that the final target images satisfy both the user's intention and the display quantity requirement, which improves user experience.
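The layer-by-layer matching of step 303 can be sketched minimally in Python. This is an illustration only: the image records, label names, and threshold value below are invented for demonstration and are not part of the claimed implementation.

```python
# Minimal sketch of layer-by-layer label matching (illustrative data only).
def match_layer_by_layer(images, ordered_labels, first_threshold):
    """Filter images by preset labels in decreasing order of importance,
    stopping as soon as the candidate set shrinks below first_threshold."""
    candidates = images
    for label in ordered_labels:
        matched = [img for img in candidates if label in img["labels"]]
        if len(matched) < first_threshold:
            return matched      # fewer than the threshold: stop and output
        candidates = matched    # otherwise keep narrowing with the next label
    return candidates

photos = [
    {"id": 1, "labels": {"hali", "play basketball", "gym"}},
    {"id": 2, "labels": {"hali", "play basketball"}},
    {"id": 3, "labels": {"hali", "beach"}},
]
result = match_layer_by_layer(photos, ["hali", "play basketball", "gym"],
                              first_threshold=2)
print([img["id"] for img in result])  # [1]
```

Note that when every label is exhausted while the candidate count still meets the threshold, the full candidate set is returned, which corresponds to the case where post-filtering (de-duplication, sharpness) is applied afterwards.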
In step 304, the smart device 20 displays information of the target image.
When the processor 254 of the smart device 20 determines that the number of images in a matching result is smaller than the first threshold, the target images are determined, and the display 275 then displays the information of the corresponding target images.
By adopting the technical scheme, the intelligent device 20 processes and identifies the search instruction input by the user, so that each preset label and the importance degree thereof in the search instruction are effectively and automatically determined, and further, the intelligent device 20 sequentially performs layer-by-layer matching identification with the images prestored in the memory 260 and the corresponding preset labels according to the importance degree, so that the quantity and quality of the finally identified target images are ensured to meet the actual demands of the user, the image processing efficiency is improved, and the high efficiency of human-computer interaction is realized.
Further, before step 301, the receiver of the smart device 20 is further configured to receive an image uploaded by the user, and the processor is configured to perform recognition processing on the uploaded image to determine a preset tag corresponding to each image, where the preset tag includes at least one of a person identity tag, a person age tag, a definition tag, a scene tag, an address tag, and a time information tag.
Optionally, the user opens the intelligent album application on the client 10, selects images, and uploads them to an image management cloud (which may be a home edge computing server with a photo storage management function, or a centralized photo storage management server in a machine room). After the image management cloud receives the images, the smart device 20 may perform algorithmic recognition on them through its intelligent album application, extracting information from each image such as character features, environmental features, location information, shooting time, storage time, number of persons, clothing color, text content, picture background, time of day, and weather conditions. The preset labels extracted for each image are stored in the memory 260 of the smart device 20, or in the photo management cloud, where the smart device 20 can view the images directly through the intelligent album application. Through these steps, the images uploaded by the user are processed and recognized and their preset labels are extracted, which facilitates subsequent image search by preset label, improves image processing efficiency, and makes the search results more accurate and better matched to user requirements.
Optionally, the user may connect the photo management cloud to the intelligent album application of the client 10 or the intelligent album application of the intelligent device 20, view, download, delete, etc. the photos stored in the cloud, and may also view each image according to the extracted tag information in a classified manner, for example, the user may directly view an image whose time information tag is "1 month in 2020", which is convenient for the user to view the image in a classified manner, and improves user experience.
Further, in one possible embodiment, the smart device 20 performs face feature recognition on an image in the memory to determine a recognition result, and if the similarity between the recognition result and a user face feature in the person database is greater than a second threshold, determines the person identity tag corresponding to that user face feature as the person identity tag of the image.
Illustratively, the processor 254 of the smart device 20 performs face feature recognition on the image uploaded by the user, and determines a person identity tag corresponding to a face in the image, where if the similarity between the face identified in the image and the face feature of the user pre-stored in the person database is 99%, which is greater than a preset second threshold value of 90%, then the person identity tag corresponding to the face in the image is directly determined to be "hali".
Further, when the similarity is smaller than the second threshold but larger than a third threshold, an age difference between the person age label of the image and the person age label corresponding to the user's face feature is determined. If the allowable recognition error corresponding to that age difference is not smaller than the similarity error, the person identity tag corresponding to the user's face feature is determined as the person identity tag of the image, where the similarity error is the difference between the second threshold and the similarity.
For example, if the similarity between the face in the image and the face feature of the user "hali" in the user database is 89%, which is less than the second threshold of 90% but greater than the third threshold of 85%, the age of the face in the image is further identified. Suppose the age label identified for the face in the image is "18 years" while the age label of the user "hali" in the user database is "28 years". Further, the allowable similarity error corresponding to the preset age difference of 10 years is 12%, and the similarity error is 1% (the difference between the second threshold and the similarity); since 1% < 12%, the person identity tag corresponding to the face in the image is determined to be "hali".
Optionally, if the similarity between the face in the image and the face feature of the user "hali" in the user database is 50% and less than the third threshold value of 85%, the processor 254 of the intelligent device 20 does not identify the user as "hali", and continues to match with the face feature in the user database or receive the person identity tag input by the user and tag the person identity tag on the face in the image.
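The identity decision of this embodiment, direct acceptance above the second threshold and an age-tolerance check between the third and second thresholds, can be sketched as follows. The threshold values follow the example above, while the linear age-to-error mapping is an assumption chosen only to reproduce the numbers in the text.

```python
# Illustrative identity decision; thresholds follow the worked example above,
# and the linear age-to-error mapping is an assumption for demonstration.
SECOND_THRESHOLD = 0.90   # direct-accept similarity
THIRD_THRESHOLD = 0.85    # lower bound for the age-tolerance check

def allowed_error(age_diff):
    """Hypothetical tolerated similarity error for a given age difference
    (10 years -> 12%, matching the example in the text)."""
    return 0.012 * age_diff

def identify(similarity, image_age, db_age):
    if similarity > SECOND_THRESHOLD:
        return "accept"
    if similarity > THIRD_THRESHOLD:
        similarity_error = SECOND_THRESHOLD - similarity
        if similarity_error <= allowed_error(abs(image_age - db_age)):
            return "accept"   # plausibly an older/younger photo of the person
    return "reject"

print(identify(0.99, 28, 28))  # accept
print(identify(0.89, 18, 28))  # error 1% <= 12% -> accept
print(identify(0.50, 18, 28))  # reject
```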
In one possible embodiment, in step 301, the user may input a search instruction in the form of speech.
Optionally, the smart device 20 may also collect voice information of the user.
In one possible embodiment, the n preset labels of the search instruction include preset labels corresponding to the center word and preset labels corresponding to the auxiliary word, and the importance degree of the preset labels corresponding to the center word is greater than that of the preset labels corresponding to the auxiliary word.
Illustratively, the processor 254 of the smart device 20 may divide the preset labels extracted from the user's search instruction into preset labels corresponding to center words and preset labels corresponding to auxiliary words, setting the former to a higher importance than the latter. Distinguishing center words from auxiliary words helps to better determine the importance ordering of the preset labels from the user's search instruction, so that the image search results obtained by layer-by-layer matching better conform to user requirements, improving user experience and image search efficiency.
Further, in a possible embodiment, in step 302, the intelligent device 20 identifies corresponding text information and intonation information according to the voice information of the user, and further determines the center word and the preset tag corresponding to the center word, the auxiliary word and the preset tag corresponding to the auxiliary word according to the text information and the intonation information.
Optionally, the processor 254 of the smart device 20 first recognizes the voice information to obtain the corresponding text information and intonation information, and further interprets the text content based on the intonation, such as rising (↗), falling (↙), rise-fall (Λ), fall-rise (∨), and level (→). If a word is uttered with a rising tone, that word is treated as an auxiliary word rather than a center word.
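The rising-tone rule just described can be illustrated with a toy sketch; the tone labels and the word list are invented for demonstration.

```python
# Toy sketch of the rising-tone rule: words uttered with a rising tone are
# routed to the auxiliary-word group (tone labels here are invented).
def split_by_tone(words_with_tone):
    center, auxiliary = [], []
    for word, tone in words_with_tone:
        (auxiliary if tone == "rising" else center).append(word)
    return center, auxiliary

center, aux = split_by_tone([
    ("hali", "level"),
    ("play basketball", "level"),
    ("gym", "rising"),
])
print(center)  # ['hali', 'play basketball']
print(aux)     # ['gym']
```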
Illustratively, the user utters the voice "find photos of hali playing basketball in a gym" to the smart device 20. The sound collector 231 of the smart device 20 collects and processes the voice information, converts it into the corresponding text information "find photos of hali playing basketball in a gym", and determines the intonation information of the voice. For example, the smart device 20 recognizes the text information and intonation information of "find photos of hali playing basketball in a gym" with a pre-trained language recognition model and extracts the corresponding preset labels, including preset labels corresponding to center words and preset labels corresponding to auxiliary words. This helps to better order the importance levels of the preset labels, ensures the results better match the user's search intention, and improves user experience.
When the smart device 20 determines from the intonation information that the voice is a declarative sentence, the subject, predicate, and object in the text information are taken as center words, from which the preset labels corresponding to the center words are extracted, and the attributive, adverbial, and complement components in the text information are taken as auxiliary words, from which the preset labels corresponding to the auxiliary words are extracted.
Illustratively, the processor 254 of the smart device 20 determines that "find photos of hali playing basketball in a gym" is a declarative sentence. Among the extracted words, "hali" is a person identity tag among the preset labels, "play basketball" is a scene tag among the preset labels, and "find" is omitted, so "hali" and "play basketball" are determined as preset labels corresponding to center words; similarly, "gym" is determined as a preset label corresponding to an auxiliary word. The importance level of a preset label corresponding to a center word is greater than that of a preset label corresponding to an auxiliary word.
Optionally, the smart device 20 may preset the importance level of each preset tag, for example, the user may set the importance level of the preset tag, namely, the person identity tag > address tag > scene tag > time information tag > person age tag > definition tag, through the user interface 265.
Optionally, the importance ordering of the preset labels preset by the user may be combined with the center/auxiliary word split of the search instruction: the preset labels corresponding to center words are first ordered among themselves according to the user's preset ordering, then the preset labels corresponding to auxiliary words, yielding the overall importance ordering of the preset labels extracted from the search instruction. The importance ordering within the center words is "hali" > "play basketball", so the final importance ordering of the preset labels is "hali" > "play basketball" > "gym"; the first preset label is determined to be "hali", the second preset label "play basketball", and the third preset label "gym". The first, second, third, and subsequent preset labels thus determined fix the order in which the subsequent steps perform layer-by-layer label matching on the images, so that the search results better match user requirements and user experience is improved.
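The combination of the center/auxiliary split with the user's preset tag-type ordering can be sketched as follows; the tag-type names and the types assigned to the example words are illustrative assumptions.

```python
# Sketch of ordering preset labels: center-word labels always outrank
# auxiliary-word labels; within each group the user's preset tag-type
# ordering breaks ties. Tag types for the example words are assumptions.
TYPE_PRIORITY = ["identity", "address", "scene", "time", "age", "sharpness"]

def order_labels(center, auxiliary):
    rank = {t: i for i, t in enumerate(TYPE_PRIORITY)}
    by_type = lambda item: rank[item[1]]      # item = (label, tag_type)
    ordered = sorted(center, key=by_type) + sorted(auxiliary, key=by_type)
    return [label for label, _ in ordered]

center = [("play basketball", "scene"), ("hali", "identity")]
auxiliary = [("gym", "address")]
print(order_labels(center, auxiliary))  # ['hali', 'play basketball', 'gym']
```

Note that even though "gym" carries an address-type tag that the user ranks above scene tags, it stays behind the center-word labels, which reflects the rule that center words always outrank auxiliary words.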
Optionally, when the search instruction input by the user is in text form, the smart device 20 processes and analyzes the instruction, extracts each preset label, sorts the labels by importance, and determines the first preset label, the second preset label, and so on up to the n-th preset label. Ordering the preset labels extracted from the search instruction facilitates orderly layer-by-layer label matching against the images in the subsequent steps, ensures that the matched images better conform to the user's search intention, makes the number of returned images consistent with the user's usage habits, and improves user experience.
In one possible embodiment, step 303 further comprises:
When the number of target images in a matching result is greater than or equal to the first threshold, de-duplication processing is performed on the target images, and/or images whose sharpness does not meet a preset condition are removed from the target images.
If, after matching against all preset labels extracted from the search instruction, the number of images in the final n-th matching result is still greater than the first threshold, the smart device 20 performs similarity recognition on the images in the n-th matching result and de-duplicates images with high similarity. Optionally, the smart device 20 may also perform sharpness recognition on the images, label each image with its sharpness, and, when the number of images in the matching result is greater than the first threshold, remove the images whose sharpness is unsatisfactory. These operations may be performed, for example, by the processor 254 of the smart device 20.
Optionally, assuming there are 5 preset labels corresponding to center words, the preset label with the highest importance is first matched against the labels of the stored pictures to obtain a first batch of pictures, and their number is counted. If the number is greater than the first threshold (for example, 9, the number of pictures the user has set the display 275 to show at once), the preset label with the second-highest importance among the 5 center-word labels is matched against the labels of the first batch of pictures, and so on, until the number of obtained pictures is less than or equal to the current first threshold.
For example, if, after matching the first, second, and third preset labels of the user's search instruction "find photos of hali playing basketball in a gym", the number of images in the resulting third matching result is 22, which is clearly not smaller than the first threshold of 20, similarity recognition is performed on the 22 images of the third matching result and de-duplication is applied: images whose similarity exceeds the preset condition are removed, and the remaining images are determined as target images and output to the display 275 for display.
For example, if the number of images after de-duplication is still greater than the first threshold (for instance, because the similarity between the images is not high and little was removed), sharpness recognition processing is further performed on the images and blurred images are removed (for example, images with sharpness below 80% are judged blurred).
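The two post-matching filters, de-duplication followed by a sharpness cut, can be sketched as follows; the similarity function, duplicate threshold, and sharpness cutoff here are placeholders for demonstration.

```python
# Sketch of the post-matching filters: de-duplication, then a sharpness cut.
# The similarity function and both thresholds are placeholders.
def filter_results(images, first_threshold, similar,
                   dup_threshold=0.95, min_sharpness=0.80):
    if len(images) < first_threshold:
        return images
    # De-duplicate: keep an image only if it is not too similar to a kept one.
    kept = []
    for img in images:
        if all(similar(img, other) < dup_threshold for other in kept):
            kept.append(img)
    if len(kept) < first_threshold:
        return kept
    # Still too many: drop images judged blurred (sharpness below 80%).
    return [img for img in kept if img["sharpness"] >= min_sharpness]

imgs = [
    {"id": 1, "vec": 1.0, "sharpness": 0.90},
    {"id": 2, "vec": 1.0, "sharpness": 0.70},   # duplicate of image 1
    {"id": 3, "vec": 0.2, "sharpness": 0.85},
]
sim = lambda a, b: 1.0 - abs(a["vec"] - b["vec"])  # toy similarity measure
result = filter_results(imgs, first_threshold=2, similar=sim)
print([i["id"] for i in result])  # [1, 3]
```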
In one possible embodiment, the first threshold is determined according to the size of the display screen and the display size of the information of the target images. Illustratively, the smart device 20 may set the number of images best suited to the display 275 according to the size of the display 275 itself and the display size set for the target images. The display quantity and display size may be set by the user according to his or her own needs. Illustratively, as shown in fig. 4, when the user wants to see the found images at a glance, the user may set the image display size and select "display large-size images"; the display 275 then shows 8 images at a time, and the first threshold may be set to 8 or a multiple of 8. As shown in fig. 5, when the user selects "display small-size images", the display shows 20 images at a time, and the first threshold may be set to 20 or a multiple of 20. Setting the first threshold in this way allows the target images to be displayed flexibly and improves user experience.
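The derivation of the first threshold from the screen size and the chosen thumbnail size can be illustrated minimally; all pixel dimensions below are invented for illustration.

```python
# Minimal derivation of the first threshold from screen and thumbnail sizes;
# all pixel dimensions are invented for illustration.
def first_threshold(screen_w, screen_h, thumb_w, thumb_h):
    """Number of thumbnails that fit on one screen in a simple grid."""
    return (screen_w // thumb_w) * (screen_h // thumb_h)

print(first_threshold(1920, 1080, 480, 540))  # 8 large thumbnails (4 x 2)
print(first_threshold(1920, 1080, 384, 270))  # 20 small thumbnails (5 x 4)
```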
According to the embodiments of the present application, preset labels are obtained by analyzing the voice or text information input by the user, and layer-by-layer label matching is then performed against the stored images and their preset labels, so that pictures can be located quickly. When parsing the voice text information into preset labels, the split into center words and auxiliary words further yields an importance ordering, which strengthens the search logic and improves search efficiency.
Based on the same inventive concept, embodiments of the present invention also provide a computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform the above-described image processing method.
Based on the same inventive concept, embodiments of the present invention also provide a computer program product, which when run on a computer, causes the computer to perform the above-described image processing method.
Based on the same technical concept, an embodiment of the present invention provides a computer, as shown in fig. 6, including at least one processor 601 and a memory 602 connected to the at least one processor, where in the embodiment of the present invention, a specific connection medium between the processor 601 and the memory 602 is not limited, and in fig. 6, the processor 601 and the memory 602 are connected by a bus as an example. The buses may be divided into address buses, data buses, control buses, etc.
In the embodiment of the present invention, the memory 602 stores instructions executable by the at least one processor 601, and the at least one processor 601 may perform the steps included in the aforementioned image processing method by executing the instructions stored in the memory 602.
The processor 601 is the control center of the computer; it can connect the various parts of the computer using various interfaces and lines, and achieves data processing by running or executing the instructions stored in the memory 602 and calling the data stored in the memory 602. Alternatively, the processor 601 may include one or more processing units; the processor 601 may integrate an application processor and a modem processor, where the application processor primarily handles the operating system, the user interface, application programs, and the like, and the modem processor primarily handles wireless communication. It will be appreciated that the modem processor described above may also not be integrated into the processor 601. In some embodiments, the processor 601 and the memory 602 may be implemented on the same chip, or they may be implemented separately on independent chips.
The processor 601 may be a general-purpose processor such as a Central Processing Unit (CPU), a digital signal processor, an Application-Specific Integrated Circuit (ASIC), a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, or a combination thereof, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the invention. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the embodiments may be embodied directly in execution by a hardware processor, or in a combination of hardware and software modules within a processor.
The memory 602 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 602 may include at least one type of storage medium, for example flash memory, hard disk, multimedia card, card memory, Random Access Memory (RAM), Static Random Access Memory (SRAM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic memory, magnetic disk, or optical disk. More generally, the memory 602 may be, without limitation, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 602 in embodiments of the present invention may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, or as a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
The specific embodiments provided above are only a few examples under the concept of the present application and do not limit its scope. Any other embodiment that a person skilled in the art derives from the solution of the application without inventive effort also falls within the protection scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (6)

An image processing device, comprising a memory, wherein the memory is configured to store images and preset labels of the images, the preset labels comprising person identity labels; the person identity labels are determined by a processor according to the recognition result of face feature recognition performed on the images in the memory: if the similarity between the recognition result and the face feature of a user in a person database is greater than a second threshold, the person identity label of the image is the person identity label corresponding to the face feature of that user; if the similarity is smaller than the second threshold but greater than a third threshold, whether the person identity label of the image is the person identity label corresponding to the face feature of the user is determined according to the age difference between the person age label of the image and the person age label corresponding to the face feature of the user, when the recognition error corresponding to the age difference is not smaller than the error of the similarity;
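The three-band threshold scheme in this claim can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the `age_tiebreak` callable, and all concrete values are hypothetical, and the age-based error comparison for the middle band is delegated to the caller because the translated claim is ambiguous about its exact form.

```python
def assign_identity_label(similarity, second_threshold, third_threshold,
                          age_tiebreak):
    """Three-band identity decision (all names and values hypothetical).

    - similarity above the second threshold: confident match.
    - similarity between the third and second thresholds: fall back on
      the age-label comparison (`age_tiebreak`), whose exact error
      formula the translated claim leaves ambiguous.
    - similarity at or below the third threshold: no match.
    """
    if similarity > second_threshold:
        return "match"        # label the image with this user's identity
    if similarity > third_threshold:
        # inconclusive band: the claim breaks the tie with the age
        # difference between the image's age label and the user's
        return "match" if age_tiebreak() else "no_match"
    return "no_match"         # treated as a different person


# the tie-break callable encodes the age comparison, here a
# hypothetical "within 5 years" check between the two age labels
label = assign_identity_label(0.82, 0.90, 0.60,
                              age_tiebreak=lambda: abs(34 - 31) <= 5)
# → "match"
```

Keeping the tie-break as a callable isolates the one genuinely ambiguous part of the claim in a single injectable predicate, so the three-band structure itself stays faithful to the text.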
The method comprises the steps of: matching the i-th preset label in the search instruction with the (i-1)-th matching result of the (i-1)-th preset label in the search instruction, and determining the i-th matching result of the i-th preset label; stopping matching if the number k of target images in the i-th matching result is smaller than a first threshold; otherwise, continuing to match the i-th matching result with the (i+1)-th preset label in the search instruction to determine the (i+1)-th matching result; wherein the importance degree of the i-th preset label is greater than that of the (i+1)-th preset label, i is a positive integer smaller than n, and the first threshold is determined according to the size of the display screen and the display size of the information of the target images, or is set by the user according to the user's needs;
Performing layer-by-layer label matching between the n preset labels and pre-stored images according to the importance degrees of the n preset labels to determine k target images, wherein each pre-stored image is provided with preset labels, the preset labels comprising a person identity label; the person identity label is determined according to the recognition result of face feature recognition performed on the image: if the similarity between the recognition result and the face feature of a user in a person database is greater than a second threshold, the person identity label of the image is the person identity label corresponding to the face feature of that user; if the similarity is smaller than the second threshold but greater than a third threshold, whether the person identity label of the image is the person identity label corresponding to the face feature of the user is determined according to the age difference, the error of the similarity being the difference between the second threshold and the similarity, and the label being assigned when the recognition error corresponding to the age difference is not smaller than the error of the similarity; the matching comprises:
matching the i-th preset label in the search instruction with the (i-1)-th matching result of the (i-1)-th preset label in the search instruction, and determining the i-th matching result of the i-th preset label; stopping matching if the number k of target images in the i-th matching result is smaller than a first threshold; otherwise, continuing to match the i-th matching result with the (i+1)-th preset label in the search instruction to determine the (i+1)-th matching result; wherein the importance degree of the i-th preset label is greater than that of the (i+1)-th preset label, i is a positive integer smaller than n, and the first threshold is determined according to the size of the display screen and the display size of the information of the target images, or is set by the user according to the user's needs;
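The layer-by-layer matching described in this claim can be sketched as follows. This is a minimal illustration under the assumption that each stored image carries a set of preset labels; the function name, the dictionary representation, and the example data are all hypothetical.

```python
def layered_label_search(images, query_labels, first_threshold):
    """Layer-by-layer label matching with an early stop.

    `query_labels` are the n preset labels extracted from the search
    instruction, ordered by decreasing importance; `images` maps an
    image id to its set of preset labels (a hypothetical representation).
    Each layer narrows the previous matching result to the images that
    also carry the next label, and refinement stops as soon as the
    candidate set shrinks below `first_threshold` (e.g. few enough
    target images to fit on the display screen).
    """
    candidates = list(images)       # the 0th matching result: all stored images
    for label in query_labels:      # most important label first
        candidates = [img for img in candidates if label in images[img]]
        if len(candidates) < first_threshold:
            break                   # k < first threshold: stop and display these
    return candidates


photos = {
    "img1": {"mom", "beach", "2019"},
    "img2": {"mom", "park"},
    "img3": {"beach"},
    "img4": {"mom", "beach"},
}
# "mom" is the most important label, "beach" the next
print(layered_label_search(photos, ["mom", "beach"], first_threshold=2))
# → ['img1', 'img4']
```

Ordering the labels by importance means the early stop discards only the least important constraints, which is what lets the device return a screen-sized result set without matching every label.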
CN202010342171.8A | 2020-04-27 | 2020-04-27 | Intelligent device and image processing method | Active | CN113468351B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010342171.8A | 2020-04-27 | 2020-04-27 | CN113468351B (en) Intelligent device and image processing method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010342171.8A | 2020-04-27 | 2020-04-27 | CN113468351B (en) Intelligent device and image processing method

Publications (2)

Publication Number | Publication Date
CN113468351A (en) | 2021-10-01
CN113468351B (en) | 2025-04-08

Family

ID=77865830

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010342171.8A (Active) | CN113468351B (en) Intelligent device and image processing method | 2020-04-27 | 2020-04-27

Country Status (1)

Country | Link
CN (1) | CN113468351B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114815707B (en) * | 2022-05-17 | 2023-04-07 | 重庆伏特猫科技有限公司 | Intelligent device control method and system based on the Netty network framework
CN114756700B (en) * | 2022-06-17 | 2022-09-09 | 小米汽车科技有限公司 | Scene library establishing method and device, vehicle, storage medium, and chip

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108228715A (en) * | 2017-12-05 | 2018-06-29 | 深圳市金立通信设备有限公司 | Method, terminal, and computer-readable storage medium for displaying images
CN110210307A (en) * | 2019-04-30 | 2019-09-06 | 中国银联股份有限公司 | Face sample library deployment method, and face-recognition-based service processing method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101853297A (en) * | 2010-05-28 | 2010-10-06 | 英华达(南昌)科技有限公司 | Method for quickly obtaining desired images in an electronic device
KR101203165B1 (en) * | 2010-11-19 | 2012-11-20 | 조광현 | Apparatus and method for extracting tags
CN105095288B (en) * | 2014-05-14 | 2020-02-07 | 腾讯科技(深圳)有限公司 | Data analysis method and data analysis device
CN110413818B (en) * | 2019-07-31 | 2023-11-17 | 腾讯科技(深圳)有限公司 | Label recommendation method and device, computer-readable storage medium, and computer equipment
CN110781421B (en) * | 2019-08-13 | 2023-10-17 | 腾讯科技(深圳)有限公司 | Virtual resource display method and related device
CN110569332B (en) * | 2019-09-09 | 2023-01-06 | 腾讯科技(深圳)有限公司 | Sentence feature extraction processing method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108228715A (en) * | 2017-12-05 | 2018-06-29 | 深圳市金立通信设备有限公司 | Method, terminal, and computer-readable storage medium for displaying images
CN110210307A (en) * | 2019-04-30 | 2019-09-06 | 中国银联股份有限公司 | Face sample library deployment method, and face-recognition-based service processing method and device

Also Published As

Publication number | Publication date
CN113468351A (en) | 2021-10-01

Similar Documents

Publication | Publication Date | Title
CN110737840B (en) Voice control method and display device
CN112000820A (en)Media asset recommendation method and display device
CN111343512B (en)Information acquisition method, display device and server
CN112346695A (en)Method for controlling equipment through voice and electronic equipment
CN112163086A (en)Multi-intention recognition method and display device
CN111914134B (en)Association recommendation method, intelligent device, and service device
CN113722542A (en)Video recommendation method and display device
CN111625716A (en)Media asset recommendation method, server and display device
CN114945102A (en) Display device and method for character recognition display
CN111770370A (en)Display device, server and media asset recommendation method
CN111526402A (en)Method for searching video resources through voice of multi-screen display equipment and display equipment
CN112004131A (en)Display system
CN112380420A (en)Searching method and display device
CN112053688A (en)Voice interaction method, interaction equipment and server
CN111949782A (en)Information recommendation method and service equipment
CN113468351B (en)Intelligent device and image processing method
CN112004126A (en)Search result display method and display device
CN111556350B (en)Intelligent terminal and man-machine interaction method
CN112188249B (en)Electronic specification-based playing method and display device
CN111866568B (en)Display device, server and video collection acquisition method based on voice
WO2022012299A1 (en)Display device and person recognition and presentation method
CN113163228B (en) Media asset playback type marking method and server
CN114442989B (en) Natural language analysis method and device
CN111950288B (en)Entity labeling method in named entity recognition and intelligent device
CN113115081B (en) Display device, server and media asset recommendation method

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
