Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings. The description includes various details of the embodiments of the present disclosure to facilitate understanding, and these details should be regarded as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in describing the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of an element is not specifically limited, there may be one or more of that element. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the image processing method.
In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in Fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, Fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may receive the target video frames using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, and Linux or Linux-like operating systems (e.g., GOOGLE Chrome OS), or may include various mobile operating systems, such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, personal digital assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. Gaming systems may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, WiFi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices for the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Ultrasound scanning is of great importance for medical diagnosis. Of the roughly 50 million doctors worldwide, only 2% have mastered ultrasound scanning skills. During an ultrasound scan, images of different sections need to be acquired for real-time diagnosis, and the diagnostic results obtained from ultrasound images depend heavily on the doctor's experience. At the same time, ultrasound scanning exposes the human body to no radiation, is inexpensive, and has a wide range of applications: it can be used to examine the human body from head to foot, and it can be the preferred modality for particular patients. Ultrasound can continuously and dynamically observe the movement and function of the viscera, and can track lesions and display three-dimensional changes without being limited by imaging layers. Moreover, ultrasound equipment is easy to move and non-invasive, so patients with limited mobility can be diagnosed at the bedside. How to provide intelligent guidance during ultrasound scanning so as to assist doctors in obtaining accurate section images is therefore a problem of wide interest in ultrasound scanning.
In the related art, when ultrasonically scanning a tissue under test, the pose difference between the current section and a standard section is computed from the ultrasound images obtained during scanning, and operation logic designed around this pose difference provides guidance for the next ultrasound scan. Alternatively, the current scanned-section information is fused with IMU (inertial measurement unit) information, and the combination of the two kinds of data provides guidance for the next scan from both the image perspective and the spatial-information perspective.
Since the pose includes information along three axes as well as information of different dimensions such as organs and sections, computing the pose deviation from only a single ultrasound image often makes the provided guidance inaccurate. An IMU can provide spatial information, but IMUs suffer from inertial drift: as the probe moves, the positional error grows larger and larger, which also makes the provided guidance inaccurate.
According to an aspect of the present disclosure, there is provided an image processing method. Referring to Fig. 2, an image processing method 200 according to some embodiments of the present disclosure includes:
step S210: obtaining an ultrasound image for a first position of a tissue under test;
step S220: obtaining, based on the ultrasound image, a first detection result indicating whether to perform continuous ultrasound scanning on the tissue under test, the continuous ultrasound scanning being used to ultrasonically scan a plurality of consecutive positions on the tissue under test, starting from the first position, so as to obtain an ultrasound video of the tissue under test;
step S230: in response to the first detection result indicating that continuous ultrasound scanning is to be performed on the tissue under test, obtaining the ultrasound video; and
step S240: obtaining a target video frame from the ultrasound video, the target video frame indicating a target section of the tissue under test.
By obtaining, from an ultrasound image acquired while scanning the tissue under test, a first detection result that indicates whether continuous ultrasound scanning should be performed, and by obtaining the ultrasound video only in response to that result, the method provides intelligent guidance for the continuous scanning process and avoids unnecessary continuous scans. At the same time, obtaining the target video frame from the ultrasound video realizes automatic selection of a standard section image, assisting the doctor in obtaining an accurate section image.
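By way of illustration only, the control flow of steps S210-S240 can be sketched as follows. This is a minimal sketch rather than the claimed implementation: the four callables are hypothetical stand-ins for the acquisition hardware and the detection models described in the embodiments below.

```python
from typing import Any, Callable, Optional, Sequence

Frame = Any  # stand-in for an image type, e.g. a numpy array

def image_processing_method(
    acquire_image: Callable[[], Frame],            # step S210
    first_detection: Callable[[Frame], bool],      # step S220
    record_video: Callable[[], Sequence[Frame]],   # step S230
    select_target_frame: Callable[[Sequence[Frame]], Optional[Frame]],  # step S240
) -> Optional[Frame]:
    ultrasound_image = acquire_image()             # S210: image at the first position
    if not first_detection(ultrasound_image):      # S220: should continuous scanning start?
        return None                                # avoids an unnecessary continuous scan
    ultrasound_video = record_video()              # S230: continuous scan yields the video
    return select_target_frame(ultrasound_video)   # S240: frame showing the target section
```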
In some embodiments, the tissue under test may be any biological tissue, for example living tissue of a human body or an animal. In some examples, the tissue under test includes the neck, abdomen, or leg of a human body.
In some embodiments, the ultrasound image may be an image obtained by ultrasound scanning with an ultrasound probe for a first location of the tissue under test.
The ultrasound probe transmits ultrasonic waves into the tissue under test and receives the ultrasonic echoes returned from it, thereby obtaining ultrasonic echo information; an ultrasound image of the tissue under test is then obtained by performing signal processing on the echo signals.
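As a rough, non-limiting illustration of this signal-processing step (not the processing of any particular probe), a B-mode image can be formed from received echo (RF) lines by envelope detection and log compression:

```python
import numpy as np
from scipy.signal import hilbert  # assumes SciPy is available

def rf_to_bmode(rf_lines: np.ndarray, dynamic_range_db: float = 60.0) -> np.ndarray:
    """rf_lines: 2-D array of echo samples, one column per scan line."""
    envelope = np.abs(hilbert(rf_lines, axis=0))          # echo envelope per scan line
    envelope /= envelope.max() + 1e-12                    # normalize to [0, 1]
    bmode_db = 20.0 * np.log10(envelope + 1e-12)          # log compression (dB)
    bmode_db = np.clip(bmode_db, -dynamic_range_db, 0.0)  # keep the display range
    return (bmode_db + dynamic_range_db) / dynamic_range_db  # map to [0, 1] for display
```

Real scanners additionally perform beamforming, filtering, and scan conversion before display.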
In some embodiments, the image processing method according to the present disclosure further comprises the step of determining a target organ to be scanned. For example, a target organ determined by a doctor is obtained, which may be, for example, the thyroid gland.
In some embodiments, in response to determining a target organ, a plurality of section types corresponding to the target organ are determined. For example, a section along the long-axis (extending) direction of the thyroid is a longitudinal section, and a section perpendicular to the long axis is a transverse section.
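By way of illustration, the correspondence between a target organ and its section types could be kept in a simple lookup table; the organ and section names below are illustrative only:

```python
# Illustrative organ-to-section-type mapping; once the target organ is
# determined, its section types to be scanned follow directly.
SECTION_TYPES = {
    "thyroid": ("longitudinal", "transverse"),  # along / across the long axis
    "kidney": ("longitudinal", "transverse"),
}

def section_types_for(target_organ: str) -> tuple:
    return SECTION_TYPES[target_organ]
```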
In some embodiments, as shown in Fig. 3, step S220 of obtaining a first detection result based on the ultrasound image includes:
step S310: obtaining, based on the ultrasound image, a second detection result indicating whether the tissue under test corresponds to a target organ; and
step S320: obtaining the first detection result in response to the second detection result indicating that the tissue under test corresponds to the target organ.
Obtaining the first detection result by way of a second detection result that indicates whether the tissue under test corresponds to the target organ, i.e., judging whether the tissue corresponds to the target organ before deciding whether to perform continuous ultrasound scanning, avoids idle recording and organ scanning errors.
For example, when the abdomen is ultrasonically scanned to obtain an ultrasound image of the kidney as the target organ, the abdomen also contains the liver. To avoid scanning the liver, a second detection result is obtained based on the ultrasound image so as to judge whether the ultrasound image acquired at the first position of the tissue under test corresponds to the kidney; if it does not, the subsequent judgment of whether to perform continuous ultrasound scanning is skipped, saving ultrasound scanning time.
In some embodiments, the second detection result is obtained by inputting the ultrasound image into a target organ identification model. The target organ identification model is a trained machine learning model, trained on ultrasound images of a plurality of organs as training images.
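A minimal sketch of this step is given below, assuming a hypothetical trained classifier that exposes `predict` and `class_names`; any image classifier trained on ultrasound images of a plurality of organs would fit this interface:

```python
import numpy as np

def second_detection_result(model, ultrasound_image: np.ndarray,
                            target_organ: str, threshold: float = 0.5) -> bool:
    # The classifier returns one probability per organ class.
    probabilities = model.predict(ultrasound_image[np.newaxis, ...])[0]
    predicted = model.class_names[int(np.argmax(probabilities))]
    # The tissue under test corresponds to the target organ only if the
    # classifier is confident and agrees with the requested organ.
    return predicted == target_organ and float(probabilities.max()) >= threshold
```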
In some embodiments, the target organ has a plurality of corresponding section types. As shown in Fig. 4, step S320 of obtaining the first detection result in response to the second detection result indicating that the tissue under test corresponds to the target organ includes:
step S410: obtaining, based on the ultrasound image, a third detection result indicating whether the ultrasound image corresponds to one of the plurality of section types; and
step S420: obtaining the first detection result in response to the third detection result indicating that the ultrasound image corresponds to one of a plurality of standard section images.
Obtaining the first detection result by way of a third detection result that indicates whether the ultrasound image corresponds to one of the plurality of section types, i.e., judging whether the section to be scanned at the first position of the tissue under test meets the requirement before continuous ultrasound scanning begins, avoids section scanning errors.
In some embodiments, the third detection result is obtained by inputting the ultrasound image into a section identification model. For example, the section identification model is trained on a plurality of images corresponding to the plurality of section types of the target organ.
In some embodiments, the target organ identification model and the section identification model may be the same model; for example, the target organ identification model is trained on a plurality of section images of the target organ corresponding to the plurality of section types. When the target organ identification model identifies an input image as the target organ, it simultaneously identifies which of the target organ's section types the image corresponds to.
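A sketch of this combined recognition, assuming a hypothetical model whose label encodes both the organ and the section type (e.g. "thyroid/transverse"), might look like:

```python
def detect_organ_and_section(model, ultrasound_image, target_organ, section_types):
    label = model.classify(ultrasound_image)   # hypothetical call, e.g. "thyroid/transverse"
    organ, _, section = label.partition("/")
    second_result = organ == target_organ                       # second detection result
    third_result = second_result and section in section_types   # third detection result
    return second_result, third_result
```

A single forward pass thus yields the second and third detection results together.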
In some embodiments, as shown in Fig. 5, step S220 of obtaining a first detection result based on the ultrasound image includes:
step S510: obtaining a target structure in the ultrasound image, the target structure corresponding to one of a plurality of target sections; and
step S520: in response to the target structure being in a preset area of the ultrasound image, determining the first detection result, the first detection result indicating continuous ultrasound scanning of the tissue under test.
By performing continuous ultrasound scanning in response to the target structure being in a preset area (such as a middle area), the scan can remain focused on the key structure throughout the continuous scanning process, so that the target structure occupies the center of the field of view in the resulting ultrasound video.
In some embodiments, the preset area is an area of the ultrasound image surrounding its central location, and the proportion of the ultrasound image that this area occupies corresponds to the target structure. For example, when the target structure is the left lobe of the thyroid, the area occupies 1/3 to 1/2 of the ultrasound image.
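One possible implementation of the preset-area check is sketched below; the centered window's area fraction is a parameter chosen per target structure (e.g. between 1/3 and 1/2 for the left thyroid lobe, per the example above):

```python
import math

def structure_in_preset_area(box, image_w, image_h, area_fraction=0.5):
    """Is the target structure's bounding-box center inside a centered window
    occupying `area_fraction` of the ultrasound image?"""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0   # structure center
    side = math.sqrt(area_fraction)             # linear scale of the window
    half_w, half_h = image_w * side / 2.0, image_h * side / 2.0
    return (abs(cx - image_w / 2.0) <= half_w and
            abs(cy - image_h / 2.0) <= half_h)
```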
In some embodiments, in response to the first detection result indicating continuous ultrasound scanning of the tissue under test, the ultrasound probe automatically scans the tissue under test along a path starting from the first location, thereby obtaining the ultrasound video.
In some embodiments, as shown in Fig. 6, step S230 of obtaining the ultrasound video in response to the first detection result indicating that continuous ultrasound scanning is to be performed on the tissue under test includes:
step S610: in response to the first detection result indicating that continuous ultrasound scanning is to be performed on the tissue under test, outputting prompt information for prompting an operator to perform the continuous ultrasound scan; and
step S620: obtaining the ultrasound video in response to the continuous ultrasound scan being performed.
Outputting the prompt information prompts the operator to perform the continuous ultrasound scan, thereby realizing the execution of the continuous ultrasound scanning.
In some embodiments, the prompt information is text displayed on a display. In other embodiments, the prompt information is a voice instruction.
In some embodiments, the operator realizes the continuous ultrasound scan by moving the ultrasound probe along a path starting from the first location to scan the tissue under test, thereby obtaining the ultrasound video.
In some embodiments, the target structure is tracked in the acquired ultrasound video during the continuous ultrasound scan in response to the continuous ultrasound scan being performed.
During the continuous ultrasound scan, tracking the target structure can guide the operator to move the ultrasound probe correctly, so that the resulting ultrasound video remains focused on the target structure and the target structure occupies the center of the field of view.
In some embodiments, the target structure is highlighted on the display while it is being tracked, enabling the operator to adjust the scan path based on the displayed target structure.
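By way of example, the tracking could be realized with an off-the-shelf tracker; the sketch below uses OpenCV's CSRT tracker as one possible choice, with `initial_box` being the structure's bounding box located in the first frame:

```python
import cv2  # requires opencv-contrib-python (provides cv2.TrackerCSRT_create)

def track_structure(frames, initial_box):
    tracker = cv2.TrackerCSRT_create()
    tracker.init(frames[0], tuple(initial_box))
    boxes = [tuple(initial_box)]
    for frame in frames[1:]:
        ok, box = tracker.update(frame)
        boxes.append(tuple(map(int, box)) if ok else None)  # None marks a lost track
    return boxes  # per-frame boxes, usable for highlighting and guidance
```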
In some embodiments, the ultrasound video is input into a target section image recognition model to obtain the target video frame in the ultrasound video, the target video frame being the video frame corresponding to a target section of the tissue under test.
In some embodiments, as shown in Fig. 7, step S240 of obtaining a target video frame from the ultrasound video includes:
step S710: obtaining an image feature of each video frame in the ultrasound video; and
step S720: comparing the image feature of each video frame in the ultrasound video with the image feature of each image in a preset image library to obtain the target video frame, wherein the preset image library includes a plurality of standard section images, the similarity between the image feature of the target video frame and the image feature of a first standard section image among the plurality of standard section images is the largest, and this similarity is greater than a preset similarity threshold.
By comparing the image features of each video frame in the ultrasound video with the image features of each image in the preset image library to obtain the target video frame, an unsupervised algorithm is used, ensuring the robustness of key-frame identification.
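A sketch of steps S710-S720 follows. The feature extractor is a hypothetical stand-in (embeddings from any pretrained image backbone would do), and cosine similarity is one reasonable choice of comparison metric:

```python
import numpy as np

def select_target_frame(frames, extract_features, library_features,
                        similarity_threshold=0.8):
    lib = np.asarray(library_features, dtype=np.float32)
    lib /= np.linalg.norm(lib, axis=1, keepdims=True)  # normalize library features
    best_frame, best_score = None, similarity_threshold
    for frame in frames:
        feat = extract_features(frame).astype(np.float32)
        feat /= np.linalg.norm(feat)
        score = float(np.max(lib @ feat))              # best cosine similarity in library
        if score > best_score:                         # must exceed the preset threshold
            best_frame, best_score = frame, score
    return best_frame                                  # None if no frame qualifies
```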
In some embodiments, the plurality of standard section images are ultrasound images obtained by ultrasonically scanning target sections of a plurality of tissues corresponding to a plurality of organs. In some embodiments, each tissue includes a plurality of target sections, and the plurality of standard section images include ultrasound images respectively corresponding to the target sections of each tissue.
In some embodiments, after the target video frame is acquired, it is stored in a preset queue. Meanwhile, steps S210-S240 are performed for a second location of the tissue under test so as to obtain a target video frame corresponding to another target section, thereby completing the full ultrasound scan of the tissue under test.
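As a small illustration, the preset queue could be as simple as a deque that also signals when the current section has enough frames (cf. step S814 in Fig. 8):

```python
from collections import deque

target_frame_queue = deque()

def store_target_frame(frame, required_count: int) -> bool:
    target_frame_queue.append(frame)
    # True once the queue holds enough target frames for the current section.
    return len(target_frame_queue) >= required_count
```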
Referring to fig. 8, in one embodiment according to the present disclosure, an image processing method according to the present disclosure is implemented by performing steps S801 to S818.
As shown in Fig. 8, first, step S801 is performed to determine an organ to be scanned, for example, the thyroid. In one example, the organ to be scanned may be determined by receiving an instruction input by an operator through an input/output device, the instruction indicating the organ to be scanned.
Next, step S802 is performed to determine an organ scan rule. In one example, the organ scan rule includes the section types of the organ, the section images to be scanned, and the like.
Next, step S803 is performed: the tissue under test is scanned using the ultrasound probe to obtain an ultrasound image. In one example, a scan indication may be issued to instruct the operator to operate the ultrasound probe to scan the tissue under test.
Next, step S804 is performed to determine, based on the ultrasound image, whether the tissue under test is the target organ and whether the current scanning position meets the scan rule. For example, whether the current scanning position meets the scan rule is determined by judging whether a section in the ultrasound image corresponds to one of the organ's section types.
When it is determined that the tissue under test is the target organ and the current scanning position meets the scan rule, step S805 is performed to obtain the key structure in the ultrasound image.
Next, step S806 is performed to determine whether the position of the key structure in the ultrasound image meets the scanning requirement, for example, whether the key structure is in a preset region (e.g., the middle region) of the ultrasound image.
When the determination result in step S806 is "no", step S807 is executed to move the ultrasonic probe to continue scanning. In one example, the scanning is continued by outputting a movement prompt to cause the operator to move the ultrasound probe.
When the determination result in step S806 is yes, step S808 is performed, and a recording prompt is output, which indicates that recording of the ultrasonic video is started.
Next, step S809 is performed to move the ultrasound probe and continue scanning. In one example, scanning is continued by outputting a movement prompt to cause the operator to move the probe.
Next, step S810 is executed to locate the key structure in the video frames obtained while the ultrasound video is being recorded.
Next, step S811 is performed to track the key structure in the video frame obtained during the recording of the ultrasound video.
Next, step S812 is performed to determine whether the video frame is a target video frame.
When the determination result in step S812 is "yes", step S813 is performed to add the video frame to the storage queue, and after step S813 is performed, step S814 is performed to determine whether the number of target video frames in the storage queue meets the requirement.
When the determination result in step S812 is "no", step S814 is performed directly to determine whether the number of target video frames in the storage queue meets the requirement.
When the determination result in step S814 is "yes", step S815 is executed to output a completion prompt indicating that scanning of the current portion of the tissue under test has been completed. For example, when the scan rule includes a plurality of section types, scanning of the current portion of the tissue under test is determined to be complete when a target video frame corresponding to one section type is obtained.
When the determination result in step S814 is "no", the flow returns to step S809 to continue moving the probe and scanning the tissue under test, so as to complete the scan of its current portion.
Next, step S816 is executed to determine whether the complete scanning of the tissue under test is completed.
When the determination result in step S816 is "yes", step S817 is performed to output a completion prompt indicating that all scanning of the tissue under test has been completed. For example, when the scan rule includes a single section type, the scan of the tissue under test is determined to be fully complete at the same moment the scan of its current portion is determined complete, namely when the target video frame corresponding to that section type is obtained.
When the determination result in step S816 is "no", step S818 is performed to continue scanning the tissue under test and obtain an ultrasound image of another location of the tissue under test; after step S818 is completed, the flow returns to step S804, so that all scanning of the tissue under test can be completed.
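The whole Fig. 8 workflow can be condensed into the following sketch. Here `hw` bundles the hypothetical probe, recognition models, tracker, and prompt output; every method name on it is an illustrative stand-in rather than a real API:

```python
def full_scan(hw, target_organ, scan_rules, frames_required=1):
    sections = []
    for rule in scan_rules:                                   # S816/S818: every section
        queue = []                                            # storage queue (S813)
        while len(queue) < frames_required:                   # S814: enough target frames?
            image = hw.scan_image()                           # S803: scan -> ultrasound image
            if not hw.matches_organ_and_rule(image, target_organ, rule):  # S804
                hw.prompt("Adjust the probe: organ or section not matched")
                continue
            structure = hw.locate_key_structure(image)        # S805
            if not hw.in_preset_area(structure, image):       # S806
                hw.prompt("Move the probe to center the key structure")   # S807
                continue
            hw.prompt("Recording started")                    # S808: recording prompt
            for frame in hw.record_frames():                  # S809: move probe and record
                hw.track(structure, frame)                    # S810/S811: locate and track
                if hw.is_target_frame(frame):                 # S812
                    queue.append(frame)                       # S813
                    if len(queue) >= frames_required:
                        break
        hw.prompt("Scanning of the current portion complete") # S815
        sections.append(queue)
    hw.prompt("All scanning of the tissue under test complete")  # S817
    return sections
```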
According to another aspect of the present disclosure, there is also provided an image processing apparatus. Referring to Fig. 9, the apparatus 900 includes: an ultrasound image acquisition unit 910 configured to obtain an ultrasound image for a first position of a tissue under test; a first detection result acquisition unit 920 configured to obtain, based on the ultrasound image, a first detection result indicating whether to perform continuous ultrasound scanning on the tissue under test, the continuous ultrasound scanning being used to ultrasonically scan a plurality of consecutive positions on the tissue under test, starting from the first position, so as to obtain an ultrasound video of the tissue under test; an ultrasound video acquisition unit 930 configured to obtain the ultrasound video in response to the first detection result indicating that continuous ultrasound scanning is to be performed on the tissue under test; and a target video frame acquisition unit 940 configured to obtain, from the ultrasound video, a target video frame indicating a target section of the tissue under test.
In some embodiments, the first detection result acquisition unit 920 includes: a second detection result acquisition unit configured to obtain, based on the ultrasound image, a second detection result indicating whether the tissue under test corresponds to a target organ; and a first acquisition subunit configured to obtain the first detection result in response to the second detection result indicating that the tissue under test corresponds to the target organ.
In some embodiments, the first acquisition subunit includes: a third detection result acquisition unit configured to obtain, based on the ultrasound image, a third detection result indicating whether the ultrasound image corresponds to one of the plurality of section types; and a second acquisition subunit configured to obtain the first detection result in response to the third detection result indicating that the ultrasound image corresponds to one of a plurality of standard section images.
In some embodiments, the first detection result acquisition unit includes: a target structure acquisition unit configured to obtain a target structure in the ultrasound image, the target structure corresponding to one of a plurality of target sections; and a determination unit configured to determine the first detection result in response to the target structure being in a preset area of the ultrasound image, the first detection result indicating continuous ultrasound scanning of the tissue under test.
In some embodiments, the ultrasound video acquisition unit includes: an output unit configured to output, in response to the first detection result indicating continuous ultrasound scanning of the tissue under test, prompt information for prompting an operator to perform the continuous ultrasound scan; and an acquisition subunit configured to obtain the ultrasound video in response to the continuous ultrasound scan being performed.
In some embodiments, the apparatus 900 further comprises a tracking unit configured to track the target structure in the obtained ultrasound video during the continuous ultrasound scan in response to the continuous ultrasound scan being performed.
In some embodiments, the target video frame acquisition unit includes: a feature extraction unit configured to obtain an image feature of each video frame in the ultrasound video; and a comparison unit configured to compare the image feature of each video frame in the ultrasound video with the image feature of each image in a preset image library to obtain the target video frame, wherein the preset image library includes a plurality of standard section images, the similarity between the image feature of the target video frame and the image feature of a first standard section image among the plurality of standard section images is the largest, and this similarity is greater than a preset similarity threshold.
In some embodiments, the tissue under test comprises the neck, abdomen, or leg of a human body.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
Referring to Fig. 10, a block diagram of the structure of an electronic device 1000, which may be a server or a client of the present disclosure and which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the electronic device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the electronic apparatus 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
Various components in the electronic device 1000 are connected to the I/O interface 1005, including an input unit 1006, an output unit 1007, a storage unit 1008, and a communication unit 1009. The input unit 1006 may be any type of device capable of inputting information to the electronic device 1000; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 1007 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 1008 may include, but is not limited to, magnetic disks and optical disks. The communication unit 1009 allows the electronic device 1000 to exchange information/data with other devices through a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth(TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1001 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1001 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1000 via the ROM 1002 and/or the communication unit 1009. One or more of the steps of the method 200 described above may be performed when the computer program is loaded into RAM 1003 and executed by the computing unit 1001. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the method 200 in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special or general purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, the various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.