CN113454732A - Voice interaction method for coexistence of multiple medical devices, medical system and medical device - Google Patents

Voice interaction method for coexistence of multiple medical devices, medical system and medical device

Info

Publication number
CN113454732A
Authority
CN
China
Prior art keywords
interaction, user, identity, feature, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980092353.XA
Other languages
Chinese (zh)
Other versions
CN113454732B (en)
Inventor
赵亮
陈巍
谈琳
罗军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd
Publication of CN113454732A
Application granted
Publication of CN113454732B
Legal status: Active
Anticipated expiration


Abstract

(Translated from Chinese)


The present invention provides a voice interaction method for use when multiple medical devices with a voice interaction function coexist in the same spatial environment. A medical device detects voice information in the environment, and only after analysis determines that the voice information contains the identity feature corresponding to the local device does it start its interaction system to interact with the user. Identity features are obtained by pre-assignment, and different identity features correspond to different medical devices in the spatial environment. The voice interaction method thereby gains directionality toward the multiple medical devices in the same spatial environment on the basis of identity features, allowing the devices to be addressed individually or in groups and avoiding confusion of voice commands. The present application also relates to medical devices and medical systems implementing this interaction method.


Description

Voice interaction method for coexistence of multiple medical devices, medical system and medical device

Technical Field
The present application relates to the technical field of medical devices, and in particular, to a voice interaction method for coexistence of multiple medical devices, a medical device for performing the method, and a medical system for performing the method.
Background
At present, more and more medical devices are equipped with a voice interaction function, making interaction between users and medical devices more convenient. However, because medical devices of the same model or type have similar functions, their voice interaction instructions are also substantially the same. If multiple medical devices of the same model or type with the voice interaction function are present simultaneously in the same spatial environment, such as a ward, an outpatient clinic, or an operating room, all of them will receive and respond to the same voice interaction command.
In such a scenario, the user's voice interaction loses directionality: voice interaction and medical functions cannot be directed at one, or a subset, of the medical devices. If some medical devices in the spatial environment are in the monitoring stage while others are idle, executing the same voice command can easily cause logic confusion in some of the devices. Furthermore, if multiple medical devices give voice feedback to the user simultaneously, the feedback messages may interfere with one another, impairing the user's ability to receive the information.
Disclosure of Invention
The application provides a voice interaction method for coexistence of a plurality of medical devices, which is used for determining the directivity of a voice instruction when a plurality of medical devices exist in the same spatial environment. The application also relates to a medical device for carrying out the method, and a medical system for carrying out the method. The application specifically comprises the following technical scheme:
in a first aspect, a voice interaction method for coexistence of multiple medical devices includes:
obtaining voice information in an environment;
determining that the identity feature exists in the voice information, wherein the identity feature is obtained through pre-allocation;
and starting the local interactive system to interact with the user.
Wherein, the identity feature includes a single-machine feature and a multi-machine feature, and the determining that the identity feature exists in the voice message includes:
determining that the voice information comprises a single-machine feature or a multi-machine feature;
the method for starting the local interactive system to interact with the user comprises the following steps:
when the identity characteristic is determined to be a single machine characteristic, starting a local interaction system to interact with a user;
and when the identity characteristic is determined to be a multi-machine characteristic, starting a local interactive system and interacting with the user based on the interactive time sequence.
Wherein the determining that the identity characteristic exists in the voice information comprises:
determining that the identity feature is a multi-machine feature;
determining that the voice information also comprises time sequence information;
the starting of the local interactive system and the interaction with the user based on the interaction time sequence comprise:
and interacting with the user according to the interaction time sequence of the time sequence information.
Wherein the multi-machine feature comprises a ranking feature, and the determining that the identity feature exists in the voice message comprises:
determining that the identity feature is a multi-machine feature;
analyzing the ranking of the ranking features in the multi-machine features;
the starting of the local interactive system and the interaction with the user based on the interaction time sequence comprise:
and interacting with the user based on the interaction time sequence of the sequencing characteristics.
Wherein the determining that the identity characteristic exists in the voice information comprises:
determining that the identity feature is a multi-machine feature;
determining that the voice information further comprises host information;
judging whether the local computer is the host computer or not according to the host computer information;
the starting of the local interactive system and the interaction with the user based on the interaction time sequence comprise:
and interacting with the user based on the interaction time sequence determined by the host.
Wherein the multi-machine feature comprises a ranking feature, and the determining that the identity feature exists in the voice message comprises:
determining that the identity feature is a multi-machine feature;
analyzing and comparing the ranking of the ranking feature of the local computer;
judging whether the local computer is a host computer or not according to the comparison result;
the starting of the local interactive system and the interaction with the user based on the interaction time sequence comprise:
and interacting with the user based on the interaction time sequence determined by the host.
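The claim above only states that the host is determined from the comparison of ranking features, without fixing the rule. A minimal Python sketch of one possible reading, assuming the device holding the smallest rank in the group becomes the host:

```python
# Illustrative sketch only: the claims say the host is decided from a
# comparison of ranking features, without fixing the rule. Here we assume
# the device holding the smallest rank in the group becomes the host.
LOCAL_RANK = 1            # this device's pre-allocated ranking feature
GROUP_RANKS = [1, 2, 3]   # ranking features of all devices in the group

def is_host(local_rank: int, group_ranks: list[int]) -> bool:
    return local_rank == min(group_ranks)

print(is_host(LOCAL_RANK, GROUP_RANKS))  # True -> this device acts as host
```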
Wherein the determining that the identity characteristic exists in the voice information comprises:
determining that the voice information comprises time sequence information;
the method for starting the local interactive system to interact with the user comprises the following steps:
and interacting with the user based on the interaction time sequence of the time sequence information.
Wherein the identity feature comprises a ranking feature, and the determining that the identity feature is present in the voice message comprises:
analyzing the ranking of the ranking features;
the method for starting the local interactive system to interact with the user comprises the following steps:
and interacting with the user based on the interaction time sequence of the sequencing characteristics.
Wherein the determining that the identity characteristic exists in the voice information comprises:
determining that the voice information comprises host information;
judging whether the local computer is the host computer or not according to the host computer information;
the method for starting the local interactive system to interact with the user comprises the following steps:
and interacting with the user based on the interaction time sequence determined by the host.
Wherein the identity feature comprises a ranking feature, and the determining that the identity feature exists in the voice message comprises:
analyzing and comparing the ranking of the ranking feature of the local computer;
judging whether the local computer is a host computer or not according to the comparison result;
the method for starting the local interactive system to interact with the user comprises the following steps:
and interacting with the user based on the interaction time sequence determined by the host.
Wherein the host determined interaction timing comprises:
if the local computer is judged to be the host computer, starting a local computer interaction system to interact with a user;
and if the local computer is judged not to be the host computer, starting the local computer interaction system to indirectly interact with the user through the host computer.
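A minimal sketch of the host-determined interaction timing just described: the host interacts with the user directly, while the other devices start their interactive systems but interact indirectly through the host. The relay function send_to_host is a hypothetical stand-in for whatever inter-device channel is used.

```python
# Sketch of host-determined interaction timing; all names are illustrative.
def interact(is_host: bool, message: str) -> None:
    if is_host:
        speak(message)          # the host interacts with the user directly
    else:
        send_to_host(message)   # non-hosts relay through the host

def speak(message: str) -> None:
    print(f"[to user] {message}")

def send_to_host(message: str) -> None:
    print(f"[to host, for relay] {message}")

interact(True, "bed 2 heart rate: 72")
interact(False, "bed 3 blood pressure: 118/76")
```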
Wherein, the identity feature comprises a sound source distance condition, and the obtaining of voice information in the environment comprises:
acquiring environmental voice information and a sound source distance value;
the determining that the identity feature exists in the voice information comprises:
determining that the distance to the sound source value satisfies the sound source distance condition;
and judging that the identity characteristics exist in the voice information.
Wherein the identity characteristic comprises a volume condition, and the obtaining of voice information in the environment comprises:
acquiring environmental voice information and a volume value;
the determining that the identity feature exists in the voice information comprises:
determining that the volume value satisfies the volume condition;
and judging that the identity characteristics exist in the voice information.
Wherein the sound source distance condition comprises a sound source distance threshold, and/or
judging that the local sound source distance value is greater than the sound source distance value of any of the remaining medical devices in the environment.
Wherein the volume condition comprises a volume threshold, and/or
judging that the local volume value is greater than the volume value of any of the remaining medical devices in the environment.
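The distance and volume variants can be sketched the same way: the identity feature is deemed present when the measured value satisfies the pre-allocated condition, i.e., a threshold and/or a comparison against the other devices' readings as stated in the claims above. The threshold numbers below are illustrative assumptions.

```python
# Sketch of the distance / volume conditions; thresholds are hypothetical.
DISTANCE_THRESHOLD = 2.0   # hypothetical sound source distance threshold
VOLUME_THRESHOLD = 60.0    # hypothetical volume threshold

def distance_condition(local: float, others: list[float]) -> bool:
    # Local value exceeds the threshold, or exceeds every other device's
    # sound source distance value (the claims allow either or both).
    return local > DISTANCE_THRESHOLD or all(local > d for d in others)

def volume_condition(local: float, others: list[float]) -> bool:
    return local > VOLUME_THRESHOLD or all(local > v for v in others)

print(distance_condition(2.5, [1.0, 1.8]))   # True -> identity feature present
print(volume_condition(55.0, [40.0, 62.0]))  # False -> stay silent
```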
After the local interactive system is started to interact with the user, the method further comprises the following steps:
obtaining voice information in an environment;
determining that group quitting information exists in the voice information;
and exiting the local interactive system and stopping interacting with the user.
After the local interactive system is started to interact with the user, the method further comprises the following steps:
obtaining voice information in an environment;
determining that networking information exists in the voice information;
starting interactive systems of other medical equipment matched with the networking information in the environment based on the networking information;
and controlling the local interaction system to interact with the user based on the interaction time sequence of the networking information.
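A hedged sketch of these two follow-up flows, handled after the local interactive system has been started; the keyword spotting and helper functions are hypothetical placeholders for the group-quit and networking information described above:

```python
# Post-activation command handling; keywords and helpers are assumptions.
def handle_followup(utterance: str) -> None:
    if "exit group" in utterance:
        # Group-quit information: leave the session and stop interacting.
        stop_interaction()
    elif "network with" in utterance:
        # Networking information: wake the matching devices, then follow
        # the interaction timing carried by the networking information.
        start_matching_devices(utterance)
        apply_interaction_timing(utterance)

def stop_interaction() -> None:
    print("local interactive system stopped")

def start_matching_devices(utterance: str) -> None:
    print("starting matching devices for:", utterance)

def apply_interaction_timing(utterance: str) -> None:
    print("applying networking interaction timing")

handle_followup("device 4 exit group")
```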
After the local interactive system is started to interact with the user, the method further comprises the following steps:
and generating and displaying feedback information to display that the local interactive system is started.
Wherein the feedback information comprises visual feedback information and/or auditory feedback information.
According to the voice interaction method during coexistence of multiple medical devices, voice information in the environment is acquired and analyzed to determine whether an identity feature is present. The medical devices may be of the same type or model, or of different types or models. The introduction of identity features gives each medical device an independent voice start command within the spatial environment. When the voice information output by the user is determined to contain its identity feature, the medical device starts the local interactive system to interact with the user. It will be appreciated that the interactive system of the medical device may include a voice interaction function, or various functions such as visual interaction and communication interaction, with which to interact with the user. The method thus uses distinct identity features to clarify the directionality of interaction with each medical device in the spatial environment, and the user can conveniently interact with a given medical device by calling out the identity feature corresponding to it. This overcomes the defects that, because medical devices of the same type or model have similar functions and the same or similar voice interaction instructions, the direction of a voice instruction is unclear and the logic of some devices becomes confused.
In a second aspect, the present application also relates to a medical apparatus comprising a processor, an input device, an output device and a storage device. The processor, the input device, the output device and the storage device are connected with each other, wherein the storage device is used for storing a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to execute the voice interaction method when the multiple medical devices coexist.
In a third aspect, the present application is also directed to a medical system comprising:
the acquisition module is used for acquiring voice information in the environment;
the analysis module is used for determining that the identity characteristics exist in the voice information, and the identity characteristics are obtained through pre-distribution;
and the control module is used for starting the local interactive system to interact with the user.
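The third-aspect structure maps naturally onto three cooperating components. A structural Python sketch under stated assumptions (method names and the wiring are illustrative, not from the patent):

```python
# Structural sketch of the third-aspect medical system; names are assumptions.
class AcquisitionModule:
    def get_voice(self) -> str:
        # Placeholder for real-time audio capture and speech recognition.
        return "device 4 start monitoring"

class AnalysisModule:
    def __init__(self, identity_features: set[str]):
        self.identity_features = identity_features  # obtained by pre-allocation
    def has_identity(self, utterance: str) -> bool:
        # Does the voice information contain an identity feature of this device?
        return bool(set(utterance.split()) & self.identity_features)

class ControlModule:
    def start_interaction(self) -> None:
        print("local interactive system started")

class MedicalSystem:
    def __init__(self, features: set[str]):
        self.acquisition = AcquisitionModule()
        self.analysis = AnalysisModule(features)
        self.control = ControlModule()
    def step(self) -> None:
        utterance = self.acquisition.get_voice()
        if self.analysis.has_identity(utterance):
            self.control.start_interaction()

MedicalSystem({"4"}).step()  # prints: local interactive system started
```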
Wherein the medical system further comprises a pairing module for obtaining a pre-assigned identity feature.
The medical system further comprises a sequencing module, and the sequencing module is used for determining the sequence of interaction between the local interactive system and a user.
The sequencing module determines the time sequence of interaction between the local interactive system and a user based on the time sequence information acquired by the acquisition module.
The sequencing module is used for determining the sequence of interaction between the local interactive system and a user based on the sequencing characteristics acquired by the pairing module.
The medical system further comprises a judging module, and the judging module is used for determining whether the local computer is used as a host computer to interact with a user.
The judging module determines whether the local computer is used as a host computer to interact with a user or not based on the host computer information acquired by the acquiring module.
The sorting module determines whether the local computer is used as a host computer to interact with a user or not based on the sorting characteristics acquired by the pairing module.
The medical system further comprises a sound source ranging module, and the sound source ranging module is used for detecting a sound source distance value.
Wherein, the medical system further comprises a volume detection module, and the volume detection module is used for detecting the volume value.
The medical system further comprises a networking module, and the networking module is used for starting the interactive systems of the rest medical equipment matched with the networking information in the environment.
The medical system further comprises a feedback module, and the feedback module is used for generating and displaying feedback information to display that the local interactive system is started.
The pairing module obtains pre-distributed identity characteristics comprising a single-machine characteristic and a multi-machine characteristic;
when the analysis module determines that the identity characteristics exist in the voice information, the analysis module determines that the voice information comprises single-computer characteristics or multi-computer characteristics;
when the control module starts a local interactive system to interact with a user, and the analysis module determines that the identity feature is a single-machine feature, the control module starts the local interactive system to interact with the user;
and when the analysis module determines that the identity feature is a multi-machine feature, the control module starts a local interactive system and interacts with a user based on an interactive time sequence.
The analysis module is used for analyzing the multi-machine characteristics and the time sequence information included in the obtained voice information when the identity characteristics exist in the voice information;
and the control module is used for starting the local interactive system and interacting with the user based on the interaction time sequence, and the sequencing module is used for controlling the local interactive system to interact with the user based on the interaction time sequence of the time sequence information.
Wherein the pairing module includes a ranking feature in obtaining pre-assigned multi-machine features. When the analysis module determines that the identity characteristics exist in the voice information, the analysis module determines that the identity characteristics are multi-machine characteristics;
the ranking module is further configured to analyze a ranking of the ranking features in the identity features;
and the control module is used for interacting with the user based on the interaction time sequence of the sequencing characteristics when the interaction system is started to interact with the user based on the interaction time sequence.
After the acquisition module acquires the voice information in the environment, the analysis module analyzes the voice information to obtain the characteristics of the multiple computers and the host information;
the judging module is used for judging whether the local computer is the host computer or not based on the host computer information;
the control module is used for interacting with a user based on the interaction time sequence determined by the host.
The pairing module obtains pre-distributed multi-machine features comprising a sequencing feature; the acquisition module is used for determining, when acquiring voice information in the environment, that the identity feature is a multi-machine feature; and the sequencing module is also used for analyzing and comparing the sequencing of the sequencing feature of the local machine;
the judging module is used for judging whether the local computer is the host computer or not according to the comparison result;
the control module is used for interacting with a user based on the interaction time sequence determined by the host.
The analysis module is used for determining that the voice information also comprises time sequence information when the identity characteristics exist in the voice information;
and the control module is used for starting the local interactive system to interact with the user, and interacting with the user based on the interaction time sequence of the time sequence information.
The identity features distributed and obtained by the pairing module comprise sorting features, and the sorting module is used for comparing the sorting of the sorting features when the identity features exist in the voice information;
and the control module is used for interacting with the user based on the interaction time sequence of the sequencing characteristics when the local interaction system is started to interact with the user.
The analysis module is used for determining that the voice information also comprises host information when the identity characteristics exist in the voice information;
the judging module is used for judging whether the local computer is the host computer or not based on the host computer information;
and the control module is used for interacting with the user based on the interaction time sequence determined by the host when the local interaction system is started to interact with the user.
The identity features distributed and obtained by the pairing module comprise sorting features, and the analysis module is used for determining that the identity features exist in the voice information, whereupon the sorting module compares the sorting of the sorting features of the local machine;
the judging module is used for judging whether the local computer is a host computer or not based on the comparison result;
and the control module is used for interacting with the user based on the interaction time sequence determined by the host when the local interaction system is started to interact with the user.
Wherein the host determined interaction timing comprises:
if the judging module judges that the local computer is the host computer, the control module starts a local computer interaction system to interact with a user;
and if the judging module judges that the local computer is not the host computer, the control module starts the local computer interaction system to indirectly interact with the user through the host computer.
The pairing module obtains pre-distributed identity characteristics including sound source distance conditions;
when the acquisition module acquires voice information in an environment, the sound source ranging module is used for acquiring a sound source distance value;
the analysis module is used for determining that the distance value to the sound source meets the sound source distance condition;
the analysis module is used for judging that the identity characteristics exist in the voice information.
The pairing module obtains pre-allocated identity characteristics including volume conditions;
when the acquisition module acquires the voice information in the environment, the volume detection module is used for acquiring the volume value of the voice information in the environment;
the analysis module is to determine that the volume value satisfies the volume condition;
the analysis module is further configured to determine that the identity characteristic exists in the voice message.
Wherein the sound source distance condition pre-allocated by the pairing module comprises a sound source distance threshold, and/or
judging that the local sound source distance value is greater than the sound source distance value of any of the remaining medical devices in the environment.
Wherein the volume condition obtained by the pairing module through pre-assignment comprises a volume threshold, and/or
judging that the local volume value is greater than the volume value of any of the remaining medical devices in the environment.
After the control module starts a local interactive system to interact with a user, the acquisition module is used for acquiring voice information in an environment;
the analysis module is used for determining that group quitting information exists in the voice information;
the control module is also used for exiting the local interactive system and stopping interaction with the user.
After the control module starts the local interactive system to interact with the user, the method further comprises the following steps:
the acquisition module is used for acquiring voice information in an environment;
the analysis module is used for determining that networking information exists in the voice information;
the networking module is used for starting an interactive system of other medical equipment matched with the networking information in the environment based on the networking information;
the control module is also used for controlling the local interactive system to interact with the user based on the interaction time sequence of the networking information.
The feedback information generated and displayed by the feedback module comprises visual feedback information and/or auditory feedback information.
In the above aspects, in a scenario where multiple medical devices with interactive systems coexist, the directionality of the user's voice information toward each medical device in the spatial environment can be made clear by calling out the identity features corresponding to the different devices. This facilitates interaction between the user and the intended medical device through its identity feature, and avoids the defects that, because medical devices of the same type or model have similar functions and the same or similar voice interaction instructions, the direction of a voice instruction is unclear and logic confusion occurs in some of the devices.
Drawings
FIG. 1 is a flow chart illustrating a method for voice interaction in coexistence of multiple medical devices according to an embodiment of the present application;
FIG. 2 is a schematic view of a medical device according to an embodiment of the present application;
FIG. 3 is a flow chart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application;
FIG. 4 is a flow chart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application;
FIG. 4a is a flow chart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application;
FIG. 5 is a flow chart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application;
FIG. 5a is a flow chart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application;
FIG. 6 is a flow chart of a method for voice interaction in coexistence of multiple medical devices in another embodiment of the present application;
FIG. 6a is a flow chart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application;
FIG. 7 is a flow chart of a method for voice interaction in coexistence of multiple medical devices in another embodiment of the present application;
FIG. 8 is a flow chart of a method for voice interaction in coexistence of multiple medical devices in another embodiment of the present application;
FIG. 8a is a flow chart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application;
FIG. 9 is a flow chart of a method for voice interaction in coexistence of multiple medical devices in another embodiment of the present application;
FIG. 10 is a flow chart of a method for voice interaction in coexistence of multiple medical devices in another embodiment of the present application;
FIG. 11 is a flow chart illustrating a method for voice interaction during coexistence of multiple medical devices according to another embodiment of the present application;
FIG. 12 is a flow chart illustrating a method for voice interaction during coexistence of multiple medical devices according to another embodiment of the present application;
FIG. 13 is a flow chart of a method for voice interaction in coexistence of multiple medical devices in another embodiment of the present application;
FIG. 13a is a flow chart of a method for voice interaction in coexistence of multiple medical devices in another embodiment of the present application;
FIG. 14 is a schematic illustration of a medical system in an embodiment of the present application;
FIG. 15 is a schematic illustration of a medical system in another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Please refer to the flowchart of a voice interaction method when multiple medical devices coexist shown in FIG. 1, and the medical device 100 corresponding to the voice interaction method when multiple medical devices coexist shown in FIG. 2. The voice interaction method during coexistence of multiple medical devices comprises the following steps:
and S10, acquiring the voice information in the environment.
Specifically, a plurality of medical devices 100 exist in the spatial environment, and each of the medical devices 100 includes an interactive system 110, so that the medical devices 100 implement interactive functions with the user through the interactive system 110. It will be appreciated that the interactive system 110 of the medical device 100 may include voice interactive functionality, or various functionality such as visual interactive functionality, communication interaction, etc. to interact with the user. When the interactive system 110 of the medical device 100 has the voice interactive function, the voice information of the user can be directly acquired in real time through the interactive system 110. Alternatively, when the medical device 100 does not have the voice interaction function, or it is not convenient to acquire the voice information of the user through it, the medical device 100 may monitor the voice information of the user in the spatial environment by using a dedicated audio acquisition device. For convenience of description, the present application is developed based on the interactive system 110 of the medical device 100 having a voice interactive function; a medical device 100 employing the audio acquisition apparatus operates on a similar principle and is not affected by the absence of the voice interaction function. The interactive systems 110 of the plurality of medical devices 100 each obtain voice information of users within the environment in real time: because the medical device 100 has the interactive system 110, and the interactive system 110 can interact with the user, the interactive system 110 has the function of acquiring the voice information of the user.
S20, determining that the identity characteristics exist in the voice information, wherein the identity characteristics are obtained through pre-allocation.
Specifically, after obtaining the voice information of the user in the spatial environment, the medical device 100 needs to analyze whether the voice information contains an identity feature corresponding to the local device. The voice information includes all dialogues of all users in the spatial environment; as long as the voice information called out by any user includes the identity feature pre-allocated to the medical device 100, the medical device 100 can determine, by analyzing the voice information, that it contains the identity feature corresponding to the local device.
The identity feature needs to be obtained by pre-allocation. Before interacting with a plurality of medical devices 100 with interactive functions in a spatial environment, a user needs to set a specific identity feature for each medical device 100. Each identity feature has a unique identifier that is different from other identity features, so that when a user calls out the identity feature corresponding to the unique identifier in the spatial environment, the identity feature can be matched with the medical device 100 corresponding to it, and the user can interact with that medical device 100.
The assignment of identity features may be done in the spatial environment. After multiple medical devices 100 are placed in the same spatial environment, the user may assign identity features to each medical device 100 based on the specific number and type of medical devices 100 present. The assignment may be accomplished by a variety of means, such as a key press, touch input, or voice input on each medical device 100: the medical device 100 can confirm an identity feature when the user calls it out, or the user can input characters and numbers on the medical device 100, which the device then converts into the identity feature corresponding to the local device. The simplest implementation may be to number each medical device 100 in the spatial environment one by one: if nine medical devices 100 with the voice interaction function exist in the spatial environment at the same time, the numbers 1 to 9 are input to the nine medical devices 100 in sequence as their identity features. It can be understood that the number obtained by any one of the medical devices 100 is the identity feature corresponding to that device, and is a unique identifier distinguishing it from the other medical devices 100. If the voice information called out by the user includes the number "4", this represents that the user has called out voice information with the identity feature "4"; the call then corresponds to the medical device 100 whose identity feature is defined as "4" among the nine medical devices 100 in the spatial environment, and the other eight medical devices 100 numbered "1, 2, 3, 5, 6, 7, 8, 9" are considered not selected, because the identity feature corresponding to their local device was not detected.
Identity features can also be allocated by way of factory original settings. Each medical device 100 may obtain an identity ID corresponding to the local device at the time of factory shipment, and this identity ID serves as an identification code distinguishing the medical device 100 from any other medical device 100 on the market, so that it can be distinguished in any spatial environment. By similar logic, the identity ID is used as the unique identifier of the medical device 100, or a separate encoding mechanism is established from the identity ID to serve as the unique identifier, so that the identity feature corresponding to the local device is obtained through pre-allocation at the factory stage of the medical device 100. It will be appreciated that, since the medical device 100 obtains its identity feature at the factory stage, i.e., by pre-assignment, differently from any other medical device 100, the pre-assigned identity feature is already distinct among the multiple medical devices 100 in any spatial environment.
And S30, starting the local interactive system 110 to interact with the user.
Specifically, after determining that a user in the spatial environment has called out a message containing the identity feature corresponding to the local device, the medical device 100 initiates the local interactive system 110 to interact with the user. It should be noted that initiating the interaction of the local interactive system or the medical device with the user, as referred to in the present application, means initiating the interactive function of the interactive system or the medical device with the user. For example, starting the interaction between the local interactive system 110 and the user in this embodiment means starting the interactive function between the local interactive system 110 and the user. As described above, the medical device 100 may acquire voice information in the spatial environment through the interactive system 110, so the interactive system 110 may already be performing the task of voice information acquisition before interacting with the user. In that case, it is not the interactive system 110 itself that is initiated upon determining that the identity feature corresponding to the local device is present in the voice information; rather, it is the interactive function of the interactive system 110 with the user that is initiated by that determination.
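The S10-S30 flow above reduces to a small amount of control logic. The following is a minimal Python sketch under stated assumptions: speech has already been recognized to text, and the names IDENTITY_FEATURES, handle_utterance, and start_interaction are illustrative rather than taken from the patent.

```python
# Minimal sketch of steps S10-S30; all names here are assumptions.
IDENTITY_FEATURES = {"4"}   # hypothetical: this device was pre-assigned "4"

def start_interaction() -> None:
    # S30: start the local interactive system's user-facing functions
    # (voice, visual, communication, etc.).
    print("interactive system 110 engaged")

def handle_utterance(utterance: str) -> None:
    # S10 has produced `utterance`; S20 checks it for our identity feature.
    if set(utterance.split()) & IDENTITY_FEATURES:
        start_interaction()
    # Otherwise the device keeps monitoring silently; it never responds to
    # commands aimed at other devices.

handle_utterance("4 report blood pressure")   # engages this device
handle_utterance("7 report blood pressure")   # ignored: not our feature
```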
Therefore, according to the voice interaction method when multiple medical devices coexist, the medical devices 100 are distinguished by allocating corresponding identity features to the medical devices 100 in the spatial environment, so that the user can start the interaction function of the interaction system 110 of the corresponding medical device 100 by calling out the identity feature of the target medical device 100, achieving directional interaction with the medical devices 100 in the spatial environment. Each medical device 100 in the space obtains the identity feature corresponding to its local device through pre-allocation, and interacts with the user only after determining that the voice information called out by the user contains that identity feature. When not activated, the medical device 100 remains in a state in which its interaction function is not started; this avoids the problem that, when the user interacts with other medical devices 100 in the spatial environment, the local device responds to or executes voice commands the user directed at those other devices and thereby disturbs its own normal work.
It should be noted that the voice information called out by the user may, besides the identity feature, also define an operation to be performed by the medical device 100 corresponding to that identity feature, or a specific function to be activated, which can be controlled by the interactive system 110 of the medical device 100. In that case, the medical device 100, upon determining that the voice information includes its identity feature, also receives an instruction to start the operation or function corresponding to the local device, and may simultaneously start the interactive system 110 and activate the corresponding function directly from the user's voice information. For example, after receiving the voice information and determining the identity feature corresponding to the local device, the medical device 100 directly starts its data acquisition function, or feeds the acquired sign data back to the user through a display screen or voice broadcast. Having determined the identity feature of the local device, the medical device 100 concludes that the user's start instruction is directed at the local device and therefore starts the corresponding function. This can also be regarded as the user first activating the interactive system 110 by means of the identity feature and then activating the corresponding function of the medical device 100 through the interactive system 110. An interaction in which the interactive system 110 completes the start of a corresponding function without giving a voice response therefore also falls within the scope of the claimed voice interaction method when multiple medical devices coexist: completing the start of the corresponding function without responding to the user is likewise a way of interacting with the user after the interactive system 110 has been started by the identity feature.
It is understood that the user may interact one-to-one with only one of the multiple medical devices 100 in the spatial environment, or may initiate several medical devices 100 at a time for batch interaction. Because the voice interaction method when multiple medical devices coexist makes it relatively convenient to start the target medical device 100, the user can, when activating medical devices 100 through identity features, arbitrarily select and interact with one or more required medical devices 100 according to actual needs.
Medical devices 100 with a voice interaction function are currently well suited to one-to-one stand-alone voice interaction. The interactive system 110 of most medical devices 100 is likewise designed around one-to-one interaction logic, with instructions that are relatively compact and easy to execute. To implement batch operation of multiple medical devices 100 by voice commands, a relatively complex and detailed set of interaction logic needs to be designed in terms of interaction timing, function correspondence, and the like. The medical devices 100 also need to be configured adaptively for batch operation, so as to meet the user's need to interact with multiple medical devices 100 simultaneously in a spatial environment.
Therefore, please refer to the embodiment of FIG. 3. In the present embodiment, the interactive system 110 of the medical device 100 is provided with both a single-machine interactive mode and a multi-machine interactive mode. Correspondingly, the voice interaction method when multiple medical devices coexist comprises the following steps:
and S10a, acquiring voice information in the environment.
S20a, determining that the voice information comprises a single-machine feature or a multi-machine feature, wherein the identity features are obtained through pre-allocation.
Specifically, the identity features obtained by the medical device 100 through pre-allocation may include a single-machine feature, used only for the single-machine interaction mode, and multi-machine features, used for the multi-machine interaction mode. That is, the identity features pre-assigned to the medical device 100 are not limited to one; there may be several, of which at least one is a single-machine feature and the others are multi-machine features. The multi-machine features may be understood as corresponding to different groupings between the same medical device 100 and different other medical devices 100. When the user calls out different multi-machine features, each of them, as an identity feature, can correspond to the same medical device 100 and thus start the interactive system 110 of that device; however, as the multi-machine feature differs, the other medical devices 100 that interact with the user together with this same device may differ. Put another way, by calling out different multi-machine features, the user performs batch operations on different preset groups of medical devices 100 in the spatial environment, each preset group containing the same medical device 100 corresponding to that identity feature.
S30a, when the identity feature is determined to be a single-machine feature, starting the local interactive system 110 to interact with the user;
when the identity feature is determined to be a multi-machine feature, starting the local interactive system 110 and interacting with the user based on the interaction timing sequence.
Specifically, in the present embodiment, for ease of distinction, the interactive system 110 of the medical device 100 is set to have both a single-machine interactive mode and a multi-machine interactive mode, which are started or switched through the pre-allocated single-machine and multi-machine features. That is, after acquiring the voice information called out by the user, the medical device 100 not only determines that it includes an identity feature, but also analyzes whether that identity feature is a single-machine feature or a multi-machine feature. When the identity feature is determined to be a single-machine feature, the local interaction system 110 can be directly started to enter the single-machine interaction mode and interact with the user; when the identity feature is determined to be a multi-machine feature, the local interactive system 110 is started and interacts with the user based on the interaction timing sequence.
In other embodiments, no division into interaction modes need be made. That is, when the local interaction system 110 is enabled, the medical device 100 interacts with the user merely according to whether the identity feature is a single-machine feature or a multi-machine feature. It will be appreciated that when the identity feature is determined to be a single-machine feature, the medical device 100 can interact with the user one-to-one, with instructions that are relatively compact and easy to execute. When the identity feature is determined to be a multi-machine feature, the interaction between the medical device 100 and the user needs to follow a certain interaction timing, which avoids the difficulty a user would have in receiving information if several simultaneously started medical devices 100 all interacted at the same time. In this case the medical device 100 does not distinguish a single-machine or multi-machine interaction mode; the only difference is whether the interaction timing is followed. Of course, the division into interaction modes in this embodiment is convenient for understanding and expresses the applicant's scheme clearly.
As mentioned above, the single-machine interaction mode of the medical device 100 is relatively straightforward: after determining the single-machine feature corresponding to the local device, the medical device 100 directly initiates the local interaction system 110 to interact with the user. The scenario of multi-machine batch interaction is more complex: when operating on the interaction timing, a certain allocation of time slots is made between the medical device 100 and the other medical devices 100 started together in the same group, avoiding situations in which the user cannot hear clearly because several medical devices 100 respond by voice at once, or cannot conveniently receive monitoring data fed back by several medical devices 100 simultaneously. Thus, the embodiment of FIG. 3 provides the convenience of a user interacting in bulk with multiple medical devices 100 in a spatial environment.
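As a concrete reading of the S20a/S30a branching, here is a hedged Python sketch. The feature values mirror medical device 1 of Table 1 below; the slot-based delay is only one possible realization of interacting based on the interaction timing sequence, and all names are assumptions.

```python
# Hypothetical sketch of S20a/S30a: a single-machine feature triggers
# immediate one-to-one interaction; a multi-machine feature starts the
# interactive system but defers the response to this device's time slot.
import time

SINGLE_FEATURE = "A"               # e.g. medical device 1 in Table 1 below
MULTI_FEATURES = {"1", "3", "4"}   # groups this device belongs to
GROUP_SLOT = 0                     # this device's turn within its group
SLOT_SECONDS = 2.0                 # hypothetical per-device response window

def on_identity_feature(feature: str) -> None:
    if feature == SINGLE_FEATURE:
        interact()                             # single-machine mode
    elif feature in MULTI_FEATURES:
        time.sleep(GROUP_SLOT * SLOT_SECONDS)  # wait out earlier devices
        interact()                             # multi-machine mode, in turn

def interact() -> None:
    print("local interactive system engaged")
```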
See Table 1 for an application scenario. The scenario corresponds to a ward containing three sickbeds, each equipped with one medical device 100, i.e., three medical devices 100: medical devices 1, 2 and 3. The allocation of identity features for the three medical devices 100 is shown in Table 1:
Medical device | Single-machine feature | Multi-machine features
Medical device 1 | A | 1, 3, 4
Medical device 2 | B | 2, 3, 4
Medical device 3 | C | 1, 2, 4
TABLE 1
As can be understood from the assignment of identity features in Table 1, the identity features obtained by medical device 1 through pre-allocation include four: "A, 1, 3, 4". The letter "A" is the single-machine feature among the identity features of medical device 1, and the numbers "1, 3, 4" are its three multi-machine features. When the voice information called out by the user includes any one of the identity features "A, 1, 3, 4", the interaction system 110 of medical device 1 is started correspondingly and interacts with the user. Further, when the user calls out the letter "A", medical device 1 determines that the identity feature is a single-machine feature, and its interactive system 110 enters the single-machine interactive mode to interact with the user. It will be understood that the letters "B" and "C" correspond to medical devices 2 and 3, respectively: letter "B" is the single-machine feature of medical device 2, and letter "C" is the single-machine feature of medical device 3. Medical devices 2 and 3 do not detect an identity feature corresponding to their local devices, and their interaction systems 110 do not start the interaction function, so the user can interact with medical device 1 alone. Similarly, when the identity feature called out by the user is the letter "B" or the letter "C", the interactive system 110 of medical device 2 or medical device 3 is activated.
For medical device 2, the multi-machine features obtained by pre-allocation include the numbers "2, 3, 4". When the user calls out the number "3", since "3" is simultaneously a multi-machine feature of medical devices 1 and 2, it starts the interactive systems 110 of both, and the user can then operate medical devices 1 and 2 in batch through their respective interactive systems 110. That is, the number "3" groups medical devices 1 and 2 together: both have "3" as a multi-machine feature, and under this feature they can interact with the user independently of medical device 3.
For medical device 3, the multi-machine features obtained by pre-allocation include the numbers "1, 2, 4". When the user calls out the number "1", medical device 3 is activated together with the interactive function of the interactive system 110 of medical device 1, facilitating batch interaction with medical devices 1 and 3. When the user calls out the number "4", since medical devices 1, 2 and 3 are all assigned "4" as a multi-machine feature, the interaction systems 110 of all the medical devices 100 in the ward are activated, and the user can interact with all of them in batch.
In this scenario, the user may be a medical professional. Based on the actual number of patients and their distribution across the three sickbeds, the medical staff can combine and interact with the three medical devices 100 at will by calling out the corresponding identity features. For example, when only one sickbed in the ward has a patient, the medical staff calls out the identity feature of the medical device 100 beside that sickbed to interact by voice with its interaction system 110, performing operations such as acquiring sign data, reporting sign data, and scheduling regular sign data acquisition for the patient. When two or three sickbeds have patients, the medical staff can simultaneously start the interactive functions of the interactive systems 110 of two or three medical devices 100 by calling out the corresponding multi-machine feature, and perform the above operations by batch interaction with the interactive systems 110.
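The Table 1 allocation can be expressed directly as data. In this illustrative Python snippet, the helper activated_by is hypothetical; it simply resolves which devices a called-out identity feature would start.

```python
# Table 1 as data; activated_by is a hypothetical illustration helper.
ASSIGNMENT = {
    "medical device 1": {"A", "1", "3", "4"},
    "medical device 2": {"B", "2", "3", "4"},
    "medical device 3": {"C", "1", "2", "4"},
}

def activated_by(feature: str) -> list[str]:
    return sorted(dev for dev, feats in ASSIGNMENT.items() if feature in feats)

print(activated_by("A"))  # ['medical device 1'] -- single-machine
print(activated_by("3"))  # ['medical device 1', 'medical device 2']
print(activated_by("4"))  # all three devices: the whole ward
```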
The identity assignment of Table 1 is better suited to a small number of medical devices 100 in a spatial environment, where simple number or letter assignment satisfies the setting of single-machine and multi-machine features. When there are many medical devices 100 in the spatial environment, the identity assignment method of Table 2 can be used:
Bed | Sign data acquired | Single-machine feature | Multi-machine features
A | Electrocardiogram | A1 | A, 1
A | Blood pressure | A2 | A, 2
A | … | … | …
A | Nth sign data | AN | A, N
B | Electrocardiogram | B1 | B, 1
B | Blood pressure | B2 | B, 2
B | … | … | …
B | Nth sign data | BN | B, N
TABLE 2
Unlike the allocation of Table 1, the identity assignment of Table 2 first groups the multiple medical devices 100 in the spatial environment into large groups, and then identifies each medical device 100 within a group by a number. Again taking a ward as an example, suppose there are two sickbeds, each with N medical devices 100 that respectively acquire different sign data of the patient on that sickbed, each single medical device 100 collecting one kind of sign data such as electrocardiogram, blood pressure, heart rate, or body temperature. It can then be defined that the medical devices 100 acquiring sign data of one patient share "A" as a multi-machine feature, while the medical devices 100 acquiring the other patient's sign data share "B". Then, according to the function of each medical device 100 at bed "A" and bed "B", a number is appended to distinguish the devices of different functions within the two groups, giving each medical device 100 a single-machine feature such as medical device A1, medical device B2, and so on.
Further, the medical devices 100 with the same function at the two hospital beds are given a unified numeric multi-machine feature: for example, medical device A1 and medical device B1, which acquire electrocardiographic sign data, are both given the number "1" as a multi-machine feature, and medical device A2 and medical device B2, which acquire blood pressure, are both given the number "2", and so on until medical device AN and medical device BN are both given the number "N". The numbers "1" to "N" are thereby defined as the acquisition functions for the different kinds of sign data.
According to the identity feature assignment of Table 2, when there is only one patient in the ward, the medical staff can interact one-by-one with the medical devices 100 at that hospital bed simply by calling out the single-machine feature of each device. Alternatively, the medical staff calls out the multi-machine feature "A" or "B" corresponding to the hospital bed to start the interactive functions of the interactive systems 110 of all the medical devices 100 at that bed simultaneously, for batch interaction. When there are two patients in the ward at the same time, the medical staff can start the two same-function medical devices 100 at the two sickbeds simultaneously by calling out the number, and collect the same sign data of both patients in batch through voice interaction.
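A short sketch of the Table 2 scheme, assuming a concrete function list for illustration: the bed letter groups all devices serving one patient, while the function number groups same-function devices across beds.

```python
# Sketch of the Table 2 grouping; the function list is an assumption.
FUNCTIONS = ["electrocardiogram", "blood pressure", "heart rate", "body temperature"]

def features_for(bed: str, n: int) -> dict:
    # Single-machine feature "A1".."BN" plus two group features per device.
    return {"single": f"{bed}{n}", "multi": {bed, str(n)}}

for n, fn in enumerate(FUNCTIONS, start=1):
    print(fn, features_for("A", n), features_for("B", n))
# Calling "A" starts every device at bed A; calling "1" starts the
# electrocardiogram device at both beds; calling "A1" starts one device.
```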
Therefore, according to the voice interaction method during coexistence of multiple medical devices, after reasonable grouping planning is performed on eachmedical device 100 in a spatial environment, through simple identity feature allocation, the convenience that a user accurately points to start the interaction function of theinteraction system 110 of themedical device 100 and simultaneously starts the interaction functions of theinteraction systems 110 of the multiplemedical devices 100 for batch interaction is provided. Through scientific and reasonable grouping planning, the voice interaction method when multiple medical devices coexist can deal with any number of single-machine interaction or multi-machine interaction scenes of themedical devices 100 in the space environment, simultaneously avoids the situation that themedical devices 100 have disordered instruction receiving, ensures that the user can effectively receive the feedback of themedical devices 100, and simplifies the complicated repeated interaction process of the user in the interaction function using process.
It should be noted that the assignment of tables 1 and 2 takes the form of a combination of numbers and letters. However, the voice interaction method when multiple medical devices coexist in the present application is not limited to the specific setting content of the identity feature. For example, the user may also assign identity to themedical device 100 using functional words. For example, the words such as "blood pressure", "electrocardiogram", "heart rate", etc. correspond to the functions, and then also simplify the assignment logic, so that the identity features can correspond to the functions of themedical device 100, which is convenient for the user to memorize. In addition, the user can set any words, letters, numbers or any combination of forms to complete the setting of the identity characteristics. The assignment of the identity feature is referred to herein as the identity feature assignment, as long as the effect that the identity feature is distinguished from one or more othermedical devices 100 is achieved, so that the user can correspondingly activate themedical device 100 by calling the identity feature.
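To make the matching step concrete, the following Python sketch shows one way a medical device 100 might test detected speech against its pre-assigned features. The class name, method names, and token-matching strategy are illustrative assumptions, not part of the original disclosure:

```python
class IdentityMatcher:
    def __init__(self, single_feature, multi_features):
        # Features pre-assigned to this device, e.g. "A1" and {"A", "1"}
        # under the Table 2 scheme.
        self.single_feature = single_feature
        self.multi_features = set(multi_features)

    def match(self, transcript):
        """Return which identity feature, if any, the speech contains."""
        tokens = transcript.upper().split()
        if self.single_feature.upper() in tokens:
            return ("single", self.single_feature)
        for feature in sorted(self.multi_features):
            if feature.upper() in tokens:
                return ("multi", feature)
        return None

matcher = IdentityMatcher("A1", {"A", "1"})
print(matcher.match("medical device A1 report blood pressure"))  # ('single', 'A1')
print(matcher.match("group A start acquisition"))                # ('multi', 'A')
```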
Referring to the embodiment of fig. 4, fig. 4 is a flowchart illustrating a voice interaction method for coexisting multiple medical devices according to another embodiment of the present invention. In this embodiment, the identity features include a single-machine feature and a multi-machine feature. The method comprises the following steps:
S10b, acquiring voice information in the environment.
S21b, determining that the voice information includes a multi-machine feature, wherein the identity feature is obtained through pre-allocation.
S22b, determining that the voice information also includes timing information.
Specifically, the identity feature included in the voice information called out by the user is a multi-machine feature, and the voice information also includes timing information. The timing information is acquired by the medical device 100 along with the multi-machine feature.
S30b, starting the local interaction system 110 to interact with the user in sequence based on the interaction timing of the timing information.
Specifically, the medical device 100 interacts with the user based on the interaction timing after determining the multi-machine feature corresponding to the local device. Because the multi-machine feature enables the interaction systems 110 of multiple medical devices 100 at once, if those interaction systems 110 all interacted with the user at the same time, several of them could give voice feedback simultaneously and interfere with the user's reception of information. To avoid this, the user can order the medical devices 100 corresponding to the multi-machine feature by calling out timing information together with the multi-machine feature, and the medical devices 100 then interact with the user in turn based on the ordering in the timing information.
The timing information may describe, in real time, the ordering of the multiple medical devices 100 that need to interact simultaneously. For example, under the assignment of Table 1, the user may order medical devices 1, 2, and 3 by voice while calling out the multi-machine feature "4", for instance by saying "answer in the order of medical device 3, medical device 2, and medical device 1". The phrase "answer in the order of medical device 3, medical device 2, and medical device 1" can then be analyzed and determined to be the timing information. Because all the medical devices 100 obtain the same timing information, they all interact with the user according to the same ordering criterion.
It should be noted that the medical devices 100 interacting with the user in sequence based on the timing information does not mean that every information exchange of each medical device 100's interaction system 110 must occur strictly in turn. The medical devices 100 may be configured to interact sequentially based on the timing information only when reporting information related to locally measured vital sign data, or when sequential reporting is otherwise required to avoid interfering with the user's reception. This further saves interaction time and improves efficiency.
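As an illustration of this turn-taking, the sketch below has each device parse the spoken ordering and wait for its slot. The parsing pattern and the fixed time slot per device are assumptions made for the sketch; in practice a completion handover of the kind discussed in a later embodiment would replace the fixed slot:

```python
import re
import time

def parse_timing(transcript):
    """Extract the spoken ordering, e.g. from 'answer in the order of
    medical device 3, medical device 2, medical device 1'."""
    return [int(n) for n in re.findall(r"medical device (\d+)", transcript)]

def report_in_turn(device_id, transcript, report_fn, slot_seconds=0.5):
    order = parse_timing(transcript)
    if device_id not in order:
        return  # this device was not addressed by the timing information
    # Wait out the slots of the devices ranked ahead, then report.
    time.sleep(order.index(device_id) * slot_seconds)
    report_fn()

report_in_turn(
    2,
    "answer in the order of medical device 3, medical device 2, medical device 1",
    lambda: print("device 2: heart rate 72 bpm"),
)
```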
Referring to the embodiment of fig. 5, fig. 5 is a flowchart illustrating a voice interaction method for coexisting multiple medical devices according to another embodiment of the present invention. In this embodiment, the identity features include a single-machine feature and a multi-machine feature, and the multi-machine feature further includes a ranking feature. The method comprises the following steps:
S10c, acquiring voice information in the environment.
S21c, determining that the voice information includes a multi-machine feature, wherein the multi-machine feature is obtained through pre-allocation.
S22c, analyzing the priority of the ranking features within the multi-machine feature.
Specifically, in this embodiment, a ranking feature is preset together with the multi-machine feature at the time the identity features are assigned. The ranking feature is similar to the timing information in the embodiment of fig. 4, except that the timing information is set by the user in the outgoing voice information, while the ranking feature is preset at the identity feature assignment stage. Compared with timing information, the ranking feature spares the user from additionally specifying a response order for the medical devices 100 every time several medical devices 100 are started through a multi-machine feature, and is therefore more convenient to use.
The identity feature contained in the voice information called out by the user is a multi-machine feature. After obtaining the multi-machine feature, the medical device 100 determines the local device's priority position for interaction among the medical devices 100, based on the multi-machine feature and the priority of the corresponding local ranking feature within that multi-machine feature.
S30c, starting the local interaction system 110 and interacting with the user in sequence based on the interaction timing of the ranking feature.
Specifically, the execution of the preset ranking feature after the medical device 100 starts the local interaction system 110 is similar to the embodiment of fig. 4. Because the order in which each medical device 100 interacts with the user is determined by the ranking feature, this embodiment likewise avoids the drawback of multiple interaction systems 110 giving voice feedback to the user at the same time and interfering with the user's reception of information.
Corresponding to the method of this embodiment, take Table 2 as an example: at the identity feature assignment stage, the numbers "1" through "N" in medical devices A1-AN may be defined as ranking features. After the user calls out the multi-machine feature "A", the interaction systems 110 of medical devices A1-AN are all enabled to interact with the user. In step S22c, medical devices A1-AN each analyze the local number "1" through "N" to determine the local device's position in the interaction order of the multi-machine mode corresponding to the multi-machine feature "A"; the devices then interact with the user in sequence, from medical device A1 through medical device A2 to medical device AN.
It will be appreciated that during the interaction there is a handover problem between each medical device 100 and the next. For example, medical device A2 needs to receive a signal that the interaction with medical device A1 is complete in order to determine that the interaction timing has now passed to the local device. There are many ways to solve this. For example, after completing its interaction, medical device A1 may emit an interaction-completion signal to hand the interaction timing over to medical device A2; or the user may signal completion of the interaction to hand the timing over to medical device A2. The interaction-completion signals may all be the same instruction: medical device A2 determines that the interaction timing has passed to the local device after receiving one interaction-completion signal, medical device A3 determines the same after counting two interaction-completion signals, and so on. The interaction-completion signal may also be set individually for different medical devices 100; for example, medical device A1 may announce "medical device A2, please start responding" after completing its interaction, so that the signal carries the identity or name of medical device A2. Of course, if multiple medical devices 100 are connected to a server at the same time, the relevant control can be handled by the server's scheduling.
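The counting-based handover just described can be sketched as follows; the class and signal names are hypothetical:

```python
class HandoverListener:
    def __init__(self, rank):
        self.rank = rank            # 1-based position from the ranking feature
        self.completions_heard = 0  # identical "interaction complete" signals

    def on_interaction_complete(self):
        self.completions_heard += 1

    def is_my_turn(self):
        # The device at rank k speaks after hearing k-1 completion signals.
        return self.completions_heard == self.rank - 1

a2 = HandoverListener(rank=2)
print(a2.is_my_turn())        # False: A1 has not finished yet
a2.on_interaction_complete()  # A1 signals that its interaction is complete
print(a2.is_my_turn())        # True: the interaction timing passes to A2
```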
Referring to the embodiment of fig. 6, in the present embodiment, the identity feature includes a single-machine feature and a multi-machine feature. The voice interaction method during coexistence of multiple medical devices comprises the following steps:
S10d, acquiring voice information in the environment.
S21d, determining that the identity feature is a multi-machine feature, wherein the identity feature is obtained through pre-allocation.
S22d, determining that the voice information also includes host information.
S23d, judging whether the local device is the host based on the host information.
Specifically, when the user calls out information containing a multi-machine feature to perform batch interaction, the user also calls out content including host information, so that the medical device 100 obtains the multi-machine feature, obtains the host information through analysis, and performs the corresponding operation and response. Based on the host information, any one of the medical devices 100 corresponding to the multi-machine feature may be set as the host. The present application does not limit the specific content of the host information: anything that designates one medical device 100 among the several medical devices 100 as the host, distinguishing it from the rest, can be regarded as host information. For example, when calling out a multi-machine feature, the user may also call out the single-machine feature of one of the medical devices 100, and the medical device 100 corresponding to that single-machine feature is defined as the host.
S30d, starting the local interaction system 110 to interact with the user based on the interaction timing determined by the host.
The medical device 100 defined as the host interacts with the user through the local interaction system 110 based on the interaction timing. At this point, all the medical devices 100 corresponding to the multi-machine feature interact with the user only through the medical device 100 defined as the host. By setting a host, the user needs to interact with only one medical device 100 while retaining the convenience of batch interaction with several medical devices 100: the user's voice instruction input and the voice feedback of all the medical devices 100 pass through the same medical device 100. This arrangement also avoids the poor information reception, logical confusion, and similar drawbacks that easily arise when a user interacts with several medical devices 100 simultaneously in the same spatial environment. The multi-machine interaction mode with a host can be understood as the user performing single-machine interaction with the medical device 100 set as the host, thereby achieving batch instruction input to, and information feedback from, multiple medical devices 100.
Further, with a host set, while the user interacts with the host medical device 100 through its interaction system 110, an instruction from the user that requires batch operation may be received by the host medical device 100 and then distributed one by one to the other medical devices 100, or the other medical devices 100 may directly obtain the user's voice information and receive the instruction themselves.
Specifically, when interacting with the user based on the interaction timing determined by the host, there is also the case in which the local device is not the host indicated by the host information. That is, the judgment result of step S23d may be that the local device is not the host. Referring to the flowchart of fig. 7, in the embodiment corresponding to fig. 6, a medical device 100 that is started by the same multi-machine feature but is not set as the host can complete its interaction with the user through the following steps:
S10e, acquiring voice information in the environment.
S21e, determining that the identity feature is a multi-machine feature, wherein the identity feature is obtained through pre-allocation.
S22e, determining that the voice information also includes host information.
S23e, determining that the local device is not the host based on the host information.
S30e, starting the local interaction system 110 and, based on the interaction timing determined by the host, interacting with the user indirectly through the host.
Specifically, this embodiment corresponds to the case in which a medical device 100 in the embodiment of fig. 6 is started by the multi-machine feature but is not set as the host. After the local interaction system 110 is started, the medical device 100 that is not set as the host interacts with the user indirectly through the host during the subsequent interaction. That is, the local interaction system 110 is used only to receive the instructions in the voice information called out by the user and does not answer the user directly. During the batch interaction, none of the medical devices 100 corresponding to the multi-machine feature, other than the medical device 100 set as the host, emits sound feedback through devices such as a loudspeaker, so that the user receives sound feedback only from the host medical device 100, and the interaction between the user and the host is not interfered with.
In this embodiment, since a non-host medical device 100 does not respond to the user's voice directly in the multi-machine interaction mode, when the user needs feedback from a non-host medical device 100, that medical device 100 can use its communication connection with the host medical device 100 to deliver the content to the user through the host. The non-host medical device 100 may also provide feedback to the user through other feedback means, such as a display or indicator lights.
Described in another way, in the embodiments of fig. 6 and 7, interacting based on the interaction timing determined by the host covers two cases (sketched in code after this list):
1. if the local device is judged to be the host, starting the local interaction system 110 to interact with the user directly;
2. if the local device is not the host, starting the local interaction system 110 to interact with the user indirectly through the host.
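A minimal sketch of this two-branch decision follows, assuming hypothetical class and method names (the original discloses the behavior, not an API):

```python
class InteractionSystem:
    def start(self):
        print("interaction system started")

    def mute_voice_feedback(self):
        print("voice feedback muted; instructions still received")

class Device:
    def __init__(self, single_feature):
        self.single_feature = single_feature
        self.interaction_system = InteractionSystem()
        self.role = None

def on_multi_feature(device, host_feature):
    """Branch taken when a multi-machine feature plus host information arrives."""
    device.interaction_system.start()
    if device.single_feature == host_feature:
        device.role = "host"    # answers the user directly
    else:
        device.role = "member"  # routes feedback through the host
        device.interaction_system.mute_voice_feedback()

d = Device("A2")
on_multi_feature(d, host_feature="A1")
print(d.role)  # member
```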
Referring to the embodiment of fig. 8, in the present embodiment, the identity feature includes a single-machine feature and a multi-machine feature, and the multi-machine feature includes a ranking feature. The voice interaction method for coexisting multiple medical devices comprises the following steps:
S10f, acquiring voice information in the environment.
S21f, determining that the voice information includes a multi-machine feature, wherein the identity feature is obtained through pre-allocation.
S22f, analyzing and comparing the priority of the local device's ranking feature.
S23f, judging whether the local device is the host according to the comparison result.
S30f, starting the local interaction system 110 to interact with the user based on the interaction timing determined by the host.
Specifically, in this embodiment, the multi-machine interaction between the medical devices 100 and the user is also conducted by setting up a host. This time, however, the user does not need to designate a host while calling out the multi-machine feature; instead, ranking features are set when the identity features are assigned, and the medical devices 100 corresponding to the same multi-machine feature compare their priorities through the ranking features, so that the medical device 100 whose ranking feature has the highest priority is automatically elected as the host to interact with the user. It will be appreciated that the ranking features in this embodiment may be the same as those in the embodiment of fig. 5; how the ranking features are used depends on the preset multi-machine interaction mode or the specific implementation of the interaction timing, and the two uses do not conflict.
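The election itself reduces to a comparison of pre-assigned ranks. A minimal sketch, assuming the ranking feature is represented as a number where a smaller value means higher priority:

```python
def elect_host(ordering):
    """ordering: device name -> numeric rank (smaller = higher priority)."""
    return min(ordering, key=ordering.get)

group_a = {"A1": 1, "A2": 2, "A3": 3}
host = elect_host(group_a)
print(host)          # A1: answers the user directly
print(host == "A2")  # False: A2 concludes it is not the host
```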
Following similar logic, and corresponding to the embodiment of fig. 9, the identity feature includes a single-machine feature and a multi-machine feature, and the multi-machine feature includes a ranking feature. If the local medical device 100's ranking feature is not the highest-priority ranking feature within the multi-machine feature, the method may include the following steps:
S10g, acquiring voice information in the environment.
S21g, determining that the identity feature is a multi-machine feature, wherein the identity feature is obtained through pre-allocation.
S22g, analyzing the priority of the ranking features within the multi-machine feature, and determining that at least one of the medical devices 100 corresponding to the multi-machine feature has a ranking feature with a higher priority than that of the local device.
S30g, starting the local interaction system 110 to interact with the user indirectly through the host, based on the interaction timing determined by the host.
Specifically, this embodiment corresponds to the outcome of the ranking feature judgment of fig. 8 in which the local device's ranking feature does not have the highest priority after comparison. That is, in the embodiment of fig. 9, the medical device 100 finds that at least one of the simultaneously started medical devices 100 has a ranking feature of higher priority than its own, and the local device is therefore not set as the host. Similar to the embodiment of fig. 7, a medical device 100 that is not set as the host interacts with the user indirectly through the host during the subsequent interaction: it only receives the instructions in the voice information called out by the user and does not respond to the user directly, ensuring that the interaction between the user and the host medical device 100 is not interfered with.
It will be appreciated that, in this embodiment as well, when the user needs feedback from a non-host medical device 100, that medical device 100 may send the content to the user through the host medical device 100 via its communication connection with the host. The non-host medical device 100 may also provide feedback to the user through other feedback means, such as a display or indicator lights.
It should be noted that, in the embodiment of fig. 3, step S20a of the present method distinguishes between a single-machine feature and a multi-machine feature in order to simplify, to some extent, the logic in step S30a for deciding whether to interact with the user based on an interaction timing. That is, when the identity feature is determined to be a single-machine feature in step S20a, the step of determining the interaction timing can be skipped and the device can interact with the user directly; when the identity feature is determined to be a multi-machine feature in step S20a, the interaction timing is determined based on the pre-assigned settings or the user's related settings. Of course, some embodiments do not distinguish the identity feature into single-machine and multi-machine features: they determine the interaction timing directly after receiving voice information containing the identity feature, and interact with the user based on the result of that determination. For example, the embodiment in fig. 4a:
S100b, acquiring voice information in the environment.
S200b, determining that the voice information includes timing information.
S300b, starting the local interaction system 110 to interact with the user in sequence based on the interaction timing of the timing information.
Specifically, the embodiment of fig. 4a differs from the embodiment of fig. 4 in that, in step S200b, the identity feature is determined by determining that the user's outgoing voice information includes timing information. Once the medical device 100 identifies the timing information, the timing information can be regarded as carrying both the identity feature and the interaction timing. Its function thus covers two aspects: it enables the interaction systems 110 of the multiple medical devices 100 as an identity feature, and it determines the timing of their interaction. Therefore, when the voice information called out by the user includes timing information, the medical device 100 can directly start the local interaction system 110 and interact with the user based on the interaction timing of the timing information. In general the user will call out the timing information when starting the interaction systems 110 of several medical devices 100 at once, and interacting directly on the basis of that timing achieves an effect similar to fig. 4. See fig. 5a for another example. In that embodiment, the identity features include a ranking feature, and the method specifically comprises:
S100c, acquiring voice information in the environment.
S201c, determining that the voice information includes an identity feature.
S202c, analyzing the priority of the ranking feature within the identity feature.
S300c, starting the local interaction system 110 and interacting with the user in sequence based on the interaction timing of the ranking feature.
Specifically, since this embodiment does not separately provide single-machine and multi-machine features, the ranking feature is included directly in the identity feature. When the user calls out an identity feature that includes a ranking feature, the medical device 100 determines the interaction timing directly through the ranking feature. It will be appreciated that, in correspondence with the embodiment of fig. 5, although this embodiment does not separate single-machine from multi-machine features, an identity feature that includes a ranking feature can be taken by default to start the interaction systems 110 of two or more medical devices 100, which is why it carries a ranking feature; an identity feature that includes no ranking feature should correspond to starting the interaction system 110 of only one medical device 100. Thus, even without defining distinct single-machine and multi-machine identity features, whether an identity feature includes a ranking feature makes it possible to start the interaction systems 110 of several medical devices 100 simultaneously and have those interaction systems 110 interact in sequence.
Alternatively, the embodiment of fig. 5a may be described as follows: when the identity feature corresponds to starting the interaction system 110 of only one medical device 100, the identity feature still includes a ranking feature, and the interaction system 110 of the medical device 100 corresponding to that ranking feature interacts with the user directly. That is, with only one interaction system 110 in the ranking, there is no need to wait for the responses of other interaction systems 110 to end.
Referring to fig. 6a, the case in which the voice information called out by the user includes host information may also be completed through the following steps:
S100d, acquiring voice information in the environment.
S201d, determining that the voice information includes host information.
S202d, judging whether the local device is the host according to the host information.
S300d, starting the local interaction system 110 and interacting with the user based on the interaction timing determined by the host.
Specifically, in this embodiment the host information is used to determine a host from among the several medical devices 100. The subsequent steps of determining the host based on the host information and interacting with the user based on the interaction timing determined by the host are exactly the same as the steps in fig. 6. It will be appreciated that this embodiment likewise omits the judgment between single-machine and multi-machine features: the host information itself identifies the user's need to start the interaction systems 110 of several medical devices 100 at once. Compared with the embodiment of fig. 6, this embodiment is simpler and reduces the amount of information in the user's voice information, while achieving an effect similar to that of fig. 6.
Following similar logic, refer to the embodiment of fig. 8a. In this embodiment, the identity feature includes a ranking feature. The method comprises the following specific steps:
S100f, acquiring voice information in the environment.
S201f, determining that the voice information includes an identity feature.
S202f, analyzing the priority of the ranking feature within the identity feature.
S203f, judging whether the local device is the host according to the comparison result.
S300f, starting the local interaction system 110 to interact with the user based on the interaction timing determined by the host.
Specifically, in the embodiment of fig. 8a the identity feature directly includes the ranking feature. The user can therefore start the interaction systems 110 of several medical devices 100 at once by calling out the identity feature in the voice information, and a host is elected among the started interaction systems 110 by comparing their ranking features. Thereafter, interaction with the user proceeds on the interaction timing determined by the host, as in fig. 8: the medical device 100 elected as host on the basis of the ranking features interacts with the user directly, while the remaining medical devices 100 interact with the user indirectly through the host. This embodiment also reduces the amount of information in the user's voice information and yields a more intelligent interaction effect.
One embodiment is shown in fig. 10, which is a flowchart of a voice interaction method for coexisting multiple medical devices according to another embodiment of the present application. In this embodiment, the identity features include a single-machine feature, a multi-machine feature, and a sound source distance condition. The method comprises the following steps:
S10h, acquiring voice information in the environment and a sound source distance value.
Specifically, in this embodiment, a sound source distance condition is also preset when the identity features are pre-assigned. The sound source distance condition may include a sound source distance threshold and/or a comparison of sound source distances. The sound source distance threshold is a fixed, distance-related numeric value, while the comparison of sound source distances requires the local device to compare its value against the sound source distance values obtained by the other medical devices 100 in the environment. When acquiring voice information in the spatial environment, the medical device 100 also detects the user's distance through a sensor, measuring the sound source distance value of the user who is speaking. Many kinds of sensors can detect the user's sound source distance value; the present application places no particular limit on their ranging method, and any sensor that can detect the user's sound source distance value can be used in the voice interaction method of this embodiment. It will be appreciated that the sound source distance value is likewise a distance-related numeric value.
S21h, determining that the sound source distance value meets the sound source distance condition, wherein the identity feature is obtained through pre-allocation.
Specifically, after detecting the user's sound source distance value, the medical device 100 compares the detected value with the preset sound source distance threshold, or compares it with the sound source distance values detected by the other medical devices 100 in the environment. Because the comparison between the sound source distance value and the sound source distance threshold is a comparison between two distance-related numeric values, their relative magnitude, and thus the comparison result, can be obtained quickly and directly; the comparison of sound source distance values among several medical devices 100 likewise yields a result quickly. When the sound source distance value is determined to be smaller than the sound source distance threshold, or the local device's sound source distance value is smaller than those of the other medical devices 100 in the environment, the local sound source distance value can be judged to meet the sound source distance condition, and the method proceeds to the subsequent steps.
S22h, judging that the identity feature exists in the voice information.
S30h, starting the local interaction system 110 to interact with the user.
Specifically, after determining that the sound source distance value is smaller than the sound source distance threshold, the medical device 100 concludes that the user is within a sufficiently close range of the local device. When a user moves within a sufficiently close range of a particular medical device 100 in the spatial environment, the user's voice information can be regarded as directed at that medical device 100; alternatively, the comparison identifies the medical device 100 closest to the user among the several medical devices 100. In either case, the medical device 100 automatically judges that the identity feature exists in the voice information and starts its interaction system 110, entering the single-machine interaction mode to interact with the user.
This embodiment makes it convenient for users to interact in a spatial environment using the method. When there are many medical devices 100 in the spatial environment, the user would otherwise have to remember the details of each medical device 100's single-machine feature, multi-machine features, ranking feature, and so on, and errors would be hard to avoid. Although prompts can be provided through lists, graphics on the medical devices 100, and the like, the embodiment of fig. 10 offers a simpler and faster start-up when the user only needs to interact with one specific medical device 100. It will be appreciated that, after moving within a sufficiently close range of the medical device 100, the user can start its interaction system 110 very conveniently by calling out voice information, which need not be an identity feature but may be any voice information, without the burdensome memorization, table lookup, or checking of graphic marks that might otherwise be required.
Of course, the sound source distance condition may include both the sound source distance threshold and the comparison of sound source distances. In the same spatial environment, if a medical device 100 determines that its sound source distance value is the smallest, but that smallest value is still not below the sound source distance threshold, the user can be considered not to be performing a start operation on that medical device 100. Introducing both judgment conditions at once improves the accuracy of the user's directivity and reduces the rate of misoperation, as sketched below.
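A minimal sketch of the combined check, assuming a hypothetical threshold of 1.0 meter and distance values exchanged among the devices:

```python
def distance_condition_met(local_distance_m, peer_distances_m, threshold_m=1.0):
    """True only if the user is close enough AND closest to this device."""
    within_threshold = local_distance_m < threshold_m
    nearest_in_room = all(local_distance_m < d for d in peer_distances_m)
    return within_threshold and nearest_in_room

# User stands 0.6 m from this device; peers measured 2.1 m and 3.4 m.
print(distance_condition_met(0.6, [2.1, 3.4]))  # True: start interaction
print(distance_condition_met(1.4, [2.1, 3.4]))  # False: nearest, but too far
```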
This simpler single-machine interaction mode can also be implemented using the embodiment of fig. 11, in which the identity feature also includes a volume condition. The voice interaction method for coexisting multiple medical devices comprises the following steps:
S10i, acquiring voice information in the environment and a volume value.
S21i, determining that the volume value meets the volume condition, wherein the identity feature is obtained through pre-allocation.
S22i, judging that the identity feature exists in the voice information.
S30i, starting the local interaction system 110 to interact with the user.
Specifically, the idea of this embodiment is similar to that of fig. 10, except that instead of measuring the user's distance, whether the user is within a sufficiently close range is judged directly from the volume of the voice information called out by the user. When the volume value of the voice information is determined to exceed a volume threshold, the user can usually be judged to be within a sufficiently close range of the medical device 100; the identity feature is therefore recognized as present in the voice information, and the local single-machine interaction mode is enabled to interact with the user. Alternatively, if the medical device 100 determines that the volume value of the user's voice information obtained locally is the highest in the environment, it may likewise judge that the identity feature exists in the voice information and start the local interaction system 110 to interact with the user. It will be appreciated that the effect achieved by the method embodiment of fig. 11 is similar to that of fig. 10. Further, the embodiment of fig. 11 obtains the volume value of the user's voice information using the interaction system 110 already provided on the medical device 100 instead of measuring the user's distance, eliminating the need for ranging sensors, simplifying the sensors used in the medical device 100, and saving cost.
It will be appreciated that there is also an embodiment that combines the sound source distance condition and the volume condition to judge whether the user's voice information constitutes the identity feature. Because different users speak at different volumes when interacting with the medical devices 100 based on this method, a user with a loud voice might unintentionally trigger a nearby medical device 100 into the single-machine interaction mode; or a user calling out a multi-machine feature might not keep a sufficient distance from the nearest medical device 100, causing that medical device 100 to enter the single-machine interaction mode at the same time as the several medical devices 100 corresponding to the multi-machine feature enter the interaction state. This problem is more prominent when a sound source distance threshold or a volume threshold alone is used as the judgment condition. Therefore, the sound source distance condition and the volume condition can be combined to judge whether the user's voice information constitutes the identity feature: the function is triggered only when the user is within a certain distance range and the volume value is high enough, avoiding false triggering.
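The combined condition can be sketched as a conjunction of the two checks; the threshold values here are arbitrary assumptions:

```python
def should_enter_standalone(volume_db, distance_m,
                            volume_threshold_db=60.0, distance_threshold_m=1.0):
    """Both conditions must hold, guarding against loud or distant speakers."""
    return volume_db > volume_threshold_db and distance_m < distance_threshold_m

print(should_enter_standalone(volume_db=72.0, distance_m=0.5))  # True
print(should_enter_standalone(volume_db=72.0, distance_m=2.8))  # False: loud but far away
```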
Of course, as in some of the embodiments described above, the embodiments of fig. 10 and 11 also admit variants that do not distinguish a single-machine feature from a multi-machine feature. As long as the sound source distance value and/or the volume value of the voice information meets the condition, the interaction system 110 of the medical device 100 can be started for the corresponding interaction, without distinguishing single-machine from multi-machine operation. It will be appreciated that, when the sound source distance condition or the volume condition is decided by comparison for the closest or loudest value, only the interaction system 110 of one medical device 100 is started to interact with the user, so no logical confusion arises.
Referring to the embodiment of fig. 12, fig. 12 is a flowchart illustrating another embodiment of the voice interaction method for coexisting multiple medical devices shown in fig. 1. In the embodiment of fig. 12, after the medical device 100 starts the local interaction system 110 to interact with the user, the method further comprises:
S40, generating and presenting feedback information to show that the local interaction system 110 has been started.
Specifically, after the user starts the interaction system 110 of a medical device 100 through an identity feature, the user needs to verify whether the identity feature they called out was effectively received by the medical device 100. The feedback information may be visual or auditory. If multiple interaction systems 110 gave the user voice feedback simultaneously, the user could not quickly verify, in batch, whether the target medical devices 100 were started; the interaction timing described above can therefore be used to deliver auditory feedback in turn, making it easier for the user to receive. More conveniently, the medical device 100 may generate and display visual feedback information, such as a feedback image or feedback light, through visual feedback devices such as a display screen or indicator light provided on the local device, showing the user that the local interaction system 110 has started its interaction function. That is, when several medical devices 100 exist in the spatial environment, each medical device 100 whose interaction function is started can give the user visual feedback through patterns, lights, and similar information, making it easy for the user to quickly confirm whether the identity feature they uttered was effectively received by the target medical devices 100.
It can be understood that, when the user finds that the identity feature they called out was not received by all the target medical devices 100, that is, some target medical devices 100 were not started by the identity feature, the user can call out the identity feature again to start the missing target medical devices 100 and then perform batch interaction, improving the reliability of the method.
Referring to the embodiment of fig. 13, fig. 13 is a flowchart illustrating another embodiment of the voice interaction method for coexisting multiple medical devices shown in fig. 1. In the embodiment of fig. 13, the method comprises:
S10j, acquiring voice information in the environment.
S20j, determining that the identity feature exists in the voice information, wherein the identity feature is obtained through pre-allocation.
S30j, starting the local interaction system 110 to interact with the user.
S51j, acquiring voice information in the environment.
S52j, determining that the voice information contains group-quitting information.
S53j, exiting the local interaction system and stopping interaction with the user.
Specifically, this embodiment provides an exit mechanism after the local interaction system 110 is started to interact with the user. After the user determines that an identity feature was called out in error and has started a non-target medical device 100, the user can designate one or more medical devices 100 to exit the current interaction session through called-out group-quitting information. Executing the method of this embodiment makes it convenient and quick for the user to correct which target medical devices 100 the voice operation addresses after the interaction systems 110 of the medical devices 100 have been started. Likewise, during batch interaction, if only some of the medical devices 100 in the multi-machine interaction mode require further operation, the medical devices 100 that need no further interaction can be excluded from the batch by calling out group-quitting information. This embodiment thus lets the user make a quick secondary selection of the medical devices 100 currently being interacted with while performing batch interaction. It will be appreciated that the embodiment of fig. 13 can also be executed in the single-machine interaction mode, ending the user's interaction with the currently interacting medical device 100 and starting the next interaction.
In one embodiment, the group-quitting information may include a preset idle time threshold. That is, if the medical device 100 has not interacted with the user for a period exceeding the preset idle duration, it can be determined that the user is no longer interacting with the medical device 100, and the device automatically exits the current interaction state, saving resources. A sketch of this timeout appears below.
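A minimal sketch of such an idle-timeout exit, with hypothetical names and an idle limit chosen only for illustration:

```python
import time

class IdleWatchdog:
    def __init__(self, idle_limit_s=60.0):
        self.idle_limit_s = idle_limit_s
        self.last_activity = time.monotonic()

    def touch(self):
        """Call whenever the user interacts with this device."""
        self.last_activity = time.monotonic()

    def should_exit(self):
        return time.monotonic() - self.last_activity > self.idle_limit_s

watchdog = IdleWatchdog(idle_limit_s=0.2)  # short limit for demonstration
time.sleep(0.3)                            # no user speech arrives
print(watchdog.should_exit())              # True: exit the interaction state
```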
In one embodiment, referring to fig. 13a, the method comprises the following steps:
S10k, acquiring voice information in the environment.
S20k, determining that the identity feature exists in the voice information, wherein the identity feature is obtained through pre-allocation.
S30k, starting the local interaction system 110 to interact with the user.
S40k, acquiring voice information in the environment;
S50k, determining that networking information exists in the voice information;
S60k, starting the interaction systems 110 of the remaining medical devices 100 in the environment that match the networking information, based on the networking information;
S70k, controlling the local interaction system 110 to interact with the user based on the interaction timing of the networking information.
Specifically, this embodiment can be understood as the reverse of the exit embodiment of fig. 13. Fig. 13 covers exiting the current interaction state after the interaction system 110 of a medical device 100 has entered it; this embodiment instead makes it convenient to add medical devices 100 that need to interact after a medical device 100 receives the user's networking information. It will be appreciated that the identity feature in step S20k is not limited to a single-machine or multi-machine feature. Once the local interaction system 110 has entered the interaction state, obtaining the user's networking information can start the interaction systems 110 of the remaining medical devices 100 in the environment that match that networking information. In this way, the user can add the other medical devices 100 they wish to interact with for a larger batch interaction while keeping the medical devices 100 currently in the interaction state.
Accordingly, the interaction timing of the medical devices 100 added through the networking information also needs to match the networking information; that is, the medical devices 100 currently in the interaction state need to redistribute the interaction timing together with the added medical devices 100. The redistributed interaction timing may keep the current ordering and append the medical devices 100 added by the networking information after it, ordered by priority, user designation, or the like; or it may discard the current ordering and rearrange the added medical devices 100 together with the medical devices 100 currently in the interaction state by priority, user designation, or the like. Both orderings belong to the interaction timing of the networking information.
In another embodiment, the interaction timing of the networking information may be handled by setting a host. That is, in a scenario where a host already exists, or the current scenario is single-machine interaction, the medical devices 100 added through the networking information are automatically defined as non-hosts; the user continues to use the current host, or treats the single device as the host, and interacts based on the interaction timing determined by the host. In this case the interaction timing of the networking information is equivalent to the interaction timing determined by the host. Of course, the host may also be re-elected after the addition, that is, the ranking features of all medical devices 100 in the interaction state are compared again and the host is determined from the comparison result; the interaction timing of the networking information is then still equivalent to the interaction timing determined by the host.
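As an illustration of the redistribution described above, the following sketch implements the first policy (keep the current ordering and append the newly networked devices behind it); the function name and device labels are hypothetical:

```python
def merge_interaction_order(current_order, added_devices):
    """Keep the current ordering and append newly networked devices after it."""
    return current_order + [d for d in added_devices if d not in current_order]

current = ["A1", "A2"]  # devices already in the interaction state
added = ["A3", "B1"]    # devices matched by the networking information
print(merge_interaction_order(current, added))  # ['A1', 'A2', 'A3', 'B1']
```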
Turning back to fig. 2, fig. 2 is a schematic diagram of a medical device 100 according to the present application. In the embodiment of fig. 2, the medical device 100 further comprises a processor 101, an input device 102, an output device 103, and a storage device 104. The processor 101, the input device 102, the output device 103, and the storage device 104 are connected to one another; the storage device 104 is configured to store a computer program comprising program instructions, and the processor 101 is configured to call the program instructions to execute the voice interaction method for coexisting multiple medical devices.
Specifically, the processor 101 calls the program instructions stored in the storage device 104 to perform the following operations:
obtaining voice information in the environment;
determining that the identity feature exists in the voice information, wherein the identity feature is obtained through pre-allocation;
starting the local interaction system 110 to interact with the user.
It can be understood that, because the processor 101 calls the program in the storage device 104, the medical device 100 of the present application can execute the above voice interaction method for coexisting multiple medical devices, making it convenient, in a scene where several medical devices 100 exist in a spatial environment, for the user to start the local interaction system 110 through the identity feature corresponding to the local device. At the same time, this avoids the defect that, while the user interacts with a target medical device 100, a non-target medical device 100 in the same spatial environment is roused into interaction by a similar voice instruction and confuses the interaction logic.
The storage device 104 may include a volatile memory, such as a random-access memory (RAM); the storage device 104 may also include a non-volatile memory, such as a flash memory, a solid-state drive (SSD), and the like; the storage device 104 may also comprise a combination of the above kinds of storage devices.
The processor 101 may be a Central Processing Unit (CPU). The processor 101 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Referring to fig. 14, fig. 14 is a schematic diagram of a medical system 200 provided in the present application. In the embodiment of fig. 14, the medical system 200 includes:
an obtaining module 201, configured to obtain voice information in the environment;
an analysis module 202, configured to determine that the identity feature exists in the voice information, wherein the identity feature is obtained through pre-allocation;
and a control module 203, configured to start the local interaction system 110 to interact with the user.
It can be understood that the medical system 200 of the present application is likewise used to perform the voice interaction method for coexisting multiple medical devices of the present application. Specifically, the obtaining module 201 monitors voice information in the spatial environment; the analysis module 202 analyzes the voice information monitored by the obtaining module 201, and when it determines that the identity feature exists in the voice information, the analysis module 202 sends an instruction to the control module 203, so that the control module 203 starts the local interaction system to interact with the user. Therefore, when several medical systems 200 exist in the spatial environment, each medical system 200 starts its interaction function only after determining that the voice information uttered by the user contains the pre-allocated identity feature corresponding to the local device, avoiding the ambiguity that would arise if a device mistook a voice instruction intended for another device as its own and started the corresponding local function.
In one embodiment, the medical system 200 further comprises a pairing module 204 configured to obtain the pre-allocated identity feature. It can be understood that, as described in the foregoing method embodiments, the pre-allocation of identity features through the pairing module 204 may be completed at the factory stage, or, after the medical system 200 is placed in a spatial environment where several medical systems 200 exist, it may be completed through the pairing module 204 based on the specific distribution and functions of the medical systems 200 in that environment.
In one embodiment, the pre-allocated identity features obtained by the pairing module 204 include a single-machine feature and a multi-machine feature. When determining that the identity feature exists in the voice information, the analysis module 202 needs to determine whether the identity feature contained in the voice information is a single-machine feature or a multi-machine feature.
Subsequently, when the control module 203 starts the local interaction system 110 to interact with the user: when the analysis module 202 determines that the identity feature is a single-machine feature, the control module 203 starts the local interaction system 110 to interact with the user directly;
when the analysis module 202 determines that the identity feature is a multi-machine feature, the control module 203 starts the local interaction system 110 and interacts with the user based on the interaction timing. A sketch of this module flow follows.
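The module names below follow the text, but their interfaces and the token matching are illustrative assumptions:

```python
class MedicalSystem:
    def __init__(self, single_feature, multi_features):
        self.single_feature = single_feature     # set via the pairing module
        self.multi_features = set(multi_features)

    def analyze(self, transcript):
        """Analysis module: classify the identity feature in the speech."""
        if self.single_feature in transcript.split():
            return "single"
        if self.multi_features & set(transcript.split()):
            return "multi"
        return None

    def control(self, transcript):
        """Control module: choose the interaction mode."""
        kind = self.analyze(transcript)
        if kind == "single":
            print("start local interaction system: direct interaction")
        elif kind == "multi":
            print("start local interaction system: follow the interaction timing")

system = MedicalSystem("A1", {"A", "1"})
system.control("A please report vital signs")  # multi-machine branch
```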
Referring to fig. 15, themedical system 200 further includes aranking module 205. Theranking module 205 is used to determine the timing of interactions with the user by thenative interaction system 110.
In one embodiment, theranking module 205 determines the timing of the interaction of the nativeinteractive system 110 with the user based on the timing information acquired by theacquisition module 201. Specifically, theanalysis module 202 is configured to, when it is determined that the identity feature exists in the voice information, analyze and obtain the multi-machine feature and the timing information included in the voice information;
thecontrol module 203 is configured to start the localinteractive system 110 to interact with the user based on the interaction timing, and thesorting module 205 is configured to control the localinteractive system 110 to sequentially interact with the user based on the interaction timing of the timing information.
In one embodiment, theranking module 205 determines the timing of the interaction of the nativeinteractive system 110 with the user based on the ranking features obtained by thepairing module 204. Specifically, thepairing module 204 includes a sorting feature in acquiring the pre-allocated multi-machine feature;
after the obtainingmodule 201 obtains the voice information in the environment, the analyzingmodule 202 determines that the identity feature is a multi-machine feature when determining that the identity feature exists in the voice information;
the ranking module 205 is further configured to analyze the priority ranking of the ranking features among the multi-machine features;
when the local interactive system 110 is started to interact with the user based on the interaction timing, the control module 203 controls the plurality of medical systems 200 to interact with the user in turn based on the interaction timing of the ranking features.
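For illustration, encoding the pre-allocated ranking feature as an integer priority (an assumption; the disclosure does not fix an encoding) makes the sequential interaction a simple sort:

    # Devices interact in ascending order of their ranking feature.
    devices = [
        {"name": "monitor A", "rank": 2},
        {"name": "monitor B", "rank": 1},
        {"name": "monitor C", "rank": 3},
    ]
    for dev in sorted(devices, key=lambda d: d["rank"]):
        print(f'{dev["name"]} interacts (rank {dev["rank"]})')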
In one embodiment, the medical system 200 further includes a judgment module 206. The judgment module 206 is used to determine whether the local machine is to interact with the user as the host.
In one embodiment, the judgment module 206 determines whether the local machine interacts with the user as the host based on the host information acquired by the obtaining module 201. Specifically, when the obtaining module 201 obtains the voice information in the environment, the voice information spoken by the user includes the multi-machine feature and host information;
the analysis module 202 is configured to determine that the identity feature is a multi-machine feature, and the judgment module 206 is configured to judge whether the local machine is the host based on the host information;
the control module 203 is used for starting the local interactive system 110 to interact with the user based on the interaction timing determined by the host.
In one embodiment, the judgment module 206 determines whether the local machine interacts with the user as the host based on the ranking feature among the multi-machine features obtained by the pairing module 204. Specifically, the multi-machine feature obtained by the pairing module 204 through pre-allocation includes a ranking feature;
the analysis module 202 is configured to determine that the identity feature is a multi-machine feature when determining that the identity feature exists in the voice information;
the ranking module 205 is further configured to analyze the priority ranking of the ranking features in the identity features; the judgment module 206 is configured to determine the priority ranking corresponding to the local machine's ranking feature and to judge whether the local machine is the host;
the control module 203, when starting the local interactive system 110 to interact with the user based on the interaction timing, is configured to interact with the user based on the interaction timing determined by the host.
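A sketch of this host decision under illustrative assumptions: each device knows the ranking features of the whole group from pre-allocation, and the device with the smallest rank acts as host ("smallest rank wins" is an illustrative choice, not mandated by the disclosure).

    # Each machine compares its own rank against the group to decide
    # whether it is the host.
    def is_host(own_rank, all_ranks):
        return own_rank == min(all_ranks)

    ranks = [2, 1, 3]
    print([is_host(r, ranks) for r in ranks])  # [False, True, False]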
In one embodiment, the analysis module 202 is configured to determine, when determining that the identity feature exists in the voice information, that the voice information further includes timing information;
the control module 203 is configured to start the interaction between the local interactive system 110 and the user, and to interact with the user based on the interaction timing of the timing information.
In one embodiment, the identity feature allocated by the pairing module 204 includes a ranking feature, and the ranking module 205 is configured to analyze the ranking of the ranking features when it is determined that the identity feature exists in the voice information;
the control module 203 is configured to start the local interactive system 110 to interact with the user based on the interaction timing of the ranking features.
In one embodiment, the analysis module 202 is configured to determine, when determining that the identity feature exists in the voice information, that the voice information further includes host information;
the judgment module 206 is configured to judge whether the local machine is the host based on the host information;
the control module 203 is used for starting the local interactive system 110 to interact with the user, and for interacting with the user based on the interaction timing determined by the host.
In one embodiment, the identity feature obtained by the pairing module 204 through pre-allocation includes a ranking feature;
the ranking module 205 is configured to compare the ranking of the local machine's ranking feature when it is determined that the identity feature exists in the voice information;
the judgment module 206 is configured to judge whether the local machine is the host based on the comparison result;
the control module 203 is used for starting the local interactive system 110 to interact with the user, and for interacting with the user based on the interaction timing determined by the host.
In one embodiment, the control module 203 starting the local interactive system 110 to interact with the user based on the interaction timing determined by the host comprises:
if the judgment module 206 judges that the local machine is the host, the control module 203 starts the local interactive system 110 to interact with the user;
if the judgment module 206 judges that the local machine is not the host, the control module 203 starts the local interactive system 110 to interact with the user indirectly through the host.
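The direct/indirect split can be sketched as follows (message passing is simulated with prints; a real system would relay replies over the device network):

    # The host speaks to the user directly; a non-host hands its reply to
    # the host, which presents it to the user.
    def interact(device_name, host_name, reply):
        if device_name == host_name:
            print(f"{device_name} (host) -> user: {reply}")
        else:
            print(f"{device_name} -> {host_name} -> user: {reply}")

    interact("monitor B", "monitor B", "heart rate 72")
    interact("monitor A", "monitor B", "SpO2 98%")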
In one embodiment, the medical system 200 further comprises a sound source ranging module 207, and the sound source ranging module 207 is configured to detect a sound source distance value. Specifically, the identity feature obtained by the pairing module 204 through pre-allocation includes a sound source distance condition;
when the obtaining module 201 obtains the voice information in the environment, the sound source ranging module 207 is configured to obtain the sound source distance value;
the analysis module 202 is configured to determine that the sound source distance value satisfies the sound source distance condition;
the analysis module 202 is then used to determine that the identity feature exists in the voice information. As can be appreciated, the control module 203 subsequently initiates voice interaction between the local interactive system 110 and the user based on the identity feature.
In one embodiment, the medical system 200 further comprises a volume detection module 208, and the volume detection module 208 is configured to detect a volume value. Specifically, the identity feature obtained by the pairing module 204 through pre-allocation includes a volume condition;
when the obtaining module 201 obtains the voice information in the environment, the volume detection module 208 is configured to obtain the volume value of the voice information in the environment;
the analysis module 202 is configured to determine that the volume value of the voice information satisfies the volume condition;
the analysis module 202 is further configured to determine that the identity feature exists in the voice information. As can be appreciated, the control module 203 subsequently initiates voice interaction between the local interactive system 110 and the user based on the identity feature.
In one embodiment, the sound source distance condition obtained by the pairing module 204 through pre-allocation includes a sound source distance threshold, and/or requires the analysis module 202 to determine that the local machine's sound source distance value is smaller than the sound source distance value of any of the remaining medical devices in the environment.
The volume condition obtained by the pairing module 204 through pre-allocation likewise includes a volume threshold, and/or requires the analysis module 202 to determine that the local volume value is greater than the volume value of any of the remaining medical devices in the environment.
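For illustration, both gating conditions can be sketched as below; the threshold values and the combination of absolute and relative checks are assumptions, chosen so that the machine nearest to the speaker (smallest distance, largest received volume) is the one that treats the identity feature as present:

    # Sound source distance condition: within a threshold and nearer
    # than every other device in the environment.
    def distance_ok(own_dist, others, threshold=2.0):
        return own_dist <= threshold and all(own_dist < d for d in others)

    # Volume condition: above a threshold and louder than on any
    # other device in the environment.
    def volume_ok(own_vol, others, threshold=40.0):
        return own_vol >= threshold and all(own_vol > v for v in others)

    print(distance_ok(1.2, [2.5, 3.1]))  # True: this machine is nearest
    print(volume_ok(35.0, [30.0]))       # False: below the volume threshold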
In one embodiment, the medical system 200 further includes a feedback module 209. The feedback module 209 is used to generate and present feedback information showing that the local interactive system 110 has been activated. Specifically, after the control module 203 starts the local interactive system 110 to interact with the user, the feedback module 209 communicates with a visual feedback device such as a display screen or an indicator light to generate and present a feedback image, feedback light, or similar signal, informing the user that the local interactive system 110 has entered the interactive state. Alternatively, the feedback module 209 provides audible feedback to the user through the interactive system 110.
In one embodiment, after the control module 203 starts the local interactive system 110 to interact with the user, the obtaining module 201 is configured to obtain the voice information in the environment;
the analysis module 202 is configured to determine that group quitting information exists in the voice information;
the control module 203 is further operable to exit the local interactive system 110 and stop interacting with the user.
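A sketch of the group-quit handling (the trigger phrase "exit group" is an assumption for illustration):

    # Once interacting, a device leaves the interaction when group quitting
    # information is detected in the monitored voice information.
    class Session:
        interacting = True

    def on_voice_info(session, utterance):
        if session.interacting and "exit group" in utterance:
            session.interacting = False
            print("local interaction system exited")

    s = Session()
    on_voice_info(s, "monitors, exit group")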
In one embodiment, the medical system 200 further includes a networking module 210. After the control module 203 starts the local interactive system 110 to interact with the user, the obtaining module 201 is used for continuously obtaining the voice information in the environment;
the analysis module 202 is configured to determine that networking information exists in the voice information;
the networking module 210 is configured to start, based on the networking information, the interactive systems 110 of the remaining medical devices 100 in the environment that match the networking information;
the control module 203 is further configured to control the local interactive system 110 to interact with the user based on the interaction timing of the networking information.
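A sketch of the networking step under illustrative assumptions (devices are plain dictionaries, and the networking information is reduced to a list of device names; a real system would parse it from the utterance):

    # The device already interacting wakes the matching devices, and the
    # resulting group interacts in the order the networking information gives.
    def on_networking_info(active, all_devices, wanted_names):
        group = [active] + [d for d in all_devices
                            if d["name"] in wanted_names and d is not active]
        for slot, dev in enumerate(group):
            dev["interacting"] = True
            print(f'{dev["name"]} joins, interacts in slot {slot}')

    m1 = {"name": "monitor 1", "interacting": True}
    m2 = {"name": "monitor 2", "interacting": False}
    m3 = {"name": "monitor 3", "interacting": False}
    on_networking_info(m1, [m1, m2, m3], ["monitor 2", "monitor 3"])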
The medical system 200 of the present application may be located on a monitor or a central station, and the monitor and the central station may each comprise a plurality of bodies, such as a host, front-end equipment, and a remote server. Apart from the obtaining module 201, the sound source ranging module 207, and the volume detection module 208, which need to be disposed on the front-end device to obtain the user's voice information, the sound source distance, and the volume value, the distribution of the remaining functional modules of the medical system 200 across the plurality of bodies is not particularly limited; they can be arranged to run on any body at the front end, middle end, or back end. Because the medical system 200 executes the voice interaction method for coexisting medical devices, the monitor or central station, when a plurality of medical systems 200 exist in a spatial environment, starts interaction with the user only after determining that the voice information spoken by the user contains the identity feature of the corresponding local machine. This prevents a device from mistaking a voice instruction aimed at another device as an instruction for itself and starting the corresponding local function, which would make the user's voice instruction ambiguous.
The above-described embodiments do not limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the above-described embodiments should be included in the protection scope of the technical solution.

Claims (24)

  1. A voice interaction method when multiple medical devices coexist, comprising the following steps:
    obtaining voice information in an environment;
    determining that an identity feature exists in the voice information, wherein the identity feature is obtained through pre-allocation;
    and starting the local interactive system to interact with the user.
  2. The method of claim 1, wherein the identity feature comprises a single-machine feature and a multi-machine feature, and the determining that the identity feature exists in the voice information comprises:
    determining that the voice information comprises a single-machine feature or a multi-machine feature;
    and the starting of the local interactive system to interact with the user comprises:
    when the identity feature is determined to be a single-machine feature, starting the local interactive system to interact with the user;
    and when the identity feature is determined to be a multi-machine feature, starting the local interactive system and interacting with the user based on the interaction time sequence.
  3. The method of claim 2, wherein the determining that the identity feature exists in the voice information comprises:
    determining that the identity feature is a multi-machine feature;
    determining that the voice information also comprises time sequence information;
    the starting of the local interactive system and the interaction with the user based on the interaction time sequence comprise:
    and interacting with the user based on the interaction time sequence of the time sequence information.
  4. The method of claim 2, wherein the multi-machine feature comprises a ranking feature, and the determining that the identity feature exists in the voice information comprises:
    determining that the identity feature is a multi-machine feature;
    analyzing the ranking of the ranking features in the multi-machine features;
    the starting of the local interactive system and the interaction with the user based on the interaction time sequence comprise:
    and interacting with the user based on the interaction time sequence of the ranking features.
  5. The method of claim 2, wherein the determining that the identity feature exists in the voice information comprises:
    determining that the identity feature is a multi-machine feature;
    determining that the voice information further comprises host information;
    determining whether the local machine is the host based on the host information;
    the starting of the local interactive system and the interaction with the user based on the interaction time sequence comprise:
    and interacting with the user based on the interaction time sequence determined by the host.
  6. The method of claim 2, wherein the multi-machine feature comprises a ranking feature, and wherein the determining that the identity feature is present in the voice information comprises:
    determining that the identity feature is a multi-machine feature;
    analyzing and comparing the ranking of the ranking feature of the local machine;
    judging whether the local machine is the host according to the comparison result;
    the starting of the local interactive system and the interaction with the user based on the interaction time sequence comprise:
    and interacting with the user based on the interaction time sequence determined by the host.
  7. The method of claim 1, wherein the determining that the identity feature exists in the voice information comprises:
    determining that the voice information comprises time sequence information;
    the method for starting the local interactive system to interact with the user comprises the following steps:
    and interacting with the user based on the interaction time sequence of the time sequence information.
  8. The method of claim 1, wherein the identity feature comprises a ranking feature, and wherein the determining that the identity feature is present in the voice information comprises:
    analyzing the ranking of the ranking features;
    the method for starting the local interactive system to interact with the user comprises the following steps:
    and interacting with the user based on the interaction time sequence of the ranking features.
  9. The method of claim 1, wherein the determining that the identity feature exists in the voice information comprises:
    determining that the voice information further comprises host information;
    determining whether the local machine is the host based on the host information;
    the method for starting the local interactive system to interact with the user comprises the following steps:
    and interacting with the user based on the interaction time sequence determined by the host.
  10. The method of claim 1, wherein the identity feature comprises a ranking feature, and wherein the determining that the identity feature is present in the voice information comprises:
    analyzing and comparing the ranking of the ranking feature of the local machine;
    judging whether the local machine is the host according to the comparison result;
    the method for starting the local interactive system to interact with the user comprises the following steps:
    and interacting with the user based on the interaction time sequence determined by the host.
  11. The method of any one of claims 5, 6, 9 or 10, wherein the starting of the local interactive system to interact with the user comprises:
    if the local machine is judged to be the host, starting the local interactive system to interact with the user;
    and if the local machine is judged not to be the host, starting the local interactive system to interact with the user indirectly through the host.
  12. The method for voice interaction during coexistence of multiple medical devices according to claim 1, wherein the identity feature comprises a sound source distance condition or a volume condition, and the obtaining of voice information in the environment comprises:
    acquiring environmental voice information and simultaneously acquiring a sound source distance value or a volume value of the voice information;
    the determining that the identity feature exists in the voice information comprises:
    determining that the sound source distance value satisfies the sound source distance condition, or determining that the volume value satisfies the volume condition;
    and determining that the identity feature exists in the voice information.
  13. The method for voice interaction during coexistence of multiple medical devices according to claim 1, wherein after the starting of the local interactive system to interact with the user, the method further comprises:
    obtaining voice information in an environment;
    determining that group quitting information exists in the voice information;
    and exiting the local interactive system and stopping interacting with the user.
  14. The method for voice interaction during coexistence of multiple medical devices according to claim 1, wherein after the starting of the local interactive system to interact with the user, the method further comprises:
    obtaining voice information in an environment;
    determining that networking information exists in the voice information;
    starting interactive systems of other medical equipment matched with the networking information in the environment based on the networking information;
    and controlling the local interaction system to interact with the user based on the interaction time sequence of the networking information.
  15. The method for voice interaction during coexistence of multiple medical devices according to claim 1, wherein after the starting of the local interactive system to interact with the user, the method further comprises:
    and generating and displaying feedback information to display that the local interactive system is started.
  16. A medical device comprising a processor, an input device, an output device, and a storage device, wherein the processor, the input device, the output device, and the storage device are interconnected; the storage device is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the voice interaction method for coexistence of multiple medical devices according to any one of claims 1-15.
  17. A medical system, comprising:
    the acquisition module is used for acquiring voice information in the environment;
    the analysis module is used for determining that the identity feature exists in the voice information, the identity feature being obtained through pre-allocation;
    and the control module is used for starting the local interactive system to interact with the user.
  18. The medical system of claim 17, further comprising a pairing module for obtaining a pre-assigned identity feature.
  19. The medical system of claim 18, further comprising a ranking module for determining the timing of interactions between the local interactive system and a user.
  20. The medical system of claim 18, further comprising a judgment module configured to determine whether the local machine interacts with a user as the host.
  21. The medical system of claim 18, further comprising a sound source ranging module configured to detect a sound source distance value.
  22. The medical system of claim 18, further comprising a volume detection module to detect a volume value.
  23. The medical system of claim 18, further comprising a networking module for enabling interaction systems of remaining medical devices within the environment that match the networking information.
  24. The medical system of claim 18, further comprising a feedback module for generating and presenting feedback information to show that the local interactive system has been activated.