
Techniques for quality assessment of outcomes of algorithms

Info

Publication number
WO2025125991A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
algorithm
data set
training
model
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IB2024/062229
Other languages
French (fr)
Inventor
Ryan Orin MELMAN
Christopher Bennett
Ryan GRAY
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cochlear Ltd
Original Assignee
Cochlear Ltd
Application filed by Cochlear Ltd
Publication of WO2025125991A1
Legal status: Pending

Abstract

A method includes generating a model that is representative of first data used to train an algorithm and generating a decision metric for determining if a similarity between second data input to the algorithm and the first data is sufficient for the algorithm to generate an output that is valid based on patterns of the first data identified in the model accounting for the second data.

Description

Techniques For Quality Assessment Of Outcomes Of Algorithms
CROSS REFERENCE TO RELATED APPLICATION
[0001] This patent application claims priority to U.S. provisional patent application 63/610,202, filed December 14, 2023, which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
[0002] The present disclosure relates to systems, methods, and computer readable storage media for generating and utilizing quality assessments of outcomes generated by algorithms.
BACKGROUND
[0003] Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades. Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component). Medical devices, such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
[0004] The types of medical devices and the ranges of functions performed thereby have increased over the years. For example, many medical devices, sometimes referred to as "implantable medical devices," now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
BRIEF SUMMARY
[0005] According to a first embodiment disclosed herein, a method comprises generating a model that is representative of first data used to train an algorithm, and generating a decision metric for determining if a similarity between second data input to the algorithm and the first data is sufficient for the algorithm to generate an output that is valid based on patterns of the first data identified in the model accounting for the second data.
[0006] According to a second embodiment disclosed herein, a computing system comprises one or more processing units that generate a model that comprises information from first data used to develop an algorithm. The computing system generates a trust estimator that uses the model and an outcome of the algorithm generated based on second data to provide a representation of a relationship between the first data and the second data for estimating whether the outcome of the algorithm is trustworthy.
[0007] According to a third embodiment disclosed herein, a non-transitory computer readable storage medium comprises computer readable instructions stored thereon for causing a computing system to: generate a quality assessment of a relationship between a first data set and a second data set using a model that comprises features of the first data set; and determine a similarity between the first data set and the second data set using a trust estimator that processes the quality assessment to assess a trustworthiness of an outcome that an algorithm generates in response to the second data set. The algorithm is developed using the first data set.
[0008] According to a fourth embodiment disclosed herein, a computer implemented method for estimating a trustworthiness of an output of an algorithm that has been trained using training data comprises: generating a representation of a relationship between the training data and input data using a model that comprises a description of the training data, wherein the algorithm uses the input data to generate the output; and generating a trust value for the output of the algorithm using a decision metric based on the output of the algorithm and based on the representation of the relationship between the training data and the input data.
BRIEF DESCRIPTION OF DRAWINGS
[0009] Figure 1A depicts a schematic diagram of an exemplary cochlear implant that can be configured to implement aspects of the techniques presented herein, according to some exemplary embodiments.
[0010] Figure 1B depicts a functional block diagram of the cochlear implant of Figure 1A.
[0011] Figure 1C is a diagram illustrating an example of an auditory prosthesis that can include one or more embodiments disclosed herein.
[0012] Figure 1D is a functional block diagram of an exemplary totally implantable cochlear implant.
[0013] Figure 2A is a diagram that depicts examples of training configurations of a system that can generate a training embedding model and a decision metric during a training stage.
[0014] Figure 2B is a diagram that depicts examples of clinical configurations of a system that can be used to generate a quality assessment of an outcome of an algorithm during a clinical application stage.
[0015] Figure 3A is a diagram that depicts examples of training configurations of a system that generates a training embedding model and a decision metric using an outcome of a pretrained target algorithm during a training stage.
[0016] Figure 3B is a diagram that depicts examples of clinical configurations of a system that can be used to generate a quality assessment of an outcome of an algorithm using the outcome of the algorithm during a clinical application stage.
[0017] Figure 4A is a diagram that depicts an example of a training configuration of a system that can generate a principal components analysis (PCA) transform and residuals for evaluating the trustworthiness of an outcome of an algorithm during a training stage.
[0018] Figure 4B is a graph that depicts outcomes of an example of a target algorithm and examples of trust values for the outcomes.
[0019] Figure 4C depicts graphs of probability curves that are examples of trust values generated by the trust estimator of FIG. 4A.
[0020] Figure 4D is a diagram that depicts an example of a clinical configuration of a system that can be used to generate a quality assessment of an outcome of an algorithm using a PCA transform during a clinical application stage.
[0021] Figure 5A is a diagram that depicts an example of a training configuration of a system that can train a K nearest neighbors algorithm for evaluating the trustworthiness of an outcome of a target algorithm during a training stage.
[0022] Figure 5B is a diagram that depicts an example of a clinical configuration of a system that can be used to generate a quality assessment of an outcome of a target algorithm using a K nearest neighbors algorithm during a clinical application stage.
[0023] Figure 6A is a diagram that depicts an example of a training configuration of a system using statistical properties of pre-processed training data for generating a model during a training stage.
[0024] Figure 6B is a diagram that depicts a graphical representation of a model that includes statistical properties of pre-processed and selected features from a data set.
[0025] Figure 6C is a diagram that depicts an example of a clinical configuration of a system that can be used to generate a quality assessment of an outcome of a target algorithm using statistical properties of pre-processed data during a clinical application stage.
[0026] Figure 7A is a diagram that depicts an example of a training configuration of a system that can train generative adversarial neural networks for evaluating trustworthiness of an outcome of a target algorithm during a training stage.
[0027] Figure 7B is a diagram that depicts an example of a clinical configuration of a system that can be used to generate a quality assessment of an outcome of a target algorithm using a discriminator network during a clinical application stage.
[0028] Figure 8 is a diagram that illustrates an example of a computing system within which one or more of the disclosed embodiments can be implemented.
DETAILED DESCRIPTION
[0029] Merely for ease of description, the techniques presented herein are primarily described with reference to an illustrative medical device, namely a cochlear implant system. However, it is to be appreciated that the techniques presented herein may also be used with a variety of other medical devices that, while providing a wide range of therapeutic benefits to recipients, patients, or other users, may benefit from the teachings herein. For example, any techniques presented herein described for one type of hearing prosthesis, such as a cochlear implant system, corresponds to a disclosure of another embodiment of using such teachings with other hearing prostheses, including bone conduction devices (percutaneous, active transcutaneous and/or passive transcutaneous), middle ear auditory prostheses, direct acoustic stimulators, and also utilizing such teachings with other electrically stimulating auditory prostheses (e.g., auditory brain stimulators), etc. The techniques presented herein may also be used with vestibular devices (e.g., vestibular implants), visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc.
[0030] The teachings detailed herein can be implemented in or with sensory prostheses, such as hearing implants. Other types of sensory prostheses can include retinal implants. Accordingly, any teaching herein with respect to a sensory prosthesis corresponds to a disclosure of utilizing those teachings in/with a hearing implant and in/with a retinal implant, unless otherwise specified, providing the art enables such. Moreover, with respect to any teachings herein, such corresponds to a disclosure of utilizing those teachings with a cochlear implant, a bone conduction device (active and passive transcutaneous bone conduction devices, and percutaneous bone conduction devices) and a middle ear implant, providing that the art enables such, unless otherwise noted. To be clear, any teaching herein with respect to a specific sensory prosthesis corresponds to a disclosure of utilizing those teachings in/with any of the aforementioned hearing prostheses, and vice versa. Corollary to this is that at least some teachings detailed herein can be implemented in somatosensory implants and/or chemosensory implants. Accordingly, any teaching herein with respect to a sensory prosthesis corresponds to a disclosure of utilizing those teachings with/in a somatosensory implant and/or a chemosensory implant.
[0031] While the teachings detailed herein are described for the most part with respect to hearing prostheses, in keeping with the above, it is noted that any disclosure herein with respect to a hearing prosthesis corresponds to a disclosure of another embodiment of utilizing the associated teachings with respect to any of the other devices or prostheses noted herein, whether a species of a hearing prosthesis, or a species of a sensory prosthesis, such as a retinal prosthesis. In this regard, any disclosure herein with respect to evoking a hearing percept corresponds to a disclosure of evoking other types of neural percepts in other embodiments, such as a visual/sight percept, a tactile percept, a smell percept or a taste percept, unless otherwise indicated and/or unless the art does not enable such. Any disclosure herein of a device, system and/or method that is used to or results in ultimate stimulation of the auditory nerve corresponds to a disclosure of an analogous stimulation of the optic nerve utilizing analogous components, methods, and/or systems.
[0032] Figure (FIG.) 1A is a schematic diagram of an exemplary conventional cochlear implant 100 configured to implement aspects of the techniques presented herein. FIG. 1B is a block diagram of the conventional cochlear implant 100 of FIG. 1A. For ease of illustration, FIGS. 1A and 1B will be described together. The cochlear implant 100 comprises an external component 102 and an internal/implantable component 104. The external component 102 is directly or indirectly attached to the body of the recipient and typically comprises an external coil 106 and, generally, a magnet (not shown in FIGS. 1A-1B) fixed relative to the external coil 106. The external component 102 also comprises one or more input elements/devices 113 for receiving input signals at a sound processing unit 112. In this example, the one or more input devices 113 include sound input devices 108 (e.g., microphones positioned by auricle 110 of the recipient, telecoils, etc.) configured to capture/receive input signals, one or more auxiliary input devices 109 (e.g., audio ports, such as a Direct Audio Input (DAI), data ports, such as a Universal Serial Bus (USB) port, cable port, etc.), and a wireless transmitter/receiver (transceiver) 111, each located in, on, or near the sound processing unit 112.
[0033] The sound processing unit 112 also includes, for example, at least one power source 107, a radio-frequency (RF) transceiver 121, and a processing module 125. The processing module 125 comprises a number of elements, including an environmental classifier 131, a sound processor 133, and an individualized own voice detector 134. Each of the environmental classifier 131, the sound processor 133, and the individualized own voice detector 134 can be formed by one or more processors (e.g., one or more Digital Signal Processors (DSPs), one or more processing cores, etc.), firmware, software, etc. arranged to perform operations described herein. That is, the environmental classifier 131, the sound processor 133, and the individualized own voice detector 134 can each be implemented as firmware elements, partially or fully implemented with digital logic gates in one or more application-specific integrated circuits (ASICs), partially or fully in software, etc.
[0034] In the examples of FIGS. 1A and 1B, the sound processing unit 112 is a behind-the-ear (BTE) sound processing unit configured to be attached to, and worn adjacent to, the recipient's ear. However, it is to be appreciated that sound processing unit 112 can have other arrangements, such as an off the ear (OTE) processing unit (e.g., a component having a generally cylindrical shape and which is configured to be magnetically coupled to the recipient's head), a mini or micro-BTE unit, an in-the-canal unit that is configured to be located in the recipient's ear canal, a body-worn sound processing unit, etc.
[0035] In the exemplary embodiment of FIGS. 1A and 1B, the implantable component 104 comprises an implant body (main module) 114, a lead region 116, and an intra-cochlear stimulating assembly 118, all configured to be implanted under the skin/tissue (tissue) 105 of the recipient. The implant body 114 generally comprises a hermetically-sealed housing 115 in which RF interface circuitry 124 and a stimulator unit 120 are disposed. The implant body 114 also includes an internal/implantable coil 122 that is generally external to the housing 115, but which is connected to the RF interface circuitry 124 via a hermetic feedthrough (not shown in FIG. 1B).
[0036] As noted, stimulating assembly 118 is configured to be at least partially implanted in the recipient's cochlea 137. Stimulating assembly 118 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 126 that collectively form a contact or electrode array 128 for delivery of electrical stimulation (current) to the recipient's cochlea 137. Stimulating assembly 118 extends through an opening in the recipient's cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 120 via lead region 116 and a hermetic feedthrough (not shown in FIG. 1B). Lead region 116 includes a plurality of conductors (wires) that electrically couple the electrodes 126 to the stimulator unit 120.
[0037] As noted, the cochlear implant 100 includes the external coil 106 and the implantable coil 122. The coils 106 and 122 are typically wire antenna coils each comprised of multiple turns of electrically insulated single-strand or multi-strand wire. Generally, a magnet is fixed in position relative to each of the external coil 106 and the implantable coil 122, but the magnet may rotate or change orientation. In some embodiments, the external component 102 and/or the implantable component 104 can include magnet assemblies that each have more than one magnet component. The magnets fixed relative to the external coil 106 and the implantable coil 122 facilitate the operational alignment of the external coil with the implantable coil. This operational alignment of the coils 106 and 122 enables the external component 102 to transmit data, as well as possibly power, to the implantable component 104 via a closely-coupled wireless link formed between the external coil 106 and the implantable coil 122. In certain examples, the closely-coupled wireless link is a radio frequency (RF) link. However, various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive and inductive transfer, can be used to transfer the power and/or data from an external component to an implantable component and, as such, FIG. 1B illustrates only one exemplary arrangement.
[0038] As noted above, sound processing unit 112 includes the processing module 125. The processing module 125 is configured to convert input audio signals into stimulation control signals 136 for use in stimulating a first ear of a recipient (i.e., the processing module 125 is configured to perform sound processing on input audio signals received at the sound processing unit 112). Stated differently, the sound processor 133 (e.g., one or more processing elements implementing firmware, software, etc.) is configured to convert the captured input audio signals into stimulation control signals 136 that represent electrical stimulation for delivery to the recipient. The input audio signals that are processed and converted into stimulation control signals can be audio signals received via the sound input devices 108, signals received via the auxiliary input devices 109, and/or signals received via the wireless transceiver 111.
[0039] In the embodiment of FIG. 1B, the stimulation control signals 136 are provided to the RF transceiver 121, which transcutaneously transfers the stimulation control signals 136 (e.g., in an encoded manner) to the implantable component 104 via external coil 106 and implantable coil 122. That is, the stimulation control signals 136 are received at the RF interface circuitry 124 via implantable coil 122 and provided to the stimulator unit 120. The stimulator unit 120 is configured to utilize the stimulation control signals 136 to generate electrical stimulation signals (e.g., current signals) for delivery to the recipient's cochlea via one or more stimulating contacts 126. In this way, cochlear implant 100 electrically stimulates the recipient's auditory nerve cells, bypassing absent or defective hair cells that normally transduce acoustic vibrations into neural activity, in a manner that causes the recipient to perceive one or more components of the input audio signals.
[0040] Figure (FIG.) 1C is a diagram illustrating an example of an auditory prosthesis 150 that can include one or more embodiments disclosed herein. The auditory prosthesis 150 of FIG. 1C is an example of a cochlear implant. As more specific examples, auditory prosthesis 150 can be a mostly implantable cochlear implant (MICI) or a totally implantable cochlear implant (TICI).
[0041] Auditory prosthesis 150 includes an internal/implantable component 154. In some embodiments, auditory prosthesis 150 can also have an external component (not shown) that is positioned by an auricle 159 of the recipient and that is configured to be attached to, and worn adjacent to, the recipient's ear. However, the external component can have other arrangements, such as an off the ear (OTE) processing unit (e.g., a component configured to be magnetically coupled to the recipient's head), an in-the-canal unit that is configured to be located in the recipient's ear canal 156, etc.
[0042] The implantable component 154 comprises an implant body 160, a lead region 116, and an elongated intra-cochlear stimulating assembly 118, all configured to be implanted under the skin/tissue 155 of the recipient. The implant body 160 comprises a hermetically sealed housing that houses various components. The housing of implant body 160 operates as a protective barrier between the components within the housing of implant body 160 and the recipient's tissue and bodily fluid.
[0043] Stimulating assembly 118 is configured to be at least partially implanted in the recipient's cochlea 162. Stimulating assembly 118 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 126 that collectively form a contact or electrode array 128 for delivery of electrical stimulation (current) to the recipient's cochlea 162. Stimulating assembly 118 extends through an opening in the recipient's cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to a stimulator unit in implant body 160 via lead region 116 and a hermetic feedthrough (not shown in FIG. 1C). Lead region 116 includes a plurality of conductors (wires) that electrically couple the stimulating contacts 126 to the stimulator unit.
[0044] FIG. 1D is a functional block diagram of an exemplary totally implantable cochlear implant 170. Because the cochlear implant 170 is totally implantable, all components of cochlear implant 170 are configured to be implanted under skin/tissue 175 of a recipient. Because all components are implantable, cochlear implant 170 operates, for at least a finite period of time, in an "invisible hearing" mode without the need of an external device. An external device 172 can be used to, for example, charge an internal power source (battery) 177. External device 172 can be a dedicated charger or a conventional cochlear implant sound processor.
[0045] Cochlear implant 170 includes an implant body (main implantable component) 174, one or more input elements for capturing/receiving input audio signals (e.g., using one or more implantable microphones 178 and a wireless transceiver 181), an implantable coil 182, and an elongated intra-cochlear stimulating assembly as described above. The microphone 178 and/or the implantable coil 182 can be positioned in, or electrically connected to, the implant body 174. The implant body 174 further comprises the battery 177, RF (radio frequency) interface circuitry 184, a processing module 185, and a stimulator unit 180. The processing module 185 can be similar to processing modules described previously, and includes environmental classifier 191, sound processor 193, and individualized own voice detector 195.
[0046] In the embodiment of FIG. 1D, the one or more implantable microphones 178 are configured to receive input audio signals. The processing module 185 is configured to convert received input audio signals into stimulation control signals 196 for use in stimulating a first ear of a recipient. Stated differently, sound processor 193 is configured to convert the input audio signals into stimulation control signals 196 that represent electrical stimulation for delivery to the recipient.
[0047] In the embodiment of FIG. 1D, the processing module 185 is implanted in the recipient. As such, in the embodiment of FIG. 1D, the stimulation control signals 196 do not traverse an RF link, but instead are provided directly to the stimulator unit 180. The stimulator unit 180 is configured to utilize the stimulation control signals 196 to generate electrical stimulation signals that are delivered to the recipient's cochlea via one or more stimulation channels that include lead region 116 and stimulating assembly 118 having electrode array 128.
[0048] In addition to the sound processing operations, as described further below, the environmental classifier 191 is configured to determine an environmental classification of the sound environment associated with the input audio signals and the individualized own voice detector 195 is configured to perform individualized own voice detection (OVD).
[0049] The stimulating assembly of a cochlear implant can be implanted into the cochlea of a recipient during cochlear implant surgery. In some rare instances, the electrode array of a cochlear implant can become buckled, folded, or kinked during or after cochlear implant surgery. A cochlear implant can be measured after cochlear implant surgery to generate trans-impedance matrices (TIMs). A trans-impedance matrix (TIM) measurement is a measurement of the propagation of electric fields inside the cochlea of a recipient. Features of a TIM can be used to estimate the placement of an electrode array of a cochlear implant in a recipient or whether the electrode array has been buckled, kinked, or folded. Trans-impedance matrices (TIMs) generated from the inner ears of many recipients of cochlear implants can be used to train a machine learning (ML) algorithm. The trained ML algorithm can then be used to determine if an electrode array of a cochlear implant that has been implanted in a recipient has buckled, folded, or kinked using TIMs generated from the electrode array. In some cases, the amount of training data containing TIMs generated from electrode arrays implanted in cochlear implant recipients is insufficient to train an ML algorithm to correctly recognize every instance of a buckled, folded, or kinked electrode array.
[0050] According to some embodiments disclosed herein, a model is generated that describes training data used in the development of an algorithm, such as a machine learning (ML) algorithm or another type of algorithm. The model can be used to identify how close clinical input data is to the training data, and thus the model can be used to define the trustworthiness of the output of the algorithm. The model allows the algorithm to be developed using reduced training data sets in situations when collecting substantial representative data for a specific output, such as the detection of a rare event in a data set, is infeasible. As a specific example that is not intended to be limiting, the clinical input data can include TIMs generated from measuring electrode arrays of many cochlear implant recipients, and the ML algorithm can be trained to detect an electrode array of a cochlear implant that has buckled, folded, or kinked. According to other examples that are not intended to be limiting, the clinical input data can include data generated from measuring implantable medical devices of a particular type, and the ML algorithm can be trained with training data to detect features of measurements obtained using the same type of implantable medical devices. The measurements obtained using the implantable medical devices can, as examples, include electrophysiological and/or tissue-related responses to stimulus, such as action potentials and/or impedance measurements. The model can be optionally combined with pre-processing, interim algorithm artifacts, or the algorithm output to enhance the trust definition.
[0051] In some embodiments, a quality assessment is generated that provides a high quality rating of the output of the algorithm when the algorithm is correct and a low quality rating of the output of the algorithm when the algorithm is incorrect. The low quality rating is provided when the algorithm output has a high probability of being incorrect. As examples, the quality assessment can be a probability rating or a rating that is based on a log-likelihood function. The quality assessment can be used to determine if the output of the algorithm is trustworthy or untrustworthy. When the algorithm is operating on edge cases in the input data that cause the algorithm to generate incorrect assessments, the quality assessment generates an indication that further assessment is recommended.
[0052] The quality assessment can include a training stage and a clinical application stage. During the training stage, a model of a training data set is constructed, and a decision metric (e.g., including thresholds and functions) is defined that creates a trust output. According to some embodiments, a method can be performed during the training stage. The method includes generating a training embedding model representative of training data used to develop an algorithm. The method also includes generating a decision metric for determining if a similarity between input data to the algorithm and the training data is sufficient for the algorithm to generate an output that is valid based on patterns described by the training embedding model accounting for the input data. According to other embodiments, the method can include generating the decision metric to indicate a representation of a relationship between input data to an algorithm and training data used to develop the algorithm for estimating whether an output generated by the algorithm is valid based on the patterns described by the model accounting for the input data and an output of the algorithm.
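For concreteness, the following minimal Python sketch pairs one possible training embedding model with one possible decision metric of the kind described above. The class names, the Mahalanobis-distance similarity measure, and the fixed threshold are illustrative assumptions for this sketch; the disclosure leaves the choice of embedding and metric open to the techniques discussed below.

```python
import numpy as np

class TrainingEmbeddingModel:
    """Compact description of the training data (here: mean and covariance)."""
    def fit(self, training_data: np.ndarray) -> "TrainingEmbeddingModel":
        # training_data: one measurement vector per row
        self.mean_ = training_data.mean(axis=0)
        self.cov_inv_ = np.linalg.pinv(np.cov(training_data, rowvar=False))
        return self

    def similarity(self, x: np.ndarray) -> float:
        # Mahalanobis distance of an input vector from the embedded
        # training data; smaller means more similar.
        d = x - self.mean_
        return float(np.sqrt(d @ self.cov_inv_ @ d))

class DecisionMetric:
    """Threshold and function that turn a similarity into a trust output."""
    def __init__(self, threshold: float):
        self.threshold = threshold

    def trust(self, similarity: float) -> bool:
        # True when the input is close enough to the training data for the
        # target algorithm's output to be considered valid.
        return similarity <= self.threshold
```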
[0053] After the training embedding model and the decision metric have been generated, the training embedding model and the decision metric are applied to input data (e.g., data including clinical measurement vectors) during the clinical application stage to provide a trust assessment of the quality of an output of the algorithm for evaluating the trustworthiness of the output of the algorithm. The algorithm can be treated as a black box algorithm during the training and clinical application stages. As an example, a method can be performed during the clinical application stage that includes generating an assessment of a relationship between input data to the algorithm and training data used to train the algorithm using the training embedding model. The training embedding model includes features of the training data. The method also includes determining a similarity between the input data and the training data for evaluating a trustworthiness of the output of the algorithm with the decision metric that processes the assessment.
[0054] As examples, two configurations of the quality assessment can be used during the training and clinical application stages. In the first of these configurations, a direct training data assessment is performed. During the direct training data assessment, the algorithm that the quality assessment is paired with is not used during the development of the quality assessment. This quality assessment can determine how similar the input data is to the training data. In the second of these configurations, the algorithm output is included as one of the inputs in conjunction with the training data embedding model when the decision metric estimates the reliability of the algorithm output.
[0055] Figure 2A is a diagram that depicts examples of training configurations of a system that can generate a training embedding model and a decision metric during a training stage. The system of FIG. 2A includes an optional augmentation stage 202, an optional pre-processing stage 203, a training embedding model 204, and a decision metric 205. FIG. 2A also illustrates a training data set 201, a switch 207, and a trust value 206 generated by the decision metric 205.
[0056] The system of FIG. 2A can be used in multiple different training configurations that are illustrated graphically in FIG. 2A by switch 207. The switch 207 can be adjusted to select one of the training configurations. Any one of these training configurations can be used during the training stage.
[0057] According to some examples, the augmentation stage 202 can be optionally used in various training configurations to expand the training data set 201 using one or more data augmentation transformations. In these examples, the switch 207 is adjusted to prevent the training data set 201 from being provided directly to training embedding model 204. Instead, augmentation stage 202 performs data augmentation on the training data set 201 to generate an augmented training data set. In these examples, the augmentation stage 202 is used to extend the training data set 201 used to develop the training embedding model 204 beyond the training data set 201 used in the development of a pre-trained algorithm. The switch 207 can be optionally adjusted to provide the augmented training data set to the pre-processing stage 203 or directly to the training embedding model 204 according to various training configurations.
[0058] An example of a data augmentation transformation that can be performed at the augmentation stage 202 involves adding Gaussian white noise, with a mean of zero, to measurements in the training data set 201. Another example of a data augmentation transformation that can be performed at the augmentation stage 202 involves re-quantizing measurements in the training data set 201 at variable bit depths. Yet another example of a data augmentation transformation that can be performed at the augmentation stage 202 involves selective fouling or removal of a part of the measurement data in the training data set 201. Yet another example of a data augmentation transformation that can be performed at the augmentation stage 202 involves adding scaling to the measurements to account for gain inaccuracies in the training data set 201.
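As an illustration, the four transformations above might be realized as follows, assuming measurement vectors stored as rows of a NumPy array. The function names, parameter defaults, and random seed are assumptions for the sketch, not part of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_gaussian_noise(x: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    # Zero-mean Gaussian white noise added to every measurement.
    return x + rng.normal(0.0, sigma, size=x.shape)

def requantize(x: np.ndarray, bit_depth: int = 8) -> np.ndarray:
    # Re-quantize measurements at a variable bit depth.
    lo, hi = x.min(), x.max()
    span = (hi - lo) or 1.0
    levels = 2 ** bit_depth - 1
    return np.round((x - lo) / span * levels) / levels * span + lo

def foul_measurements(x: np.ndarray, fraction: float = 0.05) -> np.ndarray:
    # Selectively foul (here: zero out) a random part of the measurement data.
    out = x.copy()
    out[rng.random(x.shape) < fraction] = 0.0
    return out

def apply_random_gain(x: np.ndarray, spread: float = 0.1) -> np.ndarray:
    # Per-vector scaling to account for gain inaccuracies.
    gains = rng.uniform(1.0 - spread, 1.0 + spread, size=(x.shape[0], 1))
    return x * gains
```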
[0059] In the embodiment of FIG. 2A, data augmentation at augmentation stage 202 is an optional procedure, because some forms of data augmentation (e.g., adding white noise to the training data set 201 when using a principal components analysis embedding technique) may have no impact on the derived principal components of the training data set 201. If the training data set 201 is augmented at augmentation stage 202, then both the original training data set 201 and the augmented training data set are used for generating the training embedding model 204 to increase the size of the training set in order to improve the generalizability of the embedding in the training embedding model 204.
[0060] According to other examples, the pre-processing stage 203 can be optionally used in various training configurations to transform the augmented training data set in order to reduce the dimensionality of the data, mitigate the impact of measurement noise in the data, select for fiducial information in the data, or transform the data to a more meaningful representation (e.g., frequency domain). In these examples, the switch 207 can be adjusted to prevent the augmented training data set generated at augmentation stage 202 from being provided directly to training embedding model 204. Instead, pre-processing stage 203 transforms the augmented training data set using pre-processing techniques to generate transformed training data that is provided to the training embedding model 204.
[0061] Examples of the pre-processing techniques that can be performed by the pre-processing stage 203 on the augmented training data set to generate the transformed training data include smoothing or filtering the augmented training data set, aggregating current augmented training data with previous augmented training data or output states, or normalizing the augmented training data set to a fixed dynamic range. Other examples of the pre-processing techniques that can be performed by the pre-processing stage 203 include performing a Fourier Transform, performing phase angle estimation, performing Eigen decomposition, generating principal or independent components, or performing interpolation of missing data using the augmented training data set to generate the transformed training data. Principal component analysis (PCA) can, for example, be used to generate weights that are applied to the augmented training data set to generate the transformed training data.
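A few of the listed transforms, sketched in Python for intuition; the function names, window size, and target range are illustrative assumptions.

```python
import numpy as np

def smooth(x: np.ndarray, window: int = 5) -> np.ndarray:
    # Moving-average filtering of each measurement vector (one vector per row).
    kernel = np.ones(window) / window
    return np.apply_along_axis(np.convolve, 1, x, kernel, mode="same")

def normalize(x: np.ndarray) -> np.ndarray:
    # Normalize each vector to a fixed [0, 1] dynamic range.
    lo = x.min(axis=1, keepdims=True)
    hi = x.max(axis=1, keepdims=True)
    return (x - lo) / np.where(hi > lo, hi - lo, 1.0)

def to_frequency_domain(x: np.ndarray) -> np.ndarray:
    # Fourier-transform magnitude as a more meaningful (frequency-domain)
    # representation of each measurement vector.
    return np.abs(np.fft.rfft(x, axis=1))
```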
[0062] The transformed training data, the augmented training data set, and/or the original training data set 201 are provided to training embedding model 204, depending on the training configuration of FIG. 2A as controlled by the switch 207. The training configuration of FIG. 2A can also be set based on whether the augmentation stage 202 and the pre-processing stage 203 are enabled or disabled.
[0063] The training embedding model 204 is then generated by embedding the transformed training data, the augmented training data set, and/or the original training data set 201 into the training embedding model 204, depending on the training configuration of FIG. 2A. The training embedding model 204 is representative of the training data set 201 used to develop a pre-trained algorithm. The training embedding model 204 is designed to provide an estimation of co-entropy between an input data set (e.g., clinical measurement data) and the training data set 201 used to develop the pre-trained algorithm.
[0064] The embedding of the data into the training embedding model 204 is performed by a training embedding algorithm. Examples of the training embedding algorithm include principal component analysis (PCA), Huffman encoding of an input vector, compression of input data using pre-trained dictionaries, probability density functions (PDF) of fiducial features, and a K nearest neighbors algorithm. A K nearest neighbors algorithm can be utilized to select the K most similar measurement vectors from the transformed training data, the augmented training data set, and/or the original training data set 201.
[0065] The output of training embedding model 204 is provided to the decision metric 205. In the training configurations of FIG. 2A, the decision metric 205 is generated using the training embedding model 204. Examples of the decision metric 205 include expert knowledge of valid input ranges or morphologies, correlation coefficients, residual magnitude, compressed size of an input vector (i.e., when utilizing a pre-defined compression dictionary), discriminator neural networks, logistic regression of the output of the pre-trained algorithm versus training embeddings or decision metrics, and product of PDF fiducial features. The output of the decision metric 205 can be a single value or multiple values that indicate a trust metric 206 (also referred to as a trust output) of the output of the pre-trained algorithm.
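To illustrate one of the listed options, the sketch below scores an input vector by its compressed size under a dictionary built from the training data: a vector that shares structure with the training data compresses well, while a dissimilar vector does not. The byte quantization, the dictionary construction, and the assumption that vectors are already normalized to [0, 1] (e.g., by the pre-processing stage) are choices made for this sketch; zlib's zdict parameter is standard-library Python.

```python
import zlib
import numpy as np

def build_dictionary(training_data: np.ndarray) -> bytes:
    # Pre-trained compression dictionary: quantized training vectors as bytes.
    # Assumes values already normalized to [0, 1].
    return np.clip(training_data * 255, 0, 255).astype(np.uint8).tobytes()

def compressed_size(x: np.ndarray, dictionary: bytes) -> int:
    # Compressed size of the input vector; a smaller size suggests the input
    # is better explained by patterns present in the training data.
    payload = np.clip(x * 255, 0, 255).astype(np.uint8).tobytes()
    comp = zlib.compressobj(level=9, zdict=dictionary)
    return len(comp.compress(payload) + comp.flush())
```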
[0066] Figure 2B is a diagram that depicts examples of clinical configurations of a system that can be used to generate a quality assessment of an outcome of an algorithm during a clinical application stage. The system of FIG. 2B includes the optional pre-processing stage 203, the training embedding model 204, the decision metric 205, and a pre-trained target algorithm 213. The pre-trained target algorithm 213 has been previously trained using the entire training data set 201. The pre-trained target algorithm 213 can be a machine learning (ML) algorithm, an expert system, or any other type of trainable algorithm.
[0067] FIG. 2B also illustrates a clinical data set 211, a switch 212, and the trust output 206 generated by the decision metric 205. The system of FIG. 2B can be used in different clinical configurations that are illustrated graphically in FIG. 2B by switch 212. The switch 212 can be adjusted to select one of the clinical configurations. Any one of these clinical configurations can be used during the clinical application stage.
[0068] In the example of FIG. 2B, the clinical data set 211 is provided to the pre-processing stage 203. The pre-processing stage 203 transforms the clinical data set 211 using pre-processing techniques to generate transformed data that is provided to the training embedding model 204. During the pre-processing stage 203, any of the pre-processing techniques discussed above with respect to FIG. 2A can be used to generate the transformed data. In the example of FIG. 2B, the pre-trained target algorithm 213 generates an outcome 214 (also referred to as an output) based on the clinical data set 211 and/or the transformed data generated by the pre-processing stage 203, depending on the state of the switch 212.
[0069] The training embedding model 204 in FIG. 2B can be generated by any of the training embedding algorithms mentioned above with respect to FIG. 2A. The training embedding model 204 generates an estimation of a similarity between the clinical data set 211 and the training data set 201. As discussed above, the training data set 201 was used to develop the pre-trained target algorithm 213 and was embedded in the training embedding model 204 during the training stage. The training embedding model 204 then outputs the estimation of the similarity between the clinical data set 211 and the training data set 201 to the decision metric 205.
[0070] In the example of FIG. 2B, the decision metric 205 is a representation of the relationship between the clinical data set 211 and the training data set 201 that is utilized to determine if the clinical data set 211 is sufficiently similar to the training data set 201 for the algorithm 213 to produce a valid outcome 214. The decision metric 205 uses the estimation of the similarity between the clinical data set 211 and the training data set 201 generated by the training embedding model 204 to generate a trust output 206. The trust output 206 indicates whether the clinical data set 211 is sufficiently similar to the training data set 201 for the target algorithm 213 to produce a valid outcome 214. The trust output 206 is thus a quality assessment of the outcome 214 of the algorithm 213 that indicates whether the outcome 214 is correct or incorrect. The decision metric 205 can include any of the decision metrics discussed above with respect to FIG. 2A.
[0071] Figure 3A is a diagram that depicts examples of training configurations of a system that generates a training embedding model and a decision metric using an outcome of a pre-trained target algorithm during a training stage. The system of FIG. 3A includes the optional augmentation stage 202, the optional pre-processing stage 203, the training embedding model 204, the decision metric 205, and the pre-trained target algorithm 213. The augmentation stage 202, pre-processing stage 203, and the training embedding model 204 function as described above with respect to FIG. 2A. FIG. 3A also illustrates the training data set 201, switches 207 and 310, the trust output 206 generated by the decision metric 205, and the outcome 214 of the pre-trained target algorithm 213. The pre-trained target algorithm 213 has been previously trained using the entire training data set 201.
[0072] The system of FIG. 3A can be used in multiple different training configurations that are illustrated graphically in FIG. 3A by switches 207 and 310. The different training configurations provided by adjusting switch 207 are discussed above with respect to FIG. 2A. Switch 310 can also be adjusted according to additional training configurations to provide either the augmented training data set generated by augmentation stage 202 or the transformed training data generated by pre-processing stage 203 to the input of the pre-trained target algorithm 213. Any one or more of these training configurations can be used during the training stage.
[0073] The pre-trained target algorithm 213 then processes either the augmented training data set generated by augmentation stage 202 or the transformed training data generated by pre-processing stage 203 to generate an outcome 214. The outcome 214 is provided to an input of the decision metric 205 through path 311. In the training configurations of FIG. 3A, the decision metric 205 is generated using the training embedding model 204 and the outcome 214. Thus, in each of the training configurations of FIG. 3A, the outcome 214 of the algorithm 213 is used to train the decision metric 205 to generate the trust output 206. The algorithm 213 may in some embodiments receive continuous input training data prior to generating categorical outcomes 214. Examples of the decision metric 205 are disclosed above with respect to FIG. 2A.
[0074] Figure 3B is a diagram that depicts examples of clinical configurations of a system that can be used to generate a quality assessment of an outcome of an algorithm during a clinical application stage using the outcome of the algorithm. The system of FIG. 3B includes the optional pre-processing stage 203, the training embedding model 204, the decision metric 205, and the pre-trained target algorithm 213. The pre-processing stage 203 and the training embedding model 204 function as described above with respect to FIG. 2B.
[0075] FIG. 3B also illustrates a clinical data set 211, switch 212, the trust output 206 generated by the decision metric 205, and the outcome 214 of the algorithm 213. As with the example of FIG. 2B, switch 212 in the system of FIG. 3B can be adjusted to select one of two clinical configurations. Any one of these clinical configurations can be used during the clinical application stage. As with the example of FIG. 2B, the pre-trained target algorithm 213 in the system of FIG. 3B generates the outcome 214 based on the clinical data set 211 and/or the transformed data generated by the pre-processing stage 203, depending on the state of the switch 212.
[0076] The outcome 214 is provided to an input of the decision metric 205 through path 311. As discussed above, the decision metric 205 is a representation of the relationship between the clinical data set 211 and the training data set 201. In the example of FIG. 3B, the decision metric 205 compares the clinical data set 211 to the training data set 201 as indicated by the training embedding model 204 and evaluates the outcome 214 of the target algorithm 213 to provide an estimate of the validity of the outcome 214. The decision metric 205 can, for example, be used to estimate whether the outcome 214 is valid or trustworthy based on the outcome 214 of the algorithm and how well patterns in the training data set 201 that are identified in the training embedding model 204 account for the clinical input data 211. The decision metric 205 uses the estimation of the validity of the outcome 214 to generate the trust output 206. As a result, the trust output 206 is a quality assessment of the outcome 214 of the algorithm 213 that indicates a trustworthiness of the outcome 214. The decision metric 205 can include any of the decision metrics discussed above with respect to FIG. 2A.
[0077] Figure 4A is a diagram that depicts an example of a training configuration of a system that can generate a principal components analysis transform and residuals for evaluating the trustworthiness of an outcome of an algorithm during a training stage. The system of FIG. 4A includes an extract principal components stage 402, a comparison 403 of the explained variance, a principal component weights stage 404, a principal components analysis (PCA) transform stage 405, an inverse transform stage 407, a residuals stage 406, and a trust estimator 445. FIG. 4A also illustrates a training data set 401 and a target algorithm 410. The target algorithm 410 can be a machine learning (ML) algorithm that has been pre-trained with the training data set 401 or any other type of algorithm.
[0078] The training data set 401 is provided to the target algorithm 410. The target algorithm 410 generates outcomes using the training data set 401. Examples of the outcomes of the algorithm 410 are illustrated as positives and negatives in graph 420 shown in Figure 4B. These examples are not intended to be limiting. As a more specific example that is also not intended to be limiting, the algorithm 410 may generate the positives in graph 420 to indicate electrode arrays of the cochlear implants that have buckled or folded based on the corresponding TIMs in the training data set 401, and the algorithm 410 may generate the negatives in graph 420 to indicate electrode arrays of the cochlear implants that have not buckled or folded based on the corresponding TIMs in the training data set 401.
[0079] The training data set 401 is also provided to the PCA transform stage 405, to the residuals stage 406, and to the extract principal components stage 402. The extract principal components stage 402 extracts the principal components from the training data set 401 using a principal components analysis (PCA) algorithm. The principal components extracted from the training data set 401 at stage 402 correspond to the vectors in the training data set 401 that explain the most variance in the training data set 401.
[0080] The principal components extracted from the training data set 401 are then compared at comparison 403 to determine if the explained variance is greater than a percentage k%. If the explained variance is not greater than k% at comparison 403, then additional principal components are extracted from the training data set 401 at extract principal components stage 402 to increase the number of extracted principal components, and the additional principal components extracted at stage 402 are compared again at comparison 403.
[0081] If the explained variance is greater than k% at comparison 403, then principal component weights are generated at principal component weights stage 404 for the principal components extracted from the training data set 401 at stage 402. The principal component weights generated at stage 404 are a description of the vectors that explain most of the variance in the training data set 401. The principal component weights generated at stage 404 are then provided to the PCA transform stage 405.
[0082] The PCA transform stage 405 then multiplies each of the principal component weights received from stage 404 by a corresponding value in the training data set 401 and then adds the results of these multiplications together to generate a reconstruction. As an example, the PCA transform stage 405 can multiply each of the principal components by a gain for a corresponding TIM in the training data set 401 to generate a result, and then add all of the results together to generate the reconstruction.
[0083] The reconstruction generated by PCA transform stage 405 is provided to inverse transform stage 407. The inverse transform stage 407 then inverts the reconstruction generated by PCA transform stage 405 to generate an inverted reconstruction, which is provided to the residuals stage 406. Residuals stage 406 subtracts the inverted reconstruction received from inverse transform stage 407 from the corresponding values in the training data set 401 to generate residuals. The PCA transform stage 405 with the principal component weights, the inverse transform stage 407, and/or the residuals stage 406 function as the training embedding model 204 in this embodiment.
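Stages 402 through 407 map closely onto a standard PCA workflow, sketched below with scikit-learn. Passing a float as n_components retains just enough components to exceed that explained-variance fraction, which mirrors the loop through comparison 403; k = 0.95 and the function names are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_model(training_data: np.ndarray, k: float = 0.95) -> PCA:
    # Extract principal components until the explained variance exceeds k,
    # as in stages 402, 403, and 404 of FIG. 4A.
    return PCA(n_components=k).fit(training_data)

def compute_residuals(pca: PCA, data: np.ndarray) -> np.ndarray:
    # PCA transform (stage 405), inverse transform (stage 407), and
    # residuals (stage 406): one residual magnitude per measurement vector,
    # i.e., the difference between the observed data and its reconstruction.
    reconstruction = pca.inverse_transform(pca.transform(data))
    return np.linalg.norm(data - reconstruction, axis=1)
```

The same two functions apply unchanged to a clinical data set during the clinical application stage, since the residual computation there follows the identical transform, inverse transform, and subtraction steps.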
[0084] A trust estimator 445 then generates trust values 446 for the outcomes of the target algorithm 410 based on the values of the residuals generated by residuals stage 406 and based on the outcomes of the target algorithm 410. The trust values 446 indicate the trustworthiness of the corresponding outcomes of the target algorithm 410, and may indicate which outcomes of the algorithm 410 deviate substantially from expected outcomes and which outcomes correlate to the expected outcomes. The trust estimator 445 functions as the decision metric 205 in this embodiment.
[0085] The graph 420 shown in Figure 4B also illustrates examples of the trust values 446. The trust values 446 can be true or false values. The true values are shown in the left half of graph 420, and the false values are shown in the right half of graph 420. In the example of FIG. 4B, the false values are most likely to be generated in the middle range of the residuals around 0.4.
[0086] As an example, true negatives in graph 420 indicate that the corresponding negative outcomes of the target algorithm 410 are likely to be correct. As another example, true positives in graph 420 indicate that the corresponding positive outcomes of the target algorithm 410 are likely to be correct. As another example, false positives in graph 420 indicate that the corresponding positive outcomes of the target algorithm 410 are likely to be incorrect.
[0087] Figure 4C illustrates graphs 421-422 of probability curves that are examples of the trust values 446 generated by the trust estimator 445. Graph 421 illustrates examples of the probabilities of correctness of positive outcomes generated by the algorithm 410 (i.e., trust of positive outcomes) based on the residuals. Graph 422 illustrates examples of the probabilities of correctness of negative outcomes generated by the algorithm 410 (i.e., trust of negative outcomes) based on the residuals.
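One plausible way to obtain curves like those in graphs 421-422 is to fit, separately for positive and negative outcomes, a model of outcome correctness against the residual. Logistic regression is an assumed choice here; the disclosure does not prescribe a specific curve-fitting method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_trust_curve(residuals: np.ndarray, correct: np.ndarray) -> LogisticRegression:
    # residuals: residual magnitude for each training-stage outcome
    # correct: 1 where the target algorithm's outcome was correct, else 0
    # Fit one curve on the positive outcomes and one on the negatives.
    return LogisticRegression().fit(residuals.reshape(-1, 1), correct)

def trust_value(curve: LogisticRegression, residual: float) -> float:
    # Estimated probability that the corresponding outcome is correct.
    return float(curve.predict_proba([[residual]])[0, 1])
```

During the clinical stage of FIG. 4D, the fitted curves would then be evaluated at the residuals computed from the clinical data set to yield trust values for each outcome.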
[0088] Figure 4D is a diagram that depicts an example of a clinical configuration of a system that can be used to generate a quality assessment of an outcome of an algorithm using a PCA transform during a clinical application stage. The system of FIG. 4D includes principal components analysis (PCA) transform stage 405, inverse transform stage 407, residuals stage 406, and trust estimator 445. FIG. 4D also illustrates a clinical data set 450, target algorithm 410, the outcome 451 of the target algorithm 410, and the trust values 452. The target algorithm 410 generates the outcome 451 by processing the clinical data set 450. The outcome 451 indicates positive or negative results.
[0089] In the clinical configuration of FIG. 4D, PCA transform stage 405 multiplies each of the principal component weights by a corresponding value in clinical data set 450 and then adds the results of these multiplications together to generate a reconstruction. The reconstruction generated by PCA transform stage 405 is provided to the inverse transform stage 407. The inverse transform stage 407 then inverts the reconstruction generated by PCA transform stage 405 to generate an inverted reconstruction, which is provided to residuals stage 406. Residuals stage 406 subtracts the inverted reconstruction from the corresponding values in the clinical data set 450 to generate residuals. The residuals may, for example, correspond to the differences between the observed values and the estimated values in clinical data set 450.
[0090] The outcome 451 of the algorithm 410 and the residuals generated at residuals stage 406 are then passed to a trust estimator 445. For each outcome 451, the trust estimator 445 compares the residuals generated at residuals stage 406 in the clinical configuration of FIG. 4D to the probability curves for the trust values generated in the training configuration of FIG. 4A to generate trust values 452 for the residuals. Each of the trust values 452 for the residuals indicates the trustworthiness of a corresponding outcome 451 of the target algorithm 410. The trust values 452 can be, for example, true or false values. The trust values 452 indicate whether the clinical data set 450 is sufficiently similar to the training data set 401 for the target algorithm 410 to produce a valid outcome 451. The graphs 421-422 of the probability curves for the trust values generated during the training stage of FIG. 4A and shown in FIG. 4C illustrate examples that can be used by the trust estimator 445.
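A minimal sketch of the residual-to-trust lookup, assuming the training-stage probability curves (cf. graphs 421-422 of FIG. 4C) are available as sampled arrays; the linear interpolation and the example curve values are assumptions, not part of the disclosure.

```python
# Hypothetical lookup of a trust probability from training-stage curves.
import numpy as np

def trust_from_residual(residual, curve_residuals, curve_probabilities):
    """Interpolate a residual magnitude onto a probability-of-correctness
    curve to obtain a trust value for one outcome."""
    return float(np.interp(residual, curve_residuals, curve_probabilities))

# Example: trust of an outcome whose residual is 0.35, under an assumed
# curve where trust falls as residuals grow.
curve_r = np.linspace(0.0, 1.0, 11)   # residual axis (assumed)
curve_p = np.linspace(1.0, 0.0, 11)   # probability of correctness (assumed)
print(trust_from_residual(0.35, curve_r, curve_p))
```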
[0091] Figure 5A is a diagram that depicts an example of a training configuration of a system that can train a K nearest neighbors algorithm for evaluating the trustworthiness of an outcome of a target algorithm during a training stage. The system of FIG. 5A includes a K nearest neighbors stage 503 and an errors and correlation stage 502. FIG. 5A also illustrates an augmented training data set 501, a training data set 504 that excludes an input base, and graphs 511-512. The augmented training data set 501 can be generated by the augmentation stage 202 using data augmentation on an original training data set, as disclosed herein with respect to FIG. 2A.
[0092] The augmented training data set 501 and the training data set 504 are provided to inputs of the K nearest neighbors stage 503. For each item in the training data sets 501 and 504, K nearest neighbors stage 503 performs a K nearest neighbors algorithm that compares that item to every other item in the training data sets to identify the errors and correlations for that item. Repeating this comparison across every item in the training data sets 501 and 504 generates the errors and correlations for the full training data sets.
[0093] At the errors and correlation stage 502, a probability distribution function (PDF) is created of the errors and correlations for the items in the training data sets that were generated in K nearest neighbors stage 503. The errors can, as an example, correspond to residuals that indicate the differences between observed values and estimated or expected values in the training data sets. The correlations indicate how well each item correlates to the training data sets. The correlations can, as an example, be Pearson correlations. In the example of FIGS. 5A-5B, an outcome of a target algorithm is not used during the development of the errors and correlations.
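A minimal sketch of the leave-one-out comparison described in paragraphs [0092]-[0093], assuming each item is a feature vector, that an item is estimated from the mean of its K nearest neighbors, and that the correlations are Pearson correlations between an item and its estimate; these modeling choices and all names are assumptions for illustration.

```python
# Sketch of leave-one-out KNN errors and correlations; names are assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_errors_and_correlations(items, k=5):
    """For each item, find its K nearest neighbors among the other items,
    estimate the item from the neighbor mean, and record the residual error
    and the Pearson correlation between the item and its estimate."""
    items = np.asarray(items, dtype=float)
    # k + 1 because the nearest neighbor of an item is the item itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(items)
    _, idx = nn.kneighbors(items)
    errors, correlations = [], []
    for i, neighbors in enumerate(idx):
        others = neighbors[neighbors != i][:k]   # exclude the item itself
        estimate = items[others].mean(axis=0)
        errors.append(np.linalg.norm(items[i] - estimate))
        correlations.append(np.corrcoef(items[i], estimate)[0, 1])
    return np.asarray(errors), np.asarray(correlations)
```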
[0094] In some exemplary implementations, the training data sets 501 and 504 can include training matrices (e.g., transimpedance matrices (TIMs) for determining modiolus proximity) of electrode arrays of cochlear implants that have been implanted in recipients. Graphs 511 and 512 are plots generated by errors and correlation stage 502 according to an example in which the training data sets include training matrices. Graph 511 shows examples of the number of training matrices having various numbers of errors, and graph 512 shows examples of the number of training matrices having various correlation values, which are shown as absolute values of the correlations. In the examples shown in graph 511, all of the errors are less than a noise floor. In the examples shown in graph 512, all of the correlations are less than a threshold.
[0095] Figure 5B is a diagram that depicts an example of a clinical configuration of a system that can be used to generate a quality assessment of an outcome of a target algorithm using a K nearest neighbors algorithm during a clinical application stage. The system of FIG. 5B includes the K nearest neighbors stage 503, the errors and correlation stage 502, and trust estimator 545. FIG. 5B also illustrates a clinical data set 521, training data set 522, the target algorithm 531, the outcome 532 of the target algorithm 531, a trust value 523, and graphs 511-512.
[0096] The target algorithm 531 can be a pre-trained machine learning (ML) algorithm or any other type of algorithm. The clinical data set 521 is provided to an input of the target algorithm 531. The target algorithm 531 generates an outcome 532 by processing the clinical data set 521.
[0097] The clinical data set 521 and the training data set 522 are provided to the K nearest neighbors stage 503. The training data set 522 can include the training data sets 501 and 504 used in the training configuration of FIG. 5A or only the original training data set. For each item in the clinical data set 521, K nearest neighbors stage 503 performs a K nearest neighbors algorithm on every other item in the clinical data set 521 to identify the errors and correlations for that one item in the clinical data set 521. The errors and correlation stage 502 creates a probability distribution function (PDF) of the errors and correlations for the items in the clinical data set 521 that were generated in K nearest neighbors stage 503. As an example, the correlations can be based on how well the clinical data set 521 can be compressed to match a compressed version of the training data set 522. The K nearest neighbors stage 503 and/or the errors and correlation stage 502 function as the training embedding model 204 in this embodiment.
[0098] A trust estimator 545 compares the PDF of the errors and correlations for the items in the clinical data set 521 with the errors and correlations in the training data set 522 to generate trust values 523 that indicate trustworthiness of the outcome 532 of the algorithm 531. The trust values 523 are generated based on how similar the errors and correlations in the clinical data set 521 are to the errors and correlations in the training data set 522. Each trust value 523 indicates whether one or more items in the clinical data set 521 is sufficiently similar to the training data set 522 for the target algorithm 531 to produce a valid outcome 532. The trust estimator 545 functions as the decision metric 205 in this embodiment. Figure 5B also shows the graphs 511 and 512 as examples of the errors and correlations.
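The disclosure does not fix how the two distributions are compared; as one hedged possibility, a two-sample Kolmogorov-Smirnov statistic can score the similarity of the clinical and training error distributions.

```python
# Sketch of one possible trust comparison for FIG. 5B; the KS test and the
# threshold are assumptions, since the patent only says the distributions
# are compared for similarity.
from scipy.stats import ks_2samp

def knn_trust(clinical_errors, training_errors, threshold=0.05):
    """Return True when the clinical error distribution is statistically
    similar to the training error distribution."""
    statistic, p_value = ks_2samp(clinical_errors, training_errors)
    return bool(p_value > threshold)
```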
[0099] Figure 6A is a diagram that depicts an example of a training configuration of a system using statistical properties of pre-processed training data for generating a model during a training stage. The system of FIG. 6A includes a pre-processing and feature selection stage 603, a model 604, and a trust estimator 616. FIG. 6A also illustrates training data sets 601-602. FIG. 6B illustrates a graphical representation of the model 604 including statistical properties of pre-processed and selected features.
[0100] During the training stage of FIG. 6A, the training data sets 601 and 602 are provided to the pre-processing and feature selection stage 603. The pre-processing and feature selection stage 603 pre-processes signals (e.g., complex signals) in the training data sets 601-602 and then selects fiducial features from the pre-processed signals as pre-processed training data features. Examples of pre-processing that can be performed by the pre-processing and feature selection stage 603 on the training data sets 601-602 to generate the pre-processed signals include estimating noise, phase angle, magnitude, frequency components, covariates, and/or autocorrelations of one or more of the signals in the training data sets.
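As a hedged illustration of this pre-processing, the sketch below derives a few such fiducial features (a noise estimate, magnitude, phase angle, and dominant frequency) from a sampled signal; the particular estimators are assumptions, not the disclosure's.

```python
# Illustrative extraction of fiducial features from one signal; the specific
# estimators (FFT peak, diff-based noise proxy) are assumptions.
import numpy as np

def extract_features(signal, sample_rate):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    peak = int(np.argmax(np.abs(spectrum)))
    return {
        "noise": float(np.std(np.diff(signal)) / np.sqrt(2)),  # crude proxy
        "magnitude": float(np.abs(spectrum[peak])),
        "phase_angle": float(np.angle(spectrum[peak])),
        "dominant_frequency_hz": float(freqs[peak]),
    }
```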
[0101] The pre-processing and feature selection stage 603 can use expert knowledge of the feature space of the pre-processed training data features, combined with statistical analysis of those features, to define feasible expected ranges for them. The pre-processing and feature selection stage 603 then adds the pre-processed training data features (e.g., within the feasible expected ranges) to a model 604 that functions as the training embedding model 204 in this embodiment. Expert knowledge regarding the training data sets can also be added as additional features to the model 604. The pre-processing and feature selection stage 603 can also compare the pre-processed training data features to one another and add the interactions identified by these comparisons to the model 604. FIG. 6B illustrates a graphical representation of an example of model 604 that includes the statistical properties of the pre-processed training data features generated by the pre-processing and feature selection stage 603.
[0102] The pre-processed training data features selected by the pre-processing and feature selection stage 603, along with any additional features (e.g., from expert knowledge) and the interactions between these features, are compared to the probability mass function of each of these features and combined in the model 604. A trust estimator 616 then generates an estimate of the likelihood of the combined set of features occurring in the training data sets 601-602. The probabilities of occurrence of all of these features in the training data sets 601-602 can be combined in the model 604 and estimated by trust estimator 616, e.g., via a Bayesian estimator.
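A minimal sketch of such a combination, assuming discretized features and feature independence (one simple reading of the Bayesian estimator mentioned above); the naive-Bayes-style product and all names are assumptions.

```python
# Sketch of combining per-feature probabilities of occurrence into one joint
# likelihood under an assumed independence model.
import numpy as np

def combined_likelihood(features, pmfs):
    """features: mapping of feature name -> observed bin index
    pmfs:        mapping of feature name -> probability mass function (array)
    Returns the joint probability of the observed combination under the
    training-data model, assuming feature independence."""
    probs = [pmfs[name][bin_index] for name, bin_index in features.items()]
    return float(np.prod(probs))
```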
[0103] The trust estimator 616 can assign a trust equivalent to each of the features in the model 604. The trust equivalent indicates the probability of occurrence of that feature based on probability mass functions or distributions of the features in the training data sets 601-602. In the example of FIGS. 6A-6C, the outcome of a target algorithm is not used during the development of the model 604.
[0104] Rarer features in the training data sets 601-602 have less influence on the training of the algorithm used in the pre-processing and feature selection stage 603. Thus, any determination made by trust estimator 616 for a combination of features that occurs rarely in the training data sets 601-602 yields low trust in the algorithmic assessment of those features. These statistically derived features can be augmented by expert-defined interaction probabilities or value limits when generating the model 604.
[0105] As a specific example that is not intended to be limiting, the training data sets 601-602 may include biological signals (e.g., auditory signals) from recipients of medical devices, such as cochlear implants. The biological signals in the training data sets 601-602 can, for example, be generated in response to external stimuli. The graphical representation of the model 604 shown in FIG. 6B is a histogram showing examples of 4 features from the training data sets along the diagonal of the graph and the results of comparisons of each of these 4 features to every other one of the 4 features arranged in corresponding rows and columns of the histogram. Thus, the model 604 can include features from the training data sets as well as how the features interact with each other.

[0106] Figure 6C is a diagram that depicts an example of a clinical configuration of a system that can be used to generate a quality assessment of an outcome of a target algorithm using statistical properties of pre-processed data during a clinical application stage. The system of FIG. 6C includes the pre-processing and feature selection stage 603, model 604, a target algorithm 614, and a trust estimator 616. FIG. 6C also illustrates clinical data sets 611-612, the outcome 615 of the target algorithm 614, and one or more trust values 617. The target algorithm 614 can be a pre-trained ML algorithm or any other type of algorithm.
[0107] During the clinical application stage of FIG. 6C, the clinical data sets 611-612 are provided to the pre-processing and feature selection stage 603. The pre-processing and feature selection stage 603 pre-processes signals in the clinical data sets 611-612 and then selects fiducial features from the pre-processed signals as pre-processed clinical data features. The pre-processing and feature selection stage 603 can perform pre-processing operations on the signals in the clinical data sets 611-612, such as estimating noise, phase angle, magnitude, and frequency components of one or more of the signals.
[0108] The pre-processing and feature selection stage 603 can use expert knowledge of the feature space of the pre-processed clinical data features combined with statistical analysis of the pre-processed clinical data features to define feasible expected ranges for the pre-processed clinical data features. The pre-processed clinical data features within the feasible expected ranges are then provided to target algorithm 614 and the trust estimator 616. The target algorithm 614 then processes the pre-processed clinical data features within the feasible expected ranges to generate an outcome 615.
[0109] The trust estimator 616 assigns a trust value 617 to the pre-processed clinical data features received from stage 603 based on the features in the model 604 generated during the training stage of FIG. 6A. The trust value 617 indicates an estimate of the probability of occurrence of that set of pre-processed clinical data features based on the probability mass functions, distributions, covariances, and/or joint probability mass functions of the features from the training data sets 601-602 in the model 604. As a result, the trust value 617 for the pre-processed clinical data features indicates how similar these features are to the features from the training data sets that are represented in the model 604. As a specific example, the trust estimator 616 can compare the pre-processed clinical data features to a histogram of features in the model 604 to generate the corresponding trust value 617 for the pre-processed clinical data features.

[0110] Figure 7A is a diagram that depicts an example of a training configuration of a system that can train generative adversarial networks for evaluating trustworthiness of an outcome of a target algorithm during a training stage. The system of FIG. 7A includes an input selector 702, a discriminator network 703, and a generative network 706. FIG. 7A also illustrates an augmented training data set 701, a discriminator loss 704, a generator loss 705, and a trust value 707. The augmented training data set 701 can be generated by the augmentation stage 202 using data augmentation on an original training data set, as disclosed herein with respect to FIG. 2A.
[0111] The system of FIG. 7A has generative adversarial networks that include two neural networks. The first of these two neural networks is generative network 706. Generative network 706 produces data that matches the real training data as closely as possible, without necessarily having access to the training data. The second of these two neural networks is discriminator network 703. Discriminator network 703 differentiates between the real training data and the data produced by the generative network 706. The goal of the generative adversarial networks of FIG. 7A is to have generative network 706 produce an output indistinguishable from the training data and to have discriminator network 703 correctly identify valid training data.
[0112] The generative network 706 produces data that matches (i.e., is indistinguishable from) the training data in the augmented training data set 701 as closely as possible in response to random noise and in response to the generator loss 705, without having access to the training data. The data produced by the generative network 706 is provided to the input selector 702. The augmented training data set 701 is also provided to the input selector 702. The input selector 702 provides the augmented training data set 701 and/or the data produced by the generative network 706 to the discriminator network 703.
[0113] The discriminator network 703 differentiates between the augmented training data set 701 and the data produced by the generative network 706 in response to the discriminator loss 704. The output of the discriminator network 703 indicates the differences between the augmented training data set 701 and the data produced by the generative network 706. The discriminator network 703 generates the discriminator loss 704 and the generator loss 705. The discriminator network 703 also generates a trust value 707 that indicates a probability that the differences identified by the discriminator network 703 are real.
[0114] The generator loss 705 indicates the error in the output of the generative network 706. The generative network 706 uses the generator loss 705 in backpropagation to adjust the weights of the nodes in its neural network in each iteration of the training stage of FIG. 7A to reduce the error. The discriminator loss 704 indicates the error in the output of the discriminator network 703. The discriminator network 703 uses the discriminator loss 704 in backpropagation to adjust the weights of the nodes in its neural network in each iteration of the training stage of FIG. 7A to reduce the error. After enough iterations, the training stage is complete, and the discriminator network 703 can be used in a clinical application, as disclosed herein, for example, with respect to FIG. 7B.
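A compact sketch of this adversarial training loop, written in PyTorch for illustration; the network sizes, optimizers, and binary cross-entropy losses are assumptions, since the disclosure does not specify architectures.

```python
# Minimal GAN training-step sketch for the setup of FIG. 7A; all sizes and
# hyperparameters are assumptions.
import torch
from torch import nn

latent_dim, data_dim = 16, 64  # assumed sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_batch):
    batch = real_batch.size(0)
    noise = torch.randn(batch, latent_dim)
    fake = generator(noise)

    # Discriminator update: tell real training data from generated data.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()          # discriminator loss 704 via backpropagation
    d_opt.step()

    # Generator update: produce data the discriminator accepts as real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()          # generator loss 705 via backpropagation
    g_opt.step()
    return d_loss.item(), g_loss.item()
```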
[0115] Figure 7B is a diagram that depicts an example of a clinical configuration of a system that can be used to generate a quality assessment of an outcome of a target algorithm using a discriminator network during a clinical application stage. The system of FIG. 7B includes the discriminator network 703 and a target algorithm 724. FIG. 7B also illustrates clinical data set 721, the outcome 725 of the target algorithm 724, and one or more trust values 723. The target algorithm 724 can be a pre-trained ML algorithm or any other type of algorithm. The target algorithm 724 processes the clinical data set 721 to generate the outcome 725.
[0116] In the embodiment of FIG. 7B, the discriminator network 703 is used as the training embedding model 204 and the decision metric 205 after the discriminator network 703 has been trained by the training stage of FIG. 7A. In the clinical application stage of FIG. 7B, the clinical data set 721 is provided to the discriminator network 703. If the discriminator network 703 has been successfully trained by the training stage of FIG. 7A, then the discriminator network 703 can be utilized alongside target algorithm 724 to generate a trust value 723 that estimates how likely it is that the algorithm 724 has previously encountered data similar to the clinical data set 721. In an alternative embodiment, the output of the discriminator network 703 can be provided as an input to the target algorithm 724.
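A minimal sketch of that clinical use, assuming the discriminator from the training sketch above; averaging the per-item scores into a single trust value and the 0.5 review threshold are assumed conventions.

```python
# Sketch of the clinical stage of FIG. 7B: the trained discriminator scores
# how "training-like" a clinical data set looks; that score serves as the
# trust value for the target algorithm's outcome.
import torch

@torch.no_grad()
def trust_value(discriminator, clinical_batch):
    scores = discriminator(clinical_batch)   # probability data is "real"
    return scores.mean().item()              # trust value 723 (assumed form)

# Usage (assumed convention): flag outcome 725 when data looks unfamiliar.
# if trust_value(discriminator, clinical) < 0.5: flag_outcome_for_review()
```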
[0117] As a specific example that is not intended to be limiting, target algorithm 724 can be trained to generate outcomes 725 that indicate whether TIMs generated from electrode arrays in cochlear implants are indicative of buckling or folding. In this example, the discriminator network 703 is trained using TIMs generated from electrode arrays of cochlear implants during the training stage of FIG. 7A. During the clinical application stage, the discriminator network 703 generates a trust value 723 that indicates whether the outcomes 725 regarding the TIMs are trustworthy based on whether the TIMs in the clinical data set 721 are similar enough to the TIMs in the training data set.
[0118] Figure 8 is a diagram that illustrates an example of a computing system 800 within which one or more of the disclosed embodiments can be implemented. For example, computing system 800 can be used to generate and provide control signals to one or more of the medical devices or systems disclosed herein with respect to FIGS. 1A-7B.
[0119] Computing systems, environments, or configurations that can be suitable for use with examples described herein include, but are not limited to, personal computers, server computers, hand-held devices, laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics (e.g., smart phones), network computers, minicomputers, mainframe computers, tablets, distributed computing environments that include any of the above systems or devices, and the like. The computing system 800 can be a single virtual or physical device operating in a networked environment over communication links to one or more remote devices. The remote device can be an auditory prosthesis (e.g., the device or system of any one of FIGS. 1A-1D), a personal computer, a server, a router, a network personal computer, a peer device, or other common network node.
[0120] Computing system 800 includes at least one processing unit 802 and memory 804. The processing unit 802 includes one or more hardware or software processors (e.g., Central Processing Units) that can obtain and execute instructions. The processing unit 802 can communicate with and control the performance of other components of the computing system 800. The memory 804 is one or more software-based or hardware-based computer-readable storage media operable to store information accessible by the processing unit 802.
[0121] The memory 804 can store instructions executable by the processing unit 802 to implement applications or cause performance of operations described herein, as well as store other data. The memory 804 can be volatile memory (e.g., random access memory or RAM), non-volatile memory (e.g., read-only memory or ROM), or combinations thereof. The memory 804 can include transitory memory or non-transitory memory. The memory 804 can also include one or more removable or non-removable storage devices. In examples, the memory 804 can include non-transitory computer readable storage media, such as RAM, ROM, EEPROM (Electronically-Erasable Programmable Read-Only Memory), flash memory, optical disc storage, magnetic storage, solid state storage, or any other memory media usable to store information for later access. In examples, the memory 804 encompasses a modulated data signal (e.g., a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal), such as a carrier wave or other transport mechanism and includes any information delivery media. By way of example, and not limitation, the memory 804 can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio-frequency, infrared and other wireless media or combinations thereof.
[0122] In the illustrated example, the system 800 further includes a network adapter 806, one or more input devices 808, and one or more output devices 810. The system 800 can include other components, such as a system bus, component interfaces, a graphics system, a power source (e.g., a battery), among other components.
[0123] The network adapter 806 is a component of the computing system 800 that provides network access to network 812. The network adapter 806 can provide wired or wireless network access and can support one or more of a variety of communication technologies and protocols, such as Ethernet, cellular, Bluetooth, near-field communication, and RF (radio frequency), among others. The network adapter 806 can include one or more antennas and associated components configured for wireless communication according to one or more wireless communication technologies and protocols.
[0124] The one or more input devices 808 are devices over which the computing system 800 receives input from a user. The one or more input devices 808 can include physically-actuatable user-interface elements (e.g., buttons, switches, or dials), touch screens, keyboards, mice, pens, and voice input devices, among other input devices.
[0125] The one or more output devices 810 are devices by which the computing system 800 is able to provide output to a user. The output devices 810 can include displays, speakers, and printers, among other output devices.
[0126] Any embodiment or any feature disclosed herein can be combined with any one or more other embodiments and/or other features disclosed herein, unless explicitly indicated otherwise. Any embodiment or any feature disclosed herein can be explicitly excluded from use with any one or more other embodiments and/or other features disclosed herein, unless explicitly indicated otherwise. It is noted that any method detailed herein also corresponds to a disclosure of a device, computer readable storage medium, and/or system configured to execute one or more or all of the method actions associated with the device, computer readable storage medium, and/or system as detailed herein. It is further noted that any disclosure of a device, computer readable storage medium, and/or system detailed herein corresponds to a method of making and/or using that device, computer readable storage medium, and/or system, including a method of using that device, system, or computer readable storage medium, according to the functionality detailed herein.
[0127] The foregoing description of the exemplary embodiments of the present invention has been presented for the purpose of illustration. The foregoing description is not intended to be exhaustive or to limit the present invention to the examples disclosed herein. In some instances, features of the present invention can be employed without a corresponding use of other features as set forth. Many modifications, substitutions, and variations are possible in light of the above teachings, without departing from the scope of the present invention.

Claims

What is claimed is:
1. A method comprising: generating a model that is representative of first data, the first data used to train an algorithm; and generating a decision metric for determining if a similarity between second data input to the algorithm and the first data is sufficient for the algorithm to generate an output that is valid based on patterns of the first data that are identified in the model accounting for the second data.
2. The method of claim 1, wherein the algorithm is trained by the first data to detect features of measurements obtained using implantable medical devices to generate the output.
3. The method of claim 2, wherein the measurements obtained using the implantable medical devices comprise electrophysiological or tissue-related responses to stimulus.
4. The method of any one of claims 1-3 further comprising: augmenting the first data to generate third data using data augmentation, wherein generating the model further comprises causing the model to be representative of the third data.
5. The method of any one of claims 1-4 further comprising: pre-processing the first data to generate pre-processed data by at least one of smoothing or filtering the first data, aggregating the first data, normalizing the first data to a fixed dynamic range, reducing dimensionality of the first data, mitigating an impact of measurement noise in the first data, selecting for fiducial information in the first data, or transforming the first data to another representation, wherein generating the model further comprises causing the model to be representative of the pre-processed data.
6. The method of any one of claims 1-5, wherein generating the model that is representative of the first data comprises: generating the model by performing a K nearest neighbors algorithm that compares each item in the first data to every other item in the first data to identify errors and correlations.
7. The method of any one of claims 1-6, wherein generating the model that is representative of the first data comprises: selecting features from the first data; comparing the features to identify interactions between the features; and adding the interactions between the features to the model.
8. The method of any one of claims 1-7, wherein generating the decision metric comprises: generating the decision metric to produce a trust value indicative of a probability of occurrence of features in the second data based on probability mass functions or distributions of the features in the first data.
9. The method of any one of claims 1-8, wherein generating the model that is representative of the first data comprises: training a discriminator network that differentiates between the first data and third data produced by a generative network that causes the third data to match the first data, wherein the generative network and the discriminator network are adversarial networks.
10. The method of any one of claims 1-9, wherein generating the decision metric comprises: causing the decision metric to indicate a representation of a relationship between the second data input to the algorithm and the first data for estimating if the output generated by the algorithm is trustworthy using the output of the algorithm.
11. The method of any one of claims 1-10, wherein generating the model that is representative of the first data comprises: generating weights for principal components extracted from the first data that are a description of vectors that account for variance in the first data using a principal components analysis.
12. The method of any one of claims 1-11, wherein the method is implemented by a computing system comprising at least one processing unit and memory.
13. A computing system comprising: one or more processing units that generate a model that comprises information from first data used to develop an algorithm, wherein the computing system generates a trust estimator that uses the model and an outcome of the algorithm generated based on second data to provide a representation of a relationship between the first data and the second data for estimating whether the outcome of the algorithm is trustworthy.
14. The computing system of claim 13, wherein the computing system causes the trust estimator to provide the representation of the relationship between the first data and the second data based on patterns in the first data accounting for the second data.
15. The computing system of any one of claims 13-14, wherein the one or more processing units generate the model based on principal components weights that are derived from the first data using a principal components analysis and that are a description of vectors that account for a majority of variance in the first data.
16. The computing system of claim 15, wherein the one or more processing units generate the model by multiplying the principal component weights by values in the first data to generate results and then summing the results to generate a reconstruction.
17. The computing system of claim 16, wherein the one or more processing units generate the model by inverting the reconstruction to generate an inverted reconstruction and subtracting the inverted reconstruction from the values in the first data to generate residuals, and wherein the trust estimator generates a trust value for the outcome of the algorithm based on the residuals.
18. The computing system of any one of claims 13-17, wherein the algorithm is a machine learning algorithm.
19. The computing system of any one of claims 13-18, wherein the one or more processing units generate the model using first measurements from first implantable medical devices used to develop the algorithm, and wherein the outcome of the algorithm is generated based on a second measurement from a second implantable medical device.
20. A non-transitory computer readable storage medium comprising computer readable instructions stored thereon for causing a computing system to: generate a quality assessment of a relationship between a first data set and a second data set using a model that comprises features of the first data set; and determine a similarity between the first data set and the second data set using a trust estimator that processes the quality assessment to assess a trustworthiness of an outcome that an algorithm generates in response to the second data set, wherein the algorithm is developed using the first data set.
21. The non-transitory computer readable storage medium of claim 20, wherein the computer readable instructions further cause the computing system to: augment or transform a third data set to generate the first data set.
22. The non-transitory computer readable storage medium of any one of claims 20-21, wherein the computer readable instructions further cause the computing system to: compare features of the second data set against probability density or mass functions for the first data set using the trust estimator to calculate a probability of occurrence of the features in the second data set.
23. The non-transitory computer readable storage medium of any one of claims 20-22, wherein the computer readable instructions further cause the computing system to: generate the quality assessment of the relationship between the first data set and the second data set using the model by performing a K nearest neighbors algorithm on the second data set to identify errors and correlations in the second data set; create a probability distribution function of the errors and the correlations in the second data set; and compare the probability distribution function of the errors and the correlations with the first data set using the trust estimator to assess the trustworthiness of the outcome of the algorithm.
24. The non-transitory computer readable storage medium of any one of claims 20-23, wherein the computer readable instructions further cause the computing system to: generate the quality assessment of the relationship between the first data set and the second data set using a discriminator network, wherein the discriminator network is trained on the first data set using a generative network; and determine the similarity between the first data set and the second data set using the discriminator network that processes the quality assessment to assess the trustworthiness of the outcome of the algorithm.
25. The non-transitory computer readable storage medium of any one of claims 20-24, wherein the computer readable instructions further cause the computing system to: cause the trust estimator to process the quality assessment of the relationship between the first data set and the second data set based on patterns in the first data set accounting for the second data set.
26. The non-transitory computer readable storage medium of any one of claims 20-25, wherein the second data set comprises at least one of electrical measurements or medical images of electrode arrays of cochlear implants in recipients.
27. A computer implemented method for estimating a trustworthiness of an output of an algorithm that has been trained using training data, the computer implemented method comprising: generating a representation of a relationship between the training data and input data using a model that comprises a description of the training data, wherein the algorithm uses the input data to generate the output; and generating a trust value for the output of the algorithm using a decision metric based on the output of the algorithm and based on the representation of the relationship between the training data and the input data.
28. The computer implemented method of claim 27, wherein generating the representation of the relationship between the training data and the input data using the model further comprises: multiplying weights of principal components derived from the training data by values in the input data to generate results; summing the results to generate a reconstruction; and subtracting an inversion of the reconstruction from the values in the input data to generate residuals.
29. The computer implemented method of claim 28, wherein generating the trust value for the output of the algorithm using the decision metric further comprises: comparing trust probability curves generated in a training configuration of the decision metric to the residuals to generate the trust value for the output.
30. The computer implemented method of any one of claims 27-29, wherein generating the trust value for the output of the algorithm using the decision metric further comprises: generating the trust value for the output of the algorithm based on patterns in the training data accounting for the input data.
31. The computer implemented method of any one of claims 27-30, wherein the input data comprises transimpedance matrices generated from electrode arrays of cochlear implants implanted in recipients.
32. The computer implemented method of any one of claims 27-31, wherein generating the representation of the relationship between the training data and the input data using the model comprises generating the representation of the relationship using the model that comprises the description of the training data from first implantable medical devices, and wherein the algorithm generates the output based on a measurement from a second implantable medical device.