CN118660667A - Management of psychosis or psychiatric conditions using digital or augmented reality with personalized exposure progression - Google Patents

Management of psychosis or psychiatric conditions using digital or augmented reality with personalized exposure progression

Info

Publication number
CN118660667A
Authority
CN
China
Prior art keywords
subject
threshold
biometric
category
disorder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280090215.XA
Other languages
Chinese (zh)
Inventor
A·阿拉姆
B·哈吉斯
C·扎列斯基
E·安德森
G·米特斯
J·蔡
M·泰勒
R·韦斯伯格
S·扎德
T·格林内尔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bechville LLC
Frontact Corp
Original Assignee
Sumitomo Pharmaceuticals Co Ltd
Bechville LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sumitomo Pharmaceuticals Co Ltd and Bechville LLC
Priority claimed from PCT/US2022/051549, external priority patent WO2023102125A1
Publication of CN118660667A

Abstract


Systems and methods for implementing an exposure progression are provided that improve a subject's ability to manage the subject's psychosis or psychiatric condition. The exposure progression includes a plurality of categories in a hierarchical order. Each category is associated with one or more experiences, and each experience is associated with a digital reality scene that presents a challenge. In some embodiments, the hierarchical order is dynamically generated or modified based at least in part on the level of success that the subject has in one or more challenges, thereby producing an exposure progression that is not only customized for each subject but also has personalized timing and/or nature of the exposure practice.

Description

Management of psychosis or psychiatric conditions using digital or augmented reality with personalized exposure progression
Cross Reference to Related Applications
The present application claims priority from U.S. provisional patent application No. 63/284,862, entitled "Management of Psychiatric or Mental Conditions Using Digital or Augmented Reality with Personalized Exposure Progression", filed on December 1, 2021, which is incorporated herein by reference in its entirety for all purposes. In addition, the present application claims priority from U.S. provisional patent application No. 63/415,876, entitled "Management of Psychiatric or Mental Conditions Using Digital or Augmented Reality with Personalized Exposure Progression", filed on October 13, 2022, which is incorporated herein by reference in its entirety for all purposes.
Technical Field
The present disclosure relates to systems, methods, and devices for preparing a digital reality or augmented reality based solution to manage a mental illness or condition exhibited by a subject.
Background
Demand for access to mental health care facilities and services that improve patients' mental health remains high. However, there is no evidence that this increased access to mental health care facilities has reduced the prevalence of mental health problems. In fact, mental health problems among patients have increased in recent years. See Mojtabai et al., "Trends in Psychological Distress, Depressive Episodes and Mental-Health Treatment-Seeking in the United States: 2001-2012," Journal of Affective Disorders, 174, pg. 556.
Furthermore, the increasing demand for mental health care facilities has led to a proportional increase in the demand for health care practitioners and professionals to provide services at those facilities. As a result, the stress (both psychological and physiological) experienced by health care practitioners and professionals increases, which can prevent them from providing optimal service. See Ruiz-Fernandez et al., 2020, "Compassion Fatigue, Burnout, Compassion Satisfaction and Perceived Stress in Healthcare Professionals During the COVID-19 Health Crisis in Spain," Journal of Clinical Nursing, 29(21-22), pg. 4321-4330.
Traditional solutions for improving mental health are laborious and require substantial resources from all involved participants. For example, traditional solutions often require time-consuming and expensive face-to-face sessions between the clinician and the patient. Furthermore, given the private nature of such face-to-face sessions, they do not easily allow the clinician to observe the patient during exposure to the patient's potential mental health triggers.
Furthermore, conventional solutions lack satisfactory efficacy for treating certain mental health problems. For example, while conventional face-to-face cognitive and/or behavioral exposure techniques have generally demonstrated some efficacy, they lack significant efficacy, particularly for post-traumatic stress disorder (PTSD), Social Anxiety Disorder (SAD), and panic disorder. See Carpenter et al., 2018, "Cognitive Behavioral Therapy for Anxiety and Related Disorders: A Meta-analysis of Randomized Placebo-controlled Trials," Depression and Anxiety, 35(6), pg. 502.
At the same time, interactive computer-implemented gaming and services are expanding. However, existing solutions that utilize computer-implemented gaming to improve mental health have been unsatisfactory. One reason for this dissatisfaction is the requirement that a therapist be present with the patient during a computer-implemented therapeutic gaming session. See Freeman et al., 2017, "Virtual Reality in the Assessment, Understanding, and Treatment of Mental Health Disorders," Psychological Medicine, 47(14), pg. 2393. This requirement places a heavy burden on the time, space, and financial resources available to both the patient and the healthcare practitioner.
Accordingly, there is a need for systems, methods, and apparatus for improving the mental health of subjects without overburdening the subjects or their medical practitioners.
Disclosure of Invention
In view of the foregoing background, there is a need in the art for systems and methods for preparing a regimen that improves a subject's ability to manage a mental disorder or condition exhibited by the subject.
The present disclosure provides improved systems and methods for implementing an exposure progression that increases a subject's ability to manage or improve the subject's mental disease or condition.
An aspect of the present disclosure is directed to providing systems, methods, and apparatus for implementing an exposure progression. An exposure progression is a series of events in digital reality configured to enhance the subject's ability to manage the subject's mental disease or condition. For example, in some embodiments, the series of events includes various experiences presented to the subject, such as two or more experiences, three or more experiences, five or more experiences, or 10 or more experiences. Thus, in some such embodiments, the methods of the present disclosure are implemented at a computer system associated with a subject. The computer system includes one or more processors and a display for presenting at least a digital reality scene. In some embodiments, the computer system includes one or more speakers or headphones for presenting the auditory aspects of the digital reality scene. Further, the computer system includes a plurality of sensors and a memory coupled to the one or more processors. The memory includes one or more programs configured to be executed by the one or more processors.
Thus, the method includes obtaining a plurality of categories for the subject. In some embodiments, each respective category of the plurality of categories relates to improving a particular ability of the subject to manage the psychosis or mental condition. In some embodiments, the plurality of categories includes an exposure category, a cognitive behavioral therapy (CBT) category, a mindfulness category, a general category, or a combination thereof. Non-limiting examples of exposure categories include a first social interaction and/or interaction anxiety category, a second public performance anxiety category, a third fear-of-being-observed category, a fourth eating anxiety (e.g., anxiety associated with food consumption) category, a fifth confidence anxiety category, or a combination thereof. Non-limiting examples of CBT categories include a sixth cognitive restructuring category, a seventh usefulness category, an eighth dissociation category, or a combination thereof. Each respective category is associated with a corresponding plurality of suggested experiences. Each of the corresponding plurality of suggested experiences is a task or challenge in digital reality configured to enhance a particular ability of the subject to manage the psychosis or mental condition. Non-limiting examples of experiences associated with the first social interaction and/or interaction anxiety category include exercises to improve interaction anxiety by being left alone (e.g., at a wedding, at a park, etc.) and having to interact with strangers. A non-limiting example of an experience associated with a CBT category includes a cognitive restructuring exercise configured to identify catastrophic thoughts, and assumptions about how the subject will be perceived, that lead to anxiety in the subject.
Further, each respective category is associated with at least one respective gate criterion of a plurality of gate criteria. In some embodiments, a gate criterion is a precondition that must be met in order for the respective category to be considered complete for the subject. A non-limiting example of a precondition is a requirement that at least two of the corresponding plurality of suggested experiences be successfully completed by the subject before the subject is allowed to invoke a given category. Another non-limiting example of a precondition is a requirement that the subject meet a threshold number of interactions with a digital reality object when interacting with an experience. Each respective suggested experience in the corresponding plurality of suggested experiences is associated with a corresponding digital reality scene that presents a corresponding challenge designed for the respective suggested experience of the respective category.
In some embodiments, each suggested experience is further associated with at least one biometric measurement and a threshold of a plurality of biometric thresholds, which allows the method to capture at least one biometric data element from the subject during the corresponding digital reality scene. For example, in some embodiments, the at least one biometric measurement includes a vocal feature associated with the subject (e.g., the entropy of a vocal signal obtained from the subject), or a spatial feature associated with the subject when interacting with the digital reality scene (e.g., movement of the subject's hand when interacting with the digital reality scene). Thus, in some embodiments, the disclosed methods include presenting on the display a first digital reality scene that presents a first challenge designed for a first suggested experience of a first category (based on a selection by the subject among the plurality of categories). In some embodiments, in coordination with the presentation of the first digital reality scene, the disclosed method includes obtaining a first plurality of data elements. In some embodiments, the first plurality of data elements comprises a first set of biometric data elements. The first set of biometric data elements is obtained from a subset of sensors of the plurality of sensors. In some embodiments, the subset of sensors includes one sensor, at least two sensors, or at least four sensors. The subset of sensors includes at least one biometric sensor configured to capture at least one biometric data element associated with the subject while the subject is completing the first challenge in the first digital reality scene. In some embodiments, the at least one biometric sensor comprises a first biometric sensor that is a heart rate sensor, a heart rate variability sensor, a blood pressure sensor, a galvanic skin activity sensor, a galvanic skin response sensor, an electroencephalogram sensor, an eye tracking sensor, a recorder, a microphone, a thermometer, or any combination thereof. Thus, in some such embodiments, the disclosed methods include determining whether the subject's performance during the presentation meets or exceeds various biometric thresholds, whether each of the at least one respective gate criterion associated with the first category is met, or both. In some such embodiments, based on this determination, which includes an evaluation of the at least one biometric data element, the disclosed method includes determining, based at least in part on the results of the determination, a second category of the plurality of categories for the subject to proceed to next that will optimally enhance the subject's ability to manage the psychosis or mental condition. Thus, the disclosed method achieves a progression of exposure from the first category to the second category by selecting for the subject a second category that improves the subject's ability to manage the subject's mental disease or condition, rather than a third category that is less optimal for the subject than the second category.
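The flow just described can be pictured concretely. The following is a minimal, hypothetical sketch, not the patent's implementation: present each challenge of a category, capture biometric measurements during the scene, test them against the experience's thresholds, and report whether the category's gate criterion is met. All names and the simple upper-bound threshold rule are assumptions.

```python
# Hypothetical sketch only; Experience, Category, and the threshold rule
# are illustrative, not the patent's actual data model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Experience:
    name: str
    thresholds: dict[str, float]  # e.g. {"max_heart_rate_bpm": 110.0}

@dataclass
class Category:
    name: str
    experiences: list[Experience]
    gate: Callable[[int], bool]   # e.g. lambda done: done >= 2

def thresholds_met(measured: dict[str, float], thresholds: dict[str, float]) -> bool:
    """Treat every threshold as an upper bound for this sketch."""
    return all(measured.get(k, float("inf")) <= v for k, v in thresholds.items())

def run_category(category: Category,
                 capture: Callable[[Experience], dict[str, float]]) -> bool:
    """Present each experience's challenge, capture biometrics during the
    scene, and report whether the category's gate criterion is met."""
    completed = 0
    for experience in category.experiences:
        measured = capture(experience)  # sensor data gathered in the scene
        if thresholds_met(measured, experience.thresholds):
            completed += 1
    return category.gate(completed)
```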
In some embodiments, when the plurality of categories is obtained, the plurality of categories is initially arranged in an initial instance of the exposure progression, the initial instance being set by a system administrator, the subject, a health care worker (e.g., a healthcare practitioner) associated with the subject, a model (e.g., a computational model), or a combination thereof.
In some embodiments, the method further comprises: before presenting the first digital reality scene, presenting on the display a graph representing the initial instance of the exposure progression. The graph includes a plurality of nodes and a plurality of edges. In some embodiments, for each respective node of the plurality of nodes, the graph further includes a corresponding plurality of experience graphics displayed adjacent to the respective node. In some embodiments, each respective node of the plurality of nodes corresponds to a respective category of the plurality of categories. Further, in some embodiments, each respective node is associated with a corresponding plurality of suggested experiences. Further, each respective node is associated with at least one respective gate criterion of the plurality of gate criteria. For each respective node of the plurality of nodes, each respective experience graphic of the corresponding plurality of experience graphics corresponds to a respective suggested experience of the corresponding plurality of suggested experiences and is associated with at least one biometric threshold of the plurality of biometric thresholds. In addition, in some embodiments, each respective node of the plurality of nodes is connected to at least one other node in the graph by an edge of the plurality of edges. Further, in some embodiments, each respective edge of the plurality of edges represents progress within the graph from a respective initial node to a respective subsequent node when the subject successfully completes a required number of corresponding challenges associated with the respective initial node.
In some embodiments, for each respective node of the plurality of nodes, the graph further includes a corresponding plurality of branches. Further, each respective experience graphic of the corresponding plurality of experience graphics is connected to the respective node through a branch of the corresponding plurality of branches.
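As a concrete illustration of the graph just described, a minimal adjacency structure might look like the following; the node names, gate counts, and dictionary layout are assumptions for illustration, not the patent's schema.

```python
# Illustrative only: nodes are categories, branches attach suggested
# experiences to their node, and edges encode the progression order.
progression_graph = {
    "nodes": {
        "social_interaction": {"experiences": ["wedding", "park"], "gate": 2},
        "public_performance": {"experiences": ["toast", "meeting"], "gate": 1},
    },
    # An edge (u, v) is traversed once the subject completes the required
    # number of challenges associated with node u.
    "edges": [("social_interaction", "public_performance")],
}

def next_categories(graph: dict, node: str) -> list[str]:
    """Categories reachable from `node` once its gate criterion is met."""
    return [v for (u, v) in graph["edges"] if u == node]

print(next_categories(progression_graph, "social_interaction"))
# ['public_performance']
```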
In some embodiments, determining the second category includes: evaluating whether the category immediately following the first category in the initial instance of the exposure progression is appropriate for the subject to proceed to next. In some embodiments, determining the second category further comprises: in the case where the immediately following category in the initial instance of the exposure progression is suitable for the subject to proceed to next, presenting the immediately following category as the second category for the subject to perform.
In some embodiments, determining the second category further comprises: in the case where the immediately following category in the initial instance of the exposure progression is not suitable for the subject to proceed to next, recommending a category other than the immediately following category as the second category for the subject to proceed to next.
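One way to realize the selection logic of the two preceding paragraphs is sketched below; `is_suitable` stands in for whatever biometric and gate evaluation an implementation would apply, and all names are hypothetical.

```python
from typing import Callable, Optional

def determine_second_category(progression: list[str], first: str,
                              is_suitable: Callable[[str], bool]) -> Optional[str]:
    """Prefer the immediately following category in the initial instance of
    the exposure progression; otherwise recommend another suitable one."""
    idx = progression.index(first)
    following = progression[idx + 1] if idx + 1 < len(progression) else None
    if following is not None and is_suitable(following):
        return following
    for candidate in progression:           # fall back to any other category
        if candidate not in (first, following) and is_suitable(candidate):
            return candidate
    return None
```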
In some embodiments, the first biometric sensor is configured to capture biometric data elements associated with a physiological or psychological state of the subject at a predetermined sampling rate.
In some embodiments, the predetermined sampling rate is between 40 milliseconds (ms) and 160 ms. In some embodiments, the predetermined sampling rate is adjustable or fixed.
In some embodiments, the first biometric sensor is a heart rate sensor, heart rate variability sensor, blood pressure sensor, galvanic skin activity sensor, galvanic skin response sensor, electroencephalogram sensor, eye tracking sensor, recorder, microphone, or thermometer.
In some embodiments, the first subset of biometric data elements is captured by the first biometric sensor in response to a particular trigger (such as a particular trigger event configured to initiate capture of the first subset of biometric data elements, etc.).
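A sampling loop consistent with these embodiments could look like the sketch below, polling a sensor at a fixed interval while a trigger condition holds; the function names and the 100 ms interval are assumptions within the 40 ms to 160 ms range stated above.

```python
import time
from typing import Callable

def capture_series(read_sensor: Callable[[], float],
                   triggered: Callable[[], bool],
                   interval_s: float = 0.1,   # 100 ms, within 40-160 ms
                   max_samples: int = 50) -> list[float]:
    """Poll the sensor every `interval_s` seconds while the trigger holds."""
    samples: list[float] = []
    while triggered() and len(samples) < max_samples:
        samples.append(read_sensor())
        time.sleep(interval_s)
    return samples
```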
In some embodiments, the first biometric sensor is a heart rate sensor. In some such embodiments, the first subset of biometric data elements is used to determine a heart beat rate of the subject.
In some embodiments, the first biometric sensor is a heart rate variability sensor. In some such embodiments, the first subset of biometric data elements is used to determine the interval between the heart beats of the subject, thereby providing an assessment of heart rate variability.
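Heart rate variability can be summarized from the inter-beat intervals in many ways; the sketch below uses RMSSD, one common statistic, purely as an illustration (the patent does not prescribe a formula).

```python
import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between heartbeats (ms),
    a common heart rate variability summary."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([812.0, 845.0, 790.0, 860.0]), 1))  # 54.8
```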
In some embodiments, the first biometric sensor is an eye tracking sensor and the first subset of biometric data elements is used to determine fixations of the subject's gaze, smooth pursuit movements of the subject, saccades of the subject, blinks of the subject, scan path length of the subject, eye openness of the subject, pupil dilation of the subject, eye position of the subject, excessive vigilance exhibited by the subject, gaze avoidance exhibited by the subject, or any combination thereof.
In some embodiments, the first biometric sensor is an eye tracking sensor. In some such embodiments, the first subset of biometric data elements is used to determine fixation of gaze of the subject. In some such embodiments, the fixation of gaze is defined based on spatial and temporal criteria with respect to a region of interest in the first digital reality scene.
In some embodiments, the first biometric sensor is an eye tracking sensor. In some such embodiments, the first subset of biometric data elements is used to determine excessive vigilance exhibited by the subject. In some such embodiments, excessive vigilance is defined as the time to a first fixation during a particular challenge in the first digital reality scene.
In some embodiments, the first biometric sensor is an eye tracking sensor. In some such embodiments, the first subset of biometric data elements is used to determine gaze avoidance exhibited by the subject. In some such embodiments, gaze avoidance is defined as the number of gaze aversions the subject exhibits during a particular challenge in the first digital reality scene divided by the total number exhibited in the first digital reality scene.
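The eye-tracking quantities above (fixation under spatial and temporal criteria, time to first fixation as a vigilance measure, and a gaze-avoidance ratio) can be computed from raw gaze samples roughly as follows; the sample format, region-of-interest test, and 100 ms fixation criterion are assumptions.

```python
def gaze_metrics(samples, roi, min_fix_s: float = 0.1):
    """`samples`: list of (t_seconds, x, y) gaze points; `roi`: (x0, y0, x1, y1)."""
    inside = lambda x, y: roi[0] <= x <= roi[2] and roi[1] <= y <= roi[3]
    first_fix = None      # time to first fixation (vigilance proxy)
    run_start = None      # start of the current in-ROI run
    for t, x, y in samples:
        if inside(x, y):
            if run_start is None:
                run_start = t
            elif t - run_start >= min_fix_s and first_fix is None:
                first_fix = run_start   # temporal criterion satisfied
        else:
            run_start = None
    outside = sum(1 for _, x, y in samples if not inside(x, y))
    return {"time_to_first_fixation_s": first_fix,
            "gaze_avoidance_ratio": outside / len(samples)}
```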
In some embodiments, the first biometric sensor is a recorder. In some such embodiments, sentiment analysis or emotion analysis is performed on the first subset of biometric data elements to assess whether the first challenge was completed successfully.
In some embodiments, the first subset of biometric data elements is transcribed (e.g., by a computational model) to create a transcription. In some such embodiments, words are extracted from the transcription. In some such embodiments, emotion analysis is performed on the extracted words.
In some embodiments, the first subset of biometric data elements is used to determine a fundamental frequency, a speech rate, pauses, silence duration, speech intensity, speech onset time, pitch perturbation, loudness perturbation, speech interruptions, pitch jumps, speech quality, sound quality, or a combination thereof, to evaluate whether the first challenge was successfully completed by meeting the at least one biometric threshold associated with the first suggested experience for the first challenge.
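A few of the listed vocal features can be derived from a transcript with word timestamps, as in this simplified sketch; a real system would use a full speech pipeline, and these feature definitions (and the 0.5 s pause threshold) are assumptions.

```python
def vocal_features(words: list[str], starts_s: list[float],
                   pause_threshold_s: float = 0.5) -> dict:
    """`starts_s` holds the start time of each word, parallel to `words`."""
    duration_min = (starts_s[-1] - starts_s[0]) / 60.0
    gaps = [b - a for a, b in zip(starts_s, starts_s[1:])]
    return {
        "word_count": len(words),
        "speech_rate_wpm": len(words) / duration_min if duration_min else 0.0,
        "pause_count": sum(1 for g in gaps if g >= pause_threshold_s),
    }

print(vocal_features(["hello", "my", "name", "is", "sam"],
                     [0.0, 0.6, 0.9, 2.1, 2.4]))
# {'word_count': 5, 'speech_rate_wpm': 125.0, 'pause_count': 2}
```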
In some embodiments, the first subset of biometric data elements captured by the first biometric sensor is stored, thereby allowing playback of the first subset of biometric data elements after the first digital reality scene is completed.
In some embodiments, one or more specific keywords are used in the analysis of the first subset of biometric data elements to prevent cheating.
In some embodiments, the first subset of biometric data elements is preprocessed to remove background noise before being used to evaluate whether the first challenge is successfully completed by meeting at least one biometric threshold associated with a first suggested experience for the first challenge.
In some embodiments, the first subset of biometric data elements is captured by the first biometric sensor in a state where the automatic noise cancellation feature is enabled.
In some embodiments, the method further comprises electronically receiving a second plurality of data elements associated with the subject. In some such embodiments, the second plurality of data elements includes a second set of biometric data elements associated with an initial psychosis or mental condition of the subject. Further, a corresponding threshold baseline characteristic is formed from the second set of biometric data elements.
In some embodiments, the method further comprises obtaining a second set of biometric data elements from a first biometric sensor of the at least one biometric sensor upon or prior to initiating presentation of the first digital reality scene. A corresponding threshold baseline characteristic is formed from the second set of biometric data elements.
In some embodiments, obtaining the second set of biometric data elements is performed during introduction or coaching of the challenge.
In some embodiments, the introduction or coaching challenge is presented in a digital reality scene. The digital reality scene includes a happy place (e.g., a digital space configured to calm or settle the subject, such as by providing educational content and/or soothing content).
In some embodiments, the method further comprises: in the event that a gate criterion of the at least one respective gate criterion associated with the first category is not met, repeating, one or more times for other challenges associated with the first category, the presenting of the first digital reality scene, the obtaining of the first plurality of data elements, and the determining of satisfaction of one or more thresholds (e.g., the at least one biometric threshold, the corresponding first threshold baseline characteristic, the gate criterion, or a combination thereof).
In some embodiments, the method further comprises: in the event that it is determined that the first challenge was not successfully completed, by failing to satisfy the at least one biometric threshold associated with the first suggested experience for the first challenge, recommending a challenge for another suggested experience of the corresponding plurality of suggested experiences of the first category for the subject to proceed to next. The recommended challenge is presented in text, graphics, audio, or a combination thereof.
In some embodiments, the recommended challenge presents a challenge that is equally or less challenging than the first challenge of the first category. In some embodiments, the recommended challenge is the same first challenge designed for the first suggested experience of the first category. Furthermore, in some embodiments, the recommended challenge is a challenge designed for a different suggested experience of the first category. In some embodiments, the recommended challenge is a challenge designed for a suggested experience of a different category of the plurality of categories. Further, in some embodiments, the recommended challenge is a challenge outside any of the plurality of categories.
In some embodiments, the method further comprises: in response to selection of the recommended challenge, repeating, for the recommended challenge, the presenting of the first digital reality scene, the obtaining of the first plurality of data elements, and the determining of satisfaction of one or more thresholds (e.g., the at least one biometric threshold, the corresponding first threshold baseline characteristic, the gate criterion, or a combination thereof).
In some embodiments, the challenge is a unique mindfulness challenge tailored to the first category, a generic mindfulness challenge accessible from each of the plurality of categories, a unique cognitive restructuring challenge tailored to the first category, or a generic cognitive restructuring challenge accessible from each of the plurality of categories.
In some embodiments, the method further comprises: a second digital reality scene is presented on the display that presents the challenge in response to the selection of the challenge.
In some embodiments, the method further comprises: in coordination with presenting the second digital reality scene, obtaining a third plurality of data elements from the subset of sensors of the plurality of sensors. The third plurality of data elements includes a third plurality of biometric data elements associated with the subject. In addition, the third plurality of biometric data elements is captured while the subject is completing the second digital reality scene that presents the challenge. In some such embodiments, the method further comprises: determining a change or improvement by comparing the third plurality of biometric data elements to the corresponding threshold baseline characteristic or to the first set of biometric data elements obtained when the first plurality of data elements was obtained.
In some embodiments, the method further comprises: before the second category is determined, presenting a subjective assessment option, such as an evaluation, on the display. In some such embodiments, the method further comprises: performing a subjective assessment in response to selection of the subjective assessment option. The determination of the second category is based at least in part on the results of the subjective assessment.
In some embodiments, the subjective assessment is based on a Clinical Global Impression improvement scale (CGI), a Patient Global Impression improvement scale (PGI), a Liebowitz Social Anxiety Scale (LSAS), or a combination thereof. In some embodiments, the subjective assessment is based on a minimal clinically important difference (MCID) of CGI, PGI, LSAS, or a combination thereof. In some embodiments, the subjective assessment is based on a Generalized Anxiety Disorder scale (GAD), such as GAD-2, GAD-7, and the like. In some embodiments, the subjective assessment is based on a Patient Health Questionnaire (PHQ), such as PHQ-2, PHQ-9, and the like.
In some embodiments, the method further comprises: repeating, for a digital reality scene that presents a challenge designed for a suggested experience of the second category, the presenting of the first digital reality scene, the obtaining of the first plurality of data elements, and the determining of satisfaction of one or more thresholds (e.g., the at least one biometric threshold, the corresponding first threshold baseline characteristic, the gate criterion, or a combination thereof). In some such embodiments, the method further comprises: repeating the determining of the second category for the second category.
In some embodiments, in obtaining the plurality of categories, the plurality of suggested experiences associated with the first category are initially arranged in an initial first sub-progression. In some embodiments, the initial first sub-progression is set by a system administrator, a subject, a healthcare worker associated with the subject, a model, or a combination thereof.
In some embodiments, the method further comprises: evaluating whether the suggested experience immediately following the first suggested experience in the initial first sub-progression is suitable for the subject to proceed to next. In some such embodiments, the method further comprises: in the event that the immediately following suggested experience is deemed suitable for the subject to proceed to next, presenting a digital reality scene that presents a challenge designed for the immediately following suggested experience in the initial first sub-progression. Furthermore, in some such embodiments, the method further comprises: repeating, for the challenge designed for the immediately following suggested experience in the initial first sub-progression, the obtaining of the first plurality of data elements and the determining of satisfaction of one or more thresholds (e.g., the at least one biometric threshold, the corresponding first threshold baseline characteristic, the gate criterion, or a combination thereof).
In some embodiments, the method further comprises: in the event that the immediately following suggested experience is deemed unsuitable for the subject to proceed to next, recommending a suggested experience other than the immediately following suggested experience for the subject to proceed to next.
In some embodiments, determining satisfaction of one or more thresholds (e.g., the at least one biometric threshold, the corresponding first threshold baseline characteristic, the gate criterion, or a combination thereof) further includes determining whether the first set of biometric data elements meets a second biometric threshold of the at least one biometric threshold.
In some embodiments, one of the first biometric threshold and the second biometric threshold of the at least one biometric threshold is a desired minimum change in the number of utterances from an utterance baseline of the subject. The other of the first and second biometric thresholds is, during the first digital reality scene presenting the first challenge designed for the first suggested experience associated with the first category, a desired minimum change in confidence from a confidence baseline of the subject, a desired minimum change in decibel level from a decibel level baseline of the subject, a desired minimum change in pitch from a pitch baseline of the subject, or a combination thereof.
In some embodiments, the at least one biometric data element captured while obtaining the first plurality of data elements includes a fourth set of biometric data elements captured by a second biometric sensor of the at least one biometric sensor. In some embodiments, the fourth set of biometric data elements is different from the first set of biometric data elements. Further, in some such embodiments, determining satisfaction of one or more thresholds (e.g., at least one biometric threshold, a corresponding first threshold baseline characteristic, a gate criterion, a combination thereof) includes determining whether a comparison of the fourth set of biometric data elements to the third baseline characteristic satisfies a third biometric threshold of the at least one biometric threshold.
In some embodiments, one of the first and third biometric thresholds is a desired minimum change in the number of words compared to a word baseline of the subject, a desired minimum change in the number of utterances compared to an utterance baseline of the subject, a desired minimum change in confidence compared to a confidence baseline of the subject, a desired minimum change in decibel level compared to a decibel level baseline of the subject, a desired minimum change in pitch compared to a pitch baseline of the subject, or a combination thereof. Further, during the presentation of the first digital reality scenario for the first challenge designed for the first suggested experience associated with the first category, the other one of the first and third biometric thresholds is a desired minimum change in length of eye contact from the eye contact baseline of the subject.
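The "desired minimum change from baseline" thresholds in the preceding paragraphs reduce to a simple comparison, sketched here with hypothetical names; a desired decrease would flip the sign convention.

```python
def min_change_met(observed: float, baseline: float, min_change: float) -> bool:
    """True when the observation improves on the baseline by at least
    `min_change` (e.g., seconds of eye contact, number of utterances)."""
    return (observed - baseline) >= min_change

# e.g., eye contact lengthened from a 3.0 s baseline to 5.5 s against a
# required minimum improvement of 2.0 s
print(min_change_met(observed=5.5, baseline=3.0, min_change=2.0))  # True
```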
In some embodiments, each category of the plurality of categories is associated with a unique name.
In some embodiments, the plurality of sensors includes a heart rate sensor. In some such embodiments, the corresponding threshold baseline characteristic is an initial heart rate of the subject.
In some embodiments, the plurality of sensors includes a blood pressure sensor. In some such embodiments, the corresponding threshold baseline characteristic is the subject's systolic pressure or the subject's diastolic pressure.
In some embodiments, the display is a head mounted display.
In some embodiments, the at least one respective gate criterion includes a ranking gate criterion associated with a hierarchical ranking of each of the plurality of categories.
In some embodiments, the at least one respective gate criterion includes a healthcare practitioner gate criterion associated with approval from a healthcare practitioner associated with the subject of a category of the plurality of categories corresponding to the at least one respective gate criterion.
In some embodiments, the at least one respective gate criterion includes an arrangement gate criterion associated with an order of one or more of the plurality of categories.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder or a subclinically diagnosed mental disorder.
In some embodiments, the mental illness or condition includes feeling stressed, fearful, or overwhelmed in social settings.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder. Furthermore, the clinically diagnosed mental disorder is an anxiety disorder, a mood disorder, a psychotic disorder, an eating disorder, an impulse control disorder, an addictive disorder, a personality disorder, an obsessive-compulsive disorder, or a post-traumatic stress disorder.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder. Furthermore, the clinically diagnosed mental disorder is anxiety disorder. In some such embodiments, the anxiety disorder includes separation anxiety disorder, selective mutism, specific phobia, social anxiety disorder, panic disorder, agoraphobia, generalized anxiety disorder, substance-induced anxiety disorder, or anxiety disorder due to a medical condition of the subject.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder. In some such embodiments, the clinically diagnosed mental disorder is a mood disorder, wherein the mood disorder comprises depression, bipolar disorder, or cyclothymic disorder.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder. Furthermore, the clinically diagnosed mental disorder is a psychotic disorder. In some such embodiments, the psychotic disorder comprises schizophrenia, delusional disorder, or hallucination disorder.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder. Furthermore, the clinically diagnosed mental disorder is an eating disorder, wherein the eating disorder comprises anorexia nervosa, bulimia nervosa, or binge eating disorder.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder. Furthermore, the clinically diagnosed mental disorder is an impulse control disorder, wherein the impulse control disorder comprises pyromania, kleptomania, or compulsive gambling disorder.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder. Furthermore, the clinically diagnosed mental disorder is an addictive disorder, wherein the addictive disorder comprises an alcohol use disorder or a substance abuse disorder.
In some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder. Furthermore, the clinically diagnosed mental disorder is a personality disorder, wherein the personality disorder includes antisocial personality disorder, obsessive-compulsive personality disorder, or paranoid personality disorder.
In some embodiments, the corresponding digital reality scene is a virtual reality scene.
In some embodiments, the corresponding digital reality scene is an augmented reality scene.
In some embodiments, the corresponding digital reality scene is a mixed reality scene.
In some embodiments, the gate criteria associated with one of the plurality of categories specifies a condition to be met by the subject before proceeding to another of the plurality of categories.
In some embodiments, the respective gate criteria for a respective category of the plurality of categories is set by a system administrator, a subject, a model, a healthcare worker associated with the subject, or a combination thereof.
In some embodiments, the respective gate criteria for a first category of the plurality of categories is set by a system administrator or a healthcare worker associated with the subject, and the respective gate criteria for a second category of the plurality of categories is set by the subject.
In some embodiments, the respective biometric thresholds for respective suggested experiences of the plurality of suggested experiences associated with respective ones of the plurality of categories are set by a system administrator, a subject, a healthcare worker associated with the subject, a model, or a combination thereof.
In some embodiments, the respective biometric threshold of the respective suggested experience of the plurality of suggested experiences associated with the first category of the plurality of categories is set by a system administrator, a subject, a healthcare worker associated with the subject, a model, or a combination thereof.
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is an absolute parameter, a relative parameter, a normalized parameter, or any combination thereof.
In some embodiments, the respective biometric threshold of the respective suggested experience of the plurality of suggested experiences associated with the first category is an eye contact threshold, a heart rate threshold, a confidence threshold, a decibel level threshold, a pitch threshold, an utterance threshold, a word threshold, an emotion analysis criterion, or a combination thereof, for the corresponding challenge designed for the first suggested experience associated with the first category.
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is an eye contact threshold. Further, the eye contact threshold includes a minimum length of eye contact, an increment of eye contact, or both a minimum length of eye contact and an increment of eye contact.
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is a heart rate threshold. Further, the heart rate threshold includes a maximum heart rate, a decrease in heart rate, or both a maximum heart rate and a decrease in heart rate.
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is a confidence threshold. Further, the confidence threshold includes an absolute confidence threshold, a relative confidence threshold, or both an absolute confidence threshold and a relative confidence threshold.
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is a decibel level threshold. Further, the decibel level threshold comprises a lower decibel level threshold, an upper decibel level threshold, a desired increase in decibel level, a desired decrease in decibel level, or any combination thereof.
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is a pitch threshold. Further, the pitch threshold includes a lower pitch threshold, an upper pitch threshold, a desired increase in pitch, a desired decrease in pitch, or any combination thereof.
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is an utterance threshold. The utterance threshold includes a minimum number of utterances, a maximum number of utterances, a desired increase in the number of utterances, a desired decrease in the number of utterances, or a combination thereof.
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is a word threshold. Further, in some such embodiments, the word threshold includes a minimum number of words, a maximum number of words, a desired increase in the number of words, a desired decrease in the number of words, or a combination thereof.
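The threshold variants enumerated above can be captured in one small data structure; the field names and the `satisfied` rule below are illustrative assumptions, not the patent's schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BiometricThreshold:
    kind: str                                 # "eye_contact", "heart_rate", ...
    minimum: Optional[float] = None           # e.g. minimum eye-contact length (s)
    maximum: Optional[float] = None           # e.g. maximum heart rate (bpm)
    required_increase: Optional[float] = None # desired change vs. baseline

    def satisfied(self, value: float, baseline: Optional[float] = None) -> bool:
        if self.minimum is not None and value < self.minimum:
            return False
        if self.maximum is not None and value > self.maximum:
            return False
        if self.required_increase is not None and baseline is not None:
            return (value - baseline) >= self.required_increase
        return True

eye_contact = BiometricThreshold("eye_contact", minimum=2.0, required_increase=1.0)
print(eye_contact.satisfied(value=4.5, baseline=3.0))  # True: long enough, improved
```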
In some embodiments, the respective biometric threshold of a respective suggested experience of the plurality of suggested experiences associated with the first category is an emotion analysis criterion. In some such embodiments, the emotion analysis criteria includes an excited emotion threshold and an overexcited emotion threshold.
In some embodiments, the method further comprises: determining whether the emotion analysis criterion is satisfied by computing a cosine similarity measure or dot product between one or more utterances made by the subject during the corresponding challenge designed for the first suggested experience associated with the first category and each expression in a list of expressions deemed characteristic of a predetermined emotion.
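Concretely, the cosine-similarity check might proceed as in the toy sketch below, where each utterance and expression is represented as an embedding vector; the vectors shown are fabricated for illustration, and a real system would obtain them from a speech or language model.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

utterance = [0.2, 0.9, 0.1]              # embedding of one subject utterance
anxiety_expressions = [[0.1, 1.0, 0.0],  # embeddings of expressions deemed
                       [0.3, 0.8, 0.2]]  # characteristic of "anxiety"
score = max(cosine(utterance, e) for e in anxiety_expressions)
print(round(score, 3))  # 0.987 -> compare against the emotion threshold
```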
In some embodiments, the predetermined emotion is admiration, anger, anxiety, embarrassment, boredom, calm, confusion, craving, disgust, empathic pain, mania, excitement, fear, horror, interest, happiness, loneliness, nostalgia, relaxation, sadness, satisfaction, or surprise.
In some embodiments, the plurality of categories includes one or more exposure categories, one or more Cognitive Behavioral Therapy (CBT) categories, one or more mindfulness categories, or a combination thereof.
In some embodiments, the first category is a mindfulness category and the second category is a CBT category. In some embodiments, the first category is an exposure category and the second category is a CBT category.
Another aspect of the present disclosure relates to providing a non-transitory computer readable storage medium storing one or more programs. The one or more programs include instructions, which when executed by a computer system, cause the computer system to perform the methods of the present disclosure.
Yet another aspect of the present disclosure is directed to providing for the use of a computer system for improving a subject's ability to manage a mental disorder or condition exhibited by the subject. The computer system includes one or more processors, a display, and a memory coupled to the one or more processors. In some embodiments, the computer system includes an audio speaker and/or microphone. The memory includes one or more programs configured to be executed by the one or more processors for implementing the methods of the present disclosure.
Yet another aspect of the present disclosure relates to providing an apparatus for implementing an exposure progression. In some embodiments, the device is configured to enhance the subject's ability to manage the subject's mental disease or condition. Furthermore, the apparatus includes one or more processors and a memory coupled to the one or more processors. The memory includes one or more programs configured to be executed by the one or more processors. The one or more programs are configured to cause the apparatus to perform the methods of the present disclosure. In some embodiments, the apparatus includes a display and/or audio circuitry. In some embodiments, the apparatus includes an objective lens in optical communication with a two-dimensional pixelated detector.
Drawings
The document of this patent contains at least one drawing in color.
Fig. 1 illustrates a block diagram of an embodiment of a system for displaying a digital reality scene, according to an embodiment of the disclosure.
Fig. 2A and 2B collectively illustrate a digital reality host system for facilitating a digital reality experience according to an embodiment of the disclosure.
Fig. 3 illustrates a client device for displaying a digital reality scene according to an embodiment of the disclosure.
Fig. 4A, 4B, 4C, 4D, 4E, 4F, 4G, 4H, 4I, 4J, 4K, 4L, 4M, 4N, 4O, 4P, 4Q, and 4R collectively illustrate exemplary methods for implementing an exposure progression that improves a subject's ability to manage the subject's mental disease or condition, with alternative embodiments indicated by dashed boxes, according to some embodiments of the present disclosure.
Fig. 5A illustrates an exemplary digital reality scenario for social challenge training according to an embodiment of the present disclosure.
Fig. 5B illustrates an exemplary digital reality scenario for social challenge training according to an embodiment of the present disclosure.
Fig. 6A illustrates an exemplary digital reality scenario for introduction or educational training according to an embodiment of the present disclosure.
Fig. 6B illustrates an exemplary digital reality scenario for mindfulness training according to an embodiment of the present disclosure.
Fig. 6C illustrates an exemplary digital reality scenario for cognitive restructuring training according to an embodiment of the present disclosure.
Fig. 7A illustrates an exemplary digital reality scenario for presenting an initial category hierarchy in a graph that includes suggested experience progressions for the subject, according to an embodiment of the disclosure.
Fig. 7B illustrates an exemplary digital reality scenario for presenting a recommended category hierarchy and allowing a user to personalize the category hierarchy, in accordance with an embodiment of the present disclosure.
Fig. 8A, 8B, 8C, and 8D collectively illustrate a user interface for obtaining an assessment or subjective assessment of a subject at a client device according to an embodiment of the present disclosure.
Fig. 9A illustrates another exemplary digital reality scenario for cognitive restructuring training according to an embodiment of the present disclosure.
Fig. 9B illustrates yet another exemplary digital reality scenario for cognitive restructuring training according to an embodiment of the present disclosure.
Fig. 9C illustrates yet another exemplary digital reality scenario for cognitive restructuring training according to an embodiment of the present disclosure.
Fig. 10A illustrates an exemplary scenario according to some embodiments of the present disclosure.
Fig. 10B and 10C collectively illustrate another exemplary scenario according to some embodiments of the present disclosure.
Fig. 11A and 11B illustrate exemplary user interfaces of a DR scene or client application configured to present the progress of a subject within an interactive DR activity, according to some embodiments of the disclosure.
Fig. 12A and 12B collectively illustrate another exemplary scenario according to some embodiments of the present disclosure.
Fig. 13 illustrates another client device for displaying a digital reality scene according to an embodiment of the disclosure.
Fig. 14 illustrates various logic functions used in some embodiments of the present disclosure.
It should be understood that the drawings are not necessarily to scale, presenting a somewhat simplified representation of various features illustrative of the basic principles of the invention. The particular design features of the invention as disclosed herein, including, for example, particular dimensions, orientations, locations, and shapes, will be determined in part by the particular intended application and use environment.
In the drawings, reference numerals refer to the same or equivalent parts of the invention throughout the several views of the drawings.
Detailed Description
The present disclosure provides systems and methods for improving a subject's ability to manage a psychosis or mental condition by implementing a personalized exposure progression for the subject. In some embodiments, the personalized exposure progression (referred to as an exposure regimen or regimen) is configured specifically for the subject such that the exposure progression is personalized for the subject. Further, in some embodiments, the personalized exposure progression is dynamically configured such that the exposure progression flexibly changes with the subject as the subject interacts with the systems and methods of the present disclosure. Thus, in some such embodiments, the systems, methods, and apparatus of the present disclosure create a personalized exposure progression tailored to the subject by combining at least exposure in digital reality, biometric data captured during exposure in digital reality, the historical achievement of the subject, or a combination thereof. Further, in some such embodiments, the personalized exposure progression is dynamically updated based on the timing and/or nature of the exposures experienced by the subject. This allows for subject-specific and condition-specific customization of the subject's personalized exposure progression.
In some embodiments, the personalized exposure progression is dynamically created or modified based at least in part on: an initial assessment of the subject, biometric data captured while the subject is completing one or more social challenges when exposed to a particular digital reality, the level of success the subject has in one or more challenges (e.g., exposure challenges, social challenges, CBT challenges, mindfulness challenges, etc.), a subjective assessment of the subject and/or a subjective assessment by the subject after completion of one or more digital challenges, an assessment or confirmation by a health care worker (e.g., a healthcare practitioner) associated with the subject, an assessment or confirmation by a computational model, or a combination thereof. By implementing the personalized exposure progression at least in part through digital reality, the systems, methods, and devices of the present disclosure increase the likelihood of higher emotional and physiological arousal, interaction, better clinical outcomes for the subject, or a combination thereof, which increases the ability of the subject to manage their mental illness or condition.
Accordingly, the personalized exposure progression provided by the systems, methods, and devices of the present disclosure is designed to address a psychotic disorder or condition exhibited by a subject by increasing the subject's ability to manage the psychotic disorder or condition. In some such embodiments, the subject's ability to manage the psychosis or mental condition is improved through: education of the subject (e.g., with respect to one or more coping exercises through mindfulness and/or CBT challenges, with respect to the frequency of occurrence of events associated with the psychosis or mental condition, with respect to the treatment practices best suited to the subject and/or the psychosis or mental condition, with respect to a thought pattern exhibited by the subject, etc.), interaction with the subject (e.g., within a digital reality scene), treatment of the subject (e.g., by implementing the personalized exposure progression for the subject and/or having the subject complete the personalized exposure progression), or a combination thereof. As an example, in some embodiments, the systems, methods, and devices of the present disclosure are designed to address stressful and/or overwhelming sensations associated with social situations, such as excessive anxiety and fear-avoidance behaviors. As another non-limiting example, in some embodiments, the systems, methods, and devices of the present disclosure are designed to address mental or psychiatric conditions exhibited by a subject, such as worry about daily events associated with generalized anxiety disorder (such as excessive anxiety, difficulty concentrating, etc.). As yet another non-limiting example, in some embodiments, the systems, methods, and devices of the present disclosure are designed to address persistent sadness, anxiety, emptiness, or combinations thereof associated with major depressive disorder. As yet another non-limiting example, in some embodiments, the systems, methods, and devices of the present disclosure are designed to address dysphoria, lack of pleasure, apathy, irritability, anger, hypobulia, lack of motivation, sleep disorders, insufficient energy, fatigue, behavioral disturbances and/or disruption detrimental to daily functioning, agitation, restlessness, or a combination thereof. Thus, the systems, methods, and devices of the present disclosure address the psychosis or mental condition by engaging the subject in interactions with other people (such as other users or non-player characters) in various contexts, such as social gatherings, work, or school (e.g., through digital reality scenes), while posing challenges (e.g., social challenges, concentration challenges, etc.) that are performance-based and/or interaction-based. In some embodiments, the systems, methods, and devices of the present disclosure address the psychosis or mental condition by providing educational or therapeutic challenges (such as cognitive restructuring training, cognitive restructuring challenges, mindfulness training, mindfulness challenges, and substitute/additional exposure exercises) using digital reality scenes. However, the present disclosure is not limited thereto.
Thus, in various embodiments, the systems, methods, and devices of the present disclosure allow a subject to select the social challenge(s) that the subject wants to work to improve, based at least in part on: an initial assessment of the subject, biometric data captured while the subject is completing one or more social challenges, the level of success the subject has in one or more social challenges, a subjective assessment of and/or by the subject after completion of such social challenges, an assessment or confirmation by a healthcare worker associated with the subject, or a combination thereof. In some embodiments, a healthcare practitioner (e.g., a clinician) is associated with the subject and participates in implementing the personalized exposure plan. In some embodiments, the healthcare practitioner overrides or modifies the personalized exposure plan selected by the subject.
The systems, methods, and devices of the present disclosure allow a subject to revisit completed challenges for repeated exposure practices.
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be apparent to one of ordinary skill in the art that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will be further understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first digital chart may be referred to as a second digital chart, and similarly, a second digital chart may be referred to as a first digital chart without departing from the scope of the present disclosure. Both the first digital chart and the second digital chart are digital charts, but they are not the same digital chart.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The description herein includes example systems, methods, techniques, sequences of instructions, and computer program products embodying an exemplary implementation. For purposes of explanation, numerous specific details are set forth in order to provide an understanding of various implementations of the inventive subject matter. It will be apparent, however, to one skilled in the art that the subject matter may be practiced without these specific details. Generally, well-known instruction instances, protocols, structures, and techniques have not been shown in detail.
For purposes of explanation, the description herein is described with reference to particular implementations. However, the illustrative discussions are not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Many modifications and variations are possible in light of the disclosed teachings. The implementations were chosen and described in order to best explain the principles and their practical applications, to thereby enable others skilled in the art to best utilize the implementations, with various modifications as are suited to the particular use contemplated.
In the interest of clarity, not all of the routine features of the implementations described herein are shown and described. It will be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions are made to achieve the designer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one designer to another. Moreover, it will be appreciated that such a design effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
As used herein, the term "if" may be interpreted to mean "when … …", "at … …", "in response to a determination of" or "in response to detection", etc., depending on the context. Similarly, the phrase "if determined to" or "if [ a stated condition or event ] is detected" may be interpreted to mean "upon determination … …", "in response to determination to" or "upon detection of [ a stated condition or event ]" and/or "in response to detection of [ a stated condition or event ]" or the like, depending on the context.
As used herein, the term "about" or "approximately" may mean within an acceptable error range for a particular value as determined by one of ordinary skill in the art, which may depend in part on how the value is measured or determined, such as the limitations of the measurement system. For example, "about" may mean within a standard deviation of 1 or greater than 1, as practiced in the art. "about" may mean a range of + -20%, + -10%, + -5%, or + -1% of a given value. In the case of particular values described in the present disclosure and claims, the term "about" means within an acceptable error range for the particular value. The term "about" may have a meaning as commonly understood by one of ordinary skill in the art. The term "about" may refer to ± 10%. The term "about" may refer to ± 5%.
As used herein, the term "equally spaced" means that the distance from a first feature to a corresponding second feature is the same for successive pairs of features, unless explicitly stated otherwise.
As used herein, the term "dynamically" means the ability to update a program while it is currently running.
In addition, the terms "client," "patient," "subject," and "user" are used interchangeably herein unless explicitly stated otherwise.
Moreover, the terms "avatar" and "player character" are used interchangeably herein unless explicitly stated otherwise.
In addition, the terms "therapy" and "treatment" are used interchangeably herein unless specifically stated otherwise.
Furthermore, as used herein, the term "parameter" refers to any coefficient or similarly any value (e.g., weight and/or hyper-parameter) of an internal or external element in an algorithm, model, regressor, and/or classifier, which may affect (e.g., modify, customize, and/or adjust) one or more inputs, outputs, and/or functions in the algorithm, model, regressor, and/or classifier. For example, in some embodiments, parameters refer to any coefficient, weight, and/or hyper-parameter that may be used to control, modify, customize, and/or adjust the behavior, learning, and/or performance of algorithms, models, regressors, and/or classifiers. In some examples, the parameters are used to increase or decrease the impact of an input (e.g., a feature) on an algorithm, model, regressor, and/or classifier. As a non-limiting example, in some embodiments, the parameters are used to increase or decrease the impact of a node (e.g., of a neural network), where the node includes one or more activation functions. Assigning parameters to specific inputs, outputs, and/or functions is not limited to any one paradigm for a given algorithm, model, regressor, and/or classifier, but may be used in any suitable algorithm, model, regressor, and/or classifier architecture for a desired performance. In some embodiments, the parameter has a fixed value. In some embodiments, the values of the parameters are manually and/or automatically adjustable. In some embodiments, the values of the parameters are modified by verification and/or training processes (e.g., by error minimization and/or back propagation methods) for algorithms, models, regressors, and/or classifiers. In some embodiments, the algorithms, models, regressors, and/or classifiers of the present disclosure include a plurality of parameters. In some embodiments, the plurality of parameters is n parameters, wherein :n≥2;n≥5;n≥10;n≥25;n≥40;n≥50;n≥75;n≥100;n≥125;n≥150;n≥200;n≥225;n≥250;n≥350;n≥500;n≥600;n≥750;n≥1000;n≥2000;n≥4000;n≥5000;n≥7500;n≥10000;n≥20000;n≥40000;n≥75000;n≥100000;n≥200000;n≥500000;n≥1×106;n≥5×106; or n+.1×107. In some embodiments, n is between 10000 and 1×107, between 100000 and 5×106, or between 500000 and 1×106.
Further, when a reference numeral is given an "i-th" designation, the reference numeral refers to a generic component, set, or embodiment. For example, a digital reality scene referred to as "digital reality scene i" refers to an i-th digital reality scene of a plurality of digital reality scenes (e.g., digital reality scene 40-i of a plurality of digital reality scenes 40). In this disclosure, unless explicitly stated otherwise, the description of the apparatus and systems will include the implementation of one or more computers.
Fig. 1 depicts a block diagram of a distributed client-server system (e.g., distributed client-server system 100) according to some embodiments of the present disclosure. The system 100 facilitates preparation of a regimen for improving a subject's ability to manage a psychotic disorder or condition, such as a clinically diagnosed psychotic disorder, a subclinically diagnosed medical disorder, an undiagnosed condition (a condition that has not yet been diagnosed for the subject by a medical practitioner), and the like. In some embodiments, the system 100 facilitates preparation of a regimen for improving the overall health of the subject, such as maintaining and/or encouraging the overall health state or healthy activities of the subject, and the like. In some embodiments, the regimen improves the overall health of the subject by correlating the effects of the subject's healthy lifestyle with helping to reduce the risk or impact of certain mental and/or psychotic conditions. However, the present disclosure is not limited thereto. The system 100 generally includes a digital reality system (e.g., the digital reality system 200), and one or more client devices 300 (e.g., a first client device 300-1, a second client device 300-2, …, an R-th client device 300-R, etc.) in communication with the digital reality system through a communication network (e.g., the communication network 106). In some embodiments, each client device is associated with at least one subject (e.g., a user).
The system 100 also includes a plurality of sensors, such as sensor 110-1, sensor 110-2, …, sensor 110-S of fig. 1, and the like. The plurality of sensors includes at least one biometric sensor configured to capture biometric data of the subject (e.g., sensor 110-1 and/or sensor 110-2 is or includes a biometric sensor). In some embodiments, one or more of the plurality of sensors are incorporated into, or are components of, the client device 300. In some embodiments, one or more sensors communicate with one or more client devices (e.g., via an ANT+, Bluetooth, or network interface of the client device). For example, in some embodiments, data captured by one or more sensors is sent to and/or aggregated in one or more client devices. In some embodiments, the one or more sensors communicate with a remote system (e.g., connect to the digital reality system 200 via the communication network 106) such that data captured by the one or more sensors may be transmitted to the remote system, which collects, stores, and/or processes the captured biometric data.
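As a minimal sketch only (the endpoint, field names, and payload format are assumptions, not part of the disclosed system), the following Python code illustrates how biometric samples captured by a sensor might be aggregated at a client device and forwarded to a remote collection system:

```python
from dataclasses import dataclass, asdict
import json
import time
import urllib.request

@dataclass
class BiometricSample:
    sensor_id: str    # e.g., "110-1" (hypothetical identifier)
    kind: str         # e.g., "heart_rate"
    value: float
    timestamp: float  # seconds since the epoch

def forward_samples(samples, endpoint):
    """Aggregate sensor samples on the client and transmit them to a
    remote collection endpoint over the communication network."""
    payload = json.dumps([asdict(s) for s in samples]).encode("utf-8")
    request = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status

samples = [BiometricSample("110-1", "heart_rate", 72.0, time.time())]
# forward_samples(samples, "https://example.invalid/biometrics")  # hypothetical endpoint
```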
In some embodiments, the system 100 facilitates providing a regimen for a population of subjects, wherein at least one subject exhibits a psychotic disorder or condition. In some embodiments, the regimen is prepared at the digital reality system and then provided to the subject through a Graphical User Interface (GUI) displayed on the respective client device 300. In some embodiments, a healthcare practitioner (e.g., clinician) associated with the subject prepares the regimen at a client device (e.g., client device 300-1) and the subject undertakes the regimen at another client device (e.g., client device 300-2). In some embodiments, a computational model prepares the regimen at the digital reality system, and the subject undertakes the regimen at a client device (e.g., client device 300-1). However, the present disclosure is not limited thereto.
Examples of communication network 106 include, but are not limited to, the World Wide Web (WWW), intranets, and/or wireless networks such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), among other means of wireless communication. The wireless communication optionally uses any of a number of communication standards, protocols, and techniques, including: Global System for Mobile Communications (GSM); Enhanced Data GSM Environment (EDGE); High Speed Downlink Packet Access (HSDPA); High Speed Uplink Packet Access (HSUPA); Evolution-Data Only (EV-DO); HSPA; HSPA+; Dual-Cell HSPA (DC-HSDPA); Long Term Evolution (LTE); Near Field Communication (NFC); Wideband Code Division Multiple Access (W-CDMA); Code Division Multiple Access (CDMA); Time Division Multiple Access (TDMA); Bluetooth; Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n); Voice over Internet Protocol (VoIP); Wi-MAX; protocols for email (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)); protocols for instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), and Instant Messaging and Presence Service (IMPS)); and/or Short Message Service (SMS); or any other suitable communication protocol, including communication protocols not yet developed as of the filing date hereof.
In some embodiments, communication network 106 may optionally include the Internet, one or more Local Area Networks (LANs), one or more Wide Area Networks (WANs), other types of networks, or a combination of these networks.
It should be noted that the exemplary topology shown in fig. 1 is merely used to illustrate features of embodiments of the present disclosure in a manner that will be readily understood by those skilled in the art. Other topologies of the system 100 are possible. For example, in some embodiments, any of the illustrated devices and systems may actually constitute several computer systems linked together in a network, or may be virtual machines and/or containers in a cloud computing environment. Furthermore, the illustrated apparatus and system may communicate information wirelessly between each other rather than relying on the physical communication network 106.
Fig. 2A and 2B depict an exemplary digital reality system 200 for preparing a regimen for improving a subject's ability to manage a mental disorder or condition exhibited by the subject. In various embodiments, digital reality system 200 includes one or more processing units (CPUs) 202, a network or other communication interface 204, and a memory 212.
Memory 212 includes high-speed random access memory (such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, etc.), and optionally also non-volatile memory (such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices or other non-volatile solid state memory devices, etc.). Memory 212 may optionally include one or more storage devices located remotely from CPU(s) 202. The memory 212 or alternatively the non-volatile memory device(s) within the memory 212 include non-transitory computer-readable storage media. Access to memory 212 by other components of digital reality system 200, such as CPU(s) 202 and the like, is optionally controlled by a controller. In some embodiments, memory 212 may include mass storage located remotely with respect to CPU(s) 202. In other words, some of the data stored in the memory 212 may actually be hosted on devices that are external to the digital reality system 200, but that may be electronically accessed by the digital reality system 200 through the internet, an intranet, or other form of network 106 or electronic cable using the communication interface 204.
In some embodiments, the memory 212 of the digital reality system 200 for preparing a regimen for improving the subject's ability to manage a psychosis or mental condition exhibited by the subject stores the following (a simplified illustrative sketch of these stores follows this list):
operating system 8 (e.g., ANDROID, iOS, DARWIN, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks, etc.), which includes processes for handling various basic system services;
An electronic address 10 associated with the digital reality system 200 (such as within a distributed computer system, etc.) for identifying the digital reality system 200;
an optional assessment module 12 for obtaining an assessment of the subject, from which an identification of a plurality of suggested experiences is obtained;
a user profile store 14 for retaining information associated with a community of users (e.g., users of client devices 300 of fig. 1), including a user profile 16 for each user in the community of users, the user profile 16 including a corresponding wellness store (well-being store) 18 and a scheme store 20 associated with the subject of the user profile 16;
Experience store 22, which includes multiple experiences (e.g., first experience 24-1, second experience 24-2, …, experience 24-I of fig. 2B), each experience 24 including a corresponding challenge 26;
a gate store 30 including a plurality of gate criteria (e.g., first gate criteria 32-1, second gate criteria 32-2, …, gate criteria G 32-G of fig. 2B), each gate criterion 32 representing, for example, a condition of user interaction;
An application server module 34 facilitating the provision of a plurality of digital reality scenes 40 to a community of client devices 300, the application server module 34 comprising a login module 36 for accessing the digital reality scenes 40, and a digital reality session engine 38 facilitating the provision of a plurality of digital reality scenes (e.g., the first digital reality scene 40-1, the second digital reality scene 40-2, …, the digital reality scene H 40-H of fig. 2B) for a community of users (e.g., client devices 300); and
An application model library 50 that retains one or more models for providing an evaluation of the characteristics of the respective inputs, such as providing a first evaluation of whether the completion of the corresponding challenge 26 meets one or more corresponding gate criteria 32, etc.
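Purely to make the relationships among these stores concrete, the following Python sketch (hypothetical names and fields; a simplification, not the disclosed implementation) mirrors the structures listed above:

```python
from dataclasses import dataclass, field

@dataclass
class Experience:          # cf. experience 24
    experience_id: int
    challenge: str         # cf. challenge 26 posed by the experience

@dataclass
class GateCriterion:       # cf. gate criterion 32
    criterion_id: int
    description: str       # a condition of user interaction

@dataclass
class UserProfile:         # cf. user profile 16
    user_id: str
    wellness_store: dict = field(default_factory=dict)  # cf. wellness store 18
    scheme_store: list = field(default_factory=list)    # cf. scheme store 20

profile = UserProfile(user_id="subject-001")
profile.scheme_store.append(Experience(1, "give a presentation"))
```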
In some embodiments, the electronic address 10 is associated with the digital reality system 200. The electronic address 10 is used to uniquely identify at least the digital reality system 200 from other devices and components of the distributed system 100 (e.g., to uniquely identify the digital reality system 200 from the client device 300-1, the client device 300-2, …, or the R-th client device 300-R of fig. 1). For example, in some embodiments, the electronic address 10 is used to receive a request from a client device to participate in a respective digital reality scenario via the login module 36 of the application server module 34. As another non-limiting example, in some embodiments, the electronic address 10 is utilized to receive a plurality of data elements including a set of biometric data elements obtained when a digital reality scene is presented to a user at a remote client device 300. However, the present disclosure is not limited thereto.
In some embodiments, the assessment module 12 facilitates obtaining an assessment from a subject (such as a user of a respective client device) or a healthcare practitioner associated with the user, or the like. In some embodiments, the assessment module 12 includes one or more assessments that are communicated to respective client devices (e.g., via the communication network 106 of fig. 1). For example, in some embodiments, the assessment module 12 stores standardized assessments provided to each subject. The standardized evaluation provides a unified evaluation to each subject. In some embodiments, by utilizing standardized evaluations, evaluations obtained from multiple users may be normalized to optimize the identification of multiple suggested experiences (e.g., of experience 24 from experience store 22 of FIG. 2B) between different users. However, the present disclosure is not limited thereto.
In some embodiments, the evaluation includes a plurality of prompts answered by the subject. In some embodiments, the identification of multiple suggestion experiences is obtained for the subject through answers to multiple prompts provided by the subject. For example, the assessment of social anxiety psychosis or medical condition includes presenting a question to a user of a client device (e.g., client device 300-1) and providing a plurality of predetermined answers (e.g., none, mild, moderate, or severe). In some embodiments, user selection of the first answer from among the predetermined answers forms a basis for identifying the plurality of suggested experiences 24.
In some embodiments, the assessment module 12 includes one or more authorization criteria associated with approving the assessment obtained from the subject. For example, in some embodiments, a first assessment is provided to the first client device 300-1 associated with a first subject, wherein the first subject exhibits a mental disorder or condition. In some such embodiments, the assessment is obtained from the first subject on condition that a first authorization criterion is satisfied. In some embodiments, the first authorization criterion is associated with obtaining, for the first subject, an authorization of the assessment from a healthcare practitioner associated with the subject. As an example, in some embodiments, the first authorization criterion requires the healthcare practitioner and/or a computational model to verify certain aspects of the assessment, such as the authenticity of the assessment, the accuracy of the assessment, the consistency of the assessment, the competency of the assessment, the rationality of the assessment, a pass/fail of the assessment, a global rating scale of the assessment, or a combination thereof, or the like. In some embodiments, by adding a level of authorization (such as human authorization and/or computational-model authorization, etc.), the digital reality system 200 ensures that subjects exhibiting a psychosis or mental condition can improve their ability to manage the psychosis or mental condition when utilizing the systems and methods of the present disclosure, such as by ensuring that the subject provides honest answers to the assessment. Thus, in some such embodiments, the assessment module 12 prevents the subject from distorting the assessment, which would otherwise result in a regimen that may not improve the subject's ability to manage their mental illness or condition.
In some embodiments, the user profile store 14 maintains a plurality of user profiles 16. Each respective user profile 16 is associated with a corresponding user of the digital reality system 200, such as a user of a client device who exhibits a psychosis or mental condition and/or a healthcare practitioner associated with the user. For example, in some embodiments, the respective user first customizes their profile (e.g., first user profile 16-1) at the client device by selecting a plurality of user login credentials, such as a password, addresses (e.g., email address, physical address, etc.), personal names (e.g., given name, user name, etc.), and so forth. In some embodiments, the respective user provides, or the client device 300 collects (e.g., using an optional GPS), one or more demographic characteristics (e.g., age of the user, weight of the user, height of the user, gender of the user, etc.) and/or one or more geographic characteristics (e.g., a region associated with the user, a physical address associated with the user, etc.). However, the present disclosure is not limited thereto. In some embodiments, the user profile uniquely identifies the respective user in the digital reality scene 40. In this way, each user profile 16 allows the digital reality system 200 to retain login information, privacy information (e.g., which psychosis or mental condition the corresponding subject associated with the respective user profile 16 exhibits), other preferences, and/or biographical data. In some embodiments, the login name associated with the respective user is the same as the user name displayed for the user. In other embodiments, the login name associated with the respective user is different from the user name displayed for the user (e.g., the user name displayed within the digital reality scene 40 differs from the associated user login). In some embodiments, the user profile 16 includes some or all of the corresponding medical records of the subject associated with the user profile 16. In some embodiments, the digital reality system 200 stores a plurality of avatar information including a plurality of characteristics of each user's avatar, and/or a contact list of contacts within the digital reality scene 40. Accordingly, the systems, methods, and devices of the present disclosure allow for personalizing a digital reality scene based on information associated with the user using the user profile 16. As an example, in some embodiments, the subject provides the subject's age, and the appearance and/or intensity level (e.g., difficulty level) of a non-player character associated with the digital reality scene is modified according to the age of the subject.
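For illustration only, a personalization rule of the kind just described might look like the following Python sketch (the age bands and intensity labels are hypothetical assumptions, not values taken from the disclosure):

```python
def npc_intensity_for_age(age: int) -> str:
    """Hypothetical mapping from a subject's age (taken from the
    user profile) to an NPC intensity (difficulty) level."""
    if age < 18 or age >= 65:
        return "low"
    return "medium"
```

In a real deployment such a rule would presumably be set or reviewed by the healthcare practitioner rather than hard-coded.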
Additionally, in some embodiments, each user profile 16 includes a wellness store (e.g., the first user profile 16-1 includes a first wellness store 18-1, the second user profile 16-2 includes a second wellness store 18-2, …, the user profile A 16-A includes a wellness store B 18-B, etc.). In some embodiments, the wellness store 18 retains a plurality of health information associated with the subject, such as an indication of a clinical diagnosis of a psychotic disorder or condition, a plurality of insurance information associated with an insurance provider of the corresponding subject, electronic medical records, and the like. In some embodiments, the wellness store 18 includes the status of treatments administered to the subject, such as the results of previous treatments for the psychosis or mental condition, the results of previous regimens 20 provided to the subject, and the like.
In some embodiments, the wellness store 18 includes a plurality of biometric data elements associated with the respective user. For example, in some embodiments, when a digital reality scene is presented on a client device, a set of biometric data elements is obtained, and a plurality of biometric data elements (e.g., a first set of biometric data elements) from the set of biometric data elements is retained by the wellness store 18. As a non-limiting example, in some embodiments, the plurality of biometric data elements retained by the wellness store 18 includes a heart rate of the subject (e.g., a baseline heart rate, one or more heart rate zones of the subject, etc.). In some embodiments, the plurality of biometric data elements retained by the wellness store 18 includes a blood pressure of the subject (e.g., a baseline systolic pressure, a threshold diastolic pressure, etc.). Further, in some embodiments, the plurality of biometric data elements includes a plurality of spatiotemporal data elements describing spatial and temporal aspects of the user when interacting with the digital reality scene. Non-limiting examples of the plurality of spatiotemporal data elements include an area of a portion of the user's eye, a change in the position of the subject's eye while resolving the corresponding challenge 26, a count of occurrences of the user's eye at a predetermined reference position, and the like. In some embodiments, the plurality of biometric data elements retained by the wellness store 18 includes one or more vocal biometric data elements, such as vocal features associated with the user. As another non-limiting example, in some embodiments, the plurality of biometric data elements retained by the wellness store 18 includes temporal vocal features (e.g., root mean square (RMS) energy of the vocal features), spectral vocal features (e.g., the centroid of a spectrogram of the vocal features, the roll-off of the spectrogram, etc.), cepstral vocal features (e.g., mel-frequency cepstral coefficients (MFCCs)), entropy of the vocal features (e.g., spectral entropy, probability density function entropy, etc.), or a combination thereof.
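As a sketch of how two of the vocal features named above could be computed (a simplified, framewise illustration assuming a mono signal; not the disclosed implementation), consider:

```python
import numpy as np

def rms_energy(frame: np.ndarray) -> float:
    """Temporal feature: root-mean-square (RMS) energy of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def spectral_centroid(frame: np.ndarray, sample_rate: float) -> float:
    """Spectral feature: magnitude-weighted mean frequency of the frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

frame = np.sin(2 * np.pi * 440 * np.arange(2048) / 16000.0)  # 440 Hz test tone
print(rms_energy(frame), spectral_centroid(frame, 16000.0))
```

MFCCs and the entropy features would typically be computed with a dedicated audio library rather than by hand.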
In some embodiments, the wellness store 18 includes one or more annotations. In some embodiments, each annotation is associated with the corresponding subject's participation in the digital reality scene 40 and/or one or more assessments obtained from the subject. For example, in some embodiments, the one or more assessments obtained from the subject and stored by the wellness store 18 include a first assessment for obtaining an identification of a plurality of suggested experiences and/or a second assessment based on a suggested experience 24 undertaken by the user. In some embodiments, the one or more annotations include a first annotation provided by a healthcare practitioner associated with the subject while the subject undertakes a suggested experience. In some embodiments, the one or more annotations include a second annotation provided by the subject. In some embodiments, the one or more annotations include a third annotation provided by a computational model associated with the digital reality system 200.
In some embodiments, each user profile includes a scheme store (e.g., the first user profile 16-1 includes a first scheme store 20-1, the second user profile 16-2 includes a second scheme store 20-2, …, the user profile A 16-A includes a scheme store C 20-C, etc.) that retains information associated with the corresponding subject. As an example, in some embodiments, the information retained by the scheme store of a user profile includes multiple sessions of the corresponding user interacting with the digital reality system. In some embodiments, the plurality of sessions includes an exposure session in which the subject interacts with an exposure challenge, a cognitive behavioral therapy session in which the subject interacts with a cognitive behavioral therapy challenge, a mindfulness-based cognitive therapy session in which the subject interacts with a mindfulness challenge, or a combination thereof. Thus, the user profile allows the systems, methods, and apparatus of the present disclosure to track various parameters associated with improving the ability of subjects to manage their mental illness or condition. In some embodiments, the various parameters retained by the scheme store 20 associated with improving the subject's ability to manage their mental illness or condition include, but are not limited to, the status of the respective exposure progression associated with the subject, the respective interactive digital chart representing the respective exposure progression associated with the subject, the availability of a respective exposure category or experience within the respective exposure progression associated with the subject, the availability of other exposure categories or experiences for placement into the respective exposure progression, a determination of whether one or more gate criteria 32 are satisfied, the position of an avatar associated with the subject in the digital reality scene 40, or any combination thereof.
In some embodiments, by retaining the wellness store 18 and the scheme store 20 with each user profile 16, the digital reality system 200 allows each subject associated with a user profile to interact with the digital reality system 200 at a time and place desired by the subject without losing their progress in improving their ability to manage their mental illness or condition (e.g., progress through a regimen of the present disclosure).
In some embodiments, the experience store 22 includes a plurality of experiences 24 (e.g., first experience 24-1, second experience 24-2, …, experience 24-D of fig. 2B). Each experience 24 includes a digital reality task in the form of a challenge (e.g., first challenge 26-1, second challenge 26-2, …, challenge E 26-E of fig. 2B) designed to enhance the ability of the subject to manage their mental illness or condition. For example, in some embodiments, each experience 24 poses a challenge 26 that challenges the subject to perform a digital reality task within the digital reality scene 40.
In some embodiments, experiences 24 are grouped into categories, where each such category includes one or more experiences that challenge the subject to undertake the same or similar tasks. For example, in some embodiments, a first category includes one or more experiences 24 that relate to general performance challenges for the subject, e.g., challenging the subject to give a presentation to an audience in the digital reality scene 40 or to tell a story to a group in the digital reality scene 40.
In some embodiments, the second category includes one or more experiences involving exposure challenges. For example, in some embodiments, the second category includes one or more experiences 24 designed for the subject to practice confiding, e.g., challenging the subject to confide in a person in the digital reality scene 40 or to tell a person in the digital reality scene 40 about a challenge assigned to the subject. In some embodiments, the second category includes one or more experiences 24 that involve interactions with individuals, e.g., challenging the subject to make eye contact with someone in the digital reality scene 40. However, the present disclosure is not limited thereto.
Further, in some embodiments, the third category includes one or more experiences involving CBT challenges. For example, in some embodiments, the third category includes one or more experiences 24 designed to allow the user to self-identify evidence for or against a harmful or negative thought voiced by the user, evaluate whether the self-identified evidence is sufficient to restructure the thought (such as by modulating the subject's expectation of harm and/or perception of control, etc.), restructure the thought to improve the subject's mental disease or condition, or a combination thereof. In some embodiments, the third category includes one or more experiences 24 designed to identify negative thoughts or statements provided by the subject and/or to replace maladaptive cognitive patterns associated with the subject's formation of negative thoughts or statements with new cognitive patterns associated with the formation of positive or adaptive thoughts or statements.
Further, in some embodiments, the fourth category includes one or more experiences involving mindfulness challenges.
In some embodiments, an experience challenges the subject to undertake a single task (e.g., a first challenge of giving a presentation, a second challenge of confiding, a third challenge of collecting evidence, a fourth challenge of cognitive restructuring, a fifth challenge of cognitive defusion, a sixth challenge of setting a goal, or a seventh challenge of completing a mindfulness exercise, etc.). Thus, in some such embodiments, such an experience is associated with a single category. However, the present disclosure is not limited thereto. In some embodiments, an experience challenges the subject to undertake multiple tasks, such as, for example, both giving a presentation and confiding. Thus, in some embodiments, such an experience is associated with multiple categories (e.g., two categories, three categories, four categories, five categories, etc.).
In some embodiments, each respective challenge 26 is associated with a particular setting, such as a particular digital reality scene 40. For example, consider that the first experience 24-1 is a first challenge 26-1 (e.g., an exposure challenge) that assigns the subject the task of confiding in someone in the first digital reality scene 40-1, which depicts a crowded public setting, and the second experience 24-2 is a second challenge 26-2 (e.g., another exposure challenge) that assigns the subject the task of confiding in someone in the second digital reality scene 40-2, which depicts a quiet private setting. Thus, both the first experience 24-1 and the second experience 24-2 are associated with a confiding exposure challenge, but pursue the goal of improving the ability of subjects to manage their mental illness or condition at different granularities. As such, in some embodiments, the corresponding suggested experience 24 provides a broad classification of content in a digital reality scenario designed to improve the ability of subjects to manage their mental illness or condition, and the challenge provides a granular implementation of the corresponding experience.
Furthermore, in some embodiments, each experience 24 of experience store 22 is provided by digital reality system 200 without being associated with a respective digital reality scene 40. This allows digital reality system 200 to design and configure a corresponding digital reality scene based on experience 24.
Furthermore, in some embodiments, the criteria store 30 facilitates retaining a plurality of criteria. In some embodiments, a criterion of the plurality of criteria is used to determine whether a challenge associated with an experience was successfully completed by the subject, to identify a subsequent challenge for completion by the subject, to determine whether a category was successfully completed by the subject, to identify a subsequent category for completion by the subject, or any combination thereof. For example, in some embodiments, the criteria store 30 includes a plurality of gate criteria (e.g., gate criteria 32-1, gate criteria 32-2, …). In some embodiments, a gate criterion sets conditions for determining whether a category was completed successfully, and/or for identifying one or more subsequent categories for completion by the subject. In some embodiments, a gate criterion sets a precondition for undertaking a category, or a condition that must be satisfied in order for the category to be considered complete. In some embodiments, the criteria store 30 also includes a plurality of biometric thresholds (e.g., biometric threshold 33-1, …). In some embodiments, a gate criterion includes a biometric threshold that sets conditions for determining whether a challenge associated with an experience was completed successfully, and/or for identifying a subsequent challenge for the subject to complete. In some embodiments, a biometric threshold sets a precondition for evaluating the gate criteria of a challenge associated with an experience, or a condition that must be satisfied in order for the challenge associated with the experience to be considered complete.
Although fig. 2B illustrates gate criteria and biometric thresholds separately, it should be noted that gate criteria and biometric thresholds may be arranged in the criteria store in other ways. For example, in some embodiments, a category is associated with a gate criterion that includes one or more biometric thresholds for determining whether the challenge(s) of the one or more experiences associated with the category were completed successfully. In such embodiments, the criteria store may or may not include a separate biometric threshold. Further, although fig. 2B illustrates only gate criteria and biometric thresholds, it should be noted that the criteria store may include other criteria. For example, in some embodiments, the criteria store includes at least one criterion for emergency termination of a social challenge. Furthermore, a criterion (a gate criterion, a biometric threshold, or any other type) may be a standard criterion applicable to multiple subjects or a personalized criterion specifically designed for a single subject. In an embodiment, the criteria store includes at least one standard criterion. In another embodiment, the criteria store includes at least one personalized criterion. In yet another embodiment, the criteria store includes both standard criteria and personalized criteria.
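To make the role of a biometric threshold concrete, the following Python sketch (a hypothetical criterion; the 15% margin and the use of mean heart rate are assumptions, not values from the disclosure) evaluates whether an exposure challenge counts as successfully completed:

```python
def challenge_completed(mean_heart_rate: float,
                        baseline_heart_rate: float,
                        threshold_fraction: float = 0.15) -> bool:
    """Hypothetical gate criterion with a biometric threshold: the
    challenge is considered complete when the subject's mean heart rate
    during the challenge stays within threshold_fraction of baseline."""
    margin = threshold_fraction * baseline_heart_rate
    return abs(mean_heart_rate - baseline_heart_rate) <= margin

print(challenge_completed(mean_heart_rate=80.0, baseline_heart_rate=72.0))  # True
```

A personalized criterion of this kind could use a subject-specific baseline and margin, while a standard criterion would apply the same values to multiple subjects.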
Additionally, in some embodiments, the digital reality system 200 includes an application server module 34 that facilitates providing users of the client devices 300 with access to the digital reality scenes 40. In some embodiments, the application server module 34 sends data elements associated with a digital reality scene 40 to each respective client device 300 when those data elements are requested by the respective client device 300 (such as when a user logs into the client application 320 at the client device 300, or in response to a determination by the digital reality system 200). For example, the login module 36 of the application server module 34 may verify information provided by the user of the client device 300 against information stored in the user profile 16 to ensure that the correct user requests access to the digital reality scene 40. Thus, a community of users employs client devices 300 to access the application server module 34 at the digital reality system 200 and interact with the digital reality scenes 40 hosted by the digital reality system 200.
In some embodiments, the application server module 34 also facilitates allowing a user of a client device 300 to configure a digital reality scene 40 in accordance with a determination that the user is a healthcare practitioner. For example, in some embodiments, a user interface of a client device allows the user to configure one or more aspects of a digital reality scenario, such as a plurality of non-player characters (NPCs) associated with one or more challenges, such as the first social challenge 26-1 and the second social challenge 26-2, and the like. Examples of NPCs include other avatars associated with the digital reality scene 40 with which the subject may interact (e.g., coffee shop baristas, guests at a party, colleagues, transportation staff, menu commentators, computer-implemented communication agents, etc.). However, the present disclosure is not limited thereto.
In some embodiments, each respective digital reality scene 40 defines a digital domain for use by a community of users. A digital reality scene broadly means any space (e.g., digital space and/or real world space) in which digital real content (e.g., avatars, digital real objects, etc.) is presented to a user, such as through a display of a client device. For example, in some embodiments, the digital reality scenario 40 includes an avatar creation client application, a video game, a social networking website or forum, a messaging client application, or any other application where a user wants to have a digital representation.
In some embodiments, the digital reality scenario 40 is configured for exposing a subject to a therapy for improving a mental disorder or condition of the subject. For example, in some embodiments, the therapy for improving the subject's psychosis or mental condition is cognitive therapy (e.g., therapy provided by completing an experience associated with a CBT challenge) and/or exposure therapy (e.g., therapy provided by completing an experience associated with an exposure challenge). As a non-limiting example, in some such embodiments, the cognitive therapy is Cognitive Behavioral Therapy (CBT) or Mindfulness-Based Cognitive Therapy (MBCT). Thus, in some embodiments, the digital reality scenario is configured for exposing the subject to cognitive restructuring training, a cognitive restructuring session, mindfulness training, a mindfulness session, exposure therapy (e.g., a digital reality scenario for presenting challenges), and/or other educational or therapeutic sessions. Additional details and information regarding exposing a subject to cognitive therapy can be found in Segal et al., 2018, "Mindfulness-based Cognitive Therapy for Depression," Guilford Publications, print; and Hayes et al., 2018, "Process-based CBT: The Science and Core Clinical Competencies of Cognitive Behavior Therapy," New Harbinger Publications, print, each of which is incorporated herein by reference in its entirety for all purposes.
In particular, the respective digital reality scene 40 includes a plurality of objects (e.g., a first object 42-1, a second object 42-2, …, an object J 42-J of the digital reality scene H 40-H of fig. 2B) that populate the respective digital reality scene 40. In some embodiments, the plurality of objects 42 includes a plurality of player character objects 42 (e.g., avatars) representing users and/or being controlled by users, a plurality of NPC objects 42 representing NPCs in the respective digital reality scene that users cannot directly control, and a plurality of scene objects 42 (e.g., objects that are not player character objects 42 or NPC objects 42, such as bodies of water in the digital reality scene 40, buildings and furniture in the digital reality scene 40, and the like). As used herein, a digital reality scene refers to any space (e.g., digital space and/or real world space) in which digital reality content (e.g., avatars, digital reality objects, etc.) is presented to a user through a display (e.g., a display of client device 300).
However, the present disclosure is not limited thereto. For example, in some embodiments, an object 42 is an object in the digital reality scene 40 that is available for consumption by a user, such as a video, text, or an in-game consumable object (e.g., a digital reality beverage object), or the like. Collectively, the plurality of objects 42 enables users of the client devices 300 to actively engage with the digital reality scene 40, such as by going online and interacting in the digital reality scene 40 with one or more other users of the respective digital reality scene 40, and so forth.
In some embodiments, each respective object 42 includes a plurality of attributes that describe not only how the respective object 42 interacts with the digital reality scene 40 (such as with other objects 42 in the digital reality scene 40, etc.), but also how the respective object 42 interacts with other users in the digital reality scene 40. In some embodiments, the attributes of an object 42 that may be modified or changed include the mass of the object 42, the volume of the object 42, the coefficient of friction of the object 42, the state of matter of the object 42, the stiffness of the body of the object 42, the location of the object 42, the health value of the object 42 (e.g., hit points of the object 42, energy points of the object 42, etc.), the joints of the object 42, and so forth. As a non-limiting example, consider a first attribute describing a response to a collision with the respective object 42 (e.g., the hardness of the object 42, the adhesiveness of the object 42, etc.).
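As a simplified illustration of such modifiable attributes (hypothetical fields and an intentionally naive collision model; not the disclosed implementation), an object 42 might be represented as:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:           # cf. object 42 (hypothetical, simplified)
    mass: float              # kilograms
    friction: float          # coefficient of friction
    hardness: float          # governs the response to a collision
    position: tuple          # (x, y, z) scene coordinates
    hit_points: int          # health value of the object

    def collision_impulse(self, relative_speed: float) -> float:
        # Harder objects rebound more strongly; a linear toy model
        # used purely for illustration.
        return self.mass * relative_speed * self.hardness

mug = SceneObject(0.3, 0.6, 0.9, (1.0, 0.0, 0.5), hit_points=10)
print(mug.collision_impulse(relative_speed=2.0))
```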
In some embodiments, the attributes associated with a respective object 42 are the same for each user in the digital reality scene 40. For example, if the respective object 42 has attributes that enable the respective object 42 to interact with a user, each user in the digital reality scene 40 may interact with the respective object 42. On the other hand, if the respective object 42 has attributes that enable the respective object 42 to interact only with a selected group of users (such as subjects whose user profiles 16 include indications of a psychosis or mental condition, etc.), only users in the selected group of users may interact with the respective object 42. For example, in some embodiments, an administrator user of the digital reality scene 40 limits interactions with particular objects 42 for all users except the administrator user or one or more particular users (such as users exhibiting a psychosis or mental condition).
In some embodiments, the digital reality system 200 includes an application model library 50 that stores one or more models (e.g., classifiers, regressors, etc.). In some embodiments, a model is implemented as an artificial intelligence engine. For example, in some embodiments, the model includes one or more gradient boosting models, one or more random forest models, one or more neural networks (NNs), one or more regression models, one or more naive Bayes models, one or more machine learning algorithms (MLAs), or a combination thereof. In some embodiments, the MLA or NN is trained from a training dataset comprising one or more features identified from a dataset (e.g., a first training dataset comprising the user profile store 14, the experience store 22, the gate store 30, logs of the application server module 34, or a combination thereof). As an example, in some embodiments, the training dataset includes data associated with the first user profile 16-1 and data associated with user tendencies when facing an experience 24 in the digital reality scene 40.
Thus, in some embodiments, the first model is a neural network classification model, the second model is a naive Bayes classification model, and so on. Further, in some embodiments, the model includes a decision tree algorithm, a neural network algorithm, a support vector machine (SVM) algorithm, and the like. Further, in some embodiments, the model described herein is a logistic regression algorithm, a neural network algorithm, a convolutional neural network algorithm, a support vector machine (SVM) algorithm, a naive Bayes algorithm, a nearest neighbor algorithm, a boosted tree algorithm, a random forest algorithm, a decision tree algorithm, a clustering algorithm, or a combination thereof.
In some embodiments, the model is utilized to normalize the values or data sets, such as by transforming the values or sets of values into a common reference frame for comparison purposes, and the like. For example, in some embodiments, when one or more pixel values corresponding to one or more pixels in a respective image are normalized to a predetermined statistic (e.g., a mean and/or standard deviation of the one or more pixel values across the one or more images), the pixel value of the respective pixel is compared to the respective statistic such that an amount by which the pixel value differs from the statistic is determined.
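A minimal sketch of the normalization just described (assuming the statistics have already been computed across one or more images) is:

```python
import numpy as np

def normalize_pixels(image: np.ndarray, mean: float, std: float) -> np.ndarray:
    """Express each pixel value as its deviation from a predetermined
    statistic (mean), in units of the standard deviation."""
    return (image - mean) / std

image = np.array([[100.0, 150.0], [200.0, 250.0]])
print(normalize_pixels(image, mean=175.0, std=50.0))
```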
In some embodiments, an untrained model (e.g., an "untrained classifier" and/or "untrained neural network") includes a machine learning model or algorithm (such as a classifier or neural network, etc.) that has not been trained on a target dataset. In some embodiments, training a model (e.g., training a neural network) refers to the process of training an untrained or partially trained model (e.g., an untrained or partially trained neural network). For example, consider the case of a plurality of training samples comprising a corresponding plurality of images (e.g., images captured when a digital reality scene is presented on a display of the client device 300) discussed below. The plurality of images, in conjunction with a corresponding measurement indication of one or more objects (e.g., scene objects 42) for each respective image (hereinafter collectively referred to as the training dataset), is applied as collective input to an untrained or partially trained model in order to train the untrained or partially trained model to identify indications of objects associated with a morphology class, thereby obtaining a trained model. Furthermore, it will be understood that the term "untrained model" does not exclude the possibility of using transfer learning techniques in such training of an untrained or partially trained model. For example, Fernandes et al., 2017, "Transfer Learning with Partial Observability Applied to Cervical Cancer Screening," Pattern Recognition and Image Analysis: 8th Iberian Conference Proceedings, 243-250 (which is incorporated herein by reference in its entirety for all purposes) provides a non-limiting example of such transfer learning. In the case of transfer learning, additional data beyond that of the primary training dataset is provided to the above-described untrained model. That is, in a non-limiting example of a transfer learning embodiment, the untrained model receives (i) the plurality of images and the measurement indications for each respective image (the "primary training dataset") and (ii) additional data. In some embodiments, the additional data is in the form of parameters (e.g., coefficients, weights, and/or hyperparameters) learned from another, auxiliary training dataset. Further, while a single auxiliary training dataset is described, it will be appreciated that the present disclosure places no limit on the number of auxiliary training datasets that may be used to supplement the primary training dataset when training the untrained model. For example, in some embodiments, two or more auxiliary training datasets, three or more auxiliary training datasets, four or more auxiliary training datasets, or five or more auxiliary training datasets are used to supplement the primary training dataset through transfer learning, wherein each such auxiliary dataset is different from the primary training dataset. In such embodiments, any manner of transfer learning may be used. For example, consider the case where there is a first auxiliary training dataset and a second auxiliary training dataset in addition to the primary training dataset.
The parameters learned from the first auxiliary training dataset (by applying a first model to the first auxiliary training dataset) may be applied to the second auxiliary training dataset using a transfer learning technique (e.g., a second model that is the same as or different from the first model), which in turn may result in a trained intermediate model whose parameters are then applied to the primary training dataset and, in conjunction with the primary training dataset itself, applied to the untrained model. Alternatively, a first set of parameters learned from the first auxiliary training dataset (by applying the first model to the first auxiliary training dataset) and a second set of parameters learned from the second auxiliary training dataset (by applying a second model, the same as or different from the first model, to the second auxiliary training dataset) may each be applied separately (e.g., by separate independent matrix multiplications) to separate instances of the primary training dataset, and the parameters applied to the separate instances of the primary training dataset, in conjunction with the primary training dataset itself (or some reduced form of the primary training dataset, such as principal components or regression coefficients learned from the primary training dataset, etc.), may then be applied to the untrained model in order to train the untrained model. In some instances, the untrained model is additionally or alternatively trained using knowledge about objects related to morphology classes derived from an auxiliary training dataset in conjunction with the object and/or class-labeled images in the primary training dataset.
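One simple form of the variants described above, sketched in Python (synthetic data and a plain least-squares "model"; an illustration of the idea, not the disclosed method), reuses parameters learned on an auxiliary dataset as an extra feature for the primary dataset:

```python
import numpy as np

def fit_linear(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares parameters learned from one training dataset."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)
X_aux, y_aux = rng.normal(size=(100, 5)), rng.normal(size=100)
X_main, y_main = rng.normal(size=(50, 5)), rng.normal(size=50)

# Parameters learned on the auxiliary training dataset ...
aux_params = fit_linear(X_aux, y_aux)

# ... are applied to the primary training dataset (a matrix product)
# and supplied alongside the primary data itself when training.
X_main_augmented = np.hstack([X_main, (X_main @ aux_params)[:, None]])
main_params = fit_linear(X_main_augmented, y_main)
print(main_params.shape)  # (6,)
```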
As used herein, the term "model" refers to a machine learning model or algorithm.
In some embodiments, the model is an unsupervised learning algorithm. One example of an unsupervised learning algorithm is cluster analysis.
In some embodiments, the model is a supervised machine learning model. Non-limiting examples of supervised learning algorithms include, but are not limited to, logistic regression, neural networks, support vector machines, naive Bayes algorithms, nearest neighbor algorithms, random forest algorithms, decision tree algorithms, boosted tree algorithms, multinomial logistic regression algorithms, linear models, linear regression, gradient boosting, mixture models, hidden Markov models, Gaussian NB algorithms, linear discriminant analysis, or any combination thereof. In some embodiments, the model is a multinomial classifier algorithm. In some embodiments, the model is a 2-level stochastic gradient descent (SGD) model. In some embodiments, the model is a deep neural network (e.g., a deep-and-wide sample-level classifier).
Neural networks. In some embodiments, the model is a neural network (e.g., a convolutional neural network and/or a residual neural network). Neural network algorithms (also known as artificial neural networks (ANNs)) include convolutional and/or residual neural network algorithms (deep learning algorithms). A neural network may be a machine learning algorithm that may be trained to map an input dataset to an output dataset, where the neural network includes interconnected groups of nodes organized into multiple layers. For example, the neural network architecture may include at least an input layer, one or more hidden layers, and an output layer. The neural network may include any total number of layers and any number of hidden layers, where the hidden layers serve as trainable feature extractors that allow the mapping of an input dataset to an output value or set of output values. As used herein, a deep learning algorithm (DNN) may be a neural network that includes multiple hidden layers (e.g., two or more hidden layers). Each layer of the neural network may include a plurality of nodes (or "neurons"). A node may receive input directly from the input data or from the output of nodes in a previous layer and perform certain operations (e.g., a summation operation). In some embodiments, a connection from an input to a node is associated with a parameter (e.g., a weight and/or weighting factor). In some embodiments, the node may sum the products of all inputs (x_i) and their associated parameters. In some embodiments, the weighted sum is offset by a bias b. In some embodiments, the output of a node or neuron may be gated using a threshold or activation function f (which may be a linear or non-linear function). The activation function may be, for example, a rectified linear unit (ReLU) activation function, a leaky ReLU activation function, or another function such as: saturating hyperbolic tangent, identity, binary step, logistic, arctangent, softsign, parametric rectified linear unit, exponential linear unit, SoftPlus, bent identity, SoftExponential, sinusoid, Gaussian, or sigmoid function, or any combination thereof.
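The node computation just described, z = Σ w_i·x_i + b followed by y = f(z), can be written in a few lines of Python (a generic textbook formulation, shown here with a ReLU activation):

```python
import numpy as np

def node_output(x: np.ndarray, w: np.ndarray, b: float) -> float:
    """One neuron: weighted sum of inputs x_i, offset by bias b,
    gated by the ReLU activation f(z) = max(0, z)."""
    z = float(np.dot(w, x) + b)
    return max(0.0, z)

print(node_output(np.array([1.0, 2.0]), np.array([0.5, -0.25]), b=0.1))  # 0.1
```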
One or more training data sets may be used to "teach" or "learn" the weighting factors, bias values, and thresholds or other computational parameters of the neural network during a training phase. For example, the parameters may be trained using input data from a training data set and a gradient descent or backpropagation method such that the output value(s) calculated by the ANN are consistent with the examples included in the training data set. The parameters may be obtained from a backpropagation neural network training process.
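By way of a non-limiting illustration, the following sketch trains the parameters (weights and bias) of a single sigmoid node by gradient descent, as described above; the synthetic data, learning rate, and iteration count are hypothetical assumptions.

```python
# Illustrative gradient-descent training of the weights and bias of a single
# sigmoid node so that its outputs become consistent with the training data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)   # toy training targets

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid activation
    grad_w = X.T @ (p - y) / len(y)    # gradient of log loss w.r.t. weights
    grad_b = np.mean(p - y)            # gradient w.r.t. bias
    w -= lr * grad_w                   # gradient-descent update
    b -= lr * grad_b
```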
Any of a variety of neural networks may be suitable for performing the methods disclosed herein. Examples may include, but are not limited to, feed forward neural networks, radial basis function networks, recurrent neural networks, residual neural networks, convolutional neural networks, residual convolutional neural networks, and the like, or any combination thereof. In some embodiments, machine learning utilizes an ANN or deep learning architecture of pre-training and/or transfer learning. In accordance with the present disclosure, a convolution and/or residual neural network may be used to analyze an image of a subject.
For example, the deep neural network model includes an input layer, a plurality of individually parameterized (e.g., weighted) convolutional layers, and an output score. The parameters (e.g., weights) of the convolutional layers as well as the input layer contribute to the plurality of parameters (e.g., weights) associated with the deep neural network model. In some embodiments, at least 100 parameters, at least 1000 parameters, at least 2000 parameters, or at least 5000 parameters are associated with the deep neural network model. Thus, deep neural network models require the use of a computer because they cannot be solved mentally. In other words, in such embodiments, given the input to the model, the model output needs to be determined using a computer rather than mentally. See, for example: Krizhevsky et al., 2012, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25, Pereira, Burges, Bottou, Weinberger, eds., pp. 1097-1105, Curran Associates, Inc.; Zeiler, 2012, "ADADELTA: an adaptive learning rate method," CoRR, vol. abs/1212.5701; and Rumelhart et al., 1988, "Neurocomputing: Foundations of research," ch. Learning Representations by Back-propagating Errors, pp. 696-699, Cambridge, MA, USA: MIT Press, each of which is incorporated by reference herein in its entirety for all purposes.
Additional example neural networks suitable for use as models are disclosed in, for example, Duda et al., 2001, Pattern Classification, Second Edition, John Wiley & Sons, Inc., New York, and Hastie et al., 2001, The Elements of Statistical Learning, Springer-Verlag, New York, each of which is incorporated by reference herein in its entirety for all purposes. Additional example neural networks suitable for use as models are also described in Draghici, 2003, Data Analysis Tools for DNA Microarrays, Chapman & Hall/CRC, and Mount, 2001, Bioinformatics: sequence and genome analysis, Cold Spring Harbor Laboratory Press, Cold Spring Harbor, New York, each of which is incorporated by reference herein in its entirety for all purposes.
Support vector machines. In some embodiments, the model is a support vector machine (SVM). SVM algorithms suitable for use as models are disclosed, for example, in: Cristianini and Shawe-Taylor, 2000, "An Introduction to Support Vector Machines," Cambridge University Press, Cambridge; Boser et al., 1992, "A training algorithm for optimal margin classifiers," in Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, ACM Press, Pittsburgh, Pa., pp. 142-152; Vapnik, 1998, Statistical Learning Theory, Wiley, New York; Mount, 2001, Bioinformatics: sequence and genome analysis, Cold Spring Harbor Laboratory Press, Cold Spring Harbor, N.Y.; Duda, Pattern Classification, Second Edition, 2001, John Wiley & Sons, Inc., pp. 259, 262-265; Hastie, 2001, The Elements of Statistical Learning, Springer, New York; and Furey et al., 2000, Bioinformatics 16, 906-914, each of which is incorporated herein by reference in its entirety for all purposes. When used for classification, an SVM separates a given set of binary-labeled data with a hyperplane that is maximally distant from the labeled data. For cases where linear separation is not possible, the SVM may work in conjunction with a "kernel" technique, which automatically implements a nonlinear mapping to a feature space. The hyperplane found by the SVM in the feature space then corresponds to a nonlinear decision boundary in the input space. In some embodiments, a plurality of parameters (e.g., weights) associated with the SVM define the hyperplane. In some embodiments, the hyperplane is defined by at least 10, at least 20, at least 50, or at least 100 parameters, and the SVM model requires a computer to calculate because it cannot be solved mentally.
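By way of a non-limiting illustration, the following sketch shows an SVM using the "kernel" technique described above to handle data that are not linearly separable; scikit-learn is assumed, and the synthetic dataset and hyperparameters are hypothetical.

```python
# Sketch of an SVM with a nonlinear (RBF) kernel: concentric circles cannot
# be separated by a hyperplane in the input space, but the kernel implicitly
# maps the data to a feature space where a separating hyperplane exists.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, noise=0.1, factor=0.4, random_state=0)
svm = SVC(kernel="rbf", C=1.0).fit(X, y)
print(svm.score(X, y))   # training accuracy of the nonlinear boundary
```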
Naive Bayes algorithms. In some embodiments, the model is a naive Bayes algorithm. Naive Bayes classifiers suitable for use as models are disclosed, for example, in Ng et al., 2002, "On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes," Advances in Neural Information Processing Systems, 14 (which is incorporated herein by reference in its entirety for all purposes). A naive Bayes classifier is any classifier in the family of "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features. In some embodiments, such classifiers are coupled with kernel density estimation. See, for example, Hastie et al., 2001, The Elements of Statistical Learning: data mining, inference, and prediction, eds. Tibshirani and Friedman, Springer, New York (which is incorporated by reference in its entirety for all purposes).
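By way of a non-limiting illustration, the following sketch shows a Gaussian naive Bayes classifier of the kind described above; scikit-learn is assumed, and the synthetic dataset is hypothetical.

```python
# Sketch of a Gaussian naive Bayes classifier, which applies Bayes' theorem
# with a strong (naive) independence assumption between the features.
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
nb = GaussianNB().fit(X, y)
print(nb.predict_proba(X[:3]))   # posterior class probabilities
```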
Nearest neighbor algorithms. In some embodiments, the model is a nearest neighbor algorithm. Nearest neighbor models may be memory-based and include no model to be fitted. For nearest neighbors, given a query point x0 (a first image), the k training points x(r), r = 1, ..., k (here, training images) nearest in distance to x0 are identified, and the point x0 is then classified using its k nearest neighbors. In some embodiments, the distance to these neighbors is a function of the values of a decision set. In some embodiments, the Euclidean distance in feature space, d(i) = ‖x(i) − x(0)‖, is used to determine the distance. In some embodiments, when the nearest neighbor algorithm is used, the value data used to calculate the linear discriminant are normalized to have mean zero and variance 1. The nearest neighbor rule can be refined to address issues of unequal class priors, differential misclassification costs, and feature selection. Many of these refinements involve some form of weighted voting over the neighbors. For more information on nearest neighbor analysis, see Duda, Pattern Classification, Second Edition, 2001, John Wiley & Sons, Inc.; and Hastie, 2001, The Elements of Statistical Learning, Springer, New York (each of which is incorporated herein by reference in its entirety for all purposes).
The k-nearest-neighbor model is a non-parametric machine learning method in which the input consists of the k nearest training examples in feature space. The output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its single nearest neighbor. See Duda et al., 2001, Pattern Classification, Second Edition, John Wiley & Sons (which is incorporated herein by reference in its entirety for all purposes). In some embodiments, the number of distance calculations required to solve the k-nearest-neighbor model is such that a computer is used to solve the model for a given input, as this cannot be done mentally.
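By way of a non-limiting illustration, the following sketch shows k-nearest-neighbor classification by majority vote over the k training points nearest in Euclidean distance, as described above; scikit-learn is assumed, and the synthetic dataset and choice of k are hypothetical.

```python
# Sketch of k-nearest-neighbor classification: each query point is assigned
# the class most common among its k nearest training points.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# k = 5 neighbors; with k = 1, a query point is simply assigned the class
# of its single nearest neighbor.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X, y)
print(knn.predict(X[:3]))
```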
Random forest, decision tree, and boosted tree algorithms. In some embodiments, the model is a decision tree. Decision trees suitable for use as models are generally described in Duda, 2001, Pattern Classification, John Wiley & Sons, Inc., New York, pp. 395-396 (which is incorporated herein by reference). Tree-based methods partition the feature space into a set of rectangles and then fit a model (e.g., a constant) in each rectangle. In some embodiments, the decision tree is a random forest regression. One specific algorithm that may be used is classification and regression trees (CART). Other specific decision tree algorithms include, but are not limited to, ID3, C4.5, MART, and random forests. CART, ID3, and C4.5 are described in Duda, 2001, Pattern Classification, John Wiley & Sons, Inc., New York, pp. 396-408 and pp. 411-412 (which is incorporated herein by reference in its entirety for all purposes). CART, MART, and C4.5 are described in Hastie et al., 2001, The Elements of Statistical Learning, Springer-Verlag, New York, Chapter 9 (which is incorporated herein by reference in its entirety for all purposes). Random forests are described in Breiman, 1999, "Random Forests--Random Features," Technical Report 567, Statistics Department, U.C. Berkeley, September 1999 (which is incorporated by reference herein in its entirety for all purposes). In some embodiments, the decision tree model includes at least 10, at least 20, at least 50, or at least 100 parameters (e.g., weights and/or decisions) and requires a computer to calculate, as it cannot be solved mentally.
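By way of a non-limiting illustration, the following sketch shows a random forest, an ensemble of decision trees fit on the rectangle-partitioned feature space described above; scikit-learn is assumed, and the synthetic dataset and hyperparameters are hypothetical.

```python
# Sketch of a random forest classifier: an ensemble of decision trees, each
# of which partitions the feature space into rectangles.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict(X[:3]))
```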
Regression. In some embodiments, the model uses a regression algorithm. The regression algorithm may be any type of regression. For example, in some embodiments, the regression algorithm is logistic regression. In some embodiments, the regression algorithm is logistic regression with lasso, L2, or elastic net regularization. In some embodiments, those extracted features whose corresponding regression coefficients fail to meet a threshold are removed from consideration. In some embodiments, a generalization of the logistic regression model that handles multiclass responses is used as the model. Logistic regression algorithms are disclosed in Agresti, An Introduction to Categorical Data Analysis, 1996, Chapter 5, pp. 103-144, John Wiley & Sons, New York (which is incorporated by reference herein in its entirety for all purposes). In some embodiments, the model utilizes a regression model disclosed in Hastie et al., 2001, The Elements of Statistical Learning, Springer-Verlag, New York. In some embodiments, the logistic regression model includes at least 10, at least 20, at least 50, at least 100, or at least 1000 parameters (e.g., weights) and requires a computer to calculate, as it cannot be solved mentally.
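By way of a non-limiting illustration, the following sketch shows logistic regression with L1 (lasso) regularization, under which features whose coefficients shrink to zero are effectively removed from consideration; scikit-learn is assumed, and the synthetic dataset and regularization strength are hypothetical.

```python
# Sketch of logistic regression with lasso (L1) regularization; coefficients
# driven to zero correspond to features removed from consideration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print(lr.coef_)   # zero-valued coefficients indicate dropped features
```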
Linear discriminant analysis algorithms. Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. In some embodiments of the present disclosure, the combination thus obtained may be used as a model (e.g., a linear classifier).
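By way of a non-limiting illustration, the following sketch uses linear discriminant analysis as a linear classifier, finding a linear combination of features that separates two classes as described above; scikit-learn is assumed, and the synthetic dataset is hypothetical.

```python
# Sketch of linear discriminant analysis used as a linear classifier.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=200, n_features=6, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y))   # accuracy of the learned linear combination
```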
Mixture models and hidden Markov models. In some embodiments, the model is a mixture model, such as that described in McLachlan et al., Bioinformatics 18(3):413-422, 2002. In some embodiments (particularly embodiments that include a time component), the model is a hidden Markov model, such as that described in Schliep et al., 2003, Bioinformatics 19(1):i255-i263.
Clustering. In some embodiments, the model is an unsupervised clustering model. In some embodiments, the model is a supervised clustering model. Clustering algorithms suitable for use as models are described, for example, at pages 211-256 of Duda and Hart, Pattern Classification and Scene Analysis, 1973, John Wiley & Sons, Inc., New York (hereinafter "Duda 1973") (which is incorporated herein by reference in its entirety for all purposes). The clustering problem may be described as one of finding natural groupings in a dataset. To identify natural groupings, two problems can be addressed. First, a way of measuring similarity (or dissimilarity) between two samples may be determined. This metric (e.g., a similarity metric) may be used to ensure that the samples in one cluster are more similar to one another than they are to samples in other clusters. Second, a mechanism for partitioning the data into clusters using the similarity metric may be determined. One way to begin a clustering investigation may be to define a distance function and compute the matrix of distances between all pairs of samples in the training dataset. If distance is a good measure of similarity, the distance between reference entities in the same cluster may be significantly smaller than the distance between reference entities in different clusters. However, clustering need not use a distance metric. For example, a nonmetric similarity function s(x, x') may be used to compare two vectors x and x'. s(x, x') may be a symmetric function whose value is large when x and x' are in some sense "similar." Once a method for measuring "similarity" or "dissimilarity" between points in the dataset has been selected, clustering may use a criterion function that measures the clustering quality of any partition of the data. Partitions of the dataset that maximize the criterion function may be used to cluster the data. Exemplary clustering techniques that may be used in the present disclosure include, but are not limited to, hierarchical clustering (agglomerative clustering using the nearest-neighbor, farthest-neighbor, average-linkage, centroid, or sum-of-squares algorithm), k-means clustering, fuzzy k-means clustering, and Jarvis-Patrick clustering. In some embodiments, the clustering includes unsupervised clustering (e.g., with no preset number of clusters and/or no predetermined cluster assignments).
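By way of a non-limiting illustration, the following sketch shows k-means clustering, one of the exemplary techniques listed above, which partitions points so as to optimize a criterion function based on distances to cluster centroids; scikit-learn is assumed, and the synthetic data and cluster count are hypothetical.

```python
# Sketch of k-means clustering: a distance function (Euclidean) and a
# criterion function (within-cluster sum of squares) partition the data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:5], km.cluster_centers_)
```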
Model ensembles and boosting. In some embodiments, an ensemble (two or more) of models is used. In some embodiments, a boosting technique such as AdaBoost is used in conjunction with many other types of learning algorithms to improve the performance of the model. In this approach, the outputs of any of the models disclosed herein, or their equivalents, are combined into a weighted sum that represents the final output of the boosted model. In some embodiments, the multiple outputs from the models are combined using any measure of central tendency known in the art, including but not limited to the mean, median, mode, weighted mean, weighted median, or weighted mode. In some embodiments, the multiple outputs are combined using a voting method. In some embodiments, the respective models in the ensemble are weighted or unweighted.
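By way of a non-limiting illustration, the following sketch shows both boosting (AdaBoost, whose final output is a weighted sum over weak learners) and combining heterogeneous model outputs by voting, per the ensemble techniques above; scikit-learn is assumed, and the dataset and estimator choices are hypothetical.

```python
# Sketch of boosting and of a voting ensemble over two different models.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# AdaBoost combines weak learners into a weighted sum that forms the
# final output of the boosted model.
boosted = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

# A voting ensemble combines the outputs of two or more models.
vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)), ("nb", GaussianNB())],
    voting="soft",
).fit(X, y)
print(boosted.score(X, y), vote.score(X, y))
```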
The term "classification" may refer to any number(s) or other character(s) associated with a particular property of a sample. For example, a "+" sign (or the word "positive") may indicate that a sample is classified as having a desired result or characteristic, while a "-" sign (or the word "negative") may indicate that a sample is classified as having an undesired result or characteristic. In another example, the term "classification" refers to a corresponding result or characteristic (e.g., high risk, medium risk, low risk). In some embodiments, the classification is binary (e.g., positive or negative) or has more levels of classification (e.g., scale from 1 to 10 or from 0 to 1). In some embodiments, the terms "cutoff" and "threshold" refer to a predetermined number used in operation. In one example, a cutoff value refers to a value above which the result is excluded. In some embodiments, the threshold is a value above or below which a particular classification is applied. Any of these terms may be used in any of these contexts.
Other models suitable for use with the systems and methods of the present disclosure will be readily apparent to those skilled in the art. In some embodiments, the systems, methods, and apparatus of the present disclosure utilize more than one model to provide an evaluation (e.g., an evaluation given one or more inputs) with increased accuracy. For example, in some embodiments, each respective model is evaluated correspondingly when provided with a respective data set. Thus, each respective model may be independently derived and then collectively verified by comparison or fusion of the models, such that a cumulative result is provided by the models. However, the present disclosure is not limited thereto.
In some embodiments, the respective models are assigned tasks for corresponding activities. As a non-limiting example, in some embodiments, the tasks performed by the respective models include, but are not limited to: diagnosing a psychotic disorder; generating a presentation of the corresponding challenge in the form of experience 24 associated with digital reality scene 40; identifying each of a plurality of categories of the assessment obtained from the subject; performing verification of an assessment obtained from the subject; performing a further verification of another verification, by a healthcare practitioner, of the assessment obtained from the subject; generating corresponding gate criteria; generating corresponding biometric thresholds; generating an exposure progression comprising a plurality of categories arranged in a sequence; determining whether a challenge completed successfully; identifying a subsequent challenge for the subject to complete; determining whether a category completed successfully; identifying a subsequent category for completion by the subject; or any combination thereof. In some embodiments, each respective model of the present disclosure utilizes 10 or more parameters, 100 or more parameters, 1000 or more parameters, 10,000 or more parameters, or 100,000 or more parameters. In some embodiments, each respective model of the present disclosure cannot be performed mentally.
The modules and applications described above each correspond to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in this disclosure. These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be optionally combined or otherwise rearranged in various embodiments of the disclosure. In some embodiments, memory 212 optionally stores a subset of the modules and data structures described above. Furthermore, in some embodiments, memory 212 stores additional modules and data structures not described above.
It should be appreciated that fig. 2A and 2B are merely examples of the digital reality system 200, and that the digital reality system 200 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of components. The various components shown in fig. 2A and 2B are implemented in hardware, software, firmware, or a combination thereof (including one or more signal processing and/or application specific integrated circuits). Further, the digital reality system 200 may be a single device including all of the functions of the digital reality system 200, or may be a combination of a plurality of devices. For example, the functionality of the digital reality system 200 may be distributed across any number of networked computers and/or resident on each of several networked computers and/or hosted at one or more virtual machines and/or containers at remote locations accessible across a communication network (e.g., the communication network 106, the network interface 205, or both). Those skilled in the art will appreciate that a number of different computer topologies are possible for the digital reality system 200, as well as the other devices and systems of the present disclosure, and that all such topologies are within the scope of the present disclosure.
Referring to fig. 3, an exemplary client device 300 (e.g., a first client device 300-1) is provided. Client device 300 includes one or more processing units (CPUs) 302, one or more network or other communication interfaces 304, memory 312 (e.g., random access memory and/or non-volatile memory) optionally accessed by one or more controllers, and one or more communication buses 314 interconnecting the aforementioned components.
In some embodiments, the client device 300 includes a mobile device, such as a mobile phone, tablet, laptop, wearable device such as a smart watch, and the like. In such embodiments, the respective digital reality scene 40 accessible by the client device 300 comprises an augmented reality scene. In some embodiments, the respective digital reality scenes accessible by the client device 300 include mixed reality scenes. However, the present disclosure is not limited thereto. For example, in some embodiments, client device 300 is a desktop computer or other similar device that accepts one or more wearable devices (e.g., wearable displays). In some embodiments, the client device 300 is a standalone device dedicated to providing the digital reality scenario 40 of the systems and methods of the present disclosure. Further, in some embodiments, each client device 300 enables a respective subject to provide information (e.g., subject preferences, subject feedback, etc.) related to the respective subject.
In addition, the client device 300 includes a user interface 306. The user interface 306 generally includes a display device 308, which display device 308 is used to present media, such as the digital reality scene 40, and receive instructions from a subject operating the client device 300. In some embodiments, the display device 308 is optionally integrated within the client device 300, such as a smart device (e.g., a smart phone) or the like (e.g., housed in the same chassis as the CPU 302 and memory 312). In some embodiments, the client device 300 includes one or more input devices 310, the one or more input devices 310 allowing a subject to interact with the client device 300. In some embodiments, input device 310 includes a keyboard, a mouse, one or more cameras (e.g., objective lenses in communication with a two-dimensional pixelated detector) configured to determine a position of an object over a period of time (e.g., tracking a hand of the object across space and/or time), and/or other input mechanisms. Alternatively or additionally, in some embodiments, the display device 308 comprises a touch-sensitive surface, e.g., where the display 308 is a touch-sensitive display or the client device 300 comprises a touchpad.
In some embodiments, the client device 300 includes an input/output (I/O) subsystem 330 for interfacing with one or more peripheral devices of the client device 300. For example, in some embodiments, audio is presented by an external device (e.g., speaker, headset, etc.) that receives audio information from client device 300 and/or a remote device (e.g., digital reality system 200) and presents audio data based on the audio information. In some embodiments, input/output (I/O) subsystem 330 also includes or interfaces with audio output devices, such as speakers or audio outputs for connection with speakers, headphones, or earphones, and the like. In some embodiments, input/output (I/O) subsystem 330 also includes voice recognition capabilities (e.g., to supplement or replace input device 310).
In some embodiments, the client device 300 also includes one or more sensors (e.g., accelerometers, magnetometers, proximity sensors, gyroscopes, etc.), camera devices (e.g., camera devices or camera modules and related components), positioning modules (e.g., global Positioning System (GPS) receivers or other navigation or geolocation system modules/devices and related components), combinations thereof, or the like.
As described above, the client device 300 includes the user interface 306. The user interface 306 generally includes a display device 308, the display device 308 optionally being integrated within the client device 300 (e.g., housed in the same chassis as the CPU and memory, such as with a smart phone or an integrated desktop computer client device 300, etc.). In some embodiments, the client device 300 includes a plurality of input devices 310, such as a keyboard, mouse, and/or other input buttons (e.g., one or more sliders, one or more joysticks, one or more radio buttons, etc.), and the like. Alternatively or additionally, in some embodiments, the display device 308 comprises a touch-sensitive surface, e.g., wherein the display 308 is a touch-sensitive display 308 or the respective client device 300 comprises a touch pad.
In some embodiments, the pose of the client device 300 is determined based on one or more characteristics, such as one or more local characteristics at the client device 300 (e.g., acceleration of the client device) and/or one or more proximity characteristics near the client device 300 associated with respective regions of interest, such as a hand of the subject using the client device 300 or a hand controller of the client device 300. For example, in some embodiments, the one or more proximity characteristics associated with the respective region of interest include an appearance of the region of interest. As an example, in some embodiments, the respective proximity characteristic is associated with a shape of the region of interest (e.g., a change of the subject's hand from an open fist to a closed fist), a color of the region of interest (e.g., an evaluation of the color of clothing worn by the subject), a reflectivity of the region of interest, or the like. In some embodiments, the one or more proximity characteristics associated with the respective region of interest are derived from information from previous challenges of the respective digital reality scene (e.g., information retained by a solution store of the corresponding user profile), such as a workflow of an exposure progression. In some embodiments, the one or more proximity characteristics associated with the respective region of interest are based on a reference database comprising a plurality of characteristics associated with a predetermined region of interest. Additional details and information regarding determining poses based on characteristics of a region of interest may be found in the following documents: Oe et al., 2005, "Estimating Camera Position and Posture by Using Feature Landmark Database," Scandinavian Conference on Image Analysis, pg. 171; Lee et al., 1998, "Fine Active Calibration of Camera Position/Orientation through Pattern Recognition," IEEE ISIE, print; Dettwiler et al., 1994, "Motion Tracking with an Active Camera," IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5), pg. 449; and Kritikos et al., 2020, "Comparison between Full Body Motion Recognition Camera Interaction and Hand Controllers Interaction used in Virtual Reality Exposure Therapy for Acrophobia," Sensors, 20(5), pg. 1244, each of which is incorporated herein by reference in its entirety for all purposes.
Further, in some embodiments, the client device 300 includes a heads-up display (HUD) device, for example, wherein the display 308 is mounted on the head of the user, such as a virtual reality headset that facilitates presentation of the virtual reality scene 40, an augmented reality headset that facilitates presentation of the augmented reality scene 40, or a mixed reality headset that facilitates presentation of the mixed reality scene 40, or the like. In such embodiments, the client device 300 includes input device(s) 310 such as a haptic feedback device or the like. Thus, the HUD client device 300 provides the functionality of a virtual reality client device 300 with synchronous haptic and audio feedback, an augmented reality client device 300 with synchronous haptic and audio feedback, a mixed reality client device 300 with synchronous haptic and audio feedback, or a combination thereof.
In some embodiments, the display 308 is a wearable display, such as a smart watch, a head mounted display, or a smart garment client device (e.g., display 1100 of fig. 13). One such non-limiting example of a wearable display 308 is a near-eye display or a head-mounted display. In other embodiments, the display 308 is a smart mobile device display (e.g., a smart phone), such as the client device 300-1 of fig. 8A. In some embodiments, the display 308 is a head mounted display (HMD) connected to a host computer system 300, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers. Fig. 13 is a perspective view of an example of a near-eye display 1100 in the form of a pair of glasses for implementing some of the examples of the display 308 disclosed herein. In some embodiments, the near-eye display 1100 is configured to operate as a virtual reality display, an augmented reality display, and/or a mixed reality display. In some embodiments, the near-eye display 1100 includes a frame 1105 and a display 1110. In some embodiments, the display 1110 is configured to present content to a user. In some embodiments, the display 1110 includes display electronics and/or display optics. For example, in some embodiments, the display 1110 includes an LCD display panel, an LED display panel, or an optical display panel (e.g., a waveguide display assembly). In some embodiments, the near-eye display 1100 also includes various sensors 1150a, 1150b, 1150c, 1150d, and 1150e on or within the frame 1105. In some embodiments, the various sensors 1150a-1150e include one or more depth sensors, motion sensors, position sensors, inertial sensors, recorder sensors (e.g., microphone sensors), or ambient light sensors. In some embodiments, the various sensors 1150a-1150e include one or more image sensors configured to generate image data representing different fields of view in different directions. In some embodiments, the various sensors 1150a-1150e are used as input devices to control or influence the display content of the near-eye display 1100 and/or to provide an interactive VR/AR/MR experience to a user of the near-eye display 1100. In some embodiments, the various sensors 1150a-1150e are also used for stereoscopic imaging.
In some embodiments, the near-eye display 1100 further includes one or more illuminators 1130 for projecting light into the physical environment. In some embodiments, the projected light is associated with different frequency bands (e.g., visible light, infrared light, ultraviolet light, etc.), and in such embodiments, is used for various purposes. For example, in some embodiments, the illuminator(s) 1130 project light in a dark environment (or in an environment where the intensity of infrared light, ultraviolet light, etc. is low) to assist the sensors 1150 a-1150 e in capturing images of different objects within the dark environment. In some embodiments, the illuminator(s) 1130 are used to project certain light patterns onto objects within the environment. In some embodiments, the illuminator(s) 1130 are used as locators.
In some embodiments, the near-eye display 1100 includes a high resolution camera 1140. In some embodiments, the camera 1140 captures images of the physical environment in its field of view. In some embodiments, the captured images are processed, for example, by a virtual reality engine (e.g., engine 322 of fig. 3) to add one or more virtual objects 42 to the captured images or to modify physical objects in the captured images. In some embodiments, the processed image is displayed to the user through the display 1110 for use in an AR or MR application provided by the present disclosure (e.g., client application 320 of fig. 3).
Additionally, in some embodiments, the client device 300 includes or is an integral part of a digital reality suite for rendering the digital reality scene 40. Additional details and information regarding digital reality suites can be found in U.S. patent application publication No. 2020/012320 A1, entitled "Virtual Reality Kit," filed October 18, 2019, which is incorporated herein by reference in its entirety for all purposes.
In some embodiments, the client device 300 includes one or more readily available (e.g., off-the-shelf) components, such as a Pico Neo 3 Pro (Pico Interactive Inc., San Francisco, Calif.), Oculus Quest 2 (Oculus VR, Irvine, Calif.), Snapchat Spectacles 3 (Snap Inc., Santa Monica, Calif.), Google Cardboard (Google LLC, Mountain View, Calif.), or HTC VIVE Pro 2 (HTC Corporation, Taoyuan, Taiwan, China), or the like. Those skilled in the art will appreciate that the present disclosure is not so limited.
In some embodiments, client device 300 presents media to a user through display 308. Examples of media presented by display 308 include one or more images, video, audio (e.g., waveforms of audio samples), or a combination thereof. In an exemplary embodiment, one or more images, video, audio, or a combination thereof are presented by the display through the digital reality scene 40. In some embodiments, the audio is presented by an external device (e.g., speaker, headset, etc.), which receives audio information from the client device 300, the digital reality system 200, or both, and presents audio data based on the audio information. In some embodiments, the user interface 306 also includes an audio output device, such as a speaker or audio output for connection with a speaker, earphone, or headset, or the like. In some embodiments, the user interface 306 also includes an audio input device (e.g., a microphone) and optional speech recognition capabilities (e.g., to supplement or replace a keyboard). Optionally, the client device 300 includes an audio input device 310 (e.g., a microphone) to capture audio (e.g., speech from a user). In some embodiments, the audio input device 310 is a single omnidirectional microphone.
In some embodiments, the client device 300 further includes one or more of the following: one or more sensors (e.g., accelerometer, magnetometer, proximity sensor, gyroscope); imaging devices (e.g., camera devices or modules and related components); and/or a positioning module (e.g., a Global Positioning System (GPS) receiver or other navigation or geographic positioning device and related components). In some embodiments, the sensor includes one or more hardware devices that detect spatial and motion information related to the client device 300. The spatial and motion information may include information related to: the location of the client device 300, the orientation of the client device 300, the velocity of the client device 300, the rotation of the client device 300, the acceleration of the client device 300, or a combination thereof. For example, in some embodiments, the sensor includes one or more Inertial Measurement Units (IMUs) for detecting rotation of the user's head while the user is utilizing (e.g., wearing) the client device 300. In some embodiments, this rotation information (e.g., by the client application 320 of fig. 3 and/or the digital reality session engine 38 of fig. 2B) is used to adjust the image displayed on the display 308 of the client device 300. In some embodiments, each IMU includes one or more gyroscopes, one or more accelerometers, and/or one or more magnetometers that collect spatial and motion information. In some embodiments, the sensor includes one or more cameras located on the client device 300.
Memory 312 includes high-speed random access memory (such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices, etc.), and optionally also non-volatile memory (such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state memory devices, etc.). Memory 312 may optionally include one or more storage devices located remotely from CPU(s) 302. The memory 312 or alternatively the nonvolatile memory device(s) within the memory 312 include a non-transitory computer readable storage medium. Access to memory 312 by other components of client device 300, such as CPU(s) 302 and I/O subsystem 330, is optionally controlled by a controller. In some embodiments, memory 312 may include mass storage remotely located relative to CPU 302. In other words, some of the data stored in the memory 312 may actually be hosted on a device external to the client device 300, but may be electronically accessed by the client device 300 through the internet, an intranet, or other form of network 106, or an electronic cable using the communication interface 304.
In some embodiments, the memory 312 of the client device 300 stores the following:
an operating system 316 that includes processes for handling various basic system services;
An electronic address 318 associated with the client device 300 that identifies the client device 300 within the distributed system 100;
a client application 320 for generating content for display through a graphical user interface presented on the display 308 of the client device 300; and
An engine 322 that allows the client application 320 to operate in conjunction with the client device 300.
In some embodiments, an electronic address 318 is associated with the client device 300, the electronic address 318 being used to at least uniquely identify the client device 300 from other devices and components of the distributed system 100. In some embodiments, the electronic address 318 associated with the client device 300 is used to determine a source of the assessment provided by the client device 300 (e.g., receiving the assessment from the digital reality system 200 and communicating one or more responses based on the assessment).
In some embodiments, each client application 320 is a set of instructions that, when executed by a processor, generates content for presentation to a user, such as a virtual reality scene 40, an augmented reality scene 40, a mixed reality scene 40, and the like. The client application 320 may generate content in response to input received from a user through movement of the client device 300, such as the input device 310 of the client device. Here, the client application 320 includes a gaming application, a conferencing application, a video playback application, or a combination thereof. For example, in some embodiments, client application 320 facilitates providing one or more sessions of a digital reality scene (such as digital reality scene 40-1, 40-2, …, or 40-H of FIG. 1, etc.). In some embodiments, the client application 320 is used to obtain an assessment from the subject that includes an identification of the plurality of suggested experiences 24. In some embodiments, client application 320 is used to configure one or more criteria associated with nodes of experience 24, and optionally, configure the experience within digital reality scene 40, such as the number of player characters and/or the number of non-player characters that may participate in digital reality scene 40 during a given challenge.
In some embodiments, engine 322 is a software module that allows the client application 320 to operate in conjunction with the client device 300. In some embodiments, engine 322 receives information from sensors on the client device 300 and provides that information to the client application 320. Based on the received information, engine 322 determines the type of media content and/or haptic feedback to be provided to the client device 300 for presentation to the user via the display 308 or one or more audio devices. For example, if engine 322 receives information from a sensor of the client device 300 indicating that the user has looked to the left, engine 322 generates content for the display 308 that reflects the user's movement in the digital reality scene 40. As another example, if the user hits a wall (e.g., in the digital reality scene 40), the engine 322 generates a control signal for the haptic feedback mechanism of the client device 300 to generate a vibration, and optionally generates audio corresponding to the user action (e.g., the sound of a human fist hitting a wooden wall, or the sound of a human fist hitting a Plexiglas wall, which would differ from the sound generated for a wooden wall). As yet another non-limiting example, in some embodiments, engine 322 receives information from one or more sensors in electronic communication with the client device 300, where the one or more sensors obtain biometric data (such as the instantaneous heart rate of the user captured over a period of time) from a user of the client device 300. In such an embodiment, the engine 322 generates content for the display 308 that is responsive to the biometric data from the user, such as changing the color of a first object 42-1 in the digital reality scene 40 from a first color (e.g., orange) to a second color (e.g., purple) to reflect a decrease in the user's instantaneous heart rate. However, the present disclosure is not limited thereto.
Similarly, in some embodiments, engine 322 receives information from sensors of client device 300 and provides information from the sensors to client application 320. Thus, in some embodiments, the application 320 uses this information to act within the digital reality scene of the application 320. Thus, if the engine 322 receives information from the sensor that the user has lifted his or her hand, the simulated hand in the digital reality scene 40 is raised to a corresponding height. However, the present disclosure is not limited thereto.
In some embodiments, the engine 322 generates a control signal for the haptic feedback mechanism that causes the haptic feedback mechanism to create one or more haptic effects. As described above, the information received by the engine 322 may also include information from the client device 300. For example, in some embodiments, one or more cameras (e.g., input device 310, I/O subsystem 330 of fig. 3) disposed on the client device 300 capture movement of the client device 300, and the client application 320 may use this additional information to act within the digital reality scene 40 of the client application 320.
In some embodiments, engine 322 provides feedback to the user that an action was taken. In some embodiments, the feedback provided is provided visually through the display 308 of the client device 300, audibly through one or more audio devices (e.g., the I/O subsystem 330) of the client device 300, and/or tactilely via one or more of the haptic feedback mechanisms of the client device 300.
Additional details and information regarding the utilization of engines (e.g., digital reality session engine 38 of fig. 2B, engine 322 of fig. 3) may be found in: U.S. patent application publication No. 2018/0254097 A1, entitled "Dynamic Multi-Sensory Simulation System for Effecting Behavior Change," filed March 5, 2018; U.S. patent application publication No. 2020/0022632 A1, entitled "Digital Content Processing and Generation for a Virtual Environment," filed July 16, 2019; and U.S. patent application publication No. 2020/0023157 A1, entitled "Dynamic Digital Content Delivery in a Virtual Environment," filed July 16, 2019; each of which is incorporated by reference herein in its entirety for all purposes.
Each of the above modules and applications corresponds to a set of executable instructions for performing one or more of the functions described above and the methods described in this disclosure. These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be optionally combined or otherwise rearranged in various embodiments of the disclosure. In some embodiments, memory 312 optionally stores a subset of the modules and data structures described above. Furthermore, in some embodiments, memory 312 stores additional modules and data structures not described above.
It should be understood that fig. 3 illustrates only an example of a client device 300, and that the client device 300 may alternatively have more or fewer components than shown, may alternatively combine two or more components, or may alternatively have a different configuration or arrangement of components. The various components shown in fig. 3 are implemented in hardware, software, firmware, or a combination thereof (including one or more signal processing and/or application specific integrated circuits). Further, the client device 300 may be a single device including all the functions of the client device 300. The client device 300 may be a combination of a plurality of devices. For example, the functionality of client device 300 may be distributed across any number of networked computers and/or resident on each of a number of networked computers and/or hosted at one or more virtual machines and/or containers at remote locations accessible across a communication network (e.g., communication network 106, network interface 304, or both). Those skilled in the art will appreciate that a number of different computer topologies are possible for the client device 300, as well as other devices and systems of the present disclosure, and that all such topologies are within the scope of the present disclosure.
Referring now to figs. 4A-4R, a flow chart illustrating an exemplary method 400 according to some embodiments of the present disclosure is depicted. In the flow chart, preferred portions of the method are shown in solid line boxes, while additional, optional, or alternative portions of the method are shown in dashed line boxes. The method prepares a regimen for improving the subject's ability to manage a mental disorder or condition exhibited by the subject. In particular, the method 400 implements an exposure progression that improves the subject's ability to manage the subject's mental disorder or condition. In various embodiments, the method 400 combines digital reality (e.g., one or more virtual reality scenes), biometric capture, digital reality interactions from subjects, and/or other elements to create or modify a personal exposure progression for each subject in a targeted and flexible manner. Thus, in some such embodiments, the present disclosure not only provides a customized exposure progression for each subject but also dynamically personalizes the timing and/or nature of the exposure progression as the subject interacts with it. For example, in some embodiments, the present disclosure dynamically builds or revises the personal exposure progression based at least in part on the level of success that the subject has in one or more social challenges.
In some embodiments, the method 400 obtains a plurality of categories for the subject, wherein respective ones of the plurality of categories are associated with a corresponding plurality of suggested experiences, and respective ones of the corresponding plurality of suggested experiences are associated with corresponding digital reality scenes that present corresponding challenges. The method 400 then presents a first digital reality scene on the display that presents a first challenge designed for a first suggested experience of a first category. The method 400 obtains at least one biometric data element associated with the subject when the subject is completing the first challenge in the first digital reality scenario. In some embodiments, using the obtained biometric data elements, the method 400 determines whether the subject successfully completed the first challenge. In some embodiments, in accordance with a determination that the first challenge was successfully completed for the subject, the method 400 determines whether the subject successfully completed the first category. In some embodiments, in accordance with a determination that the first category was successfully completed for the subject, the method 400 determines a second category of the plurality of categories for the subject to proceed next. Thus, the method 400 achieves an exposure progression for improving the subject's ability to manage a subject's psychosis or mental condition.
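By way of a non-limiting illustration, the following self-contained sketch models the control flow just described (present a challenge, obtain a biometric data element, test for success, and advance through categories); every name, data structure, and threshold value here is a hypothetical assumption, not the disclosed implementation.

```python
# Hypothetical sketch of the exposure-progression control flow of method 400.
def challenge_succeeded(heart_rate: float, threshold: float) -> bool:
    # Example success test: the captured biometric stays below a threshold.
    return heart_rate < threshold

def run_progression(categories: list[list[float]],
                    heart_rates: list[float]) -> int:
    """Return the index of the category the subject should proceed to next.

    Each category is modeled as a list of per-challenge biometric thresholds,
    and heart_rates supplies one reading per completed challenge.
    """
    reading = iter(heart_rates)
    for idx, thresholds in enumerate(categories):
        for threshold in thresholds:
            if not challenge_succeeded(next(reading), threshold):
                return idx          # challenge not completed; repeat category
    return len(categories)          # all categories successfully completed

print(run_progression([[100.0, 95.0], [90.0]], [88.0, 92.0, 85.0]))
```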
In some embodiments, the method 400 includes or provides a comparison of the obtained biometric data element with a baseline of the subject or a baseline of a population of users. By comparing the obtained biometric data elements to a baseline, the methods and systems of the present disclosure are able to analyze the change in biometric metrics (e.g., heart rate) over time, evaluate the stress or anxiety (if any) that the subject is experiencing throughout the challenge, the exposure progression and/or the condition of the overall plan, or a combination thereof. In some embodiments, the methods and systems of the present disclosure are capable of evaluating one or more evaluations and/or providing one or more recommendations for each subject for each exposure progress. However, the present disclosure is not limited thereto.
Block 402. Referring to block 402, in various embodiments, the method 400 is provided at a computer system associated with a subject (e.g., the system 100 of fig. 1, the digital reality system 200 of fig. 2A and 2B, the client device 300 of fig. 3, etc.). The computer system includes one or more processors (e.g., the CPU 202 of fig. 2A, the CPU 302 of fig. 3, etc.), a display (e.g., the display of the client device 300), a plurality of sensors (e.g., the sensor 110-1, the sensor 110-2 of fig. 1), and a memory (e.g., the component 212 of fig. 2A, the memory 312 of fig. 3, etc.) coupled to the one or more processors. The memory includes one or more programs configured to be executed by the one or more processors. Thus, in some such embodiments, the method 400 requires the use of a computer system, such as to present information to the subject, and so forth, and thus cannot be performed mentally.
Block 404. Referring to block 404, the display may be any suitable display. For example, in some embodiments, the display is a wearable display, such as display 308 of fig. 13, and the like. Examples of wearable displays include, but are not limited to, smart watches, head mounted displays, smart clothing, near-eye displays, and smart mobile device displays (e.g., smart phones), such as the display of fig. 13, etc. In some embodiments, the display is a Head Mounted Display (HMD) connected to a host computer system, such as client device 300 of fig. 3, and the like. In some embodiments, the display is a standalone HMD, mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
Blocks 406 through 410. Referring to blocks 406 through 410, in some embodiments, the plurality of sensors includes at least one biometric sensor (e.g., sensor 110-1 of fig. 1) configured to collect a measurable biometric signal associated with a physiological or psychological state of the subject. The biometric sensor of the at least one biometric sensor may be any suitable biometric sensor. Examples of suitable biometric sensors include, but are not limited to, heart rate sensors, heart rate variability sensors, galvanic skin activity sensors, galvanic skin response sensors, electroencephalogram sensors, eye tracking sensors, recorders, microphones, thermometers, and cameras. In some embodiments, the biometric sensor of the at least one biometric sensor is incorporated in or is a component of a wearable device, such as a wristwatch, wristband, headset, garment, vest, shirt, or other suitable device.
In some embodiments, a biometric sensor of the at least one biometric sensor is incorporated with or is a component of the client device. For example, in some embodiments, when the smart phone is used as a client device, the microphone of the smart phone is used to capture voice data. In some embodiments, a biometric sensor of the at least one biometric sensor communicates with the one or more client devices such that data captured by the one or more sensors may be transmitted to the one or more client devices and/or aggregated in the one or more client devices. For example, in some embodiments, an eye tracking sensor configured to track eye movements is physically or wirelessly connected to a client device. In some embodiments, a biometric sensor of the at least one biometric sensor is in communication with the system (e.g., connected to the digital reality system 200 via the communication network 106) such that data captured by the one or more sensors may be transmitted to and/or aggregated on the system. In some embodiments, a biometric sensor of the at least one biometric sensor communicates with one or more client devices and the digital reality system 200.
In some embodiments, the at least one biometric sensor is comprised of a single biometric sensor. In some embodiments, the at least one biometric sensor comprises two, three, four, five, or more than five biometric sensors of the same type or different types. For example, as a non-limiting example, in some embodiments, the at least one biometric sensor includes two first biometric sensors positioned at different locations, e.g., one heart rate sensor positioned at the subject's wrist and another heart rate sensor positioned at the subject's arm. As another non-limiting example, in some embodiments, the at least one biometric sensor includes a first biometric sensor (e.g., a heart rate sensor for measuring a heart rate of the subject) and a second biometric sensor (e.g., an eye tracking sensor for tracking eye movements of the subject) different from the first biometric sensor. As further non-limiting examples, in some embodiments, the at least one biometric sensor includes one or more heart rate sensors, one or more heart rate variability sensors, one or more galvanic skin activity sensors, one or more galvanic skin response sensors, one or more electroencephalogram sensors, one or more eye tracking sensors, one or more recorders, one or more microphones, one or more thermometers, one or more cameras, or any combination thereof.
In some embodiments, the biometric sensor features other sensors and capabilities. A non-limiting example is a heart rate sensor that includes an accelerometer to collect additional data. Another non-limiting example is a mobile device that includes one or more cameras and/or microphones that may be used to capture facial expressions and/or to record speech or speech.
The plurality of sensors may include other sensors. For example, in some embodiments, the plurality of sensors includes, but is not limited to, accelerometers, magnetometers, proximity sensors, gyroscopes, camera devices (e.g., camera devices or camera modules and related components), positioning modules (e.g., global Positioning System (GPS) receivers or other navigation or geographic positioning system modules/devices and related components), combinations thereof, and the like.
In some embodiments, the plurality of sensors includes between 2 and 100 sensors, between 2 and 50 sensors, between 2 and 20 sensors, between 2 and 15 sensors, between 2 and 10 sensors, between 2 and 5 sensors, between 3 and 100 sensors, between 3 and 50 sensors, between 3 and 20 sensors, or between 3 and 15 sensors. In some embodiments, the plurality of sensors includes at least 2 sensors, at least 3 sensors, at least 4 sensors, at least 5 sensors, at least 6 sensors, at least 8 sensors, at least 10 sensors, at least 12 sensors, at least 15 sensors, at least 20 sensors, at least 25 sensors, at least 50 sensors, at least 75 sensors, or at least 100 sensors. In some embodiments, the plurality of sensors includes at most 2 sensors, at most 3 sensors, at most 4 sensors, at most 5 sensors, at most 6 sensors, at most 8 sensors, at most 10 sensors, at most 12 sensors, at most 15 sensors, at most 20 sensors, at most 25 sensors, at most 50 sensors, at most 75 sensors, or at most 100 sensors.
In some embodiments, the plurality of sensors includes a continuous sensor configured to obtain an uninterrupted or repeating (e.g., periodic) flow of data elements from the subject. In some embodiments, the plurality of sensors includes a passive sensor configured to obtain information from an environment associated with the subject. Further, in some embodiments, the plurality of sensors includes a non-invasive sensor configured to obtain the data element from the subject without being introduced into the subject's body. Thus, in some such embodiments, the method 400 can provide a unique exposure progression to the subject based on information (e.g., data) provided by a plurality of sensors associated with the computer system, which allows for improved ability of the subject.
Blocks 412 through 430. Referring to blocks 412-430, in some embodiments, the psychosis or mental condition is a clinically diagnosed mental disorder or a subclinical mental disorder. Examples of psychosis or mental conditions include, but are not limited to, feeling stressed in social situations, fear of social situations, or feeling overwhelmed in social situations. For example, in some embodiments, the clinically diagnosed mental disorder is an anxiety disorder, such as separation anxiety disorder, selective mutism, specific phobia, social anxiety disorder, panic disorder, agoraphobia, generalized anxiety disorder, substance-induced anxiety disorder, or anxiety disorder due to a medical condition of the subject, and the like. In some embodiments, the clinically diagnosed mental disorder is a mood disorder, such as depression, bipolar disorder, or cyclothymic disorder, and the like. For example, in some embodiments, the depression is major depression. In some embodiments, the clinically diagnosed mental disorder is a psychotic disorder, such as schizophrenia, delusional disorder, or hallucinatory disorder, or the like. In some embodiments, the clinically diagnosed mental disorder is an eating disorder, such as anorexia nervosa, bulimia nervosa, or binge eating disorder. In some embodiments, the clinically diagnosed mental disorder is an impulse control disorder, such as pyromania, kleptomania, or compulsive gambling, and the like. In some embodiments, the clinically diagnosed mental disorder includes, but is not limited to, personality disorder, obsessive-compulsive disorder, or post-traumatic stress disorder. In some embodiments, the clinically diagnosed mental disorder is an addictive disorder, such as an alcohol use disorder or substance abuse disorder, or the like. In some embodiments, the clinically diagnosed mental disorder is a personality disorder, such as an antisocial personality disorder, an obsessive-compulsive personality disorder, or a paranoid personality disorder, among others. However, the present disclosure is not limited thereto.
Block 432. Referring to block 432, in various embodiments, the method includes obtaining a plurality of categories for a subject. In some embodiments, each respective category of the plurality of categories relates to improving a subject's ability to manage a subject's mental disease or condition. Each respective category of the plurality of categories is associated with a corresponding set of suggested experiences (e.g., experience 24-1, experience 24-2, …, and/or experience 24-I) from experience store 22 of digital reality system 200. In some embodiments, each respective category of the plurality of categories is associated with a corresponding plurality of suggested experiences. Each respective category of the plurality of categories is also associated with at least one respective gate criterion of a plurality of gate criteria (e.g., gate criterion 32-1, gate criterion 32-2, …) from the criteria store 30 of the digital reality system 200.
For each respective category of the plurality of categories, each respective suggested experience (e.g., experience 24-1) of the corresponding set of suggested experiences (or plurality of suggested experiences) is associated with a corresponding digital reality scene (e.g., digital reality scene 40-1) of the corresponding plurality of digital reality scenes. A corresponding digital reality scene (e.g., digital reality scene 40-1) of the corresponding plurality of digital reality scenes presents a corresponding challenge (e.g., challenge 26-1) of the corresponding plurality of challenges designed for a respective suggested experience of the respective category. For each respective category of the plurality of categories, each respective suggested experience (e.g., experience 24-1) of the corresponding set of suggested experiences (or plurality of suggested experiences) is also associated with at least one biometric threshold of the plurality of biometric thresholds (e.g., biometric threshold 33-1, threshold 33-2, …) from the criteria store 30 of the digital reality system 200.
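One way to picture the relationships just described is as a small object model. The Python sketch below mirrors the category → suggested experience → digital reality scene → challenge chain and the associated criteria; all class and field names are invented for illustration and do not come from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Challenge:
    challenge_id: str              # e.g., "26-1"
    description: str

@dataclass
class DigitalRealityScene:
    scene_id: str                  # e.g., "40-1"
    challenge: Challenge           # the challenge this scene presents

@dataclass
class SuggestedExperience:
    experience_id: str             # e.g., "24-1"
    scene: DigitalRealityScene     # corresponding digital reality scene
    biometric_threshold_ids: list  # e.g., ["33-1", "33-2"]

@dataclass
class Category:
    category_id: str
    gate_criterion_ids: list       # at least one, e.g., ["32-1"]
    experiences: list = field(default_factory=list)  # suggested experiences
```

In this picture, a suggested experience may appear in the experience list of more than one category, which models the partially overlapping experience sets discussed below.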
It should be noted that the plurality of categories may include any suitable number of categories. For example, in some embodiments, the plurality of categories includes at least a first category and a second category. In some embodiments, the plurality of categories includes at least a first category, a second category, and a third category. In some embodiments, the plurality of categories includes more than three, more than four, more than five, more than ten, or more than twenty categories. In some embodiments, the categories relate to improving the ability of the subject to manage psychosis or mental conditions in social interactions and/or interaction anxiety (e.g., attending parties, meeting strangers), public performance anxiety (e.g., reporting to a panel), fear of observation (e.g., writing while observed), ingestion anxiety (e.g., eating and/or drinking), assertiveness anxiety (e.g., resisting a salesperson), or any combination thereof. However, the present disclosure is not limited thereto. In some embodiments, each respective category of the plurality of categories relates to a unique ability to enhance the subject's management of the subject's mental disease or condition. For example, in some embodiments, a first category (e.g., an exposure category) is associated with improving the ability of the subject to face a conflict associated with the subject (such as a fear-inducing conflict, etc.). In some embodiments, a second category (e.g., a CBT category) is associated with improving the ability of the subject to reconstruct ideas associated with the subject. In some embodiments, a third category (e.g., a CBT category) is associated with improving the subject's ability to assess the usefulness of ideas associated with the subject. In some embodiments, a fourth category (e.g., a CBT category) is associated with improving the ability of the subject to dismiss ideas associated with the subject.
In some embodiments, the plurality of obtained categories is a predetermined set of categories or a subset of the predetermined set of categories. In some embodiments, the plurality of obtained categories is selected by the subject, a healthcare practitioner associated with the subject, a model, or a combination thereof. For example, in some embodiments, multiple obtained categories are first selected by the subject, and then the first selection is refined by the healthcare practitioner and/or model to provide a second selection of categories. In some embodiments, the plurality of obtained categories are customized based on the mental illness or condition exhibited by the subject. For example, consider a first user of a first client device 300-1 exhibiting social anxiety disorder and a second user of a second client device 300-2 exhibiting addictive disorder. The method 400 may obtain a different plurality of categories for the first user and the second user, wherein each of the plurality of categories is unique to the user based on the mental illness or condition exhibited by the user.
The corresponding set of suggested experiences may include any suitable number of suggested experiences, such as one, two, three, four, five, more than ten, more than twenty, more than fifty, more than one hundred suggested experiences, and so forth. In some example embodiments, each respective category of the plurality of categories is associated with a corresponding plurality of suggested experiences, e.g., with two, three, four, five, more than ten, more than twenty, more than fifty, or more than one hundred suggested experiences. In some embodiments, each respective category of the plurality of categories is associated with: between 2 and 100 suggested experiences, between 2 and 50 suggested experiences, between 2 and 25 suggested experiences, between 2 and 10 suggested experiences, between 2 and 5 suggested experiences, between 3 and 100 suggested experiences, between 3 and 50 suggested experiences, between 3 and 25 suggested experiences, between 3 and 10 suggested experiences, between 3 and 5 suggested experiences, between 7 and 100 suggested experiences, between 7 and 50 suggested experiences, between 7 and 25 suggested experiences, between 7 and 10 suggested experiences, between 30 and 100 suggested experiences, or between 30 and 50 suggested experiences.
In some embodiments, different ones of the plurality of categories are associated with the same number of suggested experiences or different numbers of suggested experiences. For example, in some embodiments, the first category and the second category are associated with the same number (e.g., six) of suggested experiences, while the third category is associated with a different number (e.g., eight) of suggested experiences than the first category and the second category. In some embodiments, different ones of the plurality of categories are associated with an entirely different set of suggested experiences or a partially overlapping set of suggested experiences. In other words, in some embodiments, the suggested experience associated with one category is completely different from the suggested experience associated with another category (e.g., no suggested experience is associated with two different categories), or overlaps with the suggested experience associated with another category (e.g., at least one suggested experience is shared by two or more different categories). For example, as a non-limiting example, in some embodiments, a first category is associated with a first set of suggested experiences consisting of experience 24-1, experience 24-2, experience 24-3, and experience 24-4, while a second category is associated with a second set of suggested experiences consisting of experience 24-5, experience 24-6, and experience 24-7 from experience store 22 of digital reality system 200. In such embodiments, the experience associated with the first category is different from the experience associated with the second category. As another non-limiting example, in some alternative embodiments, a first category is associated with a first set of suggested experiences consisting of experience 24-1, experience 24-2, experience 24-3, and experience 24-4, while a second category is associated with a second set of suggested experiences consisting of experience 24-4, experience 24-5, and experience 24-6 from experience store 22 of digital reality system 200. In such an embodiment, experience 24-4 is associated with both the first category and the second category.
In some embodiments, the categories are associated with the first experience 24-1 and the second experience 24-2 of FIG. 2B. The first experience 24-1 is associated with a corresponding digital reality scene (e.g., first digital reality scene 40-1) that presents a corresponding challenge (e.g., first challenge 26-1). Similarly, the second experience 24-2 is associated with a corresponding digital reality scene (e.g., the second digital reality scene 40-2) that presents a corresponding challenge (e.g., the second challenge 26-2).
In some embodiments, the experience is associated with a digital reality scene that presents challenges. For example, in some embodiments, the experience is an exposure experience, such as one involving social interactions or interaction anxiety (e.g., meeting strangers), and the like. In some embodiments, the experience is an exposure experience associated with a digital reality scenario that presents challenges in verbal/non-verbal performance, such as the subject's performance in front of others or public speaking (e.g., reporting to a team), etc. In some embodiments, the experience is an exposure experience associated with a digital reality scene that presents challenges in fear of observation (e.g., writing while being observed). In some embodiments, the experience is an exposure experience associated with a digital reality scene that presents challenges in ingestion anxiety (e.g., eating and/or drinking). As an example, figs. 5A and 5B illustrate an exemplary digital reality scenario for social challenge training.
In some embodiments, the experience is a CBT experience associated with a digital reality scenario that presents challenges in collecting evidence associated with the subject's ideas. In some embodiments, the experience is a CBT experience associated with a digital reality scene that presents challenges in reconstructing the ideas of the subject. In some embodiments, the experience is a CBT experience associated with a digital reality scenario that presents challenges in dismissing the subject's ideas. In some embodiments, the experience is a mindfulness experience associated with a digital reality scenario that presents challenges in mindful presence (e.g., the ability of the subject to be fully present, or awareness of where the subject is and what the subject is doing, etc.).
In some embodiments, a category of the plurality of categories is associated with one or more experiences 24 that involve meeting strangers, such as at a wedding, at a work event, on a dating app (application), or at the beginning of a school year, etc. In some embodiments, a category of the plurality of categories is associated with one or more experiences 24 that involve interaction with a person, e.g., joining a group at work, making small talk with a neighbor, asking a colleague a question, or receiving feedback from a manager, etc. In some embodiments, a category of the plurality of categories is associated with one or more experiences 24 that involve performing in front of people, e.g., presenting at work, giving a toast at a gathering, interviewing for a job, or speaking in front of a full class, etc.
In some embodiments, a category of the plurality of categories is associated with one or more experiences 24 that involve one or more exposure techniques (such as exposure therapy techniques, etc.). In some embodiments, the subject is increasingly confronted with one or more anxiety triggers associated with the subject by interacting with an exposure experience of the exposure category. In some embodiments, such as over a period of time, the subject's anxiety is reduced by exposure to a social experience (e.g., as determined based on one or more data sets obtained by the sensor and evaluated by the healthcare practitioner and/or model of the present disclosure), confidence is established, and the subject's range of activity can be expanded in a manner that improves the subject's ability to manage the psychosis or mental condition.
For example, in some embodiments, the exposure category includes one or more social interaction experiences 24 that relate to performing in front of others. In some embodiments, the exposure categories in the plurality of categories include or are associated with one or more experiences 24 that involve interaction anxiety (e.g., specific challenges 26 for meeting strangers), public speaking (e.g., specific challenges 26 for reporting to a team), fear of observation (e.g., specific challenges 26 for writing while observed), ingestion anxiety (e.g., specific challenges 26 for eating and/or drinking), or any combination thereof. As a non-limiting example, consider interaction anxiety involving distress around strangers. Non-limiting examples of suggested experiences in an exposure category configured for such interaction anxiety include: a first suggested experience 24-1 with a corresponding first challenge 26-1 of making eye contact with an unfamiliar bartender when getting a drink at a bar; a second suggested experience 24-2 with a corresponding second challenge 26-2 of introducing yourself to a player character (e.g., another avatar) in the digital reality scene 40 and saying a few things about yourself; and a third suggested experience 24-3 of participating in a corresponding third challenge 26-3 in an augmented digital reality scene 40 or a mixed digital reality scene 40. In other embodiments, the exposure categories of the plurality of categories include one or more experiences 24 that involve interaction anxiety (e.g., distress around strangers), non-verbal performance anxiety (e.g., taking an exam), ingestion anxiety, public performance anxiety, assertiveness anxiety (e.g., resisting a high-pressure salesperson), or any combination thereof. In some embodiments, the exposure category of the plurality of categories includes one or more experiences 24 that involve speaking face-to-face with a stranger (e.g., with someone you do not know well, such as a non-player character (NPC) in a digital reality scenario), public performance (e.g., speaking at a meeting, making a prepared verbal presentation to a group, etc.), assertiveness (e.g., expressing disagreement or disapproval to someone you do not know well), or any combination thereof.
More specifically, in some embodiments, the corresponding challenges 26 of the suggested exposure experiences 24 include: a first challenge 26-1 of using the phone in public; a second challenge 26-2 of engaging in a group activity; a third challenge 26-3 of eating in public; a fourth challenge 26-4 of drinking with others; a fifth challenge 26-5 of talking to someone in authority; a sixth challenge 26-6 of acting, performing, or speaking in front of an audience; a seventh challenge 26-7 of attending a party; an eighth challenge 26-8 of working while observed; a ninth challenge 26-9 of writing while observed; a tenth challenge 26-10 of making a call to someone you do not know well; an eleventh challenge 26-11 of talking face-to-face with someone you do not know well; a twelfth challenge 26-12 of urinating in a public bathroom; a thirteenth challenge 26-13 of entering a room when others are already seated; a fourteenth challenge 26-14 of being the center of attention; a sixteenth challenge 26-16 of speaking up at a meeting; a seventeenth challenge 26-17 of taking a test of your ability, skill, or knowledge; an eighteenth challenge 26-18 of expressing disagreement or disapproval to someone you do not know well; a nineteenth challenge 26-19 of looking someone you do not know well directly in the eyes (e.g., keeping eye contact); a twentieth challenge 26-20 of making a prepared oral presentation to a group; a twenty-first challenge 26-21 of trying to make someone's acquaintance for a romantic and/or sexual relationship; a twenty-second challenge 26-22 of returning merchandise to a store and requesting a refund; a twenty-third challenge 26-23 of hosting a party; a twenty-fourth challenge 26-24 of resisting high-pressure sales personnel; or any sub-combination thereof (e.g., any 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, or 22 of the challenges described above).
Additional details and information regarding exposure therapy can be found in the following documents: U.S. provisional patent application number 63/223,871 filed on day 7 and 20 of 2021, U.S. provisional patent application number 63/284,862 filed on day 12 and 1 of 2021, U.S. patent application number 17/869,670 filed on day 7 and 20 of 2022, U.S. provisional patent application number 63/415,860 filed on day 10 and 13 of 2022, and U.S. provisional patent application number 63/415,876 filed on day 10 and 13 of 2022, each of which is incorporated herein by reference in its entirety for all purposes.
In some embodiments, a category of the plurality of categories is associated with one or more experiences 24 that involve one or more CBT techniques (such as one or more cognitive restructuring CBT techniques, one or more evidence-gathering CBT techniques, one or more usefulness CBT techniques, or any combination thereof). In some embodiments, one or more experiences for the CBT category include challenges of identifying, or challenging, ideas or statements obtained from the subject. In some embodiments, one or more experiences for the CBT category include a challenge of replacing a first (e.g., native, initial, etc.) cognitive pattern associated with the formation of a negative idea or statement of the subject with a second (or new) cognitive pattern associated with the formation of a second idea or statement (such as a second positive or affirming idea) that is different from the first idea. In some embodiments, the CBT category of the plurality of categories includes one or more experiences that involve implementing cognitive restructuring challenges within a digital reality scene. As a non-limiting example, in some embodiments, one or more experiences of the CBT category, when completed by a subject, cause behavioral activation for the subject via long-term or short-term goal setting within the client application (such as within a digital reality scene presented by the client application, etc.). In some embodiments, Acceptance and Commitment Therapy (ACT) and/or Behavioral Activation (BA) are forms of CBT techniques utilized within an experience of the one or more experiences, providing particularly effective challenges for the anxiety disorders and depression addressed by the systems, methods, and devices of the present disclosure. However, the present disclosure is not limited thereto.
In some embodiments, one or more evidence-gathering experiences of the CBT category include implementing cognitive restructuring challenges within the digital reality scenario, including the following: having the subject identify evidence for the harmful or negative idea spoken by the subject; having the subject, medical practitioner, model, or any combination thereof evaluate whether the identified evidence is sufficient to support reconstructing the idea; having the subject reconstruct the idea (such as by modulating the subject's expectation of harm and/or perception of control, etc.) to improve the subject's psychosis or mental condition; or any combination thereof. In some embodiments, the one or more usefulness experiences for the CBT category include implementing cognitive restructuring challenges within the digital reality scenario by having the subject, healthcare practitioner, model, or any combination thereof identify core beliefs associated with ideas and correlate the core beliefs with one or more short-term and/or long-term goals associated with the subject. In some embodiments, the one or more usefulness experiences for the CBT category include implementing cognitive restructuring challenges within the digital reality scenario by having the subject, healthcare practitioner, model, or any combination thereof identify core beliefs associated with anxiety-inducing ideas (such as statements captured from the subject by a recorder sensor of the client device, etc.). In some embodiments, the usefulness CBT experience includes implementing cognitive restructuring challenges within a digital reality scenario by having a subject, healthcare practitioner, model, or any combination thereof determine how useful or detrimental a core belief is in helping the subject achieve one or more short-term and/or long-term goals. In some embodiments, the one or more dissociation experiences for the CBT category include implementing cognitive restructuring challenges within the digital reality scene by having the subject repeat ideas associated with the subject while speaking in the third person within the digital reality scene. In some embodiments, the one or more dissociation experiences include implementing a cognitive restructuring challenge within the digital reality scenario by having the subject, a healthcare practitioner, a model, or any combination thereof determine whether the anxiety-inducing idea or statement has a reduced or diminished effect (e.g., loses its intensity) on the subject based on one or more data sets obtained from sensors of the plurality of sensors (e.g., based on the heart rate of the subject, or based on vocalization features of the subject, such as the subject's prosody, etc.).
Additional details and information regarding categories and/or types of suggested experiences are disclosed in the following documents: Heimberg et al., 1999, "Psychometric Properties of the Liebowitz Social Anxiety Scale," Psychological Medicine, 29(1), pg. 199; Safren et al., 1999, "Factor Structure of Social Fears: The Liebowitz Social Anxiety Scale," Journal of Anxiety Disorders, 13(3), pg. 253; Baker et al., 2002, "The Liebowitz Social Anxiety Scale as a Self-Report Instrument: A Preliminary Psychometric Analysis," Behavior Research and Therapy, 40(6), pg. 701; Loenen et al., "The Effectiveness of Virtual Reality Exposure-Based Cognitive Behavioral Therapy for Severe Anxiety Disorders, Obsessive-Compulsive Disorder, and Posttraumatic Stress Disorder: Meta-analysis," J Med Internet Res. 2022 Feb 10, 24(2); Wu et al., "Virtual Reality-Assisted Cognitive Behavioral Therapy for Anxiety Disorders: A Systematic Review and Meta-Analysis," Front Psychiatry. 2021 Jul 23, 12:575094; Garland et al., "Biobehavioral Mechanisms of Mindfulness as a Treatment for Chronic Stress: An RDoC Perspective," Chronic Stress (Thousand Oaks). 2017 Feb, 1:2470547017711912; Hofmann et al., 2017, "Mindfulness-Based Interventions for Anxiety and Depression," Psychiatr. Clin. North Am., 40(4), pg. 739-749; Creswell et al., "Mindfulness Training and Physical Health: Mechanisms and Outcomes," Psychosom Med. 2019 Apr, 81(3):224-232; Seabrook et al., "Understanding How Virtual Reality Can Support Mindfulness Practice: Mixed Methods Study," J Med Internet Res. 2020 Mar 18, 22(3); Navarro-Haro et al., "Meditation experts try Virtual Reality Mindfulness: A pilot study evaluation of the feasibility and acceptability of Virtual Reality to facilitate mindfulness practice in people attending a Mindfulness conference," PLoS One. 2017 Nov 22, 12(11); Chandrasiri et al., "A virtual reality approach to mindfulness skills training," Virtual Reality 24, 143-149 (2020); Bluett et al., 2014, "Acceptance and commitment therapy for anxiety and OCD spectrum disorders: an empirical review," J Anxiety Disord., 28(6), pg. 612-24; Zawn et al., 2021, "What is behavioral activation?," Medical News Today, October 24, 2021; et al., "Acrophobia treatment with virtual reality-assisted acceptance and commitment therapy: two case reports," 2020; Paul et al., "Virtual Reality Behavioral Activation as an Intervention for Major Depressive Disorder: Case Report," JMIR Mental Health, 7(11), 2020 (each of which is incorporated by reference herein in its entirety for all purposes). By using these aforementioned different types of suggested experiences (e.g., interaction, performance, and/or assertiveness), the subject can more easily follow and track the progress of the regimen 20. Furthermore, in some embodiments, different types of suggested experiences are configured for respective mental diseases or conditions. For example, in some embodiments, an addictive disorder condition calls for the use of a first experience, while social anxiety disorder calls for the use of a second experience that is different from the first experience. However, the present disclosure is not limited thereto.
Blocks 434 through 436. Referring to blocks 434 through 436, in some embodiments, at least one gate criterion is set by a system administrator (e.g., an administrator of the digital reality system 200), a model or algorithm, a user (e.g., a subject), a healthcare practitioner associated with the subject, or any combination thereof. For example, in some embodiments, the gate criteria are set by a system administrator that configures the gate criteria to be conditional on receiving payment from the subject (e.g., for accessing digital reality system 200, for accessing a particular digital reality scene 40, etc.). However, the present disclosure is not so limited, and those skilled in the art of the present disclosure will appreciate that other corresponding gate criteria set by a system administrator are within the scope of the present disclosure. In some embodiments, the gate criteria set by the system are geographic gate criteria that impose geographic restrictions on utilizing the systems, methods, and apparatus of the present disclosure within one or more geographic areas, such as by restricting the subject and/or healthcare practitioner from using the system when outside of the first geographic area. However, the present disclosure is not limited thereto.
In some embodiments, the gate criteria are set by the subject (e.g., the user of the first client device 300-1). For example, in some embodiments, the gate criteria is a number of challenges completed, such as a number of short-term goals achieved by the subject, a number of reconstructed ideas of the subject, a number of exposed challenges completed by the subject, a period of time to interact with the digital reality scene, or a combination thereof.
In some embodiments, the gate criteria are set by a health care worker (such as a healthcare practitioner, etc.) associated with the subject (e.g., the user of the second client device 300-2, via the client application 320). In some embodiments, the first category of gate criteria is set by a system administrator or healthcare worker associated with the subject, while the second category of gate criteria is set by the subject.
In some embodiments, at least one gate criterion is further modified by a healthcare practitioner associated with the subject, or based on a classification model, cluster, or other parameter associated with the user indicating that changing the respective one of the at least one gate criterion will increase the likelihood of interaction and/or a better clinical outcome. For example, in some embodiments, a healthcare practitioner associated with the subject or a classification-based model changes a first gate criterion associated with an idea-reconstruction threshold that the subject must meet, or a second gate criterion associated with an exposure-challenge threshold that the subject must meet, etc.
Blocks 438 through 448. Referring to blocks 438 through 448, in some embodiments, the at least one respective gating criterion associated with the respective category includes a ranking gating criterion. In some embodiments, the ranking gate criterion is associated with a hierarchical ranking of each of the plurality of categories. In some embodiments, the ranking gating criteria include subjective ratings from highest to lowest (e.g., user-provided "mild," "moderate," "severe," or "unresponsive" ratings), objective ratings from highest to lowest (e.g., most effective to least effective ratings determined by the digital reality system 200 or a healthcare practitioner associated with the subject), or a combination thereof. In some embodiments, each category in the hierarchical ranking is ranked strictly or with ties, using, for example, a competition ranking, a dense ranking, an ordinal ranking, a fractional ranking, or a combination thereof. For example, in some embodiments, a first category is ranked higher than, lower than, or equal to a second category.
In some embodiments, the at least one respective gate criterion associated with the respective category includes a difficulty gate criterion. The difficulty gate criterion is associated with the level of complexity or demand required of the subject to meet the respective challenge. For example, in some embodiments, the difficulty gate criteria modify how one or more NPC characters in the digital reality scene interact with the subject, such as how an NPC character's speech and/or tone is used to address the subject within the digital reality scene, based on the respective age of the subject. However, the present disclosure is not limited thereto. In some embodiments, difficulty gate criteria are used to determine the level of interaction with the digital reality scene (such as how much interaction with, or supervision by, a healthcare practitioner, etc.) that the subject requires to progress in the exposure progression.
In some embodiments, the at least one respective gate criterion associated with the respective category comprises a healthcare practitioner gate criterion. The healthcare practitioner gate criterion is associated with approval from a healthcare practitioner associated with the subject. In this way, a healthcare practitioner associated with a subject may supervise improvement of the mental disorder or condition exhibited by the subject by approving or denying access to a category and the suggested experiences associated with that category. For example, a healthcare practitioner may deny a particular user access to a particular category and a particular suggested experience until the healthcare practitioner believes that the user is "ready" for that category and suggested experience.
In some embodiments, the at least one respective gate criterion associated with the respective category includes a user gate criterion. In some embodiments, the user gate criterion is associated with approval or confirmation of the selection of the category from the subject. In this way, the user may actively participate in the selection of a particular category, engaging with the category or, if the user does not feel ready for the particular category and the suggested experience(s) associated with it, rejecting the category.
In some embodiments, the gate criteria set conditions for determining whether a category completed successfully and/or for identifying one or more subsequent categories for completion by the subject. In some embodiments, the gate criteria set preconditions for executing a category, or conditions that must be met in order for the category to be considered complete. A non-limiting example of a precondition is a requirement that a certain category (e.g., a first category) be completed successfully before allowing the user to invoke a particular category (e.g., a second category). For example, in some embodiments, the first precondition is a requirement to successfully complete a first coaching category before allowing the user to invoke a second category different from the first coaching category. Another non-limiting example of a condition that must be met in order for a category to be considered complete is a minimum number of suggested experiences associated with the category that must be successfully completed.
In some embodiments, the at least one respective gate criterion associated with the respective category includes an arrangement gate criterion. The placement gate criteria is associated with an order of one or more of the plurality of categories, such as an order of one or more of a series of categories forming a story or co-narrative cue, or the like. Consider, for example, a collection of three categories A, B and C that form a story or co-narrative cue. To implement a story or co-narrative cue in the correct order of A, B, then C, a first arrangement gate criterion is applied to B that requires that A be completed prior to launching B, and a second arrangement gate criterion is applied to C that requires that both A and B be completed prior to launching C.
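A minimal sketch of such an arrangement gate criterion, assuming a simple mapping from each category to the set of categories that must be completed first (all names invented), follows:

```python
def can_launch(category_id, prerequisites, completed):
    """Return True if every category required by the arrangement gate
    criterion for `category_id` has already been completed.

    `prerequisites` maps a category id to the set of category ids that
    must finish first; `completed` is the set of finished ids.
    """
    return prerequisites.get(category_id, set()) <= completed

# The story/co-narrative ordering A -> B -> C from the example above:
prerequisites = {"B": {"A"}, "C": {"A", "B"}}
completed = {"A"}
assert can_launch("B", prerequisites, completed)      # A is done
assert not can_launch("C", prerequisites, completed)  # B still pending
```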
Blocks 450 through 452. Referring to blocks 450 through 452, the corresponding digital reality scene may be a virtual reality scene, an augmented reality scene, or a mixed reality scene. For example, in some embodiments, the corresponding digital reality scene is a virtual reality scene that facilitates complete digital immersion, allowing presentation of one or more digital objects (e.g., object 42 of fig. 2B) within a digital space (e.g., a three-dimensional digital space). In some alternative embodiments, the corresponding digital reality scene is an augmented reality scene that augments the real world using digital objects. In other embodiments, the corresponding digital reality scene is a mixed reality scene 40 that provides spatial mapping and data contextualization in real time, allowing one or more digital objects within the digital space to relate to the user space (e.g., the field of view of the user or client device 300). Additional details and information about the types of digital reality scenes may be found in Parveau et al., 2018, "3iVClass: A New Classification Method for Virtual, Augmented and Mixed Realities," Procedia Computer Science, 141, pg. 263 (which is incorporated herein by reference in its entirety for all purposes). Thus, by presenting challenges within a digital reality scenario, the subject becomes immersed in the experience, which improves interaction between the subject and the systems and methods of the present disclosure to increase the subject's ability.
Block 454. Referring to block 454, in some embodiments, a respective biometric threshold (e.g., biometric threshold 33-1) of a respective suggested experience (e.g., experience 24-1) of the plurality of suggested experiences is set by a system administrator, a subject, a healthcare worker associated with the subject, a model, or an algorithm, or any combination thereof. For example, as a non-limiting example, in some embodiments, a biometric threshold of the suggested experience (such as a maximum heart rate threshold, etc.) is set by the subject (e.g., the user of the first client device 300-1). As another non-limiting example, in some embodiments, the biometric threshold of the recommended experience is set by a health care worker (e.g., a healthcare practitioner) associated with the subject, such as by configuring a threshold vocalization feature that the subject needs to meet, or the like. As yet another non-limiting example, in some embodiments, a biometric threshold of a first suggested experience (e.g., experience 24-1) of the plurality of suggested experiences is set by the subject and a biometric threshold of a second suggested experience (e.g., experience 24-2) of the plurality of suggested experiences is set by a healthcare worker associated with the subject. In some embodiments, the biometric threshold of the suggested experience is further modified by a healthcare practitioner associated with the subject or a classification-based model, cluster, or other parameter associated with the user that indicates that changing the biometric threshold will increase the likelihood of interaction and/or better clinical outcome.
In some embodiments, the biometric threshold sets, at least in part, a condition for determining whether a challenge associated with the suggested experience completed successfully and/or for identifying a subsequent challenge/experience for the subject to complete. In some embodiments, the biometric threshold sets, at least in part, a prerequisite for executing a digital reality scenario associated with a suggested experience, or a condition that must be met in order for the challenge/experience to be considered complete. Non-limiting examples of prerequisites are requirements that certain challenges (e.g., attending a gathering, reconstructing ideas, determining the usefulness of core beliefs, etc.) be successfully completed before allowing the subject to invoke a particular challenge (e.g., speaking in front of a large audience, dismissing ideas, etc.). A non-limiting example of a condition that must be met in order for a challenge to be considered complete is a minimum length of eye contact (e.g., a duration of a period of time) with a specified portion of the corresponding digital reality scene associated with the corresponding challenge. Another non-limiting example of a condition that must be met in order for the challenge to be considered complete is a threshold root mean square of the data set captured by the recorder, such as a representation of the sustained strength of the subject's voice. Yet another non-limiting example of a condition that must be met in order for a challenge to be considered complete is a threshold speech entropy that describes the information content of the vocalization feature obtained from the subject, such as how much information is conveyed through the vocalization feature.
Block 458. Referring to block 458, the biometric threshold may be an absolute parameter, a relative parameter, a normalized parameter, or the like. For example, in some embodiments, the biometric threshold of the experience is an absolute parameter. Non-limiting examples of absolute biometric thresholds include, but are not limited to, a minimum number of utterances required of the subject when completing the corresponding challenge of the experience, a minimum decibel level for one or more utterances of the subject to be heard, a minimum length of eye contact required of the subject when completing the corresponding challenge of the experience, a threshold spectral entropy of the vocalization features (e.g., a measure of irregularity of the vocalization features), a threshold PDF entropy of the vocalization features (e.g., a measure of stability of the vocalization features), or a combination thereof. In some embodiments, the biometric threshold of the experience is a relative parameter, for example, relative to a baseline of each subject or relative to a baseline of the population. A non-limiting example of a relative biometric threshold is a change in decibel level relative to a decibel level baseline of the subject (e.g., a decibel level when speaking under no pressure) for determining a reduction in subjective anxiety or an improvement in biometric metrics achieved by the subject during a corresponding challenge of the experience. Another non-limiting example of a relative biometric threshold is a condition for determining whether the subject has reached a relaxed state based on the subject's heart rate relative to a baseline heart rate of the subject or population indicative of a relaxed state. Yet another non-limiting example of a relative biometric threshold is a condition for determining whether the subject has reached a relaxed state based on the PDF entropy of the vocalization features obtained from the subject using the recorder, relative to a baseline PDF entropy of the subject or population indicative of a relaxed state. In some embodiments, the biometric threshold of the experience is a normalized parameter, such as a scale-based confidence, and the like. In some embodiments, the biometric threshold of the experience includes a combination of absolute parameters, relative parameters, and/or normalized parameters. However, the present disclosure is not limited thereto.
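The three kinds of parameters can be captured in three small helpers; the sketch below is a non-authoritative illustration, and the specific function names and defaults are assumptions rather than disclosed values.

```python
def meets_absolute(value, minimum=None, maximum=None):
    """Absolute threshold: the raw measurement must fall within bounds."""
    if minimum is not None and value < minimum:
        return False
    if maximum is not None and value > maximum:
        return False
    return True

def meets_relative(value, baseline, required_delta):
    """Relative threshold: e.g., a heart rate must drop at least
    `required_delta` below the subject's (or a population's) baseline."""
    return (baseline - value) >= required_delta

def normalize(value, low, high):
    """Normalized parameter: map a raw value onto a 0..1 scale, e.g.,
    for a scale-based confidence score."""
    return min(max((value - low) / (high - low), 0.0), 1.0)
```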
Block 460. Referring to block 460, biometric thresholds may be set for various biometric measurements. Examples of biometric thresholds include, but are not limited to, eye contact threshold, heart rate threshold, confidence threshold, decibel level threshold, pitch threshold, utterance threshold, word threshold, or emotion criteria, among others.
Block 462. Referring to block 462, in some embodiments, the biometric threshold of the suggested experience is an eye contact threshold. In some embodiments, the eye contact threshold includes a threshold length of eye contact (e.g., minimum eye contact duration) of the subject during presentation of a corresponding digital reality scene for a corresponding challenge designed for the suggested experience. In some embodiments, the threshold length of eye contact is at least 2 seconds, at least 3 seconds, at least 5 seconds, at least 10 seconds, or at least 30 seconds. In some embodiments, the threshold length of eye contact is at most 2 seconds, at most 3 seconds, at most 5 seconds, at most 10 seconds, or at most 30 seconds. In some embodiments, the threshold length of eye contact is between 2 seconds and 30 seconds, between 2 seconds and 10 seconds, between 3 seconds and 30 seconds, between 3 seconds and 10 seconds, between 4 seconds and 30 seconds, between 4 seconds and 10 seconds, between 5 seconds and 30 seconds, or between 5 seconds and 10 seconds. Alternatively or additionally, in some embodiments, the biometric threshold of the suggested experience is an increase in eye contact for the subject during presentation of a corresponding digital reality scene for a corresponding challenge designed for the suggested experience (e.g., an increased length of eye contact compared to an eye contact baseline of the subject). In some embodiments, the increase in eye contact is at least 1 second, at least 2 seconds, at least 3 seconds, at least 5 seconds, or at least 10 seconds. In some embodiments, the increase in eye contact is at most 1 second, at most 2 seconds, at most 3 seconds, at most 5 seconds, or at most 10 seconds. In some embodiments, the increase in eye contact is between 1 second and 10 seconds, between 1 second and 5 seconds, between 2 seconds and 10 seconds, between 2 seconds and 5 seconds, between 3 seconds and 10 seconds, between 3 seconds and 5 seconds, between 4 seconds and 10 seconds, or between 4 seconds and 5 seconds.
In some embodiments, the desired minimum length of eye contact and/or the desired increase in eye contact is associated with a particular portion of the corresponding digital reality scene of the suggested experience. For example, as a non-limiting example, in the digital reality scenario illustrated in fig. 5A, the desired minimum length of eye contact and/or the desired increase in eye contact applies to a portion of the digital reality scenario in which the subject is interacting with (e.g., talking to) the player character (e.g., object 42-1). In the digital reality scenario illustrated in fig. 5B, the desired minimum length of eye contact and/or the desired increase in eye contact applies to a portion of the digital reality scenario in which the subject is interacting with (e.g., talking to) the player character (e.g., object 42-2).
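A sketch of how the minimum-eye-contact check might be applied to a stream of eye tracking samples follows; the function name, sampling scheme, and region labels are illustrative assumptions.

```python
def eye_contact_met(gaze_samples, target_region, min_duration_s,
                    sample_period_s=0.1):
    """Return True if gaze stays on `target_region` (e.g., the player
    character's face) for an uninterrupted run of `min_duration_s`.

    `gaze_samples` is a time-ordered list of region labels produced by
    an eye tracking sensor at `sample_period_s` intervals.
    """
    run = 0.0
    for region in gaze_samples:
        run = run + sample_period_s if region == target_region else 0.0
        if run >= min_duration_s:
            return True
    return False

# Example: a 3-second threshold at 10 Hz requires 30 consecutive hits.
samples = ["face"] * 30 + ["background"] * 5
assert eye_contact_met(samples, "face", 3.0)
```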
Block 464. Referring to block 464, in some embodiments, the biometric threshold of the suggested experience includes a heart rate threshold for the corresponding challenge designed for the suggested experience. In some embodiments, the heart rate threshold includes a maximum heart rate (e.g., a maximum number of beats per minute (bpm)) for the subject during presentation of a corresponding digital reality scene for a corresponding challenge designed for the suggested experience. In some embodiments, the threshold heart rate is at most 200 bpm, at most 190 bpm, at most 180 bpm, at most 170 bpm, at most 160 bpm, at most 150 bpm, at most 140 bpm, or at most 130 bpm, etc. In some embodiments, the threshold heart rate is at least 200 bpm, at least 190 bpm, at least 180 bpm, at least 170 bpm, at least 160 bpm, at least 150 bpm, at least 140 bpm, or at least 130 bpm, etc. In some embodiments, the threshold heart rate is between 55 bpm and 100 bpm, between 90 bpm and 120 bpm, between 105 bpm and 140 bpm, between 120 bpm and 160 bpm, between 135 bpm and 180 bpm, or between 150 bpm and 200 bpm.
Alternatively or additionally, in some embodiments, the biometric threshold of the suggested experience includes a heart rate reduction for the subject (e.g., a reduced heart rate in beats per minute compared to a heart rate baseline of the subject) during presentation of a corresponding digital reality scene for a corresponding challenge designed for the suggested experience. In some embodiments, the heart rate reduction is at least 2 bpm, at least 4 bpm, at least 6 bpm, at least 8 bpm, at least 10 bpm, at least 15 bpm, at least 20 bpm, at least 25 bpm, at least 30 bpm, at least 40 bpm, or at least 50 bpm, etc. In some embodiments, the heart rate reduction is at most 2 bpm, at most 4 bpm, at most 6 bpm, at most 8 bpm, at most 10 bpm, at most 15 bpm, at most 20 bpm, at most 25 bpm, at most 30 bpm, at most 40 bpm, or at most 50 bpm, etc. In some embodiments, the heart rate reduction is between 2 bpm and 50 bpm, between 2 bpm and 40 bpm, between 2 bpm and 30 bpm, between 2 bpm and 10 bpm, between 4 bpm and 10 bpm, between 7 bpm and 20 bpm, between 7 bpm and 10 bpm, between 15 bpm and 50 bpm, between 15 bpm and 40 bpm, between 15 bpm and 30 bpm, between 15 bpm and 20 bpm, between 25 bpm and 50 bpm, between 25 bpm and 40 bpm, or between 25 bpm and 30 bpm.
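Combining the absolute (maximum heart rate) and relative (heart rate reduction) conditions might look like the following sketch; the default values of 150 bpm and 10 bpm are arbitrary assumptions chosen for illustration.

```python
def heart_rate_thresholds_met(samples_bpm, baseline_bpm,
                              max_bpm=150, required_drop_bpm=10):
    """Absolute check: mean heart rate during the challenge stays at or
    below `max_bpm`. Relative check: the final reading sits at least
    `required_drop_bpm` below the subject's baseline, suggesting a move
    toward a relaxed state.
    """
    mean_bpm = sum(samples_bpm) / len(samples_bpm)
    absolute_ok = mean_bpm <= max_bpm
    relative_ok = (baseline_bpm - samples_bpm[-1]) >= required_drop_bpm
    return absolute_ok and relative_ok

# Baseline 95 bpm; the subject settles from 140 bpm down to 82 bpm.
assert heart_rate_thresholds_met([140, 120, 100, 82], baseline_bpm=95)
```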
Blocks 466 through 472. Referring to blocks 466 through 472, in some embodiments, the biometric threshold of the suggested experience includes a confidence, decibel level, pitch, or combination thereof for the subject during presentation of the corresponding digital reality scene for the corresponding challenge designed for the suggested experience. Like the eye contact threshold and/or the heart rate threshold, the confidence threshold may include an absolute confidence threshold, a relative confidence threshold, or both. The decibel level threshold may comprise an absolute decibel level threshold, a relative decibel level threshold, or both. The pitch threshold may include an absolute pitch threshold, a relative pitch threshold, or both. A relative confidence, decibel level, or pitch threshold is set relative to the subject's baseline to determine the improvement achieved by the subject during the corresponding challenge.
In some embodiments, the absolute confidence threshold is represented by a score (e.g., 50 on a scale of 1 to 100), or by a range having a lower threshold and an upper threshold (e.g., 40 to 60 on a scale of 1 to 100), or the like. In some embodiments, utterances having scores above the absolute confidence threshold indicate that the subject is exhibiting confidence. In some embodiments, the relative confidence threshold is a desired increase or decrease in confidence compared to the confidence baseline of the subject, and is set to, for example, 2, 3, 4, 5, 10, 15, or 20, etc.
In some embodiments, the absolute decibel level threshold is represented by a range (e.g., 30 to 85 dB, 40 to 80 dB, or 50 to 70 dB, etc.) having a lower decibel level threshold and an upper decibel level threshold. Utterances at decibel levels below the lower decibel level threshold are not loud enough to be heard, and utterances at decibel levels above the upper decibel level threshold are too loud. Typically, 0 dB is the minimum sound level that a person with good hearing can hear, and 130 dB is the level at which sound becomes painful. A soft whisper produces sound at a decibel level of about 30 dB. A normal conversation produces sound at a decibel level of about 60 dB. A singing voice produces sound at a decibel level of about 70 dB.
In some embodiments, the relative decibel level threshold is a desired increase or decrease in the decibel level from a decibel level baseline of the subject. In some embodiments, the relative decibel level threshold is at least 1 dB, at least 2 dB, at least 3 dB, at least 4 dB, at least 5 dB, at least 6 dB, at least 7 dB, at least 8 dB, at least 9 dB, or at least 10 dB, etc. In some embodiments, the relative decibel level threshold is at most 1 dB, at most 2 dB, at most 3 dB, at most 4 dB, at most 5 dB, at most 6 dB, at most 7 dB, at most 8 dB, at most 9 dB, or at most 10 dB, etc. For some subjects (e.g., subjects who typically speak too softly), the relative decibel level threshold sets a condition for the subject to increase the decibel level. For some other subjects (e.g., subjects who typically speak too loudly), the relative decibel level threshold sets a condition for the subject to decrease the decibel level.
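For the decibel level checks, a sketch like the following could estimate a level from normalized audio samples and test both the absolute window and a relative gain over baseline. Note that the sketch works in dB relative to full scale (dBFS) rather than calibrated sound pressure level, and all parameter values are assumptions.

```python
import math

def rms_level_db(samples):
    """Root-mean-square level of normalized samples (-1..1) in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return float("-inf") if rms == 0 else 20 * math.log10(rms)

def decibel_thresholds_met(samples, baseline_db,
                           lower=-40.0, upper=-10.0, required_gain_db=3.0):
    """Absolute check: level within [lower, upper] (audible but not too
    loud). Relative check: at least `required_gain_db` above the
    subject's baseline, for a subject who typically speaks too softly."""
    level = rms_level_db(samples)
    return lower <= level <= upper and (level - baseline_db) >= required_gain_db
```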
In some embodiments, the absolute pitch threshold is represented by a range (e.g., 0.1 kHz to 15 kHz, 0.3 kHz to 10 kHz, or 0.5 kHz to 5 kHz, etc.) having a lower pitch threshold and an upper pitch threshold. In some embodiments, the relative pitch threshold is a desired increase or decrease in pitch compared to the pitch baseline of the subject. In some embodiments, the relative pitch threshold is at least 10 Hz, at least 20 Hz, at least 30 Hz, at least 40 Hz, at least 50 Hz, at least 60 Hz, at least 70 Hz, at least 80 Hz, at least 90 Hz, at least 100 Hz, at least 150 Hz, at least 200 Hz, at least 250 Hz, or at least 300 Hz, etc. In some embodiments, the relative pitch threshold is at most 10 Hz, at most 20 Hz, at most 30 Hz, at most 40 Hz, at most 50 Hz, at most 60 Hz, at most 70 Hz, at most 80 Hz, at most 90 Hz, at most 100 Hz, at most 150 Hz, at most 200 Hz, at most 250 Hz, or at most 300 Hz, etc. For some subjects (e.g., subjects who typically speak at a lower pitch), the relative pitch threshold sets a condition for the subject to increase the pitch. For some subjects (e.g., subjects who typically speak at a higher pitch), the relative pitch threshold sets a condition for the subject to decrease the pitch.
In some embodiments, the pitch threshold and the decibel level threshold are related to each other, e.g., based on the auditory sensitivity of human hearing. Typically, the human ear perceives frequencies between 20 Hz (lowest pitch) and 20 kHz (highest pitch). However, the auditory sensitivity of the human ear varies across frequencies between 20 Hz and 20 kHz. For example, at about 2 kHz, a person with acute hearing can hear sound at decibel levels between 0 dB and 120 dB. As the frequency decreases or increases, the auditory sensitivity narrows. For example, at frequencies near 20 Hz, the human ear can typically hear sound at decibel levels between 80 dB and 100 dB, and at frequencies near 20 kHz, the human ear can typically hear sound at decibel levels between 60 dB and 80 dB. In view of auditory sensitivity, in some embodiments, the decibel level threshold is set to a relatively large range or increment when the pitch threshold is at about 1 kHz to 2 kHz, and to a relatively small range or increment when the pitch threshold is below 1 kHz or above 2 kHz. However, the present disclosure is not limited thereto. For example, in some embodiments, the pitch threshold and decibel level threshold are set independently of one another, with or without consideration of auditory sensitivity.
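The frequency-dependent coupling described above could be expressed as a lookup that widens the decibel window near the most sensitive band; the specific windows below are invented for illustration.

```python
def decibel_window_for_pitch(pitch_hz):
    """Pick a decibel-level window whose width reflects auditory
    sensitivity: widest near 1-2 kHz, narrower below and above."""
    if 1000 <= pitch_hz <= 2000:
        return (30.0, 85.0)  # widest window where hearing is most acute
    if pitch_hz < 1000:
        return (40.0, 80.0)  # narrower at lower frequencies
    return (50.0, 75.0)      # narrower again at higher frequencies
```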
Blocks 474 through 476. Referring to blocks 474 through 476, in some embodiments, the biometric threshold of the suggested experience includes an utterance threshold for the subject during presentation of a corresponding digital reality scene for a corresponding challenge designed for the suggested experience. In some embodiments, the utterance threshold includes an absolute utterance threshold represented by a range having a minimum number of utterances and a maximum number of utterances. Typically, the minimum number of utterances sets a condition that encourages the subject to speak, while the maximum number of utterances sets a condition that discourages the subject from chattering incessantly. In some embodiments, the threshold number of utterances is at least 2 utterances, at least 3 utterances, at least 5 utterances, at least 10 utterances, or at least 30 utterances. In some embodiments, the threshold number of utterances is at most 100 utterances, at most 90 utterances, at most 70 utterances, or at most 60 utterances. In some embodiments, the threshold number of utterances is between 2 and 100 utterances, between 2 and 80 utterances, between 2 and 50 utterances, between 2 and 20 utterances, between 2 and 10 utterances, between 5 and 100 utterances, between 5 and 80 utterances, between 5 and 50 utterances, between 5 and 20 utterances, between 5 and 10 utterances, between 15 and 100 utterances, between 15 and 80 utterances, between 15 and 50 utterances, between 15 and 20 utterances, between 35 and 100 utterances, between 35 and 80 utterances, or between 35 and 50 utterances.
Alternatively or additionally, in some embodiments, the utterance threshold includes a relative utterance threshold for the subject (e.g., an increase or decrease in the number of utterances compared to an utterance baseline of the subject) during presentation of a corresponding digital reality scene for a corresponding challenge designed for the suggested experience. In some embodiments, the relative utterance threshold is at least 1 utterance, at least 2 utterances, at least 3 utterances, at least 5 utterances, at least 10 utterances, at least 15 utterances, or at least 20 utterances. For some subjects (e.g., subjects who tend to fall silent when stressed), the relative utterance threshold encourages them to speak more. For some subjects (e.g., subjects who tend to chatter when stressed), the relative utterance threshold encourages them to speak less.
In some embodiments, the biometric threshold of the suggested experience includes a word threshold for the subject during presentation of a corresponding digital reality scene for a corresponding challenge designed for the suggested experience. Similar to the utterance threshold, in some embodiments the word threshold includes an absolute word threshold represented by a range having a minimum number of words and a maximum number of words, and/or a relative word threshold that requires an increase or decrease in the number of words compared to the word baseline of the subject.
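Counting utterances and words against these ranges is straightforward; in the sketch below, the ranges are assumptions, not values taken from the disclosure.

```python
def speech_thresholds_met(transcript_utterances,
                          min_utterances=5, max_utterances=50,
                          min_words=20, max_words=400):
    """Test absolute utterance and word thresholds for one session.

    `transcript_utterances` is a list of strings, one per utterance.
    The minimums encourage the subject to speak; the maximums
    discourage incessant chatter.
    """
    n_utterances = len(transcript_utterances)
    n_words = sum(len(u.split()) for u in transcript_utterances)
    return (min_utterances <= n_utterances <= max_utterances
            and min_words <= n_words <= max_words)
```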
Block 478. Referring to block 478, in some embodiments, the biometric threshold of the suggested experience is whether the subject meets or fails to meet emotion analysis criteria during presentation of the corresponding digital reality scene for the corresponding challenge designed for the suggested experience. In some embodiments, the emotion analysis criteria include an excited emotion threshold and an overexcited emotion threshold. However, the present disclosure is not limited thereto. For example, in some embodiments, the emotion analysis threshold is used to determine an emotion change when comparing a first idea or statement obtained from a subject with a second idea or statement obtained from the subject, such as a change from neutral emotion to positive emotion, from negative emotion to neutral emotion, or from negative emotion to positive emotion, etc. In some embodiments, the emotion analysis criteria are associated with determining that an idea or statement associated with the subject is further associated with a first emotion. In some embodiments, the first emotion is an all-or-nothing emotion, an overgeneralization emotion, a mental filtering emotion, a disqualifying-the-positive emotion, a mind-reading emotion, a magnification emotion, an emotional reasoning emotion, a labeling emotion, a personalization emotion, or a combination thereof.
In some embodiments, the all-or-nothing emotion is associated with ideas or statements from the subject that describe events in binary (e.g., black-and-white) terms. In some embodiments, the overgeneralization emotion is associated with ideas or statements from the subject that describe a single event as a permanent or recurring pattern. In some embodiments, the filtering emotion is associated with ideas or statements from the subject that dwell excessively on individual details. In some embodiments, the disqualifying-the-positive emotion is associated with ideas or statements from the subject that describe a refusal to accept a positive event. In some embodiments, the mind-reading emotion is associated with ideas or statements from the subject that assume what a third party thinks about the subject. In some embodiments, the magnification emotion is associated with ideas or statements from the subject that exaggerate an aspect of an event. In some embodiments, the emotional-reasoning emotion is associated with ideas or statements from the subject that treat internal emotions as evidence of external facts. In some embodiments, the labeling emotion is associated with ideas or statements from the subject that attach one or more labels to the subject. In some embodiments, the personalization emotion is associated with ideas or statements from the subject that describe the subject as the source of blame or responsibility. However, the present disclosure is not limited thereto. Additional details and information about these emotions are found in Burns, 1981, "Feeling Good: The New Mood Therapy," New York, pg. 393, Print (which is incorporated herein by reference in its entirety for all purposes).
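As a rough illustration of how a statement might be tagged with one of the categories above, the following is a minimal keyword-heuristic sketch. The keyword lists, labels, and function name are assumptions for illustration only; a production system would more plausibly use a trained classifier.

```python
# Illustrative keyword heuristics for tagging cognitive-distortion categories.
DISTORTION_KEYWORDS = {
    "all-or-nothing": ["always", "never", "completely", "totally"],
    "overgeneralization": ["everyone", "nothing ever", "every time"],
    "labeling": ["i am a loser", "i am a failure"],
    "personalization": ["my fault", "because of me"],
}

def tag_distortions(statement: str) -> list[str]:
    text = statement.lower()
    return [label for label, cues in DISTORTION_KEYWORDS.items()
            if any(cue in text for cue in cues)]

print(tag_distortions("I never do anything right; it is always my fault."))
# ['all-or-nothing', 'personalization']
```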
It should be noted that in some embodiments, although the biometric thresholds of different experiences are the same type of biometric threshold, these biometric thresholds may differ in value. For example, as a non-limiting example, assume that a first suggested experience is associated with a first digital reality scene for an exposure category in which the subject talks face-to-face with one person in a private setting. A second suggested experience for the exposure category is associated with a second digital reality scene in which the subject speaks in front of an audience at a party with background noise. In such embodiments, the decibel level threshold of the second suggested experience will typically be higher than the decibel level threshold of the first suggested experience, e.g., 60 to 80 dB for the second suggested experience and 40 to 60 dB for the first suggested experience. However, the present disclosure is not limited thereto.
Furthermore, it should be noted that the biometric threshold of the same experience may be different for different subjects. For example, as a non-limiting example, assume that a first subject and a second subject interact with the same exposure (e.g., social) challenge (e.g., a job interview). The first subject tends to become quiet under stress, while the second subject tends to become overly talkative under stress (where heart rate increases and/or pitch changes). In such embodiments, the biometric thresholds (absolute and/or relative thresholds) for the first and second subjects will typically be different, e.g., the utterance threshold for the first subject will be set to encourage the first subject to speak more, while the utterance threshold for the second subject will be set to encourage the second subject to speak less.
Further, it should be noted that the biometric threshold of an experience may be reset or modified by the subject, a healthcare worker associated with the subject, a model/algorithm, or any combination thereof. For example, as a non-limiting example, suppose a subject engages in a social challenge a second time (e.g., after an initial attempt at a social challenge of the exposure category), after receiving some educational or therapeutic challenges (e.g., mindfulness and/or cognitive restructuring challenges) and having made some improvement in managing the psychosis or mental condition. The biometric threshold of the experience when the subject interacts with the corresponding challenge may be reset or modified according to the progress that the subject has made.
Block 480. Referring to block 480, the method includes (B) presenting, on a display, a first digital reality scene that presents a first challenge designed for a first suggested experience of a first category. For example, as a non-limiting example, in some embodiments, the method presents the first digital reality scene 40-1 on the display. The first digital reality scene 40-1 presents a first challenge 26-1 (e.g., a fear) designed for the first experience 24-1. In some embodiments, the challenge is unique to the experience. However, the present disclosure is not limited thereto.
The first digital reality scene may be any suitable type of digital reality scene including, but not limited to, a virtual reality scene, an augmented reality scene, or a mixed reality scene. In some embodiments, the first digital reality scene depends on the type of display of the respective client device 300. For example, in some embodiments, for a first client device 300-1 having processing capabilities to display a virtual reality scene, the first digital reality scene is a virtual reality scene. For a second client device 300-2 having processing capabilities to display an augmented reality scene, the first digital reality scene is an augmented reality scene. In some embodiments, the first digital reality scene depends on the type of experience. For example, a first experience is associated with a virtual reality scene and a second experience is associated with an augmented reality scene. In some embodiments, the first digital reality scene depends on the type of challenge. For example, a first challenge is associated with a virtual reality scene and a second challenge is associated with a mixed reality scene.
Block 482. Referring to block 482, the method includes (C) obtaining, in coordination with the presenting (B), a plurality of data elements from all or a subset of the plurality of sensors (e.g., sensor 110-1, sensor 110-2, … of FIG. 1). The subset of sensors includes at least one biometric sensor (e.g., sensor 110-1). The at least one biometric sensor is configured to capture at least one biometric data element associated with the subject when the subject is completing the first challenge in the first digital reality scenario. In some embodiments, the at least one biometric sensor includes, but is not limited to, a heart rate sensor, a heart rate variability sensor, a blood sensor, an electrical skin activity sensor, an electrical skin response sensor, an electroencephalogram sensor, an eye tracking sensor, a recorder, a microphone, a thermometer, a heat map sensor, a camera, or any combination thereof. For example, in some embodiments, the at least one biometric sensor includes a recorder, microphone, camera, or combination thereof for capturing a sound producing feature (e.g., an utterance) from the subject.
Blocks 484 through 490. Referring to blocks 484 through 490, the method includes (D) determining whether a set of biometric data elements (e.g., a first set of biometric data elements) obtained in the obtaining (C) meets at least one biometric threshold of the first challenge, to evaluate whether the first challenge was completed successfully. The set of biometric data elements includes a first plurality of biometric data elements captured by a first biometric sensor of the at least one biometric sensor (e.g., sensor 110-1 of fig. 1). The at least one biometric threshold includes a first biometric threshold (e.g., biometric threshold 33-1 from criteria store 30 of digital reality system 200 of fig. 2B). The determining (D) includes determining whether a comparison of the first plurality of biometric data elements to a corresponding threshold baseline characteristic meets the first biometric threshold.
By comparing the first plurality of biometric data elements to corresponding threshold baseline characteristics, the methods and systems of the present disclosure are able to evaluate improvement for each subject based on subject-specific values, challenge-specific values, and/or population values. In an embodiment, the corresponding threshold baseline characteristic is a biometric baseline of the subject captured at the beginning of the corresponding challenge of the first digital reality scene. In another embodiment, the corresponding threshold baseline characteristic is a biometric baseline of the subject captured while the subject is in a relaxed state (such as in a happy place or when initiating the experience), such that the baseline does not reflect anticipatory anxiety about the challenge. Fig. 6A illustrates a non-limiting example of a happy place in which a subject can play an introduction or educational video (e.g., a psychoeducational challenge requiring the subject to view the introduction or educational video for its duration). In yet another embodiment, the corresponding threshold baseline characteristic is a biometric baseline of the subject that is captured before the subject begins any challenges and/or experiences.
For example, in some embodiments, the first biometric sensor is a heart rate sensor and the corresponding threshold baseline characteristic is an initial heart rate of the subject captured at the beginning of the corresponding challenge of the first digital reality scene, while the subject is in a relaxed state, or before the subject begins any experiences and/or challenges. In such embodiments, the comparison of the first plurality of biometric data elements to the corresponding threshold baseline characteristic provides a change in heart rate over time relative to the subject's particular initial value. By comparing the first plurality of biometric data elements to corresponding threshold baseline characteristics, the methods and systems of the present disclosure are able to distinguish improvements achieved by different subjects, by the same subject during different challenges, or by the same subject upon repeating a social challenge. For example, the methods and systems of the present disclosure can distinguish the improvement achieved by a first subject whose heart rate drops from a high initial heart rate (e.g., 140 beats per minute) to a moderate heart rate (e.g., 120 beats per minute), from the smaller improvement of a second subject whose heart rate drops from a moderate initial heart rate (e.g., 125 beats per minute) to a moderate heart rate (e.g., 120 beats per minute), and from the deterioration of a third subject whose heart rate increases from a moderate initial heart rate (e.g., 110 beats per minute) to a higher heart rate (e.g., 120 beats per minute). However, the present disclosure is not limited thereto.
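In its simplest form, the baseline-relative comparison described above reduces to subtracting a subject-specific baseline from a session statistic. The sketch below illustrates that arithmetic on the heart-rate examples given; the function name and data layout are assumptions.

```python
# Sketch (assumed names): comparing captured heart-rate samples with a
# threshold baseline characteristic to score improvement or deterioration.
def heart_rate_change(samples_bpm: list[float], baseline_bpm: float) -> float:
    """Return the change of the session-average heart rate relative to baseline.

    Negative values indicate the heart rate fell below the baseline (improvement);
    positive values indicate it rose (possible deterioration)."""
    session_avg = sum(samples_bpm) / len(samples_bpm)
    return session_avg - baseline_bpm

# First subject: high initial baseline, large drop, a clear improvement.
print(heart_rate_change([128, 124, 120, 118], baseline_bpm=140.0))  # -17.5
# Third subject: moderate baseline, increase, a deterioration.
print(heart_rate_change([115, 118, 121, 122], baseline_bpm=110.0))  # +9.0
```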
The first biometric sensor may be any suitable biometric sensor including, but not limited to, a heart rate sensor, a heart rate variability sensor, a blood sensor, an electrical skin activity sensor, an electrical skin response sensor, an electroencephalogram sensor, an eye tracking sensor, a recorder, a microphone, a thermometer, a heat map sensor, a camera, or any combination thereof. For example, as a non-limiting example, in some embodiments, the at least one biometric sensor includes a heart rate sensor configured to capture a heart rate of the subject while the subject is completing the first challenge in the first digital reality scenario. As another non-limiting example, in some embodiments, the at least one biometric sensor comprises a heart rate variability sensor configured to capture heart rate variability of the subject while the subject is completing the first challenge in the first digital reality scenario. As yet another non-limiting example, in some embodiments, the at least one biometric sensor comprises a microphone or recorder or the like configured to record an utterance of the subject when the subject is completing a first challenge in a first digital reality scene, wherein the one or more vocalization features are recognized and evaluated from the utterance obtained from the subject. As yet another non-limiting example, in some embodiments, the first biometric sensor is a blood pressure sensor and the corresponding threshold baseline characteristic is a systolic pressure of the subject or a diastolic pressure of the subject. In some embodiments, the systolic or diastolic blood pressure is captured at the beginning of the corresponding challenge of the first digital reality scenario. In some embodiments, systolic or diastolic blood pressure is captured while the subject is in a relaxed state. In some embodiments, systolic or diastolic blood pressure is captured before the subject begins any educational or treatment planning.
Blocks 494 through 498. Referring to blocks 494 through 498, in some embodiments, the method further comprises (H) electronically receiving a second plurality of data elements associated with the subject. The second plurality of data elements includes a second plurality of biometric data elements associated with an initial psychosis or mental condition of the subject. The first baseline characteristic is formed from a second plurality of biometric data elements.
For example, in an embodiment, the second plurality of biometric data elements (e.g., heart rate, pitch, decibel level, entropy, temporal features, etc. of the subject) is obtained or captured at the beginning of the corresponding challenge of the first digital reality scene. In another embodiment, the second plurality of biometric data elements is obtained or captured while the subject is in a relaxed state (e.g., during an introduction or tutorial challenge presented in a digital reality scene including a happy place such as the one illustrated in Fig. 6A). In yet another embodiment, the second plurality of biometric data elements is obtained or captured during a mindfulness challenge. Fig. 6B illustrates a non-limiting example of a digital reality scene in which a subject may begin a mindfulness challenge by having an avatar press a play button. In some embodiments, the second plurality of biometric data elements is obtained or captured during a CBT challenge. Figs. 9A-9C illustrate non-limiting examples of digital reality scenes in which a subject may initiate a CBT challenge (such as a collect-evidence challenge) by interacting with a digital reality object (such as a digital reality recording object). In some embodiments, the second plurality of biometric data elements is obtained upon or prior to initiating the presenting (B) of the first digital reality scene that presents the first challenge designed for the first suggested experience of the first category. However, the present disclosure is not limited thereto. For example, in some embodiments, the second plurality of biometric data elements is obtained prior to starting the entire plan (e.g., prior to executing the client application at the client device), during a previous challenge, or the like. In some embodiments, the second plurality of biometric data elements is an average over one or more of the subject's previous challenges, or an average over a population of users associated with the subject (such as a population of users having conditions similar to the subject's).
Block 500. Referring to block 500, in some embodiments, the first biometric sensor is a heart rate sensor and the first plurality of biometric data elements captured by the first biometric sensor are used to determine heart beats per minute. The method of the present disclosure may use any suitable type of heart rate sensor to capture biometric data elements. For example, in some embodiments, the heart rate sensor is an electrical heart rate sensor (e.g., electrocardiography or ECG) that includes electrodes placed on the chest of the subject to monitor the electrical activity of the subject's heart. In some embodiments, the heart rate sensor is an optical heart rate sensor (e.g., photoplethysmography or PPG) that includes one or more light sources (e.g., LEDs) to detect blood flow under the subject's skin. In some embodiments, the optical heart rate sensor is a wearable/mobile device, or is incorporated with a wearable/mobile device (such as a wristwatch, an activity tracker, an armband, a mobile phone, and the like).
Block 502. Referring to block 502, in some embodiments, the first biometric sensor is a heart rate variability sensor and the first plurality of biometric data elements captured by the first biometric sensor is used to determine intervals between heart beats, thereby providing an assessment of heart rate variability (HRV). HRV has frequently been applied as a reliable indicator of health, stress, and mental workload. HRV has been studied in connection with cardiovascular disease, post-traumatic stress disorder, depression, and fibromyalgia. HRV has also been proposed as a sensitive index of autonomic stress responses, such as in panic disorder and under work and mental workload.
In some embodiments, the heart rate variability sensor includes a chest-worn electrocardiogram sensor or a wearable/mobile photoplethysmogram (PPG) sensor that captures and provides cardiac signals for HRV analysis. In some embodiments, the heart rate variability sensor is a contactless sensor. For example, in an embodiment, the heart rate variability sensor includes a camera that captures images or videos of the subject while the subject is completing the challenge. The images or videos captured by the camera are then used to extract arterial pulse information and derive HRV. Additional information about non-contact sensors can be found in "The PhysioCam: A Novel Non-Contact Sensor to Measure Heart Rate Variability in Clinical and Field Applications," Davila et al., Front Public Health, November 2017, Volume 5, Article 300 (which is incorporated herein by reference in its entirety for all purposes).
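As one possible illustration of how HRV could be quantified from captured inter-beat intervals, the following sketch computes two standard time-domain statistics (SDNN and RMSSD). Nothing in the disclosure mandates these particular statistics; the function names and sample data are illustrative.

```python
# Standard time-domain HRV statistics from inter-beat (RR) intervals in ms.
import math

def sdnn(rr_ms: list[float]) -> float:
    """Standard deviation of RR intervals (overall variability)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms: list[float]) -> float:
    """Root mean square of successive RR differences (short-term variability)."""
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(len(rr_ms) - 1)]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

rr = [812.0, 845.0, 790.0, 860.0, 825.0]  # example RR intervals in milliseconds
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```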
Blocks 506 through 512. Referring to blocks 506 through 512, in some embodiments, the first biometric sensor is an eye tracking sensor. The eye tracking sensor may be mounted on or incorporated into a device (e.g., a desktop, a stand, or a wall), a pair of glasses, a virtual reality headset, or the like. Eye tracking sensors typically include a projector that projects light (e.g., near-infrared light) onto the user's eyes, a camera that captures images of the user's eyes, and/or an algorithm that processes the images to determine the position, fixation, and/or other characteristics of the eyes. In some embodiments, the eye tracking sensor does not include image processing capabilities; rather, the biometric data elements acquired by such an eye tracking sensor are sent to a client device (e.g., client device 300-1) or a remote system (e.g., digital reality system 200) for image processing. In some embodiments, the eye tracking sensor is based on optical tracking of corneal reflections to assess visual attention, e.g., tracking the pupil center and where light reflects from the cornea. The light reflected from the cornea and the pupil center is used to determine the movement and direction of the eye.
In some embodiments, the first plurality of biometric data elements captured by the first biometric sensor is used to determine gaze fixation(s), smooth pursuit(s), saccade(s), blink(s), scan path length, eye openness, pupil dilation, eye position, hypervigilance or overscanning, avoidance, or any combination thereof.
In some embodiments, gaze fixation is defined based on spatial and temporal criteria on a region of interest (ROI) in a digital reality scene (e.g., the eyes of object 42-2 in the digital reality scene illustrated in fig. 6B). In some embodiments, the spatial criterion is a diameter of 0.8°, 0.9°, 1°, 1.1°, 1.2°, or 1.3° of visual angle, or the like, and the temporal criterion is a minimum of 120 ms, 130 ms, 140 ms, 150 ms, or the like. In some embodiments, the spatial criterion is a diameter of between 0.8° and 1.3°, between 0.8° and 1.2°, between 0.8° and 1.1°, between 0.8° and 1°, between 0.8° and 0.9°, between 0.9° and 1.3°, between 0.9° and 1.2°, between 0.9° and 1.1°, between 0.9° and 1°, between 1° and 1.3°, between 1° and 1.2°, between 1° and 1.1°, between 1.1° and 1.3°, between 1.1° and 1.2°, or between 1.2° and 1.3° of visual angle. In some embodiments, the spatial criterion is a diameter of at least 0.8°, at least 0.9°, at least 1°, at least 1.1°, at least 1.2°, or at least 1.3° of visual angle. In some embodiments, the spatial criterion is a diameter of at most 0.8°, at most 0.9°, at most 1°, at most 1.1°, at most 1.2°, or at most 1.3° of visual angle. In some embodiments, the temporal criterion is a period of between 120 ms and 150 ms, between 120 ms and 145 ms, between 120 ms and 140 ms, between 120 ms and 135 ms, between 120 ms and 130 ms, between 120 ms and 125 ms, between 125 ms and 150 ms, between 125 ms and 145 ms, between 125 ms and 140 ms, between 125 ms and 135 ms, between 125 ms and 130 ms, between 130 ms and 145 ms, between 130 ms and 140 ms, between 130 ms and 135 ms, between 135 ms and 150 ms, between 135 ms and 145 ms, between 135 ms and 140 ms, between 140 ms and 150 ms, between 140 ms and 145 ms, or between 145 ms and 150 ms. In some embodiments, the temporal criterion is at least 120 ms, at least 125 ms, at least 130 ms, at least 135 ms, at least 140 ms, at least 145 ms, or at least 150 ms. In some embodiments, the temporal criterion is at most 120 ms, at most 125 ms, at most 130 ms, at most 135 ms, at most 140 ms, at most 145 ms, or at most 150 ms.
In some embodiments, hypervigilance is defined as the time to first fixation during a particular challenge in a digital reality scene. In some embodiments, avoidance is defined as the number of fixations during a particular challenge in the digital reality scene divided by the total number of fixations in the first digital reality scene. Studies have shown that socially anxious people direct their initial attention toward emotionally threatening information (hypervigilance) and subsequently avert attention from that information (attentional avoidance) to reduce emotional distress. Additional information related to eye tracking can be found in "Gaze Behavior in Social Fear Conditioning: An Eye-Tracking Study in Virtual Reality," Reichenberger et al., Frontiers in Psychology, published 23 January 2020, and "Capturing Hypervigilance: Attention Biases in Elevated Trait Anxiety and Posttraumatic Stress Disorder," Lorana H. Stewart, thesis submitted for the degree of Doctor of Philosophy, University College London, September 2011, each of which is incorporated herein by reference for all purposes.
In some embodiments, the method bounds one or more ROIs for objects in the digital reality scene (e.g., eyes of an avatar, face of an avatar, non-player objects, etc.). In some embodiments, the method determines the percentage of time spent by the subject gazing within each ROI, the average number of gaze fixation within each ROI, the median duration of gaze fixation within each ROI, the average distance of gaze fixation relative to the center of each ROI, or any combination thereof.
However, the present disclosure is not limited thereto. For example, in some embodiments, the method records a scan path of the subject's eyes and examines the subject's facial expression using one or more models. In some embodiments, the examination of the subject's facial expression is performed by a healthcare worker associated with the subject. In some embodiments, the method determines other eye activities or characteristics, such as changes in eye position or the number of times the eyes return to a predetermined reference position, while the subject completes the corresponding challenge.
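The following sketch illustrates one way the fixation, hypervigilance, and avoidance measures described above could be computed from raw gaze samples, using a dispersion-threshold scheme with the roughly 1° spatial and 120 ms temporal criteria mentioned earlier. The sample layout (timestamp_ms, x_deg, y_deg) and all names are illustrative assumptions.

```python
# Minimal dispersion-threshold (I-DT style) fixation detection sketch.
def detect_fixations(samples, max_dispersion_deg=1.0, min_duration_ms=120.0):
    fixations, window = [], []
    for s in samples:
        window.append(s)
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion_deg:
            # The newest sample broke the dispersion limit; close the window.
            if window[-2][0] - window[0][0] >= min_duration_ms:
                fixations.append((window[0][0], window[-2][0]))
            window = [s]
    if len(window) > 1 and window[-1][0] - window[0][0] >= min_duration_ms:
        fixations.append((window[0][0], window[-1][0]))
    return fixations  # list of (start_ms, end_ms) fixation spans

def time_to_first_roi_fixation(fixations, in_roi_flags):
    """Hypervigilance proxy: onset of the first fixation landing in the ROI."""
    for (start_ms, _), in_roi in zip(fixations, in_roi_flags):
        if in_roi:
            return start_ms
    return None

def roi_fixation_ratio(in_roi_flags):
    """Avoidance proxy: ROI fixations divided by all fixations in the scene."""
    return sum(in_roi_flags) / len(in_roi_flags) if in_roi_flags else 0.0
```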
Blocks 514 through 520. Referring to blocks 514 through 520, in some embodiments, the first biometric sensor is a recorder. The methods of the present disclosure may use any suitable recorder to capture the biometric data elements. Examples of recorders include, but are not limited to, online voice recorders, microphone recorders, USB flash-drive voice recorders, portable digital recorders, voice-activated recorders, audio recorders, video recorders, and vibration-responsive sensors.
In some embodiments, the recorder is used to obtain statements, utterances, vocal features, or a combination thereof from the subject. In some embodiments, the emotion expressed in speech is analyzed, typically at multiple levels: the physiological level, the vocalization-articulation level, the acoustic level, or a combination thereof. The physiological level describes, for example, nerve impulses or muscle innervation patterns of the primary structures involved in the speech production process. The vocalization-articulation level describes, for example, the position or movement of primary structures such as the vocal cords. The acoustic level describes, for example, characteristics of the speech waveform emitted from the mouth. Most current methods for measuring at the physiological and vocalization-articulation levels are rather invasive and require specialized equipment and a high level of expertise. In contrast, acoustic cues of vocally expressed emotion can be obtained objectively, economically, and unobtrusively from speech recordings without any special equipment.
In some embodiments, the biometric threshold is associated with one or more voice cues. In some embodiments, the one or more voice cues comprise: (a) fundamental frequency (e.g., F0, an acoustic correlate of perceived pitch); (b) vocal perturbations (e.g., short-term variability in sound production); (c) voice quality (e.g., an acoustic correlate of perceived "timbre"); (d) intensity (e.g., an acoustic correlate of perceived loudness); (e) one or more temporal aspects of speech (e.g., speech rate); and various combinations of these aspects (e.g., prosodic features). In some embodiments, the first plurality of biometric data elements captured by the first biometric sensor is used to determine a fundamental frequency, a speech rate, one or more pauses (e.g., pauses while the subject speaks), a duration of silence by the subject, a speech intensity, a speech onset time, one or more pitch perturbations, one or more loudness perturbations, one or more speech breaks, one or more pitch jumps, a speech quality (e.g., stuttering, tremors, or filler sounds such as "um" and "uh"), a voice quality (e.g., pitch variation, breaks), or a combination thereof.
In some embodiments, a first plurality of biometric data elements captured by a first biometric sensor (e.g., a recorder) are transcribed, such as by one or more computational models or the like, to create a transcription, such as a text object representing an utterance obtained from a subject by the recorder or the like. For example, in some embodiments, the transcription is then extracted by one or more models to produce a set of one or more words. In some embodiments, the first plurality of biometric data elements comprises waveform data, and the method first segments the first plurality of biometric data elements captured by the first biometric sensor (e.g., speech recorded by the recorder) into one or more voiced and unvoiced sounds, one or more words, one or more syllables, or combinations, thereby segmenting the waveform data or transcription to allow for a quantitative description of relatively homogenous and thus comparable portions of each utterance. As yet another non-limiting example, in some embodiments, a first plurality of biometric data elements comprising waveform data are input to an NN calculation model (such as a one-dimensional CNN, etc.) to extract vocal features. However, the present disclosure is not limited thereto.
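As one plausible realization of the acoustic feature extraction described above, the sketch below uses the open-source librosa library to estimate an F0 contour (a correlate of perceived pitch) and frame-level intensity from a recording. The disclosure does not prescribe librosa, and the file name is hypothetical.

```python
# Hedged sketch: F0 and intensity extraction with librosa (one possible choice).
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical recording

# F0 contour via the pYIN algorithm; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
mean_f0 = float(np.nanmean(f0))  # perceived-pitch correlate, in Hz

# Frame-level intensity (RMS energy), a correlate of perceived loudness.
rms = librosa.feature.rms(y=y)[0]
mean_db = float(np.mean(librosa.amplitude_to_db(rms, ref=1.0)))

print(f"mean F0 ~ {mean_f0:.1f} Hz, mean level ~ {mean_db:.1f} dB")
```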
In some embodiments, an utterance may be a word, a short phrase, or a complex sentence with many embedded clauses. Non-limiting examples of utterances include one or more spoken phrases or words, such as "okay?", "not on the floor!", "pink.", and "okay, I said she would go, but she never did." In some embodiments, the segmentation of the recorded speech is based on pause(s) in the recorded speech, e.g., two utterances separated by a pause of at least 2 seconds, at least 2.5 seconds, at least 3 seconds, at least 3.5 seconds, or at least 4 seconds. In some embodiments, the segmentation of the recorded speech is based on pauses of at most 2 seconds, at most 2.5 seconds, at most 3 seconds, at most 3.5 seconds, or at most 4 seconds. In some embodiments, an utterance is never more than one complete sentence long, i.e., even if there is no detectable pause between two complete sentences, the two complete sentences are split into two utterances.
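The pause-based segmentation rule above can be illustrated with a short sketch that merges voiced intervals separated by less than the minimum pause into a single utterance. The interval representation and the 2-second default are assumptions taken from the ranges described.

```python
# Sketch of pause-based utterance segmentation. Voiced intervals (start, end)
# in seconds are assumed to come from an upstream voice-activity detector.
def segment_utterances(voiced_intervals, min_pause_s=2.0):
    if not voiced_intervals:
        return []
    utterances = [[voiced_intervals[0]]]
    for prev, cur in zip(voiced_intervals, voiced_intervals[1:]):
        if cur[0] - prev[1] >= min_pause_s:
            utterances.append([cur])    # long pause: start a new utterance
        else:
            utterances[-1].append(cur)  # short gap: same utterance continues
    # Merge each group into one (start, end) span.
    return [(group[0][0], group[-1][1]) for group in utterances]

print(segment_utterances([(0.0, 1.2), (1.5, 2.8), (5.3, 6.0)]))
# [(0.0, 2.8), (5.3, 6.0)]
```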
In some embodiments, the first plurality of biometric data elements includes or is used to determine a plurality of vocal features. In some embodiments, the vocal features are phones associated with the utterance, together with attributes such as pitch, pauses, and tone changes. In some embodiments, the plurality of vocal features includes between 5 phones and 200 phones, between 5 phones and 150 phones, between 5 phones and 100 phones, between 5 phones and 80 phones, between 5 phones and 60 phones, between 5 phones and 40 phones, between 5 phones and 20 phones, between 15 phones and 200 phones, between 15 phones and 150 phones, between 15 phones and 100 phones, between 15 phones and 80 phones, between 15 phones and 60 phones, between 15 phones and 40 phones, between 15 phones and 20 phones, between 35 phones and 200 phones, between 35 phones and 150 phones, between 35 phones and 100 phones, between 35 phones and 80 phones, between 35 phones and 60 phones, between 35 phones and 40 phones, between 60 phones and 200 phones, between 60 phones and 150 phones, between 60 phones and 100 phones, between 60 phones and 80 phones, between 80 phones and 200 phones, between 80 phones and 150 phones, or between 80 phones and 100 phones. In some embodiments, the plurality of vocal features includes at least 5 phones, at least 10 phones, at least 15 phones, at least 20 phones, at least 25 phones, at least 30 phones, at least 35 phones, at least 40 phones, at least 45 phones, at least 50 phones, at least 55 phones, at least 60 phones, at least 65 phones, at least 70 phones, at least 75 phones, at least 80 phones, at least 85 phones, at least 90 phones, at least 95 phones, at least 100 phones, at least 105 phones, at least 110 phones, at least 115 phones, at least 120 phones, at least 125 phones, at least 130 phones, at least 135 phones, at least 140 phones, at least 145 phones, at least 150 phones, at least 155 phones, at least 160 phones, at least 165 phones, at least 170 phones, at least 175 phones, at least 180 phones, at least 185 phones, at least 190 phones, at least 195 phones, or at least 200 phones. In some embodiments, the plurality of vocal features includes at most 5 phones, at most 10 phones, at most 15 phones, at most 20 phones, at most 25 phones, at most 30 phones, at most 35 phones, at most 40 phones, at most 45 phones, at most 50 phones, at most 55 phones, at most 60 phones, at most 65 phones, at most 70 phones, at most 75 phones, at most 80 phones, at most 85 phones, at most 90 phones, at most 95 phones, at most 100 phones, at most 105 phones, at most 110 phones, at most 115 phones, at most 120 phones, at most 125 phones, at most 130 phones, at most 135 phones, at most 140 phones, at most 145 phones, at most 150 phones, at most 155 phones, at most 160 phones, at most 165 phones, at most 170 phones, at most 175 phones, at most 180 phones, at most 185 phones, at most 190 phones, at most 195 phones, or at most 200 phones.
In some embodiments, the method then extracts various voice cues related to vocal emotion. In some embodiments, the voice cues include, but are not limited to, fundamental frequency, speech rate, one or more pauses, speech intensity, speech onset time, jitter (e.g., one or more pitch perturbations), amplitude perturbation or shimmer (e.g., one or more loudness perturbations), one or more speech breaks, one or more pitch jumps, one or more measures of voice quality (e.g., the relative amounts of high-frequency and low-frequency energy in the spectrum, or the frequency and bandwidth of energy peaks in the spectrum due to natural resonances of the vocal tract, called formants), or a combination thereof. Several metrics may be obtained for each type of cue. In some embodiments, the extraction of the voice cues is unsupervised (e.g., automatic, without human involvement). In some embodiments, the extraction of the voice cues is supervised (e.g., reviewed by the subject, a system administrator, or a healthcare worker associated with the subject).
In some embodiments, sentiment analysis or emotion analysis is performed on the first plurality of biometric data elements captured by the recorder. For example, in some embodiments, emotion analysis is performed on words, phrases, and/or sentences extracted from the first plurality of biometric data elements. In some embodiments, the predetermined emotion is admiration, anger, anxiety, embarrassment, boredom, calmness, confusion, craving, disgust, empathic pain, mania, excitement, fear, phobia, interest, happiness, reminiscence, relaxation, sadness, satisfaction, or surprise.
In some embodiments, emotion analysis is based at least in part on: a dictionary (e.g., a list of words and the emotions they convey), a sentiment analysis lexicon (e.g., a lexicon containing information about the emotions or polarities expressed by words, phrases, or concepts), a library (e.g., a library that computes a set of prosodic and spectral features that support emotion recognition), machine learning algorithms (e.g., naive Bayes, support vector machines, maximum entropy), or combinations thereof.
In some embodiments, emotion analysis is performed using a distance metric, such as the cosine similarity measure or dot product, between one or more utterances of the subject made during the corresponding challenge and each expression in a list of expressions considered characteristic of a predetermined emotion. In some embodiments, the emotion analysis is based on the methods of Duda and Hart, 1973, "Pattern Classification and Scene Analysis," Wiley, Print, and Salton and McGill, 1983, "Introduction to Modern Information Retrieval," McGraw-Hill Book Co., Print (each of which is incorporated herein by reference in its entirety). For example, consider $X_p$ and $X_q$ to be two vectors representing an utterance of the subject and an expression in the list of expressions considered characteristic of the predetermined emotion, respectively. The cosine similarity measure may be determined using the following equation:

$$\mathrm{sim}(X_p, X_q) = \frac{X_p \cdot X_q}{\lVert X_p \rVert \, \lVert X_q \rVert} = \frac{\sum_{i=1}^{n} x_{p,i}\, x_{q,i}}{\sqrt{\sum_{i=1}^{n} x_{p,i}^{2}}\, \sqrt{\sum_{i=1}^{n} x_{q,i}^{2}}}$$
Table 1 below shows various other types of distance metrics and further illustrates the nomenclature of the above formula.

Table 1. Exemplary distance metrics for the distance-based classification model 208. Consider $X_p$ and $X_q$ to be two pattern vectors (e.g., two vectors representing an utterance of the subject and an expression in the expression list, respectively). Also consider $\max_i$ and $\min_i$ to be the maximum and minimum, respectively, of the $i$-th attribute of the patterns in a dataset (e.g., a text string). The distance between $X_p$ and $X_q$ is defined for each distance metric as follows.
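The cosine similarity measure from the equation above can be illustrated on simple bag-of-words vectors; the vocabulary, vectors, and emotion profile below are invented for illustration.

```python
# Cosine similarity between an utterance vector and an emotion profile vector.
import math

def cosine_similarity(xp: list[float], xq: list[float]) -> float:
    dot = sum(a * b for a, b in zip(xp, xq))
    norm = math.sqrt(sum(a * a for a in xp)) * math.sqrt(sum(b * b for b in xq))
    return dot / norm if norm else 0.0

vocab = ["hopeless", "tired", "fine", "great"]
utterance_vec = [2.0, 1.0, 0.0, 0.0]  # counts of each vocabulary word spoken
sadness_vec   = [1.0, 1.0, 0.0, 0.0]  # illustrative profile for "sadness"

print(round(cosine_similarity(utterance_vec, sadness_vec), 3))  # 0.949
```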
Those skilled in the art will appreciate that other emotions are within the scope of the systems and methods of the present disclosure. Additional information related to sentiment analysis or analysis of speech data may be found in "Sentiment Analysis and Opinion Mining," Bing Liu, Morgan & Claypool Publishers, May 2012, and "Speech emotion analysis," Juslin et al., 2008, Scholarpedia, 3(10):4240, each of which is incorporated herein by reference in its entirety for all purposes. Additional details and information about the distance-based classification model may be found in Yang et al., 1999, "DistAI: An Inter-pattern Distance-based Constructive Learning Algorithm," Intelligent Data Analysis, 3(1), pg. 55.
Blocks 522 through 526. Referring to blocks 522 through 526, in some embodiments, a first plurality of biometric data elements captured by a first biometric sensor is stored, thereby allowing playback of the first plurality of biometric data elements after the first digital reality scene is completed. For example, as a non-limiting example, in an embodiment, a first plurality of biometric data elements (e.g., recorded speech) captured by a first biometric sensor is stored in a recorder. As another non-limiting example, in another embodiment, the first plurality of biometric data elements captured by the first biometric sensor is transmitted to and stored in a client device (e.g., client device 300-1) or a remote system (e.g., digital reality system 200), or the like.
In some embodiments, one or more specific keywords are used in the analysis of the first plurality of biometric data elements captured by the first biometric sensor to prevent cheating. For example, as a non-limiting example, in some embodiments where the challenge requires the subject to request a napkin from a non-player character (e.g., a bartender), the subject needs to speak the particular word "napkin" or "napkins" to begin the conversation. As another non-limiting example, in some embodiments where the challenge requires the subject to reframe an idea, the subject needs to speak a first word (e.g., "willing") or must avoid a second word (e.g., "cannot") during the conversation. However, the present disclosure is not limited thereto.
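A minimal sketch of such a keyword gate follows; the transcript is assumed to come from a speech-to-text stage, and the word lists are illustrative.

```python
# Keyword gate sketch: required words must appear, forbidden words must not.
def passes_keyword_gate(transcript: str,
                        required: set[str] = frozenset(),
                        forbidden: set[str] = frozenset()) -> bool:
    words = set(transcript.lower().split())
    return required <= words and not (forbidden & words)

# Napkin challenge: the subject must actually say "napkin".
said = "excuse me could i get a napkin please"
print(passes_keyword_gate(said, required={"napkin"}))            # True
print(passes_keyword_gate(said, forbidden={"cannot", "can't"}))  # True
```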
In some embodiments, the first plurality of biometric data elements captured by the first biometric sensor are pre-processed to remove background noise, such as by modulating waveform data in the first plurality of biometric data elements. Alternatively, in some embodiments, the first plurality of biometric data elements is captured by the first biometric sensor in a state where the automatic noise cancellation feature is enabled.
Block 528. Referring to block 528, in some embodiments, the first plurality of biometric data elements captured by the first biometric sensor (e.g., recorder, camera, eye tracking sensor) is stored, thereby allowing playback of the first plurality of biometric data elements after the first digital reality scene is completed. For example, as a non-limiting example, in an embodiment, the first plurality of biometric data elements (e.g., recorded speech) captured by the first biometric sensor is stored in the recorder. As another non-limiting example, in another embodiment, the first plurality of biometric data elements captured by the first biometric sensor is transmitted to and stored in a client device (e.g., client device 300-1) or a remote system (e.g., digital reality system 200), or the like. As yet another non-limiting example, the first plurality of biometric data elements (e.g., images or videos) captured by the first biometric sensor is stored in a camera, or transmitted to and stored in a client device (e.g., client device 300-1), or transmitted to and stored in a remote system (e.g., digital reality system 200), and so forth. For example, in some embodiments, upon completion of a CBT challenge, a first idea or statement (e.g., waveform data of the first idea or statement) obtained from the subject and associated with the first plurality of biometric data elements is stored, which allows the subject to revisit and reconstruct the first idea or statement at a future time (e.g., after completing the first challenge).
Block 530. Referring to block 530, in some embodiments, the first plurality of biometric data elements is captured by the first biometric sensor in response to a particular trigger. For example, as a non-limiting example, in some embodiments in which the first biometric sensor is or includes an eye tracking sensor, the first plurality of biometric data elements is captured in response to the subject looking at a particular object in the digital reality scene (e.g., looking at the eyes of a non-player character such as a bartender). As another non-limiting example, in some embodiments, the first plurality of biometric data elements is captured in response to a selection or deselection of a checkbox by the subject or a healthcare worker associated with the subject. As yet another non-limiting example, the first plurality of biometric data elements is captured in response to a voice command from the subject or a healthcare worker associated with the subject. In some embodiments, the particular trigger includes changing a state of an input of the client device, such as by pressing a button or moving a sensor of the client device. In some embodiments, the particular trigger is associated with an interaction of the subject's avatar in the digital reality scene (such as an interaction with a digital reality recording object).
However, the present disclosure is not so limited and any suitable trigger may be used to start and/or stop capturing biometric data elements. For example, in some embodiments, the first plurality of biometric data elements are captured in response to a touch to a first biometric sensor, a switch to an input mechanism by a subject or a healthcare worker associated with the subject, a change in a state of a digital reality scene (e.g., interaction with a digital reality object, etc.), or a combination thereof. In some embodiments, the first plurality of biometric data elements is captured in response to one or more specific keywords provided by the subject (such as one or more specific keywords obtained by a logger, etc.).
Blocks 534 through 538. Referring to blocks 534 through 538, in some embodiments, a first biometric sensor of the at least one biometric sensor is configured to capture biometric data elements associated with a physiological or psychological state of the subject at a predetermined sampling rate (such as a repeated sampling rate, a periodic sampling rate, an aperiodic sampling rate, etc.). For example, in some embodiments, the first biometric sensor captures biometric data elements at a predetermined sampling rate of once every 200 milliseconds or less, once every 160 milliseconds or less, once every 140 milliseconds or less, once every 120 milliseconds or less, once every 100 milliseconds or less, once every 80 milliseconds or less, once every 60 milliseconds or less, once every 40 milliseconds or less, once every 30 milliseconds or less, once every 20 milliseconds or less, or once every 10 milliseconds or less. In some embodiments, the first biometric sensor captures biometric data elements at a predetermined sampling rate of once every 200 milliseconds or more, once every 160 milliseconds or more, once every 140 milliseconds or more, once every 120 milliseconds or more, once every 100 milliseconds or more, once every 80 milliseconds or more, once every 60 milliseconds or more, once every 40 milliseconds or more, once every 30 milliseconds or more, once every 20 milliseconds or more, or once every 10 milliseconds or more. In some embodiments, the first biometric sensor captures biometric data elements at a predetermined sampling interval of between 10 ms and 200 ms, between 10 ms and 150 ms, between 10 ms and 100 ms, between 10 ms and 50 ms, between 25 ms and 200 ms, between 25 ms and 150 ms, between 25 ms and 100 ms, between 25 ms and 50 ms, between 60 ms and 200 ms, between 60 ms and 150 ms, between 60 ms and 100 ms, between 130 ms and 200 ms, or between 130 ms and 150 ms.
In some embodiments, the predetermined sampling rate is constant while the subject is completing the first challenge in the first digital reality scenario. In some other embodiments, the predetermined sampling rate is adjustable or variable, e.g., adjusted or changed in response to an earlier captured biometric data element. For example, in one embodiment, the method adjusts (e.g., increases) the predetermined sampling rate when it detects a sudden change in the biometric metric (e.g., a sudden increase in heart rate) over a relatively short period of time. In another embodiment, the method adjusts (e.g., reduces) the predetermined sampling rate when it does not detect a change in the biometric metric (e.g., a consistent and normal heart rate) for a relatively long period of time.
In some embodiments, the first biometric sensor captures biometric data elements at a constant predetermined sampling rate while the subject is completing one portion of the first digital reality scene and captures biometric data elements at an adjustable or variable predetermined sampling rate while the subject is completing another portion of the first digital reality scene.
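The adjustable sampling behavior described above might look like the following sketch, where the sampling interval halves after an abrupt change in the measured value and relaxes while the signal is stable; the thresholds, factors, and 10-200 ms bounds are illustrative assumptions.

```python
# Adaptive sampling-interval sketch (all thresholds and factors assumed).
def next_sampling_interval_ms(interval_ms: float, prev_value: float,
                              new_value: float, jump_threshold: float = 10.0,
                              lo: float = 10.0, hi: float = 200.0) -> float:
    if abs(new_value - prev_value) >= jump_threshold:
        interval_ms /= 2      # sudden change: sample more often
    else:
        interval_ms *= 1.25   # stable signal: sample less often
    return max(lo, min(hi, interval_ms))  # clamp to the 10-200 ms range

print(next_sampling_interval_ms(100.0, prev_value=82, new_value=97))  # 50.0
print(next_sampling_interval_ms(100.0, prev_value=82, new_value=84))  # 125.0
```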
In some embodiments, the first biometric sensor intermittently captures biometric data elements while the subject is completing the corresponding first challenge of the first digital reality scene. For example, as a non-limiting example, in some embodiments, the first biometric sensor begins capturing biometric data elements when the subject is speaking in the first digital reality scene, and stops collecting biometric data elements when another character in the first digital reality scene (e.g., object 42-2 of fig. 6B) is speaking, when the method or system is providing instructions, or when the subject is not speaking.
Blocks 540 through 542. Referring to blocks 540 through 542, in some embodiments, the determining (D) includes determining whether a comparison of the first plurality of biometric data elements to a second baseline characteristic meets a second biometric threshold of the at least one biometric threshold. For example, as a non-limiting example, assume that the first biometric sensor is a recorder and the first plurality of biometric data elements is speech of the subject recorded during presentation of at least a portion of the first digital reality scene designed for the first suggested experience associated with the first category. The recorded speech is partitioned into one or more utterances. The determining (D) includes determining whether a comparison of the first plurality of biometric data elements with the corresponding threshold baseline characteristic (e.g., a speech baseline of the subject in a relaxed state, a pitch baseline of the subject, a pause baseline of the subject, a tone-change baseline of the subject, a dialect baseline of the subject, a grammar baseline of the subject, etc.) meets the first biometric threshold (e.g., a relative speech threshold, a relative pitch threshold, a relative pause threshold, a relative tone-change threshold, a relative dialect threshold, a relative grammar threshold, etc.), and whether a comparison of the first plurality of biometric data elements with the second baseline characteristic (e.g., a decibel level or pitch baseline of the subject in a relaxed state, a PDF entropy baseline of the subject, etc.) meets the second biometric threshold (e.g., a relative decibel level or pitch threshold, a relative PDF entropy threshold, etc.).
It should be noted that in embodiments in which the first plurality of biometric data elements is recorded speech, the first biometric threshold or the second biometric threshold may be any threshold associated with an utterance including, but not limited to, a relative utterance threshold, a relative confidence threshold, a relative decibel level threshold, a relative pitch threshold, or any combination thereof. For example, in an embodiment, during presentation of the first digital reality scene presenting the first challenge designed for the first suggested experience associated with the first category, the first biometric threshold is a desired minimum change in the number of utterances compared to the speech baseline of the subject, and the second biometric threshold is (i) a desired minimum change in confidence compared to the confidence baseline of the subject, (ii) a desired minimum change in decibel level compared to the decibel level baseline of the subject, and/or (iii) a desired minimum change in pitch compared to the pitch baseline of the subject. In another embodiment, during presentation of the first digital reality scene presenting the first challenge designed for the first suggested experience associated with the first category, the first biometric threshold is (i) a desired minimum change in confidence compared to the confidence baseline of the subject, (ii) a desired minimum change in decibel level compared to the decibel level baseline of the subject, and/or (iii) a desired minimum change in pitch compared to the pitch baseline of the subject, and the second biometric threshold is a desired minimum change in the number of utterances compared to the speech baseline of the subject. However, the present disclosure is not limited thereto.
Blocks 546 through 548. Referring to blocks 546 through 548, in some embodiments, the biometric data elements captured in the obtaining (C) include a fourth plurality of biometric data elements captured by a second biometric sensor of the at least one biometric sensor (e.g., sensor 110-2 of fig. 1). The fourth plurality of biometric data elements is different from the first plurality of biometric data elements.
For example, as a non-limiting example, the first biometric sensor is a recorder and the first plurality of biometric data elements is recorded speech captured by the recorder, while the second biometric sensor is an eye tracking sensor and the fourth plurality of biometric data elements is eye tracking data (e.g., images) captured by the eye tracking sensor, or vice versa. In such embodiments, any threshold associated with voice or eye tracking may be used. For example, as a non-limiting example, during presentation of the first digital reality scene presenting the first challenge designed for the first suggested experience associated with the first category, one of the first and third biometric thresholds is (i) a desired minimum change in the number of words compared to the word baseline of the subject, (ii) a desired minimum change in the number of utterances compared to the utterance baseline of the subject, (iii) a desired minimum change in confidence compared to the confidence baseline of the subject, (iv) a desired minimum change in decibel level compared to the decibel level baseline of the subject, and/or (v) a desired minimum change in pitch compared to the pitch baseline of the subject, and the other of the first and third biometric thresholds is a desired minimum change in the length of eye contact compared to the eye contact baseline of the subject.
As another non-limiting example, the first biometric sensor is a recorder and the first plurality of biometric data elements is recorded speech captured by the recorder, and the second biometric sensor is a heart rate sensor and the fourth plurality of biometric data elements is heart rate data captured by the heart rate sensor, or vice versa. In such embodiments, any threshold value associated with speech or heart rate may be used. For example, as non-limiting examples, one of the first and third biometric thresholds is (i) a desired minimum change in the number of words compared to the word baseline of the subject, (ii) a desired minimum change in the number of utterances compared to the utterance baseline of the subject, (iii) a desired minimum change in confidence compared to the confidence baseline of the subject, (iv) a desired minimum change in decibel level compared to the decibel level baseline of the subject, and/or (v) a desired minimum change in pitch compared to the pitch baseline of the subject, and the other of the first and third biometric thresholds is a desired minimum change in heart rate compared to the heart rate baseline of the subject.
As a further non-limiting example, the first biometric sensor is a heart rate sensor and the first plurality of biometric data elements is heart rate data captured by the heart rate sensor, while the second biometric sensor is an eye tracking sensor and the fourth plurality of biometric data elements is eye tracking data captured by the eye tracking sensor. In such embodiments, any threshold related to heart rate or eye tracking may be used. For example, as a non-limiting example, during presentation of the first digital reality scene presenting the first challenge designed for the first suggested experience associated with the first category, one of the first biometric threshold and the third biometric threshold is a desired minimum change in heart rate compared to the heart rate baseline of the subject, and the other of the first biometric threshold and the third biometric threshold is a desired minimum change in the length of eye contact compared to the eye contact baseline of the subject.
In some embodiments, determining (D) includes determining whether a comparison of the fourth plurality of biometric data elements to the third baseline characteristic meets a third biometric threshold of the at least one biometric threshold. For example, as a non-limiting example, in embodiments where the fourth plurality of biometric data elements are eye tracking data captured by the eye tracking sensor, the determining (D) determines whether a comparison of the eye tracking data (e.g., a length of eye contact) to an eye contact baseline characteristic (e.g., a length of eye contact of the subject in a relaxed state) meets an eye contact threshold. As another non-limiting example, in embodiments in which the fourth plurality of biometric data elements is heart rate data captured by a heart rate sensor, determining (D) determines whether a comparison of the heart rate data (e.g., heart beats per minute) to a heart rate baseline characteristic (e.g., heart beats per minute of the subject in a relaxed state) meets a heart rate threshold.
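Combining several per-sensor checks into the overall determination (D) can be sketched as follows; the data structures, signed-change convention, and values are assumptions for illustration.

```python
# Sketch combining per-sensor checks into the overall determination (D).
# Each entry of `required_change` maps a sensor name to a signed minimum
# change versus baseline: negative means the reading must drop by at least
# that much (e.g., heart rate), positive means it must rise (e.g., eye contact).
def evaluate_challenge(readings: dict[str, float],
                       baselines: dict[str, float],
                       required_change: dict[str, float]) -> bool:
    for sensor, delta in required_change.items():
        change = readings[sensor] - baselines[sensor]
        met = change <= delta if delta < 0 else change >= delta
        if not met:
            return False  # any unmet biometric threshold fails the challenge
    return True

print(evaluate_challenge(
    readings={"heart_rate_bpm": 112.0, "eye_contact_s": 9.0},
    baselines={"heart_rate_bpm": 124.0, "eye_contact_s": 4.0},
    required_change={"heart_rate_bpm": -10.0, "eye_contact_s": 3.0}))  # True
```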
In some embodiments, the method includes determining, in the event that the first challenge is determined to be successfully completed, whether at least one gate criterion associated with the first category is satisfied. In some embodiments, the determination that the first challenge is successfully completed is based at least in part on: whether the first set of biometric data elements meets at least one biometric threshold associated with the first suggested experience for the first challenge (wherein the at least one biometric threshold comprises the first biometric threshold, and the first set of biometric data elements comprises a first subset of biometric data elements captured by a first biometric sensor of the at least one biometric sensor); whether the first set of biometric data elements meets the corresponding first threshold baseline characteristic; whether the at least one gate criterion associated with the first category is satisfied; or a combination thereof. For example, as a non-limiting example, assume that the at least one gate criterion associated with the first category includes a single gate criterion requiring the subject to successfully complete a minimum number (e.g., 3, 4, or 5) of challenges, such as a threshold number of exposure challenges or a threshold number of CBT challenges (e.g., a threshold number of reframed ideas). In such embodiments, the determination checks whether the number of challenges that the subject successfully completed meets or exceeds the required minimum number of challenges. As another non-limiting example, assume that the at least one gate criterion associated with the first category includes a first gate criterion and a second gate criterion. The first gate criterion requires that the subject successfully complete a threshold number of challenges, and the second gate criterion requires that the subject successfully complete one or more specific challenges (e.g., challenge 26-2) associated with the first category. In such embodiments, the determination (E) not only checks whether the number of challenges that the subject successfully completed meets or exceeds the required minimum number, but also whether the subject successfully completed each of the one or more required specific challenges.
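The two example gate criteria just described (a minimum number of successfully completed challenges plus a set of required specific challenges) can be sketched as follows; the challenge identifiers are hypothetical.

```python
# Gate-criteria sketch: count threshold plus required specific challenges.
def gates_satisfied(completed: set[str], min_completed: int,
                    required_challenges: set[str] = frozenset()) -> bool:
    return len(completed) >= min_completed and required_challenges <= completed

done = {"exposure-1", "exposure-2", "cbt-reframe-1"}
print(gates_satisfied(done, min_completed=3,
                      required_challenges={"cbt-reframe-1"}))  # True
print(gates_satisfied(done, min_completed=4))                  # False
```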
Block 552. Referring to block 552, the method includes (E) determining, based at least in part on a result of the determining (D) (e.g., whether the first set of biometric data elements meets the at least one biometric threshold associated with the first suggested experience for the first challenge, including the first biometric threshold, wherein the first set of biometric data elements includes a first subset of biometric data elements captured by a first biometric sensor of the at least one biometric sensor; whether the first set of biometric data elements meets the corresponding first threshold baseline characteristic; and whether the at least one gate criterion associated with the first category is satisfied), a next second category of the plurality of categories for the subject to proceed with, thereby improving, through exposure, the subject's ability to manage the subject's psychosis or mental condition. For example, as a non-limiting example, assume that the plurality of categories obtained for the subject is a set of three categories A, B, and C, and that category A is the first category, which the subject has successfully completed (e.g., by meeting each of the at least one respective gate criterion associated with the first category). The determining (E) determines whether category B or category C should be the second category for the subject to proceed with next.
The determination (E) of the second category for the subject to proceed with next is based at least in part on the determination (D) of whether the at least one biometric data element meets the at least one biometric threshold of the first challenge and on the determination of whether the at least one gate criterion associated with the first category is satisfied. For example, in some embodiments, the determination (E) of the second category is based not only on the number of challenges that the subject has successfully completed, but also on the degree to which the subject completed those challenges (e.g., met the requirements, exceeded some of the requirements, exceeded most of the requirements, exceeded all of the requirements) and/or the degree of improvement (e.g., slight, moderate, significant) that the subject achieved through those challenges. In some embodiments, the determination (E) of the second category is based not only on whether the subject successfully completed each of the one or more required specific challenges, but also on any additional challenges that the subject has attempted or successfully completed. In some embodiments, the determination (E) of the second category is additionally or alternatively based on other factors including, but not limited to, the subject's performance during other educational or therapeutic challenges, or the performance of a population of users during the same challenge. Thus, the methods of the present disclosure not only provide an exposure progression customized for each subject, but also personalize the timing and/or nature of the exposure practice. The method dynamically builds or revises the personal exposure progression based at least in part on the subject's level of success in one or more social challenges.
Blocks 554 through 556. Referring to blocks 554 through 556, in some embodiments, the determination (E) of the second category for the subject to proceed with next includes (E.1) evaluating whether the category immediately following the first category in the initial exposure progression (e.g., an initial exposure progression based on the subject's assessment and/or other data, as exemplified by at least block 480) is suitable for the subject to proceed with next. In some embodiments, the determination (E) further includes (E.2) presenting the immediately following category in the initial exposure progression as the second category if that category is suitable for the subject to proceed with next. In some embodiments, the determination (E) further includes (E.3) recommending a category in the initial exposure progression other than the immediately following category as the second category if the immediately following category is not suitable for the subject to proceed with next.
For example, as a non-limiting example, assume that the plurality of categories obtained for the subject is a set of three categories A (e.g., an exposure category), B (e.g., a CBT category), and C (e.g., a mindfulness category), arranged in the initial exposure progression with category A as the first category, followed by category B, and then category C. If it is determined that the subject successfully completed category A (the first category in the initial exposure progression), the method determines in evaluation (E.1) whether category B (the category immediately following category A in the initial exposure progression) is suitable for the subject to proceed with next. If category B is determined to be suitable, the method presents category B (e.g., on a display) in presentation (E.2) for the subject to proceed with. If category B is determined not to be suitable, the method recommends, in recommendation (E.3), category C or another educational/therapeutic challenge (e.g., a mindfulness or cognitive restructuring challenge) for the subject to proceed with. The recommendation may be made, for example, by placing an indicator (e.g., text or graphics) on the display, via audio, and so forth.
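A minimal sketch of the (E.1) through (E.3) flow follows, assuming a hypothetical is_suitable predicate, which, per the disclosure, could be informed by biometric data, a model, or a healthcare worker; all names here are illustrative.

```python
from collections.abc import Callable

def next_category(progression: list[str], completed: str,
                  is_suitable: Callable[[str], bool]) -> str:
    """Pick the category for the subject to proceed with after `completed`."""
    idx = progression.index(completed)
    following = progression[idx + 1]       # category immediately following (E.1)
    if is_suitable(following):
        return following                   # present it as the second category (E.2)
    for candidate in progression[idx + 2:]:
        if is_suitable(candidate):         # recommend another category instead (E.3)
            return candidate
    return following                       # fall back, e.g., defer to the healthcare worker

# Category A completed; B judged unsuitable for now, so C is recommended.
print(next_category(["A", "B", "C"], "A", lambda c: c != "B"))  # "C"
```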
Block 558. Referring to block 558, in some embodiments, the method further includes (I) repeating presenting (B), obtaining (C), and determining (D) one or more times for other challenges associated with the first category when a gate criterion of the at least one respective gate criterion associated with the first category is not met. For example, as a non-limiting example, assume that the at least one gate criterion associated with the first category includes a first gate criterion requiring the subject to successfully complete a minimum of three challenges. If it is determined that the subject has successfully completed only one challenge associated with the first category (e.g., challenge 26-1), repeating (I) will repeat presenting (B), obtaining (C), and determining (D) at least twice for other challenges associated with the first category (e.g., challenge 26-2, challenge 26-3). If it is determined that the subject has successfully completed two challenges (e.g., challenge 26-1, challenge 26-2), repeating (I) will repeat presenting (B), obtaining (C), and determining (D) at least once for another challenge associated with the first category (e.g., challenge 26-3).
As another non-limiting example, assume that the at least one gate criterion associated with the first category includes a second gate criterion requiring the subject to successfully complete one or more specific challenges (e.g., challenge 26-4) associated with the first category. If it is determined that the subject did not successfully complete each of the one or more required specific challenges, the method will notify the subject of the requirement and recommend the required specific challenge to the subject, even if the subject has successfully completed the required minimum number of challenges associated with the first category. In some embodiments, when the subject or a healthcare worker associated with the subject selects one or more required challenges (e.g., challenge 26-4), repeating (I) will repeat presenting (B), obtaining (C), and determining (D) one or more times for the one or more required challenges associated with the first category.
Blocks 560 through 574. Referring to blocks 560 through 574, in some embodiments, the method further includes (J) recommending a challenge for the subject to proceed with next if the first challenge is determined to be unsuccessful. The recommended challenge may be, but is not limited to, a challenge that presents an equal or lesser challenge than the first challenge of the first category, the same first challenge designed for the first suggested experience of the first category, a challenge designed for a different suggested experience of the first category, a challenge designed for a suggested experience of a different category (e.g., the second category) of the plurality of categories, or a challenge not associated with any of the plurality of categories. In some embodiments, the recommendation is based at least in part on the subject's performance in the first challenge and/or one or more other challenges.
For example, as a non-limiting example, assume that the plurality of categories includes a first category associated with four challenges (e.g., challenge 26-1, challenge 26-2, challenge 26-3, and challenge 26-4) and a second category associated with three challenges (e.g., challenge 26-5, challenge 26-6, and challenge 26-7), and that challenge 26-2 presents an equal or lesser challenge than challenge 26-1. In one embodiment, if it is determined that challenge 26-1 was not successfully completed, the method recommends challenge 26-2 in recommendation (J) for the subject to proceed with next. In another embodiment, if it is determined that challenge 26-1 was not successfully completed, the method recommends challenge 26-1 itself in recommendation (J) for the subject to proceed with next (i.e., repeating the same challenge one or more times). In a further embodiment, if it is determined that challenge 26-1 was not successfully completed, the method recommends, in recommendation (J), challenge 26-5, which is not associated with the first category, for the subject to proceed with next. In an alternative embodiment, the method recommends an educational or therapeutic challenge in recommendation (J) for the subject to proceed with next.
The recommended educational or therapeutic challenge may be, but is not limited to, a mindfulness challenge or a cognitive restructuring challenge. In some embodiments, the challenge is a unique mindfulness challenge tailored to the first category, a general mindfulness challenge accessible from each of the plurality of categories, a unique cognitive restructuring challenge tailored to the first category, or a general cognitive restructuring challenge accessible from each of the plurality of categories.
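As one possible reading of recommendation (J), the hypothetical sketch below returns an equal-or-lesser challenge in the same category when one exists, optionally repeats the failed challenge, and otherwise falls back to an educational/therapeutic challenge such as mindfulness; the mapping and names are illustrative assumptions, not part of the disclosure.

```python
def recommend_after_failure(failed: str,
                            easier: dict[str, str],
                            repeat_same: bool = False) -> str:
    """Recommend the next challenge after an unsuccessful attempt (recommendation (J))."""
    if repeat_same:
        return failed                 # repeat the same challenge one or more times
    if failed in easier:
        return easier[failed]         # equal-or-lesser challenge, e.g., 26-2 after 26-1
    return "mindfulness"              # fall back to an educational/therapeutic challenge

print(recommend_after_failure("26-1", {"26-1": "26-2"}))  # "26-2"
print(recommend_after_failure("26-2", {"26-1": "26-2"}))  # "mindfulness"
```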
The recommendation may be presented to the subject in any suitable manner. For example, in some embodiments, the recommendation is presented as text, graphics, audio (e.g., spoken by a digital reality host), or a combination thereof. Through the recommendation, the systems, methods, and devices of the present disclosure increase the likelihood of engagement and/or better clinical outcomes.
Block 576. Referring to block 576, in some embodiments, the method further includes (K) repeating presenting (B), obtaining (C), and determining (D) for the recommended challenge in response to selection of the recommended challenge. For example, in response to selection of the recommended challenge, the method presents a digital reality scene that presents the recommended challenge, similar to the method disclosed herein and exemplified by at least block 480. In coordination with the presentation of the digital reality scene presenting the recommended challenge, the method obtains a plurality of data elements from all or a subset of the plurality of sensors, similar to the method disclosed herein and exemplified by at least block 482, wherein at least one biometric sensor captures at least one biometric data element associated with the subject while the subject is completing the recommended challenge. Based on the at least one biometric data element captured while the subject is completing the recommended challenge, the method determines whether the recommended challenge was successfully completed, similar to the method disclosed herein and exemplified by at least block 484.
Block 578. Referring to block 578, in some embodiments, the method further includes (L) presenting, in response to selection of the challenge, a second digital reality scene on the display that presents the challenge. For example, in embodiments where the recommended challenge is a mindfulness challenge, the method presents a digital reality scene that presents the mindfulness challenge (e.g., a digital reality scene that guides the subject to focus on the present moment). In some embodiments, the digital reality scene that presents the mindfulness challenge is an intermediary scene, such as the digital reality scene illustrated in fig. 6B. However, the present disclosure is not limited thereto. For example, in some embodiments, the recommended challenge is a cognitive restructuring challenge presented by a digital reality scene, such as the digital reality scene illustrated in fig. 6C.
Block 580. Referring to block 580, in some embodiments, the method further includes (M) obtaining a third plurality of data elements from a subset of the plurality of sensors in coordination with presenting (L) the second digital reality scene that presents the challenge. The third plurality of data elements includes a third plurality of biometric data elements associated with the subject and captured (e.g., by a first biometric sensor of the at least one biometric sensor) while the subject is completing the challenge presented by the second digital reality scene. For example, as a non-limiting example, assuming that the corresponding threshold baseline characteristic is the subject's heart rate captured during a training or educational challenge at the happy place, the first plurality of biometric data elements from obtaining (C) is the subject's heart rate captured while the subject is completing the first challenge in the first digital reality scene, and the third plurality of biometric data elements is the heart rate captured while the subject is completing the mindfulness challenge.
In some embodiments, the method further includes (N) determining a change or improvement by comparing the third plurality of biometric data elements to the corresponding threshold baseline characteristic or to the first plurality of biometric data elements from obtaining (C). In some embodiments, this comparison reveals the effectiveness of the mindfulness challenge and provides insight for improving the likelihood of engagement and/or better clinical outcomes through the use of other challenges as well as social challenges.
It should be noted that the present disclosure is not limited to mindfulness challenges and heart rate biometrics. Any suitable educational or treatment plan (e.g., introduction, training, cognitive restructuring) may be a challenge, and any suitable biometric data (e.g., voice, eye movement) may be captured during the educational or treatment plan. The method may determine the effectiveness of the educational or treatment plan by comparing the biometric data obtained during the educational or treatment plan to the baseline or to the biometric data elements obtained during the first challenge.
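For concreteness, a minimal sketch of determination (N) follows, assuming heart-rate samples and a simple comparison of means; the disclosure does not fix a particular comparison statistic, and the values and names are hypothetical.

```python
from statistics import mean

def improvement(during_plan: list[float], baseline: list[float]) -> float:
    """Positive value = mean heart rate dropped relative to the baseline characteristic."""
    return mean(baseline) - mean(during_plan)

baseline_hr = [88.0, 90.0, 87.0]        # captured during the "happy place" training challenge
mindfulness_hr = [78.0, 80.0, 79.0]     # captured while completing the mindfulness challenge
print(f"mean HR reduction: {improvement(mindfulness_hr, baseline_hr):.1f} bpm")  # 9.3 bpm
```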
Blocks 582 through 584. Referring to blocks 582 through 584, in some embodiments, the method further includes (O) presenting a subjective assessment option on the display prior to determining (E), e.g., asking the subject whether he/she would like to assess the challenge(s)/category(ies) that the subject has completed, the challenge(s)/category(ies) that the subject has not completed, or both. In some embodiments, the method further includes (P) performing the subjective assessment in response to selection of the subjective assessment option.
In some embodiments, the subjective assessment is performed through a user interface of a client device (e.g., user interface 306 of client device 300-1 of fig. 3). In some other embodiments, the subjective assessment is performed through a web browser in communication with a client device or the digital reality system 200. In some embodiments, the subjective assessment is completed by the subject without supervision by a healthcare worker (e.g., a healthcare practitioner) associated with the subject. In some other embodiments, the subjective assessment is completed by the subject under the supervision of a healthcare worker associated with the subject.
Referring to fig. 8A-8C, in some embodiments, the subjective assessment includes a plurality of cues for the subject to answer. For example, in some embodiments, the subjective assessment includes more than 2, more than 5, more than 10, more than 15, more than 20, more than 30, more than 50, more than 100, or more than 200 cues for the subject to answer. In some embodiments, the subjective assessment includes about twelve cues for the subject to answer. In an alternative embodiment, the subjective assessment includes about twenty-four cues.
In some embodiments, the subjective assessment is based on the minimal clinically important difference (MCID), the Clinical Global Impression scale (CGI), the Patient Global Impression scale (PGI), the Liebowitz Social Anxiety Scale (LSAS), or a combination thereof. In some embodiments, MCID, CGI, PGI, and/or LSAS are part of the assessment module 12 of fig. 2A.
MCID refers to the smallest benefit that is of value to the subject. It captures both the magnitude of the improvement and the value the subject places on that change. The MCID defines the minimum amount by which an outcome must change to be meaningful to the subject. More details and information about MCID evaluation can be found in Kaplan, R., 2005, "The Minimally Clinically Important Difference in Generic Utility-based Measures," COPD: Journal of Chronic Obstructive Pulmonary Disease, 2(1), pg. 91, which is incorporated herein by reference for all purposes.
CGI evaluates severity and/or the subject's ability to manage a psychosis or mental condition. Additional details and information about CGI scale evaluation can be found in Pérez et al., 2007, "The Clinical Global Impression Scale for Borderline Personality Disorder Patients (CGI-BPD): A Scale Sensible to Detect Changes," Actas Españolas de Psiquiatría, 35(4), pg. 229, which is incorporated herein by reference in its entirety for all purposes.
PGI provides a patient-rated counterpart to the clinician-rated CGI scale assessment. Additional details and information regarding PGI assessment may be found in Martin et al., 2007, "Twelve Years' Experience with the Patient Generated Index (PGI) of Quality of Life: A Graded Structured Review," Quality of Life Research, 16(4), pg. 705, which is incorporated herein by reference in its entirety for all purposes.
LSAS evaluates social anxiety disorder in clinical studies and practice. It includes a self-report version (LSAS-SR) and a clinician-administered version (LSAS-CA). Additional details and information related to LSAS evaluation are found in Rytwinski et al., 2009, "Screening for Social Anxiety Disorder with the Self-Report Version of the Liebowitz Social Anxiety Scale," Depression and Anxiety, 26(1), pg. 34, which is incorporated herein by reference in its entirety for all purposes.
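For orientation only: the LSAS rates 24 social situations on two 0-3 subscales (fear/anxiety and avoidance), giving a total score of 0 to 144. A minimal scoring sketch, with hypothetical function and variable names:

```python
def lsas_total(fear: list[int], avoidance: list[int]) -> int:
    """Sum 24 fear/anxiety ratings (0-3) and 24 avoidance ratings (0-3); range 0-144."""
    assert len(fear) == 24 and len(avoidance) == 24
    return sum(fear) + sum(avoidance)

print(lsas_total([2] * 24, [1] * 24))  # 72
```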
In some embodiments, the determination (E) of the category of the plurality of categories for the subject to proceed with next is based at least in part on the result of the subjective assessment. For example, as a non-limiting example, assume that the plurality of categories obtained for the subject is a set of three categories A, B, and C. Before the subject begins the plan, the three categories are initially ranked by the subject, the healthcare worker associated with the subject, and/or the model, in the order of category A, followed by category B, and then category C. After the subject successfully completes the required challenges and/or other requirements associated with category A, the subjective assessment indicates that the subject considers category C to be less challenging than category B. The method takes the subjective assessment into account when determining the category for the subject to proceed with next. For example, in one embodiment, upon confirmation of the subjective assessment by the healthcare worker associated with the subject and/or the model, the method determines category C, rather than category B, as the next category for the subject to proceed with. The method 400 dynamically constructs or revises the personal exposure progression and personalizes the timing and/or nature of the exposure practice based at least in part on the subjective assessment and/or other factors (e.g., the subject's level of success in one or more social challenges, or an evaluation or confirmation by the healthcare worker associated with the subject).
It should be noted that the presentation of the subjective assessment option (O) and the performance of the subjective assessment (P) may occur at other times. For example, as a non-limiting example, they may be performed after the subject successfully completes one or more challenges associated with a category (e.g., the first category, the second category, or a third category) but has not yet satisfied all requirements associated with the category. As another non-limiting example, they may occur after the subject fails one or more times to successfully complete a challenge associated with a category. As a further non-limiting example, in some embodiments, the method allows the subject to begin, terminate, or resume the subjective assessment at any time desired.
The subjective assessment may be used in various ways. For example, the subjective assessment may be used to rank or re-rank the experiences associated with a category of the plurality of categories, to rank or re-rank the plurality of categories, to recommend alternative or additional challenges or categories, to recommend educational or therapeutic challenges (e.g., mindfulness challenges, cognitive restructuring challenges, etc.), or a combination thereof. Fig. 8D illustrates a non-limiting example of the use of subjective assessments (e.g., recommended goals, challenges, etc.).
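One plausible implementation of subjective-assessment-driven re-ranking is sketched below; it is illustrative only, and the disclosure also contemplates confirmation by a healthcare worker and/or model before such a re-ranking takes effect.

```python
def rerank(categories: list[str], subjective_difficulty: dict[str, int]) -> list[str]:
    """Order the remaining categories least-challenging first, as rated by the subject."""
    return sorted(categories, key=lambda c: subjective_difficulty[c])

# After completing A, the subject rates C (2) as less challenging than B (3).
print(rerank(["B", "C"], {"B": 3, "C": 2}))  # ["C", "B"]
```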
Block 586. Referring to block 586, in some embodiments, the method further includes: (Q) repeating presenting (B), obtaining (C), and determining (D) for a digital reality scene that presents a challenge designed for a suggested experience of the second category; and (R) repeating determining (E) for the second category.
For example, as a non-limiting example, assume that the second category is associated with experience 24-j, and that experience 24-j is associated with digital reality scene 40-j, which presents challenge 26-j designed for experience 24-j of the second category. The method presents digital reality scene 40-j, which presents challenge 26-j, similar to the method disclosed herein and illustrated by at least block 480. In coordination with the presentation of digital reality scene 40-j, the method obtains a plurality of data elements from all or a subset of the plurality of sensors, similar to the method disclosed herein and illustrated by at least block 482, wherein at least one biometric sensor captures at least one biometric data element associated with the subject while the subject is completing challenge 26-j. Based on the at least one biometric data element captured while the subject is completing challenge 26-j, the method determines whether challenge 26-j was successfully completed, similar to the method disclosed herein and illustrated by at least block 484. If it is determined that challenge 26-j was successfully completed, the method determines whether at least one gate criterion associated with the second category is met, similar to the method disclosed herein and illustrated by at least block 550.
Block 588. Referring to block 588, in some embodiments, the plurality of categories obtained in obtaining (A) is initially arranged in an initial category hierarchy, thereby forming an initial exposure progression. For example, referring to FIG. 7A, in some embodiments, the plurality of categories obtained for the subject includes a first category 740-1, a second category 740-2, and a third category 740-3. Of these three categories, the first category 740-1 is considered the least challenging for the subject, the third category 740-3 is considered more challenging for the subject, and the second category 740-2 is considered the most challenging for the subject. The plurality of categories is initially arranged in an initial category hierarchy. For example, fig. 7B illustrates the first category, the second category, and the third category placed in order. The initial exposure progression is formed by the plurality of categories arranged in the initial category hierarchy.
In some embodiments, the initial category hierarchy is set by (i) a system administrator, (ii) the subject, (iii) a healthcare worker associated with the subject, (iv) a model, or (v) a combination thereof. For example, as a non-limiting example, in some embodiments, whether a category is considered more or less challenging is determined from the assessment and/or the subjective assessment (e.g., with the assessment module 12 facilitating obtaining the assessment and/or subjective assessment from the subject). By responding to the assessment, the subject provides input on the selection and order of categories and thereby helps develop the initial exposure progression. As another non-limiting example, in some embodiments, whether a category is considered more or less challenging is determined by a healthcare practitioner associated with the subject, such as by having the healthcare practitioner evaluate some or all of the assessments obtained from the subject and generate the initial exposure progression.
As yet another non-limiting example, in some embodiments, the initial exposure progression is generated at least in part by a model. For example, in some embodiments, the model obtains at least an assessment from the subject and, alternatively or additionally, obtains other data (e.g., the user profile data of fig. 2A), and generates the initial exposure progression based on the assessment and/or the other data from the subject. In some embodiments, the model obtains subjective ratings from highest to lowest (e.g., user-provided ratings of "mild," "moderate," "severe," or "unresponsive"), objective ratings from highest to lowest (e.g., a most-effective to least-effective ranking determined by the digital reality system 200 or a healthcare practitioner associated with the subject), and/or other data, and generates the initial exposure progression based on the subjective ratings, objective ratings, and/or other data. In some embodiments, the healthcare practitioner and the model together generate the initial exposure progression, such as by having the healthcare practitioner provide input and/or supervision to the model.
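A minimal sketch of how a model might blend subjective and objective ratings into an initial exposure progression follows; the 0.5 weighting and the rating encodings are illustrative assumptions, not taken from the disclosure.

```python
SUBJECTIVE = {"mild": 1, "moderate": 2, "severe": 3, "unresponsive": 4}

def initial_progression(categories: list[str],
                        subjective: dict[str, str],
                        objective_rank: dict[str, int],
                        w_subj: float = 0.5) -> list[str]:
    """Order categories least-challenging first from blended ratings."""
    def score(c: str) -> float:
        return w_subj * SUBJECTIVE[subjective[c]] + (1 - w_subj) * objective_rank[c]
    return sorted(categories, key=score)

print(initial_progression(["A", "B", "C"],
                          subjective={"A": "mild", "B": "severe", "C": "moderate"},
                          objective_rank={"A": 1, "B": 3, "C": 2}))  # ['A', 'C', 'B']
```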
As yet another non-limiting example, in some embodiments, a recommended exposure progression is generated by (i) a system administrator, (ii) the subject, (iii) a healthcare worker associated with the subject, (iv) a model, or (v) a combination thereof. The recommended exposure progression is presented to the subject. In some embodiments, the subject confirms the recommended exposure progression, or changes the order of the categories to create an initial exposure progression that differs from the recommended exposure progression. For example, FIG. 7B illustrates a recommended exposure progression in which a first category 740-1, a second category 740-2, and a third category 740-3 are placed in left-to-right order in the figure. FIG. 7A illustrates the subject's change to the order of the categories, showing an initial exposure progression ordered as the first category 740-1, followed by the third category 740-3, and then the second category 740-2.
Blocks 590 through 592. Referring to blocks 590 through 592, in some embodiments, the method further includes (F) presenting a graph (e.g., the graph of user interface 700 of fig. 7A, the graph of user interface 1800 of fig. 9D, the graph of user interface 1400 of fig. 12A, or the graph of user interface 1400 of fig. 12B) on a display to represent an initial instance of the exposure progression. The presenting (F) is typically performed prior to presenting (B) the first digital reality scene that presents the first challenge designed for the first suggested experience of the first category.
The graph includes a plurality of nodes and a plurality of edges. For example, as a non-limiting example, FIG. 7B illustrates three nodes, namely node 730-1, node 730-2, and node 730-3. For each respective node of the plurality of nodes, the graph further includes a corresponding plurality of experience graphics displayed adjacent to the respective node. For example, as a non-limiting example, FIG. 7A illustrates experience graphics (e.g., experience graphics 742-1, 742-2, 742-3, ...) displayed adjacent to node 730-1. An experience graphic represents a suggested experience. For example, in some embodiments, experience graphic 742-1 represents experience 24-1 and experience graphic 742-2 represents experience 24-2. FIG. 7A also illustrates a plurality of experience graphics displayed adjacent to node 730-2 and a plurality of experience graphics displayed adjacent to node 730-3.
In the graph, each respective node of the plurality of nodes corresponds to a respective category of the plurality of categories. For example, in the illustrated embodiment, node 730-1 corresponds to (e.g., represents) the first category 740-1 (e.g., a first exposure category, a first CBT category, etc.), node 730-2 corresponds to the second category 740-2 (e.g., a second exposure category, a first mindfulness category, etc.), and node 730-3 corresponds to the third category 740-3 (e.g., a second mindfulness category, a second CBT category, etc.).
In the graph, each respective node of the plurality of nodes is also associated with a corresponding plurality of suggested experiences. For example, as a non-limiting example, node 730-1 is associated with a first experience 24-1 (e.g., meeting strangers at a first party), a second experience 24-2 (e.g., meeting strangers at a second party), a third experience 24-3 (e.g., meeting strangers at a wedding banquet), a fourth experience 24-4 (e.g., meeting strangers at a work event), a fifth experience 24-5 (e.g., meeting strangers on a dating app), and a sixth experience 24-6 (e.g., meeting strangers at the start of school). Nodes 730-2 and 730-3 are each associated with experiences (e.g., experiences from the experience store 22 of the digital reality system 200).
In the graph, each respective node of the plurality of nodes is further associated with at least one respective gate criterion of the plurality of gate criteria. That is, each respective node of the plurality of nodes is associated with the same one or more gate criteria as the category to which it corresponds. In some embodiments, the method displays a completion status of each respective gate criterion associated with each respective node in the graph.
In some embodiments, the gate criteria associated with one node in the graph specify conditions to be met by the subject before proceeding to another node in the graph. For example, in the embodiment illustrated in FIG. 7A, the gate criteria associated with node 730-1 in the graph specify the conditions to be met by the subject before proceeding to node 730-3 in the graph. The gate criteria associated with node 730-3 in the graph specify the conditions to be met by the subject before proceeding to node 730-2 in the graph.
In some embodiments, the gate criteria associated with a node in the graph specify a condition to be met by the subject before activating that node in the graph. For example, in the embodiment illustrated in FIG. 7A, the gate criteria associated with node 730-1 in the graph specify the conditions to be met by the subject prior to activating node 730-1 in the graph. The gate criteria associated with node 730-3 in the graph specify the conditions to be met by the subject before activating node 730-3 in the graph.
In the graph, for each respective node of the plurality of nodes, each respective experience graphic of the corresponding plurality of experience graphics corresponds to a respective suggested experience of the plurality of suggested experiences. For example, as a non-limiting example, experience graphic 742-1 displayed adjacent to node 730-1 in FIG. 7A corresponds to (e.g., represents) experience 24-1 of the first category. Similarly, experience graphic 742-2, experience graphic 742-3, experience graphic 742-4, experience graphic 742-5, and experience graphic 742-6 correspond to experience 24-2, experience 24-3, experience 24-4, experience 24-5, and experience 24-6 of the first category, respectively.
In the graph, for each respective node of the plurality of nodes, each respective experience graph of the corresponding plurality of experience graphs is also associated with at least one biometric threshold of the plurality of biometric thresholds. That is, each respective experience graphic of the plurality of experience graphics is associated with the same biometric threshold(s) as the experience to which it corresponds.
In some embodiments, each respective node of the plurality of nodes is connected to at least one other node in the graph by an edge of the plurality of edges. For example, FIG. 7A illustrates node 730-1 (the first node selected by the user and placed in the graph) connected to node 730-3 (the second node selected by the user and placed in the graph) by edge 744-1 and edge 744-2. Node 730-3 is connected to node 730-2 (the third node selected by the user and placed in the graph) by edge 744-3.
In some embodiments, each respective edge of the plurality of edges represents progress within the graph between the respective initial node and the respective subsequent node in the graph when the subject successfully completes the category represented by the respective initial node (e.g., a required number of corresponding challenges associated with the category represented by the respective initial node). For example, as a non-limiting example, assume node 730-1 is a respective initial node and node 730-3 is a respective subsequent node in the graph. Node 730-1 is associated with six proposed experiences, each associated with a corresponding digital reality scene that presents a corresponding challenge. To proceed to node 730-3, the subject must successfully complete a minimum number (e.g., 3, 4, or 5) of the corresponding challenges associated with node 730-1. In the event that the minimum number of corresponding challenges associated with node 730-1 are not successfully completed, the subject cannot proceed to node 730-3 unless a healthcare worker (e.g., a healthcare practitioner) associated with the subject intervenes (e.g., node 730-3 will not be activated to allow access to the suggested experience associated with node 730-3).
In some embodiments, the graph also includes multiple branches, such as branch 746-1, branch 746-2, and branch 746-3, for each respective node of the multiple nodes. In some embodiments, each experience graph of the plurality of experience graphs is connected to a respective node of the plurality of nodes by a branch of the plurality of branches. For example, as a non-limiting example, FIG. 7A illustrates that each of experience graph 742-1, experience graph 742-2, experience graph 742-3, experience graph 742-4, experience graph 742-5, and experience graph 742-6 is connected to node 730-1 by a branch. Specifically, experience graph 742-1 is connected to node 730-1 through branch 746-1, and experience graph 742-6 is connected to node 730-1 through branch 746-6.
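The node/edge/branch structure described above can be summarized in a small data-structure sketch. The types below are hypothetical; the actual graph described in the disclosure also carries gate-criterion completion status, biometric thresholds per experience graphic, and visual elements.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    category: str                                         # e.g., "740-1"
    experiences: list[str] = field(default_factory=list)  # branches to experience graphics
    gate_min_completed: int = 3                           # gate criterion carried by the node

@dataclass
class Graph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[tuple[str, str]] = field(default_factory=list)  # (initial, subsequent)

g = Graph()
g.nodes["730-1"] = Node("740-1", ["742-1", "742-2", "742-3", "742-4", "742-5", "742-6"])
g.nodes["730-3"] = Node("740-3")
g.edges.append(("730-1", "730-3"))  # traversable once node 730-1's gate is satisfied
```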
It should be noted that the graph may include other, alternative, or additional elements. For example, the graph may include one or more nodes (e.g., node 760 and node 770) that represent other educational or therapeutic challenges, such as cognitive restructuring training, cognitive restructuring challenges, mindfulness training, and alternative/additional exposure exercises. The graph may also include other elements such as landmarks/landscapes, and may be modified or animated. Additional information related to presenting a graph on a display may be found in U.S. provisional patent application No. 63/223,871, which is incorporated herein by reference in its entirety for all purposes.
Block 594. Referring to block 594, in some embodiments, the obtained plurality of suggested experiences associated with the first category is initially arranged in an initial first experience hierarchy, thereby forming an initial first sub-progression (e.g., an initial first experience progression within the first category). For example, in some embodiments, the plurality of suggested experiences obtained in association with the first category includes a first experience 24-1 (e.g., a first exposure experience of meeting a stranger at a first party, or a first CBT experience of restructuring thoughts), a second experience 24-2 (e.g., a second exposure experience of meeting a stranger at a second party, or a second CBT experience of determining the usefulness of thoughts), a third experience 24-3 (e.g., a third exposure experience of meeting a stranger at a wedding banquet, or a third CBT experience of eliminating thoughts), a fourth experience 24-4 (e.g., a fourth exposure experience of meeting a stranger at a work event), a fifth experience 24-5 (e.g., a fifth exposure experience of meeting a stranger on a date), and a sixth experience 24-6 (e.g., a sixth exposure experience of meeting a stranger on an app). In some embodiments, the sixth experience is considered the least challenging for the subject, followed by the third experience, the first experience, the second experience, and the fourth experience. The fifth experience is considered the most challenging for the subject. In such a case, the six experiences are initially arranged in an initial first experience hierarchy (i.e., in the order of the sixth experience, the third experience, the first experience, the second experience, the fourth experience, and the fifth experience). The plurality of experiences arranged in the initial first experience hierarchy forms the initial first sub-progression. However, the present disclosure is not limited thereto.
In some embodiments, the experience graphics corresponding to the plurality of suggested experiences associated with the first category are arranged on the graph in a particular order to represent the initial first experience hierarchy. For example, as a non-limiting example, FIG. 7A illustrates experience graphic 742-6, experience graphic 742-3, experience graphic 742-1, experience graphic 742-2, experience graphic 742-4, and experience graphic 742-5 arranged in a clockwise fashion adjacent to node 730-1. However, the present disclosure is not limited thereto. For example, in some embodiments, the experience graphics are arranged on the graph in a counter-clockwise fashion, in ascending order, or in descending order. In some embodiments, the experience graphics are arranged out of order on the graph, and the initial first experience hierarchy is represented by other indicia such as numbers or text.
In some embodiments, the initial first experience hierarchy is set by (i) a system administrator, (ii) the subject, (iii) a healthcare worker associated with the subject, (iv) a model, or (v) a combination thereof. Generating the initial first experience hierarchy is similar to generating the initial category hierarchy of the exposure progression described above. For example, in some embodiments, whether an experience is considered more or less challenging is determined (i) from an assessment or subjective assessment (e.g., with the assessment module 12 facilitating the acquisition of the assessment/subjective assessment from the subject), and/or (ii) by a healthcare practitioner associated with the subject (e.g., by having the healthcare practitioner evaluate some or all of the assessments obtained from the subject and generate the initial first sub-progression). In some embodiments, the initial first experience hierarchy is generated at least in part by the model based at least in part on the assessment and/or other data from the subject (e.g., the user profile data of fig. 2A). In some embodiments, the healthcare practitioner and the model together generate the initial first sub-progression, such as by having the healthcare practitioner provide input and/or supervision to the model. In some embodiments, the initial first sub-progression is generated by the subject modifying a recommended first sub-progression.
In some embodiments, the plurality of suggested experiences associated with the second category is initially arranged in an initial second experience hierarchy, thereby forming an initial second sub-progression (e.g., an initial experience progression within the second category). In some embodiments, the plurality of suggested experiences associated with each respective category of the plurality of categories is initially arranged in an initial experience hierarchy, thereby forming a plurality of initial sub-progressions.
In some embodiments, each category of the plurality of categories is associated with a unique ranking within the respective experience hierarchy (such as a first unique ranking of predicted efficacy or a second unique ranking of interest to the subject, etc.). For example, in some embodiments, the unique ranking is configured to define relative locations within the hierarchy. In some embodiments, the unique ranking provides an index for each of the plurality of categories such that no two categories have the same rank within the index for the plurality of categories. However, the present disclosure is not limited thereto.
Blocks 596 through 602. Referring to blocks 596 through 602, in some embodiments, the method further includes (S) evaluating whether the suggested experience immediately following the first suggested experience in the initial first sub-progression is suitable for the subject to proceed with next. For example, as a non-limiting example, suppose the subject successfully completes challenge 26-m designed for experience 24-m, and experience 24-n immediately follows experience 24-m in the initial experience hierarchy (e.g., the first sub-progression). The method evaluates whether experience 24-n is suitable for the subject to proceed with next. In some embodiments, the evaluation of whether experience 24-n is suitable is based at least in part on the subject's level of success in challenge 26-m designed for experience 24-m.
In some embodiments, the method further includes (T) presenting, if the immediately following suggested experience in the initial first sub-progression is suitable for the subject to proceed with next, a digital reality scene that presents the challenge designed for that immediately following suggested experience. For example, as a non-limiting example, if it is determined that experience 24-n is suitable for the subject to proceed with next, the method presents digital reality scene 40-n, which presents challenge 26-n, similar to the method disclosed herein and illustrated by at least block 480.
In some embodiments, the method further includes (U) repeating obtaining (C) and determining (D) for the challenge designed for the immediately following suggested experience in the initial first sub-progression. For example, as a non-limiting example, the method obtains a plurality of data elements from all or a subset of the plurality of sensors, similar to the method disclosed herein and exemplified by at least block 482, wherein at least one biometric sensor captures at least one biometric data element associated with the subject while the subject is completing challenge 26-n. Based on the at least one biometric data element captured while the subject is completing challenge 26-n, the method determines whether challenge 26-n was successfully completed, similar to the method disclosed herein and exemplified by at least block 484.
In some embodiments, the method further includes (V) recommending, if the immediately following suggested experience is not suitable for the subject to proceed with next, a suggested experience other than the immediately following suggested experience. For example, as a non-limiting example, if it is determined that experience 24-n is not suitable for the subject to proceed with next, the method recommends experience 24-o for the subject to proceed with next, where experience 24-o is associated with the same category as experience 24-m but does not immediately follow experience 24-m in the initial experience hierarchy. However, the present disclosure is not limited thereto. For example, in some alternative embodiments, the method recommends an experience associated with a category other than that of experience 24-m, an experience not associated with any of the plurality of categories, an educational challenge, a mindfulness challenge, a cognitive restructuring challenge, and the like.
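A minimal sketch of the (S) through (V) flow within a category's experience hierarchy follows, assuming a hypothetical suitable predicate; names are illustrative.

```python
from collections.abc import Callable

def next_experience(hierarchy: list[str], just_completed: str,
                    suitable: Callable[[str], bool]) -> str:
    """Step through the initial first sub-progression, detouring when unsuitable."""
    i = hierarchy.index(just_completed)
    following = hierarchy[i + 1]          # (S) evaluate the immediately following experience
    if suitable(following):
        return following                  # (T)/(U) present it and repeat (C) and (D)
    # (V) recommend another experience in the same category instead
    for candidate in hierarchy[i + 2:] + hierarchy[:i]:
        if suitable(candidate):
            return candidate
    return following

print(next_experience(["24-m", "24-n", "24-o"], "24-m", lambda e: e != "24-n"))  # "24-o"
```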
It should be noted that the processes illustrated in figs. 4A through 4R are not necessarily sequential. For example, in some embodiments, the presentation of the graph (F), as illustrated by at least block 590, is performed prior to the presentation (B) of the first digital reality scene that presents the first challenge designed for the first suggested experience of the first category, as illustrated by at least block 480. In some embodiments, the presentation of the subjective assessment option (O) and the performance of the subjective assessment (P), as illustrated by at least block 582, are performed prior to the obtaining (A) of the plurality of categories for the subject, as illustrated by at least block 432. In some embodiments, the presentation of the subjective assessment option (O) and the performance of the subjective assessment (P), as illustrated by at least block 582, are performed after the repetitions (Q) and (R) for the second category, as illustrated by at least block 586.
It should also be noted that the method may include additional and/or alternative processes illustrated in the flowcharts, in any meaningful and useful combination. For example, in some embodiments, the method includes generating a report for the subject and/or presenting the report to the subject.
It should also be noted that the processes disclosed herein and illustrated in the flowcharts may, but need not, be performed entirely. The subject and/or health care workers associated with the subject may begin, terminate, resume, or restart these processes as needed or desired.
Furthermore, in some embodiments, the present disclosure relates to providing an apparatus for implementing an exposure progression (e.g., client device 300 of fig. 3 and/or digital reality system 200 of figs. 2A and 2B). The apparatus is configured to enhance a subject's ability to manage a mental disorder or condition of the subject. The apparatus includes one or more processors and a memory coupled to the one or more processors. The memory includes one or more programs configured to be executed by the one or more processors, which cause a computer system to perform the methods of the present disclosure. In some embodiments, the apparatus includes a display and/or audio circuitry (e.g., a speaker). In some embodiments, the apparatus includes an objective lens in optical communication with a two-dimensional pixelated detector.
Referring to figs. 10A, 10B, and 10C, various therapeutic exposure progressions are depicted for treating an individual (e.g., an individual 18 years old or older) having a mental health problem (such as social anxiety disorder or major depressive disorder) with the devices, systems, and/or methods of the present disclosure. In some embodiments, the exposure progression is structured to be used by a subject (e.g., a patient) at home while giving the healthcare professional (e.g., a clinician) associated with the subject the ability to asynchronously monitor and adjust the exposure progression for the subject (e.g., if needed). In some embodiments, use of the systems and/or methods of the present disclosure provides treatment of a mental or psychiatric condition exhibited by the subject. That is, in some embodiments, the present disclosure includes methods of treating a psychotic disorder or condition by using the systems and/or methods of the present disclosure. In some embodiments, the method of treating a psychotic disorder or condition comprises combination therapy and/or one or more adjuvant therapies with any psychiatric drug (e.g., a pharmaceutical composition administered to the subject to treat the psychotic disorder or condition exhibited by the subject). In some embodiments, the pharmaceutical composition comprises at least one selected from the group consisting of: a selective serotonin reuptake inhibitor (SSRI) pharmaceutical composition; a serotonin and norepinephrine reuptake inhibitor (SNRI) pharmaceutical composition; a norepinephrine-dopamine reuptake inhibitor (NDRI) pharmaceutical composition; an N-methyl-D-aspartate receptor antagonist pharmaceutical composition; a serotonergic pharmaceutical composition; a tricyclic antidepressant pharmaceutical composition; a monoamine oxidase inhibitor (MAOI) pharmaceutical composition; a tetracyclic antidepressant pharmaceutical composition; an L-methylfolate pharmaceutical composition; a benzodiazepine pharmaceutical composition; and a beta blocker pharmaceutical composition. In some embodiments, the pharmaceutical composition comprises at least one selected from the group consisting of: chlorpromazine, perphenazine, trifluoperazine, mesoridazine, fluphenazine, thiothixene, molindone, thioridazine, loxapine, haloperidol, aripiprazole, clozapine, ziprasidone, risperidone, quetiapine, olanzapine, citalopram, escitalopram, fluvoxamine, paroxetine, fluoxetine, sertraline, clomipramine, amoxapine, amitriptyline, desipramine, nortriptyline, doxepin, trimipramine, promethazine, protriptyline, desvenlafaxine, venlafaxine, duloxetine, lorazepam, buspirone, propranolol, clonazepam, chlordiazepoxide, oxazepam, atenolol, clorazepate, diazepam, alprazolam, amphetamine, dextroamphetamine, methylphenidate, lamotrigine, ketamine, and lithium.
In some embodiments, the exposure progression includes a plurality of categories, each of which relates to enhancing an ability of the subject, such as the ability to cognitively restructure thoughts, the ability to tolerate exposure to stress, or the ability to eliminate unhelpful thoughts. Further, each category is associated with a suggested experience that presents a corresponding challenge in a corresponding digital reality scene associated with the suggested experience.
For example, in some embodiments, a first category of the exposure progression is associated with a plurality of educational experiences designed to educate the subject (such as a first educational experience educating the subject on long-term goal setting and a second educational experience educating the subject on short-term goal setting). In some embodiments, an educational experience presents challenges in the corresponding digital reality scene through one or more psychoeducational interactive challenges. In some embodiments, the one or more psychoeducational interactive challenges assist the subject in understanding potential psychosocial driving factors of their mental and behavioral health. In some embodiments, exposure to educational experiences, such as by completing challenges, provides psychoeducational material that supports, or provides the subject with, a solid foundation for and understanding of effective transdiagnostic therapy.
In some embodiments, the exposure progression is configured (e.g., prescribed) by a qualified healthcare professional as a dose of one or more challenges per time period. In some embodiments, the time period may be one day, two days, three days, four days, five days, one week, two weeks, three weeks, one month, or more than one month. In some embodiments, the period of time is between 1 hour and 1 year, between 1 hour and 6 months, between 1 hour and 1 month, between 1 hour and 2 weeks, between 1 hour and 1 week, between 1 hour and 1 day, between 1 hour and 12 hours, between 6 hours and 1 year, between 6 hours and 6 months, between 6 hours and 1 month, between 6 hours and 2 weeks, between 6 hours and 1 week, between 6 hours and 1 day, between 6 hours and 12 hours, between 1 day and 1 year, between 1 day and 6 months, between 1 day and 1 month, between 1 day and 2 weeks, between 1 day and 1 week, between 5 days and 1 year, between 5 days and 6 months, between 5 days and 1 month, between 5 days and 2 weeks, between 5 days and 1 week, between 30 days and 1 year, between 30 days and 6 months, or between 30 days and 1 month. In some embodiments, the period of time is at least 1 hour, at least 6 hours, at least 12 hours, at least 1 day, at least 2 days, at least 5 days, at least 14 days, at least 20 days, at least 30 days, at least 31 days, at least 60 days, at least 2 months, at least 3 months, at least 4 months, at least 5 months, at least 6 months, at least 1 year, or at least 2 years. In some embodiments, the period of time is at most 1 hour, at most 6 hours, at most 12 hours, at most 1 day, at most 2 days, at most 5 days, at most 14 days, at most 20 days, at most 30 days, at most 31 days, at most 60 days, at most 2 months, at most 3 months, at most 4 months, at most 5 months, at most 6 months, at most 1 year, or at most 2 years. In some embodiments, the period of time is the duration over which the subject interacts with or is presented with the digital reality scene. However, the present disclosure is not limited thereto. The period of time used by one challenge may be the same as or different from the period of time used by another challenge. As a non-limiting example, figs. 10A and 10B illustrate an overview of a planned exposure progression 2300 over eight weeks, with three challenges per week. In some embodiments, the exposure progression includes one or more psychoeducational challenges, one or more exposure challenges (e.g., social challenge practices), one or more mindfulness challenges, one or more cognitive restructuring challenges, one or more goal-setting challenges, or a combination thereof. In some embodiments, the exposure progression is assigned or divided into a plurality of chapters.
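As a concrete, hypothetical encoding of such a prescription (e.g., three challenges per week for eight weeks, as in figs. 10A and 10B; field names are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class Prescription:
    challenges_per_period: int   # e.g., 3 challenges ...
    period: timedelta            # ... per one-week period ...
    total_periods: int           # ... for 8 weeks

rx = Prescription(challenges_per_period=3, period=timedelta(weeks=1), total_periods=8)
print(rx.challenges_per_period * rx.total_periods)  # 24 challenges overall
```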
In some embodiments, the plurality of chapters includes between 1 and 100 chapters, between 2 and 50 chapters, between 2 and 30 chapters, between 2 and 24 chapters, between 2 and 20 chapters, between 2 and 12 chapters, between 2 and 5 chapters, between 3 and 100 chapters, between 3 and 50 chapters, between 3 and 30 chapters, between 3 and 24 chapters, between 3 and 20 chapters, between 3 and 12 chapters, between 3 and 5 chapters, between 5 and 100 chapters, between 5 and 50 chapters, between 5 and 30 chapters, between 5 and 24 chapters, between 5 and 20 chapters, between 5 and 12 chapters, between 10 and 100 chapters, between 10 and 50 chapters, between 10 and 30 chapters, between 10 and 24 chapters, between 10 and 20 chapters, between 10 and 12 chapters, between 18 and 100 chapters, between 18 and 50 chapters, between 18 and 30 chapters, between 18 and 24 chapters, or between 18 and 20 chapters. In some embodiments, the plurality of chapters includes at least 1 chapter, at least 3 chapters, at least 5 chapters, at least 6 chapters, at least 12 chapters, at least 14 chapters, at least 20 chapters, at least 24 chapters, at least 30 chapters, at least 31 chapters, or at least 60 chapters. In some embodiments, the plurality of chapters includes at most 1 chapter, at most 3 chapters, at most 5 chapters, at most 6 chapters, at most 12 chapters, at most 14 chapters, at most 20 chapters, at most 24 chapters, at most 30 chapters, at most 31 chapters, or at most 60 chapters.
Some of the psychoeducation, social challenge practices, mindfulness practices, cognitive restructuring practices, and goal-setting activities are required, and some are optional. In some embodiments, the social challenge practices are always required.
In some embodiments, the exposure progression is configured to prevent the subject from moving faster than prescribed (e.g., at most three challenges a week). In various embodiments, the exposure progression is configured to allow the subject to slow his/her cadence, if selected by the subject or if suggested by the healthcare professional associated with the subject, such as by having the subject repeat the first experience and/or the first challenge. For example, in some embodiments, the exposure progression is configured to allow the subject, if the subject so chooses, to complete a challenge over about one week, about two weeks, or more than two weeks. In various embodiments, the exposure progression is further configured to allow the subject to return to the exposure progression and engage with optional content at any time the subject selects. For example, if the subject completes the required content of the first chapter in the morning and wants to return to the exposure progression for mindfulness practice later in the day, the exposure progression is configured to allow the subject to access the mindfulness practice of the first chapter. However, the present disclosure is not limited thereto.
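A minimal sketch of the pacing rule follows; it enforces only the prescribed maximum per period, so slowing down or repeating is always permitted. The function and parameter names are hypothetical.

```python
from datetime import datetime, timedelta

def may_start_challenge(completed_at: list[datetime], now: datetime,
                        per_period: int = 3,
                        period: timedelta = timedelta(weeks=1)) -> bool:
    """Allow a new challenge only if fewer than `per_period` were completed this period."""
    recent = [t for t in completed_at if now - t < period]
    return len(recent) < per_period
```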
In some embodiments, a client application, such as companion application 2100, is provided to, or is accessible by, the subject and/or the healthcare professional. In some embodiments, the client application 2100 includes one or more functions that are alternative or additional to the functions of the exposure progression. In some embodiments, the client application is used by a healthcare professional to prescribe the subject's exposure progression; by the subject to track his/her progress, record his/her mood and thoughts, and add short-term goals; by a healthcare professional associated with the subject to monitor the subject's progress and revise the exposure progression as needed; or any combination thereof.
In some embodiments, once in the exposure progression (e.g., once the subject initiates the exposure progression, such as by interacting with the first experience and/or the first challenge), the subject may move through the exposure progression as prescribed by the healthcare professional. For example, in some embodiments, once the subject has registered for the exposure progression, has a unique PIN, and has synchronized his/her headset, the subject may begin the experience in virtual/digital reality. In some embodiments, when the subject begins the experience in digital reality, the subject immediately finds himself/herself in a DR environment (such as DR environment 1000, a scenic setting called the lakeside cabin). In some embodiments, while in the DR environment 1000 (e.g., the lakeside cabin), the subject may explore and/or roam around the DR environment 1000, become familiar with the surroundings, and/or select an avatar to represent himself/herself during the exposure progression.
In some embodiments, when the subject is ready to begin the exposure progression, a DR assistant (such as DR assistant 1100) appears (e.g., by knocking) and the first chapter of required content begins. In some embodiments, the DR assistant guides the subject to his/her first experience and/or first challenge, such as a psychoeducational challenge. Psychoeducation may occur in any suitable DR area within the exposure progression and in any suitable format. For example, in some embodiments, psychoeducation occurs primarily in a designated area 1010 called a theater or education room, and is displayed as video on a DR object (such as a TV screen).
In some embodiments, only one psychoeducational experience is available in the first chapter, but throughout the exposure progression the subject will experience several psychoeducational videos. In some embodiments, the topics of the psychoeducational videos include, but are not limited to, education regarding: (i) thoughts, feelings, actions, and cognitive behavioral therapy; (ii) mindfulness; (iii) goal setting; (iv) exposure therapy; (v) cognitive restructuring relating emotion and behavior; (vi) the different types of cognitive distortion; (vii) the evidence-gathering cognitive restructuring technique; (viii) the usefulness cognitive restructuring technique; and/or (ix) maintenance in readiness for graduation from the exposure progression.
In some embodiments, a psychoeducational video may be a short video or a long video. For example, in some embodiments, a psychoeducational video may last about one minute, about two minutes, about three minutes, about four minutes, about five minutes, about six minutes, about seven minutes, about eight minutes, or more than eight minutes. In some embodiments, each psychoeducational video takes about three to five minutes to complete.
In some embodiments, the subject learns and experiences mindfulness in the second chapter. Mindfulness is an option available to the subject at any time from the second chapter onward, and the subject will have access to an appropriate number of mindfulness practices. For example, a subject may have access to more than two, more than four, more than six, more than eight, more than ten, more than twelve, more than fourteen, more than sixteen, more than eighteen, or more than twenty mindfulness practices. In some embodiments, the subject may select the voice (e.g., male or female) that the subject prefers to guide him/her through the practices, and/or the location (e.g., place or environment) that the subject wants to experience during his/her mindfulness practice. As a non-limiting example, FIG. 14 illustrates allowing the subject to select a location among a plurality of locations (e.g., locations 2410-1, 2410-2) in which to practice mindfulness.
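As a non-limiting illustration of how such preferences might be captured, the following sketch uses assumed names (MindfulnessPreferences, the voice and location identifiers) that are not part of the disclosure:

```python
# Illustrative capture of the subject's mindfulness-practice preferences:
# a guiding voice and a practice location chosen from the available options.
from dataclasses import dataclass

AVAILABLE_VOICES = {"male", "female"}
AVAILABLE_LOCATIONS = {"2410-1", "2410-2"}  # e.g., the locations of FIG. 14

@dataclass
class MindfulnessPreferences:
    voice: str
    location_id: str

def choose_preferences(voice: str, location_id: str) -> MindfulnessPreferences:
    # Reject selections that are not among the offered options.
    if voice not in AVAILABLE_VOICES or location_id not in AVAILABLE_LOCATIONS:
        raise ValueError("selection is not among the available options")
    return MindfulnessPreferences(voice=voice, location_id=location_id)
```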
In some embodiments, the third chapter relates to goal setting. In some embodiments, the third chapter includes education as to why goal setting is important and beneficial, and teaches goal setting to the subject through one or more interactive activities. In some embodiments, goal setting is done in the exposure progression 2300 or in the client application 2100. As a non-limiting example, in some embodiments, long-term goal setting is done in the exposure progression (e.g., using DR journey object 1022 in the study room), and short-term goal setting is done in the client application (e.g., typed or recorded). As another non-limiting example, in some embodiments, both long-term and short-term goal setting are done in the client application. As a further non-limiting example, in some embodiments, both long-term and short-term goal setting are done in the exposure progression. In some embodiments, the subject will set his/her long-term goals (e.g., three long-term goals) using DR journey object 1022 in the study room of the lakeside cabin. In some embodiments, the subject will use the client application 2100 to set his/her short-term goals.
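As a non-limiting illustration, the sketch below models the split described above, with long-term goals originating in the DR environment and short-term goals in the client application; all field names and example goals are assumptions:

```python
# Illustrative goal model: horizon distinguishes long- from short-term goals,
# and source records where the goal was set (DR environment or client app).
from dataclasses import dataclass, field

@dataclass
class Goal:
    text: str
    horizon: str  # "long_term" or "short_term"
    source: str   # "dr_environment" or "client_app"

@dataclass
class GoalSet:
    goals: list[Goal] = field(default_factory=list)

    def add(self, text: str, horizon: str, source: str) -> None:
        if horizon not in ("long_term", "short_term"):
            raise ValueError("unknown goal horizon")
        self.goals.append(Goal(text, horizon, source))

goals = GoalSet()
# e.g., one of three long-term goals set via the DR journey object 1022:
goals.add("Give a toast at a family event", "long_term", "dr_environment")
# e.g., a short-term goal typed into the client application 2100:
goals.add("Ask a stranger for directions", "short_term", "client_app")
```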
In some embodiments, the exposure therapy includes a plurality of social challenges personalized by the subject while in the exposure progression and/or by a healthcare professional associated with the subject. For example, in some embodiments, the subject is guided by the DR assistant to personalize how the subject will move through his/her exposures by setting a hierarchy of his/her different fear categories. The fear hierarchy may include two, three, four, five, six, seven, eight, nine, ten, or more than ten fear categories. As a non-limiting example, FIGS. 12A and 12B illustrate a hierarchy with three different fear categories (general performance, confidence, and interaction with strangers).
In some embodiments, the fear hierarchy for the social challenges is set by the subject in a designated area within the exposure progression, such as designated area 1020 in interactive DR environment 1000 (e.g., the study room in the lakeside cabin). To assist the subject in setting the fear hierarchy, in some embodiments, the designated area includes a plurality of DR category objects each representing a fear category, and a DR hierarchy object on which the subject places the selected fear categories in order, thereby forming the fear hierarchy. The plurality of DR category objects and the DR hierarchy object may be configured to simulate any real or imaginary, present or absent item, device, image, text, symbol, cartoon, or the like. By way of non-limiting example, FIG. 12A illustrates a DR hierarchy object 2610 simulating a ladder and a plurality of DR category objects (e.g., category object 2620-1, category object 2620-2, and/or category object 2620-3) simulating placards. In some embodiments, the fear title 2622 and/or icon 2624 of each category is displayed on the corresponding placard.
In some embodiments, the DR assistant walks the subject through the process of building his/her fear hierarchy, explaining that the subject will want to arrange the categories so that the subject can work his/her way up the ladder toward his/her most feared social fear category. The subject selects a placard and hangs the selected placard on a rung. The least feared category is placed at the lowest rung of the ladder and the most feared category is placed at the highest rung. In some embodiments, the DR assistant asks the subject to confirm the selections before the fear hierarchy is finalized.
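As a non-limiting illustration, ordering the placards on the ladder amounts to sorting the categories by the subject's fear ratings; in the sketch below, the class name, the numeric ratings, and the sorting rule are assumptions rather than the disclosed mechanism:

```python
# Illustrative fear-hierarchy construction: least feared category on the
# lowest rung, most feared on the highest.
from dataclasses import dataclass

@dataclass
class FearCategory:
    title: str        # e.g., the fear title 2622 shown on a placard
    fear_rating: int  # subject-assigned intensity; higher = more feared

def build_hierarchy(categories: list[FearCategory]) -> list[FearCategory]:
    """Return categories ordered from least to most feared."""
    return sorted(categories, key=lambda c: c.fear_rating)

ladder = build_hierarchy([
    FearCategory("Interaction with strangers", 8),
    FearCategory("General performance", 3),
    FearCategory("Confidence", 5),
])
# ladder[0] sits on the lowest rung; ladder[-1] is the most feared category.
```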
The fear hierarchy of social challenges may be set in other ways, such as those disclosed in the following documents: U.S. provisional patent application Ser. No. 63/223,871, filed 7/20/2021; U.S. provisional patent application Ser. No. 63/284,862 filed on 1/12/2021; and U.S. patent application Ser. No. 17/869,670 filed on 7/20/2022, each of which is incorporated herein by reference in its entirety for all purposes.
In some embodiments, the subject starts from his/her least feared category and works his/her way up to his/her most feared category. In some embodiments, each category is mapped to one or more interactive challenges. In some embodiments, the challenges are set in a variety of different experiences, implemented through digital reality scenarios known to trigger the mental disease or condition exhibited by the subject. For example, in some embodiments, for subjects with social anxiety disorder, exposure challenges are set in school cafeterias, classrooms, job interviews, park appointments, airport trips, and/or home BBQs/parties. In some embodiments, the subject must practice the exposure challenges multiple times (e.g., at least 5 times, at least 10 times, at least 15 times, at least 20 times, at least 25 times, at least 30 times, at most 5 times, at most 10 times, at most 15 times, at most 20 times, at most 25 times, at most 30 times, between 5 and 20 times, between 10 and 30 times, or between 10 and 20 times) as the subject moves through the exposure progression and works his/her way up the fear ladder.
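As a non-limiting illustration, the mapping from fear categories to challenges and the rule that a category is practiced before the subject moves up could be sketched as follows; the mapping, function names, and the required-practice count of five are assumptions:

```python
# Illustrative category-to-challenge mapping and progression rule: the
# subject advances only after practicing the current category enough times.
CHALLENGES_BY_CATEGORY = {
    "General performance": ["school cafeteria", "classroom"],
    "Confidence": ["job interview", "airport trip"],
    "Interaction with strangers": ["park appointment", "home BBQ/party"],
}

def next_category(hierarchy: list[str], practice_counts: dict[str, int],
                  required_practices: int = 5) -> str | None:
    """Return the lowest-rung category still needing practice, or None
    once the subject has worked through the whole hierarchy."""
    for category in hierarchy:  # ordered least feared first
        if practice_counts.get(category, 0) < required_practices:
            return category
    return None

hierarchy = ["General performance", "Confidence", "Interaction with strangers"]
print(next_category(hierarchy, {"General performance": 5}))  # "Confidence"
```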
In some embodiments, before and/or after a challenge, as the subject moves through the exposure progression, the subject is asked to complete an assessment, such as selecting his/her Subjective Units of Distress Scale (SUDS) rating to track his/her stress level from the challenge. In some embodiments, before and/or after each challenge, the subject is asked to select his/her SUDS rating to track his/her stress level from the challenge as the subject moves through the exposure progression. SUDS is a self-assessment tool for measuring the intensity of anxiety, anger, agitation, stress, or other feelings, and is typically rated on a scale from a first number to a second number. As a non-limiting example, in some embodiments, the SUDS is rated on a scale from 0 (e.g., no distress at all) to 10 (e.g., very high distress). In some embodiments, the SUDS rating is self-administered by the subject without the supervision of a healthcare worker (e.g., a healthcare practitioner) associated with the subject. In some other embodiments, the SUDS rating is made by the subject but is supervised by a healthcare worker associated with the subject. In some embodiments, the assessment includes a GAD assessment (e.g., a GAD-2 assessment) and/or a PHQ assessment (e.g., a PHQ-2 assessment).
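As a non-limiting illustration, a pre/post SUDS capture on the 0-to-10 scale described above might look like the sketch below; the record layout and field names are assumptions:

```python
# Illustrative pre/post-challenge SUDS record on a 0 (no distress) to
# 10 (very high distress) scale.
from dataclasses import dataclass

@dataclass
class SudsRecord:
    challenge_id: str
    pre: int
    post: int

    def __post_init__(self) -> None:
        for rating in (self.pre, self.post):
            if not 0 <= rating <= 10:
                raise ValueError("SUDS ratings must be between 0 and 10")

    @property
    def change(self) -> int:
        """Negative values indicate distress fell over the challenge."""
        return self.post - self.pre

record = SudsRecord(challenge_id="classroom", pre=7, post=4)
print(record.change)  # -3: the subject's distress decreased
```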
In some embodiments, the subject begins to learn and practice CBT techniques in the fourth chapter. In some embodiments, the CBT experiences occur in a digital reality scenario simulating a forest environment. In some embodiments, each experience is configured to facilitate a combination of interactive psychoeducational challenges (such as those guided by a digital reality host) and practice challenges. For example, in some embodiments, the fifth chapter is associated with a first CBT experience associated with a first challenge that lets the subject understand how thoughts, moods, and behaviors are tied together. In some embodiments, the sixth chapter is associated with a second CBT experience associated with a second challenge that has the subject label cognitive distortions with their different distortion types. In some embodiments, the seventh chapter is associated with a third CBT experience associated with a third challenge, an evidence-gathering exercise that helps the subject record, word for word, evidence supporting and countering the cognitive distortion associated with a thought. In some embodiments, the eighth chapter is associated with a fourth CBT experience associated with a fourth challenge in which the subject assesses the degree to which certain thoughts are useful for achieving his/her short-term and/or long-term goals.
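As a non-limiting illustration, the chapter-to-experience pairing described above can be summarized in a small table; the dictionary below paraphrases those embodiments and is not a definitive schema:

```python
# Illustrative mapping of chapters to CBT experiences and their challenges.
CBT_CHAPTERS = {
    5: ("first CBT experience",
        "see how thoughts, moods, and behaviors are tied together"),
    6: ("second CBT experience",
        "label cognitive distortions with their distortion types"),
    7: ("third CBT experience",
        "collect evidence supporting and countering a distorted thought"),
    8: ("fourth CBT experience",
        "rate how useful a thought is for short- and long-term goals"),
}

for chapter, (experience, challenge) in sorted(CBT_CHAPTERS.items()):
    print(f"Chapter {chapter}: {experience}: {challenge}")
```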
Accordingly, the present disclosure provides personalized exposure therapy through digital reality to enhance a subject's ability to manage a mental or psychotic condition exhibited by the subject.
Cited references and alternative examples
All references cited herein are incorporated by reference in their entirety for all purposes as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference in its entirety for all purposes.
The present invention may be implemented as a computer program product comprising a computer program mechanism embedded in a non-transitory computer readable storage medium. For example, a computer program product may contain instructions for operating the user interface disclosed herein. These program modules may be stored on a CD-ROM, DVD, magnetic disk storage product, USB key, or any other non-transitory computer readable data or program storage product.
As will be apparent to those skilled in the art, many modifications and variations of the present invention can be made without departing from its spirit and scope. The specific embodiments described herein are offered by way of example only. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. The invention is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (92)

50. The method of claim 49, wherein during the first digital reality scenario exhibiting the first challenge designed for the first suggested experience associated with the first category, one of the first and third biometric thresholds is (i) a desired minimum change in a number of words compared to a word baseline of a subject, (ii) a desired minimum change in a number of utterances compared to a speech baseline of the subject, (iii) a desired minimum change in confidence compared to a confidence baseline of the subject, (iv) a desired minimum change in decibel level of the subject compared to a decibel level baseline of the subject, and/or (v) a desired minimum change in pitch compared to a pitch baseline of the subject, and the other of the first and third biometric thresholds is a desired minimum change in length of eye contact compared to an eye contact baseline of the subject.
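As a non-limiting illustration of the threshold logic recited in claim 50, each biometric threshold can be read as a required minimum change relative to the subject's own baseline; the function below is a sketch under that reading, and its names and example numbers are assumptions:

```python
# Illustrative baseline-relative threshold check: a measurement (word count,
# utterances, decibel level, pitch, or eye-contact length) satisfies the
# threshold if it changed from the subject's baseline by at least the
# required minimum amount.
def meets_threshold(measured: float, baseline: float,
                    required_min_change: float) -> bool:
    return (measured - baseline) >= required_min_change

# Example: an eye-contact threshold requiring at least 2.0 s more eye
# contact than the subject's 5.0 s baseline.
print(meets_threshold(measured=7.5, baseline=5.0, required_min_change=2.0))  # True
```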
CN202280090215.XA; priority date: 2021-12-01; filing date: 2022-12-01; title: Management of psychosis or psychiatric conditions using digital or augmented reality with personalized exposure progression; status: Pending; publication: CN118660667A (en)

Applications Claiming Priority (4)

US 63/284,862; priority date: 2021-12-01
US 202263415876P; priority date: 2022-10-13; filing date: 2022-10-13
US 63/415,876; priority date: 2022-10-13
PCT/US2022/051549; filing date: 2022-12-01; publication: WO2023102125A1, Management of psychiatric or mental conditions using digital or augmented reality with personalized exposure progression

Publications (1)

Publication number: CN118660667A; publication date: 2024-09-17

Family

Family ID: 92704475

Family Applications (1)

CN202280090215.XA; title: Management of psychosis or psychiatric conditions using digital or augmented reality with personalized exposure progression; priority date: 2021-12-01; filing date: 2022-12-01; status: Pending

Country Status (1)

Country: CN; publication: CN118660667A (en)


Legal Events

Code: PB01; Event: Publication
Code: SE01; Event: Entry into force of request for substantive examination
Code: TA01; Event: Transfer of patent application right; effective date of registration: 2024-09-20
Applicants after transfer: Bechville LLC (Kentucky, U.S.A.); FrontAct Corp. (Japan)
Applicants before transfer: Bechville LLC (U.S.A.); SUMITOMO PHARMACEUTICALS CO., LTD. (Japan)
