CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of and claims priority to U.S. Application No. 16/775,015, filed on Jan. 28, 2020 and entitled “SYSTEM FOR PROVIDING A VIRTUAL FOCUS GROUP FACILITY,” the entirety of which is incorporated herein by reference.
BACKGROUND

Today, many industries, companies, and individuals rely upon physical focus group facilities, including a test room and an adjacent observation room, to perform product and/or market testing. These facilities typically separate the two rooms by a wall having a one-way mirror to allow individuals within the observation room to watch proceedings within the test room. Unfortunately, the one-way mirror requires those individuals to remain quiet and in poorly lit conditions. Additionally, an individual observing the proceedings is required either to be physically present at the facility or to rely on a written report or summary of the proceeding when making final product-related decisions.
BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
FIG. 1 illustrates an example architecture of a virtual focus group platform according to some implementations.
FIG. 2 illustrates an example pictorial view of a test subject participating in a session facilitated by a virtual focus group platform according to some implementations.
FIG. 3 illustrates an example pictorial view of a moderator participating in a session facilitated by a virtual focus group platform according to some implementations.
FIG. 4 illustrates an example pictorial view of a client group observing a session facilitated by a virtual focus group platform according to some implementations.
FIG. 5 illustrates an example flow diagram showing an illustrative process for providing a virtual focus group according to some implementations.
FIG. 6 illustrates an example flow diagram showing an illustrative process for providing a virtual focus group according to some implementations.
FIG. 7 illustrates an example flow diagram showing an illustrative process for providing a virtual focus group according to some implementations.
FIG. 8 illustrates an example platform for providing a virtual focus group according to some implementations.
FIG. 9 illustrates an example test subject system associated with the platform of FIG. 8 according to some implementations.
FIG. 10 illustrates an example moderator system associated with the platform of FIG. 8 according to some implementations.
FIG. 11 illustrates an example client system associated with the platform of FIG. 8 according to some implementations.
DETAILED DESCRIPTION

Described herein are devices and techniques for providing a virtual focus group facility via a cloud-based platform. The focus group platform discussed herein replicates and enhances the one-way mirror experience of being physically present within a research environment by removing the geographic limitations of traditional focus group facilities and by augmenting data collection and consumption via a virtual glass experience for the end client and real-time analytics. For example, the virtual glass may allow a moderator or other administrator to view a focus group or other proceeding via a live stream of the audio and video data while augmenting the viewing experience by displaying or presenting additional data related to the focus group in conjunction with, or superimposed on, the audio and video stream. In some cases, the augmented data may be displayed over the live stream, such as in conjunction with the individual test subject to which the data relates. In other cases, the platform may allow for a multi-device viewing experience in which one device displays the augmented live stream while an auxiliary or secondary device allows individual viewers of the live stream to annotate the live stream, receive and/or annotate a substantially real-time transcript of the live stream, chat or otherwise discuss the proceeding with other viewers, label individual test subjects, and so forth. Thus, the platform discussed herein creates a focus group experience that enhances the experience for each of the actors, including the moderator, the test subjects, and the clients viewing the sessions.
In some implementations, the platform may include a test subject system or service, a moderator system or service, and a client system or service comprising one or more components or devices. Each of the systems or services may be accessible via one or more electronic devices, such that the test subjects, moderators, and/or clients may be physically remote from each other during a session. For example, the moderator may be located at their office or place of work and the test subject may be located within their home (e.g., to provide increased comfort and/or a test environment that is more representative of a real-life situation than a physical test room). The clients may also include a plurality of employees or individuals that may observe the proceedings of the session from multiple physical locations, as is common in today’s international corporate environment. For example, a first individual client may be located in New York City and a second individual client may be located in San Francisco, both of whom are able to participate in the session without incurring the costs and disruption of traveling.
In some cases, the moderator may be able to communicate (e.g., via text, audio, and/or video) with one or more test subjects via the platform. For example, the moderator may be able to pose questions, present stimuli (e.g., images, text, audio, or other content), or otherwise communicate with one or more test subjects. For instance, the moderator may be able to cause audio and/or visual content to be displayed on a test subject device while asking the test subject to rate an emotional state or feeling invoked by the presented content, in a manner similar to displaying content in a shared physical test room. In some situations, the moderator may be in communication with a single test subject to replicate a traditional physical test room situation. However, the platform may allow the moderator to communicate with multiple test subjects substantially simultaneously without each of the test subjects being aware of the others. For example, since the test subjects may be located at physically distant facilities (e.g., within their homes), the platform may allow the moderator to provide content to each subject’s electronic device and to ask each subject the same or similar questions. In this manner, the platform allows for a one-on-one experience for the test subject but also allows the moderator to test multiple subjects substantially simultaneously, thereby reducing the overall costs associated with conventional product and/or market testing.
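As one non-limiting illustration of this one-to-many delivery, the following minimal sketch fans a single stimulus out to each subject device independently. The session object, the subject_devices collection, and the device send() interface are hypothetical names assumed for illustration and are not defined by this description.

```python
from dataclasses import dataclass


@dataclass
class Stimulus:
    content_id: str  # e.g., an identifier for an advertisement video clip
    prompt: str      # the question posed alongside the content


def broadcast(session, stimulus: Stimulus) -> None:
    """Send the same stimulus to every subject device in a session.

    Each subject receives the stimulus on their own device, so subjects
    remain unaware of one another while being tested in parallel.
    """
    for device in session.subject_devices:  # assumed collection of devices
        device.send(stimulus)               # assumed device messaging call
```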
The platform also improves the overall experience of the clients observing the session. For example, the platform may replicate the experience of a one-way mirror by capturing image (e.g., video) and audio data from each of the test subjects as well as the moderator (for instance, via cameras associated with the test subject devices and/or moderator electronic devices) and presenting the image and audio data to each of the clients via a first device (e.g., a television). In this manner, the television may act as a virtual glass for the clients to view the session. In addition to replicating the one-way mirror of the conventional facility, by utilizing a virtual glass the clients are no longer required to sit in a poorly lit room or to maintain a quiet atmosphere (e.g., if two clients are co-located, they may discuss the session in real time rather than simply taking notes to discuss later). Similarly, the test subjects’ experience is also improved, as the test subjects are no longer required to sit in a mirrored room that may feel like an interrogation chamber. In some cases, the improved test subject experience also translates directly into improved results and better data collection. Thus, the platform described herein is able to improve upon the conventional focus group facility not only by reducing costs, but also by improving the user experience and facility conditions.
In some implementations, in addition to collecting image and audio data from the test subjects, the platform may also be configured to capture biometric data related to the test subject, such as heartbeat/heartrate data, brain activity, temperature, type and amount of motion (e.g., whether the test subject is fidgeting, walking, standing, sitting, etc.), and focus or eye movement data, among others. The platform may be configured to analyze the captured biometric data for each test subject and to generate various status indicators that may be presented to the moderator and/or the clients. In some cases, the types of status indicators or the amount of data presented to the moderator may differ from the status indicators or amount of data presented to the clients, to assist the moderator in quickly analyzing and understanding conditions associated with the test subjects. For example, the status indicators for the moderator may include colors, ratings, or icons, such as red for negative mood, green for positive mood, a smiley face for happy, a laughing face for amused, etc. The status indicators for the clients may be more detailed and include brain activity, blink rate, facial expression analysis, voice analytics, electroencephalography (EEG) sentiment analysis, visual fixation rate, eye position or eye movement/tracking analysis, galvanic skin response, response latency, body posture analysis, and/or heart rate graphs to further show a subject’s response to various stimuli.
In some examples, the platform may also capture and collect environmental data (e.g., room temperature, background noise, other individuals in the environment, etc.). The environmental data may be used in conjunction with the biometric data to inform the status indicators. For example, if a room is too hot, the platform may lower one or more thresholds associated with the biometric data, such that assigning a positive attitude to the test subject requires meeting a lower threshold than when the test subject is in a comfortable temperature zone.
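By way of non-limiting illustration, the sketch below maps an assumed composite biometric score to a coarse color indicator while relaxing the positive-mood threshold when the room temperature falls outside a comfortable band. The field names, the 0.6 baseline, and the adjustment amounts are arbitrary assumptions, not values prescribed by this description.

```python
def status_indicator(biometrics: dict, environment: dict) -> str:
    """Map biometric readings to a coarse status indicator, adjusting the
    positive-mood threshold when environmental conditions are uncomfortable."""
    threshold = 0.6  # assumed baseline score required for a "positive" rating
    room_temp = environment.get("room_temp_c", 21.0)
    if not 18.0 <= room_temp <= 26.0:
        threshold -= 0.1  # uncomfortable room: require less evidence of positivity

    # Assumed composite score derived from heartrate, eye movement, etc.
    score = biometrics.get("engagement_score", 0.0)
    if score >= threshold:
        return "green"   # positive mood
    if score >= threshold - 0.2:
        return "yellow"  # neutral or calm
    return "red"         # negative mood
```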
In some implementations, the platform may also process the image data and/or audio data to supplement or assist with generating the status indicators. For example, the platform may detect facial expressions as the subject responds to stimuli presented on the subject device. In another example, the platform may detect focus or eye movement in relation to the content on the subject's electronic device, such as to determine a portion of the content attracting the subject’s focus. In some implementations, the platform may also perform speech-to-text conversion in substantially real time on audio data captured from the moderator and/or each test subject. In these implementations, the platform may also utilize text analysis and/or machine learned models to assist in generating the status indicators. For example, the platform may perform sentiment analysis that may include detecting use of negative words and/or positive words and, together with the image processing and biometric data processing, generate more informed status indicators. In some cases, the platform may aggregate or perform analysis over multiple test subjects. For instance, the platform may detect similar words (verbs, adjectives, etc.) used in conjunction with discussion of similar content, questions, stimuli, and/or products by different test subjects. In some cases, the platform may generate a report linking related sessions with different test subjects to reduce the overall time associated with generating and reviewing test reports. In some cases, the reports are searchable, such that a high-level summary may be provided by the platform that is linkable to corresponding data and/or recordings of the various associated sessions. For example, a CEO may receive the high-level summary and determine that the CEO should review all instances of negative feedback on a product generated by test subjects having a particular demographic (e.g., gender, age, socioeconomic status, etc.), and the platform may cause portions of the associated session recordings to be sent to a device associated with the CEO.
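As a non-limiting sketch of the word-based sentiment analysis mentioned above, the fragment below scores a transcript by counting words from small positive and negative word lists. The word lists and scoring rule are invented for illustration; an actual implementation could instead use one of the machine learned models described elsewhere herein.

```python
POSITIVE = {"love", "great", "useful", "fun"}
NEGATIVE = {"hate", "confusing", "boring", "expensive"}


def transcript_sentiment(transcript: str) -> float:
    """Score a transcript in [-1, 1] by counting positive and negative words."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```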
As discussed above, the platform collects various types of data related to the test subject and/or the testing environment. The platform may then generate status indicators related to the test subject and/or the environment, aggregate subject data, derive trends as well as unique or common feedback from the test subjects, and suggest questions to the moderator based on various models, thresholds, and the collected data.
The platform may also present the text in conjunction with the image and audio data on the first client device in substantially real-time. In some cases, the platform may also present the text to the individual clients via a second device. For instance, the image and audio data may be presented on a first device (e.g., a television or other electronic device with a large screen) that allows for a large viewing experience, while the text (and, in some instances, the image data) is presented on a second device. In this manner, the first device may act as the virtual glass for the clients while the second device allows the clients to take notes, add comments, rewind, revisit, and review particular portions of the session via a session recording.
In some examples, the platform may also allow multiple clients to interact with each other while viewing the session recording presented on the second client devices. For instance, the platform may allow for audio or text-based chat between the clients via the second devices, as well as text-based or audio-based annotation, tagging, or notes. In other instances, the platform may provide a notification or alert to each client when other clients add comments, notes, or other annotations to the recording. In some implementations, the platform may include client identifiers and/or allow clients to annotate other clients’ annotations. In this manner, each client may be aware of what other clients are finding interesting within the session, further facilitating the real-time conversation and commentary on the session that is typically suppressed in conventional focus group facilities.
The platform may also, in some cases, allow for communication between one or more clients and the moderator. In some examples, the communication between the client and the moderator may be one way from the client to the moderator as the moderator may be in conversation with the test subject during the session. In these examples, the communication may include short text-based messages that the clients may send to the moderator to assist the moderator in understanding the direction the client would like the session to take.
FIG. 1 illustrates an example architecture 100 of a virtual focus group platform 102 according to some implementations. In the current example, the platform 102 may be in wireless communication with one or more test subject devices 104(1)-(K) associated with a first set of test subjects 106(1)-(L) as well as one or more test subject devices 108(1)-(N) associated with a second set of test subjects 110(1)-(M). The platform 102 is also in wireless communication with one or more moderator systems 112(1)-(Z). Thus, the current example illustrates a platform 102 configured to facilitate a focus group consisting of one or more test subjects (e.g., the first set of test subjects 106 and the second set of test subjects 110) and conducted or led by one or more remote moderators or moderator systems 112. It should be understood that the first set of test subjects 106(1)-(L) may be physically remote from the second set of test subjects 110(1)-(M) and that each test subject 106(1)-(L) and 110(1)-(M) may receive data (e.g., requests 128 and stimuli 130) from the moderator system 112 via multiple devices, generally illustrated as the devices 104(1)-(K) and 108(1)-(N). Similarly, each test subject 106(1)-(L) and 110(1)-(M) may be able to provide feedback 132 to the moderator system 112 via the corresponding devices 104(1)-(K) and 108(1)-(N).
In some implementations, the focus group may be conducted or led by a moderator via the moderator systems 112. The platform 102 may be configured to allow the moderator to generate requests 128 and provide stimuli 130 to evoke a response from the test subjects 106(1)-(L) and 110(1)-(M) via the moderator system 112 and the test subject devices 104(1)-(K) and 108(1)-(N). The requests 128 may include questions provided as text, images, video, audio, or a combination thereof. For example, the requests 128 may include an audio/video stream of the moderator that is provided to the test subjects 106(1)-(L) and 110(1)-(M) in the manner of a video chat session. In some instances, the video chat session may allow the moderator to communicate with a particular test subject 106(1)-(L) or 110(1)-(M) in a conversational two-way communication similar to being one-on-one in the same physical environment. However, it should be understood that, in some implementations, such as when the moderator is leading a focus group consisting of multiple test subjects 106(1)-(L) and 110(1)-(M) at different physical locations, the requests 128 may provide for one-way communication from the moderator to the test subjects 106(1)-(L) and 110(1)-(M). In implementations in which the video/audio stream is one-way, the test subjects 106(1)-(L) and 110(1)-(M) may provide feedback 132 by entering, selecting, typing, or otherwise providing user inputs via the test subject devices 104(1)-(K) and 108(1)-(N). For instance, the requests 128 may include a polling feature that may allow the moderator to question the test subjects 106(1)-(L) and 110(1)-(M). As an illustrative example, the polling question may include a request 128 asking the test subjects 106 and 110 to rate an advertisement being presented to the test subjects 106(1)-(L) and 110(1)-(M). In this instance, the test subjects 106(1)-(L) and 110(1)-(M) may provide feedback 132 by typing or selecting a rating (such as selecting a number from 1-10 or turning a dial up or down).
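The following non-limiting sketch models this polling exchange as simple request and feedback records, with the feedback validated against the requested rating scale and averaged across subjects. The field names and helper function are assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass
class PollRequest:
    question: str
    scale_min: int = 1
    scale_max: int = 10


@dataclass
class PollFeedback:
    subject_id: str
    rating: int


def mean_rating(request: PollRequest, responses: list[PollFeedback]) -> float:
    """Average only the ratings that fall within the requested scale."""
    valid = [fb.rating for fb in responses
             if request.scale_min <= fb.rating <= request.scale_max]
    return sum(valid) / len(valid) if valid else 0.0
```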
In the instance discussed above, the test subjects 106(1)-(L) and 110(1)-(M) are responding to or rating an advertisement (e.g., the stimuli 130 provided by the moderator). For example, the moderator or the moderator system 112 may be configured to cause the advertisement or other content to be displayed to the test subjects 106(1)-(L) and 110(1)-(M) via the test subject devices 104(1)-(K) and 108(1)-(N). A non-exhaustive list of the stimuli 130 may include images, video clips, audio, tactile responses, or combinations thereof that may be selected, generated, and/or provided by the moderator system 112 to the test subject devices 104(1)-(K) and 108(1)-(N) via the platform 102.
In some examples, the test subject devices 104(1)-(K) and 108(1)-(N) may also be configured or adapted to capture various types of sensor data 134 associated with the corresponding test subjects 106(1)-(L) and 110(1)-(M) and to provide the sensor data 134 to the platform 102 and the moderator system 112. For example, the sensor data 134 may include image data (e.g., video data), audio data, biometric data (e.g., brain activity, heartrate, blink rate, EEG sentiment, visual fixation rate, galvanic skin response, response latency, temperature, etc.), and environmental data (e.g., room temperature, room occupancy, etc.). In the current example, the test subject devices 104(1)-(K) and 108(1)-(N) may be configured to capture the sensor data 134; however, it should be understood that, in some implementations, distinct devices may be utilized to capture different types of sensor data 134. For instance, the test subjects 106(1)-(L) and 110(1)-(M) may be located in a room that includes separate microphones or microphone arrays, cameras, biometric data collection devices (e.g., gloves, headsets, body sensors, etc.), and/or environmental sensors (e.g., a smart thermostat).
In one specific example, the moderator system 112 or the platform 102 may be configured to suggest or recommend stimuli 130 to the moderator and/or to send the stimuli 130 directly to the test subject devices 104(1)-(K) and 108(1)-(N) (such as in a platform 102 that implements an autonomous or virtual moderator). For instance, using the feedback 132 and/or the sensor data 134 as an input, the moderator system 112 or the platform 102 may select or determine the next stimuli 130 and/or request 128 based on the output of one or more heads of a machine learned model or neural network.
In some implementations, the platform 102 and/or a remote database 118 may be configured to receive the sensor data 134 and/or the feedback 132 and to generate a recording 136 of the session. In some cases, the platform 102 and/or the remote database 118 may generate the speech-to-text version or transcript of the captured audio from either or both of the moderator and the test subjects 106(1)-(L) and 110(1)-(M). In some examples, the transcript may be translated into one or more secondary languages and presented to the client systems 114 and 116 based on a preferred language of the corresponding client or clients. The recording 136 may then include both the audio/video data as well as a linked or otherwise associated text version of the audio data. In this way, the recording 136 may be viewed in segments based on one or more searches (e.g., a text-based search) to reduce the overall time to review each session. In one specific example, the recording 136 may be generated in substantially real-time, such that an individual watching the session may also receive the text-based version without significant gaps in time.
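One non-limiting way to realize such a searchable, transcript-linked recording 136 is sketched below: each transcript span carries the time offsets of the corresponding audio/video, so a text search returns segments a reviewer can jump to directly. The Segment fields are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    start_s: float  # offset into the audio/video recording, in seconds
    end_s: float
    text: str       # transcript text covering this span


def search_recording(segments: list[Segment], query: str) -> list[Segment]:
    """Return the recording segments whose transcript matches the query,
    letting a reviewer view the session in relevant pieces rather than
    replaying it in full."""
    q = query.lower()
    return [seg for seg in segments if q in seg.text.lower()]
```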
The platform 102 and/or the remote database 118 may also generate various types of status indicators 138 and/or analytics 140 associated with one or more sessions or individual test subjects 106 or 110 of a session. For example, the platform 102 and/or the remote database 118 may detect facial expressions from the image data of the sensor data 134 as the subject responds to the stimulus 130 presented on the subject device. In another example, the platform 102 and/or the remote database 118 may detect focus or eye movement in relation to the content on the subject device, such as to determine a portion of the stimulus 130 (e.g., content) attracting the subject’s focus. In still other examples, the platform 102 and/or the remote database 118 may process the biometric data to determine a mood (e.g., happy if the heartrate of a test subject 106 or 110 increases above a threshold). In some examples, the status indicators 138 may be a color, such as green for happy, yellow for calm, red for anger, etc., or an icon, such as a laughing face for amused, a crying face for sad, etc., to allow the moderator to quickly determine the mood of the test subjects 106 or 110. In some examples, common status indicators 138 may be shown to, for instance, the moderator via the moderator system 112 or the clients via the client systems 114 and 116 as a text bubble, circle, or icon that has the designated colors or numerical values (such as 1-10) as part of the augmented virtual glass experience and/or superimposed on the image of the corresponding test subject. In some examples, less common status indicators 138 may be provided as textual data, such as an indicator related to the test subject falling asleep during the session. In some examples, in addition to the status indicators 138, various demographic data may be displayed as part of the virtual glass experience and/or superimposed on the image of the corresponding test subject for the moderator and/or the clients. For instance, images of each of the test subjects (either live feeds or still images) may be displayed on the moderator device together with the augmented data (e.g., the status indicators 138). In this instance, various data related to the test subject, such as name, age, sex, race, socioeconomic status, etc., may be displayed below the image of each test subject. Thus, with the platform 102, the moderator no longer has to rely on notes or memory when conducting the session, as the information may be presented on the moderator system 112 in an easily consumable manner and updated in substantially real-time.
In the case of multiple test subjects 106(1)-(L) and 110(1)-(M), the platform 102 may attach or insert the status indicators 138 over or adjacent to the image of the corresponding test subjects 106(1)-(L) and 110(1)-(M) to further assist the moderator in determining which individual test subject 106(1)-(L) or 110(1)-(M) is experiencing which emotion. Additionally, in some instances, the status indicator 138 may also include an aggregated indicator showing an overall mood or status of the group of test subjects 106(1)-(L) and 110(1)-(M). For example, the aggregated indicators may be based on normalized biometric or emotional data collected from a large sample (for instance, greater than or equal to 70 test subjects or greater than or equal to 140 test subjects), and the sensor data 134 associated with the current test subjects 106 and 110 may then be compared to the normalized data to provide a score or more meaningful metric or status indicator 138. In some cases, the benchmarked data may be specific to demographics associated with the test subjects, similar session topics (e.g., consumer products versus political topics), among others.
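A non-limiting sketch of such benchmark normalization follows: a subject's raw reading is converted to a percentile against the large-sample benchmark, which is assumed here to be summarized offline by a mean and standard deviation. The specific statistics and the percentile choice are illustrative, not prescribed.

```python
from statistics import NormalDist


def normalized_score(raw: float, benchmark_mean: float, benchmark_std: float) -> float:
    """Express a subject's biometric reading as a percentile against the
    benchmark sample, yielding a more meaningful metric than the raw value."""
    z = (raw - benchmark_mean) / benchmark_std
    return NormalDist().cdf(z) * 100.0  # percentile, 0-100


# Example: a heartrate of 88 bpm against an assumed benchmark of 75 +/- 10 bpm
print(round(normalized_score(88.0, 75.0, 10.0)))  # ~90th percentile
```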
The platform 102 and/or the remote database 118 may also aggregate or otherwise determine trends or analytics 140 based on the sensor data 134 collected from one or more sessions. For example, the platform 102 may perform audio, video, or text analysis on the recording 136 to identify common trends (e.g., similar responses from different test subjects 106 or 110), similar emotional responses, unique responses, etc., and to present the results, such as in a chart or graph, as part of the recording 136. In some specific examples, the platform 102 and/or the remote database 118 may also identify questions or stimuli to recommend to the moderator based on the analytics 140 and/or the status indicators 138.
In some implementations, the platform 102 and/or the remote database 118 may process the sensor data 134 via one or more machine learned models or neural networks to generate the status indicators 138 or the analytics 140. For example, machine learning techniques may include, but are not limited to, regression algorithms (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), regularization algorithms (e.g., ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, least-angle regression (LARS)), decision tree algorithms (e.g., classification and regression tree (CART), iterative dichotomiser 3 (ID3), Chi-squared automatic interaction detection (CHAID), decision stump, conditional decision trees), Bayesian algorithms (e.g., naive Bayes, Gaussian naive Bayes, multinomial naive Bayes, average one-dependence estimators (AODE), Bayesian belief network (BNN), Bayesian networks), clustering algorithms (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), artificial neural network algorithms (e.g., perceptron, back-propagation, Hopfield network, Radial Basis Function Network (RBFN)), deep learning algorithms (e.g., Deep Boltzmann Machine (DBM), Deep Belief Networks (DBN), Convolutional Neural Network (CNN), Stacked Auto-Encoders), dimensionality reduction algorithms (e.g., Principal Component Analysis (PCA), Principal Component Regression (PCR), Partial Least Squares Regression (PLSR), Sammon Mapping, Multidimensional Scaling (MDS), Projection Pursuit, Linear Discriminant Analysis (LDA), Mixture Discriminant Analysis (MDA), Quadratic Discriminant Analysis (QDA), Flexible Discriminant Analysis (FDA)), ensemble algorithms (e.g., Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM), Gradient Boosted Regression Trees (GBRT), Random Forest), support vector machines (SVM), supervised learning, unsupervised learning, semi-supervised learning, etc. Additional examples of architectures include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like.
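To make one of these options concrete, the non-limiting sketch below trains a random forest to map a handful of sensor-derived features to a mood label using scikit-learn. The feature set, the toy training rows, and the mood labels are invented for illustration only.

```python
from sklearn.ensemble import RandomForestClassifier

# Rows: [heartrate_bpm, blink_rate_hz, galvanic_skin_response, visual_fixation_rate]
X_train = [
    [72, 0.30, 0.41, 2.1],
    [95, 0.60, 0.77, 1.2],
    [60, 0.20, 0.35, 2.8],
    [90, 0.55, 0.70, 1.3],
]
y_train = ["calm", "agitated", "calm", "agitated"]  # assumed mood labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Classify a new sensor reading into a mood usable as a status indicator.
print(model.predict([[88, 0.50, 0.68, 1.4]]))  # e.g., ['agitated']
```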
In the illustrated example, the platform 102 may also be in wireless communication with a first set of client systems 114(1)-(I) and a second set of client systems 116(1)-(J). It should be understood that the first set of client systems 114 may be physically remote from the second set of client systems 116 and that each system 114 and 116 may include multiple devices to present data and/or receive user inputs from one or more clients. For example, a first client may receive the audio/video stream on a first device (e.g., a television) and the recording 136 (e.g., the audio, video, and text-based data associated with the session) via a second device (e.g., a tablet or computer). Thus, the television may act as the virtual glass to allow the clients to view the session in a manner similar to being present in an observation room, and the tablet or computer may allow the client to take notes and add comments or tags 142 to the recording 136, which may be reviewed at a later time. In some examples, the recording 136 may also include the status indicators 138 and the analytics 140 as integrated features or components.
In some implementations, the clients may be able to annotate (e.g., comment on or tag 142) content within the recording 136. In some cases, the comments and/or tags 142 may be added to a global recording 136 and become visible to the other client systems 114(1)-(I) and 116(1)-(J) to facilitate conversation. In addition to adding each client’s comments and tags 142 to the global recording 136, the platform 102 may also generate an alert or notification 144 to the other client systems 114(1)-(I) and 116(1)-(J) in response to an individual client adding a comment or tag 142. In some cases, the notification 144 may be a visual cue (e.g., an icon, flashing, a color change, etc.) or an audio cue (e.g., an output sound). In some instances, the notifications 144 may be associated with a specific client. For example, if a first client adds a comment or tag 142, the platform 102 may cause a first notification (e.g., a red flashing icon) to be output by the client systems 114(1)-(I) and 116(1)-(J), and, if a second client adds a comment or tag 142, the platform 102 may cause a second notification (e.g., a green flashing icon) to be output by the client systems 114(1)-(I) and 116(1)-(J). In this manner, each client may quickly determine whether they desire to review the comment or tag 142 being added based on the individual adding the comment or tag 142. In some implementations, the analytics 140 may be updated based on the comments or tags 142 being added by the clients. For instance, the platform 102 may identify the most or least commented section of a session or each portion (e.g., 5-15 second portion) of a session that received more than a threshold number of comments or tags 142. In some implementations, visibility of the comments and tags 142 may be controlled by the client that is adding the comment or tag 142. For instance, the comments and tags 142 may be personal, shared with a group, or shared globally.
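The non-limiting sketch below shows one way such author-keyed notifications 144 could be dispatched: each authoring client maps to a visual style, and every other client receives that cue when a comment or tag 142 is added. The style names, client object, and push_notification() interface are assumptions for illustration.

```python
# Assumed mapping from authoring client to that client's notification style.
CLIENT_STYLES = {"client_a": "red_flash", "client_b": "green_flash"}


def notify_comment_added(author_id: str, clients, comment) -> None:
    """Push an author-keyed visual cue to every other client so each viewer
    can decide whether the new annotation is worth reviewing."""
    style = CLIENT_STYLES.get(author_id, "default_flash")
    for client in clients:
        if client.id != author_id:  # the author needs no alert
            client.push_notification(style=style, comment_ref=comment.id)  # assumed API
```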
FIG. 2 illustrates an example pictorial view 200 of a test subject 104 participating in a session facilitated by a virtual focus group platform 102 according to some implementations. In the illustrated example, the test subject 104 is located within a room 202, such as the test subject’s living room. Thus, unlike in conventional focus group facilities, the test subject 104 may perform the session in the comfort of their own home.
The test subject 104 is conducting a focus group session with a moderator (not shown) via an application installed on the test subject device 106. For instance, the platform 102 may send stimuli 130 and/or requests 128 to the test subject 104 via the test subject device 106. Similarly, the platform 102 may receive feedback 132 from the test subject via the test subject device 106. In this example, the test subject 104 may also view content or stimuli 130 presented via the television or display 204. Thus, it should be understood that the platform 102 is configured to allow for multi-device interaction for the test subject 104 to more closely recreate the physical focus room experience. For instance, the user may view an advertisement on the display 204 while answering questions on the device 106. In this manner, the test subject 104 may consume or review content or stimuli 130 via the test subject device 106 and/or the display 204.
In the illustrated example, the room 202 also includes various sensors, such as cameras 206 and 208 and a microphone array 210. In some cases, the test subject 104 may also wear various biometric data collection devices (not shown), such as heartrate monitors or brain activity monitors. In general, the data collection devices 206-210 may capture data related to the session from the environment or room 202 and send it to the platform 102 as sensor data 134, as discussed above with respect to FIG. 1.
In the current example, the platform 102 may include various cloud-based or remote services associated with conducting virtual focus groups. For example, the platform 102 may include a moderator service 212, a speech-to-text service 214, a test subject monitoring service 216, an analytics service 218, a comment service 220, and a stimulus recommendation service 222.
The moderator service 212 may be configured to allow a moderator to communicate with the test subject and/or provide stimuli 130 and requests 128 via the display 204 and/or the device 106. In some implementations, the moderator service 212 may be configured to conduct the session with the test subject as an autonomous system. For instance, the moderator service 212 may be configured to conduct preprogrammed sessions (e.g., a series of stimuli 130 and requests 128). In other instances, the moderator service 212 may be configured to utilize one or more machine learned models, neural networks, and/or outputs of the other services 214-222 to analyze the sensor data 134 and to select requests 128 and stimuli 130 to provide to the test subject 104.
The speech-to-text service 214 may be configured to receive the audio portion of the sensor data 134 and to convert the audio data into a text-based transcript. In some cases, the speech-to-text service 214 may correlate or relate the text-based transcript with the audio and/or video data to generate a recording in substantially real-time, as discussed above with respect to FIG. 1.
The test subject monitoring service 216 may be configured to analyze the sensor data 134 collected from the environment or room 202 and to generate the status indicators associated with the test subject. As discussed above, the test subject monitoring service 216 may utilize various machine learned models, neural networks, or other data analytic techniques when determining the status indicators. Additionally, the status indicators may be presented to clients observing the session in various formats, such as visual (e.g., icons, colors, ratings, percentages, graphs, etc.), audio (e.g., output sounds in response to changes in mood), or text-based annotations to the recordings.
The analytics service 218 may be configured to analyze the sensor data 134 collected from the environment or room 202 with respect to other sessions or other test subjects and to generate trends, common occurrences, maximum or minimum thresholds, etc.
The comment service 220 may be configured to allow clients to provide comments or tags 142 associated with the session. For example, the comment service 220 may allow the clients to add audio, video, or text-based information to the session recording. As discussed above, the comments and tags 142 may be private, shared with a select group, or global.
The stimulus recommendation service 222 may be configured to assist the moderator and/or the moderator service 212 with conducting the session. For example, the stimulus recommendation service 222 may analyze the sensor data 134 collected from the environment or room 202 and generate recommendations, sample questions, or selected stimuli or other content that may be used to direct the session one way or another. For example, if the client specifies specific goals for the session, the recommendations, sample questions, and stimuli or other content may be selected to assist in achieving the client goals.
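As a non-limiting sketch of how such goal-directed selection could work, the fragment below ranks candidate stimuli by the overlap between their topic tags and the client's stated goals, discounting stimuli already shown. The catalog structure, tag scheme, and scoring heuristic are invented for illustration.

```python
def recommend_stimulus(catalog, session_state, client_goals):
    """Pick the stimulus whose tags best match the client's goals,
    penalizing stimuli already presented this session."""
    goals = set(client_goals)

    def score(stim):
        overlap = len(set(stim.tags) & goals)        # assumed per-stimulus tags
        penalty = 1 if stim.id in session_state.shown else 0
        return overlap - penalty

    return max(catalog, key=score)
```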
FIG. 3 illustrates an example pictorial view 300 of a moderator 302 participating in a session facilitated by a virtual focus group platform 102 according to some implementations. In the illustrated example, the moderator 302 is located within an environment 302, such as the moderator’s office. The moderator 302 may conduct or lead a focus group session with one or more test subjects (not shown) via an application installed on the moderator system 112. For instance, the moderator 302 may receive audio/video data (e.g., sensor data 134) of the test subject as well as feedback 132 via the platform 102. The platform 102 may also communicatively couple the moderator 302 with the test subject via a video chat session. The moderator 302 may also be able to provide requests 128 and/or cause stimuli 130 to be presented to the test subject via the device 112 and/or the platform 102. Thus, the moderator 302 may be in communication with the test subject as if the moderator 302 were present in the same physical location as the test subject.
In one implementation, the moderator application installed on the moderator system 112 may be configured to present session data in an organized manner to improve session flow and/or reduce the complexity and distractions experienced by the moderator 302. For instance, the moderator system 112 may present an icon, a video live stream, and/or an image of each test subject associated with a current session. Each icon associated with a test subject may also include one or more status indicators superimposed on or associated with the icon representing the test subject. The status indicators may change in response to the platform 102 determining a change in status of the corresponding test subject based on the analytics of the captured biometric, audio, and visual data of the corresponding test subject. Additionally, the information presented to the moderator may include demographic information, polling answers, private chat messages, unique or flagged emotional responses to content, etc. In some cases, the additional data may be displayed below or adjacent to each test subject’s icon.
In some cases, the moderator system 112 may allow the moderator 302 to preload or plan a session. For example, the moderator system 112 may allow the moderator 302 to preload or otherwise organize a plurality of stimuli 130, such as a series of video content that may be provided to the test subject devices during a session. In this manner, the moderator 302 does not need to interrupt the flow of a session to play a DVD via a DVD player as in a conventional focus group session. In one specific example, the platform 102 may reorganize the order or arrangement of the stored stimuli 130 based on a progression of the session as compared to prior sessions conducted by the moderator 302.
In the illustrated example, the environment 302 also includes various sensors, such as cameras 304 and 306 and a microphone array 308. In general, the data collection devices 304-308 may capture data related to the session from the environment 302 and send it to the platform 102 as sensor data 312 to be incorporated into the session record that is sent to the client systems.
As discussed above with respect to FIG. 2, the platform 102 may include various cloud-based or remote services associated with conducting virtual focus groups. For example, the platform 102 may include the moderator service 212, the speech-to-text service 214, the test subject monitoring service 216, the analytics service 218, the comment service 220, and the stimulus recommendation service 222.
FIG. 4 illustrates an example pictorial view 400 of a client group, generally indicated by clients 402(1)-(M), observing a session facilitated by a virtual focus group platform according to some implementations. In the illustrated example, the clients 402 are located within an environment 404, such as a conference room. The clients 402 may observe the session between the moderator and the test subjects via one or more client systems 114 (e.g., the television 114(1) and the personal computing device 114(2)). For instance, the clients 402 may watch a live stream of the session on the television 114(1). The clients 402 may also watch the session on the computing device 114(2).
In some examples, the live stream of the session on the television 114(1) may act as the virtual glass providing the augmented viewing experience. For example, the platform 102 or an administrator may configure the virtual glass display on the television 114(1) by assigning bubbles or content circles associated with or displayed over each test subject. In some cases, the bubbles may include the test subject’s demographic information, status indicators 138, as well as other analytics.
On the device 114(2), the individual clients 402 may be able to review the recording 136 (including the text-based transcript) as well as add comments and/or tags 142 to the recording 136. As discussed above, the clients may also receive notifications 144 related to the comments and tags 142 being added to the recording 136 in substantially real-time. For instance, the virtual glass display on the television 114(1) may also include any comments or tags 142 provided by one or more clients 402 via the second client devices 114(2), as well as output stimuli being viewed by the test subjects. In some implementations, various sounds or other notifications (e.g., flashing colors, assigned colors, graphics, etc.) may be output when a corresponding client 402 adds a tag or comment 142 via the second device 114(2).
In some examples, the device 114(2) may operate in both a virtual glass display mode and an interactive mode, as discussed above. For example, the device 114(2) may operate in the interactive mode when in a first orientation; e.g., the client 402 may add comments and tags 142, review the recording 136 including the transcript in one or more languages, view the analytics, stop, pause, or rewind the recording 136, chat with other clients, etc. The device 114(2) may then operate in the virtual glass display mode when the device 114(2) is in a second orientation. For example, in the virtual glass display mode, the device 114(2) may display the augmented live stream of the session similar to the television 114(1). In the virtual glass display mode, the live session may be displayed including the overlays and/or augmented data provided by the platform 102, such as the status indicators 138, demographic information, stimuli being viewed by the test subjects, and alerts and notifications regarding other clients’ tags or comments. Thus, in this example, the client 402 may utilize the same device 114(2) both as the virtual glass and in the interactive mode.
As discussed above with respect to FIG. 2, the platform 102 may include various cloud-based or remote services associated with conducting virtual focus groups. For example, the platform 102 may include the moderator service 212, the speech-to-text service 214, the test subject monitoring service 216, the analytics service 218, the comment service 220, and the stimulus recommendation service 222.
FIGS. 5-7 are flow diagrams illustrating example processes associated with the platform 102 of FIGS. 1-4 according to some implementations. The processes are illustrated as a collection of blocks in a logical flow diagram, which represent a sequence of operations, some or all of which can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, encryption, deciphering, compressing, recording, data structures, and the like that perform particular functions or implement particular abstract data types.
The order in which the operations are described should not be construed as a limitation. Any number of the described blocks can be combined in any order and/or in parallel to implement the process, or alternative processes, and not all of the blocks need be executed. For discussion purposes, the processes herein are described with reference to the frameworks, architectures and environments described in the examples herein, although the processes may be implemented in a wide variety of other frameworks, architectures or environments.
FIG. 5 illustrates an example flow diagram showing an illustrative process 500 for providing a virtual focus group according to some implementations. As discussed above, the focus group platform discussed herein replicates and enhances the one-way mirror experience of being physically present within a research facility without the geographic limitations of traditional focus group facilities. In some implementations, a platform 102 may include a test subject device 106, a moderator system or device 112, and a client system or device 114.
At 502, the platform 102 may receive moderator instructions from the moderator system 112 related to a focus group session. For example, the moderator may provide instructions to present content or stimulus on a display or to ask the test subject to answer one or more questions.
At 504, the platform 102 may send data to the test subject device. For example, the platform 102 may identify content, stimulus, or requests to present to the test subject based on the moderator instructions. The platform 102 may select one or more devices associated with the test subject to receive the content, stimulus, or request. In one example, the content or stimulus may be provided to a display device while the request may be provided to an input/output device.
At 506, the test subject device 106 may perform operations based on the data received. For example, the device 106 may display the content or stimulus and/or solicit user input in response to the requests.
At 508, the test subject device 106 (and/or other sensors associated with the test subject device 106) may capture sensor data from the environment and, at 510, the test subject device 106 sends the sensor data to the platform 102. For example, the sensor data may include image data, video data, audio data, biometric data, and environmental data, among other types of data associated with the test subject.
At 512, the platform 102 may perform text-based analytics based on the sensor data. For example, the platform 102 may convert audio data captured by the test subject device 106 to text using one or more speech-to-text conversion techniques. The platform 102 may then perform text-based analytics on the text-based transcript of the audio data. For instance, the platform 102 may detect words or phrases repeated by the test subject, uncommon or unique words or phrases, words or phrases common to other test subjects, and emotional words or phrases, among others.
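A non-limiting sketch of one such text-based analytic, repeated-word detection, is shown below. The minimum count and the short-word filter are arbitrary illustrative choices.

```python
from collections import Counter


def repeated_words(transcript: str, min_count: int = 3) -> dict:
    """Find words the test subject repeats, a cue worth surfacing to
    the moderator or clients."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if len(w) > 3)  # skip short filler words
    return {w: c for w, c in counts.items() if c >= min_count}
```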
At 514, the platform 102 may perform biometric analytics based on the sensor data. For example, the test subject device 106 may capture brain activity data, heartrate data, temperature data, or other data associated with the test subject’s physical state. The platform 102 may then determine mood and/or emotional responses based at least in part on the biometric data.
At 516, the platform 102 may perform visual analytics based on the sensor data. For example, the test subject device 106 may capture image data and perform facial analysis or eye tracking on the image data. The platform 102 may then determine a mood or emotional reaction to specific content, stimuli, or requests.
At 518, the platform may send the text-based analytics, the biometric analysis, and the visual analysis to one or more client systems 114 and, at 520, the client systems 114 may present the text-based analytics, the biometric analysis, and the visual analysis on a display. For example, the text-based analytics, the biometric analysis, and the visual analysis may be presented on the display in conjunction with an audio/video feed of the session.
At 522, the platform 102 may generate a status indicator based at least in part on the text-based analytics, the biometric analysis, and the visual analysis. For example, the platform 102 may determine a mood or emotional state of the test subject based at least in part on the text-based analytics, the biometric analysis, and the visual analysis, which may be used to generate the status indicator, as discussed above. In other alternative examples, the platform 102 may also determine the status indicators directly from the sensor data and/or from a combination of the text-based analytics, the biometric analysis, the visual analysis, and the sensor data.
At 524, the platform 102 may send the status indicator to the moderator system 112 and, at 526, the moderator system 112 may receive the status indicators. In some cases, the moderator system 112 may present the status indicators to the moderator to assist the moderator in evaluating the status or state of the session and/or the test subject. In some implementations, the platform 102 may also send the status indicator to the client system 114. In these implementations, the status indicators sent to the moderator system 112 may be the same as or may differ from the status indicators sent to the client system 114. For example, the status indicators sent to the client system 114 may be more detailed or contain more information than the status indicators sent to the moderator system 112.
At 528, the moderator system 112 may generate updated moderator instructions, for instance, based at least in part on the status indicators, and send the updated moderator instructions to the platform 102, as discussed above.
FIG. 6 illustrates an example flow diagram showing an illustrative process 600 for providing a virtual focus group according to some implementations. In some implementations, the platform 102 may be configured to replicate and enhance the conventional focus group experience. In these implementations, the experience for the client or focus group observer may be configured for multiple-device interaction, such as with a first client device 114(1) and a second client device 114(2), as described below.
At 602, the platform 102 may receive image data, biometric data, environmental data, audio data, and/or other sensor data associated with a test subject. For example, the image data, biometric data, environmental data, audio data, and/or other sensor data may be collected or captured by a test subject device or one or more peripherals associated with the test subject device.
At 604, the platform 102 may perform text-based analytics based on the audio data collected by the test subject device. For example, the platform 102 may convert audio data captured by the test subject device 106 to text using one or more speech-to-text conversion techniques. The platform 102 may then perform text-based analytics on the text-based transcript of the audio data. For instance, the platform 102 may detect words or phrases repeated by the test subject, uncommon or unique words or phrases, words or phrases common to other test subjects, and emotional words or phrases, among others.
At 606, the platform 102 may perform visual analytics based on the image data collected by the test subject device. For example, the test subject device 106 may capture image data and perform facial analysis or eye tracking on the image data. The platform 102 may then determine a mood or emotional reaction to specific content, stimuli, or requests.
At 608, the platform 102 may perform biometric analytics based on the biometric data collected by the test subject device. For example, the test subject device 106 may capture brain activity data, heartrate data, temperature data, or other data associated with the test subject’s physical state. The platform 102 may then determine mood and/or emotional responses based at least in part on the biometric data.
At 610, the platform 102 may perform environmental impact analysis based on the environmental data collected by the test subject device. For example, the platform 102 may determine whether it is too hot or too cold within the environment occupied by the test subject. In some cases, the platform 102 may adjust one or more of the text-based analytics, the biometric analysis, and the visual analysis based on the environmental analysis. For instance, a threshold associated with a positive test subject response may be decreased if the environmental conditions are poor and likely to aggravate the test subject.
At 612, the platform 102 may send the image data and the audio data to the first client device 114(1) and, at 614, the first client device 114(1) may output the image data and the audio data via a display. For instance, as discussed above, the first client device 114(1) may be a large display that acts as a virtual glass for viewing the test subject and/or the moderator during the session. Thus, in this example, the live audio/video stream may be presented on the display to replicate the experience of watching the session in person.
At 616, the platform 102 may send the text-based analytics, the visual analysis, the biometric analysis, and the environmental impact analysis to the second client device 114(2), such that each individual client may review the analytics and analysis at their own pace and without interrupting the virtual glass on the first client device 114(1). In some examples, the analytics and analysis may be provided to the second client device 114(2) as part of a recording of the session together with the image data and the audio data.
At 618, the second client device 114(2) may receive user inputs including a comment or tag. For example, the comment or tag may be associated with a particular portion of the recording.
At 620, the second client device 114(2) may send the comment or tag to the platform 102 and, at 622, the platform 102 may send or cause the comment or tag to be displayed by the first client device 114(1) (such as a notification as to the comment) and at least a third client device. For example, the comment or tag may be shared via a global recording or with a specific group or subset of clients.
FIG. 7 illustrates an example flow diagram showing an illustrative process 700 for providing a virtual focus group according to some implementations. As discussed above, the platform may enhance the conventional focus group experience by allowing the clients to discuss, talk, or interact with each other during the session. Conventionally, the clients located in the observation room had to maintain a state of quiet to avoid interrupting the session happening in close proximity. However, unlike the conventional facility, the platform discussed herein not only allows interaction but encourages it.
At 702, the first client device 114(1) associated with a first client may receive the session recording. In some cases, the session recording may be provided in substantially real-time and may include various analytics, status indicators, the audio/video data, as well as a text-based transcript of the session.
At 704, the first client device 114(1) may receive user inputs including a comment or tag. For example, the comment or tag may be associated with a particular portion of the recording. The comment or tag may include thoughts, insights, and/or questions related to the portion of the recording.
At 706, the first client device 114(1) may send the comment or tag to a second client device 114(2) and, at 708, the second client device 114(2) may output the comment or tag in conjunction with the image data and the audio data of the session. For example, the comment or tag may be shared via a global recording or with a specific group or subset of clients.
At 710, the first client device 114(1) may send the comment or tag to a third client device 114(3) and, at 712, the third client device 114(3) may output the comment or tag in conjunction with the image data and the audio data of the session. For example, the comment or tag may be presented as part of a recording of the session including the video, audio, and text-based transcript of the session. In some cases, outputting the comment or tag may include providing an indication or icon associated with the comment or tag, such as an indicator of the type of comment (e.g., question, feedback, position marker, review marker, etc.), a position within the content, the individual that posted the comment, a time stamp, etc.
At 714, the third client device 114(3) may also output an alert or notification. For example, the alert or notification may be configured to bring the comment or tag to the attention of an individual associated with the third client device 114(3). In some cases, the alert or notification may be visual-based (e.g., an icon, flashing, or a color change on the display), audio-based (e.g., output of a sound based on the comment or tag), or tactile-based (e.g., a vibration of the device), among others.
FIG.8 illustrates an example platform 102 for providing a virtual focus group according to some implementations. In the illustrated example, the platform 102 includes one or more communication interfaces 802 configured to facilitate communication between one or more networks and one or more systems (e.g., test subject systems 106 or 108, moderator systems 112, and/or client systems 114 of FIG.1). The communication interfaces 802 may also facilitate communication between one or more wireless access points, a master device, and/or one or more other computing devices as part of an ad-hoc or home network system. The communication interfaces 802 may support both wired and wireless connection to various networks, such as cellular networks, radio, WiFi networks, short-range or near-field networks (e.g., Bluetooth®), infrared signals, local area networks, wide area networks, the Internet, and so forth.
The platform 102 includes one or more processors 804, such as at least one or more access components, control logic circuits, central processing units, or processors, as well as one or more computer-readable media 806 to perform the functions of the platform 102. Additionally, each of the processors 804 may itself comprise one or more processors or processing cores.
Depending on the configuration, the computer-readable media 806 may be an example of tangible non-transitory computer storage media and may include volatile and nonvolatile memory and/or removable and non-removable media implemented in any type of technology for storage of information such as computer-readable instructions or modules, data structures, program modules, or other data. Such computer-readable media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other computer-readable media technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, solid state storage, magnetic disk storage, RAID storage systems, storage arrays, network attached storage, storage area networks, cloud storage, or any other medium that can be used to store information and which can be accessed by the processors 804.
Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 806 and configured to execute on the processors 804. For example, as illustrated, the computer-readable media 806 stores moderator instructions 808, speech-to-text instructions 810, test subject monitoring instructions 812, analytics instructions 814, comment and tag instructions 816, stimulus recommendation instructions 818, and reporting instructions 820, as well as other instructions, such as an operating system. The computer-readable media 806 may also be configured to store data, such as sensor data 822 collected or captured with respect to the test subjects and/or moderators, session recordings 824, analytics and status indicators 826, stimulus or content 828, and/or various models 830 for performing the various operations and analysis of the platform 102.
The moderator instructions 808 may be configured to allow a moderator to communicate and/or provide stimuli and content 828 to the test subject via a client display and/or device. In some implementations, the moderator instructions 808 may be configured to conduct the session with the test subject as an autonomous system. For instance, the moderator instructions 808 may be configured to conduct preprogrammed sessions (e.g., a series of stimuli and requests). In other instances, the moderator instructions 808 may be configured to utilize one or more machine learned models 830, neural networks, and/or analytics 826 to select the stimuli and requests to provide to the test subject.
The speech-to-text instructions 810 may be configured to receive the audio portion of the sensor data 822 and to convert the audio data into a text-based transcript. In some cases, the speech-to-text instructions 810 may correlate or relate the text-based transcript with the audio and/or video data to generate a recording in substantially real-time, as discussed above.
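As a non-limiting illustration, the correlation performed by the speech-to-text instructions 810 may be pictured as attaching each transcribed segment to the timeline of the recording. The Python sketch below is hypothetical; the segment fields are assumptions and do not describe the platform's actual data format.

```python
# Hypothetical sketch: tying transcript segments to the recording timeline so
# the text can later be searched and used to seek the audio/video.
from dataclasses import dataclass


@dataclass
class TranscriptSegment:
    start_s: float  # segment start, in seconds from the start of the session
    end_s: float
    speaker: str
    text: str


def correlate(segments, recording_offset_s=0.0):
    """Shift each segment so its times index directly into the recording."""
    return [TranscriptSegment(s.start_s + recording_offset_s,
                              s.end_s + recording_offset_s,
                              s.speaker, s.text)
            for s in segments]
```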
The test subject monitoring instructions 812 may be configured to analyze the sensor data 822 collected from the environment associated with the test subject and to generate the status indicators 826 associated with the test subject. As discussed above, the test subject monitoring instructions 812 may utilize various machine learned models 830, neural networks, or other data analytic techniques when determining the status indicators. Additionally, the status indicators 826 may be presented to clients observing the session in various formats, such as visual (e.g., icons, colors, ratings, percentages, graphs, etc.), audio (e.g., output sounds in response to changes in mood), or text-based annotations to the recordings.
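By way of a non-limiting example, a coarse status indicator may be derived from biometric and gaze data along the following lines. The thresholds and labels in this Python sketch are invented for illustration; as noted above, the platform would typically rely on machine learned models 830 rather than fixed rules.

```python
# Hypothetical sketch: a rule-based stand-in for the status indicators 826
# that instructions 812 would normally derive with machine learned models 830.
def status_indicator(heart_rate_bpm, gaze_on_stimulus_ratio):
    if gaze_on_stimulus_ratio > 0.7:
        engagement = "high"
    elif gaze_on_stimulus_ratio > 0.4:
        engagement = "medium"
    else:
        engagement = "low"
    arousal = "elevated" if heart_rate_bpm > 100 else "baseline"
    # Presented to observing clients as an icon, color, rating, etc.
    return {"engagement": engagement, "arousal": arousal}
```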
The analytics instructions 814 may be configured to analyze the sensor data 822 collected from the environment associated with the test subject with respect to multiple test sessions or test subjects and to generate analytics 826 associated with trends, common occurrences, maximum or minimum thresholds, etc. over the various sessions.
The comment and tag instructions 816 may be configured to allow clients to provide comments or tags associated with the session. For example, the comment and tag instructions 816 may allow the clients to add audio, video, or text-based information to the session recording. As discussed above, the comments and tags may be private, shared with a select group, or global. In some examples, the comment and tag instructions 816 may be configured to detect new comments associated with a current or previously conducted and recorded session and to generate alerts or notifications related to the newly detected comment or tag. In some cases, individual users may save or store filters or searches that cause the requesting individual user to receive an alert or notification upon detection of a newly added comment with respect to specified sessions.
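As a non-limiting sketch, a saved filter of the kind described above might be matched against newly detected comments as follows. The field names and filter structure in this Python fragment are assumptions for illustration only.

```python
# Hypothetical sketch: matching a stored filter against a newly detected
# comment and alerting the filter's owner, per the comment and tag
# instructions 816.
def matches(saved_filter, comment):
    in_session = comment["session_id"] in saved_filter["sessions"]
    keywords = saved_filter.get("keywords", [])
    return in_session and (not keywords
                           or any(k in comment["text"] for k in keywords))


def on_new_comment(comment, saved_filters, send_alert):
    # Called whenever a new comment is detected on a current or recorded session.
    for saved_filter in saved_filters:
        if matches(saved_filter, comment):
            send_alert(saved_filter["owner"], comment)
```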
The stimulus recommendation instructions 818 may be configured to assist the moderator and/or the moderator instructions 808 with conducting a session. For example, the stimulus recommendation instructions 818 may analyze the sensor data 822 collected from the environment associated with a test subject and generate recommendations, sample questions, or selected stimuli or other content that may be used to direct the session one way or another. For example, if the client specifies specific goals for the session, the recommendations, sample questions, stimuli, or other content may be selected to assist in achieving the client goals.
The reporting instructions 820 may be configured to generate a summary or report of each session that may be reviewed after the session ends and include links to the actual recording 824 of the corresponding session or sessions. For example, the report generated by the platform 102 may include one or more of the transcript or dialog within a first column, the analytics within a second column, the tags within a third column, and any corresponding chat within a fourth column. In each case, the content of each column may align according to the corresponding portion of the transcript and be linked to the recording 824, such that an individual may quickly review the report and access the recording 824 for any part of the session the individual desires to watch or otherwise consume.
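As a non-limiting illustration, the four-column report may be assembled by keying each column to the transcript timeline, so that every row links back to the recording 824. The Python sketch below is hypothetical; the row layout and input shapes are assumptions.

```python
# Hypothetical sketch of the reporting instructions 820: one row per
# transcript timestamp, with the dialog, analytics, tags, and chat columns
# aligned and each row carrying a link target into the recording 824.
def build_report(transcript, analytics, tags, chat):
    """Each argument maps a timestamp in seconds to the text for that moment."""
    rows = []
    for time_s in sorted(transcript):
        rows.append({
            "time_s": time_s,  # used to link the row to the recording 824
            "dialog": transcript[time_s],
            "analytics": analytics.get(time_s, ""),
            "tags": tags.get(time_s, ""),
            "chat": chat.get(time_s, ""),
        })
    return rows
```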
FIG.9 illustrates an example test subject system 104 associated with the platform of FIG.8 according to some implementations. In the illustrated example, the device 104 includes one or more communication interfaces 902 configured to facilitate communication between one or more networks and one or more systems (e.g., the platform 102 or moderator systems 112 of FIG.1). The communication interfaces 902 may also facilitate communication between one or more wireless access points, a master device, and/or one or more other computing devices as part of an ad-hoc or home network system. The communication interfaces 902 may support both wired and wireless connection to various networks, such as cellular networks, radio, WiFi networks, short-range or near-field networks (e.g., Bluetooth®), infrared signals, local area networks, wide area networks, the Internet, and so forth.
The device 104 may also include one or more sensor systems 904. For example, the sensor systems 904 may be configured to capture data associated with the test subject and/or the environment associated with the test subject. In some cases, the sensor systems 904 may include image data capture components, video data capture components, biometric data capture components, environmental data capture components (e.g., temperature), and audio data capture components. In general, the data captured by the sensor systems 904 may be stored as sensor data 922 and provided to the platform 102 of FIG.1 via the communication interfaces 902.
The device 104 also includes input interfaces 906 and output interfaces 908, which may be included to display or provide data (e.g., the stimulus and content 924) to the test subject and to receive test subject inputs. The interfaces 906 and 908 may include various systems for interacting with the device 104, such as mechanical input devices (e.g., keyboards, mice, buttons, etc.), displays, input sensors (e.g., motion, age, gender, fingerprint, facial recognition, or gesture sensors), and/or microphones for capturing natural language input such as speech. In some examples, the input interfaces 906 and the output interfaces 908 may be combined in one or more touch screen capable displays.
The device 104 includes one or more processors 910, such as at least one or more access components, control logic circuits, central processing units, or processors, as well as one or more computer-readable media 912 to perform the functions associated with the virtual focus group. Additionally, each of the processors 910 may itself comprise one or more processors or processing cores.
Depending on the configuration, the computer-readable media 912 may be an example of tangible non-transitory computer storage media and may include volatile and nonvolatile memory and/or removable and non-removable media implemented in any type of technology for storage of information such as computer-readable instructions or modules, data structures, program modules, or other data. Such computer-readable media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other computer-readable media technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, solid state storage, magnetic disk storage, RAID storage systems, storage arrays, network attached storage, storage area networks, cloud storage, or any other medium that can be used to store information and which can be accessed by the processors 910.
Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 912 and configured to execute on the processors 910. For example, as illustrated, the computer-readable media 912 stores content output instructions 914, user input instructions 916, and test subject monitoring instructions 918, as well as other instructions 920, such as an operating system. The computer-readable media 912 may also be configured to store data, such as sensor data 922 collected or captured with respect to the test subjects and/or moderator as well as stimulus or content 924.
The content output instructions 914 may be configured to receive instructions or content 924 from the moderator system (e.g., the stimulus and requests) and, in response, to cause the stimulus 924 (such as image or video data) to be output by the output interfaces 908 of the device 104. The user input instructions 916 may be configured to receive user inputs via the input interfaces 906 and to store or send the user inputs as feedback to the platform via the communication interfaces 902.
The test subject monitoring instructions 918 may be configured to cause the sensor systems 904 to capture or collect the sensor data 922 from the environment associated with the test subject. As discussed above, the test subject monitoring instructions 918 may capture sensor data 922 associated with audio in the environment, the facial expression of the test subject, the eye movement of the test subject, various biometrics (e.g., heartrate, brain activity, etc.) of the test subject, the condition of the environment (e.g., temperature), among others.
FIG.10 illustrates an example moderator system 112 associated with the platform of FIG.8 according to some implementations. In the illustrated example, the system 112 includes one or more communication interfaces 1002 configured to facilitate communication between one or more networks and one or more systems (e.g., the platform 102 or test subject devices 104 or 108 of FIG.1). The communication interfaces 1002 may also facilitate communication between one or more wireless access points, a master device, and/or one or more other computing devices as part of an ad-hoc or home network system. The communication interfaces 1002 may support both wired and wireless connection to various networks, such as cellular networks, radio, WiFi networks, short-range or near-field networks (e.g., Bluetooth®), infrared signals, local area networks, wide area networks, the Internet, and so forth.
The system 112 may also include one or more sensor systems 1004. For example, the sensor systems 1004 may be configured to capture data associated with the moderator. In some cases, the sensor systems 1004 may include image data capture components, video data capture components, biometric data capture components, environmental data capture components, and audio data capture components. In general, the data captured by the sensor systems 1004 may be stored as sensor data 1022 and provided to the platform 102 of FIG.1 via the communication interfaces 1002 to be incorporated into the session recording by the platform 102.
The system 112 also includes input interfaces 1006 and output interfaces 1008, which may be included to display or provide information to and receive inputs from the moderator. The interfaces 1006 and 1008 may include various systems for interacting with the system 112, such as mechanical input devices (e.g., keyboards, mice, buttons, etc.), displays, input sensors (e.g., motion, age, gender, fingerprint, facial recognition, or gesture sensors), and/or microphones for capturing natural language input such as speech. In some examples, the input interfaces 1006 and the output interfaces 1008 may be combined in one or more touch screen capable displays.
The system 112 includes one or more processors 1010, such as at least one or more access components, control logic circuits, central processing units, or processors, as well as one or more computer-readable media 1012 to perform the functions associated with the virtual focus group. Additionally, each of the processors 1010 may itself comprise one or more processors or processing cores.
Depending on the configuration, the computer-readable media 1012 may be an example of tangible non-transitory computer storage media and may include volatile and nonvolatile memory and/or removable and non-removable media implemented in any type of technology for storage of information such as computer-readable instructions or modules, data structures, program modules, or other data. Such computer-readable media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other computer-readable media technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, solid state storage, magnetic disk storage, RAID storage systems, storage arrays, network attached storage, storage area networks, cloud storage, or any other medium that can be used to store information and which can be accessed by the processors 1010.
Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 1012 and configured to execute on the processors 1010. For example, as illustrated, the computer-readable media 1012 stores status indicator processing instructions 1014, stimulus recommendation instructions 1016, and moderator monitoring instructions 1018, as well as other instructions 1020, such as an operating system. The computer-readable media 1012 may also be configured to store data, such as sensor data 1022 collected or captured with respect to the moderator and/or the test subject as well as stimulus or content 1024.
The status indicator processing instructions 1014 may be configured to receive a status indicator from the platform 102 and to determine how to present the status to the moderator. For example, the status indicator processing instructions 1014 may determine from the sensor data 1022 a level of concentration or involvement of the moderator with the session and determine to present the status indicator as an icon on a video feed of the test subject being displayed to the moderator. In other examples, the status indicator processing instructions 1014 may present statistical data associated with the status of the test subject, such as the heartrate, to the moderator to provide additional insight during the session.
The stimulus recommendation instructions 1016 may be configured to assist the moderator with conducting a session. For example, the stimulus recommendation instructions 1016 may process the status indicators and/or analysis provided by the platform 102 and associated with a test subject to generate recommendations, sample questions, or selected stimuli or other content that may be used to direct the session one way or another.
The moderator monitoring instructions 1018 may be configured to cause the sensor systems 1004 to capture or collect the sensor data 1022 from the environment associated with the moderator. The moderator monitoring instructions 1018 may capture sensor data 1022 associated with audio in the environment, the facial expression of the moderator, the eye movement of the moderator, various biometrics (e.g., heartrate, brain activity, etc.) of the moderator, the condition of the environment (e.g., temperature), among others.
FIG.11 illustrates an example client system 114 associated with the platform of FIG.8 according to some implementations. In the illustrated example, the system 114 includes one or more communication interfaces 1102 configured to facilitate communication between one or more networks and one or more systems (e.g., the platform 102 of FIG.1). The communication interfaces 1102 may also facilitate communication between one or more wireless access points, a master device, and/or one or more other computing devices as part of an ad-hoc or home network system. The communication interfaces 1102 may support both wired and wireless connection to various networks, such as cellular networks, radio, WiFi networks, short-range or near-field networks (e.g., Bluetooth®), infrared signals, local area networks, wide area networks, the Internet, and so forth.
The system 114 also includes input interfaces 1104 and output interfaces 1106, which may be included to display or provide information to and receive inputs from the client. The interfaces 1104 and 1106 may include various systems for interacting with the system 114, such as mechanical input devices (e.g., keyboards, mice, buttons, etc.), displays, input sensors (e.g., motion, age, gender, fingerprint, facial recognition, or gesture sensors), and/or microphones for capturing natural language input such as speech. In some examples, the input interfaces 1104 and the output interfaces 1106 may be combined in one or more touch screen capable displays.
The system 114 includes one or more processors 1108, such as at least one or more access components, control logic circuits, central processing units, or processors, as well as one or more computer-readable media 1110 to perform the functions associated with the virtual focus group. Additionally, each of the processors 1108 may itself comprise one or more processors or processing cores.
Depending on the configuration, the computer-readable media 1110 may be an example of tangible non-transitory computer storage media and may include volatile and nonvolatile memory and/or removable and non-removable media implemented in any type of technology for storage of information such as computer-readable instructions or modules, data structures, program modules, or other data. Such computer-readable media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other computer-readable media technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, solid state storage, magnetic disk storage, RAID storage systems, storage arrays, network attached storage, storage area networks, cloud storage, or any other medium that can be used to store information and which can be accessed by the processors 1108.
Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 1110 and configured to execute on the processors 1108. For example, as illustrated, the computer-readable media 1110 stores comment and tag instructions 1112, live stream instructions 1114, recording instructions 1116, and alert or notification instructions 1118, as well as other instructions 1120, such as an operating system. The computer-readable media 1110 may also be configured to store data, such as sensor data 1122 collected or captured with respect to the moderator and/or the test subject.
The comment and tag instructions 1112 may allow a user (e.g., an individual client) to insert comments and tags into the recording of the session. For example, the comment may be a question for other clients, personal notes, feedback for other clients, etc. The tags may include various prepopulated bookmarks, tabs, etc. that may be applied to portions or segments of the session. For example, a tag may indicate an intent to review at a later time and be assigned a particular color. In some cases, tags may include underlining, highlighting, circling, etc. of text within the transcript of the session.
The live stream instructions 1114 may be configured to cause the audio and video data captured with respect to one or more test subjects and/or a moderator to be displayed via the output interfaces as the session is progressing.
The recording instructions 1116 may be configured to cause a recording of a session to be displayed by the output interfaces 1106 either while the session is live or at a time subsequent to the session. In some examples, the recording may be presented on a first output interface 1106 (or device) and the audio and video data may be presented on a second output interface 1106 (or device). In some cases, the recording may include a transcript of the session linked to the video and audio, such that the transcript is searchable via various types of text-based searches and such that, upon selection of a portion of the transcript, the corresponding video and audio data may be presented to the client via the output interface 1106.
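As a non-limiting illustration, a text-based search over the linked transcript may return the timestamps at which to seek the recording. The Python sketch below reuses the hypothetical segment structure introduced earlier and is an assumption, not an actual implementation of the recording instructions 1116.

```python
# Hypothetical sketch: searching the linked transcript and returning the
# start times of matching segments, so the player can seek the audio/video.
def search_transcript(segments, query):
    query = query.lower()
    return [s.start_s for s in segments if query in s.text.lower()]

# A client selecting one of the returned positions would then seek the
# player to that offset, e.g., player.seek(results[0]) on a hypothetical
# player interface.
```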
In some examples, the recording instructions 1116 may also include one or more editor modes available to the clients via the client systems. For example, the recording instructions 1116 may provide a clip extractor that allows one or more clients to extract or automatically flag a predetermined period of time (such as 30 seconds) around search terms, types of tags, particular tags, designated comments, etc. In some cases, the predetermined period of time may be a first predetermined period prior to the search term and a second predetermined period after the search term (e.g., 15 seconds prior to the search term and 15 seconds following the search term may be extracted to form a 30 second clip). The recording instructions 1116 may also include a second, more detailed editor mode that may operate in a manner similar to a video editor.
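By way of a non-limiting example, the clip extractor's windowing may be computed as follows; the parameter names in this Python sketch are assumptions chosen to mirror the 15-second example above.

```python
# Hypothetical sketch of the clip extractor: form a clip from a first window
# before and a second window after each matching timestamp (15 s + 15 s gives
# the 30 second clip described above).
def clip_windows(match_times_s, before_s=15.0, after_s=15.0):
    clips = []
    for t in match_times_s:
        start = max(0.0, t - before_s)  # clamp at the start of the recording
        clips.append((start, t + after_s))
    return clips

# Example: matches at 90 s and 400 s yield clips (75.0, 105.0) and (385.0, 415.0).
```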
The alert or notification instructions 1118 may be configured to generate an alert or notification in response to detecting a new comment or tag within a session or in response to receiving a notification from the platform 102 of FIG.1. In some cases, the types of alerts and/or notifications may be set by the user of the client system 114 or by the type of device of the system 114. In some cases, the user may be able to set the alerts to issue in response to particular clients adding comments, particular content of a session being tagged or commented upon, and/or particular users making a comment or tagging content of a session.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.